Historiography (art)

Historiography (art), the study of the history of the visual arts, a field that can range from the detailed, objective cataloguing of works of art to philosophical musing on the nature of beauty.

It was not until the 19th century that art history became a fully fledged academic discipline, but its origins go back to Classical antiquity. The most important work dealing with art to survive from the ancient world is the encyclopedic Latin treatise Natural History, written by Pliny the Elder in the 1st century AD. This work has often been criticized as careless and superficial, but it contains a good deal of valuable information and some entertaining anecdotes on painters and sculptors (the information on art is in the section on metals and stones and their uses). A century after Pliny, the Greek traveller Pausanias compiled a Description of Greece that is a fount of information on architecture, painting, and sculpture, and the ancestor of modern guidebooks.

The most substantial writings on art from the Middle Ages are the book On Buildings by the Byzantine historian Procopius (6th century), dealing with the architecture of the age of Justinian I, and a treatise on arts and crafts entitled De Diversis Artibus (On the Various Arts), written under the pseudonym Theophilus, probably in the early 12th century (the author was possibly Roger of Helmarshausen, a German goldsmith and monk).

In the 15th century Leon Battista Alberti wrote treatises on architecture, painting, and sculpture, and the sculptor Lorenzo Ghiberti compiled a manuscript entitled Commentaries that includes a survey of ancient art (based on Pliny), notes on 14th-century Italian artists, and his autobiography, the earliest by an artist to survive. The true founding father of art history, however, arrived a century later with Giorgio Vasari, who wrote the most famous and influential book ever published on the subject: Le Vite de’ Più Eccellenti Architetti, Pittori, et Scultori Italiani (The Lives of the Most Eminent Italian Architects, Painters, and Sculptors), generally referred to simply as Vasari’s Lives. It was first published in 1550, and a second, much-enlarged edition appeared in 1568.

Vasari believed that the arts had reached a high level in Classical antiquity, then declined into barbarism in the Middle Ages, before being revived in Italy in the 14th century by artists such as Giotto and rising to a peak in the work of Vasari’s contemporary, Michelangelo, whom he idealized. This idea of art following a pattern of decay and renewal coloured thinking about the Renaissance for centuries, and Vasari’s biographical method inspired several important imitators, beginning with Karel van Mander (“the Dutch Vasari”), who in 1604 published Het Schilder-Boeck (The Book of Painters), which is the most important source of information on northern European artists up to that date. Among other major biographical compilations were those published in French by André Félibien (1666-1688), in German by Joachim von Sandrart (1675-1679), and in Spanish by Antonio Palomino (1724).

The next great milestone in art-historical writing came not from a biographer, however, but from the German classical archaeologist Johann Joachim Winckelmann, who wrote two major books: Gedanken über die Nachahmung der Griechischen Werke in der Malerei und Bildhauerkunst (Reflections on the Painting and Sculpture of the Greeks), published in 1755; and Geschichte der Kunst des Altertums (History of Ancient Art), published in 1764 (the latter marks the first occurrence of the phrase “history of art” in the title of a book). Winckelmann saw art as part of the general evolution of thought and culture, and he tried to explain its character in terms of such factors as social conditions and religious customs. His work was important in gaining recognition for art history as a serious intellectual pursuit and in establishing Germany as its principal home.

The first university professorship in art history was established in 1844 in Berlin for Gustav Friedrich Waagen, an indefatigable traveller who published a mass of information on works of art in public and private collections, notably Treasures of Art in Great Britain (3 vols., 1854). Waagen was not the only outstanding compiler of his time, for he lived in the great age of fact-finding in art history, when prodigious work was done in archival research and the writing of comprehensive catalogues. Among the great enterprises from this period that formed a foundation for much subsequent work is the 20-volume series Le Peintre Graveur (1803-1821) by the Austrian authority on prints Adam von Bartsch; the numbering system used in this pioneering study of painter-engravers has been adopted by most subsequent scholars in the field.

Part of this process of the accumulation of knowledge resulted from trying to establish on stylistic grounds which artists were responsible for works that were not firmly documented. Giovanni Morelli (an Italian connoisseur who wrote in German) attempted to give attribution a scientific basis by minutely studying a painter’s treatment of details such as ears and fingernails. His work was very influential, and this kind of connoisseurship became a major strand in art-historical studies well into the 20th century. Kenneth Clark, for example, wrote: “When I was an undergraduate [in the 1920s] the idea that art scholarship consisted in finding out who painted a picture on the basis of internal evidence alone had the same unquestioned prestige as textual emendation in the field of classical scholarship”.

The American art critic Bernard Berenson was the most famous practitioner of this kind of connoisseurship, and the various lists he compiled of the work of Italian Renaissance painters are still valuable, although many of his attributions have subsequently been questioned. Another approach to stylistic analysis was seen in the work of the Swiss scholar Heinrich Wölfflin, who tried to show that style followed evolutionary principles, most notably in his book Kunstgeschichtliche Grundbegriffe (1915; Principles of Art History, 1932). Wölfflin’s visual analysis was much more subtle and searching than that of his predecessors, and Herbert Read wrote: “it could be said of him that he found art criticism a subjective chaos and left it a science”.

Alongside the methodology that placed paramount importance on the stylistic values of a work of art, there developed another, in which the work was studied as part of the intellectual history of its time, with a new emphasis on the interpretation of subject matter (iconography). The great pioneer of this approach was the German scholar Aby Warburg, whose superb library developed into a research institute, incorporated into the University of London in 1944 as the Warburg Institute. Many outstanding art historians have been associated with the Warburg Institute, notably Ernst Gombrich, but the scholar who is most renowned for his iconographical analysis is probably Erwin Panofsky, who spent most of his career at Princeton University in the United States. Kenneth Clark described Panofsky as “unquestionably the greatest art historian of his time”; he combined immense erudition with rare sensitivity. Some of his followers have been accused of taking his methods too far, “overinterpreting” pictures to find “hidden symbolism” that does not really exist.

Connoisseurship and iconography continue to be important in art history, but since the 1970s there has been a reaction against traditional methodology in the subject. This reaction has been dubbed “the new art history”—“a capacious and convenient title that sums up the impact of feminist, Marxist, structuralist, psychoanalytic, and socio-political ideas on a discipline notorious for its conservative taste in art and its orthodoxy in research” (The New Art History, ed. A. L. Rees and Frances Borzello, 1986). Some traditionalists would reply that the new art history tends to be pretentious and jargon-ridden.



Further Education


Further Education (FE), the tertiary sector of education, following primary and secondary education, and sometimes preceding higher education. Whereas in the rest of Europe the tertiary sector is generally confined to vocational education and training, in the United Kingdom FE embraces both academic and vocational or professional study programmes. FE in the United Kingdom has no direct equivalent in other parts of the world. Other systems tend towards separation of the vocational system from schools and universities.

Most full-time students in FE study in further education colleges between the ages of 16 and 19, but the majority of FE college students overall are adults and study part-time. FE is often regarded as the “college sector” which provides study opportunities between school and university. However, the boundaries between further education colleges and higher education institutions are becoming increasingly blurred.

FE offers study opportunities for those who need help with basic skills: literacy, numeracy, and key skills at a foundation level. The majority of students are following courses at level 1 to 3 (foundation, intermediate, and advanced), and there are more A-level students in colleges than in school sixth forms. About 20 per cent of FE colleges also offer some higher education, and several universities (generally the former polytechnics) offer some FE.


FE in the United Kingdom is a distinctly modern phenomenon. It has its beginnings in the mechanical and technical institutes of the early 19th century. The first institute was formally constituted in Edinburgh in 1821. The subsequent growth in institutes was phenomenal and was matched by the development of the first national examining bodies from the time of the RSA Examinations Board in 1856. The RSA has now merged with other examinations boards to become the OCR Awarding Body, one of several awarding bodies that also include City and Guilds and Edexcel.

From the early 20th century until the 1960s, the UK had a tripartite system of schools (grammar, technical, and secondary modern), and the role of FE was primarily as a provider of evening study programmes in the local technical college. Before 1940, the technical college was a place of vocational education for the employed. The end of World War II and the demand for new skills meant that further education concentrated on day release from work and evening classes. A new era of partnership with industry began. This partnership developed through the industry training boards and levy systems of the 1960s and 1970s and the Manpower Services Commission (MSC) of the 1980s, and on to the Training and Enterprise Councils of the 1990s.

In the 1960s, with the arrival of comprehensive schools, many local education authorities (LEAs) wanted to offer second-chance opportunities to students at 16 and evening classes to adult students. The tertiary college was born. Many LEAs reorganized in order to set up tertiary colleges for all post-16 students and adults. The FE sector as we know it was created, although the reorganization of schools to serve only 11- to 16-year-olds was not carried through everywhere, and there is therefore now a diverse system.

FE today is provided by various institutions, including general further education colleges, agricultural and horticultural colleges, art and design colleges, and other specialist colleges; sixth forms in secondary schools, sixth-form colleges (England and Wales only), and universities. In 1999-2000 there were approximately 3.1 million students in FE in England; 22 per cent were full-time students and 78 per cent part-time. The government has actively encouraged the increase in FE provision.


The structures of funding and quality assurance are different within England, Wales, Scotland, and Northern Ireland. In England, Wales, and Scotland, colleges of further education, tertiary colleges, and sixth-form colleges, which previously received grants directly through their LEAs, have since April 1993 had the autonomy to run their own affairs within the further education sector. Northern Ireland followed suit in 1998. Internal organization, as well as finance and management issues (including pay and conditions of service contracts), are matters for each college to determine. All have governing bodies, which include representatives of the local business community, and many courses are run in conjunction with local employers. In April 2001 a national Learning and Skills Council (LSC) was established in England, taking on the funding responsibilities of the English Further Education Funding Council (FEFC) and the functions of the Training and Enterprise Councils (TECs).


In the United Kingdom, the aim is to establish a qualifications framework which includes both academic and vocational qualifications, overcoming traditional barriers and promoting greater flexibility within the system. The intention is that individuals and employers will establish systems to allow employees to attain progressively higher levels of skill. There is close cooperation between the regulatory bodies in England, Scotland, Wales, and Northern Ireland. In England, the Qualifications and Curriculum Authority (QCA) is responsible for a comprehensive qualifications framework, including accreditation for National Vocational Qualifications (NVQs). This accreditation also extends to Wales and Northern Ireland. In Wales, the Qualification, Curriculum, and Assessment Authority (ACCAC) performs a similar role to the QCA, while the Welsh Joint Education Committee offers A-level and GCSE assessment. In Scotland, the Scottish Qualifications Authority is a regulatory and awarding body. The main thrust of the framework there is to offer parity of esteem for all qualifications across the system and to include the new competence-based system that has been devised by industry.

In September 2000 a new curriculum, known as “Curriculum 2000”, was introduced. This gives students the opportunity to study more subjects (normally four or five in one year) at AS (Advanced Subsidiary) level, and they can then specialize in two to four subjects in the second year, at A2 level. The former GNVQs have been renamed as vocational A levels. Key skills of numeracy, literacy, and IT (information technology) are also examined within the curriculum at different levels.


College systems overseas tend to concentrate exclusively on vocational provision, as compared with the UK FE college system, which combines academic and vocational provision. Other countries have priorities similar to those of the United Kingdom, including curriculum, qualification, and funding reforms; the decentralization of decision-making; the encouragement of links between industry and education; quality and standards; guidance and counselling; progression; and non-completion. Generally, vocational qualifications have the same low status overseas as in the United Kingdom; however, governments everywhere are aiming to change this perception.

Contributed By:
British Training International



Educational Psychology


Educational Psychology, field of psychology concerned with the development, learning, and behaviour of children and young people as students in schools, colleges, and universities. It includes the study of children within the family and other social settings, and also focuses on students with disabilities and special educational needs. Educational psychology is concerned with areas of education and psychology which may overlap, particularly child development, evaluation and assessment, social psychology, clinical psychology, and educational policy development and management.


In the 1880s the German psychologist Hermann Ebbinghaus developed techniques for the experimental study of memory and forgetting. Before Ebbinghaus, these higher mental processes had never been scientifically studied; the importance of this work for the practical world of education was immediately recognized.

In the late 1890s William James of Harvard University examined the relationship between psychology and teaching. James, who was influenced by Charles Darwin, was interested in how people’s behaviour adapted to different environments. This functional approach to behavioural research led James to study practical areas of human endeavour, such as education.

James’s student Edward Lee Thorndike is usually considered to be the first educational psychologist. In his book Educational Psychology (1903), he claimed to report only scientific and quantifiable research. Thorndike made major contributions to the study of intelligence and ability testing, mathematics and reading instruction, and the way learning transfers from one situation to another. In addition, he developed an important theory of learning that describes how stimuli and responses are connected.


Educational psychology has changed significantly over the 20th century. Early investigations of children as learners and the impact of different kinds of teaching approaches were largely characterized by attempts to identify general and consistent characteristics. The approaches used varied considerably. Jean Piaget, for example, recorded the development of individuals in detail, assessing changes with age and experience. Others, such as Robert Gagné, focused on the nature of teaching and learning, attempting to lay down taxonomies of learning outcomes. Alfred Binet and Cyril Burt were interested in methods of assessing children’s development and identifying those children considered to be of high or low general intelligence.

This work led to productive research which refined the theories of development, learning, instruction, assessment, and evaluation, and built up an increasingly detailed picture of how students learn. Educational psychology became an essential part of the training of teachers, who for several generations were instructed in the theories emanating from its research as part of their preparation for classroom teaching practice.

A Changing Approaches

Recently the approach of educational psychology has changed significantly in the United Kingdom, as has its contribution to teacher education. In part these changes reflect political decisions to alter the pattern of teacher training, based on the belief that theory is not useful and that “hands-on” training is preferable. However, the discipline and teacher education have each also been changing of their own accord. Moving away from the emphasis on all-encompassing theories, such as those of Jean Piaget, Sigmund Freud, and B. F. Skinner, the concerns of educational psychologists have shifted to practical issues and problems faced by the learner or the teacher. Consequently, rather than, for example, impart to teachers in training Skinner’s theory of operant conditioning, and then seek ways of applying it in classrooms, educational psychologists have tended to begin with the practical issues: how to teach reading; how to differentiate a curriculum (a planned course of teaching and learning) across a range of children with differing levels of achievement and needs; and how to manage discipline in classrooms.

Theory-driven research increasingly suggested that more elaborated conceptions of development were required. For example, the earlier work on intelligence by Binet, Burt, and Lewis Madison Terman focused on the assessment of general intelligence, while recognizing that intellectual activity included verbal reasoning skills, general knowledge, and non-verbal abilities such as pattern recognition. More recently the emphasis has shifted to accentuate the differing profiles of abilities, or “multiple intelligences” as proposed by the American psychologist Howard Gardner, who argues that there is good evidence for at least seven, possibly more, intelligences including kinaesthetic and musical as well as the more traditionally valued linguistic and logico-mathematical types of intelligence.

There has also been a shift in emphasis from the student as an individual to the student in a social context, at all levels from specific cognitive (thinking and reasoning) abilities to general behaviour. For example, practical intelligence and its links with “common sense” have been addressed, and investigations made into how individuals may have relatively low intelligence as measured by conventional intelligence tests, yet be seen to be highly intelligent in everyday tasks and “real-life” settings. Recognition of the impact of the environment on a child’s general development has been informed by research on the effects of poverty, socio-economic status, gender, and cultural diversity, together with the effects of schooling itself. Also, the emphasis has changed from regarding differences in children’s performance on specific tasks as deficits compared with some norm, to an appreciation that deficits in performance may reflect unequal opportunity, or that differences may even reflect a positive diversity.

It is now apparent that there are also important biological factors determined by a child’s genetic make-up and its prenatal existence, as well as social factors concerned with the family, school, and general social environment. Because these various factors all interact uniquely in the development of an individual, there are limitations in the possible applicability of any one theory in educational psychology.


Professional educational psychologists (EPs) draw upon theory and research from other disciplines in order to benefit individual children, their families, and educational institutions, particularly schools, through the following activities:

A Individual Children

An EP may be asked to advise a parent on how to deal with a pre-school child with major temper tantrums; to assess a young child with profound and multiple disabilities; to advise teachers on the nature of a 7-year-old’s reading difficulties; to advise teachers and parents on an adolescent’s problematic behaviour; to undertake play therapy with an 8-year-old who has been sexually and physically abused; or to give an adolescent counselling or psychotherapy.

In each case there is an assessment to identify the nature of the problem, followed by an intervention appropriate to this analysis. The assessment may include the use of standardized tests of cognitive abilities, not necessarily to derive an Intelligence Quotient (IQ) but to investigate a range of aspects of intellectual development; informal assessment and standardized tests of attainment (such as reading and spelling); interviews; observation of the child in class, or with parents or friends; or methods designed to understand the child’s view of their world, including play and structured pictures and tasks where the child arranges the materials to represent their own views of family, or other social arrangements. The interventions (planned procedures) may be equally wide-ranging. In some cases the EP will try to help key adults to understand the child and the nature of the problem. In other cases, more direct advice may be given on how to handle disturbing aspects of a child’s behaviour. In other instances the EPs may advise or produce a specific programme of work, counselling, or behaviour change, which they might implement directly, or they may advise on and monitor the practices of teachers and parents.

In some instances the main basis for advice might be evidence obtained from research on child development; or evidence on intellectual development and its assessment in ethnic minority populations; or theories of learning and instruction as applied to helping a child with literacy difficulties; or theories of counselling or psychotherapeutic intervention as applied to helping an adolescent with significant emotional problems. EPs normally work collaboratively with teachers and parents, and with medical and other colleagues. They play a major role in providing advice to local education authorities or school districts in those countries which make statutory assessments of students’ special educational needs.

B Institutions

Often the involvement of an EP with an individual child in a school will lead teachers to recognize that the same issues apply more generally. For example, other children may also have similar learning difficulties or problems in controlling aggression. The EP may then provide a consultancy service to the teacher or school. In some cases this service may be sought directly, for example when a new headteacher wishes to review a previous assessment or the school’s current behaviour policy. Research has indicated, for example, how schools can reduce bullying, improve pupil performance (by rearranging classrooms, for instance), and optimize the inclusion of children with special educational needs.


Educational psychology continues to provide a major basis for the initial education of teachers, particularly in management of learning and behaviour, but also on curriculum design, with special attention given to the needs of individual children. Increasingly, educational psychology is also contributing to student teachers’ understanding of the school as a system and the importance of this wider perspective for optimizing their performance; to their professional development by helping them analyse their own practice, beliefs, and attitudes and, once they begin the practice of teaching, to their continuing professional development based on experience in schools—particularly in areas such as special needs and disability. The impact of information technology and the increasing development of inclusive education provide particular challenges.

Contributed By:
Geoff Lindsay




Animal Behaviour


Animal Behaviour, the way different kinds of animals behave, which has fascinated inquiring minds since at least the time of Plato and Aristotle. Particularly intriguing has been the ability of simple creatures to perform complicated tasks—weave a web, build a nest, sing a song, find a home, or capture food—at just the right time with little or no instruction. Such behaviour can be viewed from two quite different perspectives, discussed below: either animals learn everything they do (from “nurture”), or they know what to do instinctively (from “nature”). Neither extreme has proved to be correct.


Until recently the dominant school in behavioural theory has been behaviourism, whose best-known figures are J. B. Watson and B. F. Skinner. Strict behaviourists hold that all behaviour (even, according to Watson, breathing and the circulation of blood) is learned; they believe that animals are, in effect, born as blank slates upon which chance and experience are to write their messages. Through conditioning, they believe, an animal’s behaviour is formed. Behaviourists recognize two sorts of conditioning: classical and operant.

In the late 19th century the Russian physiologist Ivan Pavlov discovered classical conditioning while studying digestion. He found that dogs automatically salivate at the sight of food—an unconditioned response to an unconditioned stimulus, to use his terminology. If Pavlov always rang a bell when he offered food, the dogs began slowly to associate this irrelevant (conditioned) stimulus with the food. Eventually, the sound of the bell alone could elicit salivation. Hence, the dogs had learned to associate a certain cue with food. Behaviourists see salivation as a simple reflex behaviour—something like the knee-jerk reflex doctors trigger when they tap a patient’s knee with a hammer.

The other category, operant conditioning, works on the principle of punishment or reward. In operant conditioning a rat, for example, is taught to press a bar for food by first being rewarded for facing the correct end of the cage, next being rewarded only when it stands next to the bar, then only when it touches the bar with its body, and so on, until the behaviour is shaped to suit the task. Behaviourists believe that this sort of trial-and-error learning, combined with the associative learning of Pavlov, can serve to link any number of reflexes and simple responses into complex chains that depend on whatever cues nature provides. To an extreme behaviourist, then, animals must learn all the behavioural patterns that they need to know.


In contrast, ethology—a discipline that developed in Europe—holds that much of what animals know is innate (instinctive). A particular species of digger wasp, for example, finds and captures only honey bees. With no previous experience a female wasp will excavate an elaborate burrow, find a bee, paralyse it with a careful and precise sting to the neck, navigate back to her inconspicuous home, and, when the larder has been stocked with the correct number of bees, lay an egg on one of them and seal the chamber.

The female wasp’s entire behaviour is designed so that she can function in a single specialized way. Ethologists believe that this entire behavioural sequence has been programmed into the wasp by its genes at birth and that, in varying degrees, such patterns of innate guidance may be seen throughout the animal world. Extreme ethologists have even held that all novel behaviours result from maturation—flying in birds, for example, which requires no learning but is delayed until the chick is strong enough—or imprinting, a kind of automatic memorization discussed below.

The three Nobel Prize-winning founders of ethology—Konrad Lorenz of Austria, Nikolaas Tinbergen of the Netherlands, and Karl von Frisch of Germany—uncovered four basic strategies by which genetic programming helps direct the lives of animals: sign stimuli (frequently called releasers), motor programs, drive, and programmed learning (including imprinting).


Sign stimuli are crude, sketchy cues that enable animals to recognize important objects or individuals when they encounter them for the first time. Baby herring gulls, for example, must know from the outset to whom they should direct their begging calls and pecks in order to be fed. An adult returning to the nest with food holds its bill downwards and swings it back and forth in front of the chicks. The baby gulls peck at the red spot on the tip of the bill, causing the parent to regurgitate a meal. The young chick’s recognition of a parent is based entirely on the sign stimulus of the bill’s vertical line and red spot moving horizontally. A wooden model of the bill works as well as the real parent; a knitting needle with a spot is more effective than either in getting the chicks to respond.

Sign stimuli need not be visual. The begging call that a chick produces is a releaser for its parents’ feeding behaviour. The special scent, or pheromone, emitted by female moths is a sign stimulus that attracts males. Tactile (touch) and even electrical sign stimuli are also known.

The most widespread uses of sign stimuli in the animal world are in communication, hunting, and predator avoidance. The young of most species of snake-hunting birds, for instance, innately recognize and avoid deadly coral snakes; young fowl and ducklings are born able to recognize and flee from the silhouette of hawks. Similar sign stimuli are often used in food gathering. The bee-hunting wasp recognizes honey bees by means of a series of releasers: the odour of the bee attracts the wasp upwind; the sight of any small, dark object guides it to the attack; and, finally, the odour of the object as the wasp prepares to sting determines whether the attack will be completed.

This use of a series of releasers, one after the other, greatly increases the specificity of what are individually crude and schematic cues; it is a strategy frequently employed in communication and is known as display. Most animal species are solitary except when courting and rearing young. To avoid confusion, the signals that identify the sex and species of an animal’s potential mate must be clear and unambiguous (see Courtship below).


Motor Programs

A second major discovery by ethologists is that many complex behaviours come pre-packaged as motor programs—self-contained circuits able to direct the coordinated movements of many different muscles to accomplish a task. The courtship dancing of sticklebacks, the stinging action of wasps, and the pecking of gull chicks are all motor programs.

The first motor program analysed in much detail was the egg-rolling response of geese. When a goose sees an egg outside its nest, it stares at the egg, stretches its neck until its bill is just on the other side of the egg, and then gently rolls the egg back into the nest. At first glance this seems a thoughtful and intelligent piece of behaviour, but it is in fact a mechanical motor program; almost any smooth, rounded object (the sign stimulus) will release the response. Furthermore, removal of the egg once the program has begun does not stop the goose from finishing its neck extension and delicately rolling the non-existent object into the nest.

Such a response is one of a special group of motor programs known as fixed-action patterns. Programs of this class are wholly innate, although they are frequently wired so that some of the movements are adjusted automatically to compensate for unpredictable contingencies, such as the roughness and slope of the ground the goose must nudge the egg across. Apparently, the possible complexity of such programs is almost unlimited; birds’ nests and the familiar beautiful webs of orb-weaving spiders are examples.

Another class of motor programs is learned. In the human species walking, swimming, bicycle riding, and shoe tying, for example, begin as laborious efforts requiring full, conscious attention. After a time, however, these activities become so automatic that, like innate motor programs, they can be performed unconsciously and without normal feedback. This need for feedback in only the early stages of learning is widespread. Both songbirds and humans, for example, must hear themselves as they begin to vocalize, but once song or speech is mastered, deafness has little effect. The necessary motor programs have been wired into the system.


Drive

The third general principle of ethology is drive. Animals know when to migrate, when (and how) to court one another, when to feed their young, and so on. In most animals these abilities are behavioural units that are switched on or off as appropriate. Geese, for example, will only roll eggs from about a week before egg laying until a week after the young have hatched. At other times eggs have no meaning to them.

The switching on and off of these programs often involves complex inborn releasers and timers. In birds, preparations for spring migration, as well as the development of sexual dimorphisms (separate forms), territorial defence, and courtship behaviour, are all triggered by the lengthening period of daylight. This alters hormone levels in the blood, thereby triggering each of these dramatic but essential changes in behaviour.

In general, however, no good explanation exists for the way in which motivation is continually modulated over short periods in an animal’s life. A cat will stalk small animals or toys even though it is well supplied with food. Deprived of all stimuli, its threshold (the intensity of stimulus required to elicit a behaviour) will drop sufficiently so that thoroughly bored cats will stalk, chase, capture, and disembowel entirely imaginary targets. This unaccountable release of what appears to be pent-up motivation is known as vacuum activity: a behaviour that will occur even in the absence of a proper stimulus.

One simple mechanism by which animals alter their levels of responsiveness (and which may ultimately help explain motivation) is known as habituation. Habituation is essentially a central behavioural boredom; repeated presentation of the same stimulus causes the normal response to wane. A chemical present on the tentacles of its arch-enemy, the starfish, triggers a sea slug’s frantic escape behaviour. After several encounters in rapid succession, however, the threshold for the escape response begins to rise and the sea slug refuses to flee the overworked threat. Simple muscle fatigue is not involved, and stimulation of some other form—a flash of light, for instance—instantly restores the normal threshold (a phenomenon known as sensitization). Hence, nervous systems are pre-wired to “learn” to ignore the normal background levels of stimuli and to focus instead on changes from the accustomed level.
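The rise and reset of the escape threshold can be sketched as a toy model. All of the numbers below (starting threshold, growth factor) are arbitrary values chosen for illustration, not measured quantities:

```python
# Toy model of habituation and sensitization in the sea-slug escape
# response. The constants are illustrative assumptions only.

class EscapeReflex:
    def __init__(self, base_threshold=1.0):
        self.base = base_threshold
        self.threshold = base_threshold

    def present_predator_cue(self, intensity=1.0):
        """Return True if the escape response fires, then habituate."""
        fired = intensity >= self.threshold
        self.threshold *= 1.5   # repeated exposure raises the threshold
        return fired

    def novel_stimulus(self):
        """An unrelated stimulus (e.g. a flash of light) restores the
        original threshold: sensitization."""
        self.threshold = self.base

slug = EscapeReflex()
responses = [slug.present_predator_cue() for _ in range(4)]
# the first presentation fires; later ones fall below the raised threshold
slug.novel_stimulus()
recovered = slug.present_predator_cue()
```

The point of the sketch is only that the waning is central, not muscular: nothing about the response machinery changes, only the threshold for triggering it.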


Programmed Learning

The fourth contribution ethology has made to the study of animal behaviour is the concept of programmed learning. Ethologists have shown that many animals are wired to learn particular things in specific ways at preordained times in their lives.

A Imprinting

One famous example of programmed learning is imprinting. The young of certain species—ducks, for example—must be able to follow their parents almost from birth. Each young animal, even if it is pre-programmed to recognize its own species, must quickly learn to distinguish its own particular parents from all other adults. Evolution has accomplished this essential bit of memorization in ducks by wiring ducklings to follow the first moving object they see that produces the species-specific exodus call. The call acts as an acoustic sign stimulus that directs the response of following.

It is the physical act of following, however, that triggers the learning process; chicks passively transported behind a calling parent do not imprint at all. (In fact, presenting obstacles so that a chick has to work harder to follow its parent actually speeds the imprinting process.) As long as the substitute parent makes the right sounds and moves, ducklings can be imprinted on a motley collection of objects, including rubber balls, shoe boxes, and human beings.

This parental-imprinting phase is generally early and brief, often ending 36 hours after birth. Another round of imprinting usually takes place later; it serves to define the species image the animal will use to select an appropriate mate when it matures. Ethologists suspect that genetic programming cannot specify much visual detail; otherwise, selective advantage would probably require chicks to come pre-wired with a mental picture of their own species.

As the world has become increasingly crowded with species, the role of sign stimuli in some animals has shifted from that of identifying each animal’s species uniquely to that of simply directing the learning necessary to distinguish an animal’s own kind from many similar creatures. This strategy works because, at the early age involved, most animals’ ranges of contact are so limited that a mistake in identifying what to imprint on is highly unlikely.

B Characteristics of Programmed Learning

Imprinting, therefore, has four basic qualities that distinguish it from ordinary learning: (1) a specific time, or critical period, exists when the learning must take place; (2) a specific context exists, usually defined by the presence of a sign stimulus; (3) the learning is often constrained in such a way that an animal remembers only a specific cue such as odour and ignores other conspicuous characteristics; and (4) no reward is necessary to ensure that the animal remembers.

These qualities are now becoming evident in many kinds of learning, and the value of such innately directed learning is beginning to be understood: in a world full of stimuli, it enables an animal to know what to learn and what to ignore. As though for the sake of economy, animals need pick up only the least amount of information that will suffice in a situation. For example, ducklings of one species seem able to learn the voices of their parents, whereas those of another recall only what their parents look like. When poisoned, rats remember only the taste and odour of the dangerous food, whereas quail recall only its colour. This phenomenon, known as rapid food-avoidance conditioning, is so strongly wired into many species that a single exposure to a toxic substance is usually sufficient to train an animal for life.

The same sorts of biases are observed in nearly every species. Pigeons, for instance, readily learn to peck when food is the reward, but not to hop on a treadle for a meal; on the other hand, it is virtually impossible to teach a bird to peck to avoid danger, although birds easily learn treadle hopping in dangerous situations. Such biases make sense in the context of an animal’s natural history; pigeons, for example, normally obtain food with the beak rather than the feet, and react to danger with their feet (and wings).

Perhaps the most completely understood example of complex programmed learning is song learning in birds. Some species, such as doves, are born wired to produce their species-specific coos, and no amount of exposure to the songs of other species, or the absence of their own, has any effect. The same is true for the repertoire of 20 or so simple calls that virtually all birds use to communicate messages such as hunger or danger.

The elaborate songs of songbirds, however, are often heavily influenced by learning. A bird reared in isolation, for example, sings a very simple outline of the sort of song that develops naturally in the wild. Yet song learning shows all the characteristics of imprinting. Usually a critical period exists during which the birds learn while they are young. Exactly what is learned—what a songbird chooses to copy from the world of sound around it—is restricted to the songs of its own species. Hence, a white-crowned sparrow, when subjected to a medley of songs of various species, will unerringly pick out its own and commit it to memory. The recognition of the specific song is based on acoustic sign stimuli.

Despite its obvious constraints, song learning permits considerable latitude: any song will do as long as it has a few essential features. Because the memorization is not quite perfect and admits some flexibility, the songs of many birds have developed regional dialects and serve as vehicles for a kind of “cultural” behaviour.

A far more dramatic example of programmed cultural learning in birds is seen in the transmission of knowledge about predators. Most birds are subject to two sorts of danger: they may be attacked directly by birds of prey, or their helpless young may be eaten by nest predators. When they see birds of prey, birds regularly give a specific, whistlelike alarm call that signals the need to hide. A staccato mobbing call, on the other hand, is given for nest predators and serves as a call to arms, inciting all the nesting birds in the vicinity to harass the potential predator and drive it away. Both calls are sign stimuli.

Birds are born knowing little about which species are safe and which are dangerous; they learn this by observing the objects of the calls they hear. So totally automatic is the formation of this list of enemies that caged birds can even be tricked into mobbing milk bottles (and will pass the practice on from generation to generation) if they hear a mobbing call while being shown a bottle. This variation on imprinting appears to be the mechanism by which many mammals (primates included) gain and pass on critical cultural information about both food and danger. The fairly recent realization of the power of programmed learning in animal behaviour has reduced the apparent role that simple copying and trial-and-error learning play in modifying behaviour.


Evolution, working on the four general mechanisms described by ethology, has generated a nearly endless list of behavioural wonders by which animals seem almost perfectly adapted to their world. Prime examples are the honey bee’s systems of navigation, communication, and social organization. Bees rely primarily on the Sun as a reference point for navigation, keeping track of their flight direction with respect to the Sun, and factoring out the effects of the winds that may be blowing them off course. The Sun is a difficult landmark for navigation because of its apparent motion from east to west, but bees are born knowing how to compensate for that. When a cloud obscures the Sun, bees use the patterns of ultraviolet polarized light in the sky to determine the Sun’s location. When an overcast obscures both Sun and sky, bees automatically switch to a third navigational system based on their mental map of the landmarks in their home range.

Study of the honey bee’s navigational system has revealed much about the mechanisms used by higher animals. Homing pigeons, for instance, are now known to use the Sun as their compass; they compensate for its apparent movement, see both ultraviolet and polarized light, and employ a backup compass for cloudy days. The secondary compass for pigeons is magnetic. Pigeons surpass bees in having a map sense as well as a compass as part of their navigational system. A pigeon taken hundreds of kilometres from its loft in total darkness will nevertheless depart almost directly for home when it is released. The nature of this map sense remains one of ethology’s most intriguing mysteries.

Honey bees also exhibit excellent communication abilities. A foraging bee returning from a good source of food will perform a “waggle dance” on the vertical sheets of honeycomb. The dance specifies to other bees the distance and direction of the food. The dance takes the form of a flattened figure 8; during the crucial part of the manoeuvre (the two parts of the figure 8 that cross) the forager vibrates her body. The angle of this part of the run specifies the direction of the food: if this part of the dance points up, the source is in the direction of the Sun, whereas if it is aimed, for example, 70° left of vertical, the food is 70° left of the Sun. The number of waggling motions specifies the distance to the food.
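The direction-and-distance code just described can be written out in a few lines. The metres-per-waggle calibration constant below is a hypothetical value chosen for illustration; real calibrations vary between colonies and races of bees:

```python
# Sketch of the waggle-dance code: angle relative to vertical on the comb
# maps to angle relative to the Sun; waggle count maps to distance.
# METRES_PER_WAGGLE is an assumed calibration, not a real measurement.

METRES_PER_WAGGLE = 20

def decode_waggle_dance(dance_angle_deg, sun_azimuth_deg, n_waggles):
    """Translate a dance into a compass bearing and a distance.

    dance_angle_deg: angle of the waggle run relative to vertical;
    positive = right of vertical, negative = left.
    sun_azimuth_deg: current compass bearing of the Sun.
    """
    bearing = (sun_azimuth_deg + dance_angle_deg) % 360
    distance_m = n_waggles * METRES_PER_WAGGLE
    return bearing, distance_m

# A run pointing straight up (0 degrees) means "fly toward the Sun";
# 70 degrees left of vertical means 70 degrees left of the Sun.
print(decode_waggle_dance(0, 135, 10))    # (135, 200)
print(decode_waggle_dance(-70, 135, 10))  # (65, 200)
```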

The complexity of this dance language has paved the way for studies of higher animals. Some species are now known to have a variety of signals to smooth the operations of social living. Vervet monkeys, for example, have the usual set of gestures and sounds to express emotional states and social needs, but they also have a four-word predator vocabulary: one call alerts the troop to airborne predators, another to four-legged predators such as leopards, a third to snakes, and a fourth to other primates. Each type of alarm elicits a different behaviour. Leopard alarms send the vervets into trees and to the top branches, whereas the airborne predator call causes them to drop like stones into the interior of the tree. The calls and general categories they represent seem innate, but the young learn by observation which species of each predator class is dangerous. An infant vervet may deliver an aerial alarm to a vulture, a stork, or even a falling leaf, but eventually comes to ignore everything airborne except the martial eagle.


Courtship

Animal courtship behaviour precedes and accompanies the sexual act, to which it is directly related. It often involves stereotyped displays, which can be elaborate, prolonged, and spectacular, and includes the exhibition of sign stimuli (releasers), dramatic body colours, plumage, or markings. It may also involve ritualized combat between rival males.

Its primary purpose is to bring both partners to a state of sexual receptiveness simultaneously. This is especially important in aquatic animals whose eggs are fertilized externally and may be dispersed by water currents before sperm can make contact with them. Copulation is often over quickly and an elaborate courtship ensures its success.

A male three-spined stickleback starts his courtship by building a nest inside a territory he defends. When a female approaches he performs a zig-zag dance towards and away from the nest until she turns towards him and raises her head. He then leads her to the nest and waits, head down, beside the entrance. If she enters the nest he nudges her tail. She lays eggs, and leaves by swimming through the nest. He follows, fertilizing the eggs. In this ritual, each response stimulates the next activity and an incorrect response causes the last step to be repeated.
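This chain of releasers can be sketched as a simple state machine. The step labels below are paraphrases of the ritual just described, not formal ethological terms; the essential logic is that a correct response advances the chain and anything else leaves it at the same step:

```python
# The stickleback courtship ritual as a reaction chain: each correct
# response releases the next act; a wrong response repeats the step.

steps = [
    "zig-zag dance",
    "lead female to nest",
    "wait head-down at entrance",
    "nudge female's tail",
    "fertilize eggs",
]

def next_step(current, response_correct):
    """Advance only on the correct releasing response; otherwise repeat."""
    if response_correct and current < len(steps) - 1:
        return current + 1
    return current

# One failed response in the middle simply repeats that step.
state = 0
for ok in [True, False, True, True, True]:
    state = next_step(state, ok)
print(steps[state])  # "fertilize eggs"
```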

Courtship rituals differ from one species to another and individuals are attracted only by the attentions of members of their own species. This greatly reduces the risk of unproductive hybrid matings, especially between species that look alike. Male Photinus fireflies attract mates by patterns of flashing light, but each species has its own distinct pattern.

Many female animals secrete odours, pheromones, when they are sexually receptive. The female silkworm moth uses her wings to disperse bombykol, a pheromone secreted by abdominal scent glands, which is detectable by males up to 10 km (6.2 mi) away. Female mammals, including the rhesus macaque and other primates, also use pheromones as sexual attractants.

In most species, however, males court females. Females invest much more time and energy in producing eggs and raising young than males invest in producing sperm, and males usually contribute little or nothing to the care of offspring. It is in the interests of females to choose mates that will give their young the best start in life and in the interests of males to mate with as many females as possible. A male, therefore, must persuade females of his suitability; he must court them.

The song with which many male birds proclaim their presence also serves to attract females, but once they arrive the male must impress them, often by displaying extravagant or brightly coloured plumage. The plumage of the mallard drake is brighter in the breeding season, and the peacock impresses the peahen by displaying his huge tail. Males of some species, such as the sage grouse, gather in large numbers at a particular place, called a lek, where they all display while passing females make their choices. In other species, males present gifts to the female, and bowerbirds try to attract females into elaborate display grounds that they have constructed and decorated with brightly coloured objects.

Such extravagant plumage and behaviour evolved (see Evolution) by “runaway” sexual selection as opposed to natural selection. Males adorned with ornamentation that restricted their mobility, or who spent time obtaining gifts or constructing decorated bowers, showed their strength and competence. Females chose the most spectacular, so with each generation the displays became more extreme until they reached limits beyond which the males would be too encumbered to survive. Darwin’s view of male ornamentation was that it appealed, quite arbitrarily, to female “whim”, but other theorists believe that extravagant male ornamentation advertises genuine male qualities to otherwise sceptical females.


Social Organization

Animal social organization, the sum of all the relationships among members of a group of animals all of the same species, varies considerably. It ranges from the cooperation between a male and female during courtship and mating, to the most complex societies, in which only one female at a time produces young, all other females collaborating in the care of offspring and maintenance of the colony. Some animal societies are hierarchical, with dominant and subordinate members; others are loose arrangements of fairly independent family groups.

These relationships have evolved in response to the circumstances under which the species live. Many birds establish a territory during the breeding season. This ensures a supply of food for the young and, because it excludes all other individuals, means the pair mate only with one another. Because of this, most birds are monogamous, even if they are not territorial. Some aquatic species mate for life. Only if one mate dies will an albatross, kittiwake, or swan seek a new partner. Monogamy is much rarer among mammals, although prairie voles mate for life, as do gibbons and some lemurs.

Where the food supply is dispersed, animals tend to be solitary. Bears, most cats, and European hedgehogs live solitary lives, the female accompanied by young which leave as soon as they are capable of living alone. Adult males and females meet only to mate.

Social groups based on mating and the rearing of young are often seasonal, members dispersing once the young leave. Other groupings, for protection or hunting, are permanent. In open country, many mammals live in large groups. Herds of antelope, deer, zebras, and horses, and troops of baboons are familiar examples. They benefit from the safety of numbers.

Often, these groups comprise females and their young with one adult male. This is a harem. The male mates with all the adult females and spends much of his time trying to prevent them deserting. Adult males without harems form all-male herds, but individuals constantly try to acquire harems by abducting females. The male of the harem is not necessarily the leader of the group. A herd of horses, for example, is led by the senior female, to whom all others defer. Other groups may have more than one male. A wildebeest herd comprises about 150 females and young and up to three bulls, which patrol outside the herd, keeping it together and guarding it from predators.

Elephants form extended-family herds of females and their young; among Asian elephants these herds often include an old male relative. Other adult males live outside the herd, and male African elephants form their own groups.

Lions are the only cats that live in social groups and collaborate in hunting, a pride consisting of up to three males and about 15 females and young. Dogs are much more social. Hunting dogs live in packs of up to 90 individuals. They collaborate in hunting and share food amicably, allowing the young to feed first and disgorging food for latecomers. Wolves mate for life and packs consist of one or more family groups, sometimes with outsiders that have been accepted. A strict social hierarchy is maintained by ritualized postures and gestures.

Species that live in colonies exhibit extreme social relationships and are said to be “eusocial”. They include social insects, such as termites, ants, wasps, and some bees, and one species of mammal, the naked mole rat of eastern Africa. The organization of social insects is based on the roles of certain groups within the colony. There is usually one reproductive female, the queen, who may lay thousands of eggs in her lifetime. Most of the other insects in the colony are involved in the construction, maintenance, and defence of the colony. Certain groups of insects may have specific physical features that relate to their roles. The queen bee is usually much longer than the other bees and has an enlarged abdomen for egg-laying; the workers are equipped with stings to defend the colony and pollen baskets for collecting pollen; the drones are stingless and lack pollen baskets, as their only role is to mate with the queen, after which they die. Much of the behaviour within an insect colony is determined either by instinct or by pheromones released by the queen.

A The Question of Altruism

One fascinating aspect of some animal societies is the selfless way one animal seems to render its services to others. In the beehive, for instance, workers labour unceasingly in the hive for three weeks after they emerge and then forage outside for food until they wear out two or three weeks later. Yet the workers leave no offspring. How could natural selection favour such self-sacrifice? This question presents itself in almost every social species.

The apparent altruism is sometimes actually part of a mutual-aid system in which favours are given because they will almost certainly be repaid. One chimpanzee will groom another, removing parasites from areas the receiver could not reach, because later the roles will be exchanged. Such a system, however, requires that animals be able to recognize one another as individuals, and hence be able to reject those who would accept favours without paying them back.

A second kind of altruism is exemplified by the behaviour of male sage grouse, which congregate into displaying groups—leks. Females come to these assemblies to mate, but only a handful of males in the central spots actually sire the next generation. The dozens of other males advertise their virtues vigorously but succeed only in attracting additional females to the favoured few in the centre. Natural selection has not gone wrong here, however; males move further inward every year, through this celibate and demanding apprenticeship, until they reach the centre of the lek.

The altruism of honey bees has an entirely genetic explanation. Through a quirk of hymenopteran genetics, males have only one set of chromosomes. Animals normally have two sets, passing on only one when they mate; hence, they share half their genes with any offspring and the offspring have half their genes in common with one another. Because male Hymenoptera have a single set of chromosomes, however, all the daughters inherit an identical set of paternal genes. Added to the half of their maternal genes that sisters share on average, this makes workers three-quarters related to one another, more closely related than they would be to their own offspring. Genes that favour a “selfless” sterility that assists in rearing the next generation of sisters, then, should spread faster in the population than genes programming the more conventional every-female-for-herself strategy.
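The relatedness arithmetic behind this argument can be written out explicitly. This is a sketch of standard coefficient-of-relatedness bookkeeping, under the simplifying assumption that all sisters share a single drone father:

```python
# Haplodiploid relatedness, back-of-envelope version.
# A worker's genome is half paternal, half maternal.

paternal_share = 1.0   # the haploid father gives every daughter an
                       # identical set, so sisters share it with certainty
maternal_share = 0.5   # ordinary diploid inheritance from the queen

# Relatedness between full sisters: average over the two halves.
r_sisters = 0.5 * paternal_share + 0.5 * maternal_share   # 0.75

# Relatedness a worker would have to her own offspring.
r_mother_offspring = 0.5

# The asymmetry that makes worker sterility pay off:
assert r_sisters > r_mother_offspring
print(r_sisters)  # 0.75
```

With a multiply-mated queen the paternal term is diluted and average sister relatedness falls below 0.75, which is why the single-father assumption matters for the clean three-quarters figure.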

This system, known as kin selection, is widespread. All it requires is that an animal perform services of little cost to itself but of great benefit to relations. Bees are the ultimate example of altruism because of the extra genetic benefit that their system confers, but kin selection works almost as well in a variety of genetically conventional animals. The male lions that cooperate in taking over another male’s pride, for example, are usually brothers, whereas the females in a pride that hunt as a group and share food are a complex collection of sisters, daughters, and aunts.

Even human societies may not be immune to the programming of kin selection. In their study of sociobiology, anthropologists consistently report that simple cultures are organized along lines of kinship. Such observations, combined with the recent discovery that human language learning is in part a kind of imprinting—that consonants are innately recognized sign stimuli, for instance—suggest that human behaviour may be connected more with animal behaviour than was hitherto imagined.

Reviewed By:
Michael Allaby





Musical or Musical Comedy, theatrical production in which songs and choruses, instrumental accompaniments and interludes, and often dance are integrated into a dramatic plot. The genre developed and was refined in the United States, particularly in the theatres along Broadway in New York, during the first half of the 20th century. The musical has origins in a variety of 19th-century theatrical sources, including the operetta, comic opera, pantomime, the minstrel show, vaudeville, and burlesque.


The American musical actually began in 1796, with The Archers; or, The Mountaineers of Switzerland, composed by Benjamin Carr and with libretto by William Dunlap. The Black Crook, produced in 1866, is generally credited as the first musical; in fact, it was an extravaganza, combining melodrama with ballet. In the late 19th century, operettas from Vienna (composed by Johann Strauss, Jr. and Franz Lehár), London (by Sir Arthur Sullivan), and Paris (by Jacques Offenbach) were popular with urban audiences in the eastern United States. At the same time, revues (plotless programmes of individual songs, dances, and comedy sketches) abounded not only in theatres but also in some upper-class saloons, such as the music hall operated in New York by the comedy team of Joe Weber and Lew Fields. The successful shows of another comedy team, Ned Harrigan and Tony Hart, were also revues, but with connecting dialogue and continuing characters. These, in turn, spawned the musical shows of producer-playwright-actor-composer George M. Cohan, the first of which appeared in 1901.

In the years before World War I began in 1914, several young operetta composers emigrated from Europe to the United States. Among them were Victor Herbert, Sigmund Romberg, and Rudolf Friml. Herbert’s Naughty Marietta (1910), Friml’s The Firefly (1912), and Romberg’s Maytime (1917) are representative of the new genre these composers created: American operetta, with simple music and librettos and singable songs that were enduringly popular with the public. The text of a musical (the libretto) has been divided since that time between the “book”, which is the spoken dialogue, and the “lyrics”, which are the words of the songs. These two are often by different authors.


In 1914 the composer Jerome Kern began to produce a series of shows in which all the varied elements of a musical were integrated into a single fabric. In these shows, staged at the intimate Princess Theatre, Kern used contemporary settings and events, in contrast to operettas, which usually took place in fantasy lands. In 1927 Kern provided the score for Show Boat, perhaps the first musical to have a high-quality libretto. It was also adapted from a successful novel, a technique that was to proliferate in post-1940 musicals.

Gradually the old musical formula began to change. Instead of complicated but never serious plots, sophisticated lyrics and simplified librettos were introduced; underscoring (music played as background to dialogue or movement) was added; and new American musical elements, such as jazz and blues, were utilized by composers. In addition, singers began to pay more attention to the craft of acting. In 1932, Of Thee I Sing became the first musical to be awarded a Pulitzer Prize for drama. Its lyricist and composer, the brothers Ira and George Gershwin, had succeeded in intelligently satirizing contemporary political situations.

In the 1920s, satire, ideas, and wit had been the province of the intimate revue. These sophisticated shows were important as testing grounds for the young composers and lyricists who later helped develop the serious musical. One composer-lyricist pair who started in the intimate revues, Richard Rodgers and Lorenz Hart, wrote Pal Joey in 1940, a show that had many of the elements of the later musicals, including a book with well-rounded characters. But it was not a success until its 1952 revival. In the meantime, Rodgers, with Oscar Hammerstein II as his new writing partner, had produced Oklahoma! (1943), which had ballets, choreographed by Agnes de Mille, that were an integral part of the plot. The choreographer-director was eventually to become vastly influential on the shape and substance of the American musical. Jerome Robbins, Michael Kidd, Bob Fosse, and Michael Bennett were prominent among the skilled choreographers who went on to create important musicals, most notably A Chorus Line (1975) and Dancin’ (1978).


As these and other innovations altered the familiar face of musical theatre, audiences came to expect more variety and complexity in their shows; a host of inventive composers and lyricists obliged. In 1949, Cole Porter, who had written provocative songs with brilliant lyrics for many years, finally wrote a show with an equally fine book: Kiss Me, Kate. Rodgers and Hammerstein followed Oklahoma! with Carousel (1945) and South Pacific (1949). Irving Berlin, who had been writing hit songs since 1911, produced the popular but somewhat old-fashioned Annie Get Your Gun (1946). Frank Loesser provided both words and music for Guys and Dolls (1950), with its raffish Damon Runyon characters. Brigadoon (1947) was the first successful collaboration of the composer Frederick Loewe and book-and-lyric writer Alan Jay Lerner, who were later to contribute My Fair Lady (1956), based on George Bernard Shaw’s Pygmalion, and Camelot (1960).

In the 1950s a number of composers gained prominence. Leonard Bernstein wrote the scores for Candide (1956) and West Side Story (1957). The latter, a modern adaptation of Romeo and Juliet, mostly danced and heavily underscored, was greatly influential. Jule Styne wrote the music for Bells Are Ringing (1956) and Gypsy (1959). In the 1960s and 1970s the composer John Kander and the lyricist Fred Ebb collaborated on Cabaret (1966); composer Sheldon Harnick and lyricist Jerry Bock produced Fiddler on the Roof (1964); and Stephen Sondheim, who wrote the lyrics for West Side Story and Gypsy, did the entire scores for a series of musicals, including Company (1970), Follies (1971), A Little Night Music (1973), and Sweeney Todd (1979).

A show that opened on Broadway in 1968 and went on to affect world theatre was Hair. Called a folk-rock musical, it presented a situation rather than a plot, and its lyrics were often unintelligible. But its youthful exuberance, ingenious theatricality and concentration on rock music produced many imitators, notably Godspell and Jesus Christ Superstar (both 1971). The score for the latter was the work of the English composer Sir Andrew Lloyd Webber, who went on to write the hits Evita (1978), based on the life of the Argentine political figure Eva Perón; Cats (1981), adapted from poems by T. S. Eliot; and Song and Dance (1982). Lloyd Webber’s adaptation of Gaston Leroux’s novel The Phantom of the Opera opened in London in 1987; the show received wide critical acclaim and achieved great popularity, and was followed by Sunset Boulevard (1994).

By the mid-1980s the traditional La Cage aux Folles (1983) by composer Jerry Herman and playwright Harvey Fierstein and the innovative Sunday in the Park with George (1984) by Sondheim, to a book by James Lapine, marked possible new trends. For their dramatization of the life of the French painter Georges Seurat, Sondheim and Lapine shared the 1985 Pulitzer Prize for drama. In 1986 the musical adaptation of Victor Hugo’s novel Les Misérables opened in London to popular acclaim and on Broadway the following year.

Bill Kenwright’s 1988 revival of Blood Brothers by Willy Russell became an enduring success in the West End and on Broadway through the 1990s. Other successful musicals of the decade included Miss Saigon (1991) by Alain Boublil and Claude-Michel Schönberg; Kiss of the Spider Woman (1993) with music and lyrics by Kander and Ebb of Cabaret fame; Rent (1996) by Jonathan Larson, which in that year collected four Tony Awards and the Pulitzer for drama; and Fosse (1999), a celebration of the work of the legendary choreographer and showman Bob Fosse. There were also stage versions of popular Walt Disney films: Beauty and the Beast (1994) by Howard Ashman and Tim Rice, who later collaborated with Elton John on the hit The Lion King (1997). Whistle Down the Wind, the classic film about some farm children who find an escaped convict hiding in a barn and believe him to be Jesus Christ, was the basis of a musical of the same name by Lloyd Webber and Jim Steinman, which opened in 1998.

At the start of the new century, Lloyd Webber’s next productions were Bombay Dreams (2002), a musical influenced by the Bollywood film genre, and The Woman in White (2004), an adaptation of the Wilkie Collins novel. A US television talk show inspired Jerry Springer—The Opera (2003), with music by Richard Thomas and a book co-written by Thomas and the comedian Stewart Lee, which received several prestigious Best Musical awards after its West End opening (although a performance screened by the BBC in 2005 attracted vociferous protests of blasphemy from some Christian groups). The year 2005 also saw the premiere of Billy Elliot, an exuberant stage adaptation of the successful 2000 film (both directed by Stephen Daldry), with a score by Elton John.



Culture history


Culture, a word in common use but with complex meanings, derived, like the term broadcasting, from the treatment and care of the soil and of what grows on it. It is directly related to cultivation, and the adjectives cultural and cultured are part of the same verbal complex. A person of culture has identifiable attributes, among them a knowledge of and interest in the arts, literature, and music. Yet the word culture does not refer solely to such knowledge and interest nor, indeed, to education. At least from the 19th century onwards, under the influence of anthropologists and sociologists, the word culture has come to be used generally both in the singular and the plural (cultures) to refer to a whole way of life of people, including their customs, laws, conventions, and values.

Distinctions have consequently been drawn between primitive and advanced culture and cultures, between elite and popular culture, between popular and mass culture, and most recently between national and global cultures. Distinctions have been drawn too between culture and civilization, the latter a word derived not, like culture or agriculture, from the soil, but from the city. The two words are sometimes treated as synonymous. Yet this is misleading. While civilization and barbarism are pitted against each other in what seems to be a perpetual behavioural pattern, the use of the word culture has been strongly influenced by conceptions of evolution in the 19th century and of development in the 20th century. Cultures evolve or develop. They are not static. They have twists and turns. Styles change. So do fashions. There are cultural processes. What, for example, the word culture means has changed substantially since the study of classical (that is, Greek and Roman) literature, philosophy, and history ceased in the 20th century to be central to school and university education. No single alternative focus emerged, although with computers has come electronic culture, affecting kinds of study, and most recently digital culture. As cultures express themselves in new forms, not everything gets better or more civilized.

The word culture is now associated with many other words with historical or contemporary relevance, like corporate culture, computer culture, or alien culture, as is the word cultural. There are cultural institutions of various ages, some old, like the Royal Academy, some new, like the UK Department for Culture, Media, and Sport. They each follow cultural strategies or cultural policies and together they constitute what is sometimes called a “cultural sector”. How commercialized that sector is varies from culture to culture. The American writer Leo Bogart, the author of eight books on communications and former vice-president and general manager of the Newspaper Advertising Bureau, wrote an important paper in 1991 on the spread of the Internet with the title “The American Media System and its Commercial Culture”.

The more recently widespread use of the word culture in sport, for example, has rendered largely obsolete two older usages of culture—the idea of it as a veneer on life, not life itself, a polish, the sugar icing, as it were, on the top of a cake and, at the opposite pole, the sense of it being the pursuit of perfection, the best that is known and thought in the world. The second meaning necessarily involves an ideal as well as an idea and critical judgement and discrimination to realize it. Both meanings have been influential, however, and the second, propounded in the 19th century, remained influential in literary criticism and in education, particularly in the teaching of English literature, in the 20th century.

The multiplicity of meanings attached to the word has made, and still makes, it difficult to define. There is no single, unproblematic definition, although many attempts have been made to establish one. The only non-problematic definitions go back to agricultural usage (for example, cereal culture or strawberry culture) and medical usage (for example, bacterial culture or penicillin culture). Since in anthropology and sociology we also acknowledge culture clashes, culture shock, and counter-culture, the range of reference is extremely wide.


In 1952 two distinguished American anthropologists, A. L. Kroeber and Clyde Kluckhohn, listed no fewer than 164 definitions of culture made by anthropologists from the 1840s onwards. The most quoted early anthropologist was (and is) Edward Tylor, who drew no distinction between culture and civilization: in his Primitive Culture (1871) he wrote that “culture or civilization, taken in its wide ethnographic sense, is that complex whole which includes knowledge, belief, art, morals, law, custom and any other capabilities and habits acquired by man as a member of society”. Many later anthropologists offered a less universalistic and more pluralistic and relativistic conception of culture, confining the term to a particular group of people.

It was to Tylor that the poet and critic T. S. Eliot turned in his aptly titled Notes Towards the Definition of Culture, first published in 1948. Eliot, like Kroeber and Kluckhohn, rightly pointed out that Tylor’s approach had been anticipated by the German anthropologist Gustav Klemm, who defined culture comprehensively almost 30 years before Tylor as “customs, arts, and skills, domestic and public life in peace or war, religion, science and art”.

Tylor pointed to the relationship between culture and society, Klemm to the relationship of culture to religion. Eliot was preoccupied with both of these relationships. For him, it was the function of the superior members and superior families in a hierarchical society to preserve the “group culture”, as it was the function of the producers to alter it. Yet the culture of a whole people was “an incarnation of its religion”. Tylor, by contrast, had a marked distaste for religious authority.

Tylor, like Klemm before him and Eliot after him, was also aware, however, of the importance of “material culture”: raw materials and artefacts, utensils and tools, both in the making of cultures and in their role as witnesses to past cultures. Anthropology and archaeology thus went together, with British anthropologists considering their field of study as social anthropology and American and continental European anthropologists preferring the description cultural anthropology. Historians learned both from social and cultural anthropologists and from sociologists. Eliot, who died in 1965, had by comparison little influence on them as the study of everyday things became an increasingly significant element in the study of history, culminating in the identification of a consumer culture, which had its origins, some historians maintained, in the 18th century. More broadly, historians, particularly in France, stressed that the concept of culture cannot be separated from its history. A very different and far stronger influence on historians, however, was exercised by Marxist writers, although by 1965 there were more diversities of approach and methodology within Marxism than there were among anthropologists.

The original formulation of a Marxist concept of culture was deceptively simple. Marx himself distinguished between an economic base and a cultural superstructure, although he did not use the latter adjective. He was interested in the superstructure, but he did not analyse it as 20th-century Marxists were to do, the first of them the so-called Frankfurt School of sociologists, led by Theodor Adorno and Max Horkheimer. It was they who developed a critical theory of the media as culture makers before being driven out of Germany in 1933 and moving to the United States. Their return to Frankfurt after World War II revived their influence, which for a time drew in Jürgen Habermas, whose writings on the public sphere became more influential among sociologists than theirs, and Herbert Marcuse, another leading member of the School, who had become an American citizen. A philosopher who linked Marx and Freud and discussed class and sex, Marcuse played a key role in rebellious student movements in the United States during the 1960s. His attack on the repressive power, as he conceived of it, of liberalism seemed a threat to American values.

In Italy Antonio Gramsci, general secretary of the Italian Communist Party, who in 1926 was jailed by Benito Mussolini, used his time in prison, in severely constrained circumstances, to write the nine volumes of his Prison Notebooks, which were to be widely studied throughout European universities during the 1960s. Distinguishing between forms of culture, he rejected the base/superstructure model and concluded that intellectuals created the “hegemony”, or cultural domination, by which the ruling class secured the mass support it needed to achieve its aims. Culture demanded the discipline of knowing one’s inner self, but it was through cultural institutions, particularly the Church, through the media, and through language itself that the cultural climate was determined, this, in turn, shaping political options and prospects of life. He was a pioneer of what came to be called “cultural studies”.

So, too, in England, in particular, was Raymond Williams, whose writings on culture and society—culture for him was what he called a “keyword”—culminated in 1977 in his adoption of a Marxist approach. He had not followed such an approach—and he explained why—in his first highly influential books, among them Culture and Society: 1780-1950 (1958) and The Long Revolution (1961), which more than any other books published in Britain drew attention to the concept of culture and a specifically English tradition, centred on it, which developed after and in response to the Industrial Revolution. The key book was Culture and Anarchy (1869) by Matthew Arnold in which he identified culture with “sweetness and light”. In the 20th century, the tradition was expressed in a conservative fashion, as Williams saw it, by Eliot and the prominent Cambridge literary critic, F.R. Leavis.


Williams was one of the main influences on the lively development of cultural studies in Britain during the 1960s, although it was Richard Hoggart, whose Uses of Literacy was published in 1957, who founded the Birmingham Centre for Contemporary Cultural Studies in 1964. The Uses of Literacy was widely read outside and inside universities and was published in paperback in the centenary year of Arnold’s Culture and Anarchy. Like Williams (and the Frankfurt School), Hoggart, never a Marxist, was deeply interested in communications, the subject of a paperback by Williams, Television: Technology and Cultural Form (1974). In 1970, Hoggart left Birmingham for Paris to serve as UNESCO’s assistant director-general (for social sciences, human sciences, and culture).

Another major influence on the Birmingham Centre was Edward Thompson, author of The Making of the English Working Class (1963), who traced his origins to a different tradition from that analysed by Williams, a radical culture emerging in the 18th century but with deeper roots that went underground under repression after the French Revolution. Thompson criticized The Long Revolution on the grounds that no way of life is without its dimension of struggle. Such criticism—and a reading of continental European Marxist writers on literature and culture, notably Lucien Goldmann and György Lukács—impelled Williams to take up Marxist theories.

Meanwhile, the French anthropologist Claude Lévi-Strauss, influenced not by Marx but by Émile Durkheim, had set out to redefine culture, his own keyword, in structural terms, claiming that “any culture may be looked upon as an ensemble of symbolic systems in the front rank of which are to be found language, marriage laws, economic relations, art, science, and religion”. His range of reference extended to material culture and, above all, to food. The complexity of cross-influences and counter-influences is brought out in the history of various “structuralisms”, some specifically Marxist, which shaped much of the language of European sociology in the 1960s and 1970s.


The Birmingham Centre, subject to such multiple influences, derived its programme above all from that of Stuart Hall, born in the Caribbean, who worked with and then succeeded Hoggart, and who subsequently became a professor at the Open University. One of his main fields of study was subcultures—the beliefs, attitudes, customs, and other forms of behaviour of particular groups in society, particularly youth. These differed from those of the dominant society, while at the same time being integrally related to it. The concept of subculture referred also to minority groups such as ethnic minorities and drug users, but it incorporated the ways of life of gay communities and religious groups, the last of these prominent in the 21st century. It was sometimes argued that the subcultures created or expressed by such groups in such forms as dress served to provide recompense for the fact that their members were viewed as outsiders by mainstream society. Hence a drug user with a low social status within conventional society would command respect from other drug users because of his or her group’s individual hierarchy and values. Yet the power of Islamic subcultures could not be explained entirely in such terms. Members of some subcultures were bound most closely together if they were at odds with the values and behaviour of the dominant society. A shared language and a common religion with its own traditions and laws were a bond that transcended national frontiers. Subcultures might also emerge within a minority group—such as punk within youth subculture, separatist feminism within a feminine subculture, Rastafarians within a Caribbean subculture, and an Al-Qaeda group within Islam. Boundaries shifted and loyalties could change. Subcultures, like cultures, developed, and with globalization it was recognized that some subcultures, and indeed cultures, might disappear like lost species.

Theories of subcultures emerged during the 1960s and 1970s, when research was carried out into their formation, development, and relationship to society as a whole. Geographical subcultures tend to be described as regional cultures, and there may be subcultures, particularly class subcultures, within them.


The use of the word globalization is relatively new, more recent than the word modernization, but there was recognition even before the rise of the nation state that there were cultures or civilizations that coexisted, in some cases with links between them. The universal history of the 18th century explicitly acknowledged them. So, too, did various stage theories of development, most of them taking it for granted that there were primitive cultures that were best thought of as obsolete survivals. Progress came to be considered as a law. For the 19th-century French sociologist Auguste Comte, who gave social science the name of sociology, man’s development had consisted of three stages: the theological, the metaphysical, and the scientific, or positive (the term from which positivism takes its name), with the last destined to dominate. In time, however, the idea of stages went out of fashion, and all cultures came to be treated as unique in time and place. “Colonial cultures”, nevertheless, shared common characteristics that implied cultural as well as economic dependence, and even after the end of imperialism such dependence did not necessarily end.

Before World War II and the withdrawal from formal empire, two 20th-century historians, the German Oswald Spengler and the Englishman A.J. Toynbee, while following different methods and reaching quite different conclusions, produced chronological and comparative accounts of human history in which the units involved were not nation-states or empires but civilizations or cultures, each with a spiritual unity of its own. By comparing Greece and Rome, classical civilization, with the 20th-century West, Spengler, in his two-volume Der Untergang des Abendlandes (1918-1922), published at the end of World War I, claimed to have traced a life-cycle (birth, youth, maturity, senescence, death) through which all “advanced” cultures or civilizations pass. Translated into English as The Decline of the West (1926-1928), Spengler’s book had less impact in English-speaking countries than it did in defeated Germany. It provoked English rejoinders, however, though not immediately, notably The Recovery of the West (1941) by Michael Roberts, a great admirer of Eliot, who himself referred to other cultures, among them the Indian, more than Roberts did. The differences between Indian and Chinese civilizations are part of the pattern of global history as it is now interpreted, with more questions posed than answered. The multi-volume Science and Civilization in China (1954- ) by the English biochemist Joseph Needham provides the broadest sweep in English of Chinese culture leading up to what he called “the gunpowder epic”, the transfer of technology to the West, but it has itself been subjected to challenge. Meanwhile, Wang Gungwu has noted carefully how the words civilization and culture, although not the conception of change, were new to the Chinese—and Japanese—in the late 19th century. They were translated as wenming (civilization) and wenhua (culture).

The Cultural Revolution in China, which followed nearly two decades after the creation of a Communist People’s Republic in 1949 and four years after a brief border war with India in 1962 (see Sino-Indian War), was conceived of as a proletarian purge of anti-revolutionary elements, and in waves of terror its leaders savagely attacked both traditional Chinese culture and all forms of Western culture. The precepts of Mao Zedong stirred several leftist groups in the West, however, and he himself survived the end of the Cultural Revolution in 1969. Marxism too survived, as it did the collapse of communism in the Soviet Union.

Toynbee was the other Western historian to write in terms of “civilizations” and “cultures” (he never clearly distinguished between the two) in the 12 volumes of his magnum opus A Study of History (1934-1961), in which he identified 21 developed civilizations throughout history and 5 “arrested civilizations”. His own experiences were almost as varied as those of most of his civilizations. He had been a delegate to the Peace Conference in Paris in 1919 following World War I, and after having become a professor of Byzantine and modern Greek studies, a journalist, and director of studies at the Royal Institute of International Affairs, he became well known throughout the world, if not universally admired, as a historian. He was drawn more to Greek and Roman experience, which he knew best, than to Indian or Chinese; nevertheless at least one Asian religious movement, Cao Dai, hailed him as a prophet, and his works were as well known in Asia as in Europe. His theory of civilizations, based on challenge and response, could be quickly understood, however much detail he used to illustrate it. The most relevant current detail would be provided from Africa, where cultures and subcultures confront all the issues raised by globalization.

Contributed By:
Asa Briggs




Civilization

Civilization, advanced state of a society possessing historical and cultural unity. This article is concerned with the problem of identifying specific societies that, because of their distinctive achievements, are regarded by historians as separate civilizations. Distinctive features of the various civilizations are discussed elsewhere.

The historical perspective used in viewing a civilization, rather than a country, as the significant unit is of relatively recent origin. From the Middle Ages onwards, most European historians adopted either a religious or a national perspective. The religious viewpoint was predominant among European historians until the 18th century. Regarding the Christian revelation as the most momentous event in history, they viewed all history as either the prelude to or the aftermath of that event. The early historians of Europe had little occasion to study other cultures except as curiosities or as potential areas for missionary activity. The national viewpoint, as distinct from the religious one, developed in the early 16th century, largely on the basis of the political philosophy of the Italian statesman and historian Niccolò Machiavelli, for whom the proper object of historical study was the state. After that period, however, the many historians who chronicled the histories of the national states of Europe and America rarely dealt with societies beyond the realm of European culture except to describe the subjection of those societies by (in their view) the more progressive European powers.


Historians became interested in other cultures during the Enlightenment. The development in the 18th century of a secular point of view and principles of rational criticism enabled the French writer and philosopher Voltaire and his compatriot the jurist and philosopher Montesquieu to transcend the provincialism of earlier historical thinking. Their attempts at universal history, however, suffered from their own biases and those prevalent in their culture. They tended to deprecate or ignore irrational customs and to imagine that all people were inherently rational beings and therefore very much alike.

Early in the 19th century, philosophers and historians identified with the Romantic movement criticized the 18th-century assumption that people were the same everywhere and at all times. The German philosophers Johann von Herder and G. W. F. Hegel emphasized the profound differences in the minds and works of humans in different cultures, thereby laying the foundation for the comparative study of civilizations.


According to modern historians of civilizations, it is impossible to write a fully intelligible history of any nation without taking into consideration the type of culture to which it belongs. They maintain that much of the life of a nation is affected by its participation in a larger social entity, often composed of a number of nations or states sharing many distinctive characteristics that can be traced to a common origin. It is this larger social entity, cultural rather than political, that such historians consider the truly meaningful object of historical study. In modern times, the existing civilizations have impinged more and more upon one another to the point that no one civilization pursues a separate destiny anymore and all may be considered participants in a common world civilization.

Some historians see striking uniformities in the histories of civilizations. The German philosopher Oswald Spengler, in The Decline of the West (1918-1922), described civilizations as living organisms, each of which passes through identical stages at fixed periods. The British historian Arnold Toynbee, although not so rigid a determinist as Spengler, in A Study of History (1934-1961) also discerned a uniform pattern in the histories of civilizations. According to Toynbee, a civilization may prolong its life indefinitely by successful responses to the various internal and external challenges that constantly arise to confront it. Many historians, however, are exceedingly sceptical of philosophies of history derived from an alleged pattern of the past. They are particularly reluctant to base predictions about the future on such theories.


Historians have found difficulties in delimiting a particular society and correctly labelling it a civilization; they use the term civilization to refer to a number of past and present societies that manifest distinctive cultural and historical patterns. Some of these civilizations are the Andean, which originated about 800 bc; the Mexican (c. 3rd century bc); the Far Eastern, which originated in China about 2200 bc and spread to Japan about ad 600; the Indian (c. 1500 bc); the Egyptian (c. 3000 bc); the Sumerian (c. 4000 bc); followed by the Babylonian (c. 1700 bc); the Minoan (c. 2000 bc); the Semitic (c. 1500 bc); the Graeco-Roman (c. 1100 bc); the Byzantine, which originated in the 4th century ad; the Islamic (8th century ad); and the Western, which arose in Western Europe in the early Middle Ages.

See Aegean Civilization; Africa; Archaeology; Aztec; Babylonia; Byzantine Empire; Carthage; Celts; China; Egypt; Etruscan Civilization; Europe; Germanic Peoples; Greece; Hittites; Inca; India; Islam; Japan; Jews; Judaism; Maya; Minoan Civilization; Palestine; Roman Empire; Sumer; Syria.



Subculture

Subculture, group of people with beliefs, attitudes, customs, and other forms of behaviour differing from those of the dominant society, while at the same time being related to it.

The concept refers to minority groups such as ethnic minorities, drug users, or even religious groups or gay communities. It has been argued that the subculture created by such groups serves to provide recompense for the fact that their members are viewed as outsiders by mainstream society. Hence a drug user with a low social status within conventional society may command great respect from other drug users because of his or her group’s individual hierarchy and values. Members of a subculture are bound closely together if they are at odds with the values and behaviour of the dominant society. Characteristics of these subcultures, such as forms of language or dress, are emphasized to create and maintain a distinction from the dominant culture. This distinction may, however, also represent a pride of identity while at the same time seeking to belong in society. Although a subculture may be a minority group, it may also emerge within a minority group—such as punk within youth culture, or separatist feminism within feminism.

A problem with the concept of subculture is its presupposition of the existence of a concrete, mainstream culture. Many Western communities today are composed of a number of ethnic and social groups; boundaries between groupings based on class, sexuality, age, ethnicity, religion, and place of origin are increasingly blurred, and mobility between these groups is more frequent. While the concept of subculture is not flawless, it can be a useful tool for analysing the structure and customs of minority social groups.



Education, Postgraduate


Education, Postgraduate, courses of study in colleges and universities, professional schools, and other postsecondary institutions offered after completion of an undergraduate curriculum. Specific programmes of postgraduate education usually require a baccalaureate or bachelor’s degree or its equivalent as a prerequisite for admission. Education beyond the undergraduate years is often directed towards preparation for entrance into a profession such as law, medicine, or dentistry, in which advanced training is necessary for recognition as a practitioner. Although some professions, such as engineering or teaching, require only a baccalaureate degree for entrance, further education is frequently needed for advancement.



Formal professional training in law and engineering originated in ancient times in Egypt, Greece, and Rome. Medieval universities offered instruction in law, medicine, and theology. Beginning in the 16th century, great impetus was given to advanced technical and medical education as a result of scientific discoveries.



Postgraduate study ranges from courses emphasizing intensive training in a specific aspect of professional practice to degree programmes of several years’ duration, either in an academic discipline or a professional field. Many professions also require periodic postgraduate study in order to maintain certification for practice.

Graduate schools generally award master’s degrees or doctorates to those who have satisfactorily completed prescribed courses of study. A year is usually required to obtain a master’s degree, which demands the acquisition of a higher level of knowledge than is needed for a baccalaureate. The doctoral degree involves a longer period of study and requires participation in and summation of some type of original research, as well as written and oral (viva voce) examinations.

The demands for specific courses of postgraduate study change with the needs of society. In most developing nations, for example, professional training in engineering and the health sciences is in great demand. Preparation for a career in medicine represents one of the most intensive curricula, as a medical degree requires at least four years beyond the baccalaureate, and entry into a medical specialty can require four or more additional years of study. Most postgraduate students require funding of some sort. In the United Kingdom, a certain number of postgraduate grants are provided by research bodies, such as the Economic and Social Research Council or the Medical Research Council. Highly competitive scholarships are also sometimes available to support students from industry or from developing countries.



An ever-increasing number of women are now students in higher education programmes throughout the world. Traditionally, many professions, including engineering, law, and medicine, were dominated by men. Women are now demanding and acquiring equal access to the postgraduate education necessary for entry into all professions. This trend is likely to continue as political, economic, and social barriers to equal opportunities for women are removed.

As per capita income increases in a society, the demand for professional training in technical and human services also increases. Foreign aid from developed nations and educational programmes sponsored by the United Nations have done much to support the expansion of postgraduate education in developing countries. Many nations now include plans for the development of postgraduate studies as part of their own systems of higher education rather than supporting professional training abroad for citizens who may or may not return to their own countries.


Education, Medical

Education, Medical, a process by which individuals acquire and maintain the skills necessary for the effective practice of medicine.

To train as a conventional doctor in the Western world a person needs to have achieved a good level of understanding in the sciences (for example, physics, chemistry, biology), either at senior (high) school or at college. Medical schools are usually part of a university (although not all universities have medical schools) and they offer only a limited number of training places in any one year. This results in fierce competition for places, with only the best students being admitted.

Most medical schools offer a training course lasting between three and six years. The curriculum is traditionally divided into two parts: a preclinical course, in which the basic science of how the human body works is studied; and a clinical course, in which the student is introduced to actual patient care in a hospital. The former is usually taught in science departments at the university and the latter at a hospital affiliated with the university.

The preclinical course involves such areas of study as the gross and microscopic appearance and connections of the human body (anatomy), the organization and basic functions of different types of human cell (cell biology), the function and underlying biochemical processes of parts of human cells (biochemistry), the integrated functions of tissues, organs, and body fluids (physiology), the principal actions, distribution, and elimination of drugs in the body (pharmacology), the general principles underlying disease processes and such disease-related micro-organisms as viruses, bacteria, and parasites (pathology), the defence mechanisms of the body (immunology), and the structure and function of genetic material in living and infected cells (genetics).

The clinical part of the course involves medical students working with experienced doctors in general practice and hospitals to learn family practice and general medicine, and such specialized areas of health care as surgery (removal, reconnection, or transplantation of parts of the body), obstetrics (pregnancy and childbirth), paediatrics (diagnosis and treatment of childhood complaints), gynaecology (diagnosis and treatment of ailments of the reproductive system), geriatrics (diagnosis and treatment of ailments suffered by elderly people), and psychiatry (diagnosis and treatment of mental ill-health). During this time, medical students observe and learn from doctors working with patients on the wards and in specialist clinics, and gradually, under their supervision, become involved directly in the provision of health care (for example, diagnosis and administration of therapy).

Students have to pass examinations in all of these different aspects of the course, which take the form of written, practical, and oral tests. Upon graduating, they receive a Doctor of Medicine (MD), Bachelor of Medicine (BM), or an equivalent degree. New doctors swear the Hippocratic Oath (or an equivalent professional statement) to adhere at all times to high standards of medical practice and ethics, and to protect the right of every patient to life, dignity, and confidentiality.

It is usual for “junior” doctors to serve at least one year as an “intern” or “house officer”, with responsibility for both diagnosing and treating patients in the hospital. At this point, they may choose to move to a new hospital. Such a post, however, is considered to be an extension of their training, with overall responsibility for their work resting with the senior colleagues who supervise them. In most countries, “junior” doctors often complain that they work excessively long hours for relatively poor pay (that is, relative to other professionals after several years of training).

During his or her time as a junior doctor, an individual must decide whether to work in general medicine or in a specialist branch of medicine. If the latter, the doctor applies to work with a particular specialist and his or her team and, once accepted, embarks upon a training course lasting several years, the training being obtained largely through the experience of working with other, more experienced doctors in the group. During this time, he or she is called a “registrar” or “intern”, and the training culminates in written and oral examinations set by an official body in that subject (for example, the Royal College of Pathologists or the Royal College of Surgeons in the United Kingdom), which decides whether a doctor is sufficiently knowledgeable and able to practise as a specialist in that particular area of medicine. If successful, the doctor is awarded “membership” of the college.

It is important that doctors keep up with medical progress (the results of medical research concerning new forms of diagnosis and treatment). Most often this takes the form of reading medical journals and books, attending conferences, and discussing medical matters with other specialists in the same or different fields. More recently, doctors have been able to communicate with one another and receive the latest medical information using the Internet (often referred to as the “information superhighway”), which can link computers used by doctors in different hospitals and/or general practices around the world.

Some doctors, especially those in general practice, choose to incorporate such unorthodox medical techniques as acupuncture or reflexology (see Complementary Medicine) into their medical practice and offer these to their patients, where appropriate, usually in parallel with more conventional treatments; they are seldom offered as an alternative to conventional Western medicine. So popular are some of these unorthodox methods that some medical schools are now offering training courses on these topics for both trainee and postgraduate (that is, experienced, practising) doctors.

Contributed By:
Claire Elizabeth Lewis