Historiography (art)

Historiography (art), the study of the history of the visual arts, a field that can range from the detailed, objective cataloguing of works of art to philosophical musing on the nature of beauty.

It was not until the 19th century that art history became a fully fledged academic discipline, but its origins go back to Classical antiquity. The most important work dealing with art to survive from the ancient world is the encyclopedic Latin treatise Natural History, written by Pliny the Elder in the 1st century AD. This work has often been criticized as careless and superficial, but it contains a good deal of valuable information and some entertaining anecdotes on painters and sculptors (the information on art is in the section on metals and stones and their uses). A century after Pliny, the Greek traveller Pausanias compiled a Description of Greece that is a fount of information on architecture, painting, and sculpture, and the ancestor of modern guidebooks.

The most substantial writings on art from the Middle Ages are the book On Buildings by the Byzantine historian Procopius (6th century), dealing with the architecture of the age of Justinian I, and a treatise on arts and crafts entitled De Diversis Artibus (On the Various Arts), written under the pseudonym Theophilus, probably in the early 12th century (the author was possibly Roger of Helmarshausen, a German goldsmith and monk).

In the 15th century Leon Battista Alberti wrote treatises on architecture, painting, and sculpture, and the sculptor Lorenzo Ghiberti compiled a manuscript entitled Commentaries that includes a survey of ancient art (based on Pliny), notes on 14th-century Italian artists, and also his autobiography, the earliest by an artist to survive. The true founding father of art history, however, came a century later in Giorgio Vasari, who wrote the most famous and influential book ever published on the subject, Le Vite de’ Più Eccellenti Architetti, Pittori, et Scultori Italiani (The Lives of the Most Eminent Italian Architects, Painters, and Sculptors), generally referred to simply as Vasari’s Lives. It was first published in 1550 and a second, much-enlarged edition appeared in 1568.

Vasari believed that the arts had reached a high level in Classical antiquity, then declined into barbarism in the Middle Ages, before being revived in Italy in the 14th century by artists such as Giotto and rising to a peak in the work of Vasari’s contemporary, Michelangelo, whom he idealized. This idea of art following a pattern of decay and renewal coloured thinking about the Renaissance for centuries, and Vasari’s biographical method inspired several important imitators, beginning with Karel van Mander (“the Dutch Vasari”), who in 1604 published Het Schilder-Boeck (The Book of Painters), which is the most important source of information on northern European artists up to that date. Among other major biographical compilations were those published in French by André Félibien (1666-1688), in German by Joachim von Sandrart (1675-1679), and in Spanish by Antonio Palomino (1724).

The next great milestone in art-historical writing came not from a biographer, however, but from the German classical archaeologist Johann Joachim Winckelmann, who wrote two major books: Gedanken über die Nachahmung der Griechischen Werke in der Malerei und Bildhauerkunst (Reflections on the Painting and Sculpture of the Greeks), published in 1755; and Geschichte der Kunst des Altertums (History of Ancient Art), published in 1764 (the latter marks the first occurrence of the phrase “history of art” in the title of a book). Winckelmann saw art as part of the general evolution of thought and culture and he tried to explain its character in terms of such factors as social conditions and religious customs. His work was important in gaining recognition for art history as a serious intellectual pursuit and in establishing Germany as its principal home.

The first university professorship in art history was established in 1844 in Berlin for Gustav Friedrich Waagen, an indefatigable traveller who published a mass of information on works of art in public and private collections, notably Treasures of Art in Great Britain (3 vols., 1854). Waagen was not the only outstanding compiler of his time, for he lived in the great age of fact-finding in art history, when prodigious work was done in archival research and the writing of comprehensive catalogues. Among the great enterprises from this period that formed a foundation for much subsequent work is the 20-volume series Le Peintre Graveur (1803-1821) by the Austrian authority on prints Adam von Bartsch; the numbering system used in this pioneering study of painter-engravers has been adopted by most subsequent scholars in the field.

Part of this process of the accumulation of knowledge resulted from trying to establish on stylistic grounds which artists were responsible for works that were not firmly documented. Giovanni Morelli (an Italian connoisseur who wrote in German) attempted to give attribution a scientific basis by minutely studying a painter’s treatment of details such as ears and fingernails. His work was very influential, and this kind of connoisseurship became a major strand in art-historical studies well into the 20th century. Kenneth Clark, for example, wrote: “When I was an undergraduate [in the 1920s] the idea that art scholarship consisted in finding out who painted a picture on the basis of internal evidence alone had the same unquestioned prestige as textual emendation in the field of classical scholarship”.

The American art critic Bernard Berenson was the most famous practitioner of this kind of connoisseurship, and the various lists he compiled of the work of Italian Renaissance painters are still valuable, although many of his attributions have subsequently been questioned. Another approach to stylistic analysis was seen in the work of the Swiss scholar Heinrich Wölfflin, who tried to show that style followed evolutionary principles, most notably in his book Kunstgeschichtliche Grundbegriffe (1915; Principles of Art History, 1932). Wölfflin’s visual analysis was much more subtle and searching than that of his predecessors, and Herbert Read wrote: “it could be said of him that he found art criticism a subjective chaos and left it a science”.

Alongside the methodology that placed paramount importance on the stylistic values of a work of art, there developed another, in which the work was studied as part of the intellectual history of its time, with a new emphasis on the interpretation of subject matter (iconography). The great pioneer of this approach was the German scholar Aby Warburg, whose superb library developed into a research institute, incorporated into the University of London in 1944 as the Warburg Institute. Many outstanding art historians have been associated with the Warburg Institute, notably Ernst Gombrich, but the scholar who is most renowned for his iconographical analysis is probably Erwin Panofsky, who spent most of his career at Princeton University in the United States. Kenneth Clark described Panofsky as “unquestionably the greatest art historian of his time”; he combined immense erudition with rare sensitivity. Some of his followers have been accused of taking his methods too far, “overinterpreting” pictures to find “hidden symbolism” that does not really exist.

Connoisseurship and iconography continue to be important in art history, but since the 1970s there has been a reaction against traditional methodology in the subject. This reaction has been dubbed “the new art history”—“a capacious and convenient title that sums up the impact of feminist, Marxist, structuralist, psychoanalytic, and socio-political ideas on a discipline notorious for its conservative taste in art and its orthodoxy in research” (The New Art History, ed. A. L. Rees and Frances Borzello, 1986). Some traditionalists would reply that the new art history tends to be pretentious and jargon-ridden.





Ethnomusicology

Ethnomusicology, the study of music in its social and cultural context.


Ethnomusicology is commonly thought of as the study of music outside the Western classical tradition (a repertoire that could be defined by the popular term “world music”), but in fact it goes beyond the study of music as patterns of sound and is based on the fundamental tenet that music is a social phenomenon, and must be studied in the context in which it is created, performed, and assimilated. This means that no music lies outside its scope, and an ethnomusicological investigation of Western classical music is not only possible but desirable. Nevertheless, studies have tended to concentrate on folk music and other traditional music of the world, as well as the major classical styles of Asian civilizations (especially those of China and India), and most ethnomusicologists study cultures other than their own.

The correct understanding of the aims and methods of ethnomusicology helps to answer the vexed questions of whether a non-Westerner studying Bach or Mozart would also be an ethnomusicologist, and whether ethnomusicology, as a Western invention, is simply a disguise for the continued dominance of Western concepts and values. It is clear that the study of musical techniques does not in itself constitute an ethnomusicological approach, and ethnomusicology seeks, by its very nature, to dispense with any kind of value judgement, other than those accepted by the society under investigation. The questions in ethnomusicologists’ minds, beyond those concerning the sound and structure of the music itself, are its social function, how it is perceived and evaluated within its own society, who produces it, how such members of the society are chosen and trained, for whom they perform, and for what purpose.

This approach makes ethnomusicology a branch of social anthropology. At the same time, the term itself, coined in 1950 by Jaap Kunst, and the one it replaced—Guido Adler’s “comparative musicology” (1885)—share the word “musicology”, and this has given weight to the more popular approach which concentrates on study of the workings of the music itself, often through direct participation in the learning and performance processes. The prefix “ethno” remains problematic, to some on a par with the discarded adjective “primitive”, and the balance of the anthropological and musicological demands continues to be a major concern.

The origins of ethnomusicology go back at least to the late 18th century. The Dictionnaire de Musique (1768) of Jean-Jacques Rousseau included examples from America and China, and others among his French and British contemporaries investigated Arab, Chinese, and Indian music. Much of the early research, nicknamed “armchair ethnomusicology”, was based on materials brought back by others. Some of the earliest pioneers were not primarily musicians, among them the British mathematician Alexander J. Ellis (1814-1890), known as the father of ethnomusicology. His seminal paper “On the Musical Scales of Various Nations” (1885) not only studied music from around the world with scientific rigour but, more significantly, challenged notions of natural tonal and harmonic laws, which initiated the process of sweeping away Western imperialist assumptions of cultural superiority.

Of no less importance was the invention of the phonograph by Thomas Edison in 1877, because it introduced a means of recording music that could be taken home for analysis (at first by others), and it became indispensable once collector and analyst were the same person and systematic fieldwork became the norm. Béla Bartók, not only one of the 20th century’s greatest composers but also one of its leading folk-song collectors and researchers, was one of the main beneficiaries of Edison’s invention.


Since fieldwork relies on informants, they must be accorded due respect and remunerated. The responsibility continues when the researcher returns home and seeks to use the information, be it in teaching, publication, performing, or marketing as recordings.

Successful fieldwork also relies on modern technology. The phonograph has been replaced by the tape recorder, while the smaller cassette or DAT recorder, the still and video camera, and the computer are also essential aids. Fieldwork data commonly take the form of recordings, films and photographs, diaries and other jottings, publications from the country visited, and musical instruments. Treatises written within the culture studied, for example the Sanskrit shastras of India, are an important resource, especially in historical ethnomusicology, and the study and classification of instruments, known as organology, must also be addressed.

Since work on the data is the first task on returning home, methods of transcription and analysis have been a central concern. The purposes of writing down the music are to facilitate analysis and publication and to assist documentation and preservation. One reason why many ethnomusicologists embark on their work is to save musical traditions from extinction or, ironically, from Western influence. At the same time, changes in musical practice and repertoire are not only inevitable but are accepted by most ethnomusicologists as a sign of a healthy tradition. Yet different kinds of music, having different functions, will not behave in the same way, so change cannot be expected to be uniform. If a musical genre has a close relationship with a specific ceremony, it is quite likely not only that its use will be restricted to that ceremony, but that care will be taken to preserve it in as unchanging a form as possible, because its perceived efficacy would be affected if a change occurred. Examples include religious chants and music used in certain healing rituals.

Yet even this cannot be proposed dogmatically. A striking example of change is the Balinese male interlocking sanghyang chant, which was associated with trance and exorcism rituals and has become the source of the modern kecak chorus accompanying a dance drama put on for the entertainment of tourists.

Since the central premise of ethnomusicology is that a musical style is inextricably linked with the society which produced it, attempts have been made to find clear parallels between them. One of the most ambitious attempts systematically to find correlations between social types and musical ones was Alan Lomax’s Cantometrics (“measure of song”), formulated in the 1960s. The breadth of its application, the relative narrowness of its sample, and the numerous exceptions to its rules found by other scholars opened it to severe criticism, but such enquiry is by no means invalidated. Ethnomusicology has always balanced the need to examine a particular music on its own terms and in its own cultural context with a search for universals in music. On the one hand, it is known that all societies have something which could be perceived as music, making it as central to social cohesion as language or religion, yet what we would term music is often evaluated in completely different ways (for example, the sung call to prayer in Islam, which is melodically rich, is not considered by Muslims to be music) and several societies do not have a term for music. On the other hand, the fact that our ears so readily accept what they hear from all over the world as music seems to prove that some universal modes of discourse are operating. Tonal and rhythmic structures, the principle of repetition, the widespread recognition of the octave, and often the fifth, as fundamental intervals, and the existence of pentatonic scales, from Scotland to China to the Andes, are some examples.


It would be a huge undertaking to assess all the major works of ethnomusicology, or even list the areas of the globe which have been researched. Some important studies are listed in the further reading list below. Instead, it will be useful to cite one or two examples to demonstrate the varying concerns and shifting emphases of the discipline. Several of the early ethnomusicologists intentionally distanced themselves from an involvement in the process of music-making, which they saw as compromising their respectability, especially if they were employed by a colonial administration. A good example is Jaap Kunst, whose monumental study De Toonkunst van Java (1934; Music in Java, 1949) remains unsurpassed, yet who did not devote himself to learning performance, as his successors have done. Nowadays, ethnomusicologists usually attempt to understand music through the practice known in anthropology as participant observation. It attempts to overcome the obstacles of meeting a music on its own terms by simulating the processes of becoming part of the culture and society experienced from birth by the native musician—a process known as “enculturation”. It not only tries to eliminate the temptation to make misleading comparisons with the researcher’s own culture but also motivates the desire to teach non-Western music in the West itself. It could, of course, be argued that no outsider can go through the same enculturation as the native musician, and that processes of comparison can never be truly eliminated. Indeed, much of the discourse, and many of the notation systems, still used by ethnomusicologists are borrowed from the Western tradition.
Nevertheless, the concept of bi-musicality (clearly analogous to bilinguality), propounded by Kunst’s pupil, Mantle Hood, and put into practice by him in the 1950s and 1960s in his pioneering ethnomusicology programme at the University of California, Los Angeles, has had an extraordinary degree of success, and most university departments of ethnomusicology include some element of practical tuition and performance in at least one non-Western music. The focus of Hood’s programme, and now of many others throughout the world, was the gamelan percussion ensembles of Indonesian music.


Despite its tendency to borrow from several disciplines, and to become complex and forbidding in the process, ethnomusicology has a friendly face, and its lessons and benefits, to other scholars and musicians, and to the general public, are beyond doubt. If it takes a workshop in, say, gamelan music to persuade an adult who was branded unmusical in childhood that he or she is quite musical after all, then already something significant has been achieved, not only for that person’s reappraisal of music and music-making, but also for his or her self-confidence and social skills.

One of the earliest contributors to Western knowledge of non-Western music, A. H. Fox Strangways, argued (1914) that the study of Indian music was not only a noble pursuit in itself, but would actually benefit our understanding of Western music. The old-fashioned comparative approach can still have its uses. The study of non-Western music may not only enhance our understanding of the Western tradition but place its strengths and weaknesses in a clearer perspective. Other wider benefits of ethnomusicology are the growth of practical work, often operating on an open-door policy of workshops and “taster sessions”, and the responses from composers. Several leading Western composers have been inspired by non-Western music since Debussy heard a gamelan play in Paris in 1889. Some, such as Messiaen and Britten, have borrowed actual material, while others, for example Lou Harrison, have composed for non-Western instruments which they have learned to play. Steve Reich, one of the pioneers of minimalism, took lessons in Ghanaian drumming and Balinese gamelan in the early 1970s, and these experiences clearly helped shape his distinctive style. He also made the point that all music is ethnic, which no ethnomusicologist would dispute, and which all musicians would do well to remember.

Contributed By:
Neil Sorrell



Chinese Cinema


Chinese Cinema, historical development of the cinema in China, Hong Kong, and Taiwan. Although the cosmopolitan port city of Shanghai saw screenings of films by unidentified Western companies in 1896 and 1897, China’s capital Beijing had to wait until 1902 for its first glimpse of the new medium. There was also a disastrous attempt to screen films for the Empress Dowager Cixi in the Forbidden City in 1904. The earliest known Chinese production was Dingjun Shan (Dingjun Mountain, 1905), a record of the Peking Opera star Tan Xinpei in scenes from the stage opera of the same title, made by staff of the Fengtai photographic store in Beijing. The short comedy Tou Shao Ya (Stealing the Roast Duck), also based on a stage opera scene, was shot in Hong Kong in 1909 by the theatre director and sometime actor Liang Shaobo, with financial backing from the American entrepreneur Benjamin Polaski.

By 1920, Shanghai was established as the centre of Chinese film production, with a modest amount of ancillary activity in Hong Kong. However, the Chinese market for films was surprisingly small (a United States government trade official noted that there were fewer than 100 cinemas in 1922, almost all of them in the treaty ports of the east coast), and the bankruptcy rate among film companies was high. At the time, 80 to 90 per cent of all films screened in China and Hong Kong were American imports. China’s own production in the 1920s divided neatly into two types of film: those derived from Hollywood models (chiefly melodramas, comedies, and romances) and those drawn from sources in Chinese popular culture (chiefly historical and legendary stories from the opera stage, and martial arts fantasies from pulp fiction). Very little Chinese cinema from this period survives today.

Chinese cinema reached remarkable creative heights in the 1930s, partly because the medium began to attract young artists and intellectuals, such as the American-educated writer-director Sun Yu and the Japanese-educated Communist screenwriter Xia Yan, and partly because the growing threat of a Japanese invasion provided the impetus for films to become the voice of patriotic resistance and national identity. Formal innovations, generally derived from experimentation in Hollywood and Soviet silent cinema, meshed with an agenda of Communist-inspired themes, including women’s rights, social inequality, and national defence. Technique, however, lagged behind; silent films and part-sound hybrids remained in production until 1935.

The combination of under-investment, poor distribution, and political censorship by the Kuomintang (KMT) government guaranteed that many production companies were short-lived, but the industry was dominated by two “majors” in the 1930s. One was the MGM-like Star Company (Mingxing, founded in 1922 by pioneer directors Zhang Shichuan and Zheng Zhengqiu); the other was United Photoplay Service (Lianhua, founded in 1930 by Luo Mingyou). Both companies averaged one release per month. United, which had the star Ruan Lingyu (“China’s Garbo”) under contract, made such outstanding films as Wu Yonggang’s Shennü (1934; The Goddess), probably the world’s first non-moralistic film about prostitution, and Sun Yu’s startlingly erotic patriotic thriller Da Lu (1934; The Highway). Star peaked with such sophisticated films as Yuan Muzhi’s Malu Tianshi (1937; Street Angel), a tough-but-romantic vignette of life, love, and social injustice in Shanghai’s “lower depths”.

This golden age in Shanghai cinema was abruptly curtailed when the city fell to the Japanese in 1937. Few directors stayed to work under Japanese supervision; most fled to Hong Kong or inland to Wuhan, using severely limited resources to make agitprop films for the war effort. When production resumed in Shanghai in 1946, much had changed. The approaching civil war between KMT Nationalists and Communists sharpened the political climate, forcing all film-makers to take sides. Some left-wing directors fled to Hong Kong to avoid persecution; they were followed by many right-wing directors after the Communist victory in 1949. Shanghai films of the late 1940s relied more on dialogue and theatrical-style staging than had the pre-war films, but included a number of titles now internationally acknowledged as classics, such as Fei Mu’s searching analysis of post-war depression Xiao Cheng zhi Chun (1948; Spring in a Small Town) and Zheng Junli’s parable of working-class solidarity Wuya yu Maque (1949; Crows and Sparrows).



The establishment of the People’s Republic in 1949 split Chinese cinema into three kinds. In China itself, the Communists set about reinventing cinema as a popular medium for the vast rural hinterlands, which had never seen films before; most films became vehicles for government propaganda, challenging feudal traditions and superstitions, offering ideological education, and publicizing national movements and campaigns. Soon after 1949, foreign film imports were limited to titles from other Communist countries. New state-run film studios were opened in many regions, while distribution and exhibition were expanded to reach the furthest-flung parts of the country. Some 600 films were produced in the years between 1949 and 1966, which marked the start of the hugely disruptive Cultural Revolution and the enforced shutdown of the film industry for six years. Some films from the first 17 years of Communist rule did their best to revive the old Shanghai traditions of entertainment value, style, and sophistication; the best were Xin Juzhang Daolai zhi Qian (1956; Before the New Director Arrives), a Gogol-esque satire by the ex-actor Lü Ban, and Wutai Jiemei (1964; Two Stage Sisters), a sumptuous melodrama about the theatre world by Xie Jin.

The retreat of KMT Nationalists to Taiwan in 1949 laid the foundations for film production on the island. The KMT’s own Central Motion Picture Corporation (CMPC) was the first (and for many years the biggest) producer, specializing in anti-Communist propaganda films, historical dramas, and middle-class melodramas. The CMPC’s example gradually brought other producers into the field, and a thriving subculture of low-budget features in Taiwan’s own dialect took shape alongside the “prestige” productions in Mandarin Chinese. However, the most prominent directors—Li Hanxiang, King Hu (Hu Jinquan), and Li Xing—were all ex-mainlanders dedicated to upholding pre-Communist cultural traditions.

The most prolific centre for the production of Chinese films after 1949 was Hong Kong, where the many companies producing films in the local Cantonese dialect were joined by as many new companies producing films in Mandarin. Production in Cantonese remained extremely prolific until the advent of broadcast (as distinct from cable) television in 1967: averaging 125 features a year throughout the 1950s, production peaked at over 200 films a year in 1960 and 1961. Mandarin production got off to a hesitant start (6 features in 1946, 15 in 1950), but was up to nearly 80 films a year by 1970. Both film industries had dissident left-wing factions determined to raise difficult social questions, but both were dominated by Hollywood-style entertainment films with a Chinese twist. The single most popular genre was the swordplay/martial arts film, which, in its early 1970s incarnation as the kung fu film, gave Chinese cinema its first palpable international successes and made Bruce Lee the first globally famous Chinese star.


Like their contemporaries in North America and Europe, the major Chinese film companies (Shaw Brothers and Golden Harvest in Hong Kong, CMPC in Taiwan) gradually lost touch with the tastes and interests of audiences seduced by the newly available medium of television. By the late 1970s, the industries in Hong Kong and Taiwan were in a slump; at the same time the state-run studios in China were struggling to find a role in the new political and economic climate after Mao’s death. These developments created the conditions for the arrival of successive “new waves”, which transformed Chinese cinema and gradually won it a substantial international audience.

The first “new wave” broke in Hong Kong in 1979, when Ann Hui (Xu Anhua), Tsui Hark (Xu Ke), Yim Ho (Yan Hao), and other young directors—most of them trained in European or American film schools—moved from television into film production, generally working with small, independent production companies and bringing a strong engagement with social realities into their films. Taiwan soon followed suit: the CMPC began producing portmanteau films with episodes by hitherto untried directors such as Edward Yang (Yang Dechang), Hou Xiaoxian, Wang Tong, and Wan Ren. And a ‘new wave’ reached China in 1984, when recent graduates from the Beijing Film Academy began making films with innovative structures and tones, asking questions rather than providing pat political answers. Through such films as Huang Tudi (1984; Yellow Earth, Chen Kaige), Daoma Zei (1986; Horse Thief, Tian Zhuangzhuang), and Hong Gaoliang (1987; Red Sorghum, Zhang Yimou) this group (nicknamed the “Fifth Generation” film-makers by Chinese critics) transformed the image of Chinese cinema at home and abroad.

But the early creative momentum of these “new waves” proved impossible to sustain. As the interest of domestic audiences waned once again in the 1990s, many of the leading directors were forced into commercial or political compromises. Thanks to investment from Hong Kong, Taiwan, and Japan, those in China initially weathered the storm better than their contemporaries. Films such as Zhang Yimou’s Ju Dou (1990) and Dahong Denglong Gaogao Gua (1991; Raise the Red Lantern), Tian Zhuangzhuang’s Lan Fengzheng (1993; The Blue Kite), and Chen Kaige’s Bawang Bieji (1993; Farewell, My Concubine) won festival prizes, Academy Award (Oscar) nominations, and widespread distribution. However, the Chinese government’s attempts to reform the film industry by privatizing the studios and tightening the rules around censorship and foreign investment halted most of such production in its tracks. Meanwhile audiences in Hong Kong and Taiwan were transferring their allegiance en masse from local films to imports from Hollywood and elsewhere. The economic depression of the late 1990s and early 2000s intensified the problems.

All three Chinese film industries are currently shadows of their former selves. Production levels have fallen sharply, and mainstream Chinese films no longer dominate the distribution circuits of East Asia. Paradoxically, though, commercial decline has spurred creativity. Individual film-makers in China, Taiwan, and Hong Kong (many working through their own independent companies) continue to produce outstanding work and win both festival prizes and foreign sales. In Hong Kong, this field is led by Wong Kar-Wai (Wang Jiawei), whose Hua Yang Nian Hua (2000; In the Mood for Love) is one of the most profitable Chinese films ever released in Western Europe. In Taiwan, the most high-profile exports have been Edward Yang’s Yi Yi (2000; A One and a Two…) and Hou Xiaoxian’s Hai Shang Hua (1998; Flowers of Shanghai); younger directors such as Tsai Mingliang have also made their mark, and the Taiwanese-American director Ang Lee gave Chinese cinema its first multiple-Oscar winner with Wo Hu Zang Long (2000; Crouching Tiger, Hidden Dragon).

There has also been a surge of low-budget “non-professional” film-making throughout the region, most visibly in China, where young directors unable to find support in what remains of the film industry have taken matters into their own hands by making independent films outside the law. In the 1990s, Wang Xiaoshuai, He Jianjun, Zhang Yuan, and other “indie” film-makers effectively displaced their peers in the industry as upholders of artistic excellence and the spirit of innovation in Chinese cinema. Thanks to the example of Jia Zhangke, director of Xiao Wu (1997), Zhantai (2000; Platform), and Ren Xiao Yao (2002; Unknown Pleasures), more of these “outlaw” film-makers are appearing in China every year. The irony of their success abroad is that their films cannot legally be distributed in China itself. See also New Wave.

Reviewed By:
Tony Rayns


Genocide, crime in international law that, according to the 1948 United Nations (UN) Convention on the Prevention and Punishment of the Crime of Genocide, is defined as “any of the following acts committed with intent to destroy, in whole or in part, a national, ethnical, racial, or religious group, as such: killing members of the group; causing serious bodily or mental harm to members of the group; deliberately inflicting on the group conditions of life calculated to bring about its physical destruction in whole or in part; imposing measures intended to prevent births within the group; forcibly transferring children of the group to another group”.

The enshrinement of genocide in international law does not, however, mean that the definition of the crime is agreed upon among scholars or politicians. Accordingly, while there are only a few historical instances that are unanimously agreed to constitute genocide, including the Nazi “final solution of the Jewish question” (see Holocaust) and the 1994 Rwandan killing of up to 1 million Tutsis by a Hutu-dominated regime (see Rwandan Genocide), many more cases arguably qualify in the eyes of various important authorities. Nor, as a rule, has the implicit determination of the UN to prevent and punish genocide resulted either in deterring the perpetration of the crime or in concrete steps to intervene. Punishment has, however, occurred in a few instances of genocide as a result of the creation of ad hoc international criminal tribunals in the 1990s, while the advent of the International Criminal Court in 2002 signals the potential to bring more perpetrators to justice.


The term “genocide”, combining the Greek genos (“race” or “tribe”) with the Latin-derived suffix -cide (“killing”), was coined by the Polish-Jewish jurist Raphael Lemkin during World War II. Though the immediate context was the Nazi occupation of Europe, and, as part of that, the “final solution”, Lemkin was not solely concerned with the murder of 6 million Jews, at least 200,000 Roma (Gypsies), and many millions of civilians of the Slavic countries. He viewed the crime of genocide as being as old as recorded history, and wrote extensively on cases of group destruction from classical antiquity, through the Middle Ages and early modern periods, and in Europe’s colonies in the Americas, Africa, and Australasia.

Lemkin’s interest in the subject was undoubtedly shaped by the experience of ethnic minorities in the increasingly intolerant political atmosphere of central and eastern Europe in the early 20th century, but it was also stimulated by the mass deportation and murder of at least 1 million of the Armenian Christian subjects of the Ottoman Empire during World War I. (While successive Turkish governments deny that the actions of their predecessor Ottoman regime constitute genocide, most independent scholars, Lemkin included, regard them as such.)

The UN definition that Lemkin was so important in establishing is the only legal definition in existence. Nevertheless, the definition is open to wide interpretation and contains the seeds of a number of controversies. First, owing to the convention’s stipulation about proving not only that widespread killing among a group has taken place, but also that the killing was committed with accompanying intent to destroy the group “as such” and “in whole or in part”, gaining convictions for genocide is not always straightforward, even when there is sufficient evidence to convict for the perpetration of any given act of mass murder. Following on from this, the second point of controversy in the UN definition is the difficulty of determining how big, in absolute or proportional terms, the part of the group that the perpetrator intends or intended to destroy needs to be in order for the action to qualify as genocidal.

A third point of controversy involves the five acts enumerated as ways of perpetrating genocide. Only one of these involves outright, direct murder, so is it, therefore, possible to have a crime of genocide that does not involve direct murder at all (or involves only a relatively low level of direct murder), as with, for instance, the forced removal of Australian Aboriginal children from their parents and their placement in white families (see Aborigines); or the undermining of the cultural foundations of a group’s existence by measures such as the removal of the people from their ancestral lands, bans on minority languages, or the destruction of libraries and other cultural centres?

A fourth point of controversy involves the types of groups described as potential victims of genocide. Political groups, for example, are absent from the list of national, ethnical, racial, or religious groups, and yet political groups, often very broadly defined, have been targeted across modern history in episodes that many historians would consider tantamount to genocide. Examples of this phenomenon include the Stalinist attack on the kulak (rich peasant) population of the USSR from 1929, or the assault by the Khmer Rouge on the urban and bourgeois populations of Cambodia in 1975-1978 (the main element of a programme of mass murder that accounted for approximately 1.7 million people, or approximately 20 percent of the country’s population).

One of the reasons for the absence of political groups from the convention’s list is that some of the signatory states were concerned that their own records of, and potential for, crushing political opposition would be brought under scrutiny. This illustrates that the UN definition was itself the product of political compromise, which in turn helps explain why many scholars are not satisfied with it. More generally, because of the many controversies over the definition, it is impossible to provide a satisfactory list of historical cases of genocide. At the same time, a strong case could be made for the presence of genocide in so many instances that detailing each of them would take prohibitively long.

Every genocide has its own blend of motivations, deriving from the particular history and circumstances of the society in which it occurs. Racism or religious hatred may be common motivations, yet the existence of prejudices alone is an insufficient explanation since many divided societies never descend into genocide.

Situations of social crisis may often precipitate genocide. Revolution is one such prime context, as state and social structures are reshaped according to some radical agenda, and groups deemed to be impeding the revolution are removed. War is another prime context, as military conflict enhances feelings of group solidarity but also of xenophobia and paranoia, and as the embrace of violent means in the military sphere extends into the social and political spheres. The sense that a state’s or society’s very existence is under threat may also be used to justify radical attacks on groups perceived to be posing that threat. At the same time, the successful acquisition and consolidation of new territory by military or other means have also traditionally provided an important context for genocide, as, for instance, when indigenous populations such as the Native American peoples were dispossessed and killed, or had their conditions of life destroyed, by white European settlers.

The example of the white settlers, many of whom were British subjects or American citizens, shows that it is not just totalitarian states that perpetrate genocide. The largest-scale deliberate mass murders in modern history were conducted by Nazi Germany, the USSR under Joseph Stalin, and Maoist China, but few cultures or political systems have shown themselves immune to the destruction of other groups within or beyond their own polity. Nor are state infrastructures themselves the only possible agents of genocide, as is evidenced by instances of mass inter-group destruction in societies without recognizable state forms. In many instances in the European colonies, too, it was the frontier societies of the colonialists, albeit acting with the tacit consent of their own governments, that perpetrated mass murder on their own initiative and with their own, more local organization. Finally, as in instances of the forced displacement and killing of indigenous groups in the Amazon rainforests by developers, corporations have also been complicit in the mass destruction of both life and culture.

Photo caption (AP Photo/Heng Sinith): Meo Soknen, 13, stands inside a small shrine filled with the bones and skulls of victims of the Khmer Rouge, near her home in the Kandal Steung district of Kandal province, Cambodia, 31 March 2009. The shrine, 27 kilometres (17 miles) south of Phnom Penh, is one of many out-of-the-way and forgotten monuments to the “Killing Fields”. On the same day, Kaing Guek Eav, also known as “Duch”, commander of the infamous Tuol Sleng prison, accepted responsibility before a UN-backed tribunal for the torture and execution of thousands of inmates there.


The late 20th and early 21st centuries have seen an increasing international concern with genocide alongside a broader interest in human rights. The Holocaust, in particular, has become central to the memorial cultures of many Western countries. Yet despite this awareness, and despite the UN’s genocide convention, the UN has not generally proven effective at intervening to prevent genocides in progress.

Action to intervene militarily in genocide under a UN mandate requires the consent of each of the five permanent members of the UN Security Council. Given the political divisions that have often existed between individual members, particularly, but not only, during the Cold War, gaining the unanimous agreement of the Security Council has proven difficult. At times, indeed, each of the five permanent member states has enjoyed good, protective relations with regimes suspected of perpetrating genocide, including, for instance: China and Russia with Sudan during the attacks in the early 21st century on various ethnic groups in Darfur, primarily the Fur, Masalit, and Zaghawa peoples, by armed militias supported by the Islamist Khartoum government; the United States with Indonesia during the state’s attack on Indonesian leftists in the 1960s and 1970s and during the occupation of East Timor (now Timor-Leste) in the 1970s and 1980s; and Britain with Iraq during the Saddam Hussein regime’s assault on the Kurds and other groups during the 1980s. Further, the evidence points to a distinct lack of will for forceful intervention by the most powerful UN member states under any circumstances. This is in large part due to the reluctance of member states to risk their own soldiers in matters not considered to be of direct national interest.

At the time of writing, the armed intervention in the former Yugoslavia during the 1990s (tellingly by US-led NATO forces rather than the UN-mandated force technically required under international law) looks more like the exception than the rule in the limited response of the international community to genocide. Many commentators have observed that the lacklustre international response to the genocide in Sudan, as to the earlier Rwandan genocide (occurring at approximately the same time as the Yugoslavian crisis), and to the massive inter-group destruction in the Democratic Republic of the Congo (see Post-Independence African Wars), suggests that the most powerful states in the UN, and the West more generally, are less concerned with the mass murder of Africans than with that of Europeans in closer cultural and physical proximity to themselves.

More progress has been made on the matter of punishment for genocide in the legal sphere than on intervention or prevention in the political-military sphere. The formation of ad hoc international criminal tribunals for the former Yugoslavia (at The Hague, in the Netherlands) and for Rwanda (at Arusha, in Tanzania), and elsewhere, and the institution of an International Criminal Court sitting permanently at The Hague, mark significant advances both in bringing perpetrators to justice and in the jurisprudence of international law. The trial at The Hague of the erstwhile president of Yugoslavia, Slobodan Milošević, from 2002 to 2006 was, for instance, the first trial for genocide (alongside other charges) of a former head of state. The first conviction before an international court for the crime of genocide came in 1998, when Jean-Paul Akayesu was found guilty of actions committed while he was mayor of the Rwandan town of Taba.

The successes and the very existence of the international courts illustrate the contemporary awareness of the significance of genocide and a determination, in principle, to punish the authors and agents of the crime. However, the question of who is brought to trial still frequently depends on the international political constellation. It is also, clearly, easier for states to lend moral and financial support to a legal venture than to lend military support for forces of intervention and occupation. Accordingly, the trial of some perpetrators of genocide may be not so much a complement to armed intervention as a substitute for it. Whether or not this is so, the ongoing instances of genocide, ethnic cleansing, and other, related crimes in today’s world suggest that the genocide convention and the international legal infrastructure have done little to deter would-be perpetrators. See also International Criminal Tribunal for Rwanda; International Criminal Tribunal for the Former Yugoslavia.

Contributed By:
Donald Bloxham


Austro-Asiatic Languages

Austro-Asiatic Languages, important language family with two subfamilies: Munda, 21 languages spoken by several million people in India; and Mon-Khmer, divided into 8 branches (with many further subdivisions), 168 languages spoken by some 35 to 45 million people in South East Asia. Few of the languages have a written history. Among the Mon-Khmer languages are Khmer, the national language of Cambodia; Mon, a related language spoken in parts of Myanmar (Burma) and Thailand; the six Nicobarese languages spoken by several thousand people on the Nicobar Islands; and Vietnamese.

The Munda languages are polysyllabic and differ from other Austro-Asiatic languages in their word formation and sentence structure (see Indian Languages). In the Mon-Khmer subfamily, Khmer and Mon have borrowed many words from the Indian languages Sanskrit and Pali. In the Viet-Muong branch of Mon-Khmer, Vietnamese was heavily influenced by Chinese; it is monosyllabic and has a complex tone system, as do other Viet-Muong languages. A few other Mon-Khmer languages have simple tone systems; much more common, however, are differentiations of vowel quality—breathy, creaky, or normal. The sound systems of Austro-Asiatic languages are unusual in that they contain a large number of vowel sounds, often up to 35. Suffixes are not found in Mon-Khmer languages, but prefixes and infixes are common. In sentences, final particles may indicate the speaker’s attitude, and special modifiers called expressives convey images of colours, noises, and feelings. Some languages lack voiced stops such as g, d, and b. Words may end with palatalized consonants such as ñ. Other distinctive sounds include implosive d and b, produced by suction of breath.

Mon and Khmer are written with Indic-derived alphabets which have been modified to suit their more complex phonology. Vietnamese was written for centuries with modified Chinese characters. In 1910, however, a system was adopted that uses the Roman alphabet with additional signs; invented in 1650, it was the earliest writing system to notate tones, for which it uses accent marks. Most other Austro-Asiatic languages have been written for less than a century, and, generally, literacy rates remain quite low.

Selected statistical data from Ethnologue: Languages of the World, SIL International.


Buddhism, a major world religion, founded in north-eastern India and based on the teachings of Siddhartha Gautama, who is known as the Buddha, or Enlightened One.

Originating as a monastic movement within the dominant Brahman tradition of the day, Buddhism quickly developed in a distinctive direction. The Buddha not only rejected significant aspects of Brahmanic philosophy, but also challenged the authority of the priesthood, denied the validity of the Vedic scriptures, and rejected the sacrificial cult based on them. Moreover, he opened his movement to members of all castes, denying that a person’s spiritual worth is a matter of birth.

Buddhism today is divided into two major branches known to their respective followers as Theravada, the Way of the Elders, and Mahayana, the Great Vehicle. Followers of Mahayana refer to Theravada using the derogatory term Hinayana, the Lesser Vehicle.

Buddhism has been significant not only in India but also in Sri Lanka, Thailand, Cambodia, Burma, and Laos, where Theravada has been dominant; Mahayana has had its greatest impact in China, Japan, Taiwan, Tibet, Nepal, Mongolia, Korea, and Vietnam, as well as in India. The number of Buddhists worldwide has been estimated at between 150 and 300 million. The reasons for such a range are twofold: throughout much of Asia religious affiliation has tended to be non-exclusive, and Buddhism has been able to adapt itself to many different local religious and cultural traditions. It is especially difficult to estimate the continuing influence of Buddhism in Communist countries such as China.


Buddhism began with the teachings of the historical Buddha and was propagated through the community of disciples he established, the sangha.

A Buddha’s Life

No complete biography of the Buddha was compiled until centuries after his death; only fragmentary accounts of his life are found in the earliest sources. Western scholars, however, generally agree on 563 bc as the year of his birth.

Siddhartha Gautama, the Buddha, was born in Kapilavastu, near the present-day border between India and Nepal, the son of the ruler of a petty kingdom. According to legend, at his birth sages recognized in him the marks of a great man with the potential to become either a sage or the ruler of an empire. The young prince was raised in sheltered luxury until, at the age of 29, he realized how empty his life up to this point had been. Renouncing earthly attachments, he embarked on a quest for peace and enlightenment, seeking release from the cycle of rebirths. For the next few years, he practised Yoga and adopted a life of radical asceticism.

Eventually, he gave up this approach as fruitless and instead adopted a middle path between the life of indulgence and that of self-denial. Sitting under a bo tree, he meditated, rising through a series of higher states of consciousness until he attained the enlightenment for which he had been searching. Having attained this ultimate religious truth, the Buddha underwent a period of intense inner struggle over whether to share it. He then began to preach, wandering from place to place, gathering a body of disciples, and organizing them into a monastic community known as the sangha. In this way, he spent the rest of his life.

B Buddha’s Teachings

The Buddha was an oral teacher; he left no written body of thought. His teachings were transmitted as an oral tradition for several centuries and were subsequently systematized and interpreted by various individuals and schools within India and elsewhere.

C The Four Noble Truths

At the core of the Buddha’s enlightenment was the realization of the Four Noble Truths. (1) Life is suffering. This is more than a mere recognition of the presence of suffering in existence. It is a statement that, by its very nature, existence is essentially painful from the moment of birth to the moment of death. Even death brings no relief, for the Buddha accepted the prevailing Indian idea of life as cyclical, with death leading to further rebirth. (2) All suffering is caused by ignorance of the nature of reality and the craving, attachment, and grasping that arise from such ignorance. (3) Suffering can be ended by overcoming ignorance and attachment. (4) The path to the suppression of suffering is the Noble Eightfold Path, which consists of right views, right intention, right speech, right action, right livelihood, right effort, right mindfulness, and right concentration. These eight are usually divided into three categories that form the cornerstone of Buddhist faith: morality, meditation, and wisdom.

D Anatman

Buddhism analyses human existence as made up of five aggregates or “bundles” (skandhas): the material body, feelings, perceptions, predispositions or karmic tendencies, and consciousness. A person is only a temporary composition of these aggregates, which are subject to continual change. No one remains the same for any two consecutive moments. Buddhists deny that the aggregates individually or in combination may be considered a permanent, independently existing self or soul (atman). Indeed, they regard it as a mistake to conceive of any lasting unity behind the aggregates that constitute an individual. The Buddha held that belief in such a self results in egoism, craving, and hence in suffering. Thus he taught the doctrine of anatman, or the denial of a permanent soul. To the Buddha, all existence was characterized by “the three universal truths”: impermanence (anitya), suffering (dukkha), and non-substantiality or no-soul (anatman). The doctrine of anatman made it necessary for the Buddha to reinterpret the Indian idea of repeated rebirth in the cycle of phenomenal existence known as samsara. To this end, he taught the doctrine of pratityasamutpada, or dependent origination. This 12-linked chain of causation shows how ignorance in a previous life creates the tendency for a combination of aggregates to develop. These, in turn, cause the mind and senses to operate. Sensations result, which lead to craving and a clinging to existence. This condition triggers the process of becoming once again, producing a renewed cycle of birth, old age, and death. Through this causal chain, a connection is made between one life and the next. What is posited is a stream of renewed existences, rather than a permanent being that moves from life to life—in effect a belief in rebirth without transmigration.

E Karma

Closely related to this belief is the doctrine of karma. The Sanskrit term karma literally means “action”, and as a technical term, it refers to a person’s intentional acts and their ethical consequences. Human actions lead to rebirth, wherein good deeds are inevitably rewarded and evil deeds punished. Thus, neither undeserved pleasure nor unwarranted suffering exists in the world, but rather a universal justice. The karmic process operates through a kind of natural moral law rather than through a system of divine judgment. One’s karma determines such matters as one’s species, beauty, intelligence, longevity, wealth, and social status. According to the Buddha, the karma of varying types can lead to rebirth as a human, an animal, a hungry ghost, a denizen of hell, or even among the various categories of gods.

Although never actually denying the existence of the gods, Buddhism denies them any special status or role. Their lives in heaven are long and pleasurable, but they are in the same predicament as other creatures, being subject eventually to death and further rebirth in lower states of existence. They are not creators of the universe or in control of human destiny, and Buddhism denies the value of prayer and sacrifice to them. Of the possible modes of rebirth, human existence is preferable, because the deities are so engrossed in their own pleasures that they lose sight of the need for salvation. Enlightenment is possible only for humans.

F Nirvana

The ultimate goal of the Buddhist path is release from the round of phenomenal existence with its inherent suffering. To achieve this goal is to attain nirvana, an enlightened state in which the fires of greed, hatred, and ignorance have been quenched. Not to be confused with total annihilation, nirvana is a state of consciousness beyond definition. After attaining nirvana, the enlightened individual may continue to live, burning off any remaining karma until a state of final nirvana (parinirvana) is attained at the moment of death.

In theory, the goal of nirvana is attainable by anyone, although in early Buddhism it is a realistic goal only for members of the monastic community. In Theravada Buddhism, an individual who has achieved enlightenment by following the Eightfold Path is known as an arhat, or worthy one, a type of solitary saint.

For those unable to pursue the ultimate goal, the proximate goal of better rebirth through improved karma is an option. In Theravada Buddhism, this lesser goal is generally pursued by lay Buddhists in the hope that it will eventually lead to a life in which they are capable of pursuing final enlightenment as members of the sangha.

The ethic that leads to nirvana is detached and inner-oriented. It involves cultivating four virtuous attitudes, known as the Abodes of Brahma: loving-kindness, compassion, sympathetic joy, and equanimity. The ethic that leads to better rebirth, however, is centred on fulfilling one’s moral duties as a member of a family or society. It involves acts of charity, especially support of the sangha, as well as observance of the five precepts that constitute the basic moral code of Buddhism. The precepts prohibit killing, stealing, telling lies, sexual misbehaviour, and the use of intoxicants. By observing these precepts, the three roots of evil—lust, hatred, and delusion—may be overcome.

G Early Development

Shortly before his death, the Buddha refused his disciples’ request to appoint a successor, telling them to work out their own salvation with diligence. At that time Buddhist teachings existed only in oral traditions, and it soon became apparent that a new basis for maintaining the community’s unity and purity was needed. Thus, the monastic order met periodically to reach agreement on matters of doctrine and practice. Four such meetings are singled out in the traditions as major councils.

H Major Councils

The first council was held at Rajagrha (present-day Rajgir) immediately after the Buddha’s death. Presided over by a monk named Mahakasyapa, it met to recite and agree on the Buddha’s actual teachings and on proper monastic discipline.

About a century later, a second great council is said to have met at Vaisali. Its purpose was to deal with ten questionable monastic practices—the use of money, the drinking of palm wine, and other irregularities—of monks from the Vajjian Confederacy; the council declared these practices unlawful. Some scholars trace the origins of the first major split in Buddhism to this event, holding that the accounts of the council refer to the schism between the Mahasanghikas, or Great Assembly, and the stricter Sthaviras, or Elders. More likely, however, the split between these two groups became formalized later as a result of the continued growth of tensions within the sangha over disciplinary issues, the role of the laity, and the nature of the arhat.

In time, further subdivisions within these groups resulted in 18 schools that differed on philosophical matters, religious questions, and points of discipline. Of these 18 traditional sects, only Theravada survives.

The third council at Pataliputra (present-day Patna) was called by King Ashoka in the 3rd century bc. Convened by the monk Moggaliputta Tissa, it was held in order to purify the sangha of a large number of false monks and heretics who had apparently joined the order because of its royal patronage. This council refuted the offending viewpoints and expelled those who held them. In the process, the compilation of the Buddhist scriptures (Tripitaka) was supposedly completed, with the addition of a body of subtle philosophy (Abhidharma) to the doctrine (dharma) and monastic discipline (Vinaya) that had been recited at the first council. Another result of the third council was the dispatch of missionaries to various countries.

A fourth council, under the patronage of King Kanishka, was held about ad 100 at Jalandhar or in Kashmir. Both branches of Buddhism may have participated in this council, which aimed at creating peace among the various sects, but Theravada Buddhists refuse to recognize its authenticity. The council at Pataliputra is recorded only in Theravada sources, and the council of Kashmir is described only in some Indian sources and subsequent Chinese and Tibetan accounts. These appear therefore to be gatherings representing local traditions rather than the Buddhist sangha as a whole.

I Formation of Buddhist Literature

For several centuries after the death of the Buddha, the scriptural traditions recited at the councils were transmitted orally. These were finally committed to writing about the 1st century bc. Some early schools used Sanskrit for their scriptural language. Although individual texts are extant, no complete canon has survived in Sanskrit. In contrast, the full canon of the Theravadins survives in Pali, which was apparently a popular dialect during the Buddha’s life.

The Buddhist canon is known as the Tripitaka, or Three Baskets, because it consists of three collections of writings: the Sutra Pitaka, a collection of discourses; the Vinaya Pitaka, the code of monastic discipline; and the Abhidharma Pitaka, which contains philosophical, psychological, and doctrinal systemizations and classifications.

The Sutra Pitaka is primarily composed of dialogues between the Buddha and other people. It consists of five groups of texts: Digha Nikaya (Collection of Long Discourses), Majjhima Nikaya (Collection of Medium-Length Discourses), Samyutta Nikaya (Collection of Grouped Discourses), Anguttara Nikaya (Collection of Discourses on Numbered Topics), and Khuddaka Nikaya (Collection of Miscellaneous Texts). In the fifth group, the Jatakas, comprising stories of former lives of the Buddha, and the Dhammapada (Religious Sentences), a summary of the Buddha’s teachings on mental discipline and morality, are especially popular.

The Vinaya Pitaka consists of more than 225 rules governing the conduct of Buddhist monks and nuns. Each is accompanied by a story explaining the original reason for the rule. The rules are arranged according to the seriousness of the offence resulting from their violation.

The Abhidharma Pitaka consists of seven separate works. They include detailed classifications of psychological phenomena, metaphysical analysis, and a thesaurus of technical vocabulary. Although technically authoritative, the texts in this collection have little influence on the lay Buddhist. The complete canon, much expanded, also exists in Tibetan and Chinese versions.

Two non-canonical texts that have great authority within Theravada Buddhism are the Milindapanha (Questions of King Milinda) and the Visuddhimagga (Path of Purification). The Milindapanha dates from about the 2nd century ad. It is in the form of a dialogue dealing with a series of fundamental problems in Buddhist thought. The Visuddhimagga is the masterpiece of the most famous of Buddhist commentators, Buddhaghosa (fl. early 5th century ad). It is a large compendium summarizing Buddhist thought and meditative practice.

Theravada Buddhists have traditionally considered the Tripitaka to be the recorded words of Siddhartha Gautama. Mahayana Buddhists have not limited their scriptures to the teachings of this historical figure, however, nor has Mahayana ever bound itself to a closed canon of sacred writings. Various scriptures retrospectively attributed to the Buddha have thus been authoritative for different branches of Mahayana at various periods of history. Among the more important Mahayana scriptures are the following: the Saddharmapundarika Sutra (Lotus of the Good Law Sutra, popularly known as the Lotus Sutra), the Vimalakirti Sutra, the Avatamsaka Sutra (Garland Sutra), and the Lankavatara Sutra (The Buddha’s Descent to Sri Lanka Sutra), as well as a group of writings known as the Prajnaparamita (Perfection of Wisdom).


As Buddhism developed in its early years, conflicting interpretations of the master’s teachings appeared, resulting in the traditional 18 schools of Buddhist thought. As a group, these schools eventually came to be considered too conservative and literal-minded in their attachment to the master’s message. Among them, Theravada was charged with being too individualistic and insufficiently concerned with the needs of the laity. Such dissatisfaction led a liberal wing of the sangha to begin to break away from the rest of the monks.

While the more conservative monks continued to honour the Buddha as a perfectly enlightened human teacher, the liberal Mahasanghikas developed a new concept. They considered the Buddha an eternal, omnipresent, transcendental being. They speculated that the human Buddha was but an apparition of the transcendental Buddha that was created for the benefit of humankind. In this understanding of the Buddha nature, Mahasanghika thought is something of a prototype of Mahayana.

The origins and development of the 18 early schools are highly complex and problematic: the number 18 is itself somewhat symbolic, and the names of the schools are not the same in all sources. The two major branches into which the sangha divided were the Mahasanghikas and the Sthaviras (Sthavirada in Sanskrit, Thera or Theravada in Pali). Both of these became further subdivided into separate schools, the Sthaviras eventually comprising some ten schools and the Mahasanghikas eight. The Theravada tradition of Sri Lanka and South East Asia definitely belongs to the Sthavira/Thera branch, but it is impossible to determine the tradition’s place within that branch. After the spread of Buddhism to Sri Lanka, the Sthavira schools continued in India and later in China for many centuries. One of these schools, known as the Sarvastivada, produced its own Abhidharma works, which provided a systematic interpretation of early Buddhist doctrines. This Abhidharma became the main target of later Mahayana criticism of the early schools; Mahayana as a whole does not see its origins in any of the early schools.

A Mahayana

The origins of Mahayana are particularly obscure. Even the names of its founders are unknown, and scholars disagree about whether it originated in southern or in north-western India. Its formative years were between the 2nd century BC and the 1st century AD.

Speculation about the eternal Buddha continued well after the beginning of the Christian era and culminated in the Mahayana doctrine of his threefold nature, or triple “body” (trikaya). These aspects are the body of the essence, the body of communal bliss, and the body of transformation. The body of essence represents the ultimate nature of the Buddha. Beyond form, it is the unchanging absolute and is variously spoken of as pure consciousness or the absolute voidness, the essential nature of all things, and so on. This essential Buddha nature manifests itself, taking on heavenly form as the body of communal bliss. In this form the Buddha sits in godlike splendour, preaching in the heavens. Lastly, the Buddha nature appears on Earth in human form to convert humankind. Such an appearance is known as a body of transformation. The Buddha has taken on such an appearance countless times. Mahayana considers the historical Buddha, Siddhartha Gautama, only one example of the body of transformation.

The new Mahayana concept of the Buddha made possible concepts, such as the Buddha’s intervention in the world and ongoing revelation, that are lacking in Theravada. Belief in the Buddha’s heavenly manifestations led to the development of a significant devotional strand in Mahayana. Some scholars have therefore described the early development of Mahayana in terms of the “Hinduization” of Buddhism.

Another important new concept in Mahayana is that of the bodhisattva or enlightenment being, as the ideal toward which the good Buddhist should aspire. A bodhisattva is an individual who has set out to achieve perfect enlightenment but delays entry into final nirvana in order to make possible the salvation of all other sentient beings. The bodhisattva transfers merit built up over many lifetimes to less fortunate creatures. The key attributes of this social saint are compassion and loving-kindness. For this reason, Mahayana considers the bodhisattva superior to the arhats who represent the ideal of Theravada. Certain bodhisattvas, such as Maitreya, who represents the Buddha’s loving-kindness, and Avalokitesvara or Kuan-yin, who represents his compassion, have become the focus of popular devotional worship in Mahayana.

B Tantrism

By the 7th century AD, a new form of Buddhism known as Tantrism had developed through the blend of Mahayana with popular folk belief and magic in northern India. Similar to Hindu Tantrism, which arose about the same time, Buddhist Tantrism differs from Mahayana in its strong emphasis on ritual, magic, and particular types of meditation. Also known as Vajrayana, the Diamond Vehicle, Tantrism is an esoteric tradition. Its initiation ceremonies involve entry into a mandala, a mystic circle or symbolic map of the spiritual universe. Also important in Tantrism is the use of mudras, or ritual gestures, and mantras, or sacred syllables, which are repeatedly chanted and used as a focus for meditation. Vajrayana became the dominant form of Buddhism in Tibet and was also transmitted through China to Japan, where it continues to be practised by the Shingon sect.


Buddhism spread rapidly throughout the land of its birth. Missionaries dispatched by King Ashoka introduced the religion to southern India and to the north-western part of the subcontinent. According to inscriptions from the Ashokan period, missionaries were sent to countries along the Mediterranean, although without success.

A Asian Expansion

King Ashoka’s son Mahinda and daughter Sanghamitta are credited with the conversion of Sri Lanka. From the beginning of its history there, Theravada was the state religion of Sri Lanka.

According to tradition, one Buddhist mission reached Burma during the reign of Ashoka, but no firm evidence of its presence there appears until much later. The indigenous inhabitants of the area of present-day Burma and Thailand, the Mons, professed Theravada Buddhism. The earliest states of the Burmese, the Pyu in central Burma and the state of Arakan, date from the 3rd century AD; under Indian influence, they followed Hindu cults, and Mahayana and Tantric forms of Buddhism. The true Burmese, related to the Pyu, established their capital Pagan in 849. They also followed Tantric Buddhism. The supremacy of Theravada Buddhism, which eventually superseded other forms in Burma, began with the reign of the Burmese king Anuruddha in the 11th century. Buddhism was adopted by the Thai people when they entered the region from south-western China from the 12th century onwards. From the 13th century, the Thai kingdom of Sukhothai made Theravada Buddhism the official religion of the country. Theravada was adopted by the royal house in Laos during the 14th century.

Both Mahayana and Hinduism had begun to influence Cambodia by the end of the 2nd century AD, and both flourished there for several centuries. Extensive archaeological remains at the ancient city of Angkor attest to an impressive religious culture created by the Khmer kings under the influence of Hinduism and Mahayana Buddhism. After the 14th century, however, under Thai influence, Theravada gradually replaced the older establishment as the primary religion in Cambodia.

About the beginning of the Christian era, Buddhism was carried to Central Asia. From there it entered China along the trade routes by the early 1st century AD. This first period of Chinese Buddhism, lasting until about the 6th century, is generally seen as formative, as Buddhist doctrines and culture were imported and adapted. At first, the religion penetrated and took root in China’s intellectual and cultural elite, and to a lesser extent amongst the populace. The Chinese-speaking foreigners who first propagated Buddhism were gradually supplanted by native converts. Kumarajiva, who arrived at the capital Ch’ang-an in 401, introduced the Madhyamika school and supervised the state-sponsored translation of Buddhist texts into Chinese. Such endeavours rendered large numbers of Hinayana, Mahayana, and esoteric Buddhist scriptures into Chinese. Both Hinayana and Mahayana became established on Chinese soil, and the monastic ordinations transmitted through the Hinayana Dharmaguptaka school became the prevailing tradition in China and Korea up to the present day. However, Mahayana Buddhism eventually became the predominant doctrine. Effectively patronized by the non-Chinese dynasties who ruled the north prior to the reunification of China under the Sui dynasty (589-618), Buddhism reached its zenith under the Sui and the Tang (618-906). The many large, wealthy, and sometimes worldly monasteries were sometimes the objects of persecution, often motivated by hostile Confucian and Taoist circles, but such persecutions focused on monastic institutions rather than lay believers. Although persecuted, Buddhism was never prohibited in China.

Several of the Buddhist schools that flourished in China from the 6th to the 9th centuries were direct or indirect importations of Indian schools. Four other major schools which arose in this period were basically Chinese creations, though making certain claims of Indian origin. Three were based on specific scriptures. The T’ien-t’ai school produced a fivefold gradation of Buddhist teachings, placing the doctrines of the Saddharmapundarika Sutra (or Lotus Sutra) at the apex. The Huayan school accepted the Avatamsaka Sutra (Garland Sutra) as its scriptural authority. The third school, the Pure Land school of belief, based itself on three texts related to the Buddha Amitabha, developing a devotional form of Buddhism which stressed faith and belief in him. The most original and Chinese in character was the radical Ch’an school (Zen in Japanese), which eschewed scripture and doctrine in favour of spontaneous insight, the instantaneous realization of one’s own Buddha-nature. After the great persecution of 845 Buddhism declined in China, albeit enjoying a brief revival during the Mongol Yuan dynasty (1276-1368). It never conquered the country, but made a substantial contribution to China’s culture and religious thought, and became a permanent feature of the Chinese way of life.

At the time of the introduction of Buddhism, Korea consisted of three states: Koguryo, Paekche, and Silla. Koguryo received waves of Buddhist influence from northern and southern China and proclaimed Buddhism its state religion in AD 392. Paekche embraced Buddhism in 384 and Silla in 528, following official missions dispatched from the Chinese court. Korean Buddhism experienced its greatest flourishing in the unified state of the Koryo Period (918-1392). Under the Yi dynasty (1392-1910), it became subordinate to the Confucianism which became the official ideology of the Korean state and ruling classes.

Vietnam, long ruled by China, followed mainly Chinese Buddhism, whilst the south of the country was more influenced by India. Buddhism remained well established after Vietnam broke free of China in the 10th century, and after a decline in the 15th century experienced a revival in the 18th century which induced the rise of indigenous Vietnamese sects of Buddhism.

Buddhism was carried into Japan from Korea. It was known to the Japanese earlier, but the official date for its introduction is given as either AD 538 or 552, depending on the source. It was proclaimed the state religion of Japan in 593 by Prince Shotoku, who is seen as the father of Japanese Buddhism, both in terms of his activities and his moral legacy. Several schools of Buddhism were introduced during the Nara (710-784) and Heian (794-1185) periods. The monk Saicho is credited with the foundation of the Tendai school, an importation of Chinese T’ien-t’ai doctrine which also served as a channel for the introduction of Pure Land, Zen, and Tantric beliefs. Kukai also brought from China the variety of esoteric Buddhism which became the Shingon cult. Although Buddhism gained ground among ordinary people during the Nara and Heian periods, it existed primarily as a state-sponsored religion. The three schools which grew to prominence during the Kamakura period (1185-1333)—Pure Land, Zen, and Nichiren Buddhism—succeeded in spreading Buddhism across the whole spectrum of Japanese society. Though none of these was doctrinally innovative, all assumed a distinctly Japanese character. Interestingly, the tradition of monastic ordination introduced into Japan was gradually eliminated, so that in general the clergy of all Japanese Buddhist schools are permitted to marry.

Tibet was converted to Buddhism through two consecutive propagations. In the first, Buddhism was formally recognized as a state religion in the 7th century AD. Temples and monasteries were built, Tibetans were ordained as monks, and a fair number of Buddhist texts were translated into Tibetan. Two Indian masters are particularly venerated for their impact on the spread of Buddhism in Tibet: Shantarakshita and Padmasambhava. While Shantarakshita introduced Mahayana Buddhism and ordination rites, Padmasambhava, a gifted Tantric master, appropriated local deities to serve as protectors of the new creed. This propagation ended in persecution by followers of indigenous Tibetan beliefs. The second propagation, which began in the 10th century, permanently implanted Buddhism in Tibet. Extensive traffic between India and Tibet introduced various traditions, which eventually consolidated into four major religious orders: Sakyapa, Kagyupa, Nyingmapa, and Gelugpa. Tibetan Buddhism as a whole is a complex but coherent body of Mahayana doctrines and esoteric practices. Though many lamas and masters are married, the overwhelming majority of religious practitioners are ordained monks. The tradition of reincarnated lamas is a unique feature of Tibetan Buddhism. Such people are believed to be reincarnations of famous masters or manifestations of certain Buddhas or bodhisattvas.


Differences occur in the religious obligations and observances both within and between the sangha and the laity.

A Monastic Life

From the first, the most devoted followers of the Buddha were organized into the monastic sangha. Its members were identified by their shaved heads and robes made of unsewn orange cloth. The early Buddhist monks, or bhikkhus, wandered from place to place, settling down in communities only during the rainy season when travel was difficult. Each of the settled communities that developed later was independent and democratically organized. Monastic life was governed by the rules of the Vinaya, one of the three canonical collections of scripture. Fortnightly, a formal assembly of monks, the uposatha, was held in each community. Central to this observance was the formal recitation of the Vinaya rules and the public confession of all violations. The sangha included an order for nuns as well as for monks, a unique feature among Indian monastic orders. Theravada monks and nuns were celibate and obtained their food in the form of alms on a daily round of the homes of lay devotees. The Zen school came to disregard the rule that members of the sangha should live on alms. Part of the discipline of this sect required its members to work in the fields to earn their own food. In Japan the popular Shin school, a branch of Pure Land, allows its priests to marry and raise families. Among the traditional functions of the Buddhist monks is the performance of funerals and memorial services in honour of the dead. Major elements of such services include the chanting of scripture and transfer of merit for the benefit of the deceased.

B Lay Worship

Lay worship in Buddhism is primarily individual rather than congregational. Since earliest times a common expression of faith for laity and members of the sangha alike has been taking the Three Refuges, that is, reciting the formula “I take refuge in the Buddha. I take refuge in the dharma. I take refuge in the sangha”. Although technically the Buddha is not worshipped in Theravada, veneration is shown through the stupa cult. A stupa is a dome-like sacred structure containing a relic. Devotees walk around the dome in a clockwise direction, carrying flowers and incense as a sign of reverence. The relic of the Buddha’s tooth in Kandy, Sri Lanka, is the focus of an especially popular festival on the Buddha’s birthday. The Buddha’s birthday is celebrated in every Buddhist country. In Theravada, this celebration is known as Vesakha, after the month in which the Buddha was born. Popular in Theravada lands is a ceremony known as pirit, or protection, in which readings from a collection of protective sutras from the Pali canon are conducted to exorcise evil spirits, cure illness, bless new buildings, and achieve other benefits.

In Mahayana countries, ritual is more important than in Theravada. Images of the buddhas and bodhisattvas on temple altars and in the homes of devotees serve as a focus for worship. Prayer and chanting are common acts of devotion, as are offerings of fruit, flowers, and incense. One of the most popular festivals in China and Japan is the Ullambana Festival, in which offerings are made to the spirits of the dead and to hungry ghosts. It is held that during this celebration the gates to the other world are open so that departed spirits can return to Earth for a brief time.

C Buddhism Today

One of the lasting strengths of Buddhism has been its ability to adapt to changing conditions and to a variety of cultures. It is philosophically opposed to materialism, especially of the Marxist-Communist variety. Buddhism does not recognize a conflict between itself and modern science. On the contrary, it holds that the Buddha applied the experimental approach to questions of ultimate truth.

In Thailand and Burma, Buddhism remains strong. Reacting to charges of being socially unconcerned, its monks have become involved in various social welfare projects. Although Buddhism in India largely died out after the 12th century, resurgence on a small scale was sparked by the conversion of 3.5 million former members of the untouchable caste, under the leadership of Bhimrao Ramji Ambedkar, beginning in 1956. A similar renewal of Buddhism in Sri Lanka dates from the 19th century.

Under the Communist republics in Asia, Buddhism has faced a more difficult time. In China, for example, it continues to exist, although under strict government regulation and supervision. Many monasteries and temples have been converted to schools, dispensaries, and other public use. Monks and nuns have been required to undertake employment in addition to their religious functions. Falun Gong, a mystical sect associated with Buddhism, gained a large following within China and worldwide during the 1990s. The sect was banned by the Chinese government in 1999, and a number of followers have been imprisoned. In Tibet, the Chinese, after their takeover and the escape of the Dalai Lama and other Buddhist officials into India in 1959, attempted to undercut Buddhist influence.

Only in Japan since World War II have truly new Buddhist movements arisen. Notable among these is Soka Gakkai, the Value Creation Society, a lay movement associated with Nichiren Buddhism. It is noted for its effective organization, aggressive conversion techniques, and use of mass media, as well as for its nationalism. It promises material benefit and worldly happiness to its believers. Since 1956 it has been involved in Japanese politics, running candidates for office under the banner of its Komeito, or Clean Government Party.

Growing interest in Asian culture and spiritual values in the West has led to the development of a number of societies devoted to the study and practice of Buddhism. Zen has grown in the West to encompass meditation centres and a number of actual monasteries. Interest in Vajrayana has also increased.

As its influence in the West slowly grows, Buddhism is once again beginning to undergo a process of acculturation to its new environment. Although its influence in the West is still small, it seems that new, distinctively Western forms of Buddhism may eventually develop.



Khmer Rouge

Khmer Rouge, Cambodian revolutionary movement, notorious for its policies of genocide. In 1963 Pol Pot, then a Communist teacher named Saloth Sar, founded the movement to oppose Cambodia’s Prince Norodom Sihanouk. The prince at first attacked the Khmer Rouge, then allied with them after the coup d’état led by Lon Nol in 1970. American bombers decimated both the Khmer Rouge and the populace after the former refused to observe the 1973 ceasefire that ended United States involvement in the Vietnam War. The Khmer Rouge finally toppled Lon Nol in 1975, then forcibly evacuated all Cambodian cities within a week, dragooning citizens for peasant labour. Money and property were abolished; travel and education ceased. Pol Pot became prime minister in 1976 and, following dogmatic Maoism, collectivized Cambodian agriculture in a disastrous bid for increased rice yields. Between one and four million people died in what became known as the “killing fields”, at least 15 per cent of the population, the death toll increased by Khmer Rouge paranoia.

Escalating clashes led to a Vietnamese invasion in 1978-1979: the Khmer Rouge retreated to the border with Thailand. By 1989 Vietnam had withdrawn, but the Khmer Rouge went on fighting other Cambodian factions, specializing in mine warfare against civilians. UN peacekeeping efforts to co-opt them into the 1993 elections failed, not least because popular hatred for their leaders made it impossible to guarantee their safety. The Khmer Rouge continued fighting the elected government, retaining about 10 per cent of Cambodian territory. In December 1998 the last active Khmer Rouge unit surrendered to the Cambodian government, ending Khmer Rouge insurgency and effectively terminating the movement.

In 1999 negotiations were opened between UN and Cambodian officials to discuss the setting up of a tribunal to prosecute former leaders of the Khmer Rouge accused of genocide. A bill in favour of the proposal was passed by the Cambodian Senate in January 2001 and further discussions were held to formulate the draft legislation of the tribunal. An agreement was reached in which Cambodian and foreign prosecutors and judges were given joint responsibility for indicting defendants and reaching final verdicts; this was approved by the king in August 2001. Discussions concerning legal technicalities and requests for further revisions to the draft legislation continued between the UN and the Cambodian government, but were halted in February 2002 following disagreements over which side would control the proceedings and concerns that those prosecuted would not receive a fair trial. However, in January 2003, negotiations resumed and by March of the same year, an agreement had been reached. The outline agreement for the arrangements for the tribunal stated that the prosecution of the leaders of the Khmer Rouge would be handled jointly by Cambodia and the UN. The trials will be held in Phnom Penh and will be presided over by both Cambodian and foreign judges.



Culture history


Culture, a word in common use but with complex meanings, derived, like the term broadcasting, from the treatment and care of the soil and of what grows on it. It is directly related to cultivation, and the adjectives cultural and cultured are part of the same verbal complex. A person of culture has identifiable attributes, among them a knowledge of and interest in the arts, literature, and music. Yet the word culture does not refer solely to such knowledge and interest nor, indeed, to education. At least from the 19th century onwards, under the influence of anthropologists and sociologists, the word culture has come to be used generally both in the singular and the plural (cultures) to refer to a whole way of life of people, including their customs, laws, conventions, and values.

Distinctions have consequently been drawn between primitive and advanced culture and cultures, between elite and popular culture, between popular and mass culture, and most recently between national and global cultures. Distinctions have been drawn too between culture and civilization, the latter a word derived not, like culture or agriculture, from the soil, but from the city. The two words are sometimes treated as synonymous. Yet this is misleading. While civilization and barbarism are pitted against each other in what seems to be a perpetual behavioural pattern, the use of the word culture has been strongly influenced by conceptions of evolution in the 19th century and of development in the 20th century. Cultures evolve or develop. They are not static. They have twists and turns. Styles change. So do fashions. There are cultural processes. What, for example, the word culture means has changed substantially since the study of classical (that is, Greek and Roman) literature, philosophy, and history ceased in the 20th century to be central to school and university education. No single alternative focus emerged, although with computers has come electronic culture, affecting kinds of study, and most recently digital culture. As cultures express themselves in new forms not everything gets better or more civilized.

The word culture is now associated with many other words with historical or contemporary relevance, like corporate culture, computer culture, or alien culture, as is the word cultural. There are cultural institutions of various ages, some old, like the Royal Academy, some new, like the UK Department for Culture, Media, and Sport. They each follow cultural strategies or cultural policies and together they constitute what is sometimes called a “cultural sector”. How commercialized that sector is varies from culture to culture. The American writer Leo Bogart, the author of eight books on communications and former vice-president and general manager of the Newspaper Advertising Bureau, wrote an important paper in 1991 on the spread of the Internet with the title “The American Media System and its Commercial Culture”.

The more recently widespread use of the word culture, in sport for example, has rendered largely obsolete two older usages of culture: the idea of it as a veneer on life, not life itself, a polish, the sugar icing, as it were, on the top of a cake and, at the opposite pole, the sense of it as the pursuit of perfection, “the best that is known and thought in the world”. The second meaning necessarily involves an ideal as well as an idea, and critical judgement and discrimination to realize it. Both meanings have been influential, however, and the second, propounded in the 19th century, remained influential in literary criticism and in education, particularly in the teaching of English literature, in the 20th century.

The multiplicity of meanings attached to the word has made, and still makes, it difficult to define. There is no single, unproblematic definition, although many attempts have been made to establish one. The only non-problematic usages go back to agriculture (for example, cereal culture or strawberry culture) and medicine (for example, bacterial culture or penicillin culture). Since in anthropology and sociology we also acknowledge culture clashes, culture shock, and counter-culture, the range of reference is extremely wide.


In 1952 two distinguished American anthropologists, A.L. Kroeber and Clyde Kluckhohn, listed no fewer than 164 definitions of culture made by anthropologists from the 1840s onwards. The most quoted early anthropologist was (and is) Edward Tylor, who drew no distinction between culture and civilization, defining both when in his Primitive Culture (1871) he wrote that “culture or civilization, taken in its wide ethnographic sense, is that complex whole which includes knowledge, belief, art, morals, law, custom and any other capabilities and habits acquired by man as a member of society”. Many later anthropologists offered a less universalistic and more pluralistic and relativistic conception of culture, confining the term to a particular group of people.

It was to Tylor that the poet and critic T.S. Eliot turned in his aptly named Notes Towards the Definition of Culture, first published in 1948. Eliot, like Kroeber and Kluckhohn, rightly pointed out that Tylor’s approach had been anticipated by the German anthropologist Gustav Klemm, who defined culture comprehensively almost 30 years before Tylor as “customs, arts, and skills, domestic and public life in peace or war, religion, science and art”.

Tylor pointed to the relationship between culture and society, Klemm to the relationship of culture to religion. Eliot was preoccupied with both of these relationships. For him, it was the function of the superior members and superior families in a hierarchical society to preserve the “group culture”, as it was the function of the producers to alter it. Yet the culture of a whole people was “an incarnation of its religion”. Tylor, by contrast, had a marked distaste for religious authority.

Tylor, like Klemm before him and Eliot after him, was also aware, however, of the importance of “material culture”, raw materials and artefacts, utensils and tools, both in the making of cultures and in their role as witnesses to past cultures. Anthropology and archaeology thus went together, with British anthropologists considering their field of study as social anthropology and American and continental European anthropologists preferring the description cultural anthropology. Historians learned both from social and cultural anthropologists and from sociologists. Eliot, who died in 1965, had by comparison little influence on them as the study of everyday things became an increasingly significant element in the study of history, culminating in the identification of a consumer culture, which had its origins, some historians maintained, in the 18th century. More broadly, historians, particularly in France, stressed that the concept of culture cannot be separated from its history. A very different and far stronger influence on historians was exercised by Marxist writers, although by 1965 there were more diversities of approach and methodology within Marxism than there were among anthropologists.

The original formulation of a Marxist concept of culture was deceptively simple. Marx himself distinguished between an economic base and a cultural superstructure, although he did not use the latter adjective. He was interested in the superstructure, but he did not analyse it as 20th-century Marxists were to do, the first of them the so-called Frankfurt School of sociologists, led by Theodor Adorno and Max Horkheimer. It was they who developed a critical theory of the media as culture makers before being driven out of Germany in 1934 and moving to the United States. Their return to Frankfurt after World War II revived their influence which, for a time, drew in Jürgen Habermas, whose writings on the public sphere became more influential among sociologists than theirs, and Herbert Marcuse, a joint father of the School, who had become an American citizen. A philosopher who linked Marx and Freud and discussed class and sex, Marcuse played a key role in rebellious student movements in the United States during the 1960s. His attack on the repressive power, as he conceived of it, of liberalism seemed a threat to American values.

In Italy Antonio Gramsci, general secretary of the Italian Communist Party, who was jailed by Benito Mussolini in 1926, used his time there in severely constrained circumstances to write nine volumes of Prison Notebooks, which were to be widely studied throughout European universities during the 1960s. Distinguishing between forms of culture, he rejected the base/superstructure model and concluded that intellectuals created the “hegemony”, or cultural domination, by which the ruling class secured mass support for its aims. Culture demanded the discipline of knowing one’s inner self, but it was through cultural institutions, particularly the Church, through the media, and through language itself that the cultural climate was determined, this, in turn, shaping political options and prospects of life. He was a pioneer of what came to be called “cultural studies”.

So too, in England in particular, was Raymond Williams, whose writings on culture and society (culture for him was what he called a “keyword”) culminated in 1977 in his adoption of a Marxist approach. He had not followed such an approach, and he explained why, in his first highly influential books, among them Culture and Society: 1780-1950 (1958) and The Long Revolution (1961), which more than any other books published in Britain drew attention to the concept of culture and to a specifically English tradition, centred on it, which developed after and in response to the Industrial Revolution. The key book in that tradition was Culture and Anarchy (1869) by Matthew Arnold, in which he identified culture with “sweetness and light”. In the 20th century the tradition was expressed in a conservative fashion, as Williams saw it, by Eliot and the prominent Cambridge literary critic F.R. Leavis.


Williams was one of the main influences on the lively development of cultural studies in Britain during the 1960s, although the Birmingham Centre for Contemporary Cultural Studies was founded by Richard Hoggart, whose Uses of Literacy had appeared in 1957, the year before Culture and Society. It was widely read inside and outside universities and was published in paperback in the centenary year of Arnold’s Culture and Anarchy. Like Williams (and the Frankfurt School), Hoggart, never a Marxist, was deeply interested in communications, the subject of a paperback by Williams, Television: Technology and Cultural Form (1974). In 1970 Hoggart left Birmingham for Paris to serve as UNESCO’s assistant director-general (for social sciences, human sciences, and culture).

Another major influence on the Birmingham Centre was Edward Thompson, author of The Making of the English Working Class (1963), who traced its origins to a different tradition from that analysed by Williams: a radical culture emerging in the 18th century, but with deeper roots, which went underground under repression after the French Revolution. Thompson criticized The Long Revolution on the grounds that no way of life is without its dimension of struggle. Such criticism, and a reading of continental European Marxist writers on literature and culture, notably Lucien Goldmann and György Lukács, impelled Williams to take up Marxist theories.

Meanwhile, the French anthropologist Claude Lévi-Strauss, influenced not by Marx but by Émile Durkheim, had set out to redefine culture, his own keyword, in structural terms, claiming that “any culture may be looked upon as an ensemble of symbolic systems in the front rank of which are to be found language, marriage laws, economic relations, art, science, and religion”. His range of reference extended to material culture and, above all, to food. The complexity of cross-influences and counter-influences is brought out in the history of various “structuralisms”, some specifically Marxist, which shaped much of the language of European sociology in the 1960s and 1970s.


The Birmingham Centre, subject to such multiple influences, derived its programme above all from that of Stuart Hall, born in the Caribbean, who worked with and then succeeded Hoggart, and who subsequently became a professor at the Open University. One of his main fields of study was subcultures: the beliefs, attitudes, customs, and other forms of behaviour of particular groups in society, particularly the young. These differed from those of the dominant society, while at the same time being integrally related to it. The concept of subculture referred also to minority groups such as ethnic minorities and drug users, but it incorporated the ways of life of gay communities and religious groups, the last of these prominent in the 21st century. It was sometimes argued that the subcultures created or expressed by such groups in such forms as dress served to provide recompense for the fact that their members were viewed as outsiders by mainstream society. Hence a drug user with a low social status within conventional society would command respect from other drug users because of his or her group’s own hierarchy and values. Yet the power of Islamic subcultures could not be explained entirely in such terms. Members of some subcultures were bound most closely together when they were at odds with the values and behaviour of the dominant society. A shared language and a common religion with its own traditions and laws were a bond that transcended national frontiers. Subcultures might also emerge within a minority group, such as punk within youth subculture, separatist feminism within a feminist subculture, Rastafarians within a Caribbean subculture, and an Al-Qaeda group within Islam. Boundaries shifted and loyalties could change. Subcultures, like cultures, developed, and with globalization it was recognized that some subcultures, and indeed cultures, might disappear like lost species.

Theories of subcultures emerged during the 1960s and 1970s, when research was carried out on their formation, development, and relationship to society as a whole. Geographical subcultures tend to be described as regional cultures, and there may be subcultures, particularly class subcultures, within them.


The use of the word globalization is relatively new, more recent than that of the word modernization, but there was recognition even before the rise of the nation state that there were cultures or civilizations that coexisted, in some cases with links between them. The universal history of the 18th century explicitly acknowledged them. So, too, did various stage theories of development, most of them taking it for granted that there were primitive cultures best thought of as obsolete survivals. Progress came to be considered a law. For the 19th-century French sociologist Auguste Comte, who gave social science the name of sociology, man’s development had consisted of three stages, theological, metaphysical, and scientific, with the scientific (or positive, from which his philosophy took the name positivism) dominating as the subject developed. In time, however, the idea of stages went out of fashion, and all cultures came to be treated as unique in time and place. “Colonial cultures”, nonetheless, shared common characteristics that implied cultural as well as economic dependence, and even after the end of imperialism such dependence did not necessarily end.

Before World War II and the withdrawal from formal empire, two 20th-century historians, the German Oswald Spengler and the Englishman A.J. Toynbee, while following different methods and reaching quite different conclusions, produced chronological and comparative accounts of human history in which the units involved were not nation-states or empires but civilizations or cultures, each with a spiritual unity of its own. By comparing Greece and Rome, classical civilization, with the 20th-century West, Spengler, in his two-volume Der Untergang des Abendlandes (1918-1922), published at the end of World War I, claimed to have traced a life-cycle (birth, youth, maturity, senescence, death) through which all “advanced” cultures or civilizations pass. Translated into English as The Decline of the West (1926-1928), Spengler’s book had less impact in English-speaking countries than it did in defeated Germany. It nevertheless provoked English rejoinders, though not immediately, notably The Recovery of the West (1941) by Michael Roberts, a great admirer of Eliot, who himself referred to other cultures, among them the Indian, more than Roberts did. The differences between Indian and Chinese civilizations are part of the pattern of global history as it is now interpreted, with more questions posed than answered. The multi-volume Science and Civilization in China (1954- ) by the English biochemist Joseph Needham provides the broadest sweep in English of Chinese culture leading up to what he called “the gunpowder epic”, the transfer of technology to the West, but it has itself been subjected to challenge. Meanwhile, Wang Gungwu has noted carefully how the words civilization and culture, although not the conception of change, were new to the Chinese, and to the Japanese, in the late 19th century; they were translated as wenming and wenhua.

The Cultural Revolution in China, which began nearly two decades after the creation of a Communist People’s Republic in 1949 and four years after a brief border war with India in 1962 (see Sino-Indian War), was conceived of as a proletarian purge of anti-revolutionary elements, and in waves of terror its leaders savagely attacked both traditional Chinese culture and all forms of Western culture. The precepts of Mao Zedong stirred several leftist groups in the West, however, and he himself survived the end of the Cultural Revolution in 1969. Marxism too survived, as it later survived the collapse of communism in the Soviet Union.

Toynbee was the other Western historian to write in terms of “civilizations” and “cultures” (he never clearly distinguished between the two) in the 12 volumes of his magnum opus A Study of History (1934-1961), in which he identified 21 developed civilizations throughout history and 5 “arrested civilizations”. His own experiences were almost as varied as those of most of his civilizations. He had been a delegate to the Peace Conference in Paris in 1919 following World War I, and after having become a professor of Byzantine and modern Greek studies, a journalist, and director of studies at the Royal Institute of International Affairs, he became well known throughout the world, if not universally admired, as a historian. Drawn more to Greek and Roman experience, which he knew best, than to Indian or Chinese, he was nevertheless hailed as a prophet by at least one Buddhist subculture, Cao Dai, and his works were as well known in Asia as in Europe. His theory of civilizations, based on challenge and response, could be quickly understood, however much detail he used to illustrate it. The most relevant current detail would be provided from Africa, where cultures and subcultures confront all the issues raised by globalization.

Contributed By:
Asa Briggs


Khmer Kingdoms

Khmer Kingdoms, succession of South East Asian monarchies based in Cambodia. Modern Cambodia is the residue of a powerful state which at its peak incorporated large areas of Laos, eastern Thailand, and southern Vietnam. Deriving from the Indianized state of Funan and the Kingdom of Chenla, the great Khmer empire of Angkor was founded by Jayavarman II (reigned c. 802-850), who took back the remnants of Chenla from the Indonesian Kingdom of Sri Vijaya and was consecrated as a god-king. The capital of the kingdom he created was moved first to Lake Sap, then under Yasovarman I (reigned c. 889-900) to Angkor, where great stone temples to the gods of Hinduism, and reservoirs and canals for irrigation, were built. Khmer culture flourished under royal patronage. After decades of peace, King Suryavarman I (reigned c. 1004-c. 1050) pushed into Thailand and doubled the number of cities under his control. Succession feuds led to a new royal dynasty founded by Suryavarman II (reigned 1113-1150), builder of Angkor Wat, who attacked Thailand, Vietnam, and the eastern Kingdom of Champa.

The chaos that followed the usurpation of the Khmer throne and invasion by Champa ended in 1171 with the liberation of Angkor by a prince later crowned as Jayavarman VII (reigned 1181-c. 1219), who reconsolidated the state and subjugated Champa. He favoured Mahayana Buddhism, and built the Bayon, the great Buddhist temple at Angkor with its enormous faces. After his death the Khmer kingdom began to shrink under pressure from the Thai Kingdom of Sukhothai, but retained power and splendour throughout the 13th century. In the 14th century Theravada Buddhism became the state’s dominant creed, dislocating the social hierarchy associated with the Angkor temples.

Repeatedly attacked by the new Thai Kingdom of Ayutthaya, Angkor was finally abandoned around 1431, after which the Khmer rulers withdrew south-eastward to Phnom Penh, reconstituting a rump state based on trade. The following confused and badly recorded period ended with a brief recovery under Chan I (reigned 1516-1566), who reoccupied and restored Angkor. However, the resurgent Ayutthaya Thais invaded once more and seized the new southern capital in 1594. Seeking a counterweight to Ayutthaya, Chetta II (reigned 1618-1625) married a Vietnamese princess and relinquished southern Vietnam, hitherto Khmer land. From then on the Khmer monarchs were clients or puppets of their powerful Thai or Vietnamese neighbours.




Khmer, the dominant group (about 5 million people) in Cambodia (formerly Kampuchea), comprising over 87 percent of the national population.

The Khmer moved down from the area now known as Thailand into the Mekong Delta before 200 bc. Over the following centuries their culture was subject to a series of waves of Indian influence. The first Khmer kingdom, Funan (1st to 6th centuries), was incorporated into the state of Chenla, which was succeeded by the Khmer Empire. This extensive empire, which reached its zenith between the 9th and 13th centuries, is famed for its artistic and architectural achievements (for example, the temple of Angkor Wat). Forced to retreat progressively by advancing Thais and Vietnamese, the empire became so weak that it eventually had to seek French protection, granted in 1864. After winning independence in 1953, the country was led by Norodom Sihanouk. His overthrow in 1970 was followed by a period of civil war and the rule, between 1975 and 1979, of the notorious Khmer Rouge revolutionary movement.

Until the Khmer Rouge forcibly collectivized farmland, the vast majority of Khmer lived in villages. These small groupings were effectively self-sufficient and enjoyed a high degree of autonomy. Rice is the staple crop, supplemented by subsistence fishing. Most Khmer are Buddhists. Traditionally, Khmer society was divided into six categories: the extended royal family; Brahmins, who conducted the royal rituals; monks; officials; commoners; and slaves. Before the Khmer Rouge took power, the life of Khmer communities centred around local monasteries whose leaders exerted great influence in their area.

The Khmer language, a member of the Mon-Khmer linguistic group, has been written since the 7th century and has an extensive literature.

Contributed By:
Jeremy MacClancy
