Science Fiction


Science Fiction, the fictional treatment in print, films, television, or other media of the effects of science or future events on human beings. More precisely, science fiction deals with events that did not happen or have not yet happened; it considers these events rationally in terms both of explanation and of consequences, and it is concerned with the impact of change on people, often with its consequences for the human race. The most common subjects for science fiction are the future, travel through space or time, life on other planets, and crises created by technology or alien creatures and environments.


The subjects of science fiction have been touched upon by fantastic literature since ancient times. The Babylonian Gilgamesh Epic dealt with a search for ultimate knowledge and immortality, the Greek myths of Daedalus with the technology of flying, and the True History (c. ad 160) of Lucian of Samosata with a trip to the Moon. Imaginary voyages and tales of strange people in distant lands were common in Greek and Roman literature and found new expression in the 14th-century book of travels written in French by the pseudonymous Sir John Mandeville. Trips to the Moon were described in the 17th century by figures as diverse as the British prelate and historian Francis Godwin, the French writer Cyrano de Bergerac, and the German astronomer Johannes Kepler, among others. Another subject, the structure of better societies or better worlds, which goes back at least to the 4th century bc with Plato’s Republic, was reintroduced and given a generic name when Sir Thomas More wrote Utopia (1516). Stories of imaginary voyages were usually written for satirical purposes; perhaps the finest example is Gulliver’s Travels (1726) by the English satirist Jonathan Swift. But science fiction could not have existed in its present form without the recognition of social change at the beginning of the Industrial Revolution (c. 1750). The Gothic novel of the 18th century culminated in Frankenstein (1818) by the British novelist Mary Wollstonecraft Shelley, a work permeated by a belief in the potential of science. Many authors of the 19th century, such as Edward Bellamy, Nathaniel Hawthorne, Edgar Allan Poe, and Mark Twain in the United States and Rudyard Kipling in England, worked in the science-fiction genre at one time or another.
The first great specialist of science fiction, however, was the French author Jules Verne, who dealt with geology and cave exploration in Journey to the Centre of the Earth (1864), space travel in From the Earth to the Moon (1865) and Off on a Comet (1877), and the submarine and underwater marvels in Twenty Thousand Leagues Under the Sea (1870).


Stories of lost races and unexplored corners of the world were popular in Victorian England. She and Allan Quatermain by H. Rider Haggard both appeared in 1887, and in 1912 Sir Arthur Conan Doyle published The Lost World. The first major writer of science fiction in English, however, and the man who may be considered the founder of modern science fiction is H.G. Wells. More interested in biology and evolution than in the physical sciences, and more concerned with the social consequences of an invention than with its technical accuracy, Wells, from 1894 on, wrote stories of science invested with irony and realistic conviction. His reputation grew rapidly after the publication of The Time Machine in 1895; this was followed by The Island of Dr Moreau (1896), The Invisible Man (1897), The War of the Worlds (1898), When the Sleeper Wakes (1899), and The First Men in the Moon (1901), before Wells turned to other forms of literature.

Other science-fiction novels were written by British authors during the first half of the 20th century. Noteworthy are the fancies of Matthew Phipps Shiel (The Purple Cloud, 1901), the cosmic panoramas of Olaf Stapledon (Last and First Men, 1930), and the allegories by the critic and Christian apologist C. S. Lewis (Out of the Silent Planet, 1938). The most important American writer in the field at this time was Jack London, whose contributions included The Iron Heel (1907) and The Scarlet Plague (1912). Many British authors of standard fiction wrote one or two striking novels of a socially prophetic nature. Particularly successful and influential were Brave New World (1932), by Aldous Huxley, and Nineteen Eighty-four (1949), by George Orwell. One prolific writer of works dealing with both science fiction and science fact is Arthur C. Clarke (Childhood’s End, 1953).

In the opinion of many critics, one of the ablest American writers of mainstream science fiction, combining scientific extrapolation with narrative art, is Robert Heinlein (The Green Hills of Earth, 1951; Stranger in a Strange Land, 1961). Other widely known American science-fiction authors are Isaac Asimov (The Caves of Steel, 1953), who is also a prolific author of science surveys for the layperson, and Ray Bradbury (The Martian Chronicles, 1950; Fahrenheit 451, 1953), who is considered more of a fantasy writer. Among the many other authors who have drawn critical acclaim are Philip K. Dick (The Man in the High Castle, 1962) and Ursula K. Le Guin (The Left Hand of Darkness, 1969; The Dispossessed, 1974). Frank Herbert’s works are widely popular. His Dune Chronicles include Dune (1965), Children of Dune (1976), and God Emperor of Dune (1981). Michael Moorcock, author of the Elric of Melniboné series, beginning in 1972; Greg Bear (Eon, 1985); and Larry Niven (N-Space, 1990) should also be mentioned.

In other countries, science fiction also flourished, most notably in Eastern Europe and Russia. Karel Čapek, a Czech writer, introduced the word robot in his play R.U.R. (1921). Polish writer Stanisław Lem used science-fiction settings to explore both scientific and philosophical concerns. His books include Solaris (1961; translated 1970) and Dzienniki gwiazdowe (1957; translated as two books: The Star Diaries, 1976, and Memoirs of a Space Traveller, 1982). In Russia, utopian fiction first appeared in the 1750s with the works of such authors as V. A. Levshin and M. D. Chulkov. Twentieth-century Russian science-fiction writers include Konstantin Tsiolkovsky, who wrote in the 1920s of space exploration; Yevgeny Zamyatin, known for his anti-utopian novel We (1924; translated, 1925), which greatly influenced George Orwell; Aleksandr Belyaev, who wrote in the 1920s of biological influences on humans; Ivan Efremov, author of the utopian Tumannost’ Andromedy (1956; Andromeda Nebula); and the brothers Arkady and Boris Strugatsky, prolific authors of the 1960s.


The characteristically American type of science fiction was at first published almost entirely in magazines. The authors of magazine science fiction emphasized technical accuracy and plausibility above literary value and sometimes above characterization. The mass magazines that developed in the 1890s published many stories of science, and the pulp fiction magazines of the turn of the century included many stories of romance and wild adventure, such as those written by Edgar Rice Burroughs and Garrett P. Serviss. In 1926 Hugo Gernsback, a Luxembourg emigrant who became an American editor, publisher, inventor, and author, founded the first science-fiction magazine, Amazing Stories. He believed that fiction could be a medium for disseminating scientific information and creating scientists; he published and wrote stories with this purpose in mind. An example of his writing is Ralph 124C41+, first serialized in his popular science magazine Modern Electrics in 1911. Gernsback also created a name for the new form, “scientifiction”, which he changed in 1929, with the founding of Science Wonder Stories, to “science fiction”. In 1937, when John Wood Campbell, Jr., became editor of Astounding Stories, the magazine began to feature a new type of science fiction. As an author, especially when writing under the pseudonym Don A. Stuart, Campbell had already added mood and characterization to the technical and prophetic aspect of science fiction. As an editor, Campbell helped to encourage other writers to produce science fiction of literary merit and fostered what has since been called “the golden age” of science fiction.

Later magazines included Fantasy and Science Fiction, founded in 1949 by the American authors and editors Anthony Boucher and Jesse Francis McComas, and Galaxy Science Fiction, founded in 1950 by the American author and editor Horace Leonard Gold. In these magazines, emphasis shifted more towards literary, psychological, and sociological preoccupations, with some loss, however, of scientific content.

Beginning in the mid-1960s a new concern for humanistic values and experimental techniques emerged. Calling itself the “new wave”, it entered science fiction primarily through the English magazine New Worlds and was typified by the British writers Brian Aldiss and J. G. Ballard and the American writer Harlan Ellison. The new wave preferred to call what it wrote “speculative fiction”, as in, for example, The Infinity Box (1975) by Kate Wilhelm. Much of this type of fiction was published in anthologies of original work, in particular, Ellison’s anthologies beginning with Dangerous Visions (1967).

In the 1980s a new type of science-fiction writing, called cyberpunk literature, was developed. Cyberpunk authors portrayed decentralized societies dominated by technology and science. Their stories emphasized technological detail and were characterized by intricate plots and a style that mirrored the confusing and dazzling worlds they represented. Cyberpunk literature first appeared as short stories published in magazines such as Isaac Asimov’s Science Fiction Magazine (founded 1977) and Omni (founded 1978). The first cyberpunk novel is considered to be Neuromancer (1984), by American writer William Gibson, who also wrote the cyberpunk novels Count Zero (1986), Mona Lisa Overdrive (1988), and Virtual Light (1993). Other cyberpunk writers include Bruce Sterling (Schismatrix, 1985; Islands in the Net, 1988); John Shirley (Eclipse, 1985); and Pat Cadigan (Fools, 1992).


Science fiction has interested filmmakers since the earliest days of the cinema, although not often to the benefit of the film or science fiction itself. Most such films have been adaptations of science-fiction literature and comic strips.

Unlike science-fiction literature, science-fiction cinema was, until the 1970s, increasingly preoccupied with unnatural creatures of various sorts, giving rise to a subgenre colloquially referred to as horror or monster films. Films featuring alien beings, mutant creatures, or soulless humans were more often than not stereotyped melodramas. Among common themes of such science-fiction films were the fallibility of megalomaniacal scientists, the urgency of international cooperation against invaders from outer space or monsters from Earth, the rash hostility of people to anything alien, and the evil aspects of technology.

The earliest film to tackle fantasy, if not science fiction proper, was Le Voyage dans la Lune (A Trip to the Moon), created by the French film-maker and magician Georges Méliès in 1902. The film company of the American inventor Thomas A. Edison produced A Trip to Mars in 1910. Early German film-makers produced influential films culminating in such Expressionistic films as The Cabinet of Dr Caligari (1919, Robert Wiene) and Metropolis (1926, Fritz Lang). Prominent American monster films, which have since inspired countless sequels, are Frankenstein (1931, James Whale), Dracula (1931, Tod Browning), and The Mummy (1932, Karl Freund). Notable American serials of the 1930s were based on the comic-strip characters Flash Gordon and Buck Rogers. In 1933 came King Kong (Merian C. Cooper, Ernest B. Schoedsack) and The Invisible Man (James Whale). In 1936 Great Britain produced the ambitious Things To Come (William Cameron Menzies), a visionary treatment of a utopian technocracy, the scenario for which was written by Wells, author of the novel The Shape of Things to Come (1933), from which it was adapted.

The American producer and director George Pal contributed several well-regarded films, beginning in 1950 with Destination Moon (Irving Pichel) and continuing with When Worlds Collide (1951, Rudolph Maté), The War of the Worlds (1953, Byron Haskin), and The Time Machine (1960, George Pal). All four films won awards from the Academy of Motion Picture Arts and Sciences for their special effects. Other notable films of the 1950s were The Day the Earth Stood Still (1951, Robert Wise), Forbidden Planet (1956, Fred M. Wilcox), and Invasion of the Body Snatchers (1956, Don Siegel).

The critically acclaimed science-fiction films of the 1960s and 1970s include The Day of the Triffids (1962, Steve Sekely), Alphaville (1965, Jean-Luc Godard), Fahrenheit 451 (1966, François Truffaut), Fantastic Voyage (1966, Richard Fleischer), Planet of the Apes (1968, Franklin J. Schaffner), The Andromeda Strain (1971, Robert Wise), The Man Who Fell to Earth (1976, Nicolas Roeg), and Close Encounters of the Third Kind (1977, Steven Spielberg). Stanley Kubrick made the epic 2001: A Space Odyssey (1968), which was one of the most widely discussed science-fiction films of all time; and the science-fiction adventure fantasy Star Wars (1977, George Lucas) became one of the biggest box-office hits to date. Several film episodes of Star Trek (based on the television series); Mad Max (1979, George Miller) and its sequels; Brazil (1985, Terry Gilliam); Alien (1979) and Blade Runner (1982) by Ridley Scott; The Terminator (1984) and Terminator 2: Judgement Day (1991), and Aliens (1986), by James Cameron; E.T. The Extra-Terrestrial (1982), Jurassic Park (1993), A. I. Artificial Intelligence (2001), and Minority Report (2002), all by Steven Spielberg; the Matrix series of films (1999-2003) by the Wachowski brothers; and the sequels to Star Wars have demonstrated the range and popularity of science-fiction filmmaking since the 1980s.


One of the most successful science-fiction programmes on radio in the 1930s was the serial Buck Rogers (1932-1947). In 1938 a broadcast production of Wells’s The War of the Worlds by the American actor and director Orson Welles aroused panic among some listeners, so realistic was its announcement of a Martian invasion of the Earth. Later such programmes as Dimension X (1950-1951) and X Minus One (1955-1958) dramatized short stories.

Among the earliest American television programmes were the science-fiction serials Captain Video (1949-1955) and Tom Corbett, Space Cadet (1950-1955). In later years, Superman and other comic book heroes were featured, while programmes popular with adults included The Twilight Zone (1959-1964; revived 1985-1987), The Outer Limits (1963-1965), Lost in Space (1965-1968), Land of the Giants (1968-1970), The Immortal (1970-1971), and Star Trek (1966-1969); and, in Britain, Doctor Who (1963-1989). Star Trek, one of Paramount Studios’ most successful productions, created a large fan movement and inspired several subsequent syndicated series, including the sequel Star Trek: The Next Generation (1987-1994), which in turn inspired two spin-off series, Star Trek: Deep Space Nine (1993-1999) and Star Trek: Voyager (1995-2001). Science-fiction television programmes of the 1970s and 1980s included the British series Survivors (1975-1977) and Blake’s 7 (1978-1981), and the American shows Battlestar Galactica (1978-1980) and Buck Rogers in the 25th Century (1979-1981). A popular science-fiction television series of the 1990s was The X-Files (1993-2002), about paranormal activity.


Two major events brought science fiction general recognition as a literature of relevance: the explosion of the first atomic bomb in 1945 and the successful landing on the Moon on July 20, 1969, of two American astronauts. Atomic bombs (and atomic energy) and space flight had been two of the major subjects of science fiction almost from its beginning, but they had been ridiculed by traditional critics and even many scientists as “mere science fiction”. Their realization and the recognition by many people of the way in which life is being changed by science and technology have contributed to what Asimov has called “a science-fiction world”. This awareness was intensified in July 1976 when a space vehicle landed on Mars and transmitted to Earth the first on-site photographs of another planet, and in November 1980 when the American spacecraft Voyager 1 flew by the planet Saturn and transmitted photographs of remarkable clarity across some 1 billion miles to Earth. It was further stimulated in 2004 when United States President George W. Bush announced proposals to build a permanent lunar base and send a man to Mars. Scientists and explorers have credited science fiction by Verne and others with starting them on their professions. Space exploration by Soviet scientists was influenced by the writings of the Russian author Konstantin Tsiolkovsky (Beyond Earth, 1920), and German rocket research was inspired partly by the works of the German author Kurd Lasswitz.




Technology, the purposeful human activity which involves designing and making products as diverse as clothing, foods, artefacts, machines, structures, electronic devices and computer systems, collectively often referred to as “the made world”. Technology can also mean the special kind of knowledge which technologists use when solving practical problems (for example, designing and building an irrigation system for tropical agriculture). Such work often begins with a human want (for example, better safety for an infant passenger in a car) or an aspiration (for example, to see the inside of a human artery or to land on the Moon), and technologists draw on resources of many kinds including visual imagination, technical skills, tools, and scientific and other branches of knowledge. Technological activity is as old as human history and its impact on almost all aspects of people’s lives has been profound.


A common feature of technological activity, no matter what outcome is in mind, is the ability to design. In common with technology, design is difficult to define briefly, although the general statement that it is “the exercise of imagination in the specification of form” captures much of what is involved.

The aim of design is to give some form, pattern, structure, or arrangement to an intended technological product so that it is an integrated and balanced whole which will do what is intended. Designing often begins with an idea in a person’s mind and the designer has to be able to envisage situations, transformations, and outcomes, and model these in the mind’s eye. In the 19th century James Nasmyth, when describing how he had invented his steam pile driver, said that the machine “was in my mind’s eye long before I saw it in action”; he could “build up in the mind mechanical structures and set them to work in imagination”. Much of this thinking is non-verbal and visual; it also involves creativity, including the ability to put together ideas in new ways. Sometimes this is a solitary activity and was often thus in the past, but many designers today work in teams where discussion, sketches, and other visual representations, as well as analogies and ideas plucked from apparently unconnected fields, can all help the process.

One problem which designers face is that the requirements that a product has to fulfil are not always compatible: ease of maintenance, for example, may conflict with cost and aesthetic appearance; safety considerations may not be reconciled easily with completion of the work by the deadline; and materials chosen on technical grounds for their suitability may raise concerns on environmental or moral grounds (for example, waste disposal difficulties; production by unacceptable methods such as exploited labour). Compromise and optimization are necessary when designing.

Designing is sometimes represented as a linear or a looped set of processes—starting with identification of a problem or requirement, followed by generation of ideas for solutions; a promising design option is then selected, detailed, made, and finally evaluated. In reality, the processes are almost always less orderly than this. Experience from making, for instance, can feed back and lead to modifications in the design. Also, evaluation is an on-going process throughout the stages. It is also the case that the processes of designing can differ according to the product involved. For example, designing active matrix liquid crystal displays, involving the use of basic scientific research, is different from designing corkscrews or mousetraps. Similarly, designing for manufacture on a large scale may require modifications to an artefact that was designed for use only as a one-off product.


Although technology and science have many features in common—not least in the minds of the many people who link them together as present-day bodies of practice—their goals and how they judge success tend to differ.

In its most basic form, science is driven by curiosity and speculation about the natural world without thought of any immediate application. It aims to produce theories which can be tested experimentally in the public domain and which are valued according to criteria such as simplicity, elegance, comprehensiveness, and range of explanatory power. By no means all that goes on under the name of science has this “blue-sky”, unconstrained quality; so-called strategic science, for example, is focused more on yielding knowledge that might assist the subsequent development of, as yet unidentified, winning products and processes in the market-place.

Technology, on the other hand, has the goal of creating and improving artefacts and systems to satisfy human wants or aspirations. Success is judged in terms of considerations such as efficiency of performance, reliability, durability, the cost of production, ecological impact, and end-of-life disposability. It has sometimes been said that whereas the output from science is a published paper for all to read and criticize, that from technology is a patent conferring sole ownership of the invention on the holder.

For many centuries technological advances of great significance were made without benefit of knowledge from science. The notable achievements of Asian technology by the end of the first millennium AD in fields such as iron production, printing, and hydraulic engineering, including dams, canals, and irrigation systems, are well documented. In southern Asia, at a later period, the high quality of Indian textile products, especially painted and printed cotton goods, set standards which were an incentive to technological developments in Britain.

Water wheels, canal locks, barbed wire (without which the American West could not have been opened up), food preservation, fermentation and many metallurgical processes are other instances where technology ran ahead of science. The relationship underwent change especially in the late 19th century with the growth of the chemical and electrical power industries; in these, scientific knowledge was of direct use in the solving of problems and the development of products, although it was rarely sufficient on its own. At a later date, the communications and electronics industries provided further testimony to the effectiveness of a closer relationship between science and technology, as indeed did the experience of World War II and subsequent more local military conflicts.

By the second half of the 20th century, much modern technology was intimately related to scientific knowledge, and science itself had become increasingly linked to technology through its dependence upon complex instrumentation to explore the natural world. A technological innovation such as nuclear magnetic resonance imaging, a diagnostic technique widely used in medicine, could not have been developed without scientific knowledge of the magnetic properties of atomic nuclei. The symbiotic and synergistic relationship between modern technology and modern science has led some to use the term technoscience to describe what they see as now an essentially merged, even hybrid, enterprise.

Whether merged or not today, in the past science and technology have often followed independent paths. Furthermore, in so far as any relationship was acknowledged, it was most frequently seen as hierarchical, with technology practice trailing dependently in the wake of scientific theory. This notion that technology was merely applied science enjoyed wide currency in Euro-American circles, and beyond, throughout much of the 19th and 20th centuries. Today there would be little support for it. A more widely accepted model of the relationship is that of two different but interdependent communities of practice which overlap and intermesh in their activities. However, the scientific knowledge constructed by scientists in their search for understanding of natural phenomena is not always in a form which enables it to be used directly and effectively in technological tasks. It often has to be reworked and translated into a form which relates better to the design parameters involved.


Historical accounts of technology can be constructed from many different perspectives, each of which may help in the understanding of this complex enterprise.

At the most general level, attempts have been made to discern and characterize distinctive periods in the evolution of technology. Writing in the 1930s, the Spanish philosopher José Ortega y Gasset identified three. In the first and longest period, there were no systematic techniques for the discovery and development of technological devices. The earliest toolmakers’ achievements such as stone axes, scrapers, and control of fire were no more than the products of chance. In the second period, certain technological skills had become sufficiently conscious to be passed from one generation to the next by accomplished practitioners. These craftsmen, however, had no systematic body of knowledge about their devices. Possession of this kind of knowledge, resulting from analytical modes of thought associated with modern science, characterized the third period and empowered people—in a radically different way from previously—to realize their technological goals.

Also in the 1930s, Lewis Mumford published his classic work Technics and Civilization, including an analysis of the last 1,000 years of the development of technology in terms of three successive, but overlapping and interpenetrating, phases. The first, “eotechnic” phase (roughly ad 1000 to 1750) was characterized by raw materials such as wood, glass, and water, with increased use of horse power and energy from wind and water. This was followed by a “palaeotechnic” phase (roughly 1750 to 1900), a period of “carboniferous capitalism” characterized by a coal and iron complex and the steam engine. Beyond this came a “neotechnic” phase, with science prominent and an electricity-alloy complex with new materials such as plastics coming into use. Electrical energy and diesel and petrol combustion engines replaced the steam engine.

Despite similarities, both of these analyses fail to reflect the impact of technology or the technological characteristics of the late 20th century. Achievements here include new fabrication resources, including composites and “smart” materials which can respond to changes around them and behave as if possessing a memory. Technology has extended into the realm of the living with, for example, genetically engineered strains of “improved” plants and animals. Nuclear power is an alternative, if controversial, energy source. Dramatically enhanced means of communication and information processing are widely available and there has been a substantial growth of complex socio-technical systems relating to almost every aspect of work and everyday life—such as the ones encountered at the supermarket checkout or when buying a flight ticket.

The scale of these technological innovations and the speed of their implementation are quite different from anything experienced in previous phases of the evolution of technology. At the same time, a distinguishing feature of the age has been a growing awareness of negative aspects of technology. Technological disasters of unprecedented magnitude have occurred and been widely publicized: the list is long and includes spillages from giant oil tankers; the 1984 tragedy at Bhopal, India, when an explosion at the Union Carbide chemicals plant led to the escape of methyl isocyanate and the death of over 3,000 people, the worst industrial accident to date; the 1986 space shuttle Challenger disaster, when the spacecraft exploded just after launch, killing seven astronauts; and also in 1986, the Chernobyl disaster, when a fire in the core of a Soviet nuclear reactor at Chernobyl’ in Ukraine resulted in 31 deaths and the release of radioactive debris, which fell in many regions of the world—the world’s worst nuclear industry accident.

The United Nations Conference on Environment and Development—widely known as the Earth Summit, held in Rio de Janeiro in 1992—brought into prominence issues such as climatic change, sustainable development, and the more responsible management of global resources, with particular regard to environmental pollution, waste disposal, and a reduction in the gap in technological capacity between developed and developing countries. In this spectacular new phase, as the 20th century closes, any characterization of technology would be incomplete if it failed to acknowledge its inescapable moral dimension. Perhaps no other technological developments have more vividly brought home this realization than those in the field of atomic energy since the dropping of two atomic bombs on the Japanese cities of Hiroshima and Nagasaki in 1945. As Robert Oppenheimer, scientific leader of the Manhattan Project which produced the original bombs, later remarked: “the physicists have known sin, and this is a knowledge which they cannot lose”.

Less comprehensive historical studies have shed further light on the nature and development of technology. A broad distinction can be drawn between so-called internalist and contextualist accounts. In the internalist description, the focus is predominantly on the design features of the particular devices and on related matters such as the nature of technical improvements and the stimulus provided to other inventions. Medieval fortifications, ploughs and ploughshares, keyboard mechanisms, clocks, steel cantilever bridges, chain mail, steam engines, space rockets, and the mariner’s compass have been, and are, typical subjects for internalist histories. Informative though these are, such accounts tend to provide little in the way of explanation of why artefacts have taken the form they did and why mutations in those artefacts have occurred.

In contrast, contextualist accounts place emphasis on the cultural factors which have influenced, and have been influenced by, technological developments. The economic, social, and political ambience in which the technological activity took place and in which it assumed its particular form becomes the focus of the historical investigation.

Other external factors, for example geographical, legal, and environmental constraints, may also affect the shaping of technology and, in turn, contribute to a view of technology as itself an influence on the cultural context. For example, the study of the consequences for workers in the machine-tool industry of a technological development such as automation has served to locate technology in a political context and to highlight questions about the identity and motives of the social and managerial groups who took decisions about the particular form which the technology should assume.

A premise here is that there is nothing inevitable about any technological development. It could always have been different; other options were available. The technology we encounter is the result of decisions which reflect the value judgements of those who were in a position to shape the technology. It would seem that form not only follows function but power as well.


The extent to which technology is under human control is an important question, the answer to which has profound implications for how people perceive technology. On the one hand, there are the social constructionists who believe that technology is a tool shaped by the bidding of its creators or, at least, that it is social groups who define and give meaning to artefacts. A motor car is, after all, not just a means of transport: it can be a status symbol, a reflection of self-image, a source of Treasury revenue, a criminal’s machine for ram-raiding, a competitor to rail travel, the basis for a manufacturing or service job, and much more besides. On the other hand, there is the view that, once launched, technology assumes a life of its own as an autonomous agent of change, driving history. Far from being society’s servant, technology is society’s master, increasingly shaping our destinies in ways which seem inevitable and irreversible. According to believers in this technological determinism, we are progressively being manoeuvred into ways of acting, both in the home and in employment, which are not of our deliberate choosing, but which are dictated by the technologies we have created. Instead of our values shaping technology, technology is shaping our values. The motor car was not invented to support out-of-town shopping and the depopulation of city centres; air pollution by exhaust emissions was not a planned outcome; the sacrificing of tracts of countryside and areas of natural beauty for additional roads to reduce traffic congestion was never intended by the pioneer manufacturers; nor was the association of fast cars with crime.

Between the poles of social constructionism and technological determinism, there are intermediate positions for which historical evidence lends some support. Large, complex technological systems seem capable of developing a momentum of their own and technologies can display latent inclinations that predispose people to develop certain lifestyles rather than others. Fortunately, neither momentum nor inclination is irresistible. An example is the development of anti-pollution technology, with legislation to support it, in the case of the motor car. What does appear to be the case, however, is that technology is not only a moral activity but a political one as well. The exercise of technological choice requires democratic political institutions where the effects and possible impacts of technological change can be openly addressed and their compatibility with personal and society’s goals assessed.


A remarkable feature of the history of technological development, at least as it has been written until comparatively recently, is the invisibility of women. The view that there have been few women technologists is deeply rooted and has been periodically reinforced by heroic accounts of, for example, the great road-makers, fen-drainers, canal and bridge builders, and lighthouse constructors in books such as Samuel Smiles’ Men of Invention and Industry (1884).

Yet women have been growers, gatherers, and processors and storers of food from, and even before, the beginning of recorded history. In some countries, their responsibilities in this respect remain unchanged today. Similarly, in many societies, the activities of infant and child care, nursing, water and wood (if not child) carrying, and spinning and weaving traditionally have been, and often still are, seen as women’s work. Historically, women worldwide have had a key role as healers and as sources of knowledge and practice relating to contraception, the premature termination of pregnancies, and the easing of labour, childbirth, and menstrual experiences. In short, women have always lived in close association with certain technologies. It has been claimed that they were responsible for some early technological innovations such as the digging stick (possibly the first lever) and the rotary quern (a hand-operated grain mill), possibly the world’s first crank. The designers of other artefacts such as cradles, the baby bottle, buttons and button holes, and slings that permit agricultural work while carrying an infant remain anonymous, but the probability is strong that they originated with women.

More recently, studies of the technological capabilities of girls and women in countries including the Sudan, Sri Lanka, Zimbabwe, Nigeria, and the Democratic Republic of the Congo have demonstrated the extent of their ingenuity in matters of food preparation, often in the face of great adversity. To give just one example, when the Tonga tribe was transplanted from north-western Zimbabwe to Matabeleland North Province (to allow the flooding of the valley of the Zambezi River for the Kariba hydroelectric scheme), it moved from rich alluvial soils to an area of poor soils with low rainfall, where hunting was prohibited. Because it was difficult to grow enough food, the women innovated and adapted food production and processing techniques to supplement family diets. New sources of food were identified from indigenous plants and trees, and new processes were developed for preparing food and rendering it fit for storage.

In industrialized societies recent research has provided evidence of women’s previously unacknowledged contributions to technological developments. Examples include the cotton gin, the sewing machine, the small electric motor, the McCormick reaper, the printing press, and the Jacquard loom. Also, it is clear that women were rarely passive recipients of technology, but, as its users, could interact in ways which fed back into and influenced the design of artefacts and systems. The work experiences of women telephonists in the first exchanges were contributory to the development of future telephone networks.

There are no simple explanations for the invisibility of women in the history of technology. Possibly men, who wrote the histories, simply did not know what women did. In some societies the roles were very precisely laid down: “Men’s work is to hunt and fish and then sit down; women’s work is all else”. For many years the inability of a woman to patent an invention in her own name was clearly an obstacle to recognition. The need for capital to support a period of trial and development of a novel artefact was another barrier. It was only in 1882, when the Married Women’s Property Act was passed, that British women acquired legal possession and control of personal property independently of their husbands. The dominant role of warfare and military concerns in the development of technology has also been suggested as contributory to women’s absence from the pages of its history.

There may, however, be considerations of a different kind to take into account. It has been argued that the ways in which women value things and people and communicate with others are different from those of men. As a generalization, women are reputed to be less adversarial, more given to networking and to less formal, less hierarchical relationships, and concerned to minimize disaster and confrontation; this contrasts with the authoritarian, rule-bound, competitive, and hierarchical structures of the world of men, intent on maximizing gain.

Even if these differences are more the product of socially determined roles than innate propensities, it would seem that the greater involvement of women in technology could lead to a wider definition of what counts as technological, and to possibly different solutions to what are deemed technological problems.


Culture is often taken to mean the norms, values, beliefs, and conventions of a group. Members interpret their experiences in terms of these shared values and categories and so can be distinguished from members of other groups. An example is the way in which different beliefs about the origin of human life, about the relationship of humans to the natural world, and about death are held by different religious groups such as Buddhists, Christians, and Confucians.

Following from this it is suggested that technologies which originate in a particular cultural context bear the imprint of that culture. They will reflect characteristic values and beliefs as do, for example, the pyramids of Egypt, the Gothic cathedrals of medieval Europe, and the mosques of the Islamic world in their structure, configuration, and decoration. From the present day, Kenji Ekuan, designer of Yamaha motorcycles and musical instruments, has argued that Japanese design is characterized by “complex simplicity”: the products are small and precision built, lightweight yet robust, energy-frugal, and miniaturized with quality—all attributes which he regards as expressions of Japanese culture. French couture and cuisine are further examples in which technological products are commonly regarded as expressions of cultural values and norms. Indeed, the definition of culture might be extended to include technology as a part of culture rather than a product of it; that is, culture comprises artefacts and technical processes as well as values and beliefs.

A Technology Transfer

One striking way of bringing into relief the values embedded in technologies is by their transfer from one cultural context to another. When, in the 19th century, agricultural settlers began to invade the Canadian plains on which Indians hunted buffalo—the mainstay of their life—the Indian response was well captured in the following quotation:

You ask me to plough the ground. Shall I take a knife and tear my mother’s breast? Then, when I die she will not take me to her bosom to rest … You ask me to cut grass and make hay and sell it and be rich like white men, but how dare I cut off my mother’s hair?

T. C. McLuhan, Touch the Earth: A Self-Portrait of Indian Existence

The attempt to persuade the Plains Indians to adopt the agricultural practices of ploughing, planting, and harvesting brought into conflict two contrasting value systems regarding the relationship of humans to their environment. The traditional Indian way of life involved respect for, and harmony with, nature and with the spiritual powers believed to inhabit all living and non-living things. The impact of agricultural technology as brought by the predominantly European homesteaders resulted in confusion, strain, and often the destruction of the Indian lifestyle.

Not all technology transfers have had the same disastrous consequences for the recipient culture, although there are many instances where the adoption of a new technology has caused major changes in employment patterns and social structures. Change in the means of manufacturing sandals in North Africa is a case in point. In one region the sandals were made by some 5,000 artisan shoemakers, using local supplies of leather, glue, thread, hand tools, tacks, wax and polish, fabric linings, laces, wooden lasts, and cardboard boxes. Then, two Swiss plastic-injection moulding machines for the manufacture of sandals were introduced, each in operation for three shifts per day and requiring a labour force of only 40 workers. A million and a half pairs of sandals per year were produced, selling for less than the cost of the leather sandals and with a longer life. As a result, many indigenous small industries declined, as did employment opportunities; there was an increase in dependency on imported plastic, spare parts for the machines, and maintenance services—all requiring foreign currency. An accompanying increase in migration from rural areas to cities contributed to the creation of a “dual society”.

Such experiences of transfer testify to the non-neutrality of technologies and the way in which they can recreate, in a new host culture, aspects of the social system of their place of origin. For this reason, technology has sometimes been likened to a social gene which can carry encoded social relations from one context to another and replicate them there. When Japan was modernized after the Meiji Restoration in 1868 and deliberately recruited science and technology experts from already industrialized countries, it sought actively to avoid the introduction of foreign values with the technologies it imported. The cry was for “Western techniques but Eastern values”; the assimilation of new technologies was carefully managed to ensure their alliance to an intense patriotism and to the growth of the nation’s industrial and military strength.

Although one-way technology transfer can act as a powerful alternative to military colonization and a means of fostering the long-term dependency of the recipient culture on the provider of the technology, the transfer process is often more complex and interactive. For one thing, technologies are not used by every culture in exactly the same way. Gunpowder, invented by the Chinese and used by them for fireworks and primitive guns, when brought to Europe stimulated the production of much more powerful and devastating cannon.

Nearer the present day, a project undertaken by the United Nations Children’s Fund (UNICEF) to bring a supply of drinkable water to rural villages in India began by using cast-iron copies of farmstead pumps from the United States and Europe. While adequate for a single household, these were not designed to withstand continual daily use by an entire village community. Breakdowns were frequent. As a result, attempts were made to design a more reliable pump appropriate to the particular demands of village use. The stringent design criteria included low cost, durability, easy installation, maintenance and repair, and ability to be mass-produced under Indian conditions. However, the production of such a pump by no means solved the problem. There was also need for a supporting infrastructure of warehouses for spare parts, distribution networks for delivery of supplies, and training programmes for those who would monitor the quality of the water supply and keep the pump in good working order. Progressive decentralization of responsibility for the technology, from government and international agencies to local manufacturers and suppliers, was needed and a system of quality control involving the standardization of pump parts brought into being. Only by the well-synchronized functioning of all components of the system, in which the pump as artefact was itself enmeshed, was the technology likely to be successful. The drilling of water boreholes and the construction of appropriate pumps represented only one component of the total challenge of transfer.

B Appropriate Technology

The idea of a technology being appropriate in the sense of respecting the needs, resources, environment, and lifestyles of the people using it came into prominence in the 1960s. A powerful advocate was the economist E. F. Schumacher, who in his book Small is Beautiful (1973), wrote of “technology with a human face” and used the term “intermediate technology”. Schumacher drew on the belief of Gandhi that the poor of the world cannot be helped by mass production, only by production by the masses. His prescription for intermediate technology required it to make use of the best of modern knowledge and experience; be conducive to decentralization; be compatible with the laws of ecology; be gentle in its use of scarce resources; and serve humans instead of making them the servants of machines. However, not all those in the so-called developing world are content to see the cultivation of appropriate technology in their own countries while the industrialized societies are perceived as speeding towards a different and high-technology future.


It is clear that technology’s impact on society has been profound, and never more so than today. For many, including governments, its ability to contribute to wealth generation and economic development makes its encouragement a national priority. In the United Kingdom, a government White Paper published in 1986 asserted that “Survival and success will depend on designing, making, and selling goods and services that the customer wants at the time he wants and at a price he is prepared to pay; innovating to improve quality and efficiency; and maintaining an edge over all competition”.

Subsequently, a Technology Foresight Programme was begun to identify ways in which government resources might best be directed in the interests of the economy and to indicate areas of technology which might yield productive innovations. A series of technology foresight themes was drawn up under headings such as harnessing bioprocesses, materials synthesis and processing, computing and communications, cleaner world, modelling and impact, and control in management, the latter including security and anti-fraud technology.

Clearly, technology is seen as having a major role to play in improving the nation’s economic competitiveness and quality of life. Similar dispositions towards, and expectations of, technology are to be found in many other countries.

The idea that the artefacts of technology are indices of progress is deeply entrenched in many societies. For most of the industrialized world, a return to a situation without electricity and water services, telephones and televisions, refrigerators, washing-machines, cars, trains, aeroplanes, and sewage and waste disposal systems would be widely considered to be utterly retrograde. Equally, the vision of progress for many in poorer countries has been in terms of what, technologically, others have but they have not yet got. Either way, however, what is sometimes forgotten is that technology can lead a double life; it may conform to the intentions of its creators, but it can also yield unintended, sometimes unimagined, outcomes. Even with a clear vision of what type of society is being sought, technology can be an unpredictable ally. It also has the ability to usurp, or divert attention from, consideration of the outcome, the onward quest for technological development itself becoming the goal.

With the increased use of technology in production and service industries, a concern has grown that jobs will be destroyed and a vast army of unemployed will result. It is true that, over the past 200 years, many millions of manual workers have been replaced by machines, most dramatically, perhaps, in agriculture. Nearer the present, much factory work has also been automated. Machines saved on labour costs while increasing production, and were thus welcomed by factory owners. However, expansion of the service sector has provided alternative employment for many. Now, with the growing automation of service jobs, and technological developments such as computer diagnosis of illnesses, which put even the skilled at risk of redundancy, concerns have heightened about unemployment. Recent studies, however, provide little support for the view that technological change is the sole cause of unemployment. Rather, it is possible that technology can create more jobs than it destroys. However, a clear picture may be difficult to obtain for some time because firms take time to learn how to use new technology effectively and to replace obsolete management structures. Nevertheless, jobs such as programmer, systems analyst, network manager, database architect, computer operator, and computer repair technician were unheard of only a short time ago and are now among the fastest growing. It does not follow, however, that those displaced from, for example, middle management and from lower-skilled jobs will necessarily be able to move quickly into the new growth areas of employment; nor is it certain that these will provide a sufficient number of jobs for a nation’s workforce.

It is frequently claimed that the most potent agent of change on present-day society and the economy will prove to be information technology. The so-called digital revolution whereby information—whether text, sound, or video—can be converted into, and from, binary digits (bits) and transmitted over global networks is expected to transform industries such as banking, telecommunications, and publishing. Advances in light-wave communications technology based on optical fibres have vastly increased the volume and speed of information transmission. The development of flat-panel displays has freed the computer from the desk. Together these have contributed to the integration of computer and communications technology which, for the user, transcends space and makes information available on demand.
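The principle behind this digital convertibility can be illustrated with a short modern sketch (in Python, and not part of the original article): any text can be reduced to a stream of binary digits and recovered from it without loss, which is what allows the same networks to carry text, sound, and video alike.

```python
# A minimal illustration of the digital principle: text is converted
# into a stream of binary digits (bits) and then recovered exactly.
message = "digital"

# Each character becomes one or more bytes; each byte becomes 8 bits.
bits = "".join(format(byte, "08b") for byte in message.encode("utf-8"))

# After transmission, the bit stream is cut back into 8-bit groups
# and decoded into the original text.
chunks = [bits[i:i + 8] for i in range(0, len(bits), 8)]
recovered = bytes(int(chunk, 2) for chunk in chunks).decode("utf-8")

assert recovered == message  # nothing is lost in the round trip
```

The same round trip applies in principle to sound and video, which are simply larger streams of bits produced by sampling rather than by character encoding.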

As well as rich opportunities, such a major social change presents severe challenges. One is how to avoid the creation of a divided society of information “haves” and “have-nots”. This is both a national and an international problem. Others relate to ownership and security of information: cyberspace is a new frontier where existing legal principles and practice are having to be rethought. Yet again, there are moral issues to be faced about the kinds of information which should be openly available to users of the Internet.

It is understandable, given the complex ways in which technology interacts with society and with the lives of individuals, that perceptions of it should be widely divergent and influenced strongly by the nature of personal experiences. Technophobes with respect to military technology may be technophiles when it comes to medical technology. In general, public understanding of the nature of technology is not well developed, and there is widespread recognition, globally, of the need for a greater degree of “technological literacy” among citizens. The inclusion of technology as a subject in the curriculum of general education of both primary and secondary schoolchildren is a recent development in many countries intended to improve the understanding and the practice of technology.

Contributed By:
David Layton


Scientific Revolution


Scientific Revolution, name commonly used to denote the period of history during which the conceptual, methodological, and institutional foundations of modern science were laid down in Western Europe. Taking place roughly between 1500 and 1700, it should not be seen as a revolution in science, since there was nothing like science in the modern sense in existence before this time. Rather, it was a period when separate traditions of mathematical studies, craft techniques, natural magic, and other occult ideas were brought closer to, and amalgamated with, the traditional natural philosophy which had developed in the Western European university system throughout the Middle Ages. The resulting change of intellectual disciplinary boundaries gave rise for the first time to something recognizably like modern science.

Throughout the Middle Ages, formal attempts to understand the physical world were developed, chiefly in the arts and medical faculties of the medieval universities. This natural philosophy, as it was known, derived almost entirely from the teachings of the ancient Greek philosopher Aristotle. Most of the brilliant legacy of ancient Greek thought had been lost to Western Europe after the fall of the Roman Empire, and when this legacy began to be recovered, from Byzantine and Islamic sources, where it had to some extent been cherished, it was the works of Aristotle which had the most immediate impact and began to dominate Western philosophical thought (see Western Philosophy). The learning in the two most powerful faculties of the medieval university system, the faculties of divinity and of law, was, of course, based on ancient writings: Holy Scripture and Roman Law, as codified by Justinian I in the Corpus Juris Civilis (534; Body of Civil Law). The arts and medical faculties tended to follow suit, with the result that the focus of study was not the natural world itself, or the techniques of practical healing, but the writings of Aristotle and Galen, who was the equivalent medical authority in antiquity. Concentration on the study of these texts meant that there was little or no scope for the study of more practical arts or sciences within the university curricula.

This tendency to avoid practical subjects was reinforced by Aristotle’s own teachings about how natural philosophy should be conducted, and the correct way of determining the truth of things. He rejected the use of mathematics in natural philosophy, for example, because he insisted that natural philosophy should explain phenomena in terms of physical causes and that mathematics, being entirely abstract, could not contribute to this kind of physical explanation. Even those branches of the mathematical sciences that seemed to come close to explaining the physical world, like astronomy or optics, were disparaged as “mixed sciences” which tried to combine the principles of one science, geometry, with those of another, physics, in order to explain the behaviour of heavenly bodies or rays of light. But the results, according to Aristotle, could not properly explain anything.

Although geometry and arithmetic were taught in the university system along with two of the “mixed sciences”, astronomy and music (treated essentially as the mathematics of proportions and harmonies), they were always regarded as inferior to natural philosophy and could not be used, therefore, to promote more practical approaches to the understanding of nature. Within the universities, even the study of plants and animals tended to be text-based, taking knowledge of flora, for example, from the compilations of herbal and medicinal plants by Pedanius Dioscorides, leaving more localized and practical knowledge to lay experts in herbal lore outside the university system. Similarly, alchemy and other empirically based aspects of the natural magic tradition were pursued almost entirely outside the university system.

This fragmentation of studies concerned with the workings of nature was reinforced throughout the Middle Ages by the Church. After some initial problems with non-Christian aspects of Aristotelian teaching, the Church embraced it as a “handmaiden” to the so-called “queen of the sciences”, theology. But if Aristotelian natural philosophy was considered to provide support to religious doctrines, other naturalist pursuits were considered to be subversive. The Church was always opposed to the demonological aspects of magic, for example, and so tended to be suspicious also of natural magic, even though natural magic was simply concerned with the perfectly natural, although mysterious or occult, properties of material bodies (such as the ability of magnets to attract iron, or the ability of certain plants or their extracts to cure diseases). One way or another, therefore, the powerful combination of Aristotelian teachings with Church doctrines tended to perpetuate the exclusion of any way of studying or analysing nature that did not fit into traditional Aristotelian natural philosophy.


The situation began to change during the Renaissance. Indeed, the scientific revolution can be seen as a major aspect of the sweeping and far-reaching changes of the Renaissance. In broad terms, the scientific revolution had four major aspects.

A Development of Experimental Method

The Renaissance was the period when the experimental method, still characteristic of science today, began to be developed and came increasingly to be used for understanding all aspects of the physical world. The experimental method was not in itself new—it had been a common aspect of the natural magic tradition from antiquity. For example, all the experimental techniques used by William Gilbert, author of what is generally acknowledged to be the earliest example of an experimental study of a natural phenomenon, De Magnete (1600; Of Magnets, Magnetic Bodies, and the Great Magnet of the Earth, 1890), were first developed by Petrus Peregrinus, a renowned medieval magus (magician). Experimentation was a major aspect of the natural magic tradition and was ready for appropriation by Renaissance natural philosophers who recognized its potential. The likely benefits of magic became more apparent during the Renaissance thanks to the rediscovery of ancient magical writings. Religious opposition to magic had less force after the discovery of various writings allegedly written by Hermes Trismegistus, Zoroaster, Orpheus, and other mythical or legendary characters. We now know these texts were written in the early centuries of the Christian era and deliberately attributed to such legendary authors, but Renaissance scholars believed they were genuinely ancient documents, perhaps as old as the Pentateuch of Moses. This gave them great authority and led to increased respect for magical approaches.

Going hand in hand with manipulative experimental techniques was simply an increased emphasis upon experience and observation. Accidental discoveries proved an invaluable stimulus here. Andreas Vesalius, innovative professor of surgery at the University of Padua, claimed to have noticed over 200 errors in Galen’s anatomical writings when he performed his own dissections. Vesalius’s emphasis upon a return to anatomical dissection, instead of reliance on Galen’s authority, led to major discoveries, including that of the circulation of the blood by William Harvey, who was taught by one of Vesalius’s successors at Padua. Similarly, the discovery of numerous new species of animals and plants in the New World led to a more empirical approach to natural history. Previously, herbals and bestiaries included in their descriptions of plants and animals the religious symbolism, legends, superstitions, and other non-natural lore attributed to them. Since there was no equivalent information about newly discovered species, however, herbals and bestiaries compiled after the great Renaissance voyages of discovery became increasingly concerned only with observed naturalistic properties. The advent of printing also played an important part here. When the circulation of texts depended upon hand-written copies, illustrations were often crudely executed by the scribe, whose forte was calligraphy, not draughtsmanship. In the preparation of a printed edition, however, an illustrator would be called in and the standard of illustrations improved immeasurably. Almost inevitably the illustrations became more realistic and stimulated a concern for proper observation of natural phenomena.

Another important aspect of the new empiricism was the invention of new observational instruments. Galileo’s use of the telescope—first developed for commercial purposes—to make astonishing astronomical observations was in some ways an extension of the use of observational instruments used in navigation, such as the astrolabe and the quadrant. Its exciting success was to stimulate the development of a whole range of instruments for studying nature, such as the microscope, the thermometer, the barometer, the air pump, and the electrostatic generator.

B Mathematization of Nature

The scientific revolution has also been characterized as the period of the “mathematization of the world picture” when quantitative information and the mathematical analysis of the physical world was seen to offer more reliable knowledge than the more qualitative and philosophical analyses that had been typical of traditional natural philosophy. Like magic, the mathematical sciences had their own long history, but thanks to Aristotle’s strictures they had always been kept separate from natural philosophy and regarded as inferior to it. But, as Aristotle’s authority became weaker throughout the Renaissance (the rediscovery of the writings of other ancient Greek philosophers with views widely divergent from those of Aristotle, notably those of Plato, Epicurus, and the Stoics, made it plain that he was by no means the only ancient authority), and as scepticism became increasingly plausible in the light of all the remarkable exposures of the inadequacies of traditional intellectual positions, mathematics became an increasingly powerful force. Mathematicians could lay claim to dealing with certain knowledge, capable of undeniable proof and so immune from sceptical criticisms. The full story of the rise in status of mathematics is complex and crowded, but salient features are provided by Nicolaus Copernicus, who claimed that, for no other reason than that the mathematics indicated it, the Earth must revolve around the Sun, and Johannes Kepler, who reinforced this with a vastly more precise astronomy. Similarly, a moving Earth demanded a new theory of motion, and of how moving bodies behave, and this was effectively initiated as a new mathematical science by Galileo and achieved its apotheosis a few decades later in the work of Isaac Newton.

C Practical Uses of Scientific Knowledge

Experimentalism and mathematization were both stimulated by an increasing concern that knowledge of nature should be practically useful, bringing distinct benefits to its practitioners, its patrons, or even to mankind in general. Apart from its supposed use in supporting medical ideas, the only use to which natural philosophy had been put throughout the Middle Ages was in bolstering religion. During the scientific revolution the practical usefulness of knowledge, an assumption previously confined to the magical and the mathematical traditions (indeed, for many throughout the Middle Ages and the Renaissance, mathematics, so mysterious and incomprehensible but so useful, was obviously a branch of the magical arts), came to be extended to natural philosophy. To a large extent, this new emphasis was a result of the demands of new patrons, chiefly wealthy princes, who sought some practical benefit from their financial support for the study of nature. It was also in keeping, however, with the claims of the Renaissance humanists that the vita activa (active life) was, contrary to the teachings of the Church, morally superior to the vita contemplativa (contemplative life) of the monk, because of the benefits it could bring to others. The major spokesman for this new, pragmatically useful natural philosophy was Francis Bacon, one-time Lord Chancellor of England, who promoted his highly influential vision of a reformed empirical knowledge of nature that he believed would result in immense benefits to mankind.

D Development of Scientific Institutions

Finally, it was also a period in which new forms of organization, and even institutionalization, were established for the study of the natural world. While the universities still tended to maintain the traditional natural philosophy, the new, more empiricist, mathematical, and pragmatic approaches were encouraged in the royal courts of Europe, or in more or less formal gatherings of like-minded individuals, such as the informal gatherings of experimental philosophers in Oxford and London during the Interregnum (1649-1660), or the Royal Society of London, established on a formal basis at the Restoration (1660) by members of those earlier groups. Although nominally under the patronage of Charles II, the Royal Society received no financial support from the Crown. The Académie des Sciences de Paris, however, was set up by Jean-Baptiste Colbert, Louis XIV’s controller-general of finance, and its fellows were paid by the State. Whatever their precise constitution, the proliferation of collaborative scientific societies testifies to the widespread recognition that, as Bacon wrote, “knowledge is power”, and knowledge of nature is potentially extremely powerful.


These four factors interacted with one another and were historically dependent upon one another. In combination their impact on European culture was phenomenal. To begin with, it rapidly became apparent that the traditional Aristotelian natural philosophy was completely wrong. Aristotelian teaching was so encyclopedic in its scope, however, providing a ready explanation for all phenomena, that it could not simply be abandoned. The new innovations and theories were arrived at independently and chipped away at Aristotelian teaching, but they did not hang together to provide an alternative system. What was required was a completely new philosophy of nature that could incorporate Copernican astronomy, Galileo’s new theory of motion, and Harvey’s new physiology, and show how they and all the other new discoveries depended upon, or followed from, certain basic assumptions. This ambition began to be realized in the early 17th century with the development of the so-called mechanical philosophy. There were a number of slightly different versions of this new philosophy, but the earliest and most influential was the system developed by René Descartes.

Powerful as Descartes’ system was, its conclusions, which Descartes arrived at purely by a process of abstract reasoning, were not always compatible with experimentally determined phenomena. In late 17th-century England, a more empirically based version of the mechanical philosophy was developed. The success of this characteristically English approach was triumphantly confirmed in 1687 with the publication of Isaac Newton’s Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy).


Beginning with Descartes and culminating with Isaac Newton, the development of the mechanical philosophy can be seen as the foundation of the modern scientific world-view. Previously, the dominant vision of the nature of the world had been provided by religion. Natural philosophy had been seen merely as an adjunct to religion, a means of demonstrating through the study of the intricacies of nature God’s existence and omnipotence. The fragmentation of Western Christianity after the Reformation led to a weakening of religion. Furthermore, the rise of philosophical scepticism in the Renaissance quickly led to scepticism in religion. Atheism, previously unknown in Christian Europe, gradually became an increasingly popular alternative to religion. Ironically, although all of the major figures in the scientific revolution were devoutly religious, and saw their scientific work as a way of proving the existence of an omnipotent creator of the world, the new mechanical philosophies were appropriated by atheists. Those who wished to deny the validity of the religious world-view could use the new philosophies to suggest that the world was capable of functioning in an entirely mechanistic way with no need for supernatural intervention or supervision.

Newton was especially devout and explicitly stated that his system was intended to demonstrate the existence of God, but he was powerless to prevent the irreligious interpretation of his science. Newton’s influence on European culture was entirely unprecedented. The undeniable success of his Philosophiae Naturalis Principia Mathematica in understanding and describing the workings of nature convinced many that, by applying the same methods, all problems could be solved, even moral, political, and economic ones. Much of the ethos of the Enlightenment, including the new “sciences of man” (as political economy and the other new social sciences developed at that time were called), owed its origins to the powerful stimulus of Newtonian science. But all too often it was a Newtonian science devoid of the God that Newton himself had believed in. From then on the secular scientific world-view became increasingly dominant.


Herbert Butterfield, an eminent Cambridge historian, once said that the scientific revolution reduced the Renaissance and the Reformation “to the rank of mere episodes”, and that it marked “the real origin both of the modern world and of the modern mentality” (The Origins of Modern Science, 1949). Given the overwhelming importance of science and the scientific world-view in modern Western culture, it is easy to see what he meant. Indeed, the historical significance of the scientific revolution has ensured that it, or some aspect of it (usually a supposed mental attitude, such as a preoccupation with rationality or measurement), figures in all attempts to explain the current dominance of the West in world culture. Although the cultural imperialism of the West might now seem to owe more to the consumerism of advanced capitalism, that consumerism itself derives in large measure from the success of the Western science-and-technology complex. This alliance between science and technology in the West can be seen to have had its origins in the 17th-century emphasis on the usefulness of scientific knowledge for the amelioration of the human condition.

This, in turn, has led to attempts to understand why the scientific revolution occurred when and where it did. Philosophical attempts to understand the workings of nature, and the techniques of mathematical analysis, reached astonishingly high levels of accomplishment among the ancient Greeks. During the Middle Ages, it looked as though the civilization of Islam would build upon the Greek legacy while Europeans continued to ignore it. The Arabs made notable achievements in natural philosophy, chemistry, medicine, and mathematics. Meanwhile, scientific and technological advance in China was also far ahead of anything in Europe, until the 17th century, when Western Europeans overtook them and went much further. To understand why the Greeks, Arabs, and Chinese did not inaugurate the scientific revolution, and why the Western Europeans did, we have to consider not only why the four major factors listed above combined so fruitfully in Western culture, but also why they did not do so in the cultures of the rival contenders. This seems set to keep historians busy for a long time yet.

Contributed By:
John Henry




Science (Latin, scientia, from scire, “to know”), a term used in its broadest sense to denote systematized knowledge in any field, but usually applied to the organization of objectively verifiable sense experience. The pursuit of knowledge in this context is known as pure science, to distinguish it from applied science, which is the search for practical uses of scientific knowledge, and from technology, through which applications are realized. For additional information, see separate articles on most of the sciences mentioned.


Efforts to systematize knowledge can be traced back to prehistoric times, through the designs that Palaeolithic people painted on the walls of caves, through numerical records that were carved in bone or stone, and through artefacts surviving from Neolithic civilizations. The oldest written records of protoscientific investigations come from Mesopotamian cultures; lists of astronomical observations, chemical substances, and disease symptoms, as well as a variety of mathematical tables, were inscribed in cuneiform characters on clay tablets. Other tablets dating from about 2000 bc show that the Babylonians had knowledge of Pythagoras’ Theorem, solved quadratic equations, and developed a sexagesimal system of measurement (based on the number 60) from which modern time and angle units stem. (see Number Systems; Numerals.)
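That Babylonian base-60 scheme survives directly in how hours, minutes, and seconds (and degrees of arc) are still subdivided. A minimal sketch in Python (the function name is illustrative, not from any source):

```python
def sexagesimal(total_seconds):
    """Split a raw count of seconds into base-60 units
    (hours or degrees, then minutes, then seconds) --
    the scheme inherited from Babylonian astronomy."""
    minutes, seconds = divmod(total_seconds, 60)
    units, minutes = divmod(minutes, 60)
    return units, minutes, seconds

# 5,025 seconds of arc is 1 degree, 23 minutes, 45 seconds
print(sexagesimal(5025))
```

The same two `divmod` steps, run in reverse, are why 60 was a convenient base: it divides evenly by 2, 3, 4, 5, 6, 10, 12, 15, 20, and 30.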

From almost the same period, papyrus documents have been discovered in the Nile Valley, containing information on the treatment of wounds and diseases, on the distribution of bread and beer, and on working out the volume of a portion of a pyramid. Some of the present-day units of length can be traced back to Egyptian prototypes, and the calendar in common use today is the indirect result of pre-Hellenic astronomical observations.
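The pyramid calculation referred to (preserved in the Moscow Mathematical Papyrus) amounts, in modern notation, to the volume of a truncated square pyramid (frustum) with base side $a$, top side $b$, and height $h$:

```latex
V = \frac{h}{3}\left(a^{2} + ab + b^{2}\right)
```

Setting $b = 0$ recovers the familiar one-third formula for a complete pyramid.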


Scientific knowledge in Egypt and Mesopotamia was chiefly of a practical nature, with little rational organization. Among the first Greek scholars to seek the fundamental causes of natural phenomena was the philosopher Thales, in the 6th century bc, who introduced the concept that the Earth was a flat disc floating on the universal element, water. The mathematician and philosopher Pythagoras, who followed him, established a movement in which mathematics became a discipline fundamental to all scientific investigation. The Pythagorean scholars postulated a spherical Earth moving in a circular orbit about a central fire. In Athens, in the 4th century bc, Ionian natural philosophy and Pythagorean mathematical science combined to produce the syntheses of the logical philosophies of Plato and Aristotle. At the Academy of Plato, deductive reasoning and mathematical representation were emphasized; at the Lyceum of Aristotle, inductive reasoning and qualitative description were stressed. The interplay between these two approaches to science has led to most subsequent advances.

During the so-called Hellenistic Age following the death of Alexander the Great, the mathematician, astronomer, and geographer Eratosthenes made a remarkably accurate measurement of the Earth. Also, the astronomer Aristarchus of Samos espoused a heliocentric (Sun-centred) planetary system, although this concept did not gain acceptance in ancient times. The mathematician and inventor Archimedes laid the foundations of mechanics and hydrostatics (part of fluid mechanics); the philosopher and scientist Theophrastus became the founder of botany; the astronomer Hipparchus developed trigonometry, and the anatomists and physicians Herophilus and Erasistratus based anatomy and physiology on dissection.

Following the destruction of Carthage and Corinth by the Romans in 146 bc, scientific inquiry lost its impetus until a brief revival took place in the 2nd century ad under the Roman emperor and philosopher Marcus Aurelius. At this time the geocentric (Earth-centred) Ptolemaic System, advanced by the astronomer Ptolemy, and the medical works of the physician and philosopher Galen became standard scientific treatises for the ensuing age. A century later the new experimental science of alchemy arose, springing from the practice of metallurgy. By 300, however, alchemy had acquired an overlay of secrecy and symbolism that obscured the advantages such experimentation might have brought to science.


During the Middle Ages, six leading culture groups were in existence: the Latin West, the Greek East, the Chinese, the East Indian, the Arabic, and the Maya. The Latin group contributed little to science before the 13th century, the Greek never rose above paraphrases of ancient learning, and the Maya had no influence on the growth of science. In China, science enjoyed periods of progress, but no sustained drive existed. Chinese mathematics reached its zenith in the 13th century with the development of ways of solving algebraic equations by means of matrices, and with the use of the arithmetic triangle. More important, however, was the impact on Europe of several practical Chinese innovations. These included the processes for manufacturing paper and gunpowder, the use of printing, and the mariner’s compass. In India, the chief contributions to science were the formulation of the so-called Hindu-Arabic numerals, which are in use today, and the conversion of trigonometry to a quasi-modern form. These advances were transmitted first to the Arabs, who combined the best elements from Babylonian, Greek, Chinese, and Hindu sources. By the 9th century, Baghdad, on the River Tigris, had become a centre for the translation of scientific works, and in the 12th century, this learning was transmitted to Europe through Spain, Sicily, and Byzantium.

Recovery of ancient scientific works at European universities led, in the 13th century, to the controversy over scientific method. The so-called realists espoused the Platonic approach, whereas the nominalists preferred the views of Aristotle. At the universities of Oxford and Paris, such discussions led to advances in optics and kinematics that paved the way for Galileo and the German astronomer Johannes Kepler.

The Black Death and the Hundred Years’ War disrupted scientific progress for more than a century, but by the 16th century, a revival was well under way. In 1543 the Polish astronomer Nicolaus Copernicus published De Revolutionibus Orbium Coelestium (On the Revolutions of the Heavenly Bodies), which revolutionized astronomy. Also published in 1543, De Humani Corporis Fabrica (On the Structure of the Human Body) by the Belgian anatomist Andreas Vesalius corrected and modernized the anatomical teachings of Galen and led to the discovery of the circulation of the blood. Two years later the Ars Magna (Great Art) of the Italian mathematician, physician, and astrologer Gerolamo Cardano initiated the modern period in algebra with the solution of cubic and quartic equations.


Essentially modern scientific methods and results appeared in the 17th century because of Galileo’s successful combination of the functions of scholar and artisan. To the ancient methods of induction and deduction, Galileo added systematic verification through planned experiments, using newly invented scientific instruments such as the telescope, the microscope, and the thermometer. Later in the century, experimentation was widened through the use of the barometer by the Italian mathematician and physicist Evangelista Torricelli; the pendulum clock by the Dutch mathematician, physicist, and astronomer Christiaan Huygens; and the air pump by the English physicist and chemist Robert Boyle and the German physicist Otto von Guericke.

The culmination of these efforts was the universal law of gravitation, published in 1687 by the English mathematician and physicist Isaac Newton in Philosophiae Naturalis Principia Mathematica. At the same time, the invention of calculus by Newton and the German philosopher and mathematician Gottfried Wilhelm Leibniz laid the foundation of today’s sophisticated level of science and mathematics.
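In modern notation, the law of gravitation Newton published states that two bodies of masses $m_1$ and $m_2$, a distance $r$ apart, attract each other with a force

```latex
F = G\,\frac{m_1 m_2}{r^{2}}
```

where $G$ is the gravitational constant. One compact relation thus accounts for both falling bodies on Earth and the orbits of the planets.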

The scientific discoveries of Newton and the philosophical system of the French mathematician and philosopher René Descartes provided the background for the materialistic science of the 18th century, in which life processes were explained on a physicochemical basis. Confidence in the scientific attitude carried over to the social sciences and inspired the so-called Age of Enlightenment, which culminated in the French Revolution of 1789. The French chemist Antoine Laurent Lavoisier published Traité élémentaire de chimie (Treatise on Chemical Elements, 1789), with which the revolution in quantitative chemistry opened.

Scientific developments during the 18th century paved the way for the following “century of correlation”, so called for its broad generalizations in science. These included the atomic theory of matter postulated by the British chemist and physicist John Dalton; the electromagnetic theories of Michael Faraday and James Clerk Maxwell, also of the United Kingdom; and the law of the conservation of energy, enunciated by the British physicist James Prescott Joule and others.

The most comprehensive of the biological theories was that of evolution, put forward by Charles Darwin in his On the Origin of Species by Means of Natural Selection (1859), which stirred as much controversy in society at large as the work of Copernicus. By the beginning of the 20th century, however, the fact, but not the mechanism, of evolution was generally accepted, with disagreement centring on the genetic processes through which it occurs.

But as biology became more firmly based, physics was shaken by the unexpected consequences of quantum theory and relativity. In 1927 the German physicist Werner Heisenberg formulated the so-called uncertainty principle, which held that limits existed on the extent to which, on the subatomic scale, coordinates of an individual event can be determined. In other words, the principle stated the impossibility of predicting, with precision, that a particle such as an electron would be in a certain place at a certain time, moving at a certain velocity. Quantum mechanics instead dealt with statistical inferences relating to large numbers of individual events.
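In its modern form the uncertainty principle puts a hard lower bound on the combined precision with which a particle’s position $x$ and momentum $p$ can be known:

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
```

where $\hbar$ is the reduced Planck constant; sharpening the measurement of one quantity necessarily blurs the other.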


Throughout history, scientific knowledge has been transmitted chiefly through written documents, some of which are more than 4,000 years old. From ancient Greece, however, no substantial scientific work survives from the period before the Elements of the geometrician Euclid (c. 300 bc). Of the treatises written by leading scientists after that time, only about half still exist. Some of these are in Greek, and others were preserved through translation by Arab scholars in the Middle Ages. Medieval schools and universities were largely responsible for preserving these works and for fostering scientific activity.

Since the Renaissance, however, this work has been shared by scientific societies; the oldest such society still in existence is the Accademia dei Lincei (to which Galileo belonged), established in 1603 to promote the study of mathematical, physical, and natural sciences. Later in the century, governmental support of science led to the founding of the Royal Society of London (1662) and the Académie des Sciences de Paris (1666). These two organizations initiated publication of scientific journals, the former under the title Philosophical Transactions and the latter as Mémoires.

During the 18th century academies of science were established by other leading nations. In the United States, a club organized in 1727 by Benjamin Franklin became, in 1769, the American Philosophical Society for “promoting useful knowledge”. In 1780 the American Academy of Arts and Sciences was organized by John Adams, who became the second US president in 1797. In 1831 the British Association for the Advancement of Science met for the first time, followed in 1848 by the American Association for the Advancement of Science, and in 1872 by the Association Française pour l’Avancement des Sciences. These national organizations issue the journals Nature, Science, and Comptes Rendus, respectively. The number of scientific journals grew so rapidly during the early 20th century that A World List of Scientific Periodicals Published in the Years 1900-1933 contained some 36,000 entries in 18 languages. A large number of these are issued by specialized societies devoted to individual sciences, and most of them are fewer than 100 years old.

Since late in the 19th century, communication among scientists has been facilitated by the establishment of international organizations, such as the International Bureau of Weights and Measures (1873) and the International Research Council (1919). The latter is a scientific federation subdivided into international unions for each of the various sciences. The unions hold international congresses every few years, the transactions of which are usually published. In addition to national and international scientific organizations, numerous major industrial firms have research departments; some of them regularly publish accounts of the work done or else file reports with government patent offices, which in turn print abstracts in bulletins that are published periodically.


Knowledge of nature originally was largely an undifferentiated observation and interrelation of experiences. The Pythagorean scholars distinguished only four sciences: arithmetic, geometry, music, and astronomy. By the time of Aristotle, however, other fields could also be recognized: mechanics, optics, physics, meteorology, zoology, and botany. Chemistry remained outside the mainstream of science until the time of Robert Boyle in the 17th century, and geology achieved the status of a science only in the 18th century. By that time the study of heat, magnetism, and electricity had become part of physics. During the 19th century, scientists finally recognized that pure mathematics differs from the other sciences in that it is a logic of relations and does not depend for its structure on the laws of nature. Its applicability in the elaboration of scientific theories, however, has resulted in its continued classification among the sciences.

The pure natural sciences are generally divided into two classes: the physical sciences and the biological, or life, sciences. The principal branches among the former are physics, astronomy, chemistry, and geology; the chief biological sciences are botany and zoology. The physical sciences can be subdivided to identify such fields as mechanics, cosmology, physical chemistry, and meteorology; physiology, embryology, anatomy, genetics, and ecology are subdivisions of the biological sciences.

All classifications of the pure sciences, however, are arbitrary. In the formulations of general scientific laws, interlocking relationships among the sciences are recognized. These interrelationships are considered responsible for much of the progress today in several specialized fields of research, such as molecular biology and genetics. Several interdisciplinary sciences, such as biochemistry, biophysics, biomathematics, and bioengineering, have arisen, in which life processes are explained physicochemically. Biochemists, for example, synthesized deoxyribonucleic acid (DNA); and the cooperation of biologists with physicists led to the invention of the electron microscope, through which structures little larger than atoms can be studied. The application of these interdisciplinary methods is also expected to produce significant advances in the fields of social sciences and behavioural sciences.

The applied sciences include such fields as aeronautics, electronics, engineering, and metallurgy, which are applied physical sciences, and agronomy and medicine, which are applied biological sciences. In this case also, overlapping branches must be recognized. The cooperation, for example, between iatrophysics (a branch of medical research based on principles of physics) and bioengineering resulted in the development of the heart-lung machine used in open-heart surgery and in the design of artificial organs such as heart chambers and valves, kidneys, blood vessels, and inner-ear bones. Advances such as these are generally the result of research by teams of specialists representing different sciences, both pure and applied. This interrelationship between theory and practice is as important to the growth of science today as it was at the time of Galileo. (See also Philosophy of Science.)


Accounting and Bookkeeping


Accounting and Bookkeeping, the process of identifying, measuring, recording, and communicating economic information about an organization or other entity, in order to permit informed judgements by users of the information. Bookkeeping encompasses the record-keeping aspect of accounting and therefore provides much of the data to which accounting principles are applied in the preparation of financial statements and other financial information.

Personal record-keeping often uses a simple single-entry system in which amounts are recorded in column form. Such entries include the date of the transaction, its nature, and the amount of money involved. Record-keeping of organizations, however, is based on a double-entry system, whereby each transaction is recorded on the basis of its dual impact on the organization’s financial position or operating results or both. Information relating to the financial position of an enterprise is presented in a balance sheet, while disclosures about operating results are displayed in a profit and loss statement. Data relating to an organization’s liquidity and changes in its financial structure are shown in a statement of changes in financial position. Such financial statements are prepared to provide information about past performance, which in turn becomes a basis for readers to try to project what might happen in the future.
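The “dual impact” of double entry can be sketched in a few lines of Python. Account names and amounts below are invented for illustration; the point is that every transaction posts an equal debit and credit, so the signed sum across all accounts (the trial balance) is always zero:

```python
from collections import defaultdict

# Ledger mapping account name -> signed balance
# (debits positive, credits negative, by convention here).
ledger = defaultdict(int)

def post(debit_account, credit_account, amount):
    """Record one transaction as an equal debit and credit."""
    ledger[debit_account] += amount   # debit side of the entry
    ledger[credit_account] -= amount  # matching credit side

post("Cash", "Share Capital", 10_000)  # owners invest cash
post("Equipment", "Cash", 4_000)       # equipment bought for cash

# Trial balance: the books must net to zero.
print(ledger["Cash"], sum(ledger.values()))
```

Because the invariant holds after every posting, any nonzero trial balance immediately signals a recording error, which is the practical appeal of the double-entry system.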


Bookkeeping and record-keeping methods, created in response to the development of trade and commerce, are preserved from ancient and medieval sources. Double-entry bookkeeping began in the commercial city-states of medieval Italy and was well developed by the time of the earliest preserved double-entry books, from 1340 in Genoa. The development of counting frames and the abacus in China in the first centuries ad laid the basis for similarly advanced techniques in East Asia.

The first published accounting work was written in 1494 by the Venetian monk Luca Pacioli. Although it disseminated rather than created knowledge about double-entry bookkeeping, Pacioli’s work summarized principles that have remained essentially unchanged. Additional accounting works were published during the 16th century in Italian, German, Dutch, French, and English, and these works included early formulations of the concepts of assets, liabilities, and income.

The Industrial Revolution created a need for accounting techniques that were adequate to handle mechanization, factory-manufacturing operations, and the mass production of goods and services. With the emergence in the mid-19th century of large, publicly held business corporations, owned by absentee shareholders and administered by professional managers, the role of accounting was further redefined.

Bookkeeping, which is a vital part of all accounting systems, was in the mid-20th century increasingly carried out by machines. The widespread use of computers broadened the scope of bookkeeping, and the term “data processing” now frequently encompasses bookkeeping.


Accounting information can be classified into two categories: financial accounting or public information and managerial accounting or internal information. Financial accounting includes information disseminated to parties that are not part of the enterprise proper—shareholders, creditors, customers, suppliers, regulatory bodies, financial analysts, and trade associations—although the information is also of interest to the company’s officers and managers. Such information relates to the financial position, liquidity (that is, ability to convert to cash), and profitability of an enterprise.

Managerial accounting deals with cost-profit-volume relationships, efficiency and productivity, planning and control, pricing decisions, capital budgeting, and similar matters that aid decision-making. This information is not generally disseminated outside the company. Whereas the general-purpose financial statements of financial accounting are assumed to meet basic information needs of most external users, managerial accounting provides a wide variety of specialized reports for division managers, department heads, project directors, section supervisors, and other managers.

A Specialized Accounting

Of the various specialized areas of accounting that exist, the three most important are auditing, income taxation, and accounting for not-for-profit organizations. Auditing is the examination, by an independent accountant, of the financial data, accounting records, business documents, and other pertinent documents of an organization in order to attest to the accuracy of its financial statements. Large private and public enterprises sometimes also maintain an internal audit staff to conduct audit-like examinations, including some that are more concerned with operating efficiency and managerial effectiveness than with the accuracy of the accounting data.

The second specialized area of accounting is income taxation. Preparing an income tax form entails collecting information and presenting data in a coherent manner; therefore, both individuals and businesses frequently hire accountants to determine their tax position. Tax rules, however, are not identical with accounting theory and practices. Tax regulations are based on laws that are enacted by legislative bodies, interpreted by the courts, and enforced by designated administrative bodies. Much of the information required in calculating taxes, however, is also needed in accounting, and many techniques of computing are common to both areas.

A third area of specialization is accounting for not-for-profit organizations, such as charities, universities, hospitals, Churches, trade and professional associations, and government agencies. These organizations differ from business enterprises in that they generally receive resources on some non-reciprocating basis (that is, without paying for such resources), they are not set up to create a distributable profit, and they usually have no share capital. As a result, these organizations call for differences in record-keeping, in accounting measurements, and in the format of their financial statements.

B Financial Reporting

Traditionally, the function of financial reporting was to provide information about companies to their owners. Once the delegation of managerial responsibilities to hired personnel became a common practice, financial reporting began to focus on stewardship, that is, on the managers’ accountability to the owners. Its purpose then was to document how effectively the owners’ assets were managed, in terms of both capital preservation and profit generation.

As businesses came to be commonly organized as corporations, the appearance of large multinational corporations and the widespread employment of professional managers by absentee owners brought about a change in the focus of financial reporting.

Although the stewardship orientation has not become obsolete, financial reporting is today somewhat more geared towards the needs of investors. Because both individual and institutional investors view owning shares of companies as only one of various investment alternatives, they seek much more information about the future than was supplied under the traditional stewardship concept. As investors relied more on the potential of financial statements to predict the results of investment and disinvestment decisions, accounting became more sensitive to their needs. One important result was an expansion of the information supplied in financial statements.

The proliferation of footnotes to financial statements is a particularly visible example. Such footnotes disclose information that is not already included in the body of the financial statement. One footnote usually identifies the accounting policies or methods adopted when acceptable alternative methods also exist, or when the unique nature of the company’s business justifies an otherwise unconventional approach.

Footnotes also disclose information about lease commitments, contingent liabilities, pension plans, share options, and foreign currency translation, as well as details about long-term debt (such as interest rates and maturity dates). A company having a widely distributed ownership usually includes among its footnotes the income it earned in each quarter, quarterly stock market prices of its shares, and information about the relative sales and profit contribution of its different areas of activity.


Accounting as it exists today may be viewed as a system of assumptions, doctrines, tenets, and conventions, all encompassed by the phrase “generally accepted accounting principles”. Many of these principles developed gradually, as did much of common law; only the accounting developments of recent decades are prescribed in statutory law. Following are several fundamental accounting concepts.

The entity concept states that the item or activity (entity) that is to be reported on must be clearly defined, and that the relationship assumed to exist between the entity and external parties must be clearly understood.

The going-concern assumption states that it is expected that the entity will continue to operate for the foreseeable future.

The historical cost principle requires that economic resources be recorded in terms of the amounts of money exchanged; when a transaction occurs, the exchange price is by its nature a measure of the value of the economic resources that are exchanged.

The realization concept states that accounting takes place only for those economic events to which the entity is a party. This principle therefore rules out recognizing a gain based on the appreciated market value of an asset that is still owned.

The matching principle states that income is calculated by matching a period’s revenues with the expenses incurred in order to bring about those revenues.

The accrual principle defines revenues and expenses as the inflow and outflow of all assets—as distinct from the flow only of cash assets—in the course of operating the enterprise.

The consistency criterion states that the accounting procedures used at a given time should conform with the procedures previously used for that activity. Such consistency allows data of different periods to be compared.

The disclosure principle requires that financial statements present the most useful amount of relevant information—namely, all information that is necessary in order not to be misleading.

The substance-over-form standard emphasizes the economic substance of events even though their legal form may suggest a different result. An example is a practice of consolidating the financial statements of one company with those of another in which it has more than a 50 percent ownership interest.

The prudence doctrine states that when exposure to uncertainty and risk is significant, accounting measurement and disclosure should take a cautious and prudent stance until evidence shows sufficient lessening of the uncertainty and risk.

A The Balance Sheet

Of the two traditional types of financial statements, the balance sheet relates to an entity’s value, and the profit and loss account—or income statement—relates to its activity. The balance sheet provides information about an organization’s assets, liabilities, and owners’ equity as of a particular date (such as the last day of the accounting or fiscal period). The format of the balance sheet reflects the basic accounting equation: assets equal equities. Assets are economic resources that provide potential future service to the organization. Equities consist of the organization’s liabilities together with the equity interest of its owners. (For example, a certain house is an asset worth £70,000; its unpaid mortgage is a liability of £45,000, and the equity of its owners is £25,000.)
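The house example above can be checked with a short numerical sketch; the figures come from the text, while the function name is purely illustrative:

```python
# The basic accounting equation: assets = liabilities + owners' equity.
# Figures are the house example from the text (amounts in pounds).

def owners_equity(assets: float, liabilities: float) -> float:
    """Owners' equity is the residual claim on assets after liabilities."""
    return assets - liabilities

house_value = 70_000   # asset
mortgage = 45_000      # liability
equity = owners_equity(house_value, mortgage)
print(equity)          # 25000

# The balance sheet balances: assets equal equities.
assert house_value == mortgage + equity
```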

Assets are categorized as current or fixed. Current assets are usually those that management could reasonably be expected to convert into cash within one year; they include cash, receivables, goods in stock (or merchandise inventory), and short-term investments in stocks and bonds. Fixed assets encompass the physical plant—notably land, buildings, machinery, motor vehicles, computers, furniture, and fixtures. They also include property being held for speculation and intangibles such as patents and trademarks.

Liabilities are obligations that the organization must remit to other parties, such as creditors and employees. Current liabilities usually are amounts that are expected to be paid within one year, including salaries and wages, taxes, short-term loans, and money owed to suppliers of goods and services. Long-term liabilities are usually debts that will come due beyond one year—such as bonds, mortgages, and long-term loans. Whereas liabilities are the claims of outside parties on the assets of the organization, the owners’ equity is the investment interest of the owners of the organization’s assets. When an enterprise is operated as a sole proprietorship or as a partnership, the balance sheet may disclose the amount of each owner’s equity. When the organization is a corporation, the balance sheet shows the equity of the owners—that is, the shareholders—as consisting of two elements: (1) the amount originally invested by the shareholders; and (2) the corporation’s cumulative reinvested income, or retained earnings (that is, income not distributed to shareholders as dividends), in which the shareholders have equity.

B The Profit and Loss Statement

The traditional activity-oriented financial statement issued by business enterprises is the profit and loss statement, often known as the income statement. Prepared for a well-defined time interval, such as three months or one year, this statement summarizes the enterprise’s revenues, expenses, gains, and losses. Revenues are transactions that represent the inflow of assets as a result of operations—that is, assets received from selling goods and providing services. Expenses are transactions involving the outflow of assets in order to generate revenue, such as wages, rent, interest, and taxation.

A revenue transaction is recorded during the fiscal period in which it occurs. An expense appears in the profit and loss statement of the period in which revenues presumably resulted from the particular expense. To illustrate, wages paid by a merchandising or service company are usually recognized as an immediate expense because they are presumed to generate revenue during the same period in which they occurred. On the other hand, money spent on raw materials to be used in making products that will not be sold until a later financial period would not be considered an immediate expense. Instead, the cost will be treated as part of the cost of the resulting stock asset; the effect of this cost on income is thus deferred until the asset is sold and revenue is realized.

In addition to disclosing revenues and expenses (the principal components of income), the profit and loss statement also lists gains and losses from other kinds of transactions, such as the sale of fixed assets (for example, a factory building) or the early repayment of long-term debt. Extraordinary—that is, unusual and infrequent—developments are also specifically disclosed.

C Other Financial Statements

The profit and loss statement excludes the amount of assets withdrawn by the owners; in a corporation, such withdrawals are called dividends. A separate activity-oriented statement, the statement of retained earnings, discloses income and its distribution to owners.

A third important activity-oriented financial statement is the cash-flow statement. This statement provides information not otherwise available in either a profit and loss statement or a balance sheet; it presents the sources and the uses of the enterprise’s funds by operating activities, investing activities, and financing activities. The statement identifies the cash generated or used by operations; the cash exchanged to buy and sell plant and equipment; the cash proceeds from issuing shares and long-term borrowings; and the cash used to pay dividends, to purchase the company’s outstanding shares of its own stock, and to pay off debts.

D Bookkeeping and Accounting Cycle

Modern accounting entails a seven-step accounting cycle. The first three steps fall under the bookkeeping function—that is, the systematic compiling and recording of financial transactions. Business documents provide the bookkeeping input; such documents include invoices, payroll records, bank cheques, and records of bank deposits. Special journals are used to record recurring transactions; these include a sales journal, a purchases journal, a cash receipts journal, and a cash disbursements journal. Transactions that cannot be accommodated by a special journal are recorded in the general journal. (A ledger, by contrast, is a book having one page for each account in the organization’s financial structure. The page for each account shows its debits on the left side and its credits on the right side so that the balance—that is, the net credit or debit—of each account can be determined.) In many modern offices, these records are held in computer files that follow the traditional journal and ledger structure.

D1 Step One

Recording a transaction in a journal marks the starting point for the double-entry bookkeeping system. In this system, the financial structure of an organization is analysed as consisting of many interrelated aspects, each of which is called an account (for example, the wages payable account). Every transaction is identified in two aspects or dimensions, referred to as its debit (or left side) and credit (or right side) aspects, and each of these two aspects has its own effect on the financial structure. Depending on their nature, certain accounts are increased with debits and decreased with credits; other accounts are increased with credits and decreased with debits. For example, the purchase of stock for cash increases the stock account (a debit) and decreases the cash account (a credit). If the stock is purchased on the promise of future payment, a liability would be created, and the journal entry would record an increase in the stock account (a debit) and an increase in the liability account (a credit). Recognition of wages earned by employees entails recording an increase in the wage expense account (a debit) and an increase in the liability account (a credit). The subsequent payment of the wages would be a decrease in the cash account (a credit) and a decrease in the liability account (a debit).
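The debit-and-credit mechanics described above can be sketched in a few lines; the account names and the `post` helper are illustrative, not a real bookkeeping API:

```python
# Minimal double-entry sketch. Every transaction is recorded twice: a debit
# to one account and an equal credit to another, so total debits always
# equal total credits. Convention: positive = net debit, negative = net credit.

from collections import defaultdict

balances = defaultdict(int)

def post(debit_account: str, credit_account: str, amount: int) -> None:
    balances[debit_account] += amount    # debit side
    balances[credit_account] -= amount   # credit side

post("stock", "cash", 500)                   # buy stock for cash
post("wage expense", "wages payable", 200)   # recognize wages earned
post("wages payable", "cash", 200)           # later pay the wages

# The books stay in balance: all debits and credits net to zero.
assert sum(balances.values()) == 0
print(balances["cash"])   # -700 (net credit: cash decreased by 700)
```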

D2 Step Two

In the next step in the accounting cycle, the amounts that appear in the various journals are transferred to the organization’s general ledger—a procedure called posting.

In addition to the general ledger, subsidiary ledgers—usually a sales ledger and a purchase ledger—are used to provide information in greater detail about the accounts in the general ledger. For example, the general ledger contains one account showing the entire amount owed to the enterprise by all its customers; the sales ledger breaks this amount down on a customer-by-customer basis, with a separate account for each customer. Subsidiary accounts may also be kept for the wages paid to each employee, for each building or machine owned by the company, and for amounts owed to each of the enterprise’s creditors.

D3 Step Three

Posting data to the ledgers is followed by listing the balances of all the accounts and calculating whether the sum of all the debit balances agrees with the sum of all the credit balances (because every transaction has been listed once as a debit and once as a credit). This process is called producing a trial balance. This procedure and those that follow it take place at the end of the financial period, normally each calendar month. Once the trial balance has been successfully prepared, the bookkeeping portion of the accounting cycle is concluded.

D4 Step Four

Once bookkeeping procedures have been completed, the accountant prepares certain adjustments to recognize events that, although they did not occur in conventional form, are in substance already completed transactions. The following are the most common circumstances that require adjustments: accrued revenue (for example, interest earned but not yet received); accrued expenses (wage costs incurred but not yet paid); unearned revenue (earning subscription revenue that had been collected in advance); prepaid expenses (for example, expiration of a prepaid insurance premium); depreciation (recognizing the cost of a machine as expense spread over its useful economic life); stock movements (recording the cost of goods sold on the basis of a period’s purchases and the change in the value of stocks between beginning and end of the financial period); and receivables (recognizing bad-debt expenses on the basis of expected uncollected amounts).
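One of these adjustments, depreciation, can be shown as a short sketch. The figures are hypothetical, and straight-line depreciation (equal expense in each year of the asset's useful life) is only one acceptable method:

```python
# Straight-line depreciation: the cost of a machine, less any expected
# salvage value, is recognized as expense evenly over its useful economic
# life rather than entirely in the year of purchase.

def straight_line_depreciation(cost: float, salvage: float, useful_life_years: int) -> float:
    """Annual depreciation expense under the straight-line method."""
    return (cost - salvage) / useful_life_years

annual = straight_line_depreciation(cost=10_000, salvage=1_000, useful_life_years=5)
print(annual)   # 1800.0 -- each year's adjusting entry debits depreciation
                # expense and credits accumulated depreciation by this amount
```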

D5 Steps Five and Six

Once the adjustments are calculated, the accountant prepares an adjusted trial balance—one that combines the original trial balance with the effects of the adjustments (step five). With the balances in all the accounts thus updated, financial statements are then prepared (step six). The balances in the accounts are the data that make up the organization’s financial statements.

D6 Step Seven

The final step is to close non-cumulative accounts. This procedure involves a series of bookkeeping debits and credits to transfer sums from income statement accounts into owners’ equity accounts. Such transfers reduce to zero the balances of non-cumulative accounts so that these accounts can receive new debit and credit amounts that relate to the activity of the next business period.
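The closing step can be sketched with a sign convention of positive for debit balances and negative for credit balances; the account names and figures are illustrative only:

```python
# Closing non-cumulative (income statement) accounts: each balance is
# transferred into an owners' equity account, leaving the temporary
# account at zero for the next period.

accounts = {
    "revenue": -900,            # credit balance
    "wage expense": 400,        # debit balance
    "retained earnings": -2_000,  # credit balance (owners' equity)
}

def close_to_equity(accounts: dict, temporary: list, equity: str) -> None:
    for name in temporary:
        accounts[equity] += accounts[name]  # transfer the balance...
        accounts[name] = 0                  # ...and reset the account to zero

close_to_equity(accounts, ["revenue", "wage expense"], "retained earnings")
print(accounts)  # revenue and wage expense are now zero;
                 # retained earnings has grown by the net income of 500
```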


Accounting has a well-defined body of knowledge and rather definitive procedures. Nevertheless, many countries (such as the United States and the United Kingdom) have accounting standards boards that continue to refine existing techniques and develop new approaches. Such activity is needed in part because of innovative business practices, newly enacted laws, and socio-economic changes. Better insights, new concepts, and enhanced perceptions have also influenced the development of accounting theory and practices. However, despite considerable efforts to create internationally agreed accounting standards, there still exist important differences in the way accounting information is produced in different countries. These differences often make international comparisons of accounting information extremely hazardous.




Capital, the collective term for a body of goods and monies from which future income can be derived. Generally, consumer goods and monies spent for present needs and personal enjoyment are not included in the definition or economic theory of capital. Thus, a business regards its land, buildings, equipment, inventory, and raw materials, as well as stocks, bonds, and bank balances available, as capital. Homes, furnishings, cars and other goods that are consumed for personal enjoyment (or the money set aside for purchasing such goods) are not considered capital in the traditional sense.

In the more precise usage of accounting, capital is defined as the stock of property owned by an individual or corporation at a given time, as distinguished from the income derived from that property during a given period. A business firm accordingly has a capital account (frequently called a balance sheet), which reports the assets of the firm at a specified time, and an income account, which reckons the flow of goods and of claims against goods during a specified period.

Among the 19th-century economists, the term capital designated only that segment of business wealth that was the product of past industry. The wealth that is not produced, such as land or ore deposits, was excluded from the definition. Income from capital (so defined) was called profit, or interest, whereas the income from natural resources was called rent. Contemporary economists, for whom capital means simply the aggregate of goods and monies used to produce more goods and monies, no longer make this distinction.

The forms of capital can be distinguished in various ways. One common distinction is between fixed and circulating capital. Fixed capital includes all the more or less durable means of production, such as land, buildings, and machinery. Circulating capital refers to nonrenewable goods, such as raw materials and fuel, and the funds required to pay wages and other claims against the enterprise.

Frequently, a business will categorize all of its assets that can be converted readily into cash, such as finished goods or stocks and bonds, as liquid capital. By contrast, all assets that cannot be easily converted to cash, such as buildings and equipment, are considered frozen capital.

Another important distinction is between productive capital and financial capital. Machines, raw materials, and other physical goods constitute productive capital. Claims against these goods, such as corporate securities and accounts receivable, are financial capital. Liquidation of productive capital reduces productive capacity, but the liquidation of financial capital merely changes the distribution of income.


The 18th-century French economists known as physiocrats were the first to develop a system of economics. Their work was developed by Adam Smith and emerged as the classical theory of capital after further refinements by David Ricardo in the early 19th century. According to the classical theory, capital is a store of values created by labour. Part of capital consists of consumers’ goods used to sustain the workers engaged in producing items for future consumption. The other part consists of producers’ goods channelled into further production for the sake of expected future returns. The use of capital goods raises labour productivity, making it possible to create a surplus above the requirements for sustaining the labour force. This surplus constitutes the interest or profit paid to capital. Interest and profits become additions to capital when they are ploughed back into production.

Karl Marx and other socialist writers accepted the classic view of capital with one major qualification. They regarded as capital only the productive goods that yield income independently of the exertions of the owner. An artisan’s tools and a small farmer’s land holding are not capital in this sense. The socialists held that capital comes into being as a determining force in society when a small body of people, the capitalists, owns most of the means of production and a much larger body, the workers, receives no more than bare subsistence as reward for operating the means of production for the benefit of the owners.

In the mid-19th century the British economists Nassau William Senior and John Stuart Mill, among others, became dissatisfied with the classical theory, especially because it lent itself so readily to socialist purposes. To replace it, they advanced a psychological theory of capital based on a systematic inquiry into the motives for frugality or abstinence. Starting with the assumption that satisfactions from present consumption are psychologically preferable to delayed satisfactions, they argued that capital originates in abstinence from consumption by people hopeful of a future return to reward their abstinence. Because such people are willing to forgo present consumption, productive power can be diverted from making consumers’ goods to making the means of further production; consequently, the productive capacity of the nation is enlarged. Therefore, just as physical labour justifies wages, abstinence justifies interest and profit.

Inasmuch as the abstinence theory rested on subjective considerations, it did not provide an adequate basis for objective economic analysis. It could not explain, in particular, why a rate of interest or profit should be what it actually was at any given time.

To remedy the deficiencies of the abstinence theory, the Austrian economist Eugen Böhm-Bawerk, the British economist Alfred Marshall, and others attempted to fuse that theory with the classical theory of capital. They agreed with the abstinence theorists that the prospect of future returns motivates individuals to abstain from consumption and to use part of their income to promote production, but they added, in line with classical theory, that the amount of the returns depends on the gains in productivity resulting from accretions of capital to the productive process. Accretions of capital make production more roundabout, thus causing greater delays before returns are realized. The amount of income saved, and therefore the amount of capital formed, would accordingly depend, it was held, on the balance struck between the desire for present satisfaction from consumption and the desire for the future gains expected from a more roundabout production process. The American economist Irving Fisher was among those who contributed to refining this eclectic theory of capital.

John Maynard Keynes rejected this theory because it failed to explain the discrepancy between money saved and capital formed. Although according to the eclectic theory and, indeed, all previous theories of capital, savings should always equal investments, Keynes showed that the decision to invest in capital goods is quite separate from the decision to save. If investment appears unpromising of profit, saving still may continue at about the same rate, but a strong “liquidity preference” will appear that will cause individuals, business firms, and banks to hoard their savings instead of investing them. The prevalence of a liquidity preference causes unemployment of capital, which, in turn, results in unemployment of labour.


Although theories of capital are of relatively recent origin, capital itself has existed in civilized communities since antiquity. In the ancient empires of the Middle and the Far East and to a larger degree in the Graeco-Roman world, a considerable amount of capital, in the form of simple tools and equipment, was employed to produce textiles, pottery, glassware, metal objects, and many other products that were sold in international markets. The decline of trade in the West after the fall of the Roman Empire led to less specialization in the division of labour and a reduced use of capital in production. Medieval economies engaged almost wholly in subsistence agriculture and were therefore essentially non-capitalist. Trade began to revive in the West during the time of the Crusades. The revival was accelerated worldwide throughout the period of exploration and colonization that began late in the 15th century. Expanding trade fostered the greater division of labour and mechanization of production and therefore a growth of capital. The flow of gold and silver from the New World facilitated the transfer and accumulation of capital, laying the groundwork for the Industrial Revolution. With the Industrial Revolution, production became increasingly roundabout and dependent on the use of large amounts of capital. The role of capital in the economies of Western Europe and North America was so crucial that the socio-economic organization prevailing in these areas from the 18th century through the first half of the 20th century became known as the capitalist system or capitalism.

In the early stages of the evolution of capitalism, investments in plant and equipment were relatively small, and merchant, or circulating, capital—that is, goods in transit—was the preponderant form of capital. As industry developed, however, industrial, or fixed, capital—for example, capital frozen in mills, factories, railways, and other industrial and transport facilities—became dominant. Late in the 19th and early in the 20th centuries, financial capital in the form of claims to the ownership of capital goods of all sorts became increasingly important. By creating, acquiring, and controlling such claims, financiers and bankers exercised great influence on production and distribution. After the Great Depression of the 1930s, financial control of most capitalist economies was superseded in part by state control. A large segment of the national income of the United States, Great Britain, and various other countries flows through government, which as the public sector exerts a great influence in regulating that flow, thereby determining the amounts and kinds of capital formed.



Interest, payment made for the use of another person’s money; in economics, it is regarded more specifically as a payment made for capital. Economists also consider interest as the reward for thrift; that is, payment offered to people to encourage them to save and to make their savings available to others.

Interest paid only on the principal, that is, on the sum of money loaned, is called simple interest. Interest paid not only on the principal but also on the cumulative total of past interest payments is called compound interest. The rate of interest is expressed as a percentage of the principal paid for its use for a given time, usually a year. The current, or market, rate of interest is determined primarily by the relation between the supply of money and the demands of borrowers. When the supply of money available for investment increases faster than the requirements of borrowers, interest rates tend to fall. Conversely, interest rates generally rise when the demand for investment funds grows faster than the available supply of funds to meet those demands. Business executives will not borrow money at an interest rate that exceeds the return they expect the use of the money to yield.
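The simple/compound distinction can be illustrated with a short sketch; the principal, rate, and term are hypothetical figures:

```python
# Simple interest is charged on the principal alone; compound interest is
# charged on the principal plus the cumulative total of past interest.

def simple_interest(principal: float, rate: float, years: int) -> float:
    """Interest paid only on the principal."""
    return principal * rate * years

def compound_interest(principal: float, rate: float, years: int) -> float:
    """Interest on the principal plus past interest, compounded annually."""
    return principal * (1 + rate) ** years - principal

p, r, t = 1_000.0, 0.05, 3
print(simple_interest(p, r, t))    # 150.0
print(compound_interest(p, r, t))  # roughly 157.63 -- the extra 7.63 is
                                   # interest earned on earlier interest
```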

In medieval Christendom and before, the payment and receiving of interest were questioned on moral grounds, as usury was considered a sin. The position of the Christian Church, as defined by St Thomas Aquinas, condoned interest on loans for business purposes, because the money was used to produce new wealth, but adjudged it sinful to pay or receive interest on loans made for the purchase of consumer goods. Under modern capitalism, the payment of interest for all types of loans is considered proper and even desirable because interest charges serve as a means to allocate the limited funds available for loan to projects in which they will be most profitable and most productive. Islamic Shari’ah law, however, still regards interest as, strictly speaking, sinful, and in some Islamic countries, legal provisions are made to replace interest with other rewards for thrift or investment such as shares in profits.



Loan, in finance, the lending of a sum of money; in common usage, the lending of any piece of property. A loan may be secured by a charge on the borrower’s property (as a house purchase mortgage is) or be unsecured. There will also be a number of conditions attached to the loan: for example, when it is to be repaid and the rate of interest to be charged on the sum loaned. Almost any person or organization can make or receive a loan, but there are restrictions on some types of loan; for example, those made by a company to one of its directors.

Loans can take many forms. Many businesses are financed by long-term loan capital, such as loan stock or debentures. Governments also finance their borrowing requirements by issuing long-term fixed-interest bonds, which in the United Kingdom are known as gilt-edged stock (or gilts). These loans will usually have a fixed repayment (or redemption or maturity) date and will earn the lender (the owner of the stock, debenture, or bond) a fixed rate of interest until that date. In the meantime, the price at which the stock can be traded on a stock exchange will depend on a number of things, including how the interest rate on the stock compares with the current rate available on other loan stock. For example, if interest rates have gone down, the price of the loan stock should go up, because the stock is now earning a higher rate of interest than would be earned on its original value at the current market rate. But the market price of a bond will also depend on its maturity date and its quality.

In the United States, bonds issued by companies whose credit ratings are below investment grade are known as junk bonds; these pay a higher rate of interest than “non-junk” bonds, but their market value will take into account the deemed higher risk of the bond issuer defaulting on interest payments or on redeeming the bonds at their full redemption value.

A company’s loan capital is normally recorded on its balance sheet, at the repayment amount, as long-term liabilities. One of the factors by which investors judge a company, and by which lenders decide whether to lend it money, is the ratio of its debt to its equity. This is known as gearing or leverage; the higher the proportion of loan finance to equity, the higher the gearing or leverage. Another ratio people look at when evaluating a company is the proportion of its profits being used to pay the interest on its loan finance.
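The inverse relation between market rates and loan-stock prices can be sketched as a present-value calculation. This is a simplified annual-coupon model with hypothetical figures, ignoring factors such as credit quality and trading costs:

```python
# A fixed-interest bond's price is the present value of its fixed coupons
# and its redemption amount, discounted at the current market rate. When
# market rates fall, those fixed payments are worth more, so the price rises.

def bond_price(face: float, coupon_rate: float, market_rate: float, years: int) -> float:
    coupons = sum(face * coupon_rate / (1 + market_rate) ** t
                  for t in range(1, years + 1))
    redemption = face / (1 + market_rate) ** years
    return coupons + redemption

# A £100 bond paying a fixed 6% for 10 years:
print(round(bond_price(100, 0.06, 0.06, 10), 2))  # 100.0: at par when the market rate matches
print(round(bond_price(100, 0.06, 0.04, 10), 2))  # above par after rates fall
print(round(bond_price(100, 0.06, 0.08, 10), 2))  # below par after rates rise
```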

The interest rate payable on loans is usually determined by market forces at the time the loan is taken out. However, governments may give soft loans (loans on more favourable terms than can be obtained in the market) to businesses they wish to support or encourage. The International Development Association, part of the World Bank, is specifically concerned with organizing loans to developing countries on soft terms.


International Bank for Reconstruction and Development


International Bank for Reconstruction and Development, also known as the World Bank, specialized United Nations agency established at the Bretton Woods Conference in 1944. A related institution, the International Monetary Fund (IMF), was created at the same time. The chief objectives of the bank, as stated in the articles of agreement, are “to assist in the reconstruction and development of territories of members by facilitating the investment of capital for productive purposes [and] to promote private foreign investment by means of guarantees or participation in loans [and] to supplement private investment by providing, under suitable conditions, finance for productive purposes out of its own capital …”.

The bank grants loans only to member nations, for the purpose of financing specific projects (at the start of the 21st century it had 183 members and operated in 100 countries). Before a nation can secure a loan, advisers and experts representing the bank must determine that the prospective borrower can meet conditions stipulated by the bank. Most of these conditions are designed to ensure that loans will be used productively and that they will be repaid. The bank requires that the borrower is unable to secure a loan for the particular project from any other source on reasonable terms and that the prospective project is technically feasible and economically sound. To ensure repayment, member governments must guarantee loans made to private concerns within their territories. After the loan has been made, the bank requires periodic reports both from the borrower and from its own observers on the use of the loan and on the progress of the project.

In the early period of the World Bank’s existence, loans were granted chiefly to European countries and were used for the reconstruction of industries damaged or destroyed during World War II. Since the late 1960s, however, most loans have been granted to economically developing countries in Africa, Asia, and Latin America. The bank has given particular attention to projects that can directly benefit the poorest people in developing nations by helping them to raise their productivity and to gain access to such necessities as safe water and waste-disposal facilities, health care, family planning assistance, nutrition, education, and housing. Direct involvement of the poorest people in economic activity has been promoted by providing loans for agriculture and rural development, small-scale enterprises, and urban development. The bank has also expanded its assistance to energy development and ecological concerns.


World Bank funds are provided primarily by subscriptions to, or purchase of, capital shares. The minimum number of shares that a member nation must purchase varies according to the relative strength of its national economy. Not all the funds subscribed are immediately available to the bank; only about 8.5 percent of the capital subscription of each member nation actually is paid into the bank. The remainder is to be deposited only if, and to the extent that, the bank calls for the money in order to pay its own obligations to creditors. There has never been a need to call in capital. The bank’s working funds are derived from sales of its interest-bearing bonds and notes in capital markets of the world, from the repayment of earlier loans, and from profits on its own operations. It has earned profits every year since 1947.
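The split described above can be illustrated with a short calculation. This is a hedged sketch using a hypothetical subscription figure; only the roughly 8.5 per cent paid-in fraction comes from the text, and the function name is invented for illustration.

```python
# Illustrative sketch of a capital subscription split (hypothetical figures).
# Only ~8.5% of each member's subscription is actually paid into the bank;
# the remainder is "callable" -- due only if the bank calls for it to meet
# its own obligations to creditors (which, per the text, has never happened).

PAID_IN_FRACTION = 0.085  # approximate paid-in share stated in the article

def split_subscription(subscription: float) -> tuple[float, float]:
    """Return the (paid-in, callable) portions of a capital subscription."""
    paid_in = subscription * PAID_IN_FRACTION
    callable_portion = subscription - paid_in
    return paid_in, callable_portion

# A hypothetical $1 billion subscription:
paid, on_call = split_subscription(1_000_000_000)
print(f"Paid in:  ${paid:,.0f}")
print(f"Callable: ${on_call:,.0f}")
```

On these assumed figures, only $85 million changes hands up front; the remaining $915 million stands behind the bank's bonds as a guarantee, which is why the bank can raise its working funds cheaply in capital markets.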

All powers of the bank are vested in a board of governors, comprising one governor appointed by each member nation. The board meets at least once annually. The governors delegate most of their powers to 24 executive directors, who meet regularly at the central headquarters of the bank in Washington, D.C. Five of the executive directors are appointed by the five member states that hold the largest number of capital shares in the bank. The remaining 19 directors are elected by the governors from the other member nations and serve two-year terms. The executive directors are headed by the president of the World Bank, whom they elect for a five-year term, and who must be neither a governor nor a director.


The bank has two affiliates: the International Finance Corporation (IFC), established in 1956; and the International Development Association (IDA), established in 1960. Membership in the bank is a prerequisite for membership in either the IFC or the IDA. All three institutions share the same president and boards of governors and executive directors.

IDA is the bank’s concessionary lending affiliate, designed to provide development finance for those countries that do not qualify for loans at market-based interest rates. IDA soft loans, or “credits”, are longer term than those of the bank and bear no interest; only an annual service charge of 0.75 per cent is made. The IDA depends for its funds on subscriptions from its most prosperous members and on transfers of income from the bank.
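The concessionary nature of an IDA credit can be made concrete with a simple annual-cost comparison. This is a hypothetical sketch: only the 0.75 per cent service charge comes from the text, while the principal and the 6 per cent market rate are assumed purely for illustration.

```python
# Hypothetical comparison: annual cost of an IDA "credit" (no interest,
# only a 0.75% service charge) versus a market-rate loan on the same
# principal. The principal and the 6% market rate are assumed figures.

IDA_SERVICE_CHARGE = 0.0075  # 0.75 per cent annual service charge (from text)

def annual_cost(principal: float, rate: float) -> float:
    """Flat annual financing cost at the given rate (interest-only view)."""
    return principal * rate

principal = 100_000_000  # a hypothetical $100 million credit
ida_cost = annual_cost(principal, IDA_SERVICE_CHARGE)
market_cost = annual_cost(principal, 0.06)  # assumed market interest rate

print(f"IDA service charge per year:   ${ida_cost:,.0f}")
print(f"Market-rate interest per year: ${market_cost:,.0f}")
```

Under these assumptions the borrower pays $750,000 a year instead of $6 million, before even counting the longer repayment term, which is the sense in which IDA credits are "soft".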

All three institutions are legally and financially separate, but the bank and IDA share the same staff; IFC has its own operating and legal staff but uses administrative and other services of the bank. Membership in the International Monetary Fund is a prerequisite for membership in the World Bank and its affiliates.


The World Bank has been heavily criticized in recent years for its poor performance in development economics, especially with regard to the social and environmental consequences of the projects it has supported in developing countries. The bank itself has acknowledged serious failings. Resulting reforms were embodied in the Strategic Compact of 1997, which decentralized the bank’s operations. However, it is arguable that the bank is less at fault than many of the corrupt or incompetent regimes whose schemes it is called on to fund. The bank’s role in development has in any case diminished with the vast influx of private capital into profitable projects in developing countries. Health, education, and other fields unlikely to yield profits remain in need of an institution such as the World Bank.


Reconstruction Finance Corporation

Reconstruction Finance Corporation (RFC), an independent agency of the United States government, created during the Great Depression by congressional enactment in 1932 and abolished by Congress in June 1957. The stated purpose of the RFC was “to provide emergency financing facilities for financial institutions; to aid in financing agriculture, commerce, and industry; to purchase preferred stock, capital notes, or debentures of banks and trust companies; and to make loans and allocations of its funds as prescribed by law”. These purposes were subsequently enlarged by legislative amendment to include participation in the maintenance of the economic stability of the country through the promotion of maximum production and employment and the encouragement of small business enterprises. The basic activities of the RFC were to make and collect loans and to buy and sell securities. Originally, the capital stock of the corporation was fixed at $500 million.

For seven years following its creation, the RFC was classified as an emergency agency. In 1939 it was grouped with other agencies to constitute the Federal Loan Agency. It was transferred to the Department of Commerce in 1942 and reverted to the Federal Loan Agency three years later. When that agency was abolished in 1947, its functions were assumed by the RFC.

Approximately two-thirds of the disbursements of the RFC were made in connection with the national defence of the United States, especially during World War II. Loans were also made by the RFC to federal agencies and to state and local governments in connection with the relief of the unemployed and the relief of victims of disasters such as floods and earthquakes. Disbursements to private enterprises included loans to banks and trust companies to aid in their establishment, reorganization, or liquidation, and to mortgage loan companies, building and loan associations, and insurance companies. Loans were also made to agricultural financing institutions, to enterprises engaged in financing the export of agricultural surpluses, and to railways, mines, mills, and other industrial enterprises. Hundreds of millions of dollars were disbursed by the RFC for the purchase of securities offered by the Public Works Administration, other government agencies, and private corporations.

In 1948, after the financial crisis of the depression and World War II had passed, Congress reduced the capital stock of the RFC to $100 million and provided for the retirement of the outstanding capital stock in excess of that amount. It also authorized the RFC to issue its own notes, debentures, bonds, or other similar obligations to the Treasury in order to borrow money with which to carry on its functions.

During 1951 and 1952 congressional investigators found considerable evidence of fraud and corruption among RFC officials. In July 1953, Congress enacted the RFC Liquidation Act, providing for the gradual transfer of the functions of the RFC to other government agencies. The RFC loan powers were transferred in 1954 to the Small Business Administration. The RFC was abolished in June 1957, and its remaining functions were transferred to the Housing and Home Finance Agency, the General Services Administration, and the Department of the Treasury. During its existence from 1932 to 1957, the RFC disbursed more than $50 billion in loans.
