Nuclear Power


Nuclear Power, electrical power produced from the energy released by controlled fission or fusion of atomic nuclei. In these reactions a small amount of mass is converted into energy, and the energy released greatly exceeds that from chemical processes such as combustion.

The first experimental nuclear reactor was constructed in 1942 amid tight wartime secrecy in Chicago, Illinois, in the United States. A prototype reactor was demonstrated at Oak Ridge, Tennessee, in 1943, and by 1945 three full-scale reactors were in operation at Hanford, in Washington State. These were dedicated to plutonium production for nuclear weapons; however, the first large-scale commercial reactor generating electrical power was started up in 1956 at Calder Hall, United Kingdom.

Nuclear power is now a well-established source of electricity worldwide. The most common types of reactor are light water reactors, mostly pressurized water reactors (PWRs) together with boiling water reactors (BWRs); gas-cooled and heavy water reactors make up the rest. Worldwide, about 430 reactors operating in 25 countries currently provide about 17 per cent of the world’s electricity. Nuclear reactors are also used for propulsion of submarines and ships, and there are a number of prototype and experimental reactors around the world. At present, only a few experimental fusion reactors exist, none of which produces usable amounts of electrical power.

Few nuclear power stations are under construction at present, and some have been cancelled when partly built. This is mainly because of long-term resistance from the environmental movement (particularly since the Chernobyl disaster of 1986), but also because nuclear power stations are currently not competitive with natural gas- and coal-fired stations. It is uncertain whether nuclear power generation will increase or decrease worldwide over the next 50 years. However, the very low carbon dioxide emissions of nuclear power stations compared with coal-, gas-, or oil-fired units mean that the need to control climate change could drive a future expansion of nuclear power.

More than 40 million kilowatt-hours (kWh) of electricity can generally be produced from one tonne of natural uranium. Over 16,000 tonnes of coal or 80,000 barrels of oil would need to be burned to make the same amount of electricity. Moreover, the amount of carbon dioxide produced in generating one kWh of electricity would be 1 kg for coal, 0.5 kg for gas, and only 10 grams for nuclear power.
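The figures above can be cross-checked with a short sketch that uses only the numbers quoted in the text (real-world values vary with plant efficiency and fuel grade):

```python
# Rough cross-check of the energy and CO2 figures quoted above.
KWH_PER_TONNE_URANIUM = 40e6      # kWh of electricity per tonne of natural uranium
COAL_TONNES_EQUIVALENT = 16_000   # tonnes of coal burned for the same output
CO2_KG_PER_KWH = {"coal": 1.0, "gas": 0.5, "nuclear": 0.010}

# Electricity per tonne of coal implied by the comparison:
kwh_per_tonne_coal = KWH_PER_TONNE_URANIUM / COAL_TONNES_EQUIVALENT

# CO2 emitted (in tonnes) while generating 40 million kWh by each route:
co2_tonnes = {fuel: KWH_PER_TONNE_URANIUM * kg / 1000
              for fuel, kg in CO2_KG_PER_KWH.items()}

print(f"{kwh_per_tonne_coal:.0f} kWh per tonne of coal")
print(co2_tonnes)
```

The implied 2,500 kWh per tonne of coal and the roughly hundredfold gap between coal and nuclear carbon emissions are consistent with the quoted figures.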

Other than economic factors, the main issues limiting the expansion of nuclear power are the disposal of radioactive waste (including waste left over from decommissioning of old facilities), radioactivity in liquid effluent and gaseous discharges, security concerns over stockpiled plutonium, and the historical connection with nuclear weapons. Availability of nuclear fuel is unlikely to limit nuclear power production in the foreseeable future.


Nuclear power plants generate electricity from fission, usually of uranium-235 (U-235), the nucleus of which has 92 protons and 143 neutrons. When it absorbs an extra neutron, the nucleus becomes unstable and splits into smaller pieces (“fission products”) and more neutrons. The fission products and neutrons have a smaller total mass than the U-235 nucleus and the first neutron; the mass difference has been converted into energy, mostly in the form of heat. This heat is used to raise steam, which drives a turbine generator to produce electricity.
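The mass-to-energy arithmetic can be illustrated with a short sketch. The figure of roughly 200 MeV released per fission is a commonly quoted value and is an assumption here, not a number taken from the text:

```python
# Hedged illustration of E = mc^2 in fission, assuming ~200 MeV per U-235 fission.
AVOGADRO = 6.022e23          # atoms per mole
MEV_TO_JOULE = 1.602e-13     # joules per MeV
C = 2.998e8                  # speed of light, m/s

energy_per_fission_j = 200 * MEV_TO_JOULE        # ~3.2e-11 J per fission
mass_defect_kg = energy_per_fission_j / C**2     # mass converted, per fission

# Heat released by fissioning one kilogram of U-235:
fissions_per_kg = 1000 / 235 * AVOGADRO
energy_per_kg_j = fissions_per_kg * energy_per_fission_j
energy_per_kg_kwh = energy_per_kg_j / 3.6e6      # joules -> kilowatt-hours

print(f"mass converted per fission: {mass_defect_kg:.2e} kg")
print(f"heat from 1 kg of U-235:   {energy_per_kg_kwh:.2e} kWh")
```

Under this assumption, only about 0.1 per cent of the fuel mass is converted, yet one kilogram of fissioned U-235 yields tens of millions of kilowatt-hours of heat.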

Natural uranium is a mixture of two isotopes, fissionable U-235 (0.7 per cent) and non-fissionable U-238. However, U-238 can absorb neutrons to form plutonium-239 (Pu-239), which is fissionable, and up to half the energy produced by a reactor can, in fact, come from fission of Pu-239. Some types of reactor require the amount of U-235 to be increased above the natural level, a process called enrichment. Pressurized water reactors (PWRs), the most common type of reactor, require fuel enriched to about 3 per cent U-235.

Reactor fuel is made up of fuel pellets or pins enclosed in a tubular cladding of steel, zircaloy, or aluminium. Several of these fuel rods make up each fuel assembly. The fast neutrons released in the fission reaction need to be slowed down before they will induce further fissions and give a sustained chain reaction. This is done by a moderator, usually water or graphite, which surrounds the fuel in the reactor. However, in “fast reactors” there is no moderator and the fast neutrons sustain the fission reaction.

A coolant is circulated through the reactor to remove heat from the fuel. Ordinary water (which is usually also the moderator) is most commonly used but heavy water (deuterium oxide), air, carbon dioxide, helium, liquid sodium, liquid sodium-potassium alloy, molten salts, or hydrocarbon liquids may be used in different types of reactor.

The chain reaction is controlled by using neutron absorbers such as boron, either by moving boron-containing control rods in and out of the reactor core or by varying the boron concentration in the cooling water. These absorbers can also be used to shut down the reactor. The power level of the reactor is monitored by temperature, flow, and radiation instruments, and the readings are used to determine control settings so that the chain reaction is just self-sustaining.
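The idea of a “just self-sustaining” reaction can be caricatured with a toy model (not real reactor kinetics, which involve delayed neutrons and feedback effects): each generation, the neutron population is simply multiplied by an effective factor k set by the absorbers.

```python
# Toy chain-reaction model: population scales by k each neutron generation.
def neutron_population(k: float, generations: int, start: float = 1e6) -> float:
    """Neutron count after a number of generations with multiplication factor k."""
    population = start
    for _ in range(generations):
        population *= k
    return population

subcritical   = neutron_population(0.99, 500)  # k < 1: reaction dies away
critical      = neutron_population(1.00, 500)  # k = 1: just self-sustaining
supercritical = neutron_population(1.01, 500)  # k > 1: power rises
print(subcritical, critical, supercritical)
```

Control therefore amounts to nudging k to exactly 1 for steady power, slightly above it to raise power, and well below it to shut down.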

The main components of a nuclear reactor are the pressure vessel (containing the core); the fuel rods, moderator, and primary cooling system (making up the core); the control system; and the containment building. This last element is required in the event of an accident, to prevent any radioactive material being released to the environment, and is usually cylindrical with a hemispherical dome on top.

During operation, and also after it is shut down, a nuclear reactor will contain a very large amount of radioactive material. The radiation emitted by this material is absorbed in thick concrete shields surrounding the reactor core and primary cooling system. An important safety feature is the emergency core cooling system, which will prevent overheating and “meltdown” of the reactor core if the primary cooling system fails. See also Nuclear Fission.


Radioactivity was discovered by Antoine Henri Becquerel in 1896, although it was not given that name until two years later, when Pierre and Marie Curie discovered the radioactive elements polonium and radium, which occur naturally with uranium. In 1932 the neutron was discovered by the British scientist James Chadwick. Enrico Fermi and his colleagues in Italy then discovered that bombarding uranium with neutrons slowed by means of paraffin produced at least four different radioactive products. In 1938 the German scientists Otto Hahn and Fritz Strassmann demonstrated that the uranium atom was actually being split. The Austrian-born Swedish physicist Lise Meitner, working with her nephew Otto Frisch, explained the process and named it nuclear fission.

In 1939, Fermi travelled to the United States to escape the Fascist regime in Italy and was followed by physicist Niels Bohr, who fled the German occupation of Denmark. Collaborating at Columbia University, they developed the concept of a chain reaction as a source of power. With the outbreak of World War II, concerns arose among refugee European physicists in France, the United Kingdom, and the United States that Nazi Germany might develop an atomic bomb. The focus of research then changed to military applications.

The Manhattan Project began in the United States in 1942, with the aim of developing nuclear weapons. That year, Fermi constructed the first experimental nuclear reactor at the University of Chicago. One year later, a prototype plutonium production reactor was demonstrated at Oak Ridge, and by 1945 three full-scale reactors were in operation at Hanford. The first nuclear bomb was tested near Alamogordo, New Mexico, in July 1945. Two bombs were then dropped on Japan in August, the first on Hiroshima and the second on Nagasaki.

With the end of World War II in 1945, the Cold War and the East-West arms race took over. The Union of Soviet Socialist Republics (USSR) mounted a crash development programme and soon began plutonium production. The United States continued with plutonium production and also developed different types of reactor, as did the USSR, United Kingdom, France, and Canada. Both sides developed a range of technologies that was also applicable to nuclear power generation. Reliable energy supplies were important to national recovery, and nuclear power was seen as an essential element of national power programmes.

The first purpose-built reactor for electrical power generation was started up in 1954 at Obninsk, near Moscow, in the USSR. In 1956 the first large-scale commercial reactor generating electrical power (as well as producing plutonium) began operating at Calder Hall, England. In the United States, three types of reactor were being developed for commercial use: the pressurized water reactor (PWR), the boiling water reactor (BWR), and the fast breeder reactor (FBR). In 1957 the first commercial power unit, a BWR, was started up in the United States.

There have been some major incidents in nuclear power plants. In 1957 a plutonium production reactor caught fire at Windscale (modern-day Sellafield) in Cumbria, England, spreading large amounts of radioactivity across Britain and northern Europe; it was the worst nuclear accident in the history of the UK. In 1979, in the worst nuclear accident in US history, a core meltdown occurred at the Three Mile Island power plant near Harrisburg, Pennsylvania. The worst nuclear accident to date occurred in 1986, when a runaway nuclear reaction at the Chernobyl power plant near Kiev, in the USSR (in modern-day Ukraine), led to a series of explosions that dispersed massive amounts of radioactive material across the northern hemisphere. In 1999 a “criticality incident” at the Tokai-Mura plant caused the worst nuclear accident in Japan’s history. (See also section on Nuclear Accidents.)

The number of nuclear reactors in the world grew steadily for several decades. By 1964 there were 14 reactors connected to electricity distribution systems worldwide. In 1970 there were 81; this number grew to 167 by 1975 and to 365 by 1985, reached 435 by 1995, and then fell slightly to 428 by 1999.


Most of the world’s reactors are located in nuclear power plants; the rest are research reactors or reactors used for propulsion of submarines and ships. Some designs can be refuelled while in operation; others need to be shut down to refuel. Several advanced reactor designs, which are simpler, more efficient, and inherently safer, are also under development.

There are two basic types of fission reactor: thermal reactors and fast reactors. In thermal reactors, the neutrons created in the fission reaction lose energy by colliding with the light atoms of the moderator until they are slow enough to sustain the chain reaction. In fast reactors, “fast” neutrons sustain the fission reaction and a moderator is not needed. Fast reactors require enriched fuel, but their fast neutrons can be used to convert U-238 into fissile material (plutonium), creating more nuclear fuel than the amount consumed. They can also be used to “burn” plutonium as a means of reducing the amount that is stockpiled.

For the purpose of electricity generation, there are five main categories of reactors, each comprising one or more types. Light Water Reactors include Pressurized Water Reactors (PWRs), together with the Russian VVER design, and Boiling Water Reactors (BWRs). Gas Cooled Reactors comprise Magnox reactors and Advanced Gas-Cooled Reactors (AGR), developed in the United Kingdom, as well as High-Temperature Gas-Cooled Reactors (HTGR). Pressurized Heavy Water Reactors include the CANDU reactor developed in Canada. Light Water Graphite Reactors comprise the RBMK reactors, developed in the USSR. Lastly, Fast Breeder Reactors include Liquid Metal Fast Breeder Reactors (LMFBR).

In the early 1950s, enriched uranium was available only in the United States and the USSR. For this reason, reactor development in the United Kingdom (Magnox), Canada (CANDU), and France was based on natural uranium fuel. The Soviet RBMK design, by contrast, used slightly enriched fuel.

In natural uranium reactors, ordinary water cannot be used as the moderator, because it absorbs too many neutrons. In the successful CANDU design, this was overcome by using heavy water (deuterium oxide) for the moderator and coolant. Nearly all reactors in the United Kingdom have used a graphite moderator and carbon dioxide as the coolant.

In the United Kingdom, the Magnox reactors of the 1960s were followed by the AGRs, which used enriched fuel and were able to operate at higher temperatures and with greater efficiency. The Steam Generating Heavy Water Reactor (SGHWR) design was intended as the next technological step but this policy was changed in favour of the more established PWR design, of which many were already in operation. However, only one PWR was subsequently constructed in the United Kingdom, at Sizewell. Nuclear power generates about 25 per cent of the country’s electricity.

French researchers abandoned the design they had initially developed and, when French-produced enriched uranium became available, embarked in the early 1970s on a nuclear power programme based entirely on PWRs. These now supply almost 80 per cent of France’s electricity.

Worldwide, 56 per cent of power reactors are PWRs, 22 per cent are BWRs, 6 per cent are pressurized heavy water reactors (mostly CANDUs), 3 per cent are AGRs, and 13 per cent are other types. Eighty-eight per cent are fuelled by enriched uranium oxide and the rest by natural uranium; a few light water reactors also use mixed oxide fuel (MOX), which contains plutonium as well as uranium. Light water is the coolant/moderator for 80 to 85 per cent of all reactors.

The most important factors to be considered for any type of nuclear reactor are safety; construction cost per kilowatt of generating capacity; cost per kilowatt-hour delivered (including fuel, operation, and downtime costs); operating lifetime; and decommissioning costs.

A Pressurized Water Reactor (PWR)

PWRs are normally fuelled with uranium oxide pellets in a zirconium cladding, although in recent years some mixed oxide fuel (MOX), which contains plutonium, has been used. The fuel is enriched to about 3 per cent U-235. The moderator is the ordinary water coolant, which is kept pressurized at about 150 bars to stop it boiling. It is pumped through the reactor core, where it is heated to about 325° C (about 620° F). The hot water is pumped through a steam generator, where, through heat exchangers, a secondary loop of water is heated and converted to steam. This steam drives one or more turbine generators, is condensed, and is pumped back to the steam generator. The secondary loop is isolated from the reactor core water and is therefore not radioactive. A third stream of water, from a lake, river, the sea, or a cooling tower, is used to condense the steam. A typical reactor pressure vessel is 15 m (49 ft) high and 5 m (16 ft) in diameter, with walls 25 cm (10 in) thick. The core contains about 90 tonnes of fuel.

The PWR was originally designed by the Westinghouse Bettis Atomic Power Laboratory for military ship applications, and then by the Westinghouse Nuclear Power Division for commercial applications. The Soviet-designed VVER (Vodo-Vodyanoi Energetichesky Reaktor) is similar to Western PWRs but has different steam generators and safety features.

B Boiling Water Reactor (BWR)

The BWR is simpler than the PWR but less efficient in its fuel use and has a lower power density. Like the PWR, it is fuelled by uranium oxide pellets in a zirconium cladding, but slightly less enriched. The moderator is the ordinary water coolant, which is kept at a lower pressure (70 bars) so that it boils within the core at about 300° C. The steam produced in the reactor pressure vessel is piped directly to the turbine generator, condensed, and then pumped back to the reactor. Although the steam is radioactive, there is no intermediate heat exchanger between the reactor and the turbine, which improves efficiency. As in the PWR, the condenser cooling water has a separate source, such as a lake or river.

The BWR was originally designed by Allis-Chalmers and General Electric (GE) of the United States. The GE design has survived, and other versions are available from ASEA-Atom, Kraftwerk Union, and Hitachi.

C Gas-Cooled Reactors

Magnox reactors take their name from the magnesium-based alloy used as cladding for the natural uranium metal fuel. The moderator is graphite and the carbon dioxide coolant is circulated through the core at a pressure of about 27 bars, exiting at about 360° C. The heat is transferred to the secondary water loop, in which steam is raised to drive the turbine generators. Early units had a steel pressure vessel with the steam generators outside the containment. Later versions had a concrete pressure vessel containing the core and the steam generators. Magnox reactors are a British design but were also built in Tokai-Mura (Japan) and Latina (Italy).

The Advanced Gas-Cooled Reactor (AGR) is a development of the Magnox design using uranium oxide fuel enriched to 2-3 percent U-235 and clad in stainless steel or zircaloy. The moderator is graphite and the carbon dioxide coolant circulates at about 40 bars, exiting the core at 640° C. The heat is transferred to the secondary water loop, in which steam is raised to drive the turbine generators. A concrete pressure vessel is used, with walls about 6 m (20 ft) thick. AGRs are unique to the UK.

High-Temperature Gas-Cooled Reactors (HTGRs) are largely experimental. The fuel elements are spheres made from a mixture of graphite and nuclear fuel. The German version has the fuel loaded in a silo, while the US version loads it into hexagonal graphite prisms. The coolant is helium, pressurized to about 100 bars and circulated through the interstices between the spheres or through holes in the graphite prisms. An example of this type is described in the Advanced Reactors section later in this article.

D Pressurized Heavy Water Reactor

The most widely used reactor of this type is the Canadian CANDU (Canadian Deuterium Uranium Reactor). The moderator and coolant are heavy water (deuterium oxide) and the fuel consists of natural uranium oxide pellets in zircaloy tubes. These are contained in pressure tubes mounted horizontally through a tank of heavy water called the “calandria”, which acts as the moderator. This feature avoids the need for a pressure vessel and facilitates on-load refuelling. The heavy water coolant is pumped through the pressure tubes at 110 bar and exits at about 320° C. The heat is transferred to the secondary water loop, in which steam is raised to drive the turbine generators.

The CANDU was designed by Atomic Energy of Canada Limited (AECL) to make the best use of Canada’s natural resources of uranium without needing enrichment technology, although requiring heavy water production facilities. In total, 21 CANDUs have been built, 5 of them outside of Canada.

E Light Water Graphite Reactor

The Soviet-designed Reaktor Bolshoi Moshchnosty Kanalny (RBMK) is a pressurized water reactor with individual fuel channels. The moderator is graphite, the coolant is ordinary water, and the fuel is enriched uranium oxide. The fuel and coolant tubes pass vertically through a massive graphite moderator block, which is contained in a sealed enclosure filled with a helium-nitrogen mixture to improve heat transfer and prevent oxidation of the graphite. The coolant is maintained at 75 bars and exits at up to 350° C. The water is permitted to boil, and the steam, after removal of water droplets, is fed to the turbine generators.

Following the 1986 Chernobyl disaster, the design weaknesses of the RBMK were recognized and modifications made to help overcome them. The last operating reactor at the Chernobyl site was closed down in December 2000, and others will eventually be phased out.

F Fast Breeder Reactor

The Liquid Metal Fast Breeder (LMFBR) uses molten sodium as the coolant and runs on fuel enriched with U-235. Instead of a moderator being employed, the core is surrounded by a reflector, which bounces neutrons back into the core to help sustain the chain reaction. A blanket of “fertile” material (U-238) is included above and below the fuel, to be converted into fissile plutonium by capture of fast neutrons. The core is compact, with a high power density. The molten sodium primary coolant transfers its heat to a secondary sodium loop, which heats water in a third loop to raise steam and drive the turbine generators.

Development of fast reactors proceeds only in France, India, Japan, and Russia. The only commercial power reactors of this type are in Kazakhstan and Russia. The British fast reactor, which generated 240 megawatts, was closed down in the 1990s and is being decommissioned.

G Propulsion Reactors

Propulsion reactors are used to propel military submarines and large naval ships such as the aircraft carrier USS Nimitz. The US, UK, Russia, and France all have nuclear powered submarines in their fleets. The basic technology of the propulsion reactor was first developed in the US naval reactor programme directed by Admiral Hyman George Rickover. Submarine reactors are generally small, with compact cores and highly enriched uranium fuel.

The former USSR built the first successful nuclear-powered icebreaker, the Lenin, for use in clearing the Arctic sea lanes. Three experimental nuclear-powered cargo ships were operated for limited periods by the US, Germany, and Japan. Although the ships were technically successful, economic conditions and restrictive port regulations brought an end to these projects.

H Research Reactors

A variety of small nuclear reactors has been built in many countries for use in education, training, research, and production of radioactive isotopes for medical and industrial use. These reactors generally operate at power levels near 1 MW and are more easily started up and shut down than large power reactors.

A widely used type is the swimming-pool reactor. The core consists of partially or fully enriched U-235 contained in aluminium alloy plates immersed in a large tank of water that serves as both coolant and moderator. Materials to be irradiated with neutrons may be placed directly in or near the reactor core. This process is used to make radioactive isotopes for medical, industrial, and research use (see also Isotopic Tracer). Neutrons may also be extracted from the reactor core and directed along beam tubes for use in experiments.

I Advanced Reactors

Several new designs are under development around the world which are simpler, more efficient in their utilization of fuel, cheaper to build and operate, and inherently safer. They typically include passive safety features that avoid relying on pumps and valves, along with increased time for operators to respond to abnormal situations.

Some have evolved from established designs, taking into account the lessons learned from operating experience over the years and advances in fuel design for increased “burnup”. Others represent greater departures from established designs and would require a demonstration unit to be constructed before being used commercially. The cost and technical demands of these projects mean that national or international collaboration is usually necessary.

Projects are currently under way in Canada, France, Germany, Japan, Russia, the US, and South Africa. They fall into the three categories of water-cooled reactors, fast reactors, and gas-cooled reactors. Capacities cover all ranges: small, medium, and large (1,000 MW and above). The large-capacity Advanced Boiling Water Reactor (ABWR) design is already in commercial operation in Japan. Others are under construction or on hold, awaiting favourable economic circumstances; the rest are still on the drawing board.

As an example of a design not based on existing commercial units, the South African Pebble Bed Modular Reactor (PBMR) is due to begin construction in 2001 and should be in commercial operation by 2005. It is a High-Temperature Gas-Cooled Reactor (HTGR) of 110 MW capacity per module and is fuelled by several hundred thousand graphite-uranium oxide pebbles, each the size of a tennis ball. The helium coolant passes through a gas turbine to drive electrical generation with high efficiency and returns to the reactor in a closed loop. Each pebble passes through the reactor about ten times before needing to be replaced, which is carried out continuously without shutting the reactor down. Four modules would fit inside a football stadium and the design lifetime is 40 years.

Reactors have to be approved and certificated by the national safety regulatory authority before they can be used in a nuclear power station. International certification of reactors, as with new aircraft, is some way in the future.

J Fusion Reactors

Nuclear fusion is the process that powers the Sun, and for several decades it has been seen as a potential answer to energy problems on Earth. However, the technological problems are complex, and a fusion power plant has not yet been built. (In 2005 agreement was reached between China, the European Union (EU), Japan, South Korea, Russia, and the US on the building of the world’s first nuclear fusion power plant at Cadarache, in southern France. The International Thermonuclear Experimental Reactor (ITER), as it is to be known, is scheduled to be in operation by 2016.) Fundamentally, any useful fusion reactor needs to confine plasma at a high enough density for a long enough time to generate more energy than was put in to create and confine the plasma. This occurs when the product of the plasma density (in particles per cubic centimetre) and the confinement time (in seconds), known as the Lawson number, is 10^14 or above.
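The Lawson condition is a simple threshold on a product of two quantities, so it can be expressed directly (the 10^14 figure is the one quoted in the text; real criteria also depend on plasma temperature):

```python
# Minimal check against the Lawson criterion: the product of plasma density
# (particles per cm^3) and confinement time (seconds) must reach ~1e14.
LAWSON_THRESHOLD = 1e14  # s/cm^3, figure as quoted in the text

def meets_lawson(density_per_cm3: float, confinement_s: float) -> bool:
    """True if the density-confinement product reaches the Lawson number."""
    return density_per_cm3 * confinement_s >= LAWSON_THRESHOLD

print(meets_lawson(1e14, 1.0))    # dense plasma held for a full second
print(meets_lawson(1e14, 0.01))   # same plasma held for only 10 ms
```

The first case qualifies and the second falls short by two orders of magnitude, illustrating why confinement time has been the central engineering obstacle.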

Numerous schemes for magnetic confinement of plasma have been tried since 1950. Thermonuclear reactions have been observed, but the Lawson number has rarely exceeded 10^12. The Tokamak device, originally suggested in the USSR by Igor Tamm and Andrei Sakharov, began to give encouraging results in the 1960s.

The confinement chamber of a Tokamak has the shape of a torus (doughnut), with a minor diameter of about 1 m (3 ft 4 in) and a major diameter of about 3 m (9 ft 9 in). A toroidal magnetic field of about 5 tesla is established inside this chamber by large electromagnets. This is about 100,000 times the Earth’s magnetic field at the planet’s surface. A longitudinal current of several million amperes is induced in the plasma by the transformer coils that link the torus. The resulting magnetic field lines are spirals in the torus and confine the plasma.

Following the successful operation of small Tokamaks at several laboratories, two large devices were built in the early 1980s, one at Princeton University in the US and one in the USSR. In the Tokamak, high plasma temperature naturally results from resistive heating by the very large toroidal current, and additional heating by neutral beam injection in the new large machines should result in ignition conditions.

Another possible route to fusion energy is that of inertial confinement. In this technique, the fuel (tritium or deuterium) is contained within a tiny pellet that is bombarded on several sides by a pulsed laser beam. This causes an implosion of the pellet, setting off a thermonuclear reaction that ignites the fuel. Several laboratories in the US and elsewhere are currently pursuing this possibility.

A significant milestone was achieved in 1991 when the Joint European Torus (JET) in the UK produced for the first time a significant amount of power (about 1.7 million watts) from controlled nuclear fusion. And in 1993 researchers at Princeton University in the US used the Tokamak Fusion Test Reactor (TFTR) to produce 5.6 million watts. However, both JET and TFTR consumed more energy than they produced in these tests.

There has been promising progress in fusion research around the world for several decades; however, it will take decades more to develop a practical fusion power plant. It has been estimated that an investment of US $50-100 billion is needed to achieve this, but each year only US $1.5 billion is being spent worldwide. The main areas where work is needed include superconducting magnets; vacuum systems; cryogenic systems; plasma purity, heating, and diagnostic systems; sustainment of plasma current; and safety issues.

The JET project has approached “breakeven” operation, in which the fusion power generated equals the input power, but only by injecting tritium, which has made the structure radioactive. ITER is scheduled to begin operating by 2016. A demonstration fusion power plant would be built about 15 years later and, if successful, commercial fusion power plants could be operating by about 2050. This timescale could be significantly delayed or accelerated by the rate of progress in understanding plasma behaviour and by the rate of funding.

If fusion energy does become practicable it would offer the following advantages: (1) an effectively limitless source of fuel—deuterium from the ocean; (2) inherent safety, since the fusion reaction would not “run away” and the amount of radioactive material present is low; and (3) waste products that are less radioactive and simpler to handle than those from fission systems. However, the structure will become radioactive due to absorption of neutrons, so decommissioning will be a serious undertaking.


Nuclear power is based on uranium, a slightly radioactive metal that is relatively abundant (about as common as tin and 500 times as abundant as gold). Thorium is also usable as a nuclear fuel, but there is no economic incentive to exploit it at present. Economically extractable reserves at current low world prices amount to just 4.4 million tonnes, from the richer ores. At the current world usage rate of 50,000 tonnes per annum this would last only another 80 years or so. But if prices were to rise significantly, the usable reserves would increase to the order of 100 million tonnes. And if prices were to rise to several hundred dollars per kilogram, it might become economic to extract uranium from seawater, in which it is present at about 3 mg per tonne. This would be a sufficient supply for a greatly enlarged industry for several centuries.
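The reserve-lifetime arithmetic is straightforward. In the sketch below, the reserve and usage figures come from the text, while the total mass of the oceans (roughly 1.4 × 10^18 tonnes) is an assumed value used only to illustrate the scale of the seawater resource:

```python
# Reserve-lifetime arithmetic using the figures quoted in the text.
reserves_tonnes = 4.4e6            # economically extractable at current prices
usage_tonnes_per_year = 50_000     # current world consumption
years_of_supply = reserves_tonnes / usage_tonnes_per_year   # about 88 years

# Seawater resource: 3 mg of uranium per tonne of seawater.
ocean_mass_tonnes = 1.4e18         # assumed value, not from the text
uranium_per_tonne_seawater = 3e-9  # 3 mg expressed in tonnes
seawater_uranium_tonnes = ocean_mass_tonnes * uranium_per_tonne_seawater

print(years_of_supply, seawater_uranium_tonnes)
```

The result, billions of tonnes of uranium dissolved in the oceans against annual consumption in the tens of thousands of tonnes, underlies the claim of supply for several centuries.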

A Uranium Production

The world’s uranium reserves are mostly located in Australia (35 per cent), countries of the former USSR (29 per cent), Canada (13 per cent), Africa (8 per cent), and South America (8 per cent). In terms of production, Canada (33 per cent) leads, followed by Australia (15 per cent) and Niger (10 per cent). Other producers include Kazakhstan, Namibia, Russia, South Africa, the US, and Uzbekistan.

Uranium ore contains about 1 percent uranium. It is mined either by open-pit or deep-mining techniques and milled (crushed and ground) to release the uranium minerals from the surrounding rock. The uranium is then dissolved, extracted, precipitated, filtered, and dewatered to produce a uranium ore concentrate called “yellowcake” which contains about 60 percent uranium. This has a much smaller volume than the ore and hence is less expensive to transport. It is either shipped to the fuel enrichment plant or, alternatively, to the fuel fabrication plant if it is not to be enriched.

B Enrichment

The yellowcake is converted to uranium hexafluoride (UF6), which is a gas above 50° C and is used as the feedstock for the enrichment process. Because most reactors require more than the 0.7 percent natural concentration of U-235, some of the U-238 needs to be removed to give a concentration of 3 percent U-235 or thereabouts.

Enrichment is carried out using either the gaseous diffusion process or the newer gas centrifuge process; a laser process is also under development. The gas centrifuge process requires only about 5 per cent of the energy used by the diffusion process to separate the same amount of U-235, although diffusion plants are still dominant worldwide.
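The scale of the task can be sketched with the standard two-stream enrichment mass balance (a textbook relation; the 0.25 per cent “tails” assay below is an assumed typical value, not a figure from the text):

```python
# Standard mass balance for uranium enrichment:
# feed = product * (x_product - x_tails) / (x_feed - x_tails)
def feed_required(product_kg: float, x_product: float,
                  x_feed: float = 0.007, x_tails: float = 0.0025) -> float:
    """Kilograms of natural uranium feed needed for product_kg of enriched fuel."""
    return product_kg * (x_product - x_tails) / (x_feed - x_tails)

# Natural uranium needed for 1 kg of fuel enriched to 3% U-235:
print(feed_required(1.0, 0.03))  # roughly 6 kg
```

Under these assumptions, each kilogram of 3 per cent reactor fuel consumes about six kilograms of natural uranium, with the balance rejected as depleted tails.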

C Fuel Fabrication

The enriched UF6 is converted to uranium dioxide in the form of a ceramic powder. This is pressed and then sintered in a furnace to produce dense ceramic pellets. The pellets are loaded into metal tubes that are sealed by welding to form fuel rods, which are combined into fuel assemblies and transported to the nuclear power station for loading into the reactor.

Plutonium oxide may also be mixed with the uranium oxide to make mixed oxide fuel (MOX), as a means of reducing the amount of stockpiled plutonium (although not the total amount in circulation) and avoiding the need to enrich the uranium. MOX fuel is manufactured at the reprocessing plant where the plutonium is held and is increasingly being used in light water reactors, up to a maximum of about 30 percent of the fuel in a PWR. Because spent MOX fuel is highly radioactive, the plutonium is unlikely to be illegally diverted into the manufacture of nuclear weapons.

D Power Generation

The fuel assemblies are loaded into the reactor in a planned cycle to “burn” the fuel most efficiently. The “burnup” is expressed as gigawatt-days per tonne (GWd/te) of uranium. The early Magnox stations achieved 5 GWd/te but by the late 1980s, PWRs and BWRs were achieving 33 GWd/te. Figures of 50 GWd/te are now being achieved, and this is forecast to increase.
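These burnup figures can be converted into an approximate electrical output per tonne of fuel. The 33 percent thermal efficiency assumed below is a typical value for a light water reactor, not a figure given in the article:

```python
# Rough conversion of burnup (thermal energy extracted per tonne of
# uranium) into electricity generated, assuming a typical thermal
# efficiency of about 33 percent (an assumed figure, not from the text).
def electricity_per_tonne(burnup_gwd_per_te, thermal_efficiency=0.33):
    thermal_gwh = burnup_gwd_per_te * 24      # gigawatt-days -> gigawatt-hours
    return thermal_gwh * thermal_efficiency   # GWh of electricity

# Burnup milestones quoted above: Magnox, late-1980s LWRs, current LWRs.
for burnup in (5, 33, 50):
    print(f"{burnup} GWd/te -> roughly {electricity_per_tonne(burnup):.0f} GWh(e) per tonne")
```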

E Spent Fuel

The fuel elements are removed from the reactor when they have reached the design burnup level, typically after four years. At this point, they are intensely radioactive and generate a lot of heat, so the spent fuel is placed in a cooling pond adjacent to the reactor. The water (which is dosed with boric acid to absorb neutrons and prevent a chain reaction) acts as a radiation shield and coolant. The fuel elements remain there for at least five months until the radioactivity has decayed enough to permit them to be transported.

Where the fuel is to be reprocessed, it is transported in shielded flasks by rail or road to the reprocessing plant. Where this is not the case, it will remain in the cooling pond. Older ponds were designed to accommodate up to ten years’ worth of spent fuel but may be able to accommodate more by removing older fuel into dry storage facilities. But ultimately the spent fuel will need to be sent for permanent disposal if it is not to be reprocessed.

F Reprocessing

The spent fuel is typically made up of non-fissile U-238 (about 95 percent), fissile U-235 (about 0.9 percent), various highly radioactive fission products, and a mixture of plutonium isotopes (more than half of which are fissile). Reprocessing separates the uranium and plutonium from the waste and was historically carried out to recover plutonium for manufacture of nuclear weapons. In the UK this was also carried out to deal with the magnesium alloy Magnox fuel casings, which are eventually corroded by the water in the cooling pond and are not suitable for dry storage. The recovered U-235 is used for the manufacture of new fuel, and the plutonium can be used for the manufacture of MOX fuel (see Fuel Fabrication above), although the majority is stockpiled at present.

The spent fuel received from the nuclear power station is stored in a cooling pond and then mechanically cut up. In the commonly used Purex process, the fuel is dissolved in nitric acid and then the uranium, plutonium, and fission products are separated by solvent extraction using a mixture of tributyl phosphate and kerosene. The uranium goes to fuel fabrication and the plutonium is either stored or used for MOX fuel production. The fission products are separated into a liquid stream, which is processed with glass-making materials into a vitrified high-level waste (HLW) product. Other liquids and solid waste streams are also generated, and these are discussed in the section on radioactive waste management later in the article.

Reprocessing in the civil nuclear industry is a contentious and complex issue. Between 1976 and 1981 it was not carried out in the United States due to concerns that plutonium could be illegally diverted into the manufacture of nuclear weapons (although now permitted, it has not been resumed). Instead, a “once through” policy for nuclear fuel is followed, with spent fuel regarded as waste destined for permanent disposal. The UK, France, Japan, and Russia have reprocessing plants and all are busy reducing their stock of nuclear weapons (apart from Japan, which has none), so the amounts of stored plutonium are increasing. Options for handling plutonium include “burning” it in a fast reactor, or using it up as MOX fuel followed by disposal of the spent fuel. As well as the plutonium issue, decision-making factors include the economics of the process and national perceptions of future energy needs.

G Transport

Uranium concentrate, new nuclear fuel, spent fuel, and radioactive waste are transported by rail, road, ship, and air in packages designed to prevent the release of radioactive material under all foreseeable accident scenarios. The most radioactive items such as spent fuel or vitrified high level waste are transported in extremely rugged “flasks” or “casks”, which will typically have undergone high-speed impact tests and fire tests to demonstrate their integrity.


Radioactive Waste Management

Nuclear power stations, reprocessing plants, fuel fabrication plants, uranium mines, and all other nuclear facilities produce solid and liquid wastes of varying characteristics and amounts. These are internationally classified as high level waste (HLW), intermediate level waste (ILW), and low level waste (LLW).

A typical 1000 MW nuclear power station produces about 300 cu m of LLW and ILW each year, of which 95 percent would be classified as LLW. It also produces about 30 tonnes of spent fuel, classified as HLW. In comparison, a coal-fired power station of the same capacity would produce 300,000 tonnes of ash per year, containing naturally occurring radioactive material and toxic heavy metals, dispersed into landfill sites and the atmosphere. Worldwide, about 200,000 cu m of low and intermediate level waste are produced from nuclear power stations each year, together with 10,000 cu m of HLW (primarily spent fuel).

Wastes of lower activity are also produced, including very low level waste from most nuclear facilities which can be disposed of in normal municipal waste disposal sites without special precautions. Uranium mines and mills produce large volumes of waste containing low concentrations of radioactive and toxic materials, which are handled by normal mining techniques such as tailings dams. The enrichment process produces depleted uranium, primarily consisting of U-238, which is slightly radioactive and requires some precautions for safe disposal.

A High Level Waste

This is highly radioactive, heat generating, long-lived material, which will remain biologically hazardous for thousands of years. The spent fuel from nuclear power plants, destined for permanent disposal, is classified as HLW, as is the concentrated liquid waste generated by reprocessing. The 30 tonnes of spent fuel produced each year by a typical power station will, after ten years, still produce a power of several hundred kilowatts and cooling will be necessary for about 50 years overall. For final disposal of spent fuel, the fuel rods would be removed from their assemblies and repacked in a dense lattice within a corrosion-resistant steel canister. A cover would be welded on and the canister covered with an overpack. However, this is not yet carried out (see section on Disposal below), and some countries (notably Russia) are reluctant to dispose of spent fuel because, if reprocessed, it is an energy resource.

Reprocessing one tonne of spent fuel produces about 0.1 cu m of radioactive liquid, containing about 99 percent of the fission product radioactivity. The liquid is stored in tanks with multiple cooling systems designed to remove the heat produced by the radioactive decay, and after several tens of years it can be processed for final disposal. For example, the vitrification process operated at the Sellafield reprocessing plant in England converts the liquid to a stable, solid form by turning it into a borosilicate glass (referred to as vitrified high level waste, VHLW) in a stainless steel container suitable for long-term storage and final disposal. Processes based on other immobilization technologies are in development elsewhere, such as the SYNROC process in Australia.

B Intermediate Level Waste

This consists of solid and liquid materials such as fuel cladding, contaminated equipment, sludges, evaporator concentrates, and spent ion-exchange resin. This material is not sufficiently radioactive to require cooling. Reprocessing one tonne of spent fuel produces about 1 cu m of ILW, containing about 1 percent of the radioactivity in the fuel.

Various processes for retrieval, volume reduction, incineration, conditioning, and immobilization of ILW to convert it to stable, solid forms (usually based on cement, but also polymers and bitumen) are operated at power stations and reprocessing plants. The final product is typically contained in a drum, suitable for long-term storage or final disposal.

C Low Level Waste

This consists of trace-contaminated used protective clothing, gloves, contaminated rags, filters, and the like, and also larger items of lightly contaminated equipment. Brazil nuts and coffee beans contain as much natural radioactivity as typical LLW. Reprocessing one tonne of spent fuel produces about 4 cu m of LLW, containing about 0.001 percent of the radioactivity in the fuel. Size reduction techniques include shearing, shredding, and compaction. The waste is grouted into containers called “overpacks” to produce a stable waste form suitable for final disposal.
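The per-tonne reprocessing figures quoted in this and the two preceding sections can be collected into a small summary. The numbers below are simply those stated in the text:

```python
# Approximate waste arisings per tonne of spent fuel reprocessed,
# taken from the figures quoted in the HLW, ILW, and LLW sections.
streams = {
    # stream: (volume in cubic metres, fraction of the fuel's radioactivity)
    "HLW (liquid)": (0.1, 0.99),
    "ILW":          (1.0, 0.01),
    "LLW":          (4.0, 0.00001),
}

for stream, (volume, fraction) in streams.items():
    print(f"{stream}: about {volume} cu m, roughly {fraction:.3%} of the radioactivity")
```

The pattern is the inverse relationship typical of waste management: the most radioactive stream is by far the smallest in volume.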

D Disposal

About 40 near-surface disposal sites for LLW have been in operation for over 30 years in countries with nuclear power industries, and another 30 are expected to come into operation in the next 15 years. They typically have concrete-lined trenches, an impervious cap, and systems for collecting water from the base of the trenches.

The intention in all countries with nuclear power industries is eventually to dispose of ILW and HLW in deep underground repositories, where the long-lived radioactive isotopes will be isolated for more than 100,000 years by a combination of engineered and natural barriers. Development and selection of final disposal sites is under way in all countries where they will be needed, although progress is generally slow because of the need to obtain public acceptance and to address the main technical issues: transport of radioactivity in groundwater; migration of radioactivity in gas generated by the waste; natural disruptive events and inadvertent human intrusion; and the question of whether or not the waste should be retrievable. Meanwhile, ILW and HLW are stored at the sites where they are produced, which can generally be continued for 50 years or more.

As an alternative to constructing a series of national repositories (where geological conditions may be less than ideal), it has been proposed that waste should be transported to disposal sites in sparsely populated and more geologically suitable areas of the world such as Western Australia or the Gobi Desert. This, however, remains contentious.

A number of countries used to dispose of radioactive waste by dumping at sea. This practice was discontinued following a moratorium agreed under the London Convention in 1983. However, disposal within deep-sea sediments, several hundred metres below the seabed in water depths of at least 4,000 m (13,123 ft), remains a potentially attractive option where it is not envisaged that the waste would ever need to be retrieved. This option would require a large international collaborative effort to develop.

E Return of Wastes

The policy of some countries with small-scale nuclear power industries is to return spent fuel to the foreign supplier of the fuel. Similarly, the policy of the European reprocessing plants is eventually to return the wastes arising from large-scale reprocessing of spent fuel to the country the spent fuel came from. For example, vitrified HLW is shipped from the reprocessing plant at La Hague in France back to Japan.

F Liquid Discharges

The liquid effluents generated by nuclear power stations, reprocessing plants, and other nuclear facilities are treated by a variety of efficient processes to remove radioactivity. Stringent limits are set for each site, radiation levels in discharge streams are monitored, and efforts are made to improve year on year. Any residual radioactivity in the effluent will generally end up in the sea where its uptake by “critical groups” (those most likely to receive the radiation) can be estimated. At the 1998 Oslo-Paris Commission (OSPAR) meeting in Portugal, the EU member states committed themselves to reduce discharges of radioactivity to the point where additional concentrations above background levels are close to zero.

Reprocessing effluents present the greatest challenge. Techniques such as sand-bed filtration, ion exchange, neutralization of acidic effluents to precipitate solids, removal of solids by hydrocyclone or ultrafiltration, and alkaline hydrolysis of organic solvent are used to clean the effluents until they can be discharged to sea. The separated radioactive material is processed as ILW, as described above.

G Aerial Discharges

The radioactive gases that are discharged are subject to limits and monitoring similar to those for liquid effluents, to ensure the minimum uptake by “critical groups”.


Before discussing the safety issues surrounding nuclear power, it is necessary to understand the basics of radiation.

A Introduction to Radiation

Heat and light are types of radiation that people can feel or see, but we cannot detect ionizing radiation in this way (although it can be measured very accurately by various types of instrument). Ionizing radiation passes through matter and causes atoms to become electrically charged (ionized), which can adversely affect the biological processes in living tissue.

Alpha radiation consists of positively charged particles made up of two protons and two neutrons. It is stopped completely by a sheet of paper or the thin surface layer of the skin; however, if alpha-emitters are taken into the body by breathing, eating, or drinking, they can expose internal tissues directly and may lead to cancer.

Beta radiation consists of electrons, which are negatively charged and more penetrating than alpha particles. They will pass through 1 or 2 centimetres of water but are stopped by a sheet of aluminium a few millimetres thick.

X-rays are electromagnetic radiation of the same type as light, but of much shorter wavelength. They will pass through the human body but are stopped by lead shielding.

Gamma rays are electromagnetic radiation of shorter wavelength than X-rays. Depending on their energy, they can pass through the human body but are stopped by thick walls of concrete or lead.

Neutrons are uncharged particles and do not produce ionization directly. However, their interaction with the nuclei of atoms can give rise to alpha, beta, gamma, or X-rays, which produce ionization. Neutrons are penetrating and can be stopped only by large thicknesses of concrete, water, or paraffin.

Radiation exposure is a complex issue. We are constantly exposed to naturally occurring ionizing radiation from radioactive material in the rocks making up the Earth, the floors and walls of the buildings we use, the air we breathe, the food we eat or drink, and in our own bodies. We also receive radiation from outer space in the form of cosmic rays.

We are also exposed to artificial radiation from historic nuclear weapons tests, the Chernobyl disaster, emissions from coal-fired power stations, nuclear power plants, nuclear reprocessing plants, medical X-rays, and from radiation used to diagnose diseases and treat cancer. The annual exposure from artificial sources is far lower than from natural sources. The dose profile for an “average” member of the UK population is shown in the table above, although there will be differences between individuals depending on where they live and what they do (for example, airline pilots would have a higher dose from cosmic rays and radiation workers would have a higher occupational dose).

B Radiation Effects and Dose Limits

Large doses of ionizing radiation in short periods of time can damage human tissues, leading to death or injury within a few days. Moderate doses can lead to cancer after some years. And it is generally accepted that low doses will still cause some damage, despite the difficulty in detecting it (although there is a body of opinion that there exists a “threshold” below which there is no significant damage). There is still no definite conclusion as to whether exposure to the natural level of background radiation is harmful, although damaging effects have been demonstrated at levels a few times higher.

Radiation dose to the body is measured in sieverts (Sv), although doses are usually expressed in millisieverts (mSv), thousandths of a sievert. One chest X-ray gives a dose of about 0.2 mSv. The natural background radiation dose in the UK is about 2.5 mSv per annum, although it is double that in some areas, and in certain parts of the world it may reach several hundred mSv. A dose of 5 Sv (that is, 5,000 mSv) received over a short period is likely to be fatal.
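Putting the doses quoted above on a common scale makes the comparison concrete. The 300 mSv figure below is a stand-in for "several hundred mSv" and the rest are the values from the text:

```python
# Express the doses quoted in the text in millisieverts, and each as a
# multiple of one year of UK natural background (2.5 mSv).
background_msv_per_year = 2.5

doses_msv = {
    "one chest X-ray": 0.2,
    "one year of UK natural background": 2.5,
    "annual dose, some high-background regions": 300.0,  # "several hundred mSv"
    "likely fatal acute dose (5 Sv)": 5000.0,
}

for source, dose in doses_msv.items():
    multiple = dose / background_msv_per_year
    print(f"{source}: {dose} mSv = {multiple:g} years of UK background")
```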

Basic principles and recommendations on radiation protection are issued by the International Commission on Radiological Protection (ICRP) and are used to develop international standards and national regulations to protect radiation workers and the general public. The basic approach is consistent all over the world. Over and above the natural background level, the dose limit for a radiation worker is set at 100 mSv over five years (an average of 20 mSv per year), and the limit for a member of the general public at 1 mSv per year. Doses should always be kept as low as reasonably achievable, and the limits should not be exceeded.

In the UK the recommended maximum annual dose for a radiation worker is set at 20 mSv (although higher limits may apply elsewhere in the world) and the typical annual dose for a radiation worker would be controlled to less than 1.5 mSv. However, some may receive more than 10 mSv, and a few may approach the annual limit.

C Ensuring Nuclear Safety

In common with all hazardous industrial activities, the risk of major nuclear accidents is minimized at power stations and reprocessing plants by means of multiple levels of protection. In order of importance, engineered systems are provided for prevention, detection, and control of any release of radioactive material. Escape and evacuation of people on site and nearby are available as the last resort. Sophisticated analysis is carried out to evaluate the effect of the protective systems in all foreseeable accident scenarios and to demonstrate that the risk of failure is sufficiently low.

For example, for a major release of radioactivity to occur at a modern nuclear power station there would have to be a whole series of failures: the primary cooling system would have to fail, followed by the emergency cooling system, then the control rods, then the pressure vessel, and finally the containment building, before significant amounts of radioactivity could be released.

The safety record of the nuclear industry worldwide over the last 45 years has been generally good, with the exception of the Windscale (Sellafield) fire in 1957 (which involved a military plutonium production reactor rather than a power reactor), the Three Mile Island accident in 1979, the Chernobyl disaster of 1986, and the most recent accident, at Tokai-Mura in Japan in 1999, all of which are discussed in more detail in the next section. However, there have also been a number of incidents at nuclear power stations and reprocessing plants over the years that resulted in severe damage or had the potential to escalate into major accidents, and should therefore be classified as “near misses”.

Lessons have been learned. While safety relies on the design (that is, on engineered safety systems), it depends just as much on how the reactor is operated. Improvements have been made to reactor control systems in response to Three Mile Island and Chernobyl, but without trained and competent operators following valid procedures there remains the possibility of a major nuclear accident. And there are still 12 RBMK reactors of the same type as the Chernobyl reactor in operation, a design inherently less safe than any other type of nuclear power reactor.

D Nuclear Accidents

There have been four particularly severe nuclear accidents in the past 45 years, which released, or almost released, large amounts of radioactivity.

The 1957 Windscale fire was the worst nuclear accident in UK history. It happened when an early plutonium production reactor (with few safety systems) caught fire and is not representative of modern nuclear power reactors.

The 1979 core meltdown at the Three Mile Island PWR was the worst nuclear accident in US history. The release was largely contained, but the accident happened because of deficiencies in the control system and incorrect responses by the operators when abnormal circumstances first arose, which then escalated into a far worse situation.

The 1986 Chernobyl disaster was the worst nuclear accident in history. It was caused by the operators carrying out an unauthorized and previously untried procedure on an RBMK reactor that involved them disabling a number of safety devices. This led to the reactor becoming unstable and eventually exploding. In the years following the accident over 30 people (mainly firefighters) died from radiation exposure. A further 300 workers and firefighters suffered radiation sickness (those who were sent in to clean up the plant following the explosion were later found to have been at a significantly increased risk of lung cancer) and almost 2,000 people in the surrounding area who were children at the time have developed thyroid cancer (which is, fortunately, treatable, and so few have died), with more cases expected. Massive amounts of radioactive material were dispersed throughout the Northern hemisphere.

In recent years confidence in Japan’s nuclear industry has been shaken by a number of serious accidents. In 1999 a “criticality incident” occurred at the Tokai-Mura nuclear plant: a sustained burst of neutrons was caused by a chain reaction, triggered when operators carried out a prohibited procedure while manufacturing highly enriched fuel (15 percent to 20 percent U-235) for an experimental fast reactor. There were a small number of fatalities among the workers closest to the incident, as is typical of criticality accidents, and people living in the surrounding area were irradiated with neutrons for some hours. In 2004, four people were killed and seven injured at a plant in Mihama, western Japan, when a corroded pipe exploded, covering workers with scalding water that caused severe burns. Although officials insisted that there had been no radiation leak from the plant and that there was no danger to the surrounding area, the casualties were the highest in Japan’s history of nuclear power.


A Today

Worldwide, there are about 430 power reactors operating in 25 countries, providing about 17 percent of the world’s electricity. Of these, 56 percent are PWRs, 22 percent are BWRs, 6 percent are pressurized heavy water reactors (mostly CANDUs), 3 percent are AGRs, and 13 percent are other types. In all, 88 percent are fuelled by enriched uranium oxide, the rest by natural uranium. A few light water reactors also use mixed oxide fuel (MOX) and this is likely to increase, partly as a way to dispose of the growing stocks of military plutonium. The number of fast breeder reactors (FBRs) has fallen with the closure of FBR programmes in several countries.

Nuclear power and hydro-electric power together provide 36 percent of world electricity; neither puts carbon dioxide into the atmosphere. In both cases the technology is mature. The new renewable technologies hardly appear in the statistics but, with financial support, they are starting to make their presence felt; it is unlikely, however, that renewables will ever provide more than 20 percent of world energy. In a world greedy for energy, where oil supply is already beginning to be constrained and gas will follow by 2010, concerns about the security of energy supply are now being voiced. In the US, where power blackouts are prevalent in some states, life extension of nuclear stations is being implemented urgently to ensure supply, despite pressure from the environmental movement to phase out nuclear power.

Life extension of nuclear reactors in the UK has been very successful, but current policy appears to be to retire stations at the end of their useful lives and replace them with gas-fired stations. However, replacement by gas brings a huge carbon dioxide penalty, which would jeopardize the UK’s obligations under the Kyoto Protocol. The Royal Commission on Environmental Pollution has called for much more stringent post-2010 requirements, including a 60 percent reduction in carbon dioxide emissions by 2050, a target with virtually no chance of being met without a nuclear contribution. Sweden and Germany are following a similar nuclear closure route.

Nuclear power construction has plateaued or is in decline in some developed countries and, consequently, the teams of experienced nuclear engineers have been dispersed (despite the 38 new nuclear power plants currently under construction). Some countries, such as the UK, have lost the capacity to build a nuclear power station. University departments teaching nuclear technology have all but disappeared, which could be a limiting factor on new nuclear construction and will take time to change.

Nuclear power is generally not discussed by politicians with the exception of France, some Far Eastern countries, China, and Russia. The EU energy commissioner Loyola de Palacio said, on November 7, 2000: “From the environmental point of view nuclear energy cannot be rejected if you want to maintain our Kyoto commitments.” She went on to imply she wanted nuclear power to be part of the Kyoto Protocol’s Clean Development Mechanism.

B The Short-Term Future

Attention is beginning to move towards building new nuclear power stations. Looking ahead to a doubling of energy demand by 2050, and with the world now trying to reduce its use of fossil fuels in order to contain carbon dioxide emissions, it is difficult to see how this can be achieved without a substantial increase in nuclear power. Nevertheless, the public perception of the industry in many countries is that it is more dangerous than other forms of energy and the problems of storing nuclear waste have not been fully solved (though there is considerable evidence to counter this argument).

A new generation of advanced reactors is being developed, which are more fuel-efficient and inherently safer, with passive safety systems. The new designs are based on accumulated experience from operating PWRs and BWRs. Advanced boiling water and pressurized water reactors are already operating, and the smaller Westinghouse AP 600 design has been certified (global certification, as with new aircraft, will be essential to get new designs into production). The European pressurized water reactor is available for construction, a number of liquid- and gas-cooled fast reactor systems have been designed, and some prototypes constructed. These new designs should produce electricity more cheaply than coal-fired stations and than gas-generated electricity (if gas prices continue to increase), and probably also more cheaply than renewable electricity (and with better availability).

Interest in high-temperature gas-cooled reactors (HTGRs) using helium at 950° C has been revived, particularly in Japan and China, and a Pebble Bed Modular Reactor (PBMR) with a direct-cycle gas turbine generator is being developed in South Africa.

The use of nuclear reactors to generate process heat is an important development, particularly if the heat is used for desalination. An integrated nuclear reactor producing electricity and clean water could produce water at between 0.7 dollars and 1.1 dollars per cubic metre. There is considerable interest in this technology from North Africa, the Arabian Peninsula states, Turkey, and northern China.

C Further Ahead

The existing types of nuclear reactor are not particularly efficient in their use of uranium. An alternative is the fast breeder reactor (FBR), which uses uranium some 60 times more efficiently than today’s PWRs and BWRs, although it is more expensive and is not yet a mature technology. Russian scientists have successfully operated the BN-600 fast reactor for 18 years with over 75 percent availability. FBRs in other countries have been less successful, and they were eventually closed down because it was thought that the technology would not be required for 30 years, uranium and plutonium being readily available at present. In the long term, fast reactor technology could effectively increase world energy resources by a factor of ten, and its time will no doubt come unless nuclear fusion can be engineered into a power station. Research on fusion continues, with the time horizon constantly receding, but it is expected that the prize will be worth the effort.

D Conclusions

The world is poised to make much more use of nuclear power, provided the public perception that nuclear power is too dangerous to contemplate ultimately alters. It is possible that destabilization of weather systems, resulting from global warming, may persuade people that nuclear power is the lesser of two evils.

The biggest drivers for new nuclear construction are the security of supply, the steadily increasing prices of natural gas and oil, the likely interruptions of gas and oil supply for political reasons, and the absence of carbon dioxide emissions from nuclear stations.

The way ahead seems to be represented by an increasing mixture of nuclear power and renewable energy.

Contributed By:
Nicholas S. Fells

Reviewed By:
Ian Fells


Earth Day

Earth Day, event first observed internationally on April 22, 1970, to emphasize the necessity for the conservation of the world’s natural resources. Starting as a student-led campus movement, initially observed on March 21, Earth Day has become a major educational and media event. Environmentalists use it as an occasion to sum up current environmental problems of the planet: the pollution of air, water, and soils; the destruction of habitats; the decimation of hundreds of thousands of plant and animal species; and the depletion of non-renewable resources. The emphasis is on solutions that will slow and possibly reverse the negative effects of human activities. Such solutions include the recycling of manufactured materials, fuel and energy conservation, banning the use of harmful chemicals, halting the destruction of major habitats such as rainforests, and protecting endangered species.



Conservation, sustainable use of natural resources, such as soils, water, plants, animals, and minerals. In economic terms, the natural resources of any area constitute its basic capital, and wasteful use of those resources constitutes an economic loss. From the aesthetic and moral viewpoint, conservation also includes the maintenance of national parks, wilderness areas, historic sites, and wildlife. In certain cases, conservation may imply the protection of a natural environment from any human economic activity.

Natural resources are of two main types, renewable and non-renewable. Renewable resources include wildlife and natural vegetation of all kinds. The soil itself can be considered a renewable resource, although severe damage is difficult to repair because of the slow rate of soil-forming processes. The natural drainage of waters from the watershed of a region can be maintained indefinitely by careful management of vegetation and soils, and the quality of water can be maintained through pollution control. See Air Pollution; Environment; Reclamation; Sewage Disposal; Water Pollution; Energy Conservation.

Non-renewable resources are those that cannot be replaced or that can be replaced only over extremely long periods of time. Such resources include the fossil fuels (coal, petroleum, and natural gas) and the metallic and other ores. For discussions of conservation problems in this area, see individual entries on the substances concerned.


Although the conservation of natural resources has been recognized as desirable by many peoples since ancient times, frequently the basic principles of sound land use have been ignored, with disastrous results. Major losses—for example, the silting of rivers and the flooding of lowlands—resulted from the destruction of the forests and grasslands that protected watersheds in northern China and the Tigris-Euphrates area. Large areas in North Africa and the Middle East were rendered barren by centuries of uncontrolled livestock grazing, unwise cultivation, and excessive cutting of woody plants for fuel. Similar damage has also occurred in most of the more recently developed regions of the world, sometimes through the unwise introduction of species into new environments. The increasing industrialization of nations around the world continues to present severe conservation problems although international cooperation efforts have also evolved in certain areas, such as the protection of some endangered species. Some basic conservation principles in major areas of concern are discussed below.


In forests more than in any other ecosystem, demands are increasingly made that conservation should mean preservation from any destructive commercial use, particularly the cutting of trees for timber, which in a virgin forest is known to have harmful consequences far beyond the loss of the actual trees (for example, the loss of animal habitats and soil erosion). Where tracts of virgin forest are given over to timber production, principles of management have evolved in order to minimize the destructiveness of the process and to make it as sustainable as possible. The management of forest trees for timber production involves three fundamental principles. The first is the protection of the growing trees from fire, insects, and disease. However, fire, once regarded as a destroyer of forests, is now recognized as a management tool when carefully employed. Some important timber trees actually require fire for successful regeneration. Insects such as the gypsy moth, spruce budworm, and pine sawfly, together with disease, still take a heavy toll. However, biological control measures and some aerial spraying, proper cutting cycles, and slash disposal are increasingly effective. The second principle concerns proper harvesting methods, ranging from removal of all trees (clear-cutting) to removal of selected mature trees (selection cutting), and provision for reproduction, either naturally from seed trees or artificially by planting. The rate and frequency of any cutting should aim for sustained production over an indefinite period. The third principle of timber management is the complete use of all trees harvested. Technological advances, such as particleboard and gluing, have created uses for branches, defective logs, trees too small to be milled into boards, and so-called inferior trees. As demand for wilderness areas and recreational use of forests increases, management of commercial forests will become more intense. See Forest; Forest Fires; Forest Conservation and Management.


One of the principles of range conservation is the use of only a portion (usually about a half) of the annual forage plant production of a particular range in order to maintain healthy plant growth and reproduction. In addition, each range is stocked with the number of animals that can be nourished properly on the available usable forage and are permitted to graze only during the season suitable for that type of range. The conservation of ranges is based on a programme of grazing designed to keep them productive indefinitely and to improve depleted areas by natural reproduction or by artificial seeding with appropriate forage species. Although these principles are well established, many hundreds of thousands of acres of public grazing lands are still overgrazed.


One of the basic principles of wildlife conservation involves providing adequate natural food and shelter to maintain populations of each species in a given habitat. Major threats facing wildlife are the destruction of habitat, through drainage, agriculture, and urban expansion, and its fragmentation into parcels too small for wildlife populations to use. Illegal trade in feathers, horns, ivory, hides, and organs has brought many endangered species to the verge of extinction. Wildlife is an important biological, economic, and recreational resource that can be maintained through careful management. Hunting regulations allow the culling of many species without affecting overall population levels, and can even help control species that have grown too abundant for the region they inhabit.


Among the basic measures for soil conservation currently in use is the zoning of land by capability classes. In this system, the more level and stable soils are designated as suitable for annual crops, and other areas are designated for perennials, such as grass and legumes, or for use as grazing or forest lands. Another conservation method involves the use of soil-building plants in crop rotations. Such crops hold and protect the soil during growth and when ploughed under, supply much-needed organic matter to the soil. Cultivation methods that leave a layer of vegetable waste on the surface of the soil represent a major advance in land use. In many areas, these techniques have supplanted the use of the mouldboard plough, associated with the practice known as clean cultivation, which left the soil surface exposed to all the natural erosive forces. Special methods for erosion control include contour farming, in which cultivation follows the contours of sloping lands, and ditches and terraces are constructed to diminish the run-off of water. Another soil conservation method is the use of strip-cropping—that is, alternating strips of crop and fallow land. This method is valuable for control of wind erosion on semi-arid lands that need to lie fallow for efficient crop production. In addition, the maintenance of soil fertility at the maximum level of production often involves the use of inorganic (chemical) fertilizers. See Erosion; Soil; Soil Management.


Recent studies have confirmed that extremely dense vegetation prevents the collection of the maximum amount of water in a given drainage basin. Greater yields of water have been obtained from some mountain forest regions by thinning the natural tree stands, but not so much as to increase soil erosion or flood danger. A forest or shrub cover containing numerous small openings has been found to be more effective for capturing water than a dense, continuous cover that intercepts much snow and rain and permits the moisture to be lost by evaporation. Highly important in drainage basin conservation is the preservation of wetlands, which function as filtration systems that stabilize water tables by holding rainfall and discharging the water slowly, and as natural flood-control reservoirs.



News and Current Affairs


News and Current Affairs, reporting and analysis of events by radio and television programmes, and on the Internet. The two terms, “news” and “current affairs”, reflect old differences in the way that broadcasting used to treat topical matters, differences that barely survive in today’s advanced radio and television systems. In early radio, before television, the news was plain, restricted to what newspeople call “hard fact”. Newsreaders gave carefully scripted accounts of main undisputed facts of politics, wars, accidents, and other significant events. Facts were not interpreted or analysed.


In traditionalist Europe, that narrow concept of news satisfied the desires of governments to control the new radio medium in the public interest. They believed that without controls broadcasting could do as much harm as good. The United States, however, was different because broadcasting development was driven more by commercial considerations and by a stronger belief in the pre-eminence of freedom of expression. News broadcasting there soon developed a freer style than in Europe and its colonies.

Regardless of the degree of control, the inadequacy of news limited to plain fact became evident. A news bulletin told people the news but did not help them understand it. It did not adequately make them aware of the issues, of the “news behind the news”. To compensate, the concept of current affairs was invented. Though close to the news in the subject area, it was separate from it. News continued to be strictly factual, while current affairs delivered a mix of fact, comment, opinion, analysis, and interpretation in interviews, commentaries by experts, and feature reports. The change advanced more in European “public service” broadcasting than in American commercial broadcasting.

An important factor in the free world, still strong today, was the belief that news broadcasting should be impartial. It should not take sides in matters of public dispute. It should, for instance, report industrial strikes without favouring the employers or strikers. Similarly, political reporting should not side with any party. Impartiality was encouraged by the dependence of broadcasting on a public resource: the radio frequencies that carry signals from transmitters to radio sets. Frequencies are allocated to prevent a jumble of programmes from different stations on the same frequencies in the same areas at the same time. It was reasoned that as broadcasting used a public resource it should serve all of the public. To do so, it had to be impartial.

Such reasoning does not apply to newspapers, which do not depend on any public resource in the same way. Publication of one newspaper does not obstruct or prevent publication of another, although competition for readers might cause one to fail, in the same way as competition for listeners among radio stations. Thus, newspapers continued to be free to take sides while regulated broadcasting was not.

In the United States, the tradition of independent journalism encouraged its lightly regulated radio stations to adopt standards of impartial reliability just as strongly as heavier regulation achieved that end in European liberal democracies. Authoritarian regimes in Europe and elsewhere strictly controlled all the news media and used them for propaganda. Some other countries had a degree of newspaper freedom, while broadcasting was made to serve “the state”, which usually meant the purposes of government.

Over the years, broadcasters in freethinking countries developed more sophisticated ideas of news and current affairs. The two approaches moved closer together, overlapped, and finally intermingled. The new ways of news broadcasting aimed to make the news comprehensive and comprehensible. Broadcasters came to believe that a news programme should give the news, the meaning of the news, and relevant comment on the news in whatever ways programme-makers decided were best. A radio or television news programme might start with a bulletin of hard news reports on various events, in summary, or at length. The same programme could then move to a sequence of interviews with people in the news and to reports that were more discursive than in the bulletin. A differently constructed news programme might tell the facts of one event, explain them in another report immediately following, perhaps by a specialist correspondent, and include leading comment from people involved in the event, before dealing in similar ways with the next most important or most interesting story. Many variations are possible. The length of programme and the nature of its parts depend on several factors: on the time of day, shorter news items being more convenient for audiences at busier times; on audience profile in terms of age, sex, and socio-economic group; on programme policy, a talk station favouring longer news than a station mainly for music; and on whatever news is available.


Interviews became important. Broadly, they have two aims: to elicit facts and to seek comments—functions that often merge. Interviews for facts are prominent when newsworthy events have just occurred. Viewers and listeners hear police officers, for example, giving facts about newly committed crimes, or rescuers describing what has happened in accidents and disasters. Interviews for comments involve experts, public figures, and other people in the news. Their purpose is sometimes to explain the significance of events. With public figures who make public policy, the purpose is to press them to justify their decisions. In early broadcasting, such interviews were usually deferential. Interviewers showed well-mannered respect for people in public office. Now, they are as likely to interrogate interviewees. This has caused politicians in democracies to complain that television and radio have supplanted parliament as the forum of national debate: “trial by media”, they say. In turn, broadcasters argue that experience and concern for public image make politicians evasive. In the United States, the sound bite—a cogent, very short comment, used repeatedly in news programmes—is held to have ousted thoughtful exposition, although public figures do explain themselves at length on prime time talk shows. National culture also influences interviewing style, which varies between nations with prevailing cultural trends.


Technology assisted the transition from rigidly separated news and current affairs broadcasting to modern news programming that has abundant material. Difficult-to-use wax discs for recording interviews and reporters’ dispatches gave way on the radio to manageable magnetic tape. On television, cheap, easily edited videotape replaced expensive film that had to be developed before viewers could see it. Improved telephones and landline circuits from distant studios to the news transmission studio encouraged programmes to use their own reporters instead of standard news agency copy. Cumbersome, costly outside broadcast vehicles—mobile studios—sent to the scene of only the biggest stories were superseded by smaller news broadcast vehicles, saloon cars with radio transmission equipment. These can travel more readily, giving radio reporters more opportunities to beam their news directly into the news studio and, if necessary, live into homes and offices. Electronic news gathering (ENG) in television allowed its reporters to do the same with pictures and sound. Communications satellites also improved the quality of pictures and sound from distant places. More news was reported more quickly.

Portable telephones, lightweight video cameras, and portable satellite transponders (devices that both receive and send out signals) have further increased quantity and speed. Reporters send pictures and their accounts of the facts via satellite directly to studios in London, Washington, Paris, Sydney, and all points on the globe. Reporting the news from any location can now be instant.

As a result, editors of news programmes have many more stories to choose from and much more material to illustrate them. Editors first decide which events they would like covered so that reporters with cameras and sound equipment are allocated to them. Editors also receive material on events they did not know were happening or were going to happen. For their programmes, they decide what to use, in what form, how they are to be edited, to what length, in what order, and whether the reports should be live or recorded. They also decide which stories are most important or most interesting, and how their locality, their country, their region, and the world will be presented.

With more news to use, radio and television have many more news programmes than in the days when news travelled slowly. Some stations have news all the time, 24 hours a day. The explosion of news will continue. Events in many parts of the world are under-reported or not reported at all, sometimes because they are too remote, sometimes because of restrictive governments, eager to hide problems, suppress information, and deter reporters. However, political change, the demand for news, and accessible technology combine to break down barriers and to encourage programme producers to explore more and more events in more and more parts of the world.

Some critics say that television often uses pictures simply because they exist or because they are exciting, not because they are important. They argue that editors neglect more important events for which there are no pictures or where the pictures lack action. Others see the situation in a different light: the growth of news means that the world is better informed and, while many events reported are relatively trivial, there are many serious news programmes attending to many significant events.


The expansion of news and current affairs journalism has continued with the emergence of the Internet. Since the early 1990s, when the Internet began to become a mass medium, it has developed into a steadily more important platform for journalism. The Internet is the world’s first truly global news medium, in that online journalism is accessible to anyone, anywhere on the planet, with a personal computer and an Internet connection.

In 1997 there were only 700 online news sites in the world. As of 2007, there were millions, and nearly every news organization has a website. In addition to sites operated by established news organizations, such as the BBC and CNN, there are millions more run by individual journalists, and by what are now called “bloggers”. Blogs are regularly updated online bulletins, often containing news and comment on the issues of the moment. Many are amateurish and ephemeral, read by only a handful of like-minded bloggers. Others, such as that written by Salam Pax, the “Baghdad Blogger”, during the invasion of Iraq in 2003, become essential reading all over the world, supplying the traditional news media with stories and analyses.

Blogging is part of a broader trend towards “citizen journalism”, in which individuals armed with video cameras and mobile phones generate material for traditional news and current affairs outlets. Coverage of the Asian tsunami of Boxing Day 2004, for example, featured many video clips taken by people on the spot, then uploaded to the editorial offices of the BBC and others for incorporation into news bulletins.

The rise of citizen journalism, also known as user-generated content, has benefited traditional news and current affairs broadcasting by making available more of the raw material of news. In general, therefore, the trend has been welcomed by the news media. However, concerns have been raised about the quality controls on this kind of material. How can the accuracy and objectivity of user-generated content be guaranteed, in the absence of professional skills and editorial safeguards? This is an issue that traditional news and current affairs media are now grappling with, in an effort to harness the potential of new technologies like the Internet, while preserving the perceived reliability of their programmes.

The Internet has also fuelled what some observers call a “commentary explosion”, in which more and more of the content of journalism is not factual reportage or balanced analysis and commentary on the news, but rumour, gossip, polemic, and bias. Again, news media face the issue of trying to filter out worthwhile commentary and analysis for inclusion in their news and current affairs programmes. The greatest challenge facing news and current affairs journalism today is not the quantity of material available, or the number of platforms from which journalism can be distributed, but ensuring the quality of what is produced.

Additional material by Brian McNair, Professor of Journalism and Communication, University of Strathclyde. Author, Cultural Chaos: Journalism, News, and Power in a Globalised World.

Contributed By:
John Wilson


Evolutionary Psychology


Evolutionary Psychology, the notion that the human mind is the product of evolution and has therefore developed innate psychological mechanisms that are typical of the human species. This relatively new field of study stands in marked contrast to the standard social science model (SSSM), which has tended to portray the human mind as a general-purpose computer to be programmed by random, culture-specific determinants (the “blank slate” thesis).

This new branch of psychology grew out of developments in the late 20th century in a number of quite disparate disciplines including evolutionary biology, palaeoanthropology, and cognitive psychology. Evolutionary psychologists have used findings from each of these fields of research to argue that a universal human nature lies just below the surface of cultural variability.

While the SSSM emphasizes the flexibility of human learning, social environment, and random cultural processes, evolutionary psychologists believe that such flexibility consists of a number of tendencies to learn particular skills and to do so at various, specific ages. Evolutionary psychologists do not dispute the importance of learning but attempt to explain the process in terms of innate mechanisms. Likewise, evolutionary psychology stresses the importance of culture, but, rather than defining culture as a random force, it sees it as the way in which humans are aided in acquiring skills that potentially enhance fitness and, therefore, the ability to survive longer than others and produce fit offspring.


Evolutionary psychology draws heavily on the Darwinian principles of natural and sexual selection, proposing that these are the mechanisms that have led to the development of modern human behaviour. Natural selection favours characteristics that aid survival and reproduction, while sexual selection favours those traits that help individuals gain access to mates. These processes are believed to have led to the evolution of the modern human species late in the Pleistocene epoch some 200,000 to 10,000 years ago, when our ancestors lived in extended families of hunter-gatherer tribes.

Evolutionary psychology overlaps with and, in part, grew out of sociobiology, which focuses on the biological basis of social behaviour. The former, however, places a greater emphasis on the relationship between evolution and the development of psychological mechanisms, while the latter concentrates on the adaptive significance of social behaviour. Both subject areas, however, draw on studies of other animal species in order to help understand the evolutionary significance of human social behaviour.


As a young science, evolutionary psychology has so far been concerned more with theory than with empirical findings. Despite its current emphasis on theory over data gathering, the field does consist of more than speculation about the relationship between Darwinian theory and human nature, and attempts are being made to produce testable hypotheses. However, critics question the extent to which evidence from this research is trustworthy. Three of the areas of recent hypothesis testing are related to language development, to differences between men and women in mate preference, and to the different ways individuals adapt their behaviour during social exchange.

A Language

Proponents of the evolutionary approach, such as Steven Pinker, have suggested that language is an adaptive trait that develops during a sensitive period of childhood between the ages of one and six years. After surveying a variety of cultures, Pinker argues that, not only is this time-course for language development a universal trait but also that the development of grammatical complexity is a cross-cultural phenomenon that demonstrates a remarkable degree of uniformity. All cultures studied have verbal languages which contain nouns, verbs, word and phrase structures, cases, and auxiliaries: to Pinker, this suggests a universal grammar or “language instinct”. Cross-cultural studies are important to evolutionary psychology since fundamental similarities between widely separated societies may suggest a common evolutionary ancestor.

B Mate Preference

In a similar, but more controversial, vein, cross-cultural studies have been used to elucidate differences between men and women in their choice of a mate. These studies focus primarily on male and female biology. Since women have a reduced period of fertility and since they make a greater investment in reproduction compared to men, evolutionary theory predicts that men will favour youthfulness as an attractive feature in women, while women will favour resources over youthfulness in men. Both predictions are supported by the cross-cultural studies of David Buss, who surveyed over 10,000 people from 37 different cultures. Buss reported that men universally placed physical correlates of fertility (or youthful attractiveness) as the most important feature in a potential mate. In contrast, females rated status and resources as the most important features for a male partner. Evolutionary psychologists claim that this is because during the period when people dwelt on the savannah, a male could produce more surviving offspring by pairing with a young woman with a maximum number of fertile years ahead of her, while a female could make the best of her childbearing years by choosing a male who, through his power, network of allegiances, and resources, could give her offspring the best chance of survival; such a male is likely to be older than herself.

C Cognition and Social Exchange

In addition to cross-cultural surveys, a number of evolutionary psychologists have begun to make use of problem-solving methods to uncover constraints and abilities that people demonstrate under particular social circumstances. If the central, fitness-enhancing premise of evolutionary psychology is accurate, then we would expect people to be better at learning important social tasks than other, socially irrelevant, ones. Leda Cosmides and John Tooby used decision-making tasks to study the relationship between cognitive (understanding) abilities and social exchange. Their studies have demonstrated that people who are unsuccessful in completing a task of logical reasoning are, however, suddenly able to do so when the task is redefined as a matter of social exchange (for example, the detection of “cheats”). From these experiments, Cosmides and Tooby propose that the reasoning procedures that people have developed as a species are evidence of evolved information-processing capacities that are specific to social exchanges, rather than the hallmark of a highly logical, problem-solving mind.
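The contrast Cosmides and Tooby found can be made concrete. The logical-reasoning task widely associated with their studies is the Wason selection task (the task name, the rules, and the card values below are illustrative assumptions, not details given in this article). Both versions of the problem share exactly the same logical form, "if P then Q", under which only the P card and the not-Q card can falsify the rule, yet most people solve only the cheat-detection version. A minimal sketch of that shared form:

```python
def must_turn(cards, is_p, is_not_q):
    """For a rule 'if P then Q', only cards showing P or showing
    not-Q can falsify the rule, so only those need checking."""
    return [c for c in cards if is_p(c) or is_not_q(c)]

# Abstract version: "if a card shows a vowel, its other side is even".
# Cards show: A, K, 4, 7. Logic requires turning A (a P card)
# and 7 (a not-Q card); most subjects fail this version.
abstract = must_turn(
    ["A", "K", "4", "7"],
    is_p=lambda c: c in "AEIOU",
    is_not_q=lambda c: c.isdigit() and int(c) % 2 == 1,
)

# Social-exchange ("cheat detection") version:
# "if someone is drinking beer, they must be over 18".
# Same form, but most subjects now answer correctly.
social = must_turn(
    ["beer", "cola", "25", "16"],
    is_p=lambda c: c == "beer",
    is_not_q=lambda c: c.isdigit() and int(c) < 18,
)

print(abstract)  # ['A', '7']
print(social)    # ['beer', '16']
```

Formally, both tasks demand the same two checks; the experimental finding is that framing the rule as a social contract, where a violator is a "cheat", makes those checks intuitive.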


Some of the implications of these findings have led to concerns on social and political grounds. In particular, the notion that males and females differ on some inherently psychological dimension has alarmed some social scientists, social psychologists, and feminists who argue that evolutionary psychology seeks to maintain an unfair gender role status quo, and reinforces gender stereotyping through a form of biological determinism. While evolutionary psychologists point out that findings based on nature should not preclude an appreciation of the importance of nurture, and that people can always intentionally alter the relationship between the sexes by social and educational means, they argue nevertheless that social reformers will not make their task any easier by ignoring the findings from evolutionary psychology. The problem here is that it suggests another form of mental programming based on current genetic trends, which could imply that people’s behaviour is predestined. It also implies that men and women have not evolved socially beyond their animal origins.

Having drawn upon cognitive psychology, evolutionary psychology now faces the task of feeding back into other areas of the discipline. Clearly neurological and comparative branches of psychology are likely to be sympathetic to a Darwinian approach. The area in which evolutionary psychology could potentially have the greatest impact, however, is social psychology, but this field as a whole has not been unequivocally favourable to the evolutionary approach. This is probably owing, in part, to the ways in which Darwinian theory was grossly misused in the first half of the 20th century, when eugenics (the selective breeding of human beings) and Fascist theories of a master race were based on pseudoscientific ideas.

Contributed By:
Lance Workman



Information Technology in Education


Information Technology in Education, effects of the continuing developments in information technology (IT) on education.

The pace of change brought about by new technologies has had a significant effect on the way people live, work, and play worldwide. New and emerging technologies challenge the traditional process of teaching and learning, and the way education is managed. Information technology, while an important area of study in its own right, is having a major impact across all curriculum areas. Easy worldwide communication provides instant access to a vast array of data, challenging assimilation and assessment skills. Rapid communication, plus increased access to IT in the home, at work, and in educational establishments, could mean that learning becomes a truly lifelong activity—an activity in which the pace of technological change forces constant evaluation of the learning process itself.


From the early days of computers, the United Kingdom has recognized the need to develop a national strategy for the use of IT in education. England, Wales, Scotland, and Northern Ireland have developed separate but similar plans. The IT strategy for schools was initially developed in England, Wales, and Northern Ireland through the government-funded Microelectronics Education Programme, which had a research and development role from 1981 to 1986. Then followed the Microelectronics Education Support Unit, which provided professional support to local education authorities (LEAs). This merged in 1988 with the Council for Educational Technology to become the National Council for Educational Technology (NCET), with the wider remit of evaluating and promoting the use of new technologies in education and training. Scotland set up the Scottish Council for Educational Technology (SCET) to support developments for Scottish schools. NCET was a registered charity, funded primarily by the Department for Education and Employment (DfEE, retitled as the Department for Education and Skills or DfES after the 2001 general election). In April 1998 it was given a new role as the British Educational Communications and Technology Agency (BECTA).

In 1988 the Conservative government set up the Information Technology in Schools (ITIS) initiative to oversee expenditure in this area. The initial strategy focused on encouraging teacher training in new technologies and the provision of hardware in schools. Grants were made to LEAs; before obtaining the grant, each LEA was required to produce a policy statement and a five-year plan for the development of IT in its area. Different but similar initiatives were developed in Scotland, Wales, and Northern Ireland, with the general aim of stimulating schools and local authorities to support curriculum and management use of IT. Of substantial importance across the whole of the United Kingdom was the inclusion of IT as an essential component of the national curriculum for every student aged 5 to 16. The curriculum identifies a core set of IT capabilities and stresses that these should be developed by applying them across subject areas.

Grants to schools for IT development in England ceased in 1994. From 1994 to 1997 government strategy was based on providing information and advice to schools and stimulating the purchase of newer technologies. Following legislation in 1988, schools and colleges became increasingly autonomous in making their own purchasing and staffing decisions. The government was concerned to ensure the growth of viable and appropriate commercial markets for new IT products for schools and colleges. Through a number of NCET-managed intervention strategies, it stimulated specific areas, for example, the introduction of CD-ROMs in schools.

Between 1991 and 1995 some £12 million of government funding was made available through NCET for the purchase of CD-ROM systems by schools and the development of curriculum materials. This strategy resulted in over 90 per cent of secondary schools and more than 30 per cent of primary schools in England having access to CD-ROM systems, and in the development of an independent market for CD-ROM hardware and software for schools. Similar initiatives of varying scales and technologies, including portable computers for teachers, communications technologies, multimedia desktop computers, satellite technologies, integrated learning systems, and libraries, have all contributed to keeping UK schools up to date with changes in technology. Research conducted by NCET indicated that IT changes both what people learn and how they learn it.

After Labour came to power in May 1997, there was a marked change in government strategy in information and communications technology (ICT) for schools and colleges. Much of this strategy was based on developing a National Grid for Learning (NGfL). The concept was a mosaic of networks and content providers linked together to create a nationwide learning network for schools, colleges, libraries, and, eventually, homes. To achieve this the government set targets for the year 2002: all schools should be connected to the NGfL, all teachers should be competent and confident to teach with ICT, all students should leave school with a good understanding of ICT, and all transactions between central and local government and schools should be electronic. To support this strategy the DfEE provided £50 million in 1998-1999, matched by LEAs and schools. Scotland and Northern Ireland developed similar initiatives, and by the end of 1999 all 1,300 schools in Northern Ireland were linked to the Internet. Plans were also made for substantial teacher-training programmes across the United Kingdom, costing over £200 million; the scheme was run by the New Opportunities Fund (NOF), which reported in 2000 that, to that date, almost half of the teachers in England had registered for ICT training.

A IT in Schools

In England in 1996, for example, there was an average of 96 computers per secondary school and 13 per primary school. Expenditure on IT by schools steadily increased from £20 million per year in 1984 to £132 million in 1994, with well over half coming from schools’ budgets and the rest from central and local government sources. Despite this positive picture, hardware provision varied widely, with some schools having a computer-to-pupil ratio of 1 to 3 while others had a ratio of 1 to 60. The average computer-to-pupil ratio in 1995-1996 was 1 to 19 in primary schools and 1 to 9 in secondary schools. LEAs were set a target for the year 2001-2002 of 1 computer to 11 pupils in primary schools, and 1 to 7 in secondary schools.

B IT in Further Education

The provision of hardware and software resources varies substantially in further education (FE) colleges. Learning resource centres now often contain learning materials published on CD-ROM, and most colleges are connected to the Internet. These technologies have the potential to develop “virtual campuses” and thus increase student access and participation. Although there is a trend towards individualized programmes of study for students, little use is made as yet of computer-managed learning. A programme of training in educational technology for FE staff called the Quilt initiative was launched in February 1997 as a joint initiative between NCET, the Further Education Development Agency, the DfEE, and FE colleges.

C IT in Higher Education

All UK universities are connected to the Internet via the academic network known as JANET. A high-speed broadband version of this network, SuperJANET, is being developed. It currently links 60 universities and enables high-quality moving video to be networked for remote teaching and research purposes. In 1993, through the Teaching and Learning Technology Programme, the Higher Education Funding Council provided over £11 million for 76 projects to develop software materials to support the university curriculum. Use of such materials is encouraged by 20 university centres set up under the Computers in Teaching Initiative. The use of the Internet and CD-ROM to access information continues to grow. In 2000 the Higher Education Funding Council for England (HEFCE) announced a new project, the ‘e-University’, to develop web-based learning for higher education institutions.

D IT in Training

In 1994 research by a group called Benchmark found that the use of computer-based training in public and private organizations in the United Kingdom had grown from 29 per cent in 1991 to 60 per cent in 1994. The use of other educational technologies was also evident: 12 per cent were using interactive video, 6 per cent CD-I (compact disc-interactive), and 6 per cent CD-ROM.


As part of the IT curriculum, learners are encouraged to regard computers as tools to be used in all aspects of their studies. In particular, they need to make use of the new multimedia technologies to communicate ideas, describe projects, and order information in their work. This requires them to select the medium best suited to conveying their message, to structure information in a hierarchical manner, and to link together information to produce a multidimensional document.

In addition to being a subject in its own right, IT has an impact on most other curriculum areas, since the National Curriculum requires all school pupils from 5 to 16 years to use IT in every compulsory subject. Science uses computers with sensors for logging and handling data; mathematics uses IT in modelling, geometry, and algebra; in design and technology, computers contribute to the pre-manufacture stages; for modern languages, electronic communications give access to foreign broadcasts and other materials; and in music, computers enable pupils to compose and perform without having to learn to play traditional instruments. For those with special educational needs, IT provides access to mainstream materials and enables students to express their thoughts in words, designs, and activities despite their disabilities.


Using IT, learners can absorb more information and take less time to do so. Projects investigating the use of IT in learning demonstrate increased motivation in children and adults alike. In some cases it can mean success for people who have previously always failed. Learners may be more productive, challenge themselves more, be bolder, and have more confidence.


Another use of IT in learning is currently undergoing trials in the United Kingdom: integrated learning systems (ILS). These involve learning through rather than about IT, by providing structured, individualized tuition in numeracy and literacy. Using the system for short, regular sessions, learners progress through the programme at a steady but challenging rate. The system keeps a progress record, assesses the learner’s rate of performance, and produces reports for teachers, learners, and parents. This approach provides highly structured, targeted, and assessed learning for short periods of time.

Pupils and teachers alike find the individual ILS reports helpful and motivating, and teachers have never before had such detailed and accurate analysis of children’s abilities. The learning gains demonstrated so far have been encouraging. The multimedia attributes of the system make it possible to demonstrate complex concepts, and students can proceed at their own pace free from the pressure of their peers. Similar trials are taking place in Australia, Israel, and New Zealand. Integrated learning systems are used extensively in the United States.


The use of communication tools such as e-mail, fax, and computer and video conferencing overcomes barriers of space and time and opens new possibilities for learning. The use of such technology is increasing, and it is now possible to deliver training to a widely dispersed audience by means of on-demand two-way video over terrestrial broadband networks. The vocational training sector in Britain has been supported by developments in this area through projects funded by the Education and Employment Department and the European Commission’s Socrates and Leonardo programmes. Many schools have gained experience of communications through e-mail and electronic conferencing systems that run over the telephone network. The Education Department’s Superhighways Initiative comprises 25 projects—involving over 1,000 UK schools and colleges—that focus on the application of electronic communications in schools and colleges.

Schools and colleges are making increasing use of the Internet. In 1997 all FE colleges, most secondary schools, and some primary schools had access to the Internet, but it was expected that all schools would be online by 2002. Schools use the Internet both to access materials, people, and resources, and to display their own Web pages created by teachers and students. The use of videoconferencing is growing slowly and has helped some students learn foreign languages by talking directly to other students abroad. In January 2000, it was announced that teachers taking part in information technology training schemes would receive a subsidy of up to £500 to buy computing equipment.

A Computer-Based Management Information Systems

Following a government initiative with LEAs in 1987, schools have made increasing use of computers for administration. The 1988 Education Act gave schools the responsibility for budgets, teacher and pupil records, and many other day-to-day administrative tasks. Many LEAs integrated their schools’ administrative systems with their own financial systems and provided extensive training and support for this. Between 1987 and 1997 schools and LEAs spent over £600 million on equipment and support. This has led to increasingly sophisticated uses of computer-based management information systems (CMIS), and the trend continues as communication technologies offer the opportunity for schools, LEAs, and government to exchange and compare data easily.


Education in the United States is organized at the state and school district level, but significant funding for IT in schools is provided through federal programmes. While all schools make some use of computers, the level of that use varies widely. Between 15 and 20 per cent of schools make extensive use of integrated learning systems. Multimedia computers have been used by some schools to develop pupils’ skills in producing essays containing text, sound, and still and moving images. The proposed extension of electronic communications systems, such as the Internet to all “K-12” schools (kindergarten through grade 12, that is, up to age 18), has given rise to a number of pilots investigating how the education system could capitalize on the opportunities offered.

In his paper of February 22, 1993, Technology for America’s Economic Growth, President Clinton declared that in teaching there should be an emphasis on high performance. He announced new public investment to support technology with the aim of increasing the productivity of teaching and learning in schools.


In Australia, the range and quality of IT-supported learning are comparable to that in Britain. A number of technology-led initiatives have been funded by federal and state departments. The federal government has identified the emerging information age as a major opportunity for Australian industry and society in general. A national strategy has been announced that is to explore ways of networking schools and colleges.


Each individual provincial government in Canada has responsibility for running its schools, colleges, and universities. Although these may vary in their approach to education, they are all making substantial investments in IT. In particular, they are developing their use of communication technologies to support their school, college, and university systems. Provinces such as British Columbia and Nova Scotia have invested in extensive networks, which offer distance-learning programmes to overcome geographical barriers and to develop school and community use of technology. National involvement in this, and the development of the national “SchoolNet” network, is supported by Industry Canada, the federal industry department.


Radical technological developments in miniaturization, electronic communications, and multimedia hold the promise of affordable, truly personal, mobile computing. The move to digital data is blurring the boundary between broadcasting, publishing, and telephony by making all these media available through computer networks and computerized televisions (see Digital Broadcasting and Electronic Publishing). These developments are not only giving learners access to vast libraries and multimedia resources but also live access to tutors and natural phenomena throughout the world.

As technology provides easier access for students to material previously supplied by the teacher, it enhances the role of the teacher as manager of the learning process rather than source of the content. Easier access for students to information, tutorials, and assessment, together with the use of IT tools such as word processors and spreadsheets, will help them learn more productively. There will be a clear split in the way schools and colleges organize learning. In areas of the curriculum that are structured and transferable to electronic format, students will work at different levels and on different content. Removing the burden of individualized learning from schools and colleges will free time for teachers to concentrate on the many other learning activities that require a teacher as the catalyst.

Developments in communications technology and the increase in personal ownership of technology will allow learning in schools and colleges to integrate with learning elsewhere. The boundaries between one institution and another and between institutions and the outside world will become less important. Crucially, technology will remove the barrier between school and home.

The momentum of the technological revolution creates rapid and disruptive changes in the way in which people live, work, and play. As the pace of technological advance shows no sign of slowing, the challenge is in learning to adapt to changes with the minimum of physical and mental stress. To make this possible, the learning systems and those who manage them must prepare people to work with new technologies competently and confidently. They need to expect and embrace constant change to skill requirements and work patterns, making learning a natural lifelong process.

However disturbing this challenge may at first seem, the nature of technology is that it not only poses problems but also offers solutions—constantly creating opportunities and providing new and creative solutions to the process of living and learning.

Contributed By:
Margaret Bell

Reviewed By:
Peter Avis



Degree, Academic


Degree, Academic, title granted by a college or university, usually signifying completion of an established course of study. Honorary degrees are conferred as marks of distinction, not necessarily of scholarship.


Institutions of higher learning have granted degrees since the 12th century. The word itself was then used for the baccalaureate and licentiate, the two intermediate steps that led to the certificates of master and doctor, requisites for teaching in a medieval university. During the same period, honorary degrees were sometimes conferred by a pope or an emperor. In England, the Archbishop of Canterbury, by an act passed during the reign of King Henry VIII, acquired the authority to grant honorary Lambeth degrees.

During the Middle Ages, the conferring of a doctorate also allowed the recipient to practice the profession in which the certificate was awarded; this condition still holds true for the legal and medical professions in European countries, such as France, in which the government controls the universities.


In Germany and at most Continental universities, only the doctor’s degree is conferred, except in theology, in which the licentiate, or master’s degree, is also presented. Granting of the doctorate is contingent upon the acceptance of a dissertation and the passing of examinations. The baccalaureate is not usually a university degree in Europe. In France, it is acquired by passing a state examination at the completion of secondary education; the only university-conferred baccalaureate is that awarded by the faculty of law.

Most British universities grant the bachelor’s degree after the satisfactory completion of a three- or four-year course. Some, such as Oxford and Cambridge, hold honours examinations; at Cambridge these are known as triposes. The master’s degree in arts or science is granted after a further period of residence and study and the payment of fees. Other English universities grant the master’s degree only after a candidate has passed a series of examinations and presented an approved thesis. Doctorates (Ph.D. or D.Phil.) are awarded for individual research contributions in the arts or sciences. Postgraduate work leads to the writing of a thesis and the passing of oral (viva voce) and written examinations. Honorary degrees, such as the D.Litt., are sometimes given to prominent public figures.

In Australia, a bachelor’s degree precedes a master’s degree, with the latter being earned after an additional year or two of study. Degree courses vary between three and six years of study, with a doctorate requiring a further two to five years. The most commonly granted degrees in the United States are the BA, or bachelor of arts, and the B.Sc., or bachelor of science, both generally given after the completion of a four-year course of study and sometimes followed by a mark of excellence, such as cum laude (“with praise”); magna cum laude (“with great praise”); or summa cum laude (“with highest praise”). The master’s degree is granted after one or two years of postgraduate work and may require the writing of a thesis or dissertation. The doctorate requires two to five years of postgraduate work, the writing of a thesis, and the passing of oral and written examinations.


The academic dress worn at degree-granting ceremonies consists of a long, full-cut gown and a mortarboard, a stiff square-shaped cap with a tassel. In the United Kingdom, a hood lined with coloured silk indicating the graduate’s institution is worn. More ornate gowns are worn for the D.Phil. and other higher degrees, and for ceremonies such as Encaenia (the Oxford ceremony at which honorary degrees are granted). Full subfusc is still required at Oxford for examinations; this black-and-white outfit consists of dark suits for men and a white blouse and black skirt for women, worn with a gown and mortarboard.



Further Education


Further Education (FE), the tertiary sector of education, following primary and secondary education, and sometimes preceding higher education. Whereas in the rest of Europe the tertiary sector is generally confined to vocational education and training, in the United Kingdom FE embraces both academic and vocational or professional study programmes. FE in the United Kingdom has no direct equivalent in other parts of the world. Other systems tend towards separation of the vocational system from schools and universities.

Most full-time students in FE study in further education colleges between the ages of 16 and 19, but the majority of FE college students overall are adults and study part-time. FE is often regarded as the “college sector” which provides study opportunities between school and university. However, the boundaries between further education colleges and higher education institutions are becoming increasingly blurred.

FE offers study opportunities for those who need help with basic skills: literacy, numeracy, and key skills at a foundation level. The majority of students are following courses at levels 1 to 3 (foundation, intermediate, and advanced), and there are more A-level students in colleges than in school sixth forms. About 20 per cent of FE colleges also offer some higher education, and several universities (generally the former polytechnics) offer some FE.


FE in the United Kingdom is a distinctly modern phenomenon. It has its beginnings in the mechanical and technical institutes of the early 19th century. The first institute was formally constituted in Edinburgh in 1821. The subsequent growth in institutes was phenomenal and was matched by the development of the first national examining bodies from the time of the RSA Examinations Board in 1856. The RSA has now merged with other examinations boards to become the OCR Awarding Body, one of several awarding bodies that also include City and Guilds and Edexcel.

From the early 20th century until the 1960s, the UK had a tripartite system of schools: grammar, technical, and secondary modern. The role of FE was primarily as a provider of evening study programmes in the local technical college. Before 1940, the technical college was a place of vocational education for the employed. The end of World War II and the demand for new skills meant further education concentrated on day release from work and evening classes. A new era of partnership with industry began, developing through the industry training boards and levy systems of the 1960s and 1970s and the Manpower Services Commission (MSC) of the 1980s to the Training and Enterprise Councils of the 1990s.

In the 1960s, with the arrival of comprehensive schools, many local education authorities (LEAs) wanted to offer second-chance opportunities to students at 16 and evening classes to adult students. The tertiary college was born, and many LEAs reorganized in order to set up tertiary colleges for all post-16 students and adults. The FE sector as we know it was created, although the accompanying reorganization of schools to cater for just 11- to 16-year-olds was not carried through everywhere, and the system is therefore now diverse.

FE today is provided by various institutions, including general further education colleges, agricultural and horticultural colleges, art and design colleges, and other specialist colleges; sixth forms in secondary schools, sixth-form colleges (England and Wales only), and universities. In 1999-2000 there were approximately 3.1 million students in FE in England; 22 per cent were full-time students and 78 per cent part-time. The government has actively encouraged the increase in FE provision.


The structures of funding and quality assurance are different within England, Wales, Scotland, and Northern Ireland. In England, Wales, and Scotland, colleges of further education, tertiary colleges, and sixth-form colleges, which previously received grants directly through their LEAs, have since April 1993 had the autonomy to run their own affairs within the further education sector. Northern Ireland followed suit in 1998. Internal organization, as well as finance and management issues (including pay and conditions of service contracts), are matters for each college to determine. All have governing bodies, which include representatives of the local business community and many courses are run in conjunction with local employers. In April 2001 a national Learning and Skills Council (LSC) was established in England, taking on the funding responsibilities of the English Further Education Funding Council (FEFC) and the functions of the Training and Enterprise Councils (TECs).


In the United Kingdom, the aim is to establish a qualifications framework which includes both academic and vocational qualifications, overcoming traditional barriers and promoting greater flexibility within the system. The intention is that individuals and employers will establish systems to allow employees to attain progressively higher levels of skill. There is close cooperation between the regulatory bodies in England, Scotland, Wales, and Northern Ireland. In England, the Qualifications and Curriculum Authority (QCA) is responsible for a comprehensive qualifications framework, including accreditation for National Vocational Qualifications (NVQs). This accreditation also extends to Wales and Northern Ireland. In Wales, the Qualifications, Curriculum, and Assessment Authority (ACCAC) performs a similar role to the QCA, while the Welsh Joint Education Committee offers A-level and GCSE assessment. In Scotland, the Scottish Qualifications Authority is a regulatory and awarding body. The main thrust of the framework there is to offer parity of esteem for all qualifications across the system and to include the new competence-based system that has been devised by industry.

In September 2000 a new curriculum, known as “Curriculum 2000”, was introduced. This gives students the opportunity to study more subjects (normally four or five in one year) at AS (Advanced Subsidiary) level; they can then specialize in two to four subjects in the second year, at A2 level. The former GNVQs have been renamed vocational A levels. The key skills of numeracy, literacy, and IT (information technology) are also examined within the curriculum at different levels.


College systems overseas tend to concentrate exclusively on vocational provision, in contrast to the UK FE college system, which combines academic and vocational provision. Other countries have priorities similar to those of the United Kingdom, including curriculum, qualification, and funding reforms; the decentralization of decision-making; the encouragement of links between industry and education; quality and standards; guidance and counselling; progression; and non-completion. Generally, vocational qualifications have the same low status overseas as in the United Kingdom; however, governments everywhere are aiming to change this perception.

Contributed By:
British Training International





Literacy


Literacy, the ability to read and write at a level that enables an individual to operate and progress in the society in which they live. It is sometimes further defined as the ability to decode written or printed signs, symbols, or letters combined into words. In 1958 UNESCO defined an illiterate person as someone “who cannot with understanding both read and write a short simple statement on his/her everyday life”.


Most literacy surveys use this basic definition, particularly surveys of literacy levels in developing countries. Based on this definition, about four in five of the world’s population over 15 years of age would be considered literate. According to information released by UNESCO in 2003, more women are now literate than ever before.

The proportion of adults in the world who are illiterate fell from 22.4 per cent in 1995 to 20.3 per cent in 2000, or from about 872 million adults to 862 million. If this trend continues, the number of illiterate adults in the world should have dropped to 824 million, or 16.5 per cent, by 2010. The largest fall in illiteracy has been in Africa and Asia.

Although women continue to make up 2 in every 3 of the illiterate adults in the world, the number of illiterate women is falling and the percentage of illiterate women has dropped from 28.5 per cent to 25.8 per cent. This tendency is particularly marked in Africa, where, for the first time, most women are now literate. Although discrimination is one major reason why girls and women lack access to education, countries in the developing world increasingly recognize the benefits of providing access to education for girls and women, particularly as the children of educated women are more likely to become educated themselves.

Progress is slow, however, and about 20 per cent of adults remain illiterate. Worryingly, at the present rate the number of illiterate adults is likely to fall by only about 5 per cent by the year 2015. Just as worryingly, the United Nations Children’s Fund (UNICEF) reports that 121 million children in the world are not in school, and most of these are girls.

Although the Universal Declaration of Human Rights in 1948 and the 1989 Convention on the Rights of the Child established education as a basic human right, about 1 in 5 adults in the world were unable to read and write at the beginning of the 21st century.


Far more than basic literacy as defined by UNESCO is necessary for any adult living in an industrialized society. Recognition of this has led to the use of more complex definitions of literacy in most industrialized countries, and to the use of the term “functionally illiterate” rather than “illiterate”. The term usually refers to adults who are unable to use a variety of skills beyond the reading or writing of a simple sentence. In the industrialized world someone is considered to be functionally literate if they can “use reading, writing, and calculation for his or her own and the community’s development”, rather than if they can merely read and write to some limited extent.

There have been rather fewer surveys of functional illiteracy in industrialized countries than of illiteracy in the developing world. However, beginning in 1994 governments, national statistical agencies, research institutions, and the Organisation for Economic Co-operation and Development (OECD) undertook a large-scale assessment of the literacy skills of adults, called the International Adult Literacy Survey (IALS). The IALS considered literacy in three areas: the first, “prose literacy”, focused on reading and interpreting prose in newspaper articles, magazines, and books; the second area, “document literacy”, focused on identifying and using information located in documents, such as forms, tables, charts, and indexes; the third, “quantitative literacy”, considered how well adults could apply numerical operations to information contained in printed material, such as a menu, a chequebook, or an advertisement.

The IALS considered how well adults in industrialized countries could use information to function in society and the economy rather than just classifying adults as either functionally literate or functionally illiterate. This definition was much more about the ability to operate in a print-based industrialized world than about the simple ability to decode print.

Nine countries—Canada (English- and French-speaking populations), France, Germany, Ireland, the Netherlands, Poland, Sweden, Switzerland (German- and French-speaking regions), and the United States—took part in the IALS in 1994. Two years later, in 1996, five more areas—Australia, the Flemish Community in Belgium, Great Britain, New Zealand, and Northern Ireland—administered the IALS assessment tests to samples of their adults. Finally, Chile, the Czech Republic, Denmark, Finland, Hungary, Italy, Norway, Slovenia, and the Italian-speaking region of Switzerland took part in the IALS in 1998.

The IALS established five levels of literacy:

• Level 1 indicated very poor skills. Adults at this level would be unable, for example, to determine the correct amount of medicine to give a child from information printed on the package.
• Level 2 adults could deal only with material that was simple and clearly laid out, and with tasks that were not very complex; they could read, but tested poorly.
• Level 3 was considered the minimum for coping with the demands of everyday life and work in an industrialized country. It suggests the approximate level of literacy required for successfully completing secondary education and moving on to higher education. At this level an adult should be able to integrate several sources of information and solve more complex problems.
• Levels 4 and 5 described adults with higher-order information-processing skills.

What is clear from the IALS is that there are considerable differences in the average level of literacy both within and between countries. In every country some adults had low-level literacy skills, although the proportion varied from country to country. Common factors that influenced literacy level were home background and previous educational attainment. Recently, however, researchers and other experts have questioned the method used in the IALS and have suggested that it may not give an accurate picture of literacy in industrialized countries.
Contributed By:
Alan Wells





Adolescence, stage of maturation between childhood and adulthood. The term denotes the period from the beginning of puberty to maturity; it usually starts at about age 14 in males and age 12 in females. The transition to adulthood varies among cultures, but it is generally defined as the time when individuals begin to function independently of their parents.


Dramatic changes in physical stature and features are associated with the onset of pubescence. The activity of the pituitary gland at this time results in the increased secretion of hormones, with widespread physiological effects. Growth hormone produces a rapid growth spurt, which brings the body close to its adult height and weight in about two years. The growth spurt occurs earlier among females than males, reflecting the fact that females mature sexually earlier than males. Attainment of sexual maturity in girls is marked by the onset of menstruation and in boys by the production of semen. The main hormones governing these changes are androgen in males and oestrogen in females, substances also associated with the appearance of secondary sex characteristics: facial, body, and pubic hair and a deepening voice in males; pubic and body hair, enlarged breasts, and broader hips in females. Physical changes may be related to psychological adjustment; some studies suggest that earlier-maturing individuals are better adjusted than their later-maturing contemporaries.

See also Growth, Human.


No dramatic changes take place in intellectual functions during adolescence. The ability to understand complex problems develops gradually. The French psychologist Jean Piaget determined that adolescence is the beginning of the stage of formal operational thought, which may be characterized as thinking that involves deductive logic. Piaget assumed that this stage occurs among all people regardless of educational or related experiences. Research evidence, however, does not support this hypothesis; it shows that the ability of adolescents to solve complex problems is a function of accumulated learning and education.


The physical changes that occur at pubescence are responsible for the appearance of the sex drive. The gratification of sex drives is still complicated by many social taboos, as well as by a lack of accurate knowledge about sexuality. Since the 1960s, however, sexual activity has increased among adolescents; recent studies show that almost 50 percent of adolescents under the age of 15 and 75 percent under the age of 19 report having had sexual intercourse. Despite their involvement in sexual activity, some adolescents are not interested in, or knowledgeable about, birth-control methods or the symptoms of sexually transmitted diseases. Consequently, the rate of illegitimate births and the incidence of venereal disease are increasing.


The American psychologist G. Stanley Hall asserted that adolescence is a period of emotional stress, resulting from the rapid and extensive physiological changes occurring at pubescence. The German-born American psychologist Erik Erikson saw development as a psychosocial process continuing throughout life.

The psychosocial task of adolescence is to develop from a dependent to an independent person, whose identity allows the person to relate to others in an adult fashion (intimacy). The occurrence of emotional problems varies among adolescents.

See also Child Psychology.



Educational Psychology


Educational Psychology, field of psychology concerned with the development, learning, and behaviour of children and young people as students in schools, colleges, and universities. It includes the study of children within the family and other social settings, and also focuses on students with disabilities and special educational needs. Educational psychology is concerned with areas of education and psychology which may overlap, particularly child development, evaluation and assessment, social psychology, clinical psychology, and educational policy development and management.


In the 1880s the German psychologist Hermann Ebbinghaus developed techniques for the experimental study of memory and forgetting. Before Ebbinghaus, these higher mental processes had never been scientifically studied; the importance of this work for the practical world of education was immediately recognized.

In the late 1890s William James of Harvard University examined the relationship between psychology and teaching. James, who was influenced by Charles Darwin, was interested in how people’s behaviour adapted to different environments. This functional approach to behavioural research led James to study practical areas of human endeavour, such as education.

James’s student Edward Lee Thorndike is usually considered to be the first educational psychologist. In his book Educational Psychology (1903), he claimed to report only scientific and quantifiable research. Thorndike made major contributions to the study of intelligence and ability testing, mathematics and reading instruction, and the way learning transfers from one situation to another. In addition, he developed an important theory of learning that describes how stimuli and responses are connected.


Educational psychology has changed significantly over the 20th century. Early investigations of children as learners and the impact of different kinds of teaching approaches were largely characterized by attempts to identify general and consistent characteristics. The approaches used varied considerably. Jean Piaget, for example, recorded the development of individuals in detail, assessing changes with age and experience. Others, such as Robert Gagné, focused on the nature of teaching and learning, attempting to lay down taxonomies of learning outcomes. Alfred Binet and Cyril Burt were interested in methods of assessing children’s development and identifying those children considered to be of high or low general intelligence.

This work led to productive research which refined the theories of development, learning, instruction, assessment, and evaluation, and built up an increasingly detailed picture of how students learn. Educational psychology became an essential part of the training of teachers, who for several generations were instructed in the theories emanating from its research to help train them in classroom teaching practice.

A Changing Approaches

Recently the approach of educational psychology has changed significantly in the United Kingdom, as has its contribution to teacher education. In part these changes reflect political decisions to alter the pattern of teacher training, based on the belief that theory is not useful and that “hands-on” training is preferable. However, the discipline and teacher education have each also been changing of their own accord. Moving away from the emphasis on all-encompassing theories, such as those of Jean Piaget, Sigmund Freud, and B. F. Skinner, the concerns of educational psychologists have shifted to practical issues and problems faced by the learner or the teacher. Consequently, rather than, for example, impart to teachers in training Skinner’s theory of operant conditioning, and then seek ways of applying it in classrooms, educational psychologists have tended to begin with the practical issues—how to teach reading; how to differentiate a curriculum (a planned course of teaching and learning) across a range of children with differing levels of achievement and needs; and how to manage discipline in classrooms.

Theory-driven research increasingly suggested that more elaborated conceptions of development were required. For example, the earlier work on intelligence by Binet, Burt, and Lewis Madison Terman focused on the assessment of general intelligence, while recognizing that intellectual activity included verbal reasoning skills, general knowledge, and non-verbal abilities such as pattern recognition. More recently the emphasis has shifted to accentuate the differing profiles of abilities, or “multiple intelligences” as proposed by the American psychologist Howard Gardner, who argues that there is good evidence for at least seven, possibly more, intelligences including kinaesthetic and musical as well as the more traditionally valued linguistic and logico-mathematical types of intelligence.

There has also been a shift in emphasis from the student as an individual to the student in a social context, at all levels from specific cognitive (thinking and reasoning) abilities to general behaviour. For example, practical intelligence and its links with “common sense” have been addressed and investigations made into how individuals may have relatively low intelligence as measured by conventional intelligence tests, yet be seen to be highly intelligent in everyday tasks and “real-life” settings. Recognition of the impact of the environment on a child’s general development has been informed by research on the effects of poverty, socio-economic status, gender, and cultural diversity, together with the effects of schooling itself. Also, the emphasis has changed from one of regarding differences in students’ performance on specific tasks as deficits compared with some norm, to an appreciation that deficits in performance may reflect unequal opportunity, or that differences may even reflect a positive diversity.

It is now apparent that there are also important biological factors determined by a child’s genetic make-up and its prenatal existence, as well as social factors concerned with the family, school, and general social environment. Because these various factors all interact uniquely in the development of an individual, there are limitations in the possible applicability of any one theory in educational psychology.


Professional educational psychologists (EPs) draw upon theory and research from other disciplines in order to benefit individual children, their families, and educational institutions, particularly schools, through the following activities:

A Individual Children

An EP may be asked to advise a parent on how to deal with a pre-school child with major temper tantrums; to assess a young child with profound and multiple disabilities; to advise teachers on the nature of a 7-year-old’s reading difficulties; to advise teachers and parents on an adolescent’s problematic behaviour; to undertake play therapy with an 8-year-old who has been sexually and physically abused; or to give an adolescent counselling or psychotherapy.

In each case there is an assessment to identify the nature of the problem, followed by an intervention appropriate to this analysis. The assessment may include the use of standardized tests of cognitive abilities, not necessarily to derive an Intelligence Quotient (IQ) but to investigate a range of aspects of intellectual development; informal assessment and standardized tests of attainment (such as reading and spelling); interviews; observation of the child in class, or with parents or friends; or methods designed to understand the child’s view of their world, including play and structured pictures and tasks where the child arranges the materials to represent their own views of family, or other social arrangements. The interventions (planned procedures) may be equally wide-ranging. In some cases the EP will try to help key adults to understand the child and the nature of the problem. In other cases, more direct advice may be given on how to handle disturbing aspects of a child’s behaviour. In other instances the EPs may advise or produce a specific programme of work, counselling, or behaviour change, which they might implement directly, or they may advise on and monitor the practices of teachers and parents.

In some instances the main basis for advice might be evidence obtained from research on child development; evidence on intellectual development and its assessment in ethnic minority populations; theories of learning and instruction as applied to helping a child with literacy difficulties; or theories of counselling or psychotherapeutic intervention to help an adolescent with significant emotional problems. EPs normally work collaboratively with teachers and parents, and with medical and other colleagues. They play a major role in providing advice to local education authorities or school districts in those countries which make statutory assessments of students’ special educational needs.

B Institutions

Often the involvement of an EP with an individual child in a school will lead teachers to recognize that the same issues apply more generally. For example, other children may also have similar learning difficulties or problems in controlling aggression. The EP may then provide a consultancy service to the teacher or school. In some cases this service may be sought directly, for example when a new headteacher wishes to review a previous assessment or the school’s current behaviour policy. Research has indicated, for example, how schools can reduce bullying, improve pupil performance by rearranging classrooms, and optimize the inclusion of children with special educational needs.


Educational psychology continues to provide a major basis for the initial education of teachers, particularly in management of learning and behaviour, but also on curriculum design, with special attention given to the needs of individual children. Increasingly, educational psychology is also contributing to student teachers’ understanding of the school as a system and the importance of this wider perspective for optimizing their performance; to their professional development by helping them analyse their own practice, beliefs, and attitudes and, once they begin the practice of teaching, to their continuing professional development based on experience in schools—particularly in areas such as special needs and disability. The impact of information technology and the increasing development of inclusive education provide particular challenges.

Contributed By:
Geoff Lindsay
