Accounting and Bookkeeping


Accounting and Bookkeeping, the process of identifying, measuring, recording, and communicating economic information about an organization or other entity, in order to permit informed judgements by users of the information. Bookkeeping encompasses the record-keeping aspect of accounting and therefore provides much of the data to which accounting principles are applied in the preparation of financial statements and other financial information.

Personal record-keeping often uses a simple single-entry system in which amounts are recorded in column form. Such entries include the date of the transaction, its nature, and the amount of money involved. Record-keeping of organizations, however, is based on a double-entry system, whereby each transaction is recorded on the basis of its dual impact on the organization’s financial position or operating results or both. Information relating to the financial position of an enterprise is presented in a balance sheet, while disclosures about operating results are displayed in a profit and loss statement. Data relating to an organization’s liquidity and changes in its financial structure are shown in a statement of changes in financial position. Such financial statements are prepared to provide information about past performance, which in turn becomes a basis for readers to try to project what might happen in the future.


Bookkeeping and record-keeping methods, created in response to the development of trade and commerce, are preserved from ancient and medieval sources. Double-entry bookkeeping began in the commercial city-states of medieval Italy and was well developed by the time of the earliest preserved double-entry books, from 1340 in Genoa. The development of counting frames and the abacus in China in the first centuries AD laid the basis for similarly advanced techniques in East Asia.

The first published accounting work was written in 1494 by the Venetian monk Luca Pacioli. Although it disseminated rather than created knowledge about double-entry bookkeeping, Pacioli’s work summarized principles that have remained essentially unchanged. Additional accounting works were published during the 16th century in Italian, German, Dutch, French, and English, and these works included early formulations of the concepts of assets, liabilities, and income.

The Industrial Revolution created a need for accounting techniques that were adequate to handle mechanization, factory-manufacturing operations, and the mass production of goods and services. With the emergence in the mid-19th century of large, publicly held business corporations, owned by absentee shareholders and administered by professional managers, the role of accounting was further redefined.

Bookkeeping, which is a vital part of all accounting systems, was in the mid-20th century increasingly carried out by machines. The widespread use of computers broadened the scope of bookkeeping, and the term “data processing” now frequently encompasses bookkeeping.


Accounting information can be classified into two categories: financial accounting, or public information, and managerial accounting, or internal information. Financial accounting includes information disseminated to parties that are not part of the enterprise proper—shareholders, creditors, customers, suppliers, regulatory bodies, financial analysts, and trade associations—although the information is also of interest to the company’s officers and managers. Such information relates to the financial position, liquidity (that is, ability to convert to cash), and profitability of an enterprise.

Managerial accounting deals with cost-profit-volume relationships, efficiency and productivity, planning and control, pricing decisions, capital budgeting, and similar matters that aid decision-making. This information is not generally disseminated outside the company. Whereas the general-purpose financial statements of financial accounting are assumed to meet basic information needs of most external users, managerial accounting provides a wide variety of specialized reports for division managers, department heads, project directors, section supervisors, and other managers.

A Specialized Accounting

Of the various specialized areas of accounting that exist, the three most important are auditing, income taxation, and accounting for not-for-profit organizations. Auditing is the examination, by an independent accountant, of the financial data, accounting records, business documents, and other pertinent documents of an organization in order to attest to the accuracy of its financial statements. Large private and public enterprises sometimes also maintain an internal audit staff to conduct audit-like examinations, including some that are more concerned with operating efficiency and managerial effectiveness than with the accuracy of the accounting data.

The second specialized area of accounting is income taxation. Preparing an income tax form entails collecting information and presenting data in a coherent manner; therefore, both individuals and businesses frequently hire accountants to determine their tax position. Tax rules, however, are not identical with accounting theory and practices. Tax regulations are based on laws that are enacted by legislative bodies, interpreted by the courts, and enforced by designated administrative bodies. Much of the information required in calculating taxes, however, is also needed in accounting, and many techniques of computing are common to both areas.

A third area of specialization is accounting for not-for-profit organizations, such as charities, universities, hospitals, churches, trade and professional associations, and government agencies. These organizations differ from business enterprises in that they generally receive resources on some non-reciprocating basis (that is, without paying for such resources), they are not set up to create a distributable profit, and they usually have no share capital. As a result, these organizations call for differences in record-keeping, in accounting measurements, and in the format of their financial statements.

B Financial Reporting

Traditionally, the function of financial reporting was to provide information about companies to their owners. Once the delegation of managerial responsibilities to hired personnel became a common practice, financial reporting began to focus on stewardship, that is, on the managers’ accountability to the owners. Its purpose then was to document how effectively the owners’ assets were managed, in terms of both capital preservation and profit generation.

After businesses were commonly organized as corporations, the appearance of large multinational corporations and the widespread employment of professional managers by absentee owners brought about a change in the focus of financial reporting.

Although the stewardship orientation has not become obsolete, financial reporting is today somewhat more geared towards the needs of investors. Because both individual and institutional investors view owning shares of companies as only one of various investment alternatives, they seek much more information about the future than was supplied under the traditional stewardship concept. As investors relied more on the potential of financial statements to predict the results of investment and disinvestment decisions, accounting became more sensitive to their needs. One important result was an expansion of the information supplied in financial statements.

The proliferation of footnotes to financial statements is a particularly visible example. Such footnotes disclose information that is not already included in the body of the financial statement. One footnote usually identifies the accounting policies or methods adopted when acceptable alternative methods also exist, or when the unique nature of the company’s business justifies an otherwise unconventional approach.

Footnotes also disclose information about lease commitments, contingent liabilities, pension plans, share options, and foreign currency translation, as well as details about long-term debt (such as interest rates and maturity dates). A company having a widely distributed ownership usually includes among its footnotes the income it earned in each quarter, quarterly stock market prices of its shares, and information about the relative sales and profit contribution of its different areas of activity.


Accounting as it exists today may be viewed as a system of assumptions, doctrines, tenets, and conventions, all encompassed by the phrase “generally accepted accounting principles”. Many of these principles developed gradually, as did much of common law; only the accounting developments of recent decades are prescribed in statutory law. Following are several fundamental accounting concepts.

The entity concept states that the item or activity (entity) that is to be reported on must be clearly defined, and that the relationship assumed to exist between the entity and external parties must be clearly understood.

The going-concern assumption states that it is expected that the entity will continue to operate for the foreseeable future.

The historical cost principle requires that economic resources be recorded in terms of the amounts of money exchanged; when a transaction occurs, the exchange price is by its nature a measure of the value of the economic resources that are exchanged.

The realization concept states that accounting takes place only for those economic events to which the entity is a party. This principle, therefore, rules out recognizing a gain based on the appreciated market value of a still-owned asset.

The matching principle states that income is calculated by matching a period’s revenues with the expenses incurred in order to bring about that revenue.

The accrual principle defines revenues and expenses as the inflow and outflow of all assets—as distinct from the flow only of cash assets—in the course of operating the enterprise.

The consistency criterion states that the accounting procedures used at a given time should conform with the procedures previously used for that activity. Such consistency allows data of different periods to be compared.

The disclosure principle requires that financial statements present the most useful amount of relevant information—namely, all information that is necessary in order not to be misleading.

The substance-over-form standard emphasizes the economic substance of events even though their legal form may suggest a different result. An example is a practice of consolidating the financial statements of one company with those of another in which it has more than a 50 percent ownership interest.

The prudence doctrine states that when exposure to uncertainty and risk is significant, accounting measurement and disclosure should take a cautious and prudent stance until evidence shows sufficient lessening of the uncertainty and risk.

A The Balance Sheet

Of the two traditional types of financial statements, the balance sheet relates to an entity’s value, and the profit and loss account—or income statement—relates to its activity. The balance sheet provides information about an organization’s assets, liabilities, and owners’ equity as of a particular date (such as the last day of the accounting or fiscal period). The format of the balance sheet reflects the basic accounting equation: assets equal equities. Assets are economic resources that provide potential future service to the organization. Equities consist of the organization’s liabilities together with the equity interest of its owners. (For example, a certain house is an asset worth £70,000; its unpaid mortgage is a liability of £45,000, and the equity of its owners is £25,000.)
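The basic accounting equation can be illustrated with a minimal sketch in Python, using the figures from the house example above (the account names themselves are illustrative):

```python
# The accounting equation: assets = liabilities + owners' equity.
# Figures follow the house example in the text; account names are illustrative.
assets = {"house": 70_000}
liabilities = {"mortgage": 45_000}

total_assets = sum(assets.values())
total_liabilities = sum(liabilities.values())

# Owners' equity is the residual interest in the assets after liabilities.
owners_equity = total_assets - total_liabilities

print(owners_equity)  # 25000
assert total_assets == total_liabilities + owners_equity
```

The equity figure is not recorded independently; it is whatever remains of the assets once the claims of outside parties have been deducted, which is why the two sides of a balance sheet must always agree.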

Assets are categorized as current or fixed. Current assets are usually those that management could reasonably be expected to convert into cash within one year; they include cash, receivables, goods in stock (or merchandise inventory), and short-term investments in stocks and bonds. Fixed assets encompass the physical plant—notably land, buildings, machinery, motor vehicles, computers, furniture, and fixtures. They also include property being held for speculation and intangibles such as patents and trademarks.

Liabilities are obligations that the organization must remit to other parties, such as creditors and employees. Current liabilities usually are amounts that are expected to be paid within one year, including salaries and wages, taxes, short-term loans, and money owed to suppliers of goods and services. Long-term liabilities are usually debts that will come due beyond one year—such as bonds, mortgages, and long-term loans. Whereas liabilities are the claims of outside parties on the assets of the organization, the owners’ equity is the investment interest of the owners of the organization’s assets. When an enterprise is operated as a sole proprietorship or as a partnership, the balance sheet may disclose the amount of each owner’s equity. When the organization is a corporation, the balance sheet shows the equity of the owners—that is, the shareholders—as consisting of two elements: (1) the amount originally invested by the shareholders; and (2) the corporation’s cumulative reinvested income, or retained earnings (that is, income not distributed to shareholders as dividends), in which the shareholders have equity.

B The Profit and Loss Statement

The traditional activity-oriented financial statement issued by business enterprises is the profit and loss statement, often known as the income statement. Prepared for a well-defined time interval, such as three months or one year, this statement summarizes the enterprise’s revenues, expenses, gains, and losses. Revenues are transactions that represent the inflow of assets as a result of operations—that is, assets received from selling goods and providing services. Expenses are transactions involving the outflow of assets in order to generate revenue, such as wages, rent, interest, and taxation.

A revenue transaction is recorded during the fiscal period in which it occurs. An expense appears in the profit and loss statement of the period in which revenues presumably resulted from the particular expense. To illustrate, wages paid by a merchandising or service company are usually recognized as an immediate expense because they are presumed to generate revenue during the same period in which they occurred. On the other hand, money spent on raw materials to be used in making products that will not be sold until a later financial period would not be considered an immediate expense. Instead, the cost will be treated as part of the cost of the resulting stock asset; the effect of this cost on income is thus deferred until the asset is sold and revenue is realized.

In addition to disclosing revenues and expenses (the principal components of income), the profit and loss statement also lists gains and losses from other kinds of transactions, such as the sale of fixed assets (for example, a factory building) or the early repayment of long-term debt. Extraordinary—that is, unusual and infrequent—developments are also specifically disclosed.

C Other Financial Statements

The profit and loss statement excludes the amounts of assets withdrawn by the owners; in a corporation, such withdrawn assets are called dividends. A separate activity-oriented statement, the statement of retained earnings, discloses income and distributions to owners.

A third important activity-oriented financial statement is the cash-flow statement. This statement provides information not otherwise available in either a profit and loss statement or a balance sheet; it presents the sources and the uses of the enterprise’s funds by operating activities, investing activities, and financing activities. The statement identifies the cash generated or used by operations; the cash exchanged to buy and sell plant and equipment; the cash proceeds from issuing shares and long-term borrowings; and the cash used to pay dividends, to repurchase the company’s own outstanding shares, and to pay off debts.

D Bookkeeping and Accounting Cycle

Modern accounting entails a seven-step accounting cycle. The first three steps fall under the bookkeeping function—that is, the systematic compiling and recording of financial transactions. Business documents provide the bookkeeping input; such documents include invoices, payroll records, bank cheques, and records of bank deposits. Special journals are used to record recurring transactions; these include a sales journal, a purchases journal, a cash receipts journal, and a cash disbursements journal. Transactions that cannot be accommodated by a special journal are recorded in the general journal. Journal entries are later posted to a ledger—a book having one page for each account in the organization’s financial structure. The page for each account shows its debits on the left side and its credits on the right side, so that the balance (that is, the net debit or credit) of each account can be determined. In many modern offices, these records are held in computer files that normally follow the traditional journal and ledger structure.

D1 Step One

Recording a transaction in a journal marks the starting point for the double-entry bookkeeping system. In this system, the financial structure of an organization is analysed as consisting of many interrelated aspects, each of which is called an account (for example, the wages payable account). Every transaction is identified in two aspects or dimensions, referred to as its debit (or left side) and credit (or right side) aspects, and each of these two aspects has its own effect on the financial structure. Depending on their nature, certain accounts are increased with debits and decreased with credits; other accounts are increased with credits and decreased with debits. For example, the purchase of stock for cash increases the stock account (a debit) and decreases the cash account (a credit). If the stock is purchased on the promise of future payment, a liability would be created, and the journal entry would record an increase in the stock account (a debit) and an increase in the liability account (a credit). Recognition of wages earned by employees entails recording an increase in the wage expense account (a debit) and an increase in the liability account (a credit). The subsequent payment of the wages would be a decrease in the cash account (a credit) and a decrease in the liability account (a debit).

D2 Step Two

In the next step in the accounting cycle, the amounts that appear in the various journals are transferred to the organization’s general ledger—a procedure called posting.

In addition to the general ledger, subsidiary ledgers—usually a sales ledger and a purchases ledger—are used to provide information in greater detail about the accounts in the general ledger. For example, the general ledger contains one account showing the entire amount owed to the enterprise by all its customers; the sales ledger breaks this amount down on a customer-by-customer basis, with a separate account for each customer. Subsidiary accounts may also be kept for the wages paid to each employee, for each building or machine owned by the company, and for amounts owed to each of the enterprise’s creditors.

D3 Step Three

Posting data to the ledgers is followed by listing the balances of all the accounts and calculating whether the sum of all the debit balances agrees with the sum of all the credit balances (because every transaction has been listed once as a debit and once as a credit). This process is called producing a trial balance. This procedure and those that follow it take place at the end of the financial period, normally each calendar month. Once the trial balance has been successfully prepared, the bookkeeping portion of the accounting cycle is concluded.
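Producing a trial balance amounts to the check sketched below, assuming each account carries a net balance (positive for a net debit, negative for a net credit); the account names and amounts here are hypothetical:

```python
# A trial balance lists each account's net balance and checks that the sum of
# debit balances agrees with the sum of credit balances.
# Convention used here: positive = net debit balance, negative = net credit balance.
balances = {
    "cash": 5_000,
    "stock": 3_000,
    "creditors": -2_500,
    "share capital": -4_000,
    "retained earnings": -1_500,
}

debit_total = sum(v for v in balances.values() if v > 0)
credit_total = -sum(v for v in balances.values() if v < 0)

# The books balance because every transaction was posted once as a debit
# and once as a credit.
assert debit_total == credit_total
```

If the two totals disagree, a posting error has been made somewhere and must be traced before the cycle can continue; agreement does not, however, prove that every entry went to the correct account.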

D4 Step Four

Once bookkeeping procedures have been completed, the accountant prepares certain adjustments to recognize events that, although they did not occur in conventional form, are in substance already completed transactions. The following are the most common circumstances that require adjustments: accrued revenue (for example, interest earned but not yet received); accrued expenses (wage costs incurred but not yet paid); unearned revenue (earning subscription revenue that had been collected in advance); prepaid expenses (for example, expiration of a prepaid insurance premium); depreciation (recognizing the cost of a machine as expense spread over its useful economic life); stock movements (recording the cost of goods sold on the basis of a period’s purchases and the change in the value of stocks between beginning and end of the financial period); and receivables (recognizing bad-debt expenses on the basis of expected uncollected amounts).
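Of the adjustments listed above, depreciation is the easiest to show in code. The sketch below uses the common straight-line method; the machine's cost, salvage value, and useful life are hypothetical figures, not taken from the text:

```python
def straight_line_depreciation(cost, salvage_value, useful_life_years):
    """Annual depreciation charge: the cost of the asset (less any expected
    salvage value) spread evenly over its useful economic life."""
    return (cost - salvage_value) / useful_life_years

# A machine costing £10,000, expected to be worth £1,000 after 6 years of use:
annual_charge = straight_line_depreciation(10_000, 1_000, 6)
print(annual_charge)  # 1500.0
```

Each year-end adjustment would then debit depreciation expense and credit accumulated depreciation by this amount, recognizing part of the machine's cost as an expense of the period.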

D5 Steps Five and Six

Once the adjustments are calculated, the accountant prepares an adjusted trial balance—one that combines the original trial balance with the effects of the adjustments (step five). With the balances in all the accounts thus updated, financial statements are then prepared (step six). The balances in the accounts are the data that make up the organization’s financial statements.

D6 Step Seven

The final step is to close non-cumulative accounts. This procedure involves a series of bookkeeping debits and credits to transfer sums from income statement accounts into owners’ equity accounts. Such transfers reduce to zero the balances of non-cumulative accounts so that these accounts can receive new debit and credit amounts that relate to the activity of the next business period.


Accounting has a well-defined body of knowledge and rather definitive procedures. Nevertheless, many countries (such as the United States and the United Kingdom) have accounting standards boards that continue to refine existing techniques and develop new approaches. Such activity is needed in part because of innovative business practices, newly enacted laws, and socio-economic changes. Better insights, new concepts, and enhanced perceptions have also influenced the development of accounting theory and practices. However, despite considerable efforts to create internationally agreed accounting standards, there still exist important differences in the way accounting information is produced in different countries. These differences often make international comparisons of accounting information extremely hazardous.





Capital, the collective term for a body of goods and monies from which future income can be derived. Generally, consumer goods and monies spent for present needs and personal enjoyment are not included in the definition or economic theory of capital. Thus, a business regards its land, buildings, equipment, inventory, and raw materials, as well as stocks, bonds, and bank balances available, as capital. Homes, furnishings, cars and other goods that are consumed for personal enjoyment (or the money set aside for purchasing such goods) are not considered capital in the traditional sense.

In the more precise usage of accounting, capital is defined as the stock of property owned by an individual or corporation at a given time, as distinguished from the income derived from that property during a given period. A business firm accordingly has a capital account (frequently called a balance sheet), which reports the assets of the firm at a specified time, and an income account, which reckons the flow of goods and of claims against goods during a specified period.

Among the 19th-century economists, the term capital designated only that segment of business wealth that was the product of past industry. The wealth that is not produced, such as land or ore deposits, was excluded from the definition. Income from capital (so defined) was called profit, or interest, whereas the income from natural resources was called rent. Contemporary economists, for whom capital means simply the aggregate of goods and monies used to produce more goods and monies, no longer make this distinction.

The forms of capital can be distinguished in various ways. One common distinction is between fixed and circulating capital. Fixed capital includes all the more or less durable means of production, such as land, buildings, and machinery. Circulating capital refers to nonrenewable goods, such as raw materials and fuel, and the funds required to pay wages and other claims against the enterprise.

Frequently, a business will categorize all of its assets that can be converted readily into cash, such as finished goods or stocks and bonds, as liquid capital. By contrast, all assets that cannot be easily converted to cash, such as buildings and equipment, are considered frozen capital.

Another important distinction is between productive capital and financial capital. Machines, raw materials, and other physical goods constitute productive capital. Claims against these goods, such as corporate securities and accounts receivable, are financial capital. Liquidation of productive capital reduces productive capacity, but the liquidation of financial capital merely changes the distribution of income.


The 18th-century French economists known as physiocrats were the first to develop a system of economics. Their work was developed by Adam Smith and emerged as the classical theory of capital after further refinements by David Ricardo in the early 19th century. According to the classical theory, capital is a store of values created by labour. Part of capital consists of consumers’ goods used to sustain the workers engaged in producing items for future consumption. The other part consists of producers’ goods channelled into further production for the sake of expected future returns. The use of capital goods raises labour productivity, making it possible to create a surplus above the requirements for sustaining the labour force. This surplus constitutes the interest or profit paid to capital. Interest and profits become additions to capital when they are ploughed back into production.

Karl Marx and other socialist writers accepted the classical view of capital with one major qualification. They regarded as capital only the productive goods that yield income independently of the exertions of the owner. An artisan’s tools and a small farmer’s land holding are not capital in this sense. The socialists held that capital comes into being as a determining force in society when a small body of people, the capitalists, owns most of the means of production and a much larger body, the workers, receives no more than bare subsistence as reward for operating the means of production for the benefit of the owners.

In the mid-19th century the British economists Nassau William Senior and John Stuart Mill, among others, became dissatisfied with the classical theory, especially because it lent itself so readily to socialist purposes. To replace it, they advanced a psychological theory of capital based on a systematic inquiry into the motives for frugality or abstinence. Starting with the assumption that satisfactions from present consumption are psychologically preferable to delayed satisfactions, they argued that capital originates in abstinence from consumption by people hopeful of a future return to reward their abstinence. Because such people are willing to forgo present consumption, productive power can be diverted from making consumers’ goods to making the means of further production; consequently, the productive capacity of the nation is enlarged. Therefore, just as physical labour justifies wages, abstinence justifies interest and profit.

Inasmuch as the abstinence theory rested on subjective considerations, it did not provide an adequate basis for objective economic analysis. It could not explain, in particular, why a rate of interest or profit should be what it actually was at any given time.

To remedy the deficiencies of the abstinence theory, the Austrian economist Eugen Böhm-Bawerk, the British economist Alfred Marshall, and others attempted to fuse that theory with the classical theory of capital. They agreed with the abstinence theorists that the prospect of future returns motivates individuals to abstain from consumption and to use part of their income to promote production, but they added, in line with classical theory, that the amount of returns depends on the gains in productivity resulting from accretions of capital to the productive process. Accretions of capital make production more roundabout, thus causing greater delays before returns are realized. The amount of income saved, and therefore the amount of capital formed, would accordingly depend, it was held, on the balance struck between the desire for present satisfaction from consumption and the desire for the future gains expected from a more roundabout production process. The American economist Irving Fisher was among those who contributed to refining this eclectic theory of capital.

John Maynard Keynes rejected this theory because it failed to explain the discrepancy between money saved and capital formed. Although according to the eclectic theory and, indeed, all previous theories of capital, savings should always equal investments, Keynes showed that the decision to invest in capital goods is quite separate from the decision to save. If investment appears unpromising of profit, saving still may continue at about the same rate, but a strong “liquidity preference” will appear that will cause individuals, business firms, and banks to hoard their savings instead of investing them. The prevalence of a liquidity preference causes unemployment of capital, which, in turn, results in unemployment of labour.


Although theories of capital are of relatively recent origin, capital itself has existed in civilized communities since antiquity. In the ancient empires of the Middle and the Far East and to a larger degree in the Graeco-Roman world, a considerable amount of capital, in the form of simple tools and equipment, was employed to produce textiles, pottery, glassware, metal objects, and many other products that were sold in international markets. The decline of trade in the West after the fall of the Roman Empire led to less specialization in the division of labour and a reduced use of capital in production. Medieval economies engaged almost wholly in subsistence agriculture and were therefore essentially non-capitalist. Trade began to revive in the West during the time of the Crusades. The revival was accelerated worldwide throughout the period of exploration and colonization that began late in the 15th century. Expanding trade fostered the greater division of labour and mechanization of production and therefore a growth of capital. The flow of gold and silver from the New World facilitated the transfer and accumulation of capital, laying the groundwork for the Industrial Revolution. With the Industrial Revolution, production became increasingly roundabout and dependent on the use of large amounts of capital. The role of capital in the economies of Western Europe and North America was so crucial that the socio-economic organization prevailing in these areas from the 18th century through the first half of the 20th century became known as the capitalist system or capitalism.

In the early stages of the evolution of capitalism, investments in plant and equipment were relatively small, and merchant, or circulating, capital—that is, goods in transit—was the preponderant form of capital. As industry developed, however, industrial, or fixed, capital—for example, capital frozen in mills, factories, railways, and other industrial and transport facilities—became dominant. Late in the 19th and early in the 20th centuries, financial capital in the form of claims to the ownership of capital goods of all sorts became increasingly important. By creating, acquiring, and controlling such claims, financiers and bankers exercised great influence on production and distribution. After the Great Depression of the 1930s, financial control of most capitalist economies was superseded in part by state control. A large segment of the national income of the United States, Great Britain, and various other countries flows through government, which as the public sector exerts a great influence in regulating that flow, thereby determining the amounts and kinds of capital formed.




Interest, payment made for the use of another person’s money; in economics, it is regarded more specifically as a payment made for capital. Economists also consider interest as the reward for thrift; that is, payment offered to people to encourage them to save and to make their savings available to others.

Interest paid only on the principal, that is, on the sum of money loaned, is called simple interest. Interest paid not only on the principal but also on the cumulative total of past interest payments is called compound interest. The rate of interest is expressed as a percentage of the principal paid for its use for a given time, usually a year. The current, or market, rate of interest is determined primarily by the relation between the supply of money and the demands of borrowers. When the supply of money available for investment increases faster than the requirements of borrowers, interest rates tend to fall. Conversely, interest rates generally rise when the demand for investment funds grows faster than the available supply of funds to meet those demands. Business executives will not borrow money at an interest rate that exceeds the return they expect the use of the money to yield.
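The distinction between simple and compound interest can be sketched in a few lines of Python; the principal, rate, and term below are illustrative values, not taken from the text.

```python
def simple_interest(principal, annual_rate, years):
    """Interest paid only on the original principal."""
    return principal * annual_rate * years

def compound_interest(principal, annual_rate, years):
    """Interest paid on the principal plus the cumulative total
    of past interest payments."""
    return principal * ((1 + annual_rate) ** years - 1)

# 1,000 lent at 5 per cent for 10 years:
p, r, t = 1000.0, 0.05, 10
print(round(simple_interest(p, r, t), 2))    # 500.0
print(round(compound_interest(p, r, t), 2))  # 628.89
```

The gap between the two figures widens with the term of the loan, since under compounding each year's interest itself earns interest in later years.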

In medieval Christendom and before, the payment and receiving of interest were questioned on moral grounds, as usury was considered a sin. The position of the Christian Church, as defined by St Thomas Aquinas, condoned interest on loans for business purposes, because the money was used to produce new wealth, but adjudged it sinful to pay or receive interest on loans made for the purchase of consumer goods. Under modern capitalism, the payment of interest for all types of loans is considered proper and even desirable because interest charges serve as a means to allocate the limited funds available for loan to projects in which they will be most profitable and most productive. Islamic Shari’ah law, however, still regards interest as, strictly speaking, sinful, and in some Islamic countries, legal provisions are made to replace interest with other rewards for thrift or investment such as shares in profits.




Loan, in finance, the lending of a sum of money; in common usage, the lending of any piece of property. A loan may be secured by a charge on the borrower’s property (as a house-purchase mortgage is) or be unsecured. A number of conditions will be attached to the loan: for example, when it is to be repaid and the rate of interest to be charged on the sum loaned. Almost any person or organization can make or receive a loan, but there are restrictions on some types of loan; for example, those made by a company to one of its directors.

Loans can take many forms. Many businesses are financed by long-term loan capital, such as loan stock or debentures. Governments also finance their borrowing requirements by issuing long-term fixed-interest bonds, which in the United Kingdom are known as gilt-edged stock (or gilts). These loans will usually have a fixed repayment (or redemption or maturity) date and will earn the lender (the owner of the stock, debenture, or bond) a fixed rate of interest until that date.

In the meantime, the price at which the stock can be traded on a stock exchange will depend on a number of things, including how the interest rate on the stock compares with the current rate available on other loan stock. For example, if interest rates have gone down, the price of the loan stock should go up, because its fixed interest payments are now higher than could be earned at the current market rate on the stock’s original value. The market price of a bond will also depend on its maturity date and its quality. In the United States, bonds issued by companies whose credit ratings are below investment grade are known as junk bonds; these pay a higher rate of interest than “non-junk” bonds, but their market value takes into account the deemed higher risk of the bond issuer defaulting on interest payments or failing to redeem the bonds at their full redemption value.

A company’s loan capital is normally recorded on its balance sheet, at the repayment amount, as long-term liabilities. One of the factors by which investors judge a company, and by which lenders decide whether to lend it money, is the ratio of its debt to its equity. This is known as gearing or leverage; the higher the proportion of loan finance to equity, the higher the gearing or leverage. Another ratio used in evaluating a company is the proportion of its profits being used to pay the interest on its loan finance.
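The inverse relation between market interest rates and the price of fixed-interest stock can be sketched as follows. The face value, coupon, and market rates are illustrative, and the valuation deliberately ignores credit risk and accrued interest.

```python
def bond_price(face, coupon_rate, market_rate, years_to_maturity):
    """Present value of the fixed coupon payments plus redemption at
    face value, all discounted at the current market rate."""
    coupon = face * coupon_rate
    price = sum(coupon / (1 + market_rate) ** t
                for t in range(1, years_to_maturity + 1))
    price += face / (1 + market_rate) ** years_to_maturity
    return price

# A 10-year 5% bond priced when market rates are also 5%, then after
# rates have fallen to 3%:
print(round(bond_price(100, 0.05, 0.05, 10), 2))  # 100.0 (at par)
print(round(bond_price(100, 0.05, 0.03, 10), 2))  # 117.06 (above par)
```

When the market rate equals the coupon the bond trades at its face value; when rates fall, the fixed coupons are worth more and the price rises above par, exactly as described above.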

The interest rate payable on loans is usually determined by market forces at the time the loan is taken out. However, governments may give soft loans (loans on more favourable terms than can be obtained in the market) to businesses they wish to support or encourage. The International Development Association, part of the World Bank, is specifically concerned with organizing loans to developing countries on soft terms.



International Bank for Reconstruction and Development


International Bank for Reconstruction and Development, also known as the World Bank, specialized United Nations agency established at the Bretton Woods Conference in 1944. A related institution, the International Monetary Fund (IMF), was created at the same time. The chief objectives of the bank, as stated in the articles of agreement, are “to assist in the reconstruction and development of territories of members by facilitating the investment of capital for productive purposes [and] to promote private foreign investment by means of guarantees or participation in loans [and] to supplement private investment by providing, under suitable conditions, finance for productive purposes out of its own capital …”.

The bank grants loans only to member nations, for the purpose of financing specific projects (at the start of the 21st century it had 183 members and operated in 100 countries). Before a nation can secure a loan, advisers and experts representing the bank must determine that the prospective borrower can meet conditions stipulated by the bank. Most of these conditions are designed to ensure that loans will be used productively and that they will be repaid. The bank requires that the borrower be unable to secure a loan for the particular project from any other source on reasonable terms and that the prospective project be technically feasible and economically sound. To ensure repayment, member governments must guarantee loans made to private concerns within their territories. After the loan has been made, the bank requires periodic reports both from the borrower and from its own observers on the use of the loan and on the progress of the project.

In the early period of the World Bank’s existence, loans were granted chiefly to European countries and were used for the reconstruction of industries damaged or destroyed during World War II. Since the late 1960s, however, most loans have been granted to economically developing countries in Africa, Asia, and Latin America. The bank gave particular attention to projects that could directly benefit the poorest people in developing nations by helping them to raise their productivity and to gain access to such necessities as safe water and waste-disposal facilities, health care, family planning assistance, nutrition, education, and housing. Direct involvement of the poorest people in economic activity was being promoted by providing loans for agriculture and rural development, small-scale enterprises, and urban development. The bank also was expanding its assistance to energy development and ecological concerns.


World Bank funds are provided primarily by subscriptions to, or purchase of, capital shares. The minimum number of shares that a member nation must purchase varies according to the relative strength of its national economy. Not all the funds subscribed are immediately available to the bank; only about 8.5 percent of the capital subscription of each member nation actually is paid into the bank. The remainder is to be deposited only if, and to the extent that, the bank calls for the money in order to pay its own obligations to creditors. There has never been a need to call in capital. The bank’s working funds are derived from sales of its interest-bearing bonds and notes in capital markets of the world, from the repayment of earlier loans, and from profits on its own operations. It has earned profits every year since 1947.

All powers of the bank are vested in a board of governors, comprising one governor appointed by each member nation. The board meets at least once annually. The governors delegate most of their powers to 24 executive directors, who meet regularly at the central headquarters of the bank in Washington, D.C. Five of the executive directors are appointed by the five member states that hold the largest number of capital shares in the bank. The remaining 19 directors are elected by the governors from the other member nations and serve 2-year terms. The executive directors are headed by the president of the World Bank, whom they elect for a 5-year term, and who must be neither a governor nor a director.


The bank has two affiliates: the International Finance Corporation (IFC), established in 1956; and the International Development Association (IDA), established in 1960. Membership in the bank is a prerequisite for membership in either the IFC or the IDA. All three institutions share the same president and boards of governors and executive directors.

IDA is the bank’s concessionary lending affiliate, designed to provide development finance for those countries that do not qualify for loans at market-based interest rates. IDA soft loans, or “credits”, are longer term than those of the bank and bear no interest; only an annual service charge of 0.75 per cent is made. The IDA depends for its funds on subscriptions from its most prosperous members and on transfers of income from the bank.

All three institutions are legally and financially separate, but the bank and IDA share the same staff; IFC has its own operating and legal staff but uses administrative and other services of the bank. Membership in the International Monetary Fund is a prerequisite for membership in the World Bank and its affiliates.


The World Bank has been heavily criticized in recent years for its poor performance in development economics, especially with regard to the social and environmental consequences of the projects it supported in developing countries. The bank itself has admitted considerable wrongdoing. Resulting reforms were embodied in the Strategic Compact of 1997, which decentralized the bank’s operations. However, it is arguable that it is less at fault than many of the corrupt or incompetent regimes whose schemes it is called on to fund. The bank’s role in development has in any case diminished with the vast influx of private capital into profitable projects in developing countries. Health, education, and other fields unlikely to yield profits remain in need of an institution such as the World Bank.



Reconstruction Finance Corporation

Reconstruction Finance Corporation (RFC), an independent agency of the United States government, created during the Great Depression by congressional enactment in 1932 and abolished by Congress in June 1957. The stated purpose of the RFC was “to provide emergency financing facilities for financial institutions; to aid in financing agriculture, commerce, and industry; to purchase preferred stock, capital notes, or debentures of banks and trust companies; and to make loans and allocations of its funds as prescribed by law”. These purposes were subsequently enlarged by legislative amendment to include participation in the maintenance of the economic stability of the country through the promotion of maximum production and employment and the encouragement of small business enterprises. The basic activities of the RFC were to make and collect loans and to buy and sell securities. Originally, the capital stock of the corporation was fixed at $500 million.

For seven years following its creation, the RFC was classified as an emergency agency. In 1939 it was grouped with other agencies to constitute the Federal Loan Agency. It was transferred to the Department of Commerce in 1942 and reverted to the Federal Loan Agency three years later. When that agency was abolished in 1947, its functions were assumed by the RFC.

Approximately two-thirds of the disbursements of the RFC were made in connection with the national defence of the United States, especially during World War II. Loans were also made by the RFC to federal agencies and to state and local governments in connection with the relief of the unemployed and the relief of victims of disasters such as floods and earthquakes. Disbursements to private enterprises included loans to banks and trust companies to aid in their establishment, reorganization, or liquidation, and to mortgage loan companies, building and loan associations, and insurance companies. Loans were also made to agricultural financing institutions, to enterprises engaged in financing the export of agricultural surpluses, and to railways, mines, mills, and other industrial enterprises. Hundreds of millions of dollars were disbursed by the RFC for the purchase of securities offered by the Public Works Administration, other government agencies, and private corporations.

In 1948, after the financial crisis of the depression and World War II had passed, Congress reduced the capital stock of the RFC to $100 million and provided for the retirement of the outstanding capital stock in excess of that amount. It also authorized the RFC to issue to the Treasury its own notes, debentures, bonds, or other similar obligations, in amounts based on its outstanding loans, in order to borrow money with which to carry on its functions.

During 1951 and 1952 congressional investigators found considerable evidence of fraud and corruption among RFC officials. In July 1953, Congress enacted the RFC Liquidation Act, providing for the gradual transfer of the functions of the RFC to other government agencies. The RFC loan powers were transferred in 1954 to the Small Business Administration. The RFC was abolished in June 1957, and its remaining functions were transferred to the Housing and Home Finance Agency, the General Services Administration, and the Department of the Treasury. During its existence from 1932 to 1957, the RFC disbursed more than $50 billion in loans.


Energy Conservation


Energy Conservation, the attempt to reduce the amount of energy used for domestic and industrial purposes, particularly in the developed world.

In the past, energy was plentifully available in relation to human demand so that wood or charcoal was burned prodigally and inefficiently on open fires or in simple stoves. This provided the staple fuel supply until coal arrived to fuel the Industrial Revolution in the 18th century. Even today wood provides 13 percent of world energy and much of it is burned very inefficiently to provide heat for cooking in developing countries. A typical Indian villager uses five times as much energy to cook the evening meal on a wood-fuelled fire or stove as his or her counterpart in Europe. The consequence is that fuelwood as an energy source is beginning to run out in Africa and south-eastern Asia.

In Europe, and particularly Britain, wood was already in short supply by the middle of the 18th century, but coal was becoming increasingly available. As well as being used domestically it was burned to raise steam to power the pumping engines necessary to remove the water from the coal mines and so increase the production of this valuable fuel. The coal-fired steam engine also made rail transport possible, with George Stephenson’s invention of a railway engine (Locomotion, built in 1825) that was overwhelmingly more reliable and efficient than any other form of propulsion. That is not to say it had a high efficiency; the conversion of the chemical energy in the coal into the energy of motion of the railway engine was achieved with an efficiency of less than one percent.


The efforts of practical engineers to improve the efficiency of their steam engines led Nicolas Carnot, in 1824, to a statement of the laws of thermodynamics. These laws are laws of experience but have a strong theoretical basis and are crucial to improving the efficiency with which we use our dwindling supply of fossil fuel energy. The realization that energy cannot be created or destroyed should deter inventors of perpetual motion machines, but the second law of thermodynamics presents a more sophisticated limit to the efficiency of any heat engine, whether it be a turbine or a car engine. In a steam turbine, for example, if the inlet steam temperature is Thot and the temperature at which it exhausts from the turbine after setting it spinning is Tcold, then the maximum theoretical conversion efficiency possible for the engine is simply:

maximum efficiency = (Thot − Tcold)/Thot

where T is measured in degrees absolute (K).

For this reason, the practical efficiency of conversion of a large coal- or oil-fired steam-driven power station is less than 40 per cent and that of a petrol-fuelled car engine less than 20 per cent. The rest of the energy is usually thrown away as waste heat, although in the case of a car the waste heat can be used to heat or air-condition the car.
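The second-law limit described above can be checked with a few lines of code; the inlet and exhaust temperatures below are illustrative figures, not taken from any particular plant.

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum theoretical conversion efficiency of a heat engine
    working between t_hot_k and t_cold_k (temperatures in kelvin)."""
    return (t_hot_k - t_cold_k) / t_hot_k

# Steam at roughly 825 K exhausting near 300 K:
print(round(carnot_efficiency(825.0, 300.0), 2))  # 0.64
```

Even this generous theoretical limit of about 64 per cent is never reached in practice; friction, heat losses, and other irreversibilities bring real power stations below 40 per cent, as the text notes.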

The low efficiency with which we generate our electricity or power our cars, a consequence of the laws of physics rather than of carelessness, means that future improvements in energy efficiency will come both from technological advances and from conscious reductions in energy consumption (such as in the number of car journeys). These questions have begun to be seriously addressed only relatively recently. Ingenious technological improvement has raised the efficiency of electricity generation to almost 60 percent by combining the gas turbine and steam turbine cycles. Cars using sophisticated diesel engines can now exceed 100 km (60 mi) on only 3 litres (0.66 gallons) of fuel.


The structure of energy use in the developed world experienced a major upheaval in 1973 when the Arab oil producers, in response to the pressures of the Yom Kippur War, increased the price of oil four-fold to US$12 per barrel, and reduced supply to big importers of oil such as the European Community and the United States by 5 percent (in order to effect a withdrawal of support from Israel). They were later, in 1979, to raise the price still higher, so that in 1980 crude oil was trading at US$40 per barrel.

The European Community reacted by instigating a policy called CoCoNuke, that is coal, conservation, and nuclear power. A high priority was given to reducing the overall use of fuels, particularly oil. Stimulated by the increase in fuel prices, people started to save energy and to use it more economically so that through the 1980s considerable improvements in energy efficiency were achieved. In the United States, there was a move towards smaller cars and a reduction in the speed limit. However, as the Arab cartel started to break up and prices of oil fell, in some cases below US$10 per barrel, new motives for energy efficiency came to prominence: environmental pollution and in particular global warming. The trebling of the oil price to US$30 per barrel in 1999 brought commercial pressure to bear on improving the efficiency of energy use.


It was realized as long ago as 1896 by Svante Arrhenius that the radiative balance of the Earth was profoundly influenced by the layer of carbon dioxide in its atmosphere. For 150,000 years the carbon dioxide (CO2) content of the atmosphere had been constant at about 270 parts per million (ppm). This carbon dioxide trapped infrared radiation leaving the Earth and caused the average temperature at the Earth’s surface to be some 31 degrees warmer than it would otherwise have been. This has had a crucial effect on life itself, as without this natural greenhouse effect most water on Earth would be ice.

However, since about 1850 the carbon dioxide content of the atmosphere has been increasing, and it now stands at over 360 ppm. The main cause has been the ever-increasing combustion of coal, oil, and gas to provide energy for our ever-improving lifestyle. Western Europeans use three tonnes of oil, or its equivalent as gas or coal, per person per year; in the United States the figure is eight tonnes. The world consumes 8 billion tonnes of oil-equivalent fossil fuels each year, and the figure is set to increase to 14 billion tonnes by 2020.

Much of this increased demand arises from the developing world. China burns 1.2 billion tonnes of coal each year and within five years expects the figure to rise to 1.5 billion tonnes as its economy continues to grow at 10 percent per year or more. (On average in a developing country, a growth of 1.5 per cent in energy use can be anticipated for each 1 per cent growth in the economy.) The rapidly increasing population in developing countries compounds the problem. United Nations figures give the world population in 2000 as 6 billion, rising to 10 billion by 2040; of these, over 8 billion will be in developing countries, many with fast-growing economies, so that their demand for energy will rise steeply.
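The rule of thumb quoted above (roughly 1.5 per cent growth in energy use for each 1 per cent of economic growth) compounds quickly. This sketch, using an illustrative GDP growth rate rather than any figure from the text, shows the implied doubling time for energy demand.

```python
import math

def energy_doubling_time(gdp_growth, elasticity=1.5):
    """Years for energy use to double when it grows each year at
    elasticity * gdp_growth (both expressed as fractions)."""
    energy_growth = elasticity * gdp_growth
    return math.log(2) / math.log(1 + energy_growth)

# At 5% annual economic growth, energy use grows at 7.5% a year:
print(round(energy_doubling_time(0.05), 1))  # 9.6 (years)
```

At sustained growth rates like those quoted for the fastest-growing developing economies, energy demand on this rule would double in well under a decade, which is why the projections in the text rise so steeply.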

The effect of burning ever-increasing amounts of fossil fuels is to increase the production of carbon dioxide. The concentration of carbon dioxide is set to double the 19th century’s stable value of about 270 ppm by 2030, causing the average temperature at the Earth’s surface to rise by about 0.2° to 0.3° C per decade and sea level to rise by 5 to 8 cm (2 to 3 in) per decade, according to the Intergovernmental Panel on Climate Change set up by the United Nations.

The probable consequences of global warming are so perturbing and also, in the long term, unpredictable, that anxiety has been expressed across the world. The prospect of mass flooding of low-lying countries like Bangladesh, and of changes in weather patterns giving increased rainfall in parts of the northern hemisphere and increased desertification in some equatorial regions over the next few decades, is alarming. In May 1992, 154 countries (including those of the European Union) signed the United Nations Framework Convention on Climate Change (ratified in March 1994). The signatories agreed to stabilize their carbon dioxide emission levels at 1990 values by the end of the century. This led to the adoption in 1997 of the Kyoto Protocol, which requires industrialized countries to reduce their emissions of greenhouse gases by an average of 5.2 per cent during the period 2008 to 2012 relative to 1990 levels.

Scientific members of the Intergovernmental Panel on Climate Change, set up to monitor and investigate global warming, warned that this reduction will not be nearly enough to prevent further potentially damaging climate change. However, just to stabilize carbon dioxide emissions will require very considerable political will. The World Energy Council states that to achieve stabilization will require at least a 60 percent reduction in annual human-derived carbon dioxide emissions from now on.

How should such reduction be achieved?


A variety of strategies are available and are shown in the accompanying diagram. By far the most effective approach is to burn less fossil fuel, particularly the carbon-rich fuels such as coal and heavy fuel oil. As it happens, these are also the fuels with the highest sulphur content which, together with nitrogen oxides formed in combustion, produces the acid emissions that are the precursors of acid rain. It follows that protection of the environment is now the most important incentive for encouraging energy conservation. In the long term, depletion of non-renewable fossil-fuel resources is just as important; at current rates of consumption, reserves of oil and gas are expected to run short in around 50 years, and coal in 200 years.

Increasing demand for fossil fuels and the associated pollution hazards involved have led to calls by the Brundtland Commission (1987) and others for a move to sustainable development, and this has been endorsed by politicians in many countries. The extreme difficulties in achieving this desirable end have not been appreciated by many; the World Energy Council estimates that new renewable energy sources are only likely to provide a maximum of 10 per cent of world requirements by about 2020 (although this figure could rise towards 40 per cent by 2100), so it is difficult to see how demand will be met without a substantially increased input from nuclear power.

For these reasons, the European Union has developed a number of initiatives to encourage energy savings, with a figure of 20 per cent saving being seen as a realizable goal. The World Energy Council in its various future energy scenarios has postulated considerable reductions in energy intensity; that is, in the amount of energy required to produce one unit of Gross Domestic Product (GDP). Figures published in a World Energy Council report in 1993 suggest the average world figure for the efficiency of energy use as 3 to 3.5 percent; in Western Europe and Japan the figure is 4 to 5 percent and in the United States only 2 percent.


Energy saving by improved efficiency of operation can be achieved on the supply side by technological improvement in electricity generation, refining operations, and so on. The demand side—energy used for warming buildings, operating electrical appliances, lighting, and so on—has been neglected compared with the supply side and there is very considerable scope for improvement. In Western Europe, 40 percent of final energy use is in the domestic sector and 25 percent in industry; 30 percent is used in transport.


About half the overall energy used in Western Europe is used in buildings. By using currently proven technology, energy usage could be reduced by 20 percent with a payback period of less than five years. Careful design and insulation; use of energy-efficient lighting and glazing; installation of better controls leading to more efficient energy management; the use of computerized energy-management systems; and the installation of modern and efficient appliances for heating, cooling, cooking, and refrigeration should all be encouraged. Labelling appliances with information on their efficiency of operation helps in the choice of the optimum system.

Progress in the domestic sector is slow as energy conservation techniques are best introduced during construction, while turnover of the housing stock is low at 2 per cent per annum in the United Kingdom. Retro-fitting of efficient insulation and lighting systems is to be encouraged. A major refurbishment of commercial and industrial buildings takes place more frequently and should include energy-saving measures.


Improving energy efficiency in industry usually requires capital expenditure, provision of which competes with the provision of new manufacturing equipment, expansion of production, payment of dividends, and wage settlements. As a consequence, in the United Kingdom and the United States particularly, unrealistically short payback times of one to two years are often demanded of energy-saving investment.
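The payback period invoked here is a simple yardstick: the years needed for cumulative savings to repay the initial outlay. The figures below are illustrative, and the calculation ignores interest and fuel-price changes.

```python
def payback_years(capital_cost, annual_saving):
    """Simple payback: years for cumulative savings to repay the
    initial outlay (ignores interest and fuel-price changes)."""
    return capital_cost / annual_saving

# An insulation scheme costing 10,000 that saves 2,500 a year:
print(payback_years(10000, 2500))  # 4.0
```

A four-year payback would pass the five-year test often applied to buildings, yet fail the one- to two-year hurdle the text says industry typically demands, which illustrates why many worthwhile industrial energy-saving schemes go unfunded.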

Electricity savings can be achieved by improved power-factor control; installation of modern electric motors for fans, pumps, drives, and so on; and installation of high-efficiency lighting equipment. Better housekeeping techniques also help, such as load shedding to avoid penalties on peak loads of short duration while utilizing cheaper off-peak tariffs, thus saving money (although not necessarily energy).

The efficiency of boilers and furnaces can often be dramatically improved by careful adjustment and control of optimum excess combustion air rates. Waste heat recovery via heat exchangers, heat pumps, and thermal wheels could be assessed for cost effectiveness. The upgrading of steam and condensation systems can achieve substantial savings.

Successful energy conservation can only be achieved if an energy management plan is introduced with rigorous monitoring and targeting of progress. Workforce motivation is essential and can only be achieved if there is overt management commitment at the highest level. Improved energy conservation is as much a psychological problem as a technical and financial one.


The efficiency of electricity generation is governed ultimately by the laws of thermodynamics. By increasing the inlet temperature in gas turbines, by the introduction of new materials and design techniques, the efficiency of conversion of the latest gas turbines has been raised to 42 per cent. If the hot exhaust gas is used to raise steam to power a steam turbine, a so-called “combined cycle” system is formed with an overall efficiency of conversion of heat to electricity approaching 60 per cent. Combined-cycle power stations running on natural gas are rapidly displacing coal- and oil-fired stations worldwide; an added incentive to build them is their low environmental impact and reduced carbon dioxide emission.
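The combined-cycle arithmetic can be sketched as follows; the 42 per cent gas-turbine figure comes from the text, while the steam-cycle efficiency is an illustrative assumption.

```python
def combined_cycle_efficiency(eta_gas, eta_steam):
    """Overall efficiency when a steam cycle recovers work from the
    gas turbine's exhaust: the gas turbine converts eta_gas of the
    fuel heat, and the steam cycle converts eta_steam of the
    remaining (1 - eta_gas)."""
    return eta_gas + (1 - eta_gas) * eta_steam

# A 42% gas turbine topping a 30% steam cycle:
print(round(combined_cycle_efficiency(0.42, 0.30), 3))  # 0.594
```

The result, just under 60 per cent, matches the overall conversion figure quoted above and shows why pairing the two cycles so comfortably outperforms either one alone.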

An even more efficient way of using primary fossil fuel energy is to build Cogeneration or Combined Heat and Power (CHP) systems. Here the exhaust heat from the gas turbine, steam turbine, or even diesel engine that powers the electricity generator is used to provide process steam or to heat work areas. Such systems have an overall efficiency of energy use of over 80 per cent. (This system is illustrated in the diagram.) There are many industrial and commercial situations in which CHP provides the electricity/heat balance required and in which the installation is cost-effective as well as energy-conserving.


The transport sector is the most polluting of all, generating more carbon dioxide than electricity generation or the destruction of the rainforests. There are 600 million cars in the world today, and in Western Europe the numbers are set to double by 2020; in developing countries, the increase will be even faster. Although the efficiency of operation of car engines has been substantially improved by ignition management systems and the use of diesel engines, the tendency is still to build cars with performances much higher than road conditions allow. Congestion and pollution are beginning to lead to a move towards electrical traction and an expansion of public transport systems, although people cling to the fictional “freedom” conferred by the private car. Fuel-cell-powered cars running on hydrogen fuel are now being demonstrated, as are hybrid vehicles that use a small, optimized petrol or diesel engine to charge a battery. Both systems give a good range and halve pollution.


Deregulation and privatization of energy supply systems and the introduction of market-led energy policies encourage producers to maximize profits by selling more and more energy, thus reducing their enthusiasm for energy conservation. They are controlled only by pollution legislation. On the demand side, users seem curiously reluctant to install energy-saving systems even if it saves them money over three or four years; energy-efficient light bulbs are a case in point. In order to encourage energy conservation governments must, therefore, resort to the fiscal encouragement of various kinds using grants and taxes. An energy-carbon tax has been proposed for the European Union of US$10 per barrel of oil equivalent, split evenly between energy and carbon; such taxes are already operating in some countries, such as Denmark. A system of carbon emissions trading is also being put in place so that money spent on reducing carbon emissions can be spent where it will be most effective.

It is clear that the world’s energy resources must be used more efficiently in future if we are to meet increasing demands for energy from a rapidly growing and industrializing population. Pressure on finite fuel resources, as well as a rising world population, requires urgent action.

Contributed By:
Ian Fells



Nuclear Power


Nuclear Power, electrical power produced from energy released by controlled fission or fusion of atomic nuclei in a nuclear reaction. Mass is converted into energy, and the amount of released energy greatly exceeds that from chemical processes such as combustion.

The first experimental nuclear reactor was constructed in 1942 amid tight wartime secrecy in Chicago, Illinois, in the United States. A prototype reactor was demonstrated at Oak Ridge, Tennessee, in 1943, and by 1945 three full-scale reactors were in operation at Hanford, in Washington State. These were dedicated to plutonium production for nuclear weapons; however, the first large-scale commercial reactor generating electrical power was started up in 1956 at Calder Hall, United Kingdom.

Nuclear power is now a well-established source of electricity worldwide. The most common types of reactor are light water reactors, mostly pressurized water reactors (PWRs) together with boiling water reactors (BWRs). Gas-cooled and heavy water reactors make up the rest. Worldwide there are currently about 430 reactors operating in 25 countries providing about 17 percent of the world’s electricity. Nuclear reactors are also used for propulsion of submarines and ships, and there are a number of prototype and experimental reactors around the world. At present, only a few experimental fusion reactors exist, none of which produce usable amounts of electrical power.

Few nuclear power stations are under construction at present, and some have been cancelled when partly built. This is mainly because of long-term resistance from the environmental movement (in particular since the Chernobyl disaster of 1986), but nuclear power stations are also not competitive with natural gas- and coal-fired power stations at present. It is uncertain whether nuclear power generation will increase or decrease worldwide over the next 50 years. However, the very low carbon dioxide emissions from nuclear power stations compared with coal-, gas-, or oil-fired units mean that the need to control climate change could yet drive an expansion of nuclear power.

More than 40 million kilowatt-hours (kWh) of electricity can generally be produced from one tonne of natural uranium. Over 16,000 tonnes of coal or 80,000 barrels of oil would need to be burned to make the same amount of electricity. Moreover, the amount of carbon dioxide produced in generating one kWh of electricity would be 1 kg for coal, 0.5 kg for gas, and only 10 grams for nuclear power.
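These equivalences can be sanity-checked with a short calculation. The sketch below (in Python) works backwards from the 40 million kWh per tonne of natural uranium quoted above; the per-tonne energy content of coal, the per-barrel content of oil, and the 35 per cent power-station efficiency are assumed typical values, not figures from this article, so the results agree with the quoted 16,000 tonnes and 80,000 barrels only to order of magnitude.

```python
# Rough cross-check of the energy-equivalence figures quoted above.
# Assumed typical values (not from the article): coal ~8,000 kWh thermal
# per tonne, oil ~1,700 kWh thermal per barrel, station efficiency ~35%.
KWH_PER_TONNE_URANIUM = 40e6        # electricity from 1 t natural uranium (article figure)
COAL_KWH_THERMAL_PER_TONNE = 8_000  # assumption
OIL_KWH_THERMAL_PER_BARREL = 1_700  # assumption
EFFICIENCY = 0.35                   # assumed thermal-to-electric conversion

coal_tonnes = KWH_PER_TONNE_URANIUM / (COAL_KWH_THERMAL_PER_TONNE * EFFICIENCY)
oil_barrels = KWH_PER_TONNE_URANIUM / (OIL_KWH_THERMAL_PER_BARREL * EFFICIENCY)

print(f"coal needed: {coal_tonnes:,.0f} tonnes")   # on the order of 14,000 t
print(f"oil needed:  {oil_barrels:,.0f} barrels")  # on the order of 67,000 barrels
```

Given the spread in real coal and oil grades, agreement to within a few tens of per cent with the article’s figures is as good as can be expected.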

Other than economic factors, the main issues limiting the expansion of nuclear power are the disposal of radioactive waste (including waste left over from decommissioning of old facilities), radioactivity in liquid effluent and gaseous discharges, security concerns over stockpiled plutonium, and the historical connection with nuclear weapons. Availability of nuclear fuel is unlikely to limit nuclear power production in the foreseeable future.


Nuclear power plants generate electricity from fission, usually of uranium-235 (U-235), the nucleus of which has 92 protons and 143 neutrons. When it absorbs an extra neutron, the nucleus becomes unstable and splits into smaller pieces (“fission products”) and more neutrons. The fission products and neutrons have a smaller total mass than the U-235 and the first neutron; the mass difference has been converted into energy, mostly in the form of heat, which produces steam and in turn drives a turbine generator to produce electricity.
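The scale of the mass-to-energy conversion described above can be illustrated numerically. The sketch below assumes the standard figure of about 200 MeV released per U-235 fission (not stated in the article) and applies E = mc² to find how much of each kilogram of fissioned U-235 actually becomes energy.

```python
# Energy release from U-235 fission via E = m * c^2.
# A typical fission releases about 200 MeV (a standard assumed figure).
MEV_TO_J = 1.602e-13          # joules per MeV
AVOGADRO = 6.022e23           # atoms per mole
U235_MOLAR_MASS_G = 235.0     # grams per mole
C = 3.0e8                     # speed of light, m/s

energy_per_fission_J = 200 * MEV_TO_J                    # ~3.2e-11 J per fission
fissions_per_kg = AVOGADRO * 1000 / U235_MOLAR_MASS_G    # atoms in 1 kg of U-235
energy_per_kg_J = energy_per_fission_J * fissions_per_kg

# Mass converted to energy, from m = E / c^2:
mass_converted_kg = energy_per_kg_J / C**2

print(f"{energy_per_kg_J:.2e} J per kg of U-235 fissioned")         # ~8e13 J
print(f"{mass_converted_kg * 1000:.2f} g of each kg becomes energy") # ~0.9 g
```

Less than a gram per kilogram is converted, yet that corresponds to tens of terajoules, which is why nuclear fuel is so much more energy-dense than chemical fuel.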

Natural uranium is a mixture of two isotopes, fissionable U-235 (0.7 per cent) and non-fissionable U-238. However, U-238 can absorb neutrons to form plutonium-239 (Pu-239), which is fissionable, and up to half the energy produced by a reactor can, in fact, come from fission of Pu-239. Some types of reactor require the amount of U-235 to be increased above the natural level, a process called enrichment. Pressurized water reactors (PWRs), the most common type of reactor, require fuel enriched to about 3 per cent U-235.

Reactor fuel is made up of fuel pellets or pins enclosed in a tubular cladding of steel, zircaloy, or aluminium. Several of these fuel rods make up each fuel assembly. The fast neutrons released in the fission reaction need to be slowed down before they will induce further fissions and give a sustained chain reaction. This is done by a moderator, usually water or graphite, which surrounds the fuel in the reactor. However, in “fast reactors” there is no moderator and the fast neutrons sustain the fission reaction.

A coolant is circulated through the reactor to remove heat from the fuel. Ordinary water (which is usually also the moderator) is most commonly used but heavy water (deuterium oxide), air, carbon dioxide, helium, liquid sodium, liquid sodium-potassium alloy, molten salts, or hydrocarbon liquids may be used in different types of reactor.

The chain reaction is controlled by using neutron absorbers such as boron, either by moving boron-containing control rods in and out of the reactor core or by varying the boron concentration in the cooling water. These can also be used to shut down the reactor. The power level of the reactor is monitored by temperature, flow, and radiation instruments and used to determine control settings so that the chain reaction is just self-sustaining.

The main components of a nuclear reactor are the pressure vessel (containing the core); the fuel rods, moderator, and primary cooling system (making up the core); the control system; and the containment building. This last element is required in the event of an accident, to prevent any radioactive material being released to the environment, and is usually cylindrical with a hemispherical dome on top.

During operation, and also after it is shut down, a nuclear reactor will contain a very large amount of radioactive material. The radiation emitted by this material is absorbed in thick concrete shields surrounding the reactor core and primary cooling system. An important safety feature is the emergency core cooling system, which will prevent overheating and “meltdown” of the reactor core if the primary cooling system fails. See also Nuclear Fission.


Radioactivity was discovered by Antoine Henri Becquerel in 1896, although it was not given that name until two years later, when Pierre and Marie Curie discovered the radioactive elements polonium and radium, which occur naturally with uranium. In 1932 the neutron was discovered by British scientist James Chadwick. Enrico Fermi and colleagues in Italy then discovered that bombarding uranium with neutrons slowed by means of paraffin produced at least four different radioactive products. Six years later, German scientists Otto Hahn and Fritz Strassmann demonstrated that the uranium atom was actually being split. The Austrian-born Swedish physicist Lise Meitner continued the work with her nephew Otto Frisch and defined nuclear fission for the first time.

In 1939, Fermi travelled to the United States to escape the Fascist regime in Italy and was followed by physicist Niels Bohr, who fled the German occupation of Denmark. Collaborating at Columbia University, they developed the concept of a chain reaction as a source of power. With the outbreak of World War II, concerns arose among refugee European physicists in France, the United Kingdom, and the United States that Nazi Germany might develop an atomic bomb. The focus of research then changed to military applications.

The Manhattan Project began in the United States in 1940, with the aim of developing nuclear weapons. In 1942, Fermi constructed the first experimental nuclear reactor at the University of Chicago. One year later, a prototype plutonium production reactor was demonstrated at Oak Ridge and by 1945 three full-scale reactors were in operation at Hanford. The first nuclear bomb was tested at Alamogordo Air Base in New Mexico in July 1945. Two bombs were then dropped on Japan in August, the first at Hiroshima and the second at Nagasaki.

With the end of World War II in 1945, the Cold War and the East-West arms race took over. The Union of Soviet Socialist Republics (USSR) mounted a crash development programme and soon began plutonium production. The United States continued with plutonium production and also developed different types of reactor, as did the USSR, United Kingdom, France, and Canada. Both sides developed a range of technologies that was also applicable to nuclear power generation. Reliable energy supplies were important to national recovery, and nuclear power was seen as an essential element of national power programmes.

The first purpose-built reactor for electrical power generation was started up in 1954 at Obninsk, near Moscow, in the USSR. In 1956 the first large-scale commercial reactor generating electrical power (as well as producing plutonium) began operating at Calder Hall, England. In the United States, three types of reactor were being developed for commercial use, namely the pressurized water reactor (PWR), the boiling water reactor (BWR), and the fast breeder reactor (FBR). In 1957 the first commercial power unit, a BWR, was started up in the United States.

There have been some major incidents in nuclear power plants. In 1957 a plutonium production reactor caught fire at Windscale (modern-day Sellafield) in Cumbria, England, spreading large amounts of radioactivity across Britain and northern Europe. It was the worst nuclear accident in the history of the UK. In 1979, in the worst nuclear accident in US history, a core meltdown occurred at the Three Mile Island power plant near Harrisburg, Pennsylvania. The worst nuclear accident to date occurred in 1986, when a runaway nuclear reaction at the Chernobyl power plant near Kiev in the USSR (in modern-day Ukraine) led to a series of explosions that dispersed massive amounts of radioactive material throughout the northern hemisphere. In 1999 a “criticality incident” occurred at the Tokai-Mura plant in Japan, causing the worst nuclear damage in that country. (See also section on Nuclear Accidents.)

The number of nuclear reactors in the world grew steadily for several decades. By 1964 there were 14 reactors connected to electricity distribution systems worldwide. In 1970 there were 81; this number grew to 167 by 1975, to 365 by 1985, and to 435 by 1995, before decreasing to 428 by 1999.


Most of the world’s reactors are located in nuclear power plants; the rest are research reactors or reactors used for propulsion of submarines and ships. Some designs can be refuelled while in operation; others need to be shut down to refuel. Several advanced reactor designs, which are simpler, more efficient, and inherently safer, are also under development.

There are two basic types of fission reactors: thermal reactors and fast reactors. In thermal reactors, the neutrons created in the fission reaction lose energy by colliding with the light atoms of the moderator until they can sustain the fission reaction. In fast reactors, “fast” neutrons sustain the fission reaction and a moderator is not needed. They require enriched fuel, but the fast neutrons can be used to convert U-238 into fissile material (plutonium), creating more nuclear fuel than the amount consumed. They can also be used to “burn” plutonium as a means of reducing the amount that is stockpiled.

For the purpose of electricity generation, there are five main categories of reactors, each comprising one or more types. Light Water Reactors include Pressurized Water Reactors (PWRs), together with the Russian VVER design, and Boiling Water Reactors (BWRs). Gas Cooled Reactors comprise Magnox reactors and Advanced Gas-Cooled Reactors (AGR), developed in the United Kingdom, as well as High-Temperature Gas-Cooled Reactors (HTGR). Pressurized Heavy Water Reactors include the CANDU reactor developed in Canada. Light Water Graphite Reactors comprise the RBMK reactors, developed in the USSR. Lastly, Fast Breeder Reactors include Liquid Metal Fast Breeder Reactors (LMFBR).

In the early 1950s, enriched uranium was only available in the United States and the USSR. For this reason, reactor development in the United Kingdom (Magnox), Canada (CANDU), and France was based on natural uranium fuel. The Soviet RBMK design, by contrast, used slightly enriched fuel.

In natural uranium reactors, ordinary water cannot be used as the moderator, because it absorbs too many neutrons. In the successful CANDU design, this was overcome by using heavy water (deuterium oxide) for the moderator and coolant. Nearly all reactors in the United Kingdom have used a graphite moderator and carbon dioxide as the coolant.

In the United Kingdom, the Magnox reactors of the 1960s were followed by the AGRs, which used enriched fuel and were able to operate at higher temperatures and with greater efficiency. The Steam Generating Heavy Water Reactor (SGHWR) design was intended as the next technological step but this policy was changed in favour of the more established PWR design, of which many were already in operation. However, only one PWR was subsequently constructed in the United Kingdom, at Sizewell. Nuclear power generates about 25 per cent of the country’s electricity.

French researchers abandoned the design they had initially developed and embarked in the early 1970s on a nuclear power programme based totally on PWRs when French-produced enriched uranium became available. These now supply almost 80 percent of France’s electricity.

Worldwide, 56 per cent of power reactors are PWRs, 22 per cent are BWRs, 6 per cent are pressurized heavy water reactors (mostly CANDUs), 3 per cent are AGRs, and the remainder are other types. Eighty-eight per cent are fuelled by enriched uranium oxide and the rest by natural uranium, with a few light water reactors also using mixed oxide fuel (MOX), which contains plutonium as well as uranium. Light water is the coolant/moderator for 80 to 85 per cent of all reactors.

The most important factors to be considered for any type of nuclear reactor are safety; cost per kilowatt of generating capacity to construct; cost per kilowatt-hour delivered (including fuel, operation, and downtime costs); operating lifetime; and decommissioning costs.

A Pressurized Water Reactor (PWR)

PWRs are normally fuelled with uranium oxide pellets in a zirconium cladding, although in recent years some mixed oxide fuel (MOX), which contains plutonium, has been used. The fuel is enriched to 3 per cent U-235. The moderator is the ordinary water coolant, which is kept pressurized at about 150 bar to stop it boiling. It is pumped through the reactor core, where it is heated to about 325° C (about 620° F). The hot pressurized water is pumped through a steam generator, where, through heat exchangers, a secondary loop of water is heated and converted to steam. This steam drives one or more turbine generators, is condensed, and is pumped back to the steam generator. The secondary loop is isolated from the reactor core water and is therefore not radioactive. A third stream of water, from a lake, river, the sea, or a cooling tower, is used to condense the steam. A typical reactor pressure vessel is 15 m (49 ft) high and 5 m (16 ft) in diameter, with walls 25 cm (10 in) thick. The core contains about 90 tonnes of fuel.

The PWR was originally designed by the Westinghouse Bettis Atomic Power Laboratory for military ship applications, and then by the Westinghouse Nuclear Power Division for commercial applications. The Soviet-designed VVER (Vodo-Vodyanoi Energetichesky Reaktor) is similar to Western PWRs but has different steam generators and safety features.

B Boiling Water Reactor (BWR)

The BWR is simpler than the PWR but less efficient in its fuel use and has a lower power density. Like the PWR, it is fuelled by uranium oxide pellets in a zirconium cladding, but slightly less enriched. The moderator is the ordinary water coolant, which is kept at a lower pressure (70 bar) so that it boils within the core at about 300° C. The steam produced in the reactor pressure vessel is piped directly to the turbine generator, condensed, and then pumped back to the reactor. Although the steam is radioactive, the absence of an intermediate heat exchanger between the reactor and the turbine gives a gain in efficiency. As in the PWR, the condenser cooling water has a separate source, such as a lake or river.

The BWR was originally designed by Allis-Chalmers and General Electric (GE) of the United States. The GE design has survived, and other versions are available from ASEA-Atom, Kraftwerk Union, and Hitachi.

C Gas-Cooled Reactors

Magnox reactors take their name from the magnesium-based alloy used as cladding for the natural uranium metal fuel. The moderator is graphite and the carbon dioxide coolant is circulated through the core at a pressure of about 27 bars, exiting at about 360° C. The heat is transferred to the secondary water loop, in which steam is raised to drive the turbine generators. Early units had a steel pressure vessel with the steam generators outside the containment. Later versions had a concrete pressure vessel containing the core and the steam generators. Magnox reactors are a British design but were also built in Tokai-Mura (Japan) and Latina (Italy).

The Advanced Gas-Cooled Reactor (AGR) is a development of the Magnox design using uranium oxide fuel enriched to 2-3 percent U-235 and clad in stainless steel or zircaloy. The moderator is graphite and the carbon dioxide coolant circulates at about 40 bars, exiting the core at 640° C. The heat is transferred to the secondary water loop, in which steam is raised to drive the turbine generators. A concrete pressure vessel is used, with walls about 6 m (20 ft) thick. AGRs are unique to the UK.

High-Temperature Gas-Cooled Reactors (HTGRs) are largely experimental. The fuel is a mixture of graphite and nuclear fuel: the German version loads it as spheres into a silo, while the US version loads it into hexagonal graphite prisms. The coolant is helium, pressurized to about 100 bar and circulated through the interstices between the spheres or through holes in the graphite prisms. An example of this type is described in the Advanced Reactors section later in this article.

D Pressurized Heavy Water Reactor

The most widely used reactor of this type is the Canadian CANDU (Canadian Deuterium Uranium Reactor). The moderator and coolant are heavy water (deuterium oxide) and the fuel consists of natural uranium oxide pellets in zircaloy tubes. These are contained in pressure tubes mounted horizontally through a tank of heavy water called the “calandria”, which acts as the moderator. This feature avoids the need for a pressure vessel and facilitates on-load refuelling. The heavy water coolant is pumped through the pressure tubes at 110 bar and exits at about 320° C. The heat is transferred to the secondary water loop, in which steam is raised to drive the turbine generators.

The CANDU was designed by Atomic Energy of Canada Limited (AECL) to make the best use of Canada’s natural resources of uranium without needing enrichment technology, although requiring heavy water production facilities. In total, 21 CANDUs have been built, 5 of them outside of Canada.

E Light Water Graphite Reactor

The Soviet-designed Reaktor Bolshoi Moshchnosty Kanalny (RBMK) is a pressurized water reactor with individual fuel channels. The moderator is graphite, the coolant is ordinary water, and the fuel is enriched uranium oxide. The fuel tubes and coolant tubes pass vertically through a massive graphite moderator block. This is contained in a vessel filled with a helium-nitrogen mixture to improve heat transfer and prevent oxidation of the graphite. The coolant is maintained at 75 bar and exits at up to 350° C. The water is permitted to boil, and the steam, after removal of water, is fed to the turbine generators.

Following the 1986 Chernobyl disaster, the design weaknesses of the RBMK were recognized and modifications made to help overcome them. The last operating reactor at the Chernobyl site was closed down in December 2000, and others will eventually be phased out.

F Fast Breeder Reactor

The Liquid Metal Fast Breeder (LMFBR) uses molten sodium as the coolant and runs on fuel enriched with U-235. Instead of a moderator being employed, the core is surrounded by a reflector, which bounces neutrons back into the core to help sustain the chain reaction. A blanket of “fertile” material (U-238) is included above and below the fuel, to be converted into fissile plutonium by capture of fast neutrons. The core is compact, with a high power density. The molten sodium primary coolant transfers its heat to a secondary sodium loop, which heats water in a third loop to raise steam and drive the turbine generators.

Development of fast reactors proceeds only in France, India, Japan, and Russia. The only commercial power reactors of this type are in Kazakhstan and Russia. The British fast reactor, which generated 240 megawatts, was closed down in the 1990s and is being decommissioned.

G Propulsion Reactors

Propulsion reactors are used to propel military submarines and large naval ships such as the aircraft carrier USS Nimitz. The US, UK, Russia, and France all have nuclear powered submarines in their fleets. The basic technology of the propulsion reactor was first developed in the US naval reactor programme directed by Admiral Hyman George Rickover. Submarine reactors are generally small, with compact cores and highly enriched uranium fuel.

The former USSR built the first successful nuclear-powered icebreaker, the Lenin, for use in clearing the Arctic sea lanes. Three experimental nuclear-powered cargo ships were operated for limited periods by the US, Germany, and Japan. Although the ships were technically successful, economic conditions and restrictive port regulations brought an end to these projects.

H Research Reactors

A variety of small nuclear reactors has been built in many countries for use in education, training, research, and production of radioactive isotopes for medical and industrial use. These reactors generally operate at power levels near 1 MW and are more easily started up and shut down than large power reactors.

A widely used type is the swimming-pool reactor. The core consists of partially or fully enriched U-235 contained in aluminium alloy plates immersed in a large tank of water that serves as both coolant and moderator. Materials to be irradiated with neutrons may be placed directly in or near the reactor core. This process is used to make radioactive isotopes for medical, industrial, and research use (see also Isotopic Tracer). Neutrons may also be extracted from the reactor core and directed along beam tubes for use in experiments.

I Advanced Reactors

Several new designs are under development around the world which are simpler, more efficient in their utilization of fuel, cheaper to build and operate, and inherently safer. They typically include passive safety features that avoid relying on pumps and valves, along with increased time for operators to respond to abnormal situations.

Some have evolved from established designs, taking into account the lessons learned from operating experience over the years and advances in fuel design for increased “burnup”. Others represent greater departures from established designs and would require a demonstration unit to be constructed before being used commercially. The cost and technical demands of these projects mean that national or international collaboration is usually necessary.

Projects are currently under way in Canada, France, Germany, Japan, Russia, the US, and South Africa. They fall into the three categories of water-cooled reactors, fast reactors, and gas-cooled reactors. Capacities cover all ranges—small, medium, and large (1,000 MW and above). The large capacity Advanced Boiling Water Reactor (ABWR) design is already in commercial operation in Japan. Others are under construction or on hold, awaiting favourable economic circumstances; the rest are still on the drawing board.

As an example of a design not based on existing commercial units, the South African Pebble Bed Modular Reactor (PBMR) is due to begin construction in 2001 and should be in commercial operation by 2005. It is a High-Temperature Gas-Cooled Reactor (HTGR) of 110 MW capacity per module and is fuelled by several hundred thousand graphite-uranium oxide pebbles, each the size of a tennis ball. The helium coolant passes through a gas turbine to drive electrical generation with high efficiency and returns to the reactor in a closed loop. Each pebble passes through the reactor about ten times before needing to be replaced, which is carried out continuously without shutting the reactor down. Four modules would fit inside a football stadium and the design lifetime is 40 years.

Reactors have to be approved and certificated by the national safety regulatory authority before they can be used in a nuclear power station. International certification of reactors, as with new aircraft, is some way in the future.

J Fusion Reactors

Nuclear fusion is the process that powers the Sun, and for several decades people have looked at it as the answer to energy problems on Earth. However, the technological problems are complex, and a fusion power plant has not yet been built. (In 2005 agreement was reached between China, the European Union (EU), Japan, South Korea, Russia, and the US on the building of the world’s first nuclear fusion power plant at Cadarache, in southern France. The International Thermonuclear Experimental Reactor (ITER), as it is to be known, is scheduled to be in operation by 2016.) Fundamentally, any useful fusion reactor needs to confine plasma at a high enough density for sufficient time to generate more energy than the energy which was put in to create and confine the plasma. This occurs when the product of the confinement time and the density of the plasma, known as the Lawson number, is 10^14 or above.
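The Lawson condition described above is simple enough to express directly. The sketch below encodes it in Python; the example densities and confinement times are illustrative assumptions, not measured values from any particular machine.

```python
# The Lawson criterion quoted above: the product of plasma density
# (particles per cm^3) and confinement time (seconds) must reach ~1e14.
LAWSON_THRESHOLD = 1e14  # s / cm^3

def lawson_number(density_per_cm3: float, confinement_time_s: float) -> float:
    """Return the density-confinement product n * tau."""
    return density_per_cm3 * confinement_time_s

def meets_lawson(density_per_cm3: float, confinement_time_s: float) -> bool:
    return lawson_number(density_per_cm3, confinement_time_s) >= LAWSON_THRESHOLD

# Illustrative tokamak-like plasma: 1e14 particles/cm^3 held for 2 seconds.
print(meets_lawson(1e14, 2.0))    # True: n * tau = 2e14
# The same plasma held for only a millisecond falls far short.
print(meets_lawson(1e14, 1e-3))   # False: n * tau = 1e11
```

This makes clear why the two confinement routes discussed below differ: magnetic confinement aims for modest densities held for seconds, while inertial confinement aims for enormous densities held for nanoseconds, and either can in principle satisfy the same product.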

Numerous schemes for magnetic confinement of plasma have been tried since 1950. Thermonuclear reactions have been observed, but the Lawson number has rarely exceeded 10^12. The Tokamak device, originally suggested in the USSR by Igor Tamm and Andrei Sakharov, began to give encouraging results in the 1960s.

The confinement chamber of a Tokamak has the shape of a torus (doughnut), with a minor diameter of about 1 m (3 ft 4 in) and a major diameter of about 3 m (9 ft 9 in). A toroidal magnetic field of about 5 tesla is established inside this chamber by large electromagnets. This is about 100,000 times the Earth’s magnetic field at the planet’s surface. A longitudinal current of several million amperes is induced in the plasma by the transformer coils that link the torus. The resulting magnetic field lines are spirals in the torus and confine the plasma.

Following the successful operation of small Tokamaks at several laboratories, two large devices were built in the early 1980s, one at Princeton University in the US and one in the USSR. In the Tokamak, high plasma temperature naturally results from resistive heating by the very large toroidal current, and additional heating by neutral beam injection in the new large machines should result in ignition conditions.

Another possible route to fusion energy is that of inertial confinement. In this technique, the fuel (tritium or deuterium) is contained within a tiny pellet that is bombarded on several sides by a pulsed laser beam. This causes an implosion of the pellet, setting off a thermonuclear reaction that ignites the fuel. Several laboratories in the US and elsewhere are currently pursuing this possibility.

A significant milestone was achieved in 1991 when the Joint European Torus (JET) in the UK produced for the first time a significant amount of power (about 1.7 million watts) from controlled nuclear fusion. And in 1993 researchers at Princeton University in the US used the Tokamak Fusion Test Reactor (TFTR) to produce 5.6 million watts. However, both JET and TFTR consumed more energy than they produced in these tests.

There has been promising progress in fusion research around the world for several decades; however, it will take decades more to develop a practical fusion power plant. It has been estimated that an investment of US $50-100 billion is needed to achieve this, but each year only US $1.5 billion is being spent worldwide. The main areas where work is needed include superconducting magnets; vacuum systems; cryogenic systems; plasma purity, heating, and diagnostic systems; sustainment of plasma current; and safety issues.

The JET project has approached “breakeven” operation, in which the fusion power generated equals the input power, but only by injecting tritium that has made the structure radioactive. ITER is scheduled to begin by 2016. A demonstration fusion power plant would be built about 15 years later and, if successful, commercial fusion power plants could be operating by about 2050. This timescale could be significantly delayed or accelerated by the rate of progress in understanding plasma behaviour and by the rate of funding.

If fusion energy does become practicable it would offer the following advantages: (1) an effectively limitless source of fuel—deuterium from the ocean; (2) inherent safety, since the fusion reaction would not “run away” and the amount of radioactive material present is low; and (3) waste products that are less radioactive and simpler to handle than those from fission systems. However, the structure will become radioactive due to absorption of neutrons, so decommissioning will be a serious undertaking.


Nuclear power is based on uranium, a slightly radioactive metal that is relatively abundant (about as common as tin and 500 times as abundant as gold). Thorium is also usable as a nuclear fuel, but there is no economic incentive to exploit it at present. Economically extractable reserves at current low world prices amount to just 4.4 million tonnes, from the richer ores. At the current world usage rate of 50,000 tonnes per annum this would last only another 80 years. But if prices were to rise significantly, the usable reserves would increase to the order of 100 million tonnes. And if prices were to rise to several hundred dollars per kilogram, it may become economic to extract uranium from seawater, in which it is present at about 3 mg per tonne. This would be a sufficient supply for a greatly enlarged industry for several centuries.

A Uranium Production

The world’s uranium reserves are mostly located in Australia (35 per cent), countries of the former USSR (29 per cent), Canada (13 per cent), Africa (8 per cent), and South America (8 per cent). In terms of production, Canada (33 per cent) is followed by Australia (15 per cent) and Niger (10 per cent). Other producers are Kazakhstan, Namibia, Russia, South Africa, the US, and Uzbekistan.

Uranium ore contains about 1 percent uranium. It is mined either by open-pit or deep-mining techniques and milled (crushed and ground) to release the uranium minerals from the surrounding rock. The uranium is then dissolved, extracted, precipitated, filtered, and dewatered to produce a uranium ore concentrate called “yellowcake” which contains about 60 percent uranium. This has a much smaller volume than the ore and hence is less expensive to transport. It is either shipped to the fuel enrichment plant or, alternatively, to the fuel fabrication plant if it is not to be enriched.

B Enrichment

The yellowcake is converted to uranium hexafluoride (UF6), which is a gas above 50° C and is used as the feedstock for the enrichment process. Because most reactors require more than the 0.7 percent natural concentration of U-235, some of the U-238 needs to be removed to give a concentration of 3 percent U-235 or thereabouts.
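How much natural uranium must be fed in to obtain a given amount of enriched product follows from a standard two-stream mass balance (not stated in the article): the feed splits into product and depleted tails, conserving both total mass and total U-235. The tails assay of 0.25 per cent in the sketch below is a typical assumed value.

```python
# Standard enrichment mass balance (assumed, not from the article):
# feed F at assay x_f splits into product P at x_p and tails T at x_t, with
#   F = P + T   and   F * x_f = P * x_p + T * x_t,
# which rearranges to F = P * (x_p - x_t) / (x_f - x_t).
def feed_required(product_kg: float, x_product: float,
                  x_feed: float = 0.00711, x_tails: float = 0.0025) -> float:
    """Kilograms of natural-uranium feed needed for `product_kg` of product."""
    return product_kg * (x_product - x_tails) / (x_feed - x_tails)

# Feed needed for 1 kg of uranium enriched to 3% U-235:
f = feed_required(1.0, 0.03)
print(f"{f:.1f} kg of natural uranium per kg of 3% product")  # about 6 kg
```

The roughly six-to-one ratio explains why enrichment dominates the front end of the fuel cycle: most of the mined uranium ends up in the depleted tails rather than in the reactor.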

Enrichment is carried out using either the gaseous diffusion process or the newer gas centrifuge process. A laser process is also under development. The gas centrifuge process requires only 5 percent of the energy to separate the same amount of U-235 as the diffusion process, although diffusion plants are still dominant worldwide.

C Fuel Fabrication

The enriched UF6 is converted to uranium dioxide in the form of a ceramic powder. This is pressed and then sintered in a furnace to produce dense ceramic pellets. The pellets are loaded into metal tubes that are sealed by welding to form fuel rods, which are combined into fuel assemblies and transported to the nuclear power station for loading into the reactor.

Plutonium oxide may also be mixed with the uranium oxide to make mixed oxide fuel (MOX), as a means of reducing the amount of stockpiled plutonium (although not the total amount in circulation) and avoiding the need to enrich the uranium. MOX fuel is manufactured at the reprocessing plant where the plutonium is held and is increasingly being used in light water reactors, up to a maximum of about 30 percent of the fuel in a PWR. Because spent MOX fuel is highly radioactive, the plutonium is unlikely to be illegally diverted into the manufacture of nuclear weapons.

D Power Generation

The fuel assemblies are loaded into the reactor in a planned cycle to “burn” the fuel most efficiently. The “burnup” is expressed as gigawatt-days per tonne (GWd/te) of uranium. The early Magnox stations achieved 5 GWd/te but by the late 1980s, PWRs and BWRs were achieving 33 GWd/te. Figures of 50 GWd/te are now being achieved, and this is forecast to increase.
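The burnup figure translates directly into annual fuel consumption. A rough check, assuming a thermal efficiency of 33 percent and continuous full-power operation (both illustrative assumptions, not figures from the article):

```python
# Rough annual fuel consumption for a 1,000 MW (electrical) station
# at a burnup of 33 GWd/te.  Burnup is a *thermal* energy figure, so the
# electrical output must first be converted to thermal output.
electrical_output_gw = 1.0
thermal_efficiency = 0.33   # assumed
burnup_gwd_per_te = 33

thermal_gwd_per_year = (electrical_output_gw / thermal_efficiency) * 365
fuel_te_per_year = thermal_gwd_per_year / burnup_gwd_per_te
print(round(fuel_te_per_year))  # 34 -- consistent with the ~30 tonnes of
                                # spent fuel per year cited later in the article
```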

E Spent Fuel

The fuel elements are removed from the reactor when they have reached the design burnup level, typically after four years. At this point, they are intensely radioactive and generate a lot of heat, so the spent fuel is placed in a cooling pond adjacent to the reactor. The water (which is dosed with boric acid to absorb neutrons and prevent a chain reaction) acts as a radiation shield and coolant. The fuel elements remain there for at least five months until the radioactivity has decayed enough to permit them to be transported.

Where the fuel is to be reprocessed, it is transported in shielded flasks by rail or road to the reprocessing plant. Where this is not the case, it will remain in the cooling pond. Older ponds were designed to accommodate up to ten years’ worth of spent fuel but may be able to accommodate more by removing older fuel into dry storage facilities. But ultimately the spent fuel will need to be sent for permanent disposal if it is not to be reprocessed.

F Reprocessing

The spent fuel is typically made up of non-fissile U-238 (about 95 percent), fissile U-235 (about 0.9 percent), various highly radioactive fission products, and a mixture of plutonium isotopes (more than half of which are fissile). Reprocessing separates the uranium and plutonium from the waste and was historically carried out to recover plutonium for manufacture of nuclear weapons. In the UK this was also carried out to deal with the magnesium alloy Magnox fuel casings, which are eventually corroded by the water in the cooling pond and are not suitable for dry storage. The recovered U-235 is used for the manufacture of new fuel, and the plutonium can be used for the manufacture of MOX fuel (see Fuel Fabrication above), although the majority is stockpiled at present.

The spent fuel received from the nuclear power station is stored in a cooling pond and then mechanically cut up. In the commonly used Purex process, the fuel is dissolved in nitric acid and then the uranium, plutonium, and fission products are separated by solvent extraction using a mixture of tributyl phosphate and kerosene. The uranium goes to fuel fabrication and the plutonium is either stored or used for MOX fuel production. The fission products are separated into a liquid stream, which is processed with glass-making materials into a vitrified high-level waste (HLW) product. Other liquids and solid waste streams are also generated, and these are discussed in the section on radioactive waste management later in the article.

Reprocessing in the civil nuclear industry is a contentious and complex issue. Between 1976 and 1981 it was not carried out in the United States due to concerns that plutonium could be illegally diverted into the manufacture of nuclear weapons (although now permitted, it has not been resumed). Instead, a “once through” policy for nuclear fuel is followed, with spent fuel regarded as waste destined for permanent disposal. The UK, France, Japan, and Russia have reprocessing plants and all are busy reducing their stock of nuclear weapons (apart from Japan, which has none), so the amounts of stored plutonium are increasing. Options for handling plutonium include “burning” it in a fast reactor, or using it up as MOX fuel followed by disposal of the spent fuel. As well as the plutonium issue, decision-making factors include the economics of the process and national perceptions of future energy needs.

G Transport

Uranium concentrate, new nuclear fuel, spent fuel, and radioactive waste are transported by rail, road, ship, and air in packages designed to prevent the release of radioactive material under all foreseeable accident scenarios. The most radioactive items such as spent fuel or vitrified high level waste are transported in extremely rugged “flasks” or “casks”, which will typically have undergone high-speed impact tests and fire tests to demonstrate their integrity.


Radioactive Waste Management

Nuclear power stations, reprocessing plants, fuel fabrication plants, uranium mines, and all other nuclear facilities produce solid and liquid wastes of varying characteristics and amounts. These are internationally classified as high level waste (HLW), intermediate level waste (ILW), and low level waste (LLW).

A typical 1000 MW nuclear power station produces about 300 cu m of LLW and ILW waste each year, of which 95 percent would be classified as LLW. It also produces about 30 tonnes of spent fuel, classified as HLW. In comparison, a coal-fired power station of the same capacity would produce 300,000 tonnes of ash per year, containing a very large amount of radioactivity and toxic heavy metals, which would be dispersed into landfill sites and the atmosphere. Worldwide, about 200,000 cu m of low and intermediate waste are produced from nuclear power stations each year, together with 10,000 cu m of HLW (primarily spent fuel).

Wastes of lower activity are also produced, including very low level waste from most nuclear facilities which can be disposed of in normal municipal waste disposal sites without special precautions. Uranium mines and mills produce large volumes of waste containing low concentrations of radioactive and toxic materials, which are handled by normal mining techniques such as tailings dams. The enrichment process produces depleted uranium, primarily consisting of U-238, which is slightly radioactive and requires some precautions for safe disposal.

A High Level Waste

This is highly radioactive, heat-generating, long-lived material, which will remain biologically hazardous for thousands of years. The spent fuel from nuclear power plants, destined for permanent disposal, is classified as HLW, as is the concentrated liquid waste generated by reprocessing. The 30 tonnes of spent fuel produced each year by a typical power station will, after ten years, still generate several hundred kilowatts of heat, and cooling will be necessary for about 50 years in all. For final disposal of spent fuel, the fuel rods would be removed from their assemblies and repacked in a dense lattice within a corrosion-resistant steel canister. A cover would be welded on and the canister covered with an overpack. However, this is not yet carried out (see section on Disposal below), and some countries (notably Russia) are reluctant to dispose of spent fuel because, if reprocessed, it is an energy resource.

Reprocessing one tonne of spent fuel produces about 0.1 cu m of radioactive liquid, containing about 99 percent of the fission product radioactivity. The liquid is stored in tanks with multiple cooling systems designed to remove the heat produced by the radioactive decay, and after several tens of years it can be processed for final disposal. For example, the vitrification process operated at the Sellafield reprocessing plant in England converts the liquid to a stable, solid form by turning it into a borosilicate glass (referred to as vitrified high level waste, VHLW) in a stainless steel container suitable for long-term storage and final disposal. Processes based on other immobilization technologies are in development elsewhere, such as the SYNROC process in Australia.

B Intermediate Level Waste

This consists of solid and liquid materials such as fuel cladding, contaminated equipment, sludges, evaporator concentrates, and spent ion-exchange resin. This material is not sufficiently radioactive to require cooling. Reprocessing one tonne of spent fuel produces about 1 cu m of ILW, containing about 1 percent of the radioactivity in the fuel.

Various processes for retrieval, volume reduction, incineration, conditioning, and immobilization of ILW to convert it to stable, solid forms (usually based on cement, but also polymers and bitumen) are operated at power stations and reprocessing plants. The final product is typically contained in a drum, suitable for long-term storage or final disposal.

C Low Level Waste

This consists of trace-contaminated used protective clothing, gloves, contaminated rags, filters, and the like, and also larger items of lightly contaminated equipment. Brazil nuts and coffee beans contain as much natural radioactivity as typical LLW. Reprocessing one tonne of spent fuel produces about 4 cu m of LLW, containing about 0.001 percent of the radioactivity in the fuel. Size reduction techniques include shearing, shredding, and compaction. The waste is grouted into containers called “overpacks” to produce a stable waste form suitable for final disposal.
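Taken together, the per-tonne figures quoted above for the three reprocessing waste streams can be summarized with a short calculation (all figures from the text; Python is used only for illustration):

```python
# Waste streams per tonne of reprocessed spent fuel, from the sections above:
# (volume in cubic metres, fraction of the fuel's radioactivity)
waste = {
    "HLW": (0.1, 0.99),
    "ILW": (1.0, 0.01),
    "LLW": (4.0, 0.00001),
}

total_volume = sum(v for v, _ in waste.values())
print(total_volume)  # 5.1 cubic metres of waste per tonne of fuel

hlw_volume_share = waste["HLW"][0] / total_volume
print(round(hlw_volume_share, 3))  # 0.02 -- about 2% of the volume carries
                                   # 99% of the radioactivity
```

The striking imbalance (nearly all the activity in a tiny fraction of the volume) is the rationale for vitrifying the HLW stream separately.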

D Disposal

About 40 near-surface disposal sites for LLW have been in operation for over 30 years in countries with nuclear power industries, and another 30 are expected to come into operation in the next 15 years. They typically have concrete-lined trenches, an impervious cap, and systems for collecting water from the base of the trenches.

The intention in all countries with nuclear power industries is eventually to dispose of ILW and HLW in deep underground repositories, where the long-lived radioactive isotopes will be isolated for more than 100,000 years by a combination of engineered and natural barriers. Development and selection of final disposal sites is under way in all countries where they will be needed, although progress is generally slow because of the need to obtain public acceptance and to address the main technical issues: transport of radioactivity in groundwater; migration of radioactivity in gas generated by the waste; natural disruptive events and inadvertent human intrusion; and whether the waste should be retrievable. Meanwhile, ILW and HLW are stored at the sites where they are produced, which can generally continue for 50 years or more.

As an alternative to constructing a series of national repositories (where geological conditions may be less than ideal), it has been proposed that waste should be transported to disposal sites in sparsely populated and more geologically suitable areas of the world such as Western Australia or the Gobi Desert. This, however, remains contentious.

A number of countries used to dispose of radioactive waste by dumping it at sea. This practice was discontinued following a moratorium agreed under the London Convention in 1983; however, disposal in deep-sea sediments several hundred metres below the seabed, in water depths of at least 4,000 m (13,123 ft), remains a potentially attractive option where it is not envisaged that the waste would ever need to be retrieved. This option would require a large international collaborative effort to develop.

E Return of Wastes

The policy of some countries with small-scale nuclear power industries is to return spent fuel to the foreign supplier of the fuel. And the policy of European reprocessing plants is eventually to return the wastes arising from large-scale reprocessing of spent fuel to the country the spent fuel came from. For example, vitrified HLW is shipped from the reprocessing plant at Cap de la Hague in France back to Japan.

F Liquid Discharges

The liquid effluents generated by nuclear power stations, reprocessing plants, and other nuclear facilities are treated by a variety of efficient processes to remove radioactivity. Stringent limits are set for each site, radiation levels in discharge streams are monitored, and efforts are made to reduce discharges year on year. Any residual radioactivity in the effluent will generally end up in the sea, where its uptake by “critical groups” (those most likely to receive the radiation) can be estimated. At the 1998 Oslo-Paris Commission (OSPAR) meeting in Portugal, the EU member states committed themselves to reducing discharges of radioactivity to the point where additional concentrations above background levels are close to zero.

Reprocessing effluents present the greatest challenge. Techniques such as sand-bed filtration, ion exchange, neutralization of acidic effluents to precipitate solids, removal of solids by hydrocyclone or ultrafiltration, and alkaline hydrolysis of the organic solvent are used to clean the effluents until they can be discharged to sea. The separated radioactive material is processed as ILW, as described above.

G Aerial Discharges

The radioactive gases that are discharged are subject to similar limits and monitoring as liquid effluents to ensure the minimum uptake by “critical groups”.


Nuclear Safety

Before discussing the safety issues surrounding nuclear power it is necessary to understand the basics of radiation.

A Introduction to Radiation

Heat and light are types of radiation that people can feel or see, but we cannot detect ionizing radiation in this way (although it can be measured very accurately by various types of instrument). Ionizing radiation passes through matter and causes atoms to become electrically charged (ionized), which can adversely affect the biological processes in living tissue.

Alpha radiation consists of positively charged particles made up of two protons and two neutrons. It is stopped completely by a sheet of paper or the thin surface layer of the skin; however, if alpha-emitters are ingested by breathing, eating, or drinking they can expose internal tissues directly and may lead to cancer.

Beta radiation consists of electrons, which are negatively charged and more penetrating than alpha particles. They will pass through 1 or 2 centimetres of water but are stopped by a sheet of aluminium a few millimetres thick.

X-rays are electromagnetic radiation of the same type as light, but of much shorter wavelength. They will pass through the human body but are stopped by lead shielding.

Gamma rays are electromagnetic radiation of shorter wavelength than X-rays. Depending on their energy, they can pass through the human body but are stopped by thick walls of concrete or lead.

Neutrons are uncharged particles and do not produce ionization directly. However, their interaction with the nuclei of atoms can give rise to alpha, beta, gamma, or X-rays, which produce ionization. Neutrons are penetrating and can be stopped only by large thicknesses of concrete, water, or paraffin.

Radiation exposure is a complex issue. We are constantly exposed to naturally occurring ionizing radiation from radioactive material in the rocks making up the Earth, the floors and walls of the buildings we use, the air we breathe, the food we eat or drink, and in our own bodies. We also receive radiation from outer space in the form of cosmic rays.

We are also exposed to artificial radiation from historic nuclear weapons tests, the Chernobyl disaster, emissions from coal-fired power stations, nuclear power plants, nuclear reprocessing plants, medical X-rays, and from radiation used to diagnose diseases and treat cancer. The annual exposure from artificial sources is far lower than from natural sources. The dose profile for an “average” member of the UK population is shown in the table above, although there will be differences between individuals depending on where they live and what they do (for example, airline pilots would have a higher dose from cosmic rays and radiation workers would have a higher occupational dose).

B Radiation Effects and Dose Limits

Large doses of ionizing radiation in short periods of time can damage human tissues, leading to death or injury within a few days. Moderate doses can lead to cancer after some years. And it is generally accepted that low doses will still cause some damage, despite the difficulty in detecting it (although there is a body of opinion that there exists a “threshold” below which there is no significant damage). There is still no definite conclusion as to whether exposure to the natural level of background radiation is harmful, although damaging effects have been demonstrated at levels a few times higher.

Radiation dose to tissue is measured in sieverts (Sv), although doses are usually expressed in millisieverts (mSv). One chest X-ray gives a dose of about 0.2 mSv. The natural background radiation dose in the UK is about 2.5 mSv per annum, although it is double that in some areas, and in certain parts of the world it may reach several hundred mSv. A dose of 5 Sv (that is, 5,000 mSv) is likely to be fatal.

Basic principles and recommendations on radiation protection are issued by the International Commission on Radiological Protection (ICRP) and used to develop international standards and national regulations to protect radiation workers and the general public. The basic approach is consistent all over the world. Over and above the natural background level, the dose limit for a radiation worker is set at 100 mSv over five years (an average of 20 mSv per year), and at 1 mSv per year for a member of the general public. Doses should always be kept as low as reasonably achievable, and the limits should not be exceeded.
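A quick check of the dose arithmetic, reading the ICRP worker limit as 100 mSv accumulated over five years and comparing the public limit with the UK background figure quoted earlier:

```python
# Occupational limit: 100 mSv accumulated over a five-year period.
worker_limit_5yr_msv = 100
avg_annual_worker_msv = worker_limit_5yr_msv / 5
print(avg_annual_worker_msv)  # 20.0 -- matching the UK annual figure

# Public limit compared with UK natural background (2.5 mSv per year):
public_limit_msv_per_year = 1.0
background_msv_per_year = 2.5
print(public_limit_msv_per_year / background_msv_per_year)  # 0.4 -- the public
# limit is well below typical natural background exposure
```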

In the UK the recommended maximum annual dose for a radiation worker is set at 20 mSv (although higher limits may apply elsewhere in the world) and the typical annual dose for a radiation worker would be controlled to less than 1.5 mSv. However, some may receive more than 10 mSv, and a few may approach the annual limit.

C Ensuring Nuclear Safety

In common with all hazardous industrial activities, the risk of major nuclear accidents is minimized at power stations and reprocessing plants by means of multiple levels of protection. In order of importance, engineered systems are provided for prevention, detection, and control of any release of radioactive material. Escape and evacuation of people on site and nearby are available as the last resort. Sophisticated analysis is carried out to evaluate the effect of the protective systems in all foreseeable accident scenarios and to demonstrate that the risk of failure is sufficiently low.

For example, for a major release of radioactivity from a modern nuclear power station there would have to be a whole series of failures: the primary cooling system would have to fail, followed by the emergency cooling system, then the control rods, then the pressure vessel, and finally the containment building, before significant amounts of radioactivity could be released.

The safety record of the nuclear industry worldwide over the last 45 years has been generally good, with the exception of the Windscale (Sellafield) fire in 1957 (which involved a military plutonium production reactor rather than a power reactor), the Three Mile Island accident in 1979, the Chernobyl disaster of 1986, and the most recent accident, at Tokai-Mura in Japan in 1999, all of which are discussed in more detail in the next section. However, there have also been a number of incidents at nuclear power stations and reprocessing plants over the years that resulted in severe damage or had the potential to escalate into major accidents, and these should therefore be classified as “near misses”.

Lessons have been learned. While safety relies on the design (that is, on engineered safety systems), it depends just as much on how the reactor is operated. Improvements have been made to reactor control systems in response to Three Mile Island and Chernobyl, but without trained and competent operators following valid procedures there remains the possibility of a major nuclear accident. And 12 RBMK reactors of the same type as the Chernobyl reactor are still in operation, with an inherently less safe design than any other type of nuclear power reactor.

D Nuclear Accidents

There have been four particularly severe nuclear accidents in the past 45 years, which released, or almost released, large amounts of radioactivity.

The 1957 Windscale fire was the worst nuclear accident in UK history. It happened when an early plutonium production reactor (with few safety systems) caught fire and is not representative of modern nuclear power reactors.

The 1979 core meltdown at the Three Mile Island PWR was the worst nuclear accident in US history. The accident was largely contained, but it occurred because of deficiencies in the control system and incorrect responses by the operators when abnormal conditions first arose, which then escalated into a far worse situation.

The 1986 Chernobyl disaster was the worst nuclear accident in history. It was caused by the operators carrying out an unauthorized and previously untried procedure on an RBMK reactor that involved them disabling a number of safety devices. This led to the reactor becoming unstable and eventually exploding. In the years following the accident over 30 people (mainly firefighters) died from radiation exposure. A further 300 workers and firefighters suffered radiation sickness (those who were sent in to clean up the plant following the explosion were later found to have been at a significantly increased risk of lung cancer) and almost 2,000 people in the surrounding area who were children at the time have developed thyroid cancer (which is, fortunately, treatable, and so few have died), with more cases expected. Massive amounts of radioactive material were dispersed throughout the Northern hemisphere.

In recent years confidence in Japan’s nuclear industry has been shaken by a number of serious accidents. In 1999 a “criticality incident” occurred at the Tokai-Mura nuclear plant: a sustained burst of neutrons caused by a chain reaction that was triggered when operators carried out a prohibited procedure while manufacturing highly enriched fuel (15 to 20 percent U-235) for an experimental fast reactor. There were a small number of fatalities among those closest to the incident, a pattern typical of criticality accidents, and people living in the surrounding area were irradiated with neutrons for some hours. In 2004, four people were killed and seven injured at a plant in Mihama, western Japan, when a corroded pipe burst, covering workers with scalding water that caused severe burns. Although officials insisted that there had been no radiation leak from the plant and that there was no danger to the surrounding area, the casualties were the highest in Japan’s history of nuclear power.


A Today

Worldwide, there are about 430 power reactors operating in 25 countries, providing about 17 percent of the world’s electricity. Of these, 56 percent are PWRs, 22 percent are BWRs, 6 percent are pressurized heavy water reactors (mostly CANDUs), 3 percent are AGRs, and 13 percent are other types. In all, 88 percent are fuelled by enriched uranium oxide, the rest by natural uranium. A few light water reactors also use mixed oxide fuel (MOX) and this is likely to increase, partly as a way to dispose of the growing stocks of military plutonium. The number of fast breeder reactors (FBRs) has fallen with the closure of FBR programmes in several countries.

Nuclear power and hydro-electric power together provide 36 percent of world electricity; neither puts carbon dioxide into the atmosphere. In both cases the technology is mature. The new renewable technologies hardly appear in the statistics but, with financial support, they are starting to make their presence felt. It is unlikely, however, that renewables will ever provide more than 20 percent of world energy. In a world greedy for energy, where oil supply is already beginning to be constrained and gas will follow by 2010, concerns about the security of energy supply are now being voiced. In the US, where power blackouts are prevalent in some states, life extension of nuclear stations is being implemented urgently to ensure supply, despite pressure from the environmental movement to phase out nuclear power.

Life extension of nuclear reactors in the UK has been very successful, but current policy appears to be to retire stations at the end of their useful lives and replace them with gas-fired stations. Replacement by gas, however, brings a huge carbon dioxide penalty that would derail the UK’s Kyoto Protocol obligations. The much more stringent post-2010 targets called for by the Royal Commission on Environmental Pollution, which require a 60 percent reduction in carbon dioxide emissions by 2050, have virtually no chance of being met without a nuclear contribution. Sweden and Germany are following a similar nuclear closure route.

Nuclear power construction has plateaued or is in decline in some developed countries and, consequently, teams of experienced nuclear engineers have been dispersed (despite the 38 new nuclear power plants currently under construction). Some countries, such as the UK, have lost the capacity to build a nuclear power station. University departments teaching nuclear technology have all but disappeared, which could be a limiting factor on new nuclear construction and will take time to change.

Nuclear power is generally not discussed by politicians with the exception of France, some Far Eastern countries, China, and Russia. The EU energy commissioner Loyola de Palacio said, on November 7, 2000: “From the environmental point of view nuclear energy cannot be rejected if you want to maintain our Kyoto commitments.” She went on to imply she wanted nuclear power to be part of the Kyoto Protocol’s Clean Development Mechanism.

B The Short-Term Future

Attention is beginning to move towards building new nuclear power stations. Looking ahead to a doubling of energy demand by 2050, and with the world now trying to reduce its use of fossil fuels in order to contain carbon dioxide emissions, it is difficult to see how this can be achieved without a substantial increase in nuclear power. Nevertheless, the public perception of the industry in many countries is that it is more dangerous than other forms of energy and the problems of storing nuclear waste have not been fully solved (though there is considerable evidence to counter this argument).

A new generation of advanced reactors is being developed, which are more fuel-efficient and inherently safer, with passive safety systems. The new designs are based on accumulated experience of operating PWRs and BWRs. Advanced boiling water and pressurized water reactors are already operating, and the smaller Westinghouse AP 600 design has been certificated (as already mentioned, global certification, as with new aircraft, will be essential to get new designs into production). The European pressurized water reactor is available for construction, a number of liquid- and gas-cooled fast reactor systems have been designed, and some prototypes constructed. These new designs should produce electricity more cheaply than coal-fired stations and than gas-generated electricity (if gas prices continue to rise), and probably also more cheaply than renewable electricity (and with better availability).

Interest in high-temperature gas-cooled reactors (HTGRs) using helium at 950 °C has been revived, particularly in Japan and China, and a Pebble Bed Modular Reactor (PBMR) with a direct-cycle gas turbine generator is being developed in South Africa.

The use of nuclear reactors to generate process heat is an important development, particularly if the heat is used for desalination. An integrated nuclear reactor producing electricity and clean water could produce water at between 0.7 dollars and 1.1 dollars per cubic metre. There is considerable interest in this technology from North Africa, the Arabian Peninsula states, Turkey, and northern China.

C Further Ahead

The existing types of nuclear reactor are not particularly efficient in their use of uranium. An alternative is the fast breeder reactor (FBR), which uses uranium some 60 times more efficiently than today’s PWRs and BWRs, although it is more expensive and is not yet a mature technology. Russian scientists have successfully operated the BN-600 fast reactor for 18 years with over 75 percent availability. FBRs in other countries have been less successful, and most were eventually closed down because it was thought that the technology would not be required for 30 years, uranium and plutonium being readily available at present. In the long term, fast reactor technology could effectively increase world energy resources by a factor of ten, and its time will no doubt come unless nuclear fusion can be engineered into a power station. Research on fusion continues, with the time horizon constantly receding, but it is expected that the prize will be worth the effort.
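As an illustration only, combining the reserve figures from the start of the article with the 60-fold efficiency claim for fast reactors gives a sense of the scale involved (these are the article's rounded figures, not a forecast):

```python
# Reserve lifetime with and without fast breeder reactors, using the
# article's figures: 4.4 million tonnes of reserves, 50,000 tonnes/year usage,
# and a claimed 60-fold improvement in uranium utilization.
reserves_tonnes = 4.4e6
usage_tonnes_per_year = 50_000
fbr_efficiency_factor = 60

thermal_reactor_years = reserves_tonnes / usage_tonnes_per_year
fbr_years = thermal_reactor_years * fbr_efficiency_factor
print(thermal_reactor_years, fbr_years)  # 88.0 5280.0 -- decades become millennia
```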

D Conclusions

The world is poised to make much more use of nuclear power, provided that the public perception of nuclear power as too dangerous to contemplate ultimately changes. It is possible that the destabilization of weather systems resulting from global warming may persuade people that nuclear power is the lesser of two evils.

The biggest drivers for new nuclear construction are the security of supply, the steadily increasing prices of natural gas and oil, the likely interruptions of gas and oil supply for political reasons, and the absence of carbon dioxide emissions from nuclear stations.

The way ahead seems to be represented by an increasing mixture of nuclear power and renewable energy.

Contributed By:
Nicholas S. Fells

Reviewed By:
Ian Fells


Endangered Species


Endangered Species, plant and animal species that are in immediate danger of extinction. The following degrees of endangerment have been defined. Critically endangered species, such as the California condor, are those that probably cannot survive without direct human intervention. Threatened species are abundant but are declining in total numbers. Rare species exist in relatively low numbers over their ranges but are not necessarily in immediate danger of extinction.

Extinction is actually a normal process in the course of evolution. Throughout geological time, many more species have become extinct than exist today. These species slowly disappeared because of climatic changes and the inability to adapt to such conditions as competition and predation. Since the 1600s, however, the process of extinction has accelerated rapidly through the impact of both human population growth and technological advances on natural ecosystems. Today the majority of the world’s environments are changing faster than the ability of most species to adapt to such changes through natural selection.


Species become extinct or endangered for a number of reasons, but the primary cause is the destruction of habitat. Drainage of wetlands, conversion of shrub lands to grazing lands, cutting and clearing of forests, desertification, urbanization and suburbanization, and highway and dam construction have seriously reduced available habitats. As the various habitats become fragmented into “islands”, the remaining animal populations crowd into smaller areas, causing further habitat destruction. Species in these small islands lose contact with other populations of their own kind, thereby reducing their genetic variation and making them less adaptable to environmental change. These small populations are highly vulnerable to extinction; for some species, the fragmented habitats become too small to support a viable population.

Since the 1600s, commercial exploitation of animals for food and other products has caused many species to become extinct or endangered. The slaughter of great whales for oil and meat, for example, has brought them to the brink of extinction; the African rhinoceros, killed for its horn, is also critically endangered. The great auk became extinct in the 19th century because of overhunting, and the Carolina parakeet perished as a species because of a combination of overhunting and habitat destruction. In 2000 cod was put on the endangered species list by the World Wide Fund for Nature (WWF). The number of cod in the United Kingdom’s waters has halved since the 1960s due to overfishing, which some claim is the result of excessively high quotas for catches set by the European Union. The decline in the cod population has also been influenced by a rise in sea temperatures, due to global warming, which has damaged the cod’s ability to spawn.

Introduced diseases, parasites, and predators against which native flora and fauna have no defenses have also exterminated or greatly reduced some species. The accidental introduction of a blight, for example, eliminated the chestnut tree from North American hardwood forests. Predator and pest control also have adverse effects. Excessive control of prairie dogs, for example, has nearly eliminated one of their natural predators, the black-footed ferret.

Pollution is another important cause of extinctions. Toxic chemicals—especially chlorinated hydrocarbons such as dichloro-diphenyl-trichloroethane (DDT) and polychlorinated biphenyls (PCBs)—have become concentrated in food webs, affecting most strongly those species at the end of the chain. Both DDT and PCBs, for example, interfere with the calcium metabolism of birds, causing soft-shelled eggs and malformed young. PCBs also impair reproduction in some carnivorous animals. Even ordinarily non-toxic substances such as nitrogen-based fertilizers can be detrimental in excess. A 1999 UN report stated that worldwide fertilizer use had increased from 14 million tonnes in 1950 to 145 million tonnes in the late 1980s, resulting in a drastic increase in the amount of nitrogen in ground and water supplies. Excessive nitrogen has polluted drinking water supplies and has also contributed to “exuberant and unwanted” growth of algae and other plants in many freshwater habitats. The eutrophication and eventual decay of these plants produces harmful wastes and removes oxygen from the water, killing other organisms. Venting hot water or wastes from industrial plants can also encourage excessive vegetation and speed decay. Water pollution and increased water temperatures (see Global Warming) have wiped out endemic races of fish in several habitats. Terrestrial and aerial life also relies directly or indirectly on these water supplies. Additionally, the increasing scarcity of fresh water affects all species, including humans, as do other environmental concerns (see Environment: Environmental Problems).

A United Nations Environment Programme (UNEP) report on the global environment published in May 2002 concluded that over 11,000 species (including almost a quarter of all mammals) face extinction within 30 years. In total more than 5,000 plants, 1,000 mammals, and 5,000 other animals (including one in eight birds) are endangered, mostly due to habitat destruction and invasion by non-native species. The report states that factors that caused previous extinctions are operating with “ever-increasing intensity”, although it suggests that these problems could be eased if treaties and conventions such as the Kyoto Protocol and Convention on Biodiversity were implemented globally.


Some private and governmental efforts have been directed at saving declining species. One immediate approach is to protect a species by legislation. Laws were enacted in the United States in the early 1900s, for example, to protect wildlife from commercial trade and overhunting. In 1973 the Endangered Species Act provided mechanisms for the conservation of ecosystems on which endangered species depend; it also discouraged the exploitation of endangered species in other countries by banning the importation and trade of any product made from such species. The United States also has various agreements with other nations—for example, with Canada and Mexico for the legal protection of migratory birds.

International efforts centre on the Convention on International Trade in Endangered Species of Wild Fauna and Flora, ratified by 51 nations. Its purpose is to restrict exploitation of wildlife and plants by regulating and restricting trade in species. The effectiveness of such laws in various countries, however, depends on enforcement and support by people and the courts. Because of a lack of law enforcement, the willingness of some segments of society to trade in endangered species, and the activities of poachers and dealers who supply the trade, the future of many species is in doubt in spite of legal protection.

Efforts to save endangered species also include the propagation of breeding stock for release in the wild, either to restore a breeding population (as in the case of the peregrine falcon) or to augment a natural population (as in the case of the whooping crane). Through captive breeding, the number of known California condors rose from 27 in 1987 to more than 240 by 2005. Another approach involves the determination of critical habitats that must be preserved for endangered species. These habitats may be protected by the establishment of reserves; the value of these may be limited, however, because of the island effect. The objections of special interest groups also make land preservation for the protection of endangered species difficult.
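The pace of recovery implied by the condor figures above can be checked with a short calculation. The assumption of steady exponential growth over the whole period is a simplification introduced here for illustration; the real population grew unevenly as releases and breeding seasons varied.

```python
# Average annual growth rate implied by the condor figures above:
# 27 known birds in 1987, more than 240 by 2005.
# Assumes steady exponential growth (a simplifying assumption).

def annual_growth_rate(n_start, n_end, years):
    """Rate r such that n_end = n_start * (1 + r) ** years."""
    return (n_end / n_start) ** (1.0 / years) - 1.0

rate = annual_growth_rate(27, 240, 2005 - 1987)
print(f"about {rate:.1%} per year")
```

Sustained growth of roughly 13 percent a year is far above what a wild condor population could manage unaided, which underlines how much of the recovery depended on the captive-breeding programme.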


Plant

Plant, any member of the plant kingdom, comprising about 260,000 known species of mosses, liverworts, ferns, herbaceous and woody plants, bushes, vines, trees, and various other forms that mantle the Earth and are also found in its waters. Plants range in size and complexity from small, nonvascular (without vein-like structures) mosses, which depend on direct contact with surface water, to giant sequoia trees, the largest living organisms, which can draw water and minerals through their vascular systems to heights of more than 100 m (330 ft).

Only a tiny percentage of plant species are directly used by human beings for food, shelter, fibre, and drugs. At the head of the list are rice, wheat, maize, legumes, cotton, conifers, and tobacco, on which whole economies and nations depend. Of even greater importance to human beings are the indirect benefits reaped from the entire plant kingdom, which for more than 3 billion years has been carrying out photosynthesis. Plants have laid down the fossil fuels that provide power for industrial society, and throughout their long history plants have supplied sufficient oxygen to the atmosphere to support the evolution of higher animals. Today the world’s biomass is composed overwhelmingly of plants, which not only underpin all food webs but also modify climates and create and hold down soil, making what would otherwise be stony, sandy masses habitable for life.


Plants are multicellular green organisms; their cells contain eukaryotic (with nuclei) protoplasm held within more or less rigid cell walls composed primarily of cellulose. The most important characteristic of plants is their ability to photosynthesize—that is, to make their own food by converting light energy into chemical energy—a process carried out in the green, chlorophyll-containing plastids (cellular organelles) called chloroplasts. A few plants have lost their chlorophyll and have become saprophytic or parasitic—that is, they absorb their food from dead or living organic matter, respectively—but details of their structure show that they are evolved plant forms.

The animal kingdom is also multicellular and eukaryotic, but its members differ from the plants in deriving nutrition from other organic matter; by ingesting food rather than absorbing it, as in the fungi; by lacking rigid cell walls; and, usually, by having sensory capabilities and being motile (able to move), at least at some stage. See Classification.

Fungi, also eukaryotic and long considered members of the plant kingdom, have now been placed in a separate kingdom because they lack chlorophyll and plastids and because their rigid cell walls contain chitin rather than cellulose. Fungi do not manufacture their own food; instead, they absorb it from dead or living organic matter.

The various groups of algae were also formerly placed in the plant kingdom because many are eukaryotic and because most have rigid cell walls and carry out photosynthesis. Nevertheless, because of the variety of pigment types, cell wall types, and different forms and structures found in the algae, they are now recognized as part of two separate kingdoms, containing a diversity of plant-like and other organisms that are not necessarily closely related. One of the divisions (phyla) of algae—comprising the green algae—is believed to have given rise to the plant kingdom, because its chlorophylls, cell walls, and other details of the cellular structure are similar to those of plants.


In the evolution of plants, it is now thought that single-celled green plants migrated from the sea to fresh water, where they evolved into multicellular organisms, and then probably invaded the land several times before one plant lineage survived and eventually diversified into all known land plants. The common ancestor was probably closely related to the tiny green algae known as coleochaetes, which still live in some of the world’s pristine freshwaters. Further, preliminary results from a major study using cladistics—systematic classification based on evolutionary relationships—have indicated that green plants, red plants, and brown plants evolved from three different varieties of one-celled, plant-like organisms and should, therefore, be grouped into separate kingdoms. Green plants would still comprise the largest kingdom of trees, shrubs, grasses, ferns, mosses, and flowering plants (see Angiosperms), whereas brown and red plants have survived mostly as species of seaweed and microscopic algae called diatoms. Recent research has also proposed that the first flowering plants, which developed about 142 million years ago, were closely related to one of three existing species: Amborella, which grows only in New Caledonia and produces small, cream-coloured flowers; Nymphaea, or water lilies; and Austrobaileya, a plant native to Australia.


The many species of organisms in the plant kingdom are divided into several divisions (the botanical equivalent of phyla) totalling about 260,000 described species. The bryophytes are a diverse assemblage of three classes of nonvascular plants, with about 16,000 species, that includes the mosses, liverworts, and hornworts. Bryophytes lack a well-developed vascular system for the internal conduction of water and nutrients. The familiar leafy plant of bryophytes is the sexual, or gamete-producing, generation (the gametophyte) of the life cycle of these organisms. The spore-producing generation (the sporophyte) of bryophytes is wholly or partially dependent on the gametophyte. Because of the lack of a vascular system and because the gametes require a film of water for dispersal, bryophytes are generally small plants that tend to occur in moist conditions, although some attain large size under favourable circumstances and others (usually very small) are adapted to desert life.

The other divisions are collectively termed vascular plants or tracheophytes. Vascular tissue is internal conducting tissue for the transport of water, minerals, and food. There are two types of vascular tissue: xylem, which conducts water and minerals from the ground to stems and leaves, and phloem, which conducts food produced in the leaves to the stems, roots, and storage and reproductive organs. Besides the presence of vascular tissue, tracheophytes contrast with bryophytes in that tracheophyte leafy plants are the asexual, or spore-producing, generation of their life cycle. In the evolution of tracheophytes, the spore-producing generation became much larger and more complex and became independent of the gamete-producing generation, which became reduced. In seed-bearing plants (gymnosperms and angiosperms) the gametophyte generation is no longer free-living but is contained in the sporophyte tissue. The sporophyte embryo is contained in a seed, which is dispersed from the plant. This ability to evolve larger and more diverse sporophytes while reducing the vulnerable gametophyte, together with the ability of the vascular system to lift water, freed tracheophytes from direct dependence on surface water. They were thus able to dominate all the terrestrial habitats of the Earth, except the higher arctic zones, and to provide food and shelter for its diverse animal inhabitants.


The tremendous variety of plant species is, in part, a reflection of the many distinct cell types that make up individual plants. Fundamental similarities exist among all these cell types, however, and these similarities indicate the common origin and the interrelationships of the different plant species. Each individual plant cell is at least partly self-sufficient, being isolated from its neighbours by a cell membrane, or plasma membrane, and a cell wall. The membrane and wall allow the individual cell to carry out its functions; at the same time, communication with surrounding cells is made possible through cytoplasmic connections called plasmodesmata.

A Cell Wall

The most important feature distinguishing the cells of plants from those of animals is the cell wall. In plants this wall protects the cellular contents and limits cell size. It also has important structural and physiological roles in the life of the plant, being involved in transport, absorption, and secretion.

A plant’s cell wall is composed of several chemicals, of which cellulose (a polymer made up of molecules of the sugar glucose) is the most important. Cellulose molecules are united into fibrils, which form the structural framework of the wall. Other important constituents of many cell walls are lignins, which add rigidity, and waxes, such as cutin and suberin, which reduce water loss from cells. Many plant cells produce both a primary cell wall, while the cell is growing, and a secondary cell wall, laid down inside the primary wall after growth has ceased. Plasmodesmata penetrate both primary and secondary cell walls, providing pathways between cells for transporting substances.

B Protoplast

Within the cell wall are the living contents of the cell, called the protoplast. These contents are bounded by a single, two-layered cell membrane. The protoplast contains the cytoplasm, which in turn contains various membrane-bound organelles and vacuoles, as well as the nucleus, which is the hereditary unit of the cell.

B1 Vacuoles

Vacuoles are membrane-bound cavities filled with cell sap, which is made up mostly of water containing various dissolved sugars, salts, and other chemicals.

B2 Plastids

Plastids are organelles—specialized cellular parts that are analogous to organs—bounded by two membranes. Three kinds of plastids are important here. Chloroplasts contain chlorophylls and carotenoid pigments; they are the site of photosynthesis, the process in which light energy from the Sun is fixed as chemical energy in the bonds of various carbon compounds. Leucoplasts, which contain no pigments, are involved in the synthesis of starch, oils, and proteins. Chromoplasts manufacture carotenoids.

B3 Mitochondria

Whereas plastids are involved in various ways in storing energy, another class of cellular organelles, the mitochondria, are the sites of respiration. This process involves the transfer of chemical energy from carbon-containing compounds to adenosine triphosphate, or ATP, the chief energy source for cells. The transfer takes place in three stages: glycolysis (in which acids are produced from carbohydrates), the Krebs Cycle, and electron transfer. Like plastids, mitochondria are bounded by two membranes, of which the inner one is extensively folded; the folds serve as the surfaces on which the respiratory reactions take place.

B4 Ribosomes, Golgi Apparatus, and Endoplasmic Reticulum

Two other important cellular contents are the ribosomes, the sites at which amino acids are linked together to form proteins, and the Golgi apparatus, which plays a role in the secretion of materials from cells. In addition, a complex membrane system called the endoplasmic reticulum runs through much of the cytoplasm and appears to function as a communication system; various kinds of cellular substances are channelled through it from place to place. Ribosomes are often connected to the endoplasmic reticulum, which is continuous with the double membrane surrounding the nucleus of the cell.

B5 Nucleus

The nucleus controls the ongoing functions of the cell by specifying which proteins are produced. It also stores and passes on genetic information to future generations of cells during cell division. See Genetics.


There are many variants of the generalized plant cell and its parts. Similar kinds of cells are organized into structural and functional units, or tissues, which make up the plant as a whole, and new cells (and tissues) are formed at growing points of actively dividing cells. These growing points, called meristems, are located either at the stem tips and root tips (apical meristems), where they are responsible for the primary growth of plants, or laterally in stems and roots (lateral meristems), where they are responsible for secondary plant growth. Three tissue systems are recognized in vascular plants: dermal, vascular, and ground (or fundamental).

A Dermal System

The dermal system consists of the epidermis, or outermost layer, of the plant body. It forms the skin of the plant, covering the leaves, flowers, roots, fruits, and seeds. Epidermal cells vary greatly in function and structure.

The epidermis may contain stomata, openings through which gases are exchanged with the atmosphere (see Transpiration). These openings are surrounded by specialized cells called guard cells, which, through changes in their size and shape, alter the size of the stomatal openings and thus regulate the gas exchange. The epidermis is covered with a waxy coating called the cuticle, which functions as a waterproofing layer and thus reduces water loss from the plant surface through evaporation. If the plant undergoes secondary growth—growth that increases the diameter of roots and stems through the activity of lateral meristems—the epidermis is replaced by a periderm, made up of heavily waterproofed cells (mainly cork tissue) that are dead at maturity.

B Vascular System

The vascular tissue system consists of two kinds of conducting tissues: the xylem, responsible for conduction of water and dissolved mineral nutrients, and the phloem, responsible for conduction of food. The xylem also stores food and helps to support the plant.

B1 Xylem

The xylem consists of two types of conducting cells: tracheids and vessel elements. Cells of both types are elongated, have secondary walls, lack cytoplasm, and are dead at maturity. The walls have pits—areas in which secondary thickening does not occur—through which water moves from cell to cell. Tracheids have tapered, overlapping ends, allowing water to pass longitudinally from cell to cell through the pits in their end walls, as well as transversely across the cells. Vessel elements are usually shorter and broader than tracheids, with much less noticeable tapering (if any at all). In addition to pits, the ends of vessel elements also have perforations (or pores)—areas of the cell wall that lack both primary and secondary thickenings and through which water and dissolved nutrients may freely pass.

B2 Phloem

The phloem, or food-conducting tissue, consists of cells that are living at maturity. The principal cells of phloem, the sieve elements, are so called because of the clusters of pores in their walls through which the protoplasts of adjoining cells are connected. Two types of sieve elements occur: sieve cells, with narrow pores in rather uniform clusters on the cell walls, and sieve-tube members, with larger pores on some walls of the cell than on others. Although the sieve elements contain cytoplasm at maturity, the nucleus and other organelles are lacking. Associated with the sieve elements are companion cells that do contain nuclei and that are responsible for manufacturing and secreting substances into the sieve elements and removing waste products from them.

C Ground System

The ground, or fundamental, tissue system of plants consists of three types of tissue. The first, called parenchyma, is found throughout the plant and is living and capable of cell division at maturity. Usually, only primary walls are present, and these are uniformly thickened. The cells of parenchyma tissue carry out many specialized physiological functions—for example, photosynthesis, storage, secretion, and wound healing. Parenchyma cells also occur in the xylem and phloem tissues.

Collenchyma, the second type of ground tissue, is also living at maturity and is made up of cells with unevenly thickened primary cell walls. Collenchyma tissue is pliable and functions as support tissue in young, growing parts of plants.

Sclerenchyma tissue, the third type, consists of cells that lack protoplasts at maturity and that have thick secondary walls, usually containing lignin. Sclerenchyma tissue is important in supporting and strengthening those parts of plants that have finished growing.


The body of a vascular plant is organized into three general kinds of organs: roots, stems, and leaves. These organs all contain the three kinds of tissue systems mentioned above, but they differ in the way the cells are specialized to carry out different functions.

A Roots

The function of roots is to anchor the plant to its substrate (usually soil) and to absorb water and minerals. Thus, roots are generally found underground and grow downwards, or in the direction of gravity. Unlike stems, they have no leaves or nodes. The epidermis just behind the growing tip of roots is covered with root hairs, which are outgrowths of the epidermal cells. The root hairs increase the surface area of the roots and serve as the surface through which water and nutrients are absorbed.

Internally, roots consist largely of xylem and phloem, although many are highly modified to carry out specialized functions. Thus, some roots are important food and storage organs—for example, beetroots, carrots, and radishes. Such roots have an abundance of parenchyma tissue. Many tropical trees have aerial prop roots that serve to hold the stem in an upright position. Epiphytes have roots modified for the rapid absorption of rainwater that flows over the bark of the host plants.

Roots increase in length through the activity of apical meristems, and in diameter through the activity of lateral meristems. Branch roots originate internally at some distance behind the growing tip when certain cells become meristematic.

B Stems

Stems are usually above ground, grow upwards, and bear leaves, which are attached in a regular pattern at nodes along the stem. The parts of the stem between nodes are called internodes. Stems increase in length through the activity of an apical meristem at the stem tip. This growing point also gives rise to new leaves, which surround and protect the stem tip, or apical bud, before they expand. Apical buds of deciduous trees, which lose their leaves during part of the year, are usually protected by modified leaves called bud scales.

Stems are more variable in external appearance and internal structure than roots, but they too consist of the three tissue systems and have several features in common. Vascular tissue is present in bundles that run the length of the stem, forming a continuous network with the vascular tissue in the leaves and the roots. The vascular tissue of herbaceous plants is surrounded by parenchyma tissue, whereas the stems of woody plants consist mostly of hard xylem tissue. Stems increase in diameter through the activity of lateral meristems, which produce the bark and wood of woody plants. The bark, which also contains the phloem, serves as a protective outer covering, preventing damage and water loss.

Within the plant kingdom, there are many modifications of the basic stem, such as the thorns of hawthorns. Climbing stems, such as the tendrils of grapes, have special modifications that allow them to grow up and attach to their substrate. Many plants have reduced leaves or no leaves at all, and their stems act as the photosynthetic surface (see Cactus). Some stems creep along the surface of the ground and reproduce the plants through vegetative means; many grasses reproduce in this way (see Vegetative Reproduction). Other stems are borne underground and serve as food-storage organs, often allowing the plant to survive through the winter; the bulbs of tulips and the corms of crocus are examples.

C Leaves

Leaves are the primary photosynthetic organs of most plants. They are usually flattened blades that consist, internally, mostly of parenchyma tissue called the mesophyll, which is made up of loosely arranged cells with spaces between them. The spaces are filled with air, from which the cells absorb carbon dioxide and into which they expel oxygen (see Photosynthesis). The mesophyll is bounded by the upper and lower surfaces of the leaf blade, which are covered by epidermal tissue. A vascular network runs through the mesophyll, providing the cells with water and carrying the food products of photosynthesis to other parts of the plant.

The leaf blade is connected to the stem through a narrow portion called the petiole, or stalk, which consists mostly of vascular tissue. Appendages called stipules are often present at the base of the petiole.

Many specialized forms of leaves occur. Some are modified as spines, which help to protect plants from predators. Certain groups of plants possess highly modified leaves that trap and digest insects, providing nutrients that the plants cannot otherwise obtain (see Insectivorous Plants). Some leaves are brightly coloured and petal-like, serving to attract pollinators to otherwise small, unattractive flowers. Perhaps the most highly modified leaves are flowers themselves. The individual parts of flowers—carpels, stamens, petals, and sepals—are all modified leaves that have taken on reproductive functions.


The growth and differentiation of the various plant tissue and organ systems are controlled by various internal and external factors.

A Hormones

Plant hormones, specialized chemical substances produced by plants, are the main internal factors controlling growth and development. Hormones are produced in one part of a plant and transported to others, where they are effective in very small amounts. Depending on the target tissue, a given hormone may have different effects. Thus, auxin, one of the most important plant hormones, is produced by growing stem tips and transported to other areas where it may either promote growth or inhibit it. In stems, for example, auxin promotes cell elongation and the differentiation of vascular tissue, whereas in roots it inhibits growth in the main system but promotes the formation of adventitious roots. It also retards the abscission (dropping off) of flowers, fruits, and leaves.

Gibberellins are other important plant-growth hormones; more than 50 kinds are known. They control the elongation of stems, and they cause the germination of some grass seeds by initiating the production of enzymes that break down starch into sugars to nourish the plant embryo. Cytokinins promote the growth of lateral buds, acting in opposition to auxin; they also promote bud formation. In addition, plants produce the gas ethylene through the partial decomposition of certain hydrocarbons, and ethylene in turn regulates fruit maturation and abscission.

B Tropisms

Various external factors, often acting together with hormones, are also important in plant growth and development. One important class of responses to external stimuli is that of the tropisms—responses that cause a change in the direction of a plant’s growth. Examples are phototropism, the bending of a stem towards light, and geotropism, the response of a stem or root to gravity. Stems are negatively geotropic, growing upwards, whereas roots are positively geotropic, growing downwards. Photoperiodism, the response to cycles of darkness and light, is particularly important in the initiation of flowering. Some plants are short-day plants, flowering only when periods of light are less than a certain length (see Biological Clocks). Other variables—both internal, such as the age of the plant, and external, such as temperature—are also involved in the complex beginnings of flowering.


Because the most familiar ones are rooted in the ground, plants are commonly thought of as leading passive lives. However, a look at the ingeniously developed interactions that plants have with their biological and physical surroundings quickly corrects this notion.

A Cooperation and Competition

Many plant species exist as separate male and female plants, and pollen from male flowers must reach the female flowers in order for pollination and seed development to take place. The agent of pollination is sometimes the wind (a part of the physical environment), but in many cases, it is an insect, bat, or bird (members of the biological environment). Plants may also rely on agents for dispersing their seeds. Thus, after pollination, cherry trees develop cherries that attract birds, which eat the fruit and excrete the cherry stones some distance away.

Plants have evolved many other mutually beneficial relationships with other organisms, such as the nitrogen-fixing bacteria that occur in the nodules on the roots of legumes, and the mycorrhizal fungi that grow in association with plant roots. Without them, many plants would be unable to grow properly. Many prairie grasses and other plants that flourish on open land depend on various herbivores to keep forests from closing in and shading them.

In the competition among plants for light, many species have increased in height, and evolved particular leaf shapes and crown shapes, in order to intercept the Sun’s rays. In addition, many plants produce chemical substances that inhibit the germination or establishment of seeds of other species near them, thus excluding competing species from mineral resources as well as light. Walnut species, for example, exhibit such chemical inhibition, known as allelopathy.

B The Food Web

Because plants are autotrophic organisms—that is, they are able to manufacture their own food—they lie at the very foundation of the food web. Heterotrophic organisms (organisms that cannot manufacture their own food) usually lead less sedentary lives than plants, but they ultimately depend on autotrophs as sources of food. Plants are first fed upon by primary consumers, or herbivores, which in turn are fed upon by secondary consumers, or carnivores. Decomposers act upon all levels of the food web. A large proportion of energy is lost at each step in the food web; only about 10 percent of the energy in one step is stored by the next. Thus, most food webs contain only a few steps.
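The tenfold loss of energy at each step can be illustrated with a small sketch. The starting figure of 1,000,000 units of plant energy is a hypothetical value chosen for illustration, not a measurement:

```python
# Energy reaching successive steps of a food web, assuming only
# about 10 percent of the energy in one step is stored by the next.
# The 1,000,000-unit starting figure is a hypothetical value.

def energy_at_step(plant_energy, step, efficiency=0.1):
    """Energy available at a given step (step 0 = the plants)."""
    return plant_energy * efficiency ** step

levels = ["plants", "herbivores", "carnivores", "top carnivores"]
for step, level in enumerate(levels):
    print(f"{level}: {energy_at_step(1_000_000, step):,.0f} units")
```

By the fourth level only about a thousandth of the original energy remains, which is why most food webs contain only a few steps.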

C Plants and Human Beings

From the prehistoric beginnings of agriculture until recent times, only a small proportion of the total number of plant species have been taken from the wild and refined to become primary sources of food, fibre, shelter, and drugs. This process of plant cultivation and breeding began largely by accident, possibly as the seeds of wild fruits and vegetables, gathered near human habitations, sprouted and were crudely cultivated. Plants such as wheat, which possibly originated in the Middle East more than 9,000 years ago, were selected and replanted year after year for their superior food value; today many domesticated plants can scarcely be traced back to their wild ancestors or to the plant communities in which they originated. This selective process took place with no prior knowledge of plant breeding but, rather, through the constant and close familiarity that pre-industrial human beings had with plants.

Today, however, the human relationship with plants is nearly reversed: an increasing majority of people have little or no contact with plant cultivation, and the farmers who do have such contact are becoming more and more specialized in single crops. The breeding process, on the other hand, has been greatly accelerated, largely through advances in genetics. Plant geneticists are now able to develop, in only a few years, such plant strains as wind-resistant maize, for example, thus greatly increasing crop yields.

At the same time, human beings have accelerated the demand for food and energy to the extent that entire species and ecosystems of plants are being destroyed before scientists can make a proper inventory of the world’s plant populations or develop an understanding of which plant species have the potential to benefit humanity. Most species remain poorly understood; those that seem to offer the greatest hope exist in tropical areas where rapidly increasing human exploitation can quickly reduce the land to arid, sandy wastes. A basic knowledge of plants is important in its own right, but it is also useful in attempting to solve many of the problems facing the human world today. See Food Supply, The World.

See also Dicots; Diseases of Plants; Fruit; Monocots; Nut; Plant Distribution; Plant Propagation; Poisonous Plants and articles on major plant groups.

Reviewed By:
Department of Botany, Natural History Museum
