Education, International Standards in, the comparison of educational performance between the school systems of different countries. Such comparisons are useful in describing a narrow range of performances, although, in their present state of development, they do little to explain, in ways that could help policymakers nationally, the reasons for the differences that such tests reveal.
The main cross-national comparative studies of learning achievement have been carried out under the auspices of the International Association for the Evaluation of Educational Achievement (IEA). Other major studies have been international versions of the United States’ National Assessment of Educational Progress programme; these International Assessment of Educational Progress studies are known as IAEPs.
The 14 international studies of this kind carried out between 1960 and 1995 were predominantly concerned with levels of literacy, mathematics, and science. A feature common to most of these studies is their use of shared assessment instruments (questionnaires) to record learning achievement in specific areas. These are applied to national samples of students of the same ages or school grades in the countries concerned. However, several technical problems arise in establishing valid comparisons between the school systems of different countries.
The first problem concerns the sample of school students to be tested. Testing is expensive and samples tend to be small. Much, therefore, depends on the composition of the samples. For example, students from private schools were not included in French studies. Such an exclusion would have had a marked effect in the United Kingdom (which did not, in this case, take part).
The second problem arises from the nature of the questions used in the tests. Different countries have different syllabuses that place different emphases on different aspects of the school curriculum. Although every effort is made, through international cooperation between those responsible for developing the questionnaires that incorporate the tests, to establish common ground, it is inevitable that the questions will suit some national systems better than others. In the 1991 IAEP assessment of mathematics, for example, 30 per cent of the questionnaire was devoted to “number and operations”. In response to questions about the emphasis placed on this area, Israel described it as representing 10 per cent of its goals in mathematics, while Switzerland accorded it 50 per cent. In the area of “algebra and function”, however, the percentages were reversed.
Problems of this kind make the quality of different education systems difficult to compare. The results of such comparisons do not in themselves explain the differences that emerge. However, information collected at the time of testing confirms that, unsurprisingly, above-average performance is related to the amount of time spent on silent reading, to the emphasis on storytelling in the early years of school, and, above all, to the level of access to books. On the other hand, there did not seem to be any close connection between the length of the school year, or class size, and the results achieved.
Comparisons between standards achieved at different ages, in different areas of the curriculum, and in different countries are in principle difficult to make and have in practice so far provided little reliable evidence. Fresh efforts are being made to improve the quality of that evidence in relation to science and mathematics, subjects which, unlike languages, have a degree of consistency of approach internationally, thereby enabling comparisons to be made with some confidence.
Peter Anthony Newsam