The most well-known ILSAs are the core studies of the International Association for the Evaluation of Educational Achievement (IEA) and the Organisation for Economic Co-operation and Development (OECD):
INTRODUCTION
International large-scale assessments (ILSAs) of education
are empirical studies that assess educational abilities around
the world. The data are used in various ways to help inform
policymakers, educational researchers, and the general public.
However, despite the widespread use of ILSA data, how to
interpret and report results is often misunderstood. Different
study results are sometimes reported or interpreted to mean
the same thing, yet there are important differences that need
to be accounted for when using results. As leaders in the field of
international large-scale assessment, our intention for this brief
is to provide context for a better understanding of ILSA results
and how they should be interpreted and reported. We discuss
what ILSAs are, the history of their development, differences
as well as commonalities in approaches, organization, and
methodology, and important limitations, and we express why we
believe that ILSAs are unique, important, and relevant tools for
understanding educational systems and student achievement
around the world, and for informing evidence-based change.
WHAT ARE ILSAs?
ILSAs assess student achievement in specific disciplines and
provide context for the results by collecting additional data
at the student level. Further contextual details at the teacher,
principal, and/or system levels may also be collected. To
provide statistically valid results, a representative sample of
schools (usually around 150 to 200 schools) is drawn from
each participating country or education system, and a group of
students is randomly drawn from within each of the sampled
schools, either by sampling entire classrooms or by sampling
students across classrooms.
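The two-stage design described above can be sketched in code. The following is a minimal illustration, not an actual ILSA procedure: the function name, school counts, and use of simple random sampling are assumptions for clarity, whereas real ILSA designs typically select schools with probability proportional to size and apply sampling weights.

```python
import random

def two_stage_sample(schools, n_schools=150, students_per_school=30, seed=42):
    """Illustrative two-stage sample: first draw schools, then draw
    students within each sampled school.

    `schools` maps a school ID to its list of student IDs. Simple
    random sampling at both stages is a simplification of real
    ILSA designs, which use PPS school selection and weighting.
    """
    rng = random.Random(seed)
    # Stage 1: draw a sample of schools from the population.
    sampled_schools = rng.sample(sorted(schools), min(n_schools, len(schools)))
    sample = {}
    for school in sampled_schools:
        students = schools[school]
        # Stage 2: draw students across classrooms within the school
        # (some studies instead sample one or more intact classrooms).
        k = min(students_per_school, len(students))
        sample[school] = rng.sample(students, k)
    return sample

# Hypothetical population: 500 schools with 100 students each.
population = {f"school_{i}": [f"s{i}_{j}" for j in range(100)]
              for i in range(500)}
sample = two_stage_sample(population)
print(len(sample))  # 150 sampled schools
```

The two-stage structure matters for analysis: students in the same school are more alike than students drawn at random nationwide, so standard errors must account for the clustering rather than treat the sample as a simple random draw of students.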
BRIEF HISTORY
The first ILSA, IEA's Pilot Twelve-Country Study (Foshay et al.
1962), was launched by a group of researchers in 1958 (Husén
1983; IEA 2018). Scholars from various disciplines met at the
UNESCO Institute for Education in Hamburg (Germany) and
decided to launch an exploratory study to test whether it
was possible to compare learning outcomes across a range of
different countries and cultures. They chose to assess student
achievement in mathematics, assuming that it would be easiest
to translate into different languages and was thus more likely
to result in valid comparisons across countries. Their aim
was to find out what could be learned through international
assessment, with the hope that countries could learn from
each other. As Torsten Husén phrased it: "In general terms,
international studies such as this one can enable educationalists
(and ultimately those responsible for educational planning and
policy making) to benefit from the educational experiences
of other countries. It helps educationalists to view their own
system of education more objectively because for the first time
many of the variables related to educational achievement had to
be quantified in a standardized way" (Husén 1967, pp. 13-14).
While the first ILSAs were conducted by researchers with
quite minimal resourcing to satisfy academic interests in
investigating education, by 1990 educational policymakers
had begun to realize that ILSAs could potentially provide useful
evidence-based data. However, their financial support for ILSAs
demanded rapid outcomes. Where the first academic study
reports were sometimes launched up to eight years after the
study was conducted (Anderson et al. 1989), this wider interest
led to a new pressure to publish results as soon as possible. An
additional consequence was an interest in measuring trends
in education systems, a challenge that IEA rose to meet with
TIMSS, first conducted in 1995 and followed up by a second
IEA International Computer and Information Literacy Study (ICILS)
OECD Programme for International Student Assessment (PISA)
IEA Trends in International Mathematics and Science Study (TIMSS)
IEA Progress in International Reading Literacy Study (PIRLS)
IEA International Civic and Citizenship Education Study (ICCS)
Source: OECD 2019; IEA 2019