Education specialists argue that ILSA scores should not be used to guide educational policy


A pair of education specialists, one from Harvard University and the other from Boston College, have published a Policy Forum piece in the journal Science decrying the use of international large-scale education assessments (ILSAs) as tools for educational policymaking. In their paper, Judith Singer and Henry Braun point out problems with comparing ILSA results between countries and suggest the results could be put to better use by local entities looking to improve their own education systems.

As the authors note, ILSAs such as PISA and TIMSS have in recent years become a vehicle for nationalistic bragging among countries with high scores, while those with lower scores are often scorned. Using ILSA scores in this way, they argue, does little to improve education, which is the generally accepted purpose of testing students. They also warn that over-interpreting such scores is dangerous because it can have a dramatic negative impact on policy. Problems arise, they point out, when journalists and politicians use ILSA results to serve their own ends rather than to advance the goal of educating the world's children, because doing so can skew how people view their country's education system.

Ranking ILSA results by country, the authors note, is misleading: rather than offering a fair assessment of academic ability, such lists offer little beyond headlines and web clicks. The rankings also do not always portray academic achievement accurately. Some countries jump around on the list from year to year without making any changes to their systems, while others skew their results by excluding low-scoring students. The authors further note vast differences in how hard students try on such tests. For many students in countries such as Korea and Japan, scoring well on an ILSA is so important that parents spend hours helping them prepare, or spend thousands on tutors, whereas students in countries such as the U.S. see little advantage in preparing at all.

The authors suggest that countries instead view ILSA results as a means of learning more about their own education systems and, in doing so, discover ways those systems can be improved.

More information: Judith D. Singer et al. Testing international education assessments, Science (2018). DOI: 10.1126/science.aar4952

Summary
News stories on international large-scale education assessments (ILSAs) tend to highlight the performance of the media outlet's home country in comparison with the highest-scoring nations (in recent years, typically located in East Asia). Low (or declining) rankings can be so alarming that policy-makers leap to remedies—often ill-founded—on the basis of what they conclude is the "secret sauce" behind the top performers' scores. As statisticians studying the methods and policy uses of ILSAs (1), we believe the obsession with rankings—and the inevitable attempts to mimic specific features of the top performing systems—not only misleads, it diverts attention from more constructive uses of ILSA data. We highlight below the perils of drawing strong policy inferences from such highly aggregated data, illustrate benefits of conducting more nuanced analyses of ILSA data both within and across countries, and offer concrete suggestions for improving future ILSAs.

Journal information: Science

© 2018 Phys.org

