
Making the most of international large-scale assessments in education


by Paulína Koršňáková

International large-scale assessments are often described as an investment in a country’s education system. But who acts to ensure that the investment pays off? This column argues that it is worth spending a little more money analyzing the results of these assessments and learning from them to ensure that we get something more than a ranking in a table. Failing to investigate the data properly undermines the possibility of improvements, and ultimately wastes the efforts of students, teachers, and researchers. If you are growing a champion pumpkin, it is not enough to weigh it periodically: you need to nourish it too.

International large-scale assessments (ILSAs) are expensive endeavors. This is an inescapable reality. The participation fee is just the beginning; further expenses arise in-country.

Nevertheless, countries have good reasons for choosing to take part. A comparative perspective on their education system allows them to see what is possible. Countries have confidence in ILSAs’ well-developed frameworks, well-thought-out processes, and high-quality technical support. They also recognize the exceptional opportunities for capacity-building and for networking with colleagues across the world who are experienced in implementing these studies as a quality-assurance measure in their own countries.

More recently, with the international adoption of the United Nations’ Sustainable Development Goals (SDGs), taking part in ILSAs has become an excellent way for countries to monitor their progress towards achieving SDG4: ensuring quality education and lifelong learning for all.

But is it enough to get the numbers? What happens after a report is released? Who carries out the secondary analysis of the results? Where is the follow-up? And how do we make sure that every drop of information and insight is squeezed from the report?

Practicalities of international large-scale assessments in education

In the ILSA world, much of the work done in-country for each study is coordinated by just one person. In an IEA study—such as PIRLS (reading literacy), TIMSS (mathematics and science), ICILS (digital literacy), or ICCS (civics and citizenship)—the role falls to a national research coordinator (NRC). In an OECD study—such as PISA (mathematics, science, and reading)—it is a national project manager (NPM).

Ideally, these roles would be supported by a group of experts from various fields, including statistics and sampling, curricular content and context, education policy, and research methodology, as well as people with administrative and language skills. If this were the case, enough time could be allocated first to orchestrate the implementation of the study, and then to talk about the work once the study is complete—to ensure that the follow-up is, indeed, followed up.

In the real world of a middle-income country, however, the NRC/NPM is frequently either on their own, or paired with another heroic creature overseeing the technical aspects of the study related to data management. This is particularly true for countries that are either smaller in size or operating on a very limited budget.

Value for money for lower-income countries?

While ILSAs have predominantly been the preserve of higher-income countries, this does not mean that they should be. Nor does it mean that lower-income countries should not spend their precious resources on participation.

Indeed, these countries can gain even more from the technical support and capacity-building that ILSA participation offers. Participation also allows them to monitor progress towards SDG targets, many of which use ILSAs as indicators.

As the participation of low- and middle-income countries increases, there should be a renewed focus on the need for researchers and practitioners to extract maximum value from the data that ILSAs yield.

Who is responsible for dealing with results?

I speak from personal experience. In the early 2000s, I ran the reporting of the first cycle of one ILSA in my home country of Slovakia while, at the same time, preparing for the field test of the next cycle. Working alongside one colleague, I was able to follow the correct procedures for analyzing and reporting the study. But it was a struggle to find the time to raise awareness of our findings, or even to discuss and analyze them with practitioners.

Around 2004, I suggested hiring another person to take care of the new study cycle, freeing me to explore and share the results of the first one. I wanted to discuss the results with other stakeholders across the formal education system and beyond, so that we could identify national priorities and how to achieve them. Sadly, this idea was never seriously considered.

The argument was made that the person in the NRC/NPM role should just focus on the study—that it is not their job to explore the data and find insights for further development and improvement of an education system.

So, whose job is that then? Someone who is responsible for the current state of the education system? Someone else?

Health checks for education

ILSAs can be described as a health check for an education system. They identify the symptoms, not the cure. Nobody expects newly arrived blood-test results to start the healing process automatically. The tests can diagnose a problem; a doctor must then decide on treatment. And the treatment is based on the patient’s condition, not on the pills prescribed to their neighbor.

The challenge we face with ILSAs is that there are no dedicated doctors in any education system, and the available experts tend to work in isolation.

Looking at the results of ILSAs is a project in itself. They contain valuable information, not just about student achievement but also about the home and school environments that support learning, teachers’ qualifications and attitudes to their profession, and wider issues of how education policy is implemented.

Yet this information is only useful if it reaches the people who can understand and appreciate it. Sometimes, making the most of it requires the engagement and insights of many different stakeholders across the education system.

If we can agree that it is worth spending money on carrying out tests on our education systems, then we owe it to our young people and educators alike to spend a little more to ensure that we get the best treatment plan at the end of it.


Paulína Koršňáková
Senior Research and Liaison Advisor, IEA.