
Does instructing students ‘at the right level’ truly drive the impacts of targeted remediation programs?


In many low- and middle-income countries, tailoring instruction to students’ learning levels has emerged as a promising strategy to close learning gaps—whether through teacher-led interventions that promote differentiated instruction (e.g., “Teaching at the Right Level,” TaRL) or computer-adaptive learning software (e.g., “Mindspark”). In the aftermath of the COVID-19 shock, interest in such tailored remediation programs has grown even stronger, and leading experts currently recommend targeting instruction by learning level instead of by grade as a “great buy” for education policymakers.

Surprisingly, this enthusiastic embrace of tailored instruction does not rest on evidence that personalization truly drives the impacts of such programs. This is because—contrary to what their name might suggest—personalization is just one of many components of “targeted remediation” programs. Take TaRL, for example, which often also provides additional learning time for students and a change in pedagogy. In India, this program is also known as Combined Activities for Maximized Learning (CAMaL, not TaRL)—a name that better captures its package of various intervention components. Or take “Mindspark”: Aside from personalization, the computer-adaptive learning software also offers many other features, including games, practice exercises, and nudges for students to study more. How, then, do we know whether instructing students “at the right level” fuels the impact of these programs?

In a newly published study, we set out to answer this question. To isolate the effects of personalization, we randomly assigned about 1,500 students attending a special type of government-run school (India’s rural “model schools”) to two versions of the Mindspark math software. For half of these students, the computer provided materials that matched their enrolled grade level. The other half received the software’s individualized instruction component. These students were typically measured to be two to three grade levels behind, and the software successfully adjusted its content to match their diagnosed grade level. Since the only thing distinguishing students who received individualized instruction from those who did not was chance, any differences in learning reflect the causal effect of personalization.
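For readers interested in the mechanics, the comparison boils down to a difference in average outcomes between the two randomly formed groups, which can be written as a simple regression. This is an illustrative sketch in our own notation, not the exact specification used in the paper:

$$Y_i = \alpha + \beta \, \text{Personalized}_i + \varepsilon_i$$

Here, $Y_i$ is student $i$’s endline math score, and $\text{Personalized}_i$ equals one if the student was randomly assigned the individualized version of the software and zero if assigned the grade-level version. Because assignment was random, the two groups are comparable on average at baseline, so $\beta$ reflects the causal effect of personalization rather than pre-existing differences between students.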

Contrary to what their name suggests, successful “targeted remediation” programs do many things at once—beyond just personalizing instruction. Ours is one of the first studies to isolate the impact of personalization in a lower-middle-income country.

An unexpected finding

We were surprised by our findings. After nine months and more than 320 minutes of software usage, students who received the fully personalized version of Mindspark did not, on average, show meaningful differences in math learning outcomes relative to those who received the grade-level version. This result is puzzling: Students were, on average, two to three grade levels behind, yet they did not learn more when they had access to personalized, remedial materials.

Access to personalized instruction did, however, significantly improve the math achievement of students who were initially low performing. Students in the personalization group who started the program in the bottom 25 percent of performers in their grade saw a positive effect of 0.22 standard deviations on their math learning (Figure 1). This result is more intuitive: The farther students are behind, the more they struggle with at-grade materials, and the greater their potential benefit from targeted remediation.

Figure 1: Personalized instruction may be more helpful where learning gaps are larger

Whether to invest in tailored instruction may then depend on a single question: How far are students behind? Our results suggest that in settings with more moderate learning gaps, fine-grained personalization and differentiation strategies may be less relevant.

We conducted our study in a special type of school that selects students, has the requisite infrastructure to deploy educational software, and is more productive than regular government schools. Learning gaps are likely to be less pronounced in these schools than in traditional government schools.

In most of India’s regular government schools, however, a separate study we recently published suggests that learning deficits are severe enough—indeed, much more severe than previously known—for personalized remediation to be beneficial. We also documented that students make even less progress across grades than previously believed. For instance, a staggering three-quarters of India’s eighth graders have not mastered the foundational math skills expected of a fourth grader. Taken together, the findings of these two studies suggest that children who attend regular government schools may be better positioned to benefit from personalization. Future research should explore whether this is so.