My team at the Reboot Foundation recently asked a simple question: Is the use of technology in schools associated with improved student outcomes?
Based on a study of national and international databases, we uncovered a surprising answer: the link between technology and student achievement is actually quite weak.
Innovations in educational technology have often sparked dramatic pronouncements, to be sure. Socrates famously observed that writing tools would impair people’s ability to remember. The blackboard was championed by advocates in the mid-1800s as a powerful, classroom-changing tool because it allowed a teacher to present material to all students at once.
At least in principle, today’s educational technologies are different, and in many ways they offer unprecedented learning experiences. Virtual reality can place students in completely immersive environments, letting them experience the effects of ocean acidification or a different planet’s gravity. Adaptive learning systems model students’ knowledge and attempt to provide them with new problems at just the right level of challenge.
At the same time, a growing body of evidence suggests that technology can have negative effects. Screen time can diminish face-to-face interactions, which are some of the most valuable learning opportunities for young children. Digital devices can easily distract people. Studies show, for instance, that people navigate and comprehend texts on paper more thoroughly than texts on screens.
To explore the efficacy of education technology, we analyzed two large achievement data sets. The first is the Program for International Student Assessment (PISA), which evaluates student achievement in over 90 countries. The second is the 2017 National Assessment of Educational Progress (NAEP), known in the U.S. as “the Nation’s Report Card.”
Internationally, we found a weak link between technology and outcomes: there is little evidence of a positive relationship between students’ performance on PISA and their self-reported use of technology, and some evidence of a negative one. On average, students who reported low-to-moderate use of school technology tended to score higher on PISA than non-users, but students who reported heavy use tended to score lower than peers who reported light or no use.
For instance, students in France who reported using the Internet at school for a few minutes to a half-hour daily scored 13 points higher on the PISA math assessment than students who reported not spending any time on the Internet during class. However, students in France who reported spending more than a half-hour on the Internet every day in class consistently scored lower than their peers who reported no time on the Internet.
We also found evidence of a negative relationship between nations’ performance on PISA and their students’ reported use of technology, after controlling for a variety of factors, including prior performance and wealth. These results were consistent across the math, reading, and science assessments.
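The idea of estimating a technology-use relationship while controlling for other factors can be sketched as an ordinary least squares regression. The sketch below uses synthetic data; the variable names, sample size, and effect sizes are illustrative assumptions, not the study’s actual data or model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data for 500 hypothetical observations. The built-in
# negative tech-use effect (-5) is an assumption for illustration only.
n = 500
prior_score = rng.normal(500, 50, n)   # prior assessment performance
wealth = rng.normal(0, 1, n)           # wealth index
tech_use = rng.normal(0, 1, n)         # self-reported technology use
score = (0.8 * prior_score + 10 * wealth
         - 5 * tech_use + rng.normal(0, 10, n))

# OLS: regress score on technology use while controlling for
# prior performance and wealth (plus an intercept column).
X = np.column_stack([np.ones(n), tech_use, prior_score, wealth])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)

print(f"estimated tech-use coefficient: {coef[1]:.2f}")
```

Because the controls are included in the design matrix, the coefficient on `tech_use` reflects its association with scores holding prior performance and wealth fixed, which is the spirit of the analysis described above.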
In the U.S., the relationship between technology and outcomes was also mixed. On NAEP, the results of our analysis varied widely among grade levels, assessments, and reported technologies. In some cases, we found positive relationships: using computers to conduct research for reading projects, for example, was positively associated with reading performance. But for other computer-based activities, such as using computers to practice spelling or grammar, there was little evidence of a positive relationship.
We also found evidence of a learning technology ceiling effect in some areas, with low to moderate usage showing a positive relationship while high usage showed a negative relationship.
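A ceiling effect of this kind is nonlinear: scores rise with low-to-moderate usage and then fall at high usage. One simple way to detect such an inverted-U pattern is to fit a quadratic. The data below are synthetic and purely illustrative; the shape and numbers are assumptions, not the study’s findings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical usage/score data with a built-in inverted-U shape:
# benefits at low-to-moderate usage, declines at high usage.
usage = rng.uniform(0, 4, 400)   # hours of daily classroom use
score = 500 + 12 * usage - 5 * usage**2 + rng.normal(0, 8, 400)

# Fit a quadratic; a negative leading coefficient means the fitted
# relationship turns downward beyond some usage level.
c2, c1, c0 = np.polyfit(usage, score, 2)
peak = -c1 / (2 * c2)            # usage level where fitted scores peak

print(f"curvature: {c2:.2f}, fitted peak at {peak:.2f} hours")
```

A negative curvature term with a peak inside the observed usage range is what a "low-to-moderate positive, high negative" pattern looks like in this framing.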
The results regarding tablet use in fourth-grade classes were particularly worrisome: the data showed a clear negative relationship with test outcomes. Fourth-grade students who reported using tablets in “all or almost all” classes scored 14 points lower on the reading exam than students who reported “never” using classroom tablets. This difference in scores is the equivalent of a full grade level, or a year’s worth of learning.
Our findings have clear limitations. While our research controlled for certain outside variables like wealth and prior performance, the results do not support causal conclusions: we cannot say that technology actually caused changes in student learning.
The current study builds on prior work: our team replicated an analysis by the Organisation for Economic Co-operation and Development (OECD), which found that the presence of classroom technology was associated with lower PISA scores. We uncovered similar results.
In the end, what’s clear is that there remain a number of open questions about technology in schools. While there’s clear evidence that technology can improve learning outcomes, our data suggest that technology may not always be used in a way that prompts richer forms of learning.
Our findings also indicate that schools and teachers should be more careful about when—and how—education technology is deployed in classrooms. And as a society struggling to prepare our children for an uncertain future, we need more deliberate implementation and careful research on the connection between technology and learning.