The media has written extensively about artificial intelligence (AI), fretting about how it will replace humans in almost every job and be the demise of human civilization. But on a more positive note, we have gained a lot from machines and “AI-augmented” humans, from sensors to prosthetics to gene editing. Little attention, though, has been given to the more modest, but potentially impactful, knowledge transfer from machines to humans: teaching, learning, and assessment in schools.
We are referring to the deconstruction of complex human behavior into educational strategies that teachers can deploy in their classrooms. Machines can study and measure behavior in a way that makes the contributing cognitive and social processes of children's behavior identifiable.
Here we consider three examples where computational psychometric models have identified successful strategies for solving problems in digital environments: 1) the analysis of eye-tracking data to improve the development of learning environments by human experts; 2) the analysis of “chat data” from collaborative problem solving tasks to provide learners and teachers with the most efficient strategies for successful collaboration; and 3) the analysis of sequences of learning behavior to identify hurdles that may lead to dropping out of school in order to identify the optimal point for feedback and encouragement.
In each of these examples, the machines identified the successful strategies to benefit students. The machines analyzed big data drawn from records of hundreds of students working in collaborative, digital environments. Using theoretical frameworks, the machines identified and distinguished the patterns of behaviors and paths taken by students through collaborative experiences. This is computational psychometrics: combining psychometric theory with big data representing psychological and psycho-educational characteristics to analyze group and individual differences.
When machines have identified both successful and problematic sequences of behaviors, researchers can synthesize these learnings and communicate them to teachers in practical, grounded ways. We know, for example, that to teach successful collaborative behavior in the classroom, one needs to teach both "sharing of information" and "negotiation"; to keep students engaged in a learning program, rewards and gamification tools can be applied (see Duolingo's streak); to prepare students for a test, teachers can ensure their students have read the instructions—more than once.
Complementing these learnings from AI, big data can also provide us with a way of deconstructing complex behaviors, or in other words, pulling complex behaviors apart to understand each contributing part. This is of course not a skill confined to AI. Teachers themselves can and do engage in these activities to understand the depth and nature of what they are teaching—and assessing.
Assessments play a crucial role in teaching: they can reveal the processes students use as they learn. This function is quite different from using assessment results to quantify a student's achievement against benchmarks, or to rank students in comparison with one another. The processes that machine learning helps us identify are precisely the processes that we need students to activate as they engage in, for example, problem solving or collaboration. And these processes are what teachers need to build into their classroom-based assessments.
Machine learning and the deconstruction of complex behavior are major resources, not only for psychometricians, but also for teachers. These resources give us greater insight into what we are measuring and inform how we design assessment tasks. Most recently, this approach was integrated into an expanded Evidence-Centered Design (e-ECD) framework to develop learning and assessment systems. This fits well with the concept of learning engineering proposed many years ago by Herbert Simon, a Nobel Laureate and Turing Award-winning professor at Carnegie Mellon University, and more recently by Bror Saxberg, vice president of learning science at the Chan Zuckerberg Initiative, to design systems at scale. The idea is to use the principles of engineering design to improve learning outcomes. Teachers can apply the same design considerations used in sophisticated learning and assessment systems when selecting appropriate activities and lesson plans.
These strategies are in principle available to teachers all over the world to improve their students' learning. The missing piece is how to connect these two very different worlds so that the strategies reach teacher development programs. In the interim, we have initiatives such as the ACT Academy, an example of a digital learning and assessment system that assists students and teachers, and the Optimizing Assessment for All project underway in sub-Saharan Africa and Asia, an example of the application of design principles in the classroom. Both of these initiatives, though designed for very different populations, have adopted approaches based on deconstructing complex skill sets with teachers to build assessment tasks for use in the classroom.