Using data and technology to enhance classroom teaching

Editor's note:

This is the second in a four-part blog series on the assessment of 21st century skills.

Education assessments like the OECD’s Programme for International Student Assessment (PISA) leverage technology to improve the assessment and teaching of 21st century skills at large scale. But how is this useful when classrooms don’t readily have access to technology? How does this help teachers and students in their daily learning environments? In today’s society, where much of the attention centers on technology and innovation, unequal access to technology can mean unequal access to quality education. How can research and learnings from technology-based, computer-delivered assessments contribute to all classrooms?

We are aware of, and at times alarmed by, the amount and nature of data that search engines and social media sites collect and use. Notwithstanding some of the concerns about these uses of data capture, there is no doubt that electronic data capture can be harnessed for good. For example, there is emerging evidence that collecting fine-grained student activity data during educational assessments can contribute directly to improving the teaching and learning of important 21st century skills. Putting aside efficiency functions like the automated scoring of tests, online assessment provides an ideal opportunity to capture data that is useful to teachers for improving teaching and learning.

Let’s look at just one example. At the end of 2016, about 3.5 billion individuals, nearly one-half of the world’s population, were using the internet, a threefold increase in 10 years. Much reading is now done online, making digital reading a critical skill. Furthermore, as accessing information through computers becomes standard, digital reading skills become ever more critical to a successful professional, personal, and social life. In 2009, PISA assessed at large scale, for the first time, how well students are able to read digital texts.

Reading involves the decoding and comprehension of written text, but digital reading adds to this a new challenge: the ability to move around within hypertext environments by predicting the likely content of links and using tools such as tabs, hyperlinks, and browser buttons. So digital reading comprises two parts: text processing and navigation. In the same way that search engines or social media sites track our interactions, the PISA Digital Reading Assessment recorded every mouse click and every keystroke, making it a simple matter to examine students’ navigational pathways closely.
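
To make the idea of capturing navigational pathways concrete, here is a minimal sketch in Python of how a stream of click and keystroke events might be reduced to the ordered sequence of pages a student visited. The event fields and page ids are illustrative assumptions, not PISA’s actual logging schema.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One logged interaction: a mouse click or a keystroke."""
    timestamp_ms: int  # milliseconds since the item was presented
    kind: str          # "click" or "keypress"
    target: str        # id of the link, tab, button, or field involved
    page: str          # id of the page on which the event occurred

def navigation_path(events):
    """Reduce a raw event stream to the ordered sequence of pages visited."""
    path = []
    for event in events:
        if not path or event.page != path[-1]:
            path.append(event.page)
    return path

# A hypothetical event stream for one student working on one item.
log = [
    Event(1200, "click", "link_news", "home"),
    Event(4800, "click", "tab_archive", "news"),
    Event(9300, "keypress", "search_box", "archive"),
]
print(navigation_path(log))  # ['home', 'news', 'archive']
```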


One very simple indicator of navigation behavior is whether students visit the page (or pages) that contain the information needed to solve a task successfully. There are powerful benefits in being able to group students according to how they interact with material and whether they are able to solve tasks. Imagine that four students (Students A, B, C, and D) complete an assessment of digital reading, and that we look closely at their responses to a single assessment item. Student A seems to struggle both to move around effectively within a set of hyperlinked websites and to answer the assessment item correctly. Student B answers the item correctly, but without showing the ability to navigate well. Student C shows the ability to navigate successfully, but is not able to solve the task correctly. Finally, Student D navigates to the page containing the needed information and solves the task.
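
One way to operationalize this grouping, sketched below under the same illustrative assumptions as before, is to cross two binary indicators: whether the student reached a page containing the target information, and whether the answer was correct.

```python
# Pages containing the information needed to solve this task (illustrative).
TARGET_PAGES = {"archive"}

def classify(visited_pages, answered_correctly):
    """Place a student in one of four groups: navigation success x task success."""
    navigated = bool(visited_pages & TARGET_PAGES)
    if navigated and answered_correctly:
        return "navigated and solved"        # Student D
    if navigated:
        return "navigated, did not solve"    # Student C
    if answered_correctly:
        return "solved without navigating"   # Student B
    return "neither navigated nor solved"    # Student A

# Hypothetical records: pages visited and answer correctness for Students A-D.
students = {
    "A": ({"home"}, False),
    "B": ({"home", "news"}, True),
    "C": ({"home", "archive"}, False),
    "D": ({"home", "archive"}, True),
}
for name, (visited, correct) in students.items():
    print(name, "->", classify(visited, correct))
```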

Making these distinctions has clear and critical implications for both assessment development and classroom teaching. Consider Student B. This student answered an assessment item correctly, but without ever locating the target information. Logically speaking, this situation is nonsense, and we can conclude that Student B guessed the correct response. For assessment developers, treating Student B in the same way as Student A (the student who lacked ability in both text processing and navigation), that is, scoring the guessed response as incorrect, leads to improvements in important test properties such as reliability and validity.
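
In scoring terms, this amounts to discounting a correct answer that was produced without visiting any target page. Here is a minimal sketch, again with hypothetical page ids; whether an operational program would apply exactly this rule is a design decision the example does not settle.

```python
# Pages holding the target information for this item (illustrative).
TARGET_PAGES = {"archive"}

def rescore(visited_pages, answered_correctly):
    """Score one item, treating a correct answer without navigation as a guess."""
    navigated = bool(visited_pages & TARGET_PAGES)
    return 1 if (answered_correctly and navigated) else 0

print(rescore({"home", "news"}, True))     # 0: Student B, correct but likely a guess
print(rescore({"home"}, False))            # 0: Student A, scored the same way
print(rescore({"home", "archive"}, True))  # 1: Student D
```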

At the level of the classroom, the comparisons between these four hypothetical students are just as informative: teachers can compare Students A and C, for example. Neither student solved the task correctly, so under conventional scoring methods both would be treated in the same way. By tracking their navigational pathways, however, we can ascertain that unlike Student A, Student C did navigate successfully to the page containing the target information. It follows that the nature of intervention for these two students should be different. While Student A appears to need assistance with both text processing and navigation, for Student C, who navigated successfully, the focus should be on text processing.
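
For teachers, the same four groups can be read as different starting points for intervention. The mapping below restates the reasoning above in code, reusing the hypothetical group labels from the earlier sketch; the entries for Students B and D go slightly beyond the text and are purely illustrative.

```python
# Suggested instructional focus for each group (labels from the earlier sketch).
INTERVENTION_FOCUS = {
    "neither navigated nor solved": "support both text processing and navigation",  # Student A
    "navigated, did not solve": "focus on text processing",                         # Student C
    "solved without navigating": "treat as a guess; check both skills",             # Student B
    "navigated and solved": "extend with more challenging material",                # Student D
}

for group, focus in INTERVENTION_FOCUS.items():
    print(f"{group}: {focus}")
```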

So how do we support teachers in identifying which indicators to look for when students are completing classroom tasks? As the example from digital reading shows, one of the major ways that online data capture and analysis can inform teaching and learning is through the identification of patterns of student thinking and behavior. Teachers can use this information in the strategic design of tasks, and in heightening their awareness of the different approaches students take when engaging with them. So although many classrooms do not have access to technology, the learnings from digital environments can be applied effectively.
