This story about AI was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.
Artificial intelligence has transformed almost every aspect of our lives, from driverless cars to Siri, and soon, education will be no different. The automation of a school or university’s administrative tasks and the customization of student curricula are not only possible but imminent. The goal is for our computers to make humanlike judgments and perform tasks that make educators’ lives easier, but if we’re not careful, these machines will replicate our racism, too.
Kids from Black and Latino or Hispanic communities—who are often already on the wrong side of the digital divide—will face greater inequalities if we go too far toward digitizing education without considering how to check the inherent biases of the (mostly white) developers who create AI systems. AI is only as good as the information and values of the programmers who design it, and their biases can ultimately lead to both flaws in the technology and amplified biases in the real world.
This was the topic at the recent conference “Where Does Artificial Intelligence Fit in the Classroom?”, put on by the United Nations General Assembly, the United Nations Educational, Scientific and Cultural Organization (UNESCO), the think tank WISE and the Transformative Learning Technologies Lab at Teachers College, and hosted by Columbia University.
While many argue that the efficiencies of AI can level the playing field in classrooms, we need more due diligence and intellectual exploration before we deploy the technology to schools. Systemic racism and discrimination are already embedded in our educational systems. Developers must intentionally build AI systems through a lens of racial equity if the technology is going to disrupt the status quo. We’ve already seen the risks of using biased algorithms in the courtroom: Software used to forecast the risk of reoffending incorrectly marks Black defendants as future criminals at twice the rate of white defendants.
Previous attempts at making education more efficient and equitable demonstrate what can go wrong. Standardized testing promised an innovation that was irresistible to an earlier generation of education leaders hoping to democratize the system, and allowed schools and teachers to be held accountable when students didn’t measure up to expectations. But the designers of these assessment tools didn’t consider how the racism and inequality rife in American society would be baked into the tests if care wasn’t taken to make them more fair.
Overuse of standardized tests has helped concentrate wealthy people in select colleges and universities, stifling inclusion of and investment in talented people who happen to be lower-income. To fix this, the College Board, the nonprofit that prepares the SAT, announced a potential solution in May: the planned rollout of an “adversity score” assigned to each student who takes the college admissions exam. The score was to comprise 15 factors, including neighborhood and demographic characteristics such as crime rate and poverty, that would be appended to each student’s result. However, bending to a wave of criticism, the College Board retreated from its plan in August.
Recent attempts to introduce AI in schools have led to improvements in assessing students’ prior and ongoing learning, placing students in appropriate subject levels, scheduling classes, and individualizing instruction. Such advances enable differentiated lesson plans for a diverse set of learners. But that sorting can be fraught with pernicious consequences if the algorithms don’t consider students’ nuanced experiences, trapping low-income and minority students in low-achievement tracks, where they face worse instruction and reduced expectations.
The spread of AI technology can also tempt districts to replace human teachers with software, as is already happening in such places as the Mississippi Delta. Faced with a teacher shortage, districts there have turned to online platforms. But students have struggled without trained human teachers who not only know the subject matter but know and care about the students.
Overzealous tech salesmen haven’t helped matters. The educational landscape is now littered with virtual schools because ed-tech companies promised that they would reach the hard-to-educate as well as Black and Latino or Hispanic students, and create efficiencies in low-funded districts. Instead, many of the startups have been hit by scandal: After nearly 2,000 students earned zero credits last year, two online charter schools in Indiana were forced to close.
Artificial intelligence could still provide real benefits. For example, it could free teachers from time-consuming chores like grading homework. But AI won’t work if it’s intended as a way to avoid the hard work of recruiting skilled teachers, especially those who look like the kids they’re working with. For the rise of robots to count as progress, the technology should improve working conditions and increase job satisfaction for teachers. AI should reduce attrition and make the job more desirable. But if technologists don’t work with Black teachers, they won’t know which conditions need to change to free teachers for higher-order thinking and tasks.
We must diversify the pool of technology creators to incorporate people of color in all aspects of AI development, while continuing to train teachers on its proper usage and building in regulations to punish discrimination in its application.
AI will continue to disrupt long-standing institutions; the education system will face this transformation all the same. But with diligent oversight, these new systems can be utilized to produce satisfied teachers, accomplished students, and—finally—equity in the classroom.