
Commentary

Should schools ban or integrate generative AI in the classroom?

Regina Ta, Research Intern - The Brookings Institution
Darrell M. West, Senior Fellow - Center for Technology Innovation, Douglas Dillon Chair in Governmental Studies

August 7, 2023


  • The advent of generative AI tools creates both opportunities and risks for students and teachers.
  • So far, public schools have followed one of three strategies: banning generative AI, integrating it into curricula, or placing it under further review.
  • Moving forward, schools should develop guiding principles for the use of AI tools, provide training resources for educators, and empower educators to implement those principles.

The start of a new school year is fast approaching, but a major question remains unresolved: What are schools going to do about generative AI? Since ChatGPT’s release on November 30, 2022, educators have been slow to address questions about whether to allow its use in the classroom and how the tool affects pedagogy, student learning, and creativity. Debates among stakeholders—including teachers, parents, students, and edtech developers—have been intense, weighing the opportunities for personalized learning, enhanced evaluations, and augmented human performance against the possible risks of increased plagiarism and cheating, disinformation and discriminatory bias, and weakened critical thinking.

In this post, we review current responses to generative AI across K-12 public school districts and explore what remains to be done. So far, public schools have varied in their approaches: some have banned generative AI, others have integrated it, and many are still reviewing it without definitive guidelines. After describing how public schools are handling these options, we suggest a path forward in which schools establish guiding principles, provide training resources, empower educators to implement those principles, and help overburdened districts that are already struggling with instructional, infrastructure, and financial challenges.

Three paths of action from public schools

Colleges and universities are largely deferring to faculty to determine policies on generative AI, so much of higher education is moving on an ad hoc basis that varies by classroom, course, and professor. There is neither a common approach across universities nor an agreed-upon set of policies for how to move forward.

In the case of K-12 public school districts, most administrators are taking institutional action and implementing decisions for entire school districts. Rather than delegating the decision to teachers, they are enacting across-the-board policies that affect every teacher and student in their jurisdiction. Their efforts fall into one of three categories: banning, integrating, or reviewing generative AI.

Banning generative AI

By the end of May 2023, ChatGPT had joined YouTube, Netflix, and Roblox on the lists of websites banned for school staff and students in various large U.S. school districts, where access would require special approval. The controversial movement to ban ChatGPT began when the two largest school districts in the nation—New York City Public Schools and Los Angeles Unified—blocked access to ChatGPT from school Wi-Fi networks and devices. Other districts soon followed suit.

Citing the Children’s Internet Protection Act (CIPA), Fairfax County Public Schools in Virginia restricted access to ChatGPT on the grounds that the chatbot may not be appropriate for minors. Texas’s Austin Independent School District cited similar concerns about academic integrity and child safety in its decision. Seattle Public Schools banned access not only to ChatGPT but also to six additional websites that provide AI-powered writing assistance, including Rytr, Jasper, and WordAI. While these were not full bans, restrictions on student use affected teacher adoption and use as well.

However, one problem with banning or restricting ChatGPT is that students can always find ways to circumvent school-issued bans outside the classroom. ChatGPT and similar chatbot tools remain accessible from home or non-school networks and devices. Students could also turn to other third-party writing tools, since it would be impractical to ban the growing number of websites and applications driven by generative AI. Besides, bans may only be band-aid solutions that distract from the root causes of inefficacy in our school systems—for instance, concerns about ChatGPT-enabled cheating might instead point to a need to change how teachers assess students.

But the biggest problem, by far, is that this approach could cause more harm than good if the potential benefits and opportunities are not weighed alongside the risks. ChatGPT can enrich learning and teaching in K-12 classrooms, and a full ban might deny students and teachers opportunities to leverage the technology for instruction or lesson development. Instead of universally banning ChatGPT, school districts should recognize that needs for adoption and use vary by teacher, classroom, and student: imagine using ChatGPT for a history class versus an art class, for students whose first language is not English, or for students with learning disabilities. Different issues arise in different use cases, so across-the-board bans, and even restrictions for that matter, could limit the ability of students and instructors to take advantage of relevant learning benefits and, in turn, hamper adoption and use in postsecondary education or the workplace.

Integrating generative AI

New York City Public Schools—the first school system to block access to ChatGPT—was also the first to reverse its ban. The reversal came within four months of the initial ban, after convenings of tech industry representatives and educators to evaluate emerging risks and to explore how ChatGPT’s capabilities could be leveraged for the better. To support teachers, NYC school district leaders have promised to provide resources developed by MIT (Massachusetts Institute of Technology), along with real-life examples of successful AI implementation from classrooms in the district that were early adopters of the technology. The district also plans to create a shared repository to track each school’s progress and share findings across schools.

Schools like Peninsula School District in Washington had already been working to integrate AI into their curricula, so when ChatGPT arrived, they were prepared: digital learning teams visited classrooms across different grade levels to share how language models work, as well as how to identify and leverage AI-generated content. Alliance City School District in Ohio is also embracing ChatGPT’s potential, resolving to proactively set boundaries on its usage to prevent misuse. In Pennsylvania’s Lower Merion School District, students will hone their critical thinking skills by analyzing and editing AI-generated writing. In all these cases, responsibly integrating generative AI as a teaching tool will require school districts to invest in proper oversight procedures and professional development for educators.

To that end, Garden City Public Schools in New York has held training sessions for educators to demonstrate the capabilities of different generative AI tools and show how to incorporate them effectively and tailor materials to students’ needs. Schools like Norway-Vulcan Area Schools in Michigan also plan to provide professional development opportunities for teachers and to strengthen the school community’s understanding of its honor code and plagiarism policies. The district has encouraged teachers to use Turnitin’s AI detector to check for plagiarism as they prepare to teach with generative AI in the fall.

Some districts are proceeding more cautiously as they integrate generative AI. In Texas, Mineral Wells Independent School District is testing generative AI in an experimental set of classrooms and sending those instructors for general training in AI. Elsewhere in Texas, Eanes Independent School District is similarly focused on helping teachers make the most of generative AI, starting with administrative use cases for ChatGPT, like scheduling or lesson planning.

Placing generative AI under review

While districts like Prince George’s County (MD), Jefferson County (KY), and Chicago (IL) have not banned ChatGPT, they have placed the chatbot under review. Most districts fall into this category: they have not yet acted and are watching and waiting. A recent survey by UNESCO (United Nations Educational, Scientific and Cultural Organization) found that fewer than 10% of schools have implemented guidance on generative AI, and of the schools with policies in place, 40% reported that the guidance was communicated only verbally, not in writing.

Just as we demand transparency from developers on how AI is built, we need to provide transparency for students and teachers on how AI can be used. Not enough schools have issued formal guidance on generative AI: a nationwide survey of K-12 teachers revealed that 72% have not received guidance on its use. The longer schools delay deliberating on bans or integrated use of new generative AI technologies, the higher the stakes, especially with a new school year on the horizon. ChatGPT, one of many generative AI tools being used for education, is increasingly accessed by students and teachers, and the absence of institutional policies may enable counterproductive use cases. Without an educational sandbox for generative AI usage, schools run the risk of students deploying these rapidly developing technologies in unplanned ways, with unintended consequences for safety, equity, and learning.

School districts also have a critical opportunity to govern the use and misuse of generative AI tools before the academic year begins. Districts can shape the technology’s role in the future of education instead of letting generative AI write that future for them. In California, education policy researchers have made a similar call to action. More important, amid national concerns about the digital divide in education, generative AI could help bridge learning gaps created by the lack of home internet, but only if schools support the equitable distribution of its benefits. Being proactive about the adoption and use of generative AI now will prepare school districts to set precedents for using future technologies in the classroom.

Recommendations for moving forward

Many classroom policies thus far are too narrowly focused on one tool: ChatGPT. There are already thousands of generative AI products on the market, and more are being developed every week. School districts need to consider the use not just of ChatGPT but of other generative AI applications, like Llama 2 or Bard, as well as widely used educational tools, like PowerSchool, Kahoot!, or Khan Academy.

In closing, we recommend the strategies below for how school districts can approach generative AI governance, regardless of the product.

Establish guiding principles

In collaboration with edtech specialists, teachers, and students, school districts should develop a set of common, guiding principles for students and teachers around generative AI use. These guidelines should define the purpose and scope of generative AI in the classroom, along with acceptable use cases. These may also serve to establish privacy protections for students and formalize procedures for how teachers can supervise student usage, give feedback, and handle misuse.

Provide training resources for teacher professional development

Whether administrators and teachers fear generative AI may disrupt their classrooms or instead welcome its potential, school districts can offer accessible training that equips all teachers to meet the present moment. These training opportunities do not have to be developed from scratch: districts can adapt online resources, like the Consortium for School Networking (CoSN)’s resource library and TeachAI, which also offers a set of guiding principles. When educators gain a robust understanding of generative AI, they can apply it productively in their classrooms and support responsible use and understanding among their students.

Empower educators to implement principles

Recognizing that there is no one-size-fits-all policy on generative AI, districts should empower educators to implement institutional recommendations and enforce academic integrity within their classrooms, while applying the technologies in ways that serve their students. This approach mirrors the one taken in the Department of Education’s recent AI report, which provides general guidance for learning and teaching with AI without commenting on specific generative AI tools, given their rapid evolution. Teachers can treat district-level principles as a guiding framework upon which to design transparent, well-defined expectations for their students.

Help overburdened districts

Finally, we need to help overburdened and under-resourced districts that are already struggling with instructional, infrastructure, and financial challenges. Sharp inequities remain in public school resources, and modern technologies often accentuate those disparities. Some schools have strong digital infrastructure, while others do not, and the same is true of the financial means to integrate new teaching tools into the classroom.

As schools consider how to use generative AI, we should be cognizant of these disparities and provide help to make sure marginalized districts are not left behind. Federal and state officials could earmark money for public school districts that receive minimal assistance, helping their teachers, students, and administrators make productive use of generative AI. In the end, ensuring diversity, equity, and inclusion in the deployment of these tools will require school leaders to level the playing field for their use, especially before adoption becomes widespread.

The proposed strategies are not meant to be followed in any particular order. Rather, they are the beginning of both immediate and future conversations about how to leverage generative AI tools in educational settings.

Acknowledgements and disclosures

Meta and Google are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and are not influenced by any donation.