The case for using AI in school

70% of students are using ChatGPT for school—and that’s not a bad thing.

Although large language models (LLMs) have existed since 2017, everything changed with the launch of ChatGPT in late 2022, whose simple chat interface made the technology accessible to anyone from a middle school student to your non-technical coworker. By early 2023, ChatGPT had reached a landmark 100 million users in only about two months; for reference, it took TikTok nine months to reach that milestone.

Seeing how ubiquitous the technology became, it didn’t take long for questions to arise about how LLMs would affect schooling. By early 2023, many schools across the country had attempted blanket bans on the technology, only to quickly reverse them.

“I think that attempting to ban the use of LLMs in school is foolish and doomed to fail,” says Michael Whitaker, Senior Vice President of Strategy Execution and Organizational Innovation at ICF.

“Kids are really good at finding the path through to get their work done in the way that is low friction or easy.” 

Through a survey we conducted at Innovating With AI, we found that about 70% of students are using LLMs at least somewhat frequently. Only 16% of student respondents said they rarely or never utilize the technology.

Teachers see a similar picture, with about 30% of teacher respondents believing that at least 80% of students are using the technology on a regular basis.

With its usage being so prevalent, is there anything that can be done to prevent students from cheating with the use of LLMs? And is that even the right question to ask?

Cheating at school with LLMs

One of the biggest discussion points about LLMs in schools is that students will use them to cheat their way through classes. This can get complicated (and philosophical) quickly, since we need to define what cheating even means. In most cases, using ChatGPT to write full essays for a homework assignment would be considered cheating, but what about using it to write outlines or even correct grammar?

When looking at more obvious forms of “cheating,” there are various methods to (at least attempt to) catch students in the act. According to our survey, the most popular form of detection is style or tone analysis, with 51% of teachers saying it is a method they use in their classroom. This can be useful on essay assignments because platforms like ChatGPT will often give very similar responses to the same prompt.

In our interview with Matthew Slowinski, a U.S. history teacher for McAllen ISD, he described a specific example where it was obvious that a wide swath of students had used LLMs.

“We were covering Malcolm X and Dr. Martin Luther King, and specifically, ‘what type of protest is more effective, one through violence or one through non-violence?’, and then everyone comes back with the same response, [one of non-violence]” said Slowinski. “I hate to say it like this, the kids don’t know how to cheat yet.”

It’s worth noting that savvy students familiar with prompt engineering may be able to slip past teachers who use this method. You can even ask a tool like ChatGPT to argue a non-traditional position, such as defending violent protest in the example above.

Another method, used by 40% of surveyed teachers, is AI detection software such as Turnitin AI or GPTZero. This is effectively an automated approach to tone and style analysis, using an algorithm to predict whether something was written by an LLM. Although these tools are often marketed as over 99% accurate, they can be wrong. Illinois State and MIT are among many schools that have warned against their use, specifically because they can lead teachers to falsely accuse students of plagiarism.

One effectively foolproof method, which nearly 50% of teachers surveyed are already implementing, is oral follow-ups or in-class writing assignments. Students can still prepare material with the help of tools like ChatGPT beforehand, but they must ultimately engage with the material without them.

We spoke to Joshua Guther, a student at the University of Canterbury, who has seen a similar process: even when essays are assigned outside the classroom, students are still expected to discuss the material in person.

“For most of my classes there will be a mix of written assignments and in-person discussion groups,” said Guther. “If the person can’t really talk in depth about the topic, it’s obvious they don’t really know it and were using LLMs to complete the original assignment.”

In our survey, we found that 35% of students admit to using AI chatbots to write full essays and assignments, while a majority of students say they are using the tools to brainstorm ideas (85%) and write outlines (63%). This breakdown fits well in an environment where LLMs are tools, but not the complete picture. 

The changing classroom

Teachers, like students, are learning how to adapt to this new technology. It’s ultimately unrealistic to expect students to not use such a transformative tool. To keep up with the pace, educators should implement AI policies that factor in their use, rather than excluding them entirely. It’s easy to be idealistic, but the reality is that classrooms will have to evolve alongside the changing technology in real time.

 “We can’t put any safeguards on until the invention is out there,” said Slowinski. “Just embrace it and then learn from it.” 

Parents are also well aware of this wave of technology. Over 95% of the parents who answered our survey are at least partially familiar with LLMs, and 75% have a child they either suspect or know is using AI tools. Every parent who answered our survey was comfortable with their child using LLMs, provided the context was appropriate and the tools were used responsibly.

Banning AI doesn’t work, but ignoring it is shortsighted. Instead, LLMs need to be treated as tools, which starts with clear expectations of how they are used, as well as how to treat activities in class that don’t use them.

How to create AI policies in classrooms

Just like Google and Wikipedia before them, LLMs are here to stay in the classroom. Knowing this, it’s essential for classrooms to create AI policies that work with the technology, not against it. Here are some ideas to implement:

  • Follow up written homework with discussion. If homework involves writing short answers and essays, it should be followed up with oral discussions of the material. Even if students use LLMs to complete the work, they will be forced to learn enough of the material to involve themselves in discussion. You could even make the majority of the grade be the oral discussion instead of the assignment.
  • Use examples of proper AI use. Teachers should make it clear what constitutes plagiarism and cheating with AI tools.
  • Teach AI literacy in the classroom. Since LLMs are known for hallucinations and biases, it’s essential for both students and teachers to understand their limitations and to identify when it’s appropriate to use the tools versus doing outside research. These lessons can also be run as workshops for parents!
  • Create assignments that are AI-resistant. Adjust assignments to home in on personal experience, such as asking for a first-person connection to, say, reading material. 
  • Promote transparency and AI usage disclosures. Students should explicitly state when and how they used AI in their assignments. This has an additional benefit of helping teachers “catch” students who aren’t disclosing when they see obvious similarities across groups of students.

Naturally, these ideas are a starting point, especially as the technology continues to develop. If educators lean into the change instead of resisting it, students will come away both tech-savvy and prepared for the world they will soon enter.
