Feature Story | 27-Jan-2026

The dangers of not teaching students how to use AI responsibly

New insights from UVA research suggest teaching responsible AI use in classrooms is critical to preparing students for the modern workforce

University of Virginia School of Data Science

"If students do not learn how to use AI and other technologies appropriately in safe school settings where mistakes are an expected part of the learning process, then they may make mistakes when learning how to use these technologies as adults in higher-stakes settings like the workplace." - Bryan Christ

Generative artificial intelligence has disrupted the classroom, leaving many educators feeling that the only immediate, well-intentioned choice is to ban the technology from assignments and academic spaces. We spoke with Bryan Christ, a lecturer at the University of Virginia School of Data Science and an applied scientist at Microsoft, about why banning students from using large language models like ChatGPT does more harm than good, and why it is important for students to advocate for access to these tools in the classroom.


Q: What are the consequences of not teaching students how to use AI responsibly?  

Christ: I think the biggest educational consequence of not teaching students how to use AI responsibly is that they might use it to circumvent rather than support the learning process. For example, students might use AI to solve their homework problems without trying to solve them on their own first. In this way, they offload the learning process to AI rather than using it to support the learning process by, for example, having it give them a hint when they are stuck on a problem.  

This could then set a dangerous precedent for students that AI can complete tasks for them entirely, without their intervention or review, a habit that could creep into their future roles in the workforce. These students would then engage in cognitive offloading, handing tasks to AI in their jobs rather than using it as a tool to support their productivity. This could lead them to submit AI-generated work without first reviewing it, allowing critical errors to go unnoticed when models inevitably make mistakes. Such unreviewed, low-quality output is often referred to as AI slop.


Q: Many schools are responding to new technologies with bans and restrictions. What underlying issue do these approaches overlook, and how can it be addressed now?

Christ: I think these approaches sweep the larger issue of teaching students how to use technology appropriately under the rug. If students do not learn how to use AI and other technologies appropriately in safe school settings where mistakes are an expected part of the learning process, then they may make mistakes when learning how to use these technologies as adults in higher-stakes settings like the workplace. It is unreasonable to expect that students will not use these technologies in the future, so it is critical to teach them how to do so appropriately.  


Q: How can AI help students with learning disabilities? 

Christ: There are many ways AI could help students with learning disabilities. One way is that teachers can use it as a tool to support differentiation, or adapting instruction to individual student needs. For example, teachers could use AI to quickly convert an assignment into a different format for a student with a learning disability or to generate practice problems for students with learning disabilities that are aligned with their current readiness levels.  

Students with learning disabilities can also use AI on their own to support their learning. Some examples include voice chatting with AI to learn class concepts, using AI to read a document aloud, using AI to break down complex concepts into manageable chunks, using AI to generate practice problems, or using AI to convert learning materials into a different modality like a podcast or visual diagram.  


Q: How can schools and educators safely and effectively implement AI into the classroom without compromising students’ ability to be creative?

Christ: The best way for schools and educators to safely and effectively implement AI into the classroom is to find ways to use it as a tool to supplement rather than circumvent the learning process. 

  • One example is to have students use AI tools such as Khanmigo or ChatGPT Study Mode, which are designed to support learning by giving targeted hints or guidance when students are solving problems rather than giving them the answer. Such tools can support teachers by providing individualized instruction and tutoring at a scale unachievable by a single teacher alone.
  • Another example would be using AI to automatically customize practice problems to students' interests and current ability levels, which is known to support learning outcomes. 
  • A third example would be to use AI as a tool to foster creativity itself by having students use it to learn more about topics they are interested in while practicing skills like reading comprehension. For example, teachers (or students) could use AI to generate reading passages and associated comprehension questions or learning activities about topics students care about, like space or sports. In these and other ways, we can empower teachers and students to use AI to support creativity and learning rather than circumvent it.

Q: How can schools reduce confusion when policies on generative AI vary from class to class?

Christ: I think the most important thing is for schools to be very clear with students about their expectations around AI to minimize the chances that students use AI in a way that would be deemed inappropriate or cheating. While school-wide policies can be helpful, it can be hard to craft one policy that works for every class and learning situation. I think a better approach is to define what responsible use of AI looks like at a high level for the whole school and then let teachers decide what that looks like in their individual classrooms. 

For example, a school could decide that responsible use of AI means students do not use it to produce first drafts of their work, while giving individual teachers latitude to decide whether AI may be used to help students refine their ideas, based on the specific learning goals of each assignment. Individual class policies can be confusing for students at times, but teachers can minimize this confusion by being very clear about appropriate uses of AI for each learning activity.


Q: How do you think we can eliminate the stigma around AI in education while still addressing ethical and environmental concerns?  

Christ: I think the best way to eliminate stigma around AI in education is to provide real-life examples of effective use cases. Teachers and schools are correct to be concerned that AI could be used to circumvent the learning process and plagiarize content but should also be shown how it can support learning outcomes. One way to do this is by conducting research into effective uses of AI in education, which is a burgeoning area of research. For example, in a recent study we released, we found that while students performed similarly on AI-generated and human-written math problems, they consistently preferred AI-generated problems that were customized to their interests, directly demonstrating a practical way teachers could use AI to support learning.  

It is also important to be clear with students that AI tools, like all technology, have real ethical and environmental implications. For example, teachers could explain how AI can facilitate plagiarism when its output is used without citing sources or acknowledging the copyrighted data the models were trained on. Teachers could also teach students about the environmental impact of the technology in terms of electricity and water consumption.


Q: On a larger scale, students aren’t allowed to be involved in decision making around generative AI that directly impacts their educational experience. How can they get involved in their schools or universities and make sure their voices are heard?

Christ: While teachers and schools may make decisions around generative AI without consulting students, that doesn't necessarily mean they aren't open to student feedback or to learning how these policies affect students. Often, educators make policy decisions without students because it is more convenient, or because they assume students aren't interested in helping shape these policies. Educators are generally well-intentioned and open to student feedback. I would recommend that students who want to help shape generative AI policies talk to each other and to their teachers. Often, all it takes is a few interested students for schools to open lines of communication between decision makers and students.


Additional resources on involving students in AI policy and decision-making in academic settings:

Students in the Driver’s Seat: Establishing Collaborative Cultures of Technology Governance at Universities (p. 53) by Celia Calhoun, Ella Duus, Desiree Ho, Owen Kitzmann, and Mona Sloane

Biden’s AI executive order underlines need for student technology councils by UVA Associate Professor of Data Science and Media Studies Mona Sloane
