Image: A screenshot of the type of code students worked on during the study. Credit: University of California San Diego
How much do undergraduate computer science students trust chatbots powered by large language models, like GitHub Copilot and ChatGPT? And how should computer science educators modify their teaching based on these levels of trust?
These were the questions that a group of U.S. computer scientists set out to answer in a study that will be presented at the Koli Calling conference, Nov. 11 to 16 in Finland. Over the few weeks of the study, researchers found that a majority of students' trust in generative AI tools increased in the short run. In the long run, however, students said they realized they needed to be competent programmers even without the help of AI tools, because these tools would often generate incorrect code or would not help with code comprehension tasks.
The study was motivated by the dramatic change in the skills required of undergraduate computer science students since the advent of generative AI tools that can create code from scratch. “Computer science and programming is changing immensely,” said Gerald Soosairaj, one of the paper’s senior authors and an associate teaching professor in the Department of Computer Science and Engineering at the University of California San Diego.
Today, students are tempted to rely too heavily on chatbots to generate code, and as a result might not learn the basics of programming, researchers said. These tools also might generate code that is incorrect or vulnerable to cybersecurity attacks. Conversely, students who refuse to use chatbots miss out on the opportunity to program faster and be more productive. But once they graduate, computer science students will most likely use generative AI tools in their day-to-day work, and will need to be able to do so effectively. This means they will still need a solid understanding of the fundamentals of computing and how programs work, so they can evaluate the AI-generated code they will be working with, researchers said.
“We found that student trust, on average, increased as they used GitHub Copilot throughout the study. But after completing the second part of the study, a more elaborate project, students felt that using Copilot to its full extent requires a competent programmer who can complete some tasks manually,” said Soosairaj.
The study surveyed 71 junior and senior computer science students, half of whom had never used GitHub Copilot. After an 80-minute class in which researchers explained how GitHub Copilot works and had students use the tool, half of the students said their trust in the tool had increased, while about 17% said it had decreased. Students then took part in a 10-day project in which they used GitHub Copilot throughout to add a small piece of new functionality to a large open-source codebase. At the end of the project, about 39% of students said their trust in Copilot had increased, about 37% said their trust had decreased somewhat, and about 24% said it had not changed.
The results of this study have important implications for how computer science educators should approach the introduction of AI assistants in introductory and advanced courses. Researchers make a series of recommendations for computer science educators in an undergraduate setting.
- To help students calibrate their trust in and expectations of AI assistants, computer science educators should provide opportunities for students to use AI programming assistants on tasks with a range of difficulty, including tasks within large codebases.
- To help students determine how much they can trust AI assistants’ output, computer science educators should ensure that students can still comprehend, modify, debug, and test code in large codebases without AI assistants.
- Computer science educators should ensure that students are aware of how AI assistants generate output via natural language processing so that students understand the assistants’ expected behavior.
- Computer science educators should explicitly introduce and demonstrate key features of AI assistants that are useful for contributing to a large codebase, such as adding files as context while using the ‘explain code’ feature and using keywords such as “/explain”, “/fix”, and “/docs” in GitHub Copilot (a brief illustration follows below).
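To make that last recommendation concrete, here is a minimal sketch of the kind of exchange a student might have with GitHub Copilot Chat. The file and function below are hypothetical, invented for illustration; the slash commands in the comments are the ones named in the researchers’ recommendation.

    # stats_utils.py -- hypothetical example, not taken from the study's codebase.
    # In an editor with GitHub Copilot Chat, a student could select this function
    # and type a slash command in the chat panel, for example:
    #   /explain  -- ask for a plain-language walkthrough of the code
    #   /fix      -- ask for a suggested correction to a suspected bug
    #   /docs     -- the documentation-related keyword cited in the recommendations

    def mean_of_positives(values: list[float]) -> float:
        """Return the mean of the strictly positive numbers in `values`."""
        positives = [v for v in values if v > 0]
        # A defect that /explain or /fix should surface: if `positives` is empty,
        # this raises ZeroDivisionError instead of handling that case explicitly.
        return sum(positives) / len(positives)

    if __name__ == "__main__":
        print(mean_of_positives([2.0, -1.0, 4.0]))  # prints 3.0

The point of such a demonstration is not the bug itself but the habit it builds: students check what the assistant proposes against code they can already read, which is exactly the trust calibration the recommendations describe.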
 
“CS educators should be mindful that how we present and discuss AI assistants can impact how students perceive such assistants,” the researchers write.
The researchers plan to repeat their experiment and survey with a larger pool of 200 students this winter quarter.
Evolution of Programmers’ Trust in Generative AI Programming Assistants
Anshul Shah, Elena Tomson, Leo Porter, William G. Griswold, and Adalbert Gerald Soosai Raj (Department of Computer Science and Engineering, University of California San Diego) and Thomas Rexin (North Carolina State University)
 
Method of Research
Experimental study
Subject of Research
People
Article Title
Evolution of Programmers’ Trust in Generative AI Programming Assistants
Article Publication Date
11-Nov-2025