‘The height of self-sabotage’: Computer science professors bash AI use
Ashley Liu, Contributing Illustrator
ChatGPT launched less than three years ago. According to computer science professors, their field hasn’t been the same since.
The grade weights of problem sets in at least two courses have dropped to zero. The weights of exams have jumped to as high as 90 percent. Some classes are even introducing graded in-person meetings. But the impacts go beyond grades.
“From my perspective as a former student, I think that using AI to complete assignments in university courses is the height of self-sabotage,” computer science professor Alan Weide wrote in an email to the News, explaining that using AI hinders learning problem-solving skills. “From my perspective as a teacher, I think it has increased dramatically the skepticism with which I view submissions by students to asynchronous assignments, which is not a good feeling.”
Theodore Kim, the director of undergraduate studies for the department, wrote in an email to the News that instructors have reported that students have started showing a diminished understanding of foundational computer science since the advent of ChatGPT.
Six professors told the News that the use of AI hinders the learning of foundational, if not all, concepts in computer science. Two pointed to the virtues of training oneself to learn a new craft –– even if that process involves making mistakes and not writing the best code from the beginning.
“If you want to learn to play the piano, it doesn’t matter if that piece is played much better on Spotify,” professor Xiuye Chen said in a phone interview with the News. “If you want to learn it, you got to sit down and do it yourself. So those are decisions you have to make for yourself.”
Professor Michael Shah compared the process of learning to code to training for long-distance running, where there is no substitute for putting on one’s shoes and “just running some miles.” Students should use their undergraduate years “greedily” to build critical thinking abilities and use AI with caution, he wrote in an email to the News.
Shah added that the fundamental “thinking” is what makes programming interesting, not “generating lots of code as fast as possible.”
Weide wrote that he has yet to see an application of generative AI that he would not rather do himself.
“Thinking about stuff is fun,” he wrote. “Writing programs is fun. Fixing broken programs is fun. Making good lectures is fun. Reading books, blogs, and documentation to learn new things is fun. Why would I outsource any of this?”
Shah also wrote that approaching a professor, undergraduate learning assistant or peer with a question and having a conversation can bring valuable social and technical skills, which he described as “one of the major advantages” of being at a university.
Curriculum changes
Weide wrote that he has restructured the grading system of his class “Data Science and Programming Techniques” so that only in-person assignments have a direct impact on students’ grades. Ninety percent of the grade is based on in-person exams, while 10 percent is based on completing weekly in-class lab assignments.
“When you’re taking an in-class exam, you don’t have access to ChatGPT or your phone or Google,” professor Stephen Slade said in a Zoom interview with the News. “It has to be between your ears.”
According to Weide, while he has not completely banned AI use in his other class, “Full Stack Web Programming,” he has reduced the weight of multi-week programming assignments from 55 percent to 13 percent. A significant portion of the group project, which accounts for 55 percent of the grade, now includes in-person meetings with the group advisor, he added.
In professor Ozan Erat’s class, too, exams account for 90 percent of the grade –– and problem sets, zero percent.
“Our idea here is to disincentivize the use of AI on these assignments because simply completing them will not impact a grade; that is, the only way for a student to benefit from the assignment is to actually do the work and understand the problem,” Weide wrote.
Computer science professor Rex Ying wrote in an email to the News that while he allows AI use for homework, he now designs his homework such that an AI model is not always correct, and he penalizes “heavily” for irresponsible use, including copy-and-pasting without checking for accuracy. That process has made writing homework questions more challenging, he added.
The computer science department lacks an overarching policy on AI use, so professors rely on discretion to create AI policies for each course. This “wide latitude” is in line with Yale College policies, Kim wrote, adding that he was consulted on and provided feedback for the Poorvu Center for Teaching and Learning’s AI guidelines.
“It depends on the class, depends on the students,” Slade said. “I don’t think I would be comfortable telling my colleagues whether they can or can’t use AI. I think they’re all knowledgeable, smart people, and they should be able to figure out what works for them.”
Constructive AI use
Chen teaches “Introduction to AI Applications,” a course in which she said she is “really trying to push the boundaries” with AI use.
“We expect students to learn how to use AI to help you learn about AI,” she said. “I really do ask them to ask your ChatGPT or whatever other AI system you choose lots and lots of questions, and I feel like it has been productive in general.”
According to Chen, she allows AI use for coding, since the goal of the class is not to teach coding and introductory coding is a prerequisite for the course, but she does not allow AI use for “conceptual questions.”
“I fully expect, for the portions that are like a collaboration with AI, that you’re really using it like working with a coworker,” she said. “Then I have places where I ask for personal learning reflections, where it would just be lame to use AI. I’m only just asking how you feel about it.”
Three professors told the News that AI can be a helpful tool in reviewing for exams.
When students asked Slade for sample true-or-false questions for an upcoming midterm on binary coding, he said he simply told them to ask ChatGPT to produce sample questions based on his document on binary coding.
When he asked Gemini, Google’s AI assistant, to produce 100 true-or-false questions afterwards, “they all looked pretty good,” he said.
“It’s as if you had, I don’t know, a smart older sister who could help you study for a test,” he said. “My hope is that the students would use whatever is at their disposal to prepare them for the exam so that they can achieve mastery of the material.”
The computer science department was founded in 1969.
Correction, Oct. 3: A previous version of this article misstated a professor’s name. She is Xiuye Chen, not Xuyen.
