AI Cheating Is Getting Worse
Aug 19, 2024
Kyle Jensen, the director of Arizona State University’s writing programs, is gearing up for the fall semester. The responsibility is enormous: Each year, 23,000 students take writing courses under his oversight. The teachers’ work is even harder today than it was a few years ago, thanks to AI tools that can generate competent college papers in a matter of seconds.
A mere week after ChatGPT appeared in November 2022, The Atlantic declared that “The College Essay Is Dead.” Two school years later, Jensen is done with mourning and ready to move on. The tall, affable English professor co-runs a National Endowment for the Humanities–funded project on generative-AI literacy for humanities instructors, and he has been incorporating large language models into ASU’s English courses. Jensen is one of a new breed of faculty who want to embrace generative AI even as they also seek to control its temptations. He believes strongly in the value of traditional writing but also in the potential of AI to facilitate education in a new way—in ASU’s case, one that improves access to higher education.
But his vision must overcome a stark reality on college campuses. The first year of AI college ended in ruin, as students tested the technology’s limits and faculty were caught off guard. Cheating was widespread. Tools for identifying computer-written essays proved insufficient to the task. Academic-integrity boards realized they couldn’t fairly adjudicate uncertain cases: Students who used AI for legitimate reasons, or even just consulted grammar-checking software, were being labeled as cheats. So faculty asked their students not to use AI, or at least to say so when they did, and hoped that might be enough. It wasn’t.