As generative artificial intelligence tools like ChatGPT, Claude, and Gemini become increasingly accessible, college campuses across the country are grappling with a new academic dilemma: What happens when students turn to AI to write their essays?
It’s a salient question, but I didn’t write it. ChatGPT did, which is why the opening paragraph is bland and uninspiring.
I asked the program to “write a news article about what college professors think of students who use generative artificial intelligence to write their essays,” and out spat 645 mostly grammatically correct words, organized under subheads such as “A Divided Faculty,” because generative AI apparently has a need to prove some faculty defend it; “Detecting AI Use: A Growing Arms Race,” because AI apparently wants you to feel sorry for it because it’s under attack; “Students Say AI is a Tool—Not a Substitute,” because AI wants to make you feel better about resorting to it instead of your own intellect and hard work; and “Reimagining the Role of Writing in Education,” to drive home the point that AI is a genie out of its bottle and there’s no putting it back.
The AI-generated news article went on to quote college professors who don’t exist, like “Dr. Lisa Chen,” who it claims is a professor in UCLA’s Philosophy Department: “We have to rethink what we’re asking students to do. If AI can easily generate an answer, maybe the question wasn’t challenging enough in the first place.”
See! It’s not students’ fault they’re lazing out and resorting to AI to complete their assignments. It’s teachers’ fault for not creating assignments that AI is incapable of completing.
The thing is, AI is incapable of writing acceptable college essays. It can’t unearth effective and compelling research from academically acceptable sources and properly cite them both in text and on a correctly formatted works cited page or bibliography. It can’t insightfully analyze the research it presents and cogently synthesize the data into a thoroughly developed thesis. Worse still, AI hallucination—when AI produces information not based on real data—is a well-established side effect, and when students turn in essays with fake sources, data, quotations, and evidence, that should mean an instant F … if the teacher catches it.
Courtney Brogno, a college lecturer who does exist and who teaches writing at both Cal Poly and Cuesta College, said, “Sometimes it’s really obvious. A couple paragraphs will sound like a student’s voice and then all of a sudden the essay sounds like a Ph.D. dissertation. Citations are different, huge words, different font. Those are really easy to catch.”
Brogno admits, however, that some probably aren’t easy to catch. One quarter, she paid out of her own pocket for an AI detection program.
“It took forever. I was spending more time on these essays looking [for evidence of cheating] than probably they had spent writing them,” Brogno said. “I just don’t have that time, not in a 10-week quarter, and not with six classes.”
Which brings up another point. Students who resort to AI will quite possibly go undetected because most college teachers are overworked to the point of letting it slide, reasoning that students who cheat are only cheating themselves out of educational experiences.
“I like to remind students that ‘essay’ comes from the verb ‘assay,’ meaning ‘to attempt or try,’” Cal Poly lecturer Lauren Henley explained. “We have since commodified the word into a noun, a product to be stamped with a letter grade.
“When students use AI to generate or refine, they are slinking past ‘the try,’ and what’s worse is that they may be receiving high praise from unwitting instructors—a double thievery. I’m not a scientist, but I can surmise that bypassing ‘the try’ over and over while the brain is still forming can’t be good for the development of empathy, reasoning, and resilience, which our students will surely need for a future wherein the very AI they’ve been relying upon may likely steal or augment their jobs.”
It also doesn’t help that few teachers believe their schools have a clearly articulated policy on AI use. Henley calls this current college AI experiment “the Wild West.”
“Did you know that Canvas [the student-teacher interface used at Cal Poly] has a ChatGPT EDU button?” English Department lecturer Leslie St. John asked. “[It says,] ‘Get answers, solve problems, generate content instantly.’ ‘Generate content’ … boy, that leaves a bad taste in my mouth. And ‘instantly’? There’s nothing ‘instant’ about the writing [and] creative process.”
Cal Poly has a webpage devoted to artificial intelligence, but it’s more about the technology’s uses than about the ethical implications of using it.
“My thought is it’s cheating, obviously,” Brogno said, “but then we’re confused by Cal Poly, because Cal Poly has [ChatGPT accounts] for free for students. Are we saying this is OK now? What is our policy? Nobody seems to know. Nobody knows.”
“I’m pretty old-school,” St. John admitted. “I want students to learn to read, write, and think for themselves—not outsource their creativity and intelligence to ChatGPT. Right now, I’m urging them not to use it, but I recognize how normalized it’s become already.”
Brogno can’t afford to play AI sleuth with every paper.
“When an essay’s turned in, I’m going to grade this as it is,” Brogno said. “I don’t have the time or energy to be a detective.”
She notes that a lot of sororities and fraternities have historically kept files of previously turned-in essays that members recycle over the years and submit as original work: “Maybe it’s no different than that. And those kids probably got away with it, too. I’ve just decided I’m not going to spend an hour on an essay trying to prove my point.”
The implications of this kind of cheating are vast. The kinds of classes Brogno, Henley, and St. John teach are designed to train students in critical thinking, research, and argumentation—essential skills for everyone. If students don’t gain these experiences and learn to think critically, recognize logical fallacies, and analyze arguments, they’ll be more susceptible to the disinformation inundating media.
“I think it’s terrible,” Brogno said. “I think it takes away critical thinking and independent thought. It’s everything that’s wrong with America.”
As an assignment, she asks her students to use ChatGPT to generate a works cited page on a specific topic, formatted in MLA (Modern Language Association) style, and bring it to class.
“They do it, and then they come in the next day, and I say, ‘I want you to check all these citations,’” Brogno said. “Eighty percent of them don’t exist. And the citations are wrong. Then they go looking for the article. They can’t find it. And I say, ‘Well, if you can’t find it, how am I supposed to find it to check up on you?’”
Like college teachers, college students tend to be overworked and overwhelmed, and cheating—or taking “shortcuts” if it makes you feel better—seems inevitable.
“There are so many reasons why a young person without a fully formed frontal lobe would be drawn to shortcuts,” Henley said. “I think that in part we can blame a steady lowering of academic expectations over the past decade—if it’s uncomfortable or hard, you don’t have to do it—and concomitant grade inflation.”
Yet all these college teachers remain committed to their vocation.
“I’m going to do my job the best I can and try to make students better writers,” Brogno said.
The question is: Will students hold up their end of the bargain? ∆
Contact Arts Editor Glen Starkey at gstarkey@newtimesslo.com.
This article appears in the Sept. 11-21, 2025, issue.