Education
Teachers Are Not OK
Last month, I wrote an article about how schools were not prepared for ChatGPT and other generative AI tools, based on thousands of pages of public records I obtained from when ChatGPT was first released. As part of that article, I asked teachers to tell me how AI has changed how they teach.
The response from teachers and university professors was overwhelming. In my entire career, I’ve rarely gotten so many email responses to a single article, and I have never gotten so many thoughtful and comprehensive responses.
One thing is clear: teachers are not OK.
They describe trying to grade “hybrid essays half written by students and half written by robots,” trying to teach Spanish to kids who don’t know the meaning of the words they’re trying to teach them in English, and students who use AI in the middle of conversation. They describe spending hours grading papers that took their students seconds to generate: “I’ve been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student,” one teacher told me. “That sure feels like bullshit.”
Have you lost your job to an AI? Has AI radically changed how you work (whether you’re a teacher or not)? I would love to hear from you. Using a non-work device, you can message me securely on Signal at jason.404. Otherwise, send me an email at jason@404media.co.
Below, I have compiled some of the responses I got. Some of the teachers were comfortable with their responses being used on the record along with their names. Others asked that I keep them anonymous because their school or school district forbids them from speaking to the press. The responses have been edited by 404 Media for length and clarity, but they are still really long. These are teachers, after all.
Robert W. Gehl, Ontario Research Chair of Digital Governance for Social Justice at York University in Toronto
Simply put, AI tools are ubiquitous. I am on academic honesty committees and the number of cases where students have admitted to using these tools to cheat on their work has exploded.
I think generative AI is incredibly destructive to our teaching of university students. We ask them to read, reflect upon, write about, and discuss ideas. That’s all in service of our goal to help train them to be critical citizens. GenAI can simulate all of the steps: it can summarize readings, pull out key concepts, draft text, and even generate ideas for discussion. But that would be like going to the gym and asking a robot to lift weights for you.
“Honestly, if we ejected all the genAI tools into the sun, I would be quite pleased.”
We need to rethink higher ed, grading, the whole thing. I think part of the problem is that we’ve been inconsistent in rules about genAI use. Some profs ban it altogether, while others attempt to carve out acceptable uses. The problem is the line between acceptable and unacceptable use. For example, some profs say students can use genAI for “idea generation” but then prohibit using it for writing text. Where’s the line between those? In addition, universities are contracting with companies like Microsoft, Adobe, and Google for digital services, and those companies are constantly pushing their AI tools. So a student might hear “don’t use generative AI” from a prof but then log on to the university’s Microsoft suite, which then suggests using Copilot to sum up readings or help draft writing. It’s inconsistent and confusing.
I’ve been working on ways to increase the amount of in-class discussion we do in classes. But that’s tricky because it’s hard to grade in-class discussions—it’s much easier to manage digital files. Another option would be to do hand-written in-class essays, but I have a hard time asking that of students. I hardly write by hand anymore, so why would I demand they do so?
I am sick to my stomach as I write this because I’ve spent 20 years developing a pedagogy that’s about wrestling with big ideas through writing and discussion, and that whole project has been evaporated by for-profit corporations who built their systems on stolen work. It’s demoralizing.
It has made my job much, much harder. I do not allow genAI in my classes. However, because genAI is so good at producing plausible-sounding text, that ban puts me in a really awkward spot. If I want to enforce my ban, I would have to do hours of detective work (since there are no reliable ways to detect genAI use), call students into my office to confront them, fill out paperwork, and attend many disciplinary hearings. All of that work is done to ferret out cheating students, so we have less time to spend helping honest ones who are there to learn and grow. And I would only be able to find a small percentage of the cases, anyway.
Honestly, if we ejected all the genAI tools into the sun, I would be quite pleased.
Kaci Juge, high school English teacher
I personally haven’t incorporated AI into my teaching yet. It has, however, added some stress to my workload as an English teacher. How do I remain ethical in creating policies? How do I begin to teach students how to use AI ethically? How do I even use it myself ethically considering the consequences of the energy it apparently takes? I understand that I absolutely have to come to terms with using it in order to remain sane in my profession at this point.
Ben Prytherch, Statistics professor
LLM use is rampant, but I don’t think it’s ubiquitous. While I can never know with certainty if someone used AI, it’s pretty easy to tell when they didn’t, unless they’re devious enough to intentionally add in grammatical and spelling errors or awkward phrasings. There are plenty of students who don’t use it, and plenty who do.
LLMs have changed how I give assignments, but I haven’t adapted as quickly as I’d like and I know some students are able to cheat. The most obvious change is that I’ve moved to in-class writing for assignments that are strictly writing-based. Now the essays are written in-class, and treated like mid-term exams. My quizzes are also in-class. This requires more grading work, but I’m glad I did it, and a bit embarrassed that it took ChatGPT to force me into what I now consider a positive change. Reasons I consider it positive:
- I am much more motivated to write detailed personal feedback for students when I know with certainty that I’m responding to something they wrote themselves.
- It turns out most of them can write after all. For all the talk about how kids can’t write anymore, I don’t see it. This is totally subjective on my part, of course. But I’ve been pleasantly surprised with the quality of what they write in-class.
Switching to in-class writing has got me contemplating giving oral examinations, something I’ve never done. It would be a big step, but likely a positive and humanizing one.
There’s also the problem of academic integrity and fairness. I don’t want students who don’t use LLMs to be placed at a disadvantage. And I don’t want to give good grades to students who are doing effectively nothing. LLM use is difficult to police.
Lastly, I have no patience for the whole “AI is the future so you must incorporate it into your classroom” push, even when it’s not coming from self-interested people in tech. No one knows what “the future” holds, and even if it were a good idea to teach students how to incorporate AI into this-or-that, by what measure are we teachers qualified?
Kate Conroy
I teach 12th grade English, AP Language & Composition, and Journalism in a public high school in West Philadelphia. I was appalled at the beginning of this school year to find out that I had to complete an online training that encouraged the use of AI for teachers and students. I know of teachers at my school who use AI to write their lesson plans and give feedback on student work. I also know many teachers who either cannot recognize when a student has used AI to write an essay or don’t care enough to argue with the kids who do it. Around this time last year I began editing all my essay rubrics to include a line that says all essays must show evidence of drafting and editing in the Google Doc’s history, and any essays that appear all at once in the history will not be graded.
I refuse to use AI on principle except for one time last year when I wanted to test it, to see what it could and could not do so that I could structure my prompts to thwart it. I learned that at least as of this time last year, on questions of literary analysis, ChatGPT will make up quotes that sound like they go with the themes of the books, and it can’t get page numbers correct. Luckily I have taught the same books for many years in a row and can instantly identify an incorrect quote and an incorrect page number. There’s something a little bit satisfying about handing a student back their essay and saying, “I can’t find this quote in the book, can you find it for me?” Meanwhile I know perfectly well they cannot.
I teach 18-year-olds who range in reading levels from preschool to college, but the majority of them are in the lower half of that range. I am devastated by what AI and social media have done to them. My kids don’t think anymore. They don’t have interests. Literally, when I ask them what they’re interested in, so many of them can’t name anything for me. Even my smartest kids insist that ChatGPT is good “when used correctly.” I ask them, “How does one use it correctly then?” They can’t answer the question. They don’t have original thoughts. They just parrot back what they’ve heard in TikToks. They try to show me “information” ChatGPT gave them. I ask them, “How do you know this is true?” They move their phone closer to me for emphasis, exclaiming, “Look, it says it right here!” They cannot understand what I am asking them. It breaks my heart for them and honestly it makes it hard to continue teaching. If I were to quit, it would be because of how technology has stunted kids and how hard it’s become to reach them because of that.
I am only 30 years old. I have a long road ahead of me to retirement. But it is so hard to ask kids to learn, read, and write, when so many adults are no longer doing the work it takes to ensure they are really learning, reading, and writing. And I get it. That work has suddenly become so challenging. It’s really not fair to us. But if we’re not willing to do it, we shouldn’t be in the classroom.
Jeffrey Fisher
The biggest thing for us is the teaching of writing itself, never mind even the content. And really the only way to be sure that your students are learning anything about writing is to have them write in class. But then what to do about longer-form writing, like research papers, for example, or even just analytical/exegetical papers that put multiple primary sources into conversation and read them together? I’ve started watching for the voices of my students in their in-class writing and trying to pay attention to gaps between that voice and the voice in their out-of-class writing, but when I’ve got 100 to 130 or 140 students (including a fully online asynchronous class), that’s just not really reliable. And for the online asynch class, it’s just impossible because there’s no way of doing old-school, low-tech, in-class writing at all.
“I’ve been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student. That sure feels like bullshit.”
You may be familiar with David Graeber’s article-turned-book on Bullshit Jobs. There is a recent paper looking specifically at bullshit jobs in academia. No surprise, the people who see their jobs as bullshit jobs are mostly administrators. The people who overwhelmingly do NOT see their jobs as bullshit jobs are faculty.
But that is what I see AI in general and LLMs in particular as changing. The situations I’m describing above are exactly the things that turn what is so meaningful to us as teachers into bullshit. The more we think that we are unable to actually teach them, the less meaningful our jobs are.
I’ve been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student. That sure feels like bullshit. I’m going through the motions of teaching. I’m putting a lot of time and emotional effort into it, as well as the intellectual effort, and it’s getting flushed into the void.
Post-grad educator
Last year, I taught a class as part of a doctoral program in responsible AI development and use. I don’t want to share too many specifics, but the course goal was for students to think critically about the adverse impacts of AI on people who are already marginalized and discriminated against.
When the final projects came in, my co-instructor and I were underwhelmed, to say the least. When I started digging into the projects, I realized that the students had used AI in some incredibly irresponsible ways—shallow, misleading, and inaccurate analysis of data, pointless and meaningless visualizations. The real kicker, though, was that we got two projects where the students had submitted a “podcast.” What they had done, apparently, was give their paper (which already had extremely flawed AI-based data analysis) to a gen AI tool and ask it to create an audio podcast. And the results were predictably awful: full of random, meaningless vocalizations at bizarre times; the “female” character was incredibly dumb and vapid (she sounded like the “manic pixie dream girl” trope from those awful movies); and the “analysis” in the podcast exacerbated the problems that were already in the paper, so it was even more wrong than the paper itself.
In short, there is nothing particularly surprising in how badly the AI worked here—but these students were in a *doctoral* program on *responsible AI*. In my career as a teacher, I’m hard pressed to think of more blatantly irresponsible work by students.
Nathan Schmidt, University Lecturer, managing editor at Gamers With Glasses
When ChatGPT first entered the scene, I honestly did not think it was that big of a deal. I saw some plagiarism; it was easy to catch. Its voice was stilted and obtuse, and it avoided making any specific critical judgments as if it were speaking on behalf of some cult of ambiguity. Students didn’t really understand what it did or how to use it, and when the occasional cheating would happen, it was usually just a sign that the student needed some extra help that they were too exhausted or embarrassed to ask for, so we’d have that conversation and move on.
I think it is the responsibility of academics to maintain an open mind about new technologies and to react to them in an evidence-based way, driven by intellectual curiosity. I was, indeed, curious about ChatGPT, and I played with it myself a few times, even using it on the projector in class to help students think about the limits and affordances of such a technology. I had a couple semesters where I thought, “Let’s just do this above board.” Borrowing an idea from one of my fellow instructors, I gave students instructions for how I wanted them to acknowledge the use of ChatGPT or other predictive text models in their work, and I also made it clear that I expected them to articulate both where they had used it and, more importantly, the reason why they found this to be a useful tool. I thought this might provoke some useful, critical conversation. I also took a self-directed course provided by my university that encouraged a similar curiosity, inviting instructors to view predictive text as a tool that had both problematic and beneficial uses.
“ChatGPT isn’t its own, unique problem. It’s a symptom of a totalizing cultural paradigm in which passive consumption and regurgitation of content becomes the status quo”
However, this approach quickly became frustrating, for two reasons. First, because even with the acknowledgments pages, I started getting hybrid essays that sounded like they were half written by students and half written by robots, which made every grading comment a miniature Turing test. I didn’t know when to praise students, because I didn’t want to write feedback like, “I love how thoughtfully you’ve worded this,” only to be putting my stamp of approval on predictively generated text. What if the majority of the things that I responded to positively were things that had actually been generated by ChatGPT? How would that make a student feel about their personal writing competencies? What lesson would that implicitly reinforce about how to use this tool? The other problem was that students were utterly unprepared to think about their usage of this tool in a critically engaged way. Despite my clear instructions and expectation-setting, most students used their acknowledgments pages to make the vaguest possible statements, like, “Used ChatGPT for ideas” or “ChatGPT fixed grammar” (comments like these also always conflated grammar with vocabulary and tone). I think there was a strong element of selection bias here, because the students who didn’t feel like they needed to use ChatGPT were also the students who would have been most prepared to articulate their reasons for usage with the degree of specificity I was looking for.
This brings us to last semester, when I said, “Okay, if you must use ChatGPT, you can use it for brainstorming and outlining, but if you turn something in that actually includes text that was generated predictively, I’m sending it back to you.” This went a little bit better. For most students, the writing started to sound human again, but I suspect this is more because students are unlikely to outline their essays in the first place, not because they were putting the tool to the allowable use I had designated.
ChatGPT isn’t its own, unique problem. It’s a symptom of a totalizing cultural paradigm in which passive consumption and regurgitation of content becomes the status quo. It’s a symptom of the world of TikTok and Instagram and perfecting your algorithm, in which some people are professionally deemed the ‘content creators,’ casting everyone else into the creatively bereft role of the content “consumer.” And if that paradigm wins, as it certainly appears to be doing, pretty much everything that has been meaningful about human culture will be undone, in relatively short order. So that’s the long story about how I adopted an absolute zero tolerance policy on any use of ChatGPT or any similar tool in my course, working my way down the funnel of progressive acceptance to outright conservative, Luddite rejection.
John Dowd
I’m in higher edu, and LLMs have absolutely blown up what I try to accomplish with my teaching (I’m in the humanities and social sciences).
Given the widespread use of LLMs by college students, I now have an ongoing and seemingly unresolvable tension, which is how to evaluate student work. Often I can spot when students have used the technology, both because I have thousands of samples of student writing accumulated over time and because I cross-reference my judgment with one or more AI-detection tools. I know those detection tools are unreliable, but depending on the confidence level they return, they can help confirm a suspicion. This creates an atmosphere of mistrust that is destructive to the instructor/student relationship.
“LLMs have absolutely blown up what I try to accomplish with my teaching”
I try to appeal to students and explain that by offloading the work of thinking to these technologies, they’re rapidly making themselves replaceable. Students (and I think even many faculty across academia) fancy themselves as “Big Idea” people. Everyone’s a “Big Idea” person now, or so they think. “They’re all my ideas,” people say. “I’m just using the technology to save time, organize them more quickly, bounce them back and forth,” etc. I think this is more plausible for people who have already put in the work and have the experience of articulating and understanding ideas. However, people who are still learning to think or problem-solve in more sophisticated and creative ways will be poor evaluators of information and less likely to produce relevant and credible versions of it.
I don’t want to be overly dramatic, but AI has negatively complicated my work life so much. I’ve opted to attempt to understand it, but to not use it for my work. I’m too concerned about being seduced by its convenience and believability (despite knowing its propensity for making shit up). Students are using the technology in ways we’d expect, to complete work, take tests, seek information (scary), etc. Some of this use occurs in violation of course policy, while some is used with the consent of the instructor. Students are also, I’m sure, using it in ways I can’t even imagine at the moment.
Sorry, bit of a rant, I’m just so preoccupied and vexed by the irresponsible manner in which the tech bros threw all of this at us with no concern, consent, or collaboration.
High school Spanish teacher, Oklahoma
I am a high school Spanish teacher in Oklahoma and kids here have shocked me with the ways they try to use AI for assignments I give them. In several cases I have caught them because they can’t read what they submit to me and so don’t know to delete the sentence that says something to the effect of “This summary meets the requirements of the prompt, I hope it is helpful to you!”
“Even my brightest students often don’t know the English word that is the direct translation for the Spanish they are supposed to be learning”
Some of my students openly talk about using AI for all their assignments, and I agree with those who say the technology—along with gaps in their education due to the long-term effects of COVID—has gotten us to a point where a lot of young Gen Z and Gen Alpha are functionally illiterate. I have been shocked at their lack of vocabulary and reading comprehension skills even in English. When I teach cognates, even my brightest students often don’t know the English word that is the direct translation for the Spanish they are supposed to be learning. Trying to determine if and how a student used AI to cheat has wasted countless hours of my time this year, even in my class where there are relatively few opportunities to use it because I do so much on paper (and they hate me for it!).
A lot of teachers have had to throw out entire assessment methods to try to create assignments that are not cheatable, which, at least for me, always involves huge amounts of labor.
It keeps me up at night and gives me existential dread about my profession but it’s so critical to address!!!
Education
Educators lack clarity on how to deal with AI in classrooms
An artificial intelligence furore that’s consuming Singapore’s academic community reveals how we’ve lost the plot over the role the hyped-up technology should play in higher education.
A student at Nanyang Technological University said in a Reddit post that she used a digital tool to alphabetize her citations for a term paper. When the paper was flagged for typos, she was accused of breaking the rules on the use of generative AI for the assignment. The dispute snowballed when two more students came forward with similar complaints, one alleging that she was penalized for using ChatGPT to help with initial research, even though she says she did not use the bot to draft the essay.
The school, which publicly states it embraces AI for learning, initially defended its zero-tolerance stance in this case in statements to local media. But internet users rallied around the original Reddit poster and rejoiced at an update that she won an appeal to rid her transcript of the ‘academic fraud’ label.
It may sound like a run-of-the-mill university dispute. But there’s a reason the saga went so viral, garnering thousands of upvotes and heated opinions from online commentators. It has laid bare the strange new world we’ve found ourselves in, as students and faculty are rushing to keep pace with how AI should or shouldn’t be used in universities.
It’s a global conundrum, but the debate has especially roiled Asia. Stereotypes of math nerds and tiger moms aside, a rigorous focus on tertiary studies is often credited for the region’s dramatic economic rise. The importance of education—and long hours of studying—is instilled from the earliest age. So how does this change in the AI era? The reality is that nobody has the answer yet.
Despite promises from ed-tech leaders that we’re on the cusp of ‘the biggest positive transformation that education has ever seen,’ the data on academic outcomes hasn’t kept pace with the technology’s adoption. There are no long-term studies on how AI tools impact learning and cognitive functions—and viral headlines that it could make us lazy and dumb only add to the anxiety. Meanwhile, the race to not be left behind in implementing the technology risks turning an entire generation of developing minds into guinea pigs.
For educators navigating this moment, the answer is not to turn a blind eye. Even if some teachers discourage the use of AI, it has become all but unavoidable for many scholars doing research in the internet age.
Most Google searches now lead with automated summaries. Scrolling through these should not count as academic dishonesty. An informal survey of 500 Singaporean students from secondary school through university conducted by a local news outlet this year found that 84% were using products like ChatGPT for homework on a weekly basis.
In China, many universities are turning to AI cheating detectors, even though the technology is imperfect. Some students report on social media that they have to dumb down their writing to pass these checks, or shell out cash for the detection tools themselves to make sure their papers clear them before submission.
It doesn’t have to be this way. The chaotic moment of transition has put a new onus on educators to adapt and to focus on the learning process as much as on the final results, Yeow Meng Chee, the provost and chief academic and innovation officer at the Singapore University of Technology and Design, tells me. This does not mean villainizing AI, but treating it as a tool and ensuring a student understands how they arrived at their final conclusion even if they used the technology. That process also helps ensure the AI outputs, which remain imperfect and prone to hallucinations (or typos), are checked and understood.
Ultimately, professors who make the biggest difference aren’t those who improve exam scores but who build trust, teach empathy and instil confidence in students to solve complex problems. The most important parts of learning still can’t be optimized by a machine.
The Singapore saga shows how on edge everyone is; it isn’t even clear whether a reference-sorting website counts as a generative AI tool. It also exposed another irony: saving time on a tedious task would likely be welcomed when the student enters the workforce—if the technology hasn’t already taken her entry-level job.
AI literacy is becoming a must-have in the labour market, and universities that ignore it would do a disservice to the student cohorts entering the real world.
We’re still a few years away from understanding the full impact of AI on teaching and how it can best be used in higher education. But let’s not miss the forest for the trees as we figure it out. ©Bloomberg
The author is a Bloomberg Opinion columnist covering Asia tech.
Education
Cambridge Judge Business School Executive Education launches four-month Cambridge AI Leadership Programme — EdTech Innovation Hub
Launched in collaboration with Emeritus, a provider of short courses, degree programmes, professional certificates, and senior executive programmes, the Cambridge Judge Business School Executive Education course is now available for a September 2025 start.
The Cambridge AI Leadership Programme aims to help participants navigate the complexities of AI adoption, identify scalable opportunities, and build a strategic roadmap for successful implementation.
Using a blend of in-person and online learning, the course covers AI concepts, applications, and best practice to improve decision-making skills. It also covers digital transformation and ethical AI governance.
The programme is aimed at senior leaders looking to guide their organisations through transformation and integrate AI technologies.
“AI is a transformative force reshaping business strategy, decision-making and leadership. Senior executives must not only understand AI but also use it to drive business goals, efficiency and new revenue opportunities,” explains Professor David Stillwell, Co-Academic Programme Director.
“The Cambridge AI Leadership Programme offers a strategic road map, equipping leaders with the skills and mindset to integrate AI into their organisations and lead in an AI-driven world.”
“The Cambridge AI Leadership Programme empowers decision-makers to harness AI in ways that align with their organisation’s goals and prepare for the future,” says Vesselin Popov, Co-Academic Programme Director.
“Through a comprehensive learning experience, participants gain strategic insights and practical knowledge to drive transformation, strengthen decision-making and navigate technological shifts with confidence.”
Education
AI, Irreality and the Liberal Educational Project (opinion)
I work at Marquette University. As a Roman Catholic, Jesuit university, we’re called to be an academic community that, as Pope John Paul II wrote, “scrutinize[s] reality with the methods proper to each academic discipline.” That’s a tall order, and I remain in the academy, for all its problems, because I find that job description to be the best one on offer, particularly as we have the honor of practicing this scrutinizing along with ever-renewing groups of students.
This bedrock assumption of what a university is continues to give me hope for the liberal educational project, despite the ongoing neoliberalization of higher education and some administrators’ and educators’ willingness either to look the other way or to uncritically celebrate the explosion of generative software (commonly referred to as “generative artificial intelligence”) over the last two years.
In the time since my last essay in Inside Higher Ed, and as Marquette’s director of academic integrity, I’ve had plenty of time to think about this and to observe praxis. In contrast to the earlier essay, which was more philosophical, let’s get more practical here about how access to generative software is impacting higher education and our students and what we might do differently.
At the academic integrity office, we recently had a case in which a student “found an academic article” by prompting ChatGPT to find one for them. The chat bot obeyed, as mechanisms do, and generated a couple pages of text with a title. This was not from any actual example of academic writing but instead was a statistically probable string of text having no basis in the real world of knowledge and experience. The student made a short summary of that text and submitted it. They were, in the end, not found in violation of Marquette’s honor code, since what they submitted was not plagiarized. It was a complex situation to analyze and interpret, done by thoughtful people who care about the integrity of our academic community: The system works.
In some ways, though, such activity is more concerning than plagiarism, for, at least when students plagiarize, they tend to know the ways they are contravening social and professional codes of conduct—the formalizations of our principles of working together honestly. In this case, the student didn’t see the difference between a peer-reviewed essay published by an academic journal and a string of probabilistically generated text in a chat bot’s dialogue box. To not see the difference between these two things—or to not care about that difference—is more disconcerting and concerning to me than straightforward breaches of an honor code, however harmful and sad such breaches are.
I already hear folks saying: “That’s why we need AI literacy!” We do need to educate our students (and our colleagues) on what generative software is and is not. But that’s not enough. Because one also needs to want to understand and, as is central to the Ignatian Pedagogical Paradigm that we draw upon at Marquette, one must understand in context.
Another case this spring term involved a student whom I had spent several months last fall teaching in a writing course that took “critical AI” as its subject matter. Yet this spring term the student still used a chat bot to “find a quote in a YouTube video” for an assignment and then commented briefly on that quote. The problem was that the quote used in the assignment does not appear in the selected video. It was a simulacrum of a quote; it was a string of probabilistically generated text, which is all generative software can produce. It did not accurately reflect reality, and the student did not cite the chat bot they’d copied and pasted from, so they were found in violation of the honor code.
Another student last term in the Critical AI class prompted Microsoft Copilot to give them quotations from an essay, which it mechanically and probabilistically did. They proceeded to base their three-page argument on these quotations, none of which said anything like what the author in question actually said (not even the same topic); their argument was based in irreality. We cannot scrutinize reality together if we cannot see reality. And many of our students (and colleagues) are, at least at times, not seeing reality right now. They’re seeing probabilistic text as “good enough” as, or conflated with, reality.
Let me point more precisely to the problem I’m trying to put my finger on. The student who had a chat bot “find” a quote from a video sent an email to me, which I take to be completely in earnest and much of which I appreciated. They ended the email by letting me know that they still think that “AI” is a really powerful and helpful tool, especially as it “continues to improve.” The cognitive dissonance between the situation and the student’s assertion took me aback.
Again: the problem with the “We just need AI literacy” argument. People tend not to learn what they do not want to learn. If our students (and people generally) do not particularly want to do work, and they have been conditioned by the use of computing and their society’s habits to see computing as an intrinsic good, “AI” must be a powerful and helpful tool. It must be able to do all the things that all the rich and powerful people say it does. It must not need discipline or critical acumen to employ, because it will “supercharge” your productivity or give you “10x efficiency” (whatever that actually means). And if that’s the case, all these educators telling you not to offload your cognition must be behind the curve, or reactionaries. At the moment, we can teach at least some people all about “AI literacy” and it will not matter, because such knowledge refuses to jibe with the mythology concerning digital technology so pervasive in our society right now.
If we still believe in the value of humanistic, liberal education, we cannot be quiet about these larger social systems and problems that shape our pupils, our selves and our institutions. We cannot be quiet about these limits of vision and questioning. Because not only do universities exist for the scrutinizing of reality with the various methods of the disciplines as noted at the outset of this essay, but liberal education also assumes a view of the human person that does not see education as instrumental but as formative.
The long tradition of liberal education, for all its complicity in social stratification down the centuries, assumes that our highest calling is not to make money, to live in comfort, to be entertained. (All three are all right in their place, though we must be aware of how our moneymaking, comfort and entertainment derive from the exploitation of the most vulnerable humans and the other creatures with whom we share the earth, and how they impact our own spiritual health.)
We are called to growth and wisdom, to caring for the common good of the societies in which we live—which at this juncture certainly involves caring for our common home, the Earth, and the other creatures living with us on it. As Antiqua et nova, the note released from the Vatican’s Dicastery for Culture and Education earlier this year (cited commendingly by secular ed-tech critics like Audrey Watters) reiterates, education plays its role in this by contributing “to the person’s holistic formation in its various aspects (intellectual, cultural, spiritual, etc.) … in keeping with the nature and dignity of the human person.”
These objectives of education are not being served by students using generative software to satisfy their instructors’ prompts. And no amount of “literacy” is going to ameliorate the situation on its own. People have to want to change, or to see through the neoliberal, machine-obsessed myth, for literacy to matter.
I do believe that the students I’ve referred to are generally striving for the good as best they know how. On a practical level, I am confident they’ll go on to lead modestly successful lives as our society defines that term with regard to material well-being. I assume their motivation is not to cause harm or dupe their instructors; they’re taking part in “hustle” culture, “doing school,” and possibly feeling overwhelmed by all their commitments. Even if all this is indeed the case, liberal education calls us to more, and it’s the role of instructors and administrators to invite our students into that larger vision again and again.
If we refuse to give up on humanistic, liberal education, then what do we do? The answer is becoming clearer by the day, with plenty of folks all over the internet weighing in, though it is one many of us do not really want to hear. Because at least one major part of the answer is that we need to make an education genuinely oriented toward our students. A human-scale education, not an industrial-scale education (let’s recall over and over that computers are industrial technology). The grand irony of the generative software moment for education in neoliberal, late-capitalist society is that it is revealing so many of the limits we’ve been putting on education in the first place.
If we can’t “AI literacy” our educational problems away, we have to change our pedagogy. We have to change the ways we interact with our students inside the classroom and out: to cultivate personal relationships with them whenever possible, to model the intellectual life as something that is indeed lived out with the whole person in a many-partied dialogue stretching over millennia, decidedly not as the mere ability to move information around. This is not a time for dismay or defeat but an incitement to do the experimenting, questioning, joyful intellectual work many of us have likely wanted to do all along but have not had a reason to go off script for.
This probably means getting creative. Part of getting creative in our day probably means de-computing (as Dan McQuillan at the University of London labels it). To de-compute is to ask ourselves—given our ambient maximalist computing habits of the last couple decades—what is of value in this situation? What is important here? And then: Does a computer add value to this that it is not detracting from in some other way? Computers may help educators collect assignments neatly and read them clearly, but if that convenience is outweighed by constantly having to wonder if a student has simply copied and pasted or patch-written text with generative software, is the value of the convenience worth the problems?
Likewise, getting creative in our day probably means looking at the forms of our assessments. If the highly structured student essay makes it easier for instructors to assess because of its regularity and predictability, yet that very regularity and predictability make it a form that chat bots can produce fairly readily, well: 1) the value for assessing may not be worth the problems of teeing up chat bot–ifiable assignments and 2) maybe that wasn’t the best form for inviting genuinely insightful and exciting intellectual engagement with our disciplines’ materials in the first place.
I’ve experimented with research journals rather than papers, with oral exams as structured conversations, with essays that focus intently on one detail of a text and do not need introductions and conclusions and that privilege the student’s own voice, and other in-person, handmade, leaving-the-classroom kinds of assessments over the last academic year. Not everything succeeded the way I wanted, but it was a lively, interactive year. A convivial year. A year in which mostly I did not have to worry about whether students were automating their educations.
We have a chance as educators to rethink everything in light of what we want for our societies and for our students; let’s not miss it because it’s hard to redesign assignments and courses. (And it is hard.) Let’s experiment, for our own sakes and for our students’ sakes. Let’s experiment for the sakes of our institutions that, though they are often scoffed at in our popular discourse, I hope we believe in as vibrant communities in which we have the immense privilege of scrutinizing reality together.