Education
What Happens After A.I. Destroys College Writing?
On a blustery spring Thursday, just after midterms, I went out for noodles with Alex and Eugene, two undergraduates at New York University, to talk about how they use artificial intelligence in their schoolwork. When I first met Alex, last year, he was interested in a career in the arts, and he devoted a lot of his free time to photo shoots with his friends. But he had recently decided on a more practical path: he wanted to become a C.P.A. His Thursdays were busy, and he had forty-five minutes until a study session for an accounting class. He stowed his skateboard under a bench in the restaurant and shook his laptop out of his bag, connecting to the internet before we sat down.
Alex has wavy hair and speaks with the chill, singsong cadence of someone who has spent a lot of time in the Bay Area. He and Eugene scanned the menu, and Alex said that they should get clear broth, rather than spicy, “so we can both lock in our skin care.” Weeks earlier, when I’d messaged Alex, he had said that everyone he knew used ChatGPT in some fashion, but that he used it only for organizing his notes. In person, he admitted that this wasn’t remotely accurate. “Any type of writing in life, I use A.I.,” he said. He relied on Claude for research, DeepSeek for reasoning and explanation, and Gemini for image generation. ChatGPT served more general needs. “I need A.I. to text girls,” he joked, imagining an A.I.-enhanced version of Hinge. I asked if he had used A.I. when setting up our meeting. He laughed, and then replied, “Honestly, yeah. I’m not tryin’ to type all that. Could you tell?”
OpenAI released ChatGPT on November 30, 2022. Six days later, Sam Altman, the C.E.O., announced that it had reached a million users. Large language models like ChatGPT don’t “think” in the human sense—when you ask ChatGPT a question, it draws from the data sets it has been trained on and builds an answer based on predictable word patterns. Companies had experimented with A.I.-driven chatbots for years, but most sputtered upon release; Microsoft’s 2016 experiment with a bot named Tay was shut down after sixteen hours because it began spouting racist rhetoric and denying the Holocaust. But ChatGPT seemed different. It could hold a conversation and break complex ideas down into easy-to-follow steps. Within a month, Google’s management, fearful that A.I. would have an impact on its search-engine business, declared a “code red.”
Among educators, an even greater panic arose. It was too deep into the school term to implement a coherent policy for what seemed like a homework killer: in seconds, ChatGPT could collect and summarize research and draft a full essay. Many large campuses tried to regulate ChatGPT and its eventual competitors, mostly in vain. I asked Alex to show me an example of an A.I.-produced paper. Eugene wanted to see it, too. He used a different A.I. app to help with computations for his business classes, but he had never gotten the hang of using it for writing. “I got you,” Alex told him. (All the students I spoke with are identified by pseudonyms.)
He opened Claude on his laptop. I noticed a chat that mentioned abolition. “We had to read Robert Wedderburn for a class,” he explained, referring to the nineteenth-century Jamaican abolitionist. “But, obviously, I wasn’t tryin’ to read that.” He had prompted Claude for a summary, but it was too long for him to read in the ten minutes he had before class started. He told me, “I said, ‘Turn it into concise bullet points.’ ” He then transcribed Claude’s points in his notebook, since his professor ran a screen-free classroom.
Alex searched until he found a paper for an art-history class, about a museum exhibition. He had gone to the show, taken photographs of the images and the accompanying wall text, and then uploaded them to Claude, asking it to generate a paper according to the professor’s instructions. “I’m trying to do the least work possible, because this is a class I’m not hella fucking with,” he said. After skimming the essay, he felt that the A.I. hadn’t sufficiently addressed the professor’s questions, so he refined the prompt and told it to try again. In the end, Alex’s submission received the equivalent of an A-minus. He said that he had a basic grasp of the paper’s argument, but that if the professor had asked him for specifics he’d have been “so fucked.” I read the paper over Alex’s shoulder; it was a solid imitation of how an undergraduate might describe a set of images. If this had been 2007, I wouldn’t have made much of its generic tone, or of the precise, box-ticking quality of its critical observations.
Eugene, serious and somewhat solemn, had been listening with bemusement. “I would not cut and paste like he did, because I’m a lot more paranoid,” he said. He’s a couple of years younger than Alex and was in high school when ChatGPT was released. At the time, he experimented with A.I. for essays but noticed that it made errors that were easy to spot. “This passed the A.I. detector?” he asked Alex.
When ChatGPT launched, instructors adopted various measures to insure that students’ work was their own. These included requiring them to share time-stamped version histories of their Google documents, and designing written assignments that had to be completed in person, over multiple sessions. But most detective work occurs after submission. Services like GPTZero, Copyleaks, and Originality.ai analyze the structure and syntax of a piece of writing and assess the likelihood that it was produced by a machine. Alex said that his art-history professor was “hella old,” and therefore probably didn’t know about such programs. We fed the paper into a few different A.I.-detection websites. One said there was a twenty-eight-per-cent chance that the paper was A.I.-generated; another put the odds at sixty-one per cent. “That’s better than I expected,” Eugene said.
I asked if he thought what his friend had done was cheating, and Alex interrupted: “Of course. Are you fucking kidding me?”
As we looked at Alex’s laptop, I noticed that he had recently asked ChatGPT whether it was O.K. to go running in Nike Dunks. He had concluded that ChatGPT made for the best confidant. He consulted it as one might a therapist, asking for tips on dating and on how to stay motivated during dark times. His ChatGPT sidebar was an index of the highs and lows of being a young person. He admitted to me and Eugene that he’d used ChatGPT to draft his application to N.Y.U.—our lunch might never have happened had it not been for A.I. “I guess it’s really dishonest, but, fuck it, I’m here,” he said.
“It’s cheating, but I don’t think it’s, like, cheating,” Eugene said. He saw Alex’s art-history essay as a victimless crime. He was just fulfilling requirements, not training to become a literary scholar.
Alex had to rush off to his study session. I told Eugene that our conversation had made me wonder about my function as a professor. He asked if I taught English, and I nodded.
“Mm, O.K.,” he said, and laughed. “So you’re, like, majorly affected.”
I teach at a small liberal-arts college, and I often joke that a student is more likely to hand in a big paper a year late (as recently happened) than to take a dishonorable shortcut. My classes are small and intimate, driven by processes and pedagogical modes, like letting awkward silences linger, that are difficult to scale. As a result, I have always had a vague sense that my students are learning something, even when it is hard to quantify. In the past, if I was worried that a paper had been plagiarized, I would enter a few phrases from it into a search engine and call it due diligence. But I recently began noticing that some students’ writing seemed out of synch with how they expressed themselves in the classroom. One essay felt stitched together from two minds—half of it was polished and rote, the other intimate and unfiltered. Having never articulated a policy for A.I., I took the easy way out. The student had had enough shame to write half of the essay, and I focussed my feedback on improving that part.
It’s easy to get hung up on stories of academic dishonesty. Late last year, in a survey of college and university leaders, fifty-nine per cent reported an increase in cheating, a figure that feels conservative when you talk to students. A.I. has returned us to the question of what the point of higher education is. Until we’re eighteen, we go to school because we have to, studying the Second World War and reducing fractions while undergoing a process of socialization. We’re essentially learning how to follow rules. College, however, is a choice, and it has always involved the tacit agreement that students will fulfill a set of tasks, sometimes pertaining to subjects they find pointless or impractical, and then receive some kind of credential. But even for the most mercenary of students, the pursuit of a grade or a diploma has come with an ancillary benefit. You’re being taught how to do something difficult, and maybe, along the way, you come to appreciate the process of learning. But the arrival of A.I. means that you can now bypass the process, and the difficulty, altogether.
There are no reliable figures for how many American students use A.I., just stories about how everyone is doing it. A 2024 Pew Research Center survey of students between the ages of thirteen and seventeen suggests that a quarter of teens currently use ChatGPT for schoolwork, double the figure from 2023. OpenAI recently released a report claiming that one in three college students uses its products. There’s good reason to believe that these are low estimates. If you grew up Googling everything or using Grammarly to give your prose a professional gloss, it isn’t far-fetched to regard A.I. as just another productivity tool. “I see it as no different from Google,” Eugene said. “I use it for the same kind of purpose.”
Being a student is about testing boundaries and staying one step ahead of the rules. While administrators and educators have been debating new definitions for cheating and discussing the mechanics of surveillance, students have been embracing the possibilities of A.I. A few months after the release of ChatGPT, a Harvard undergraduate got approval to conduct an experiment in which ChatGPT wrote papers that had been assigned in seven courses. The A.I. skated by with a 3.57 G.P.A., a little below the school’s average. Upstart companies introduced products that specialized in “humanizing” A.I.-generated writing, and TikTok influencers began coaching their audiences on how to avoid detection.
Unable to keep pace, academic administrations largely stopped trying to control students’ use of artificial intelligence and adopted an attitude of hopeful resignation, encouraging teachers to explore the practical, pedagogical applications of A.I. In certain fields, this wasn’t a huge stretch. Studies show that A.I. is particularly effective in helping non-native speakers acclimate to college-level writing in English. In some STEM classes, using generative A.I. as a tool is acceptable. Alex and Eugene told me that their accounting professor encouraged them to take advantage of free offers on new A.I. products available only to undergraduates, as companies competed for student loyalty throughout the spring. In May, OpenAI announced ChatGPT Edu, a product specifically marketed for educational use, after schools including Oxford University, Arizona State University, and the University of Pennsylvania’s Wharton School of Business experimented with incorporating A.I. into their curricula. This month, the company detailed plans to integrate ChatGPT into every dimension of campus life, with students receiving “personalized” A.I. accounts to accompany them throughout their years in college.
Blunkett urges ministers to use ‘incredible sensitivity’ in changing Send system in England | Special educational needs
Ministers must use “incredible sensitivity” in making changes to the special educational needs system, former education secretary David Blunkett has said, as the government is urged not to drop education, health and care plans (EHCPs).
Lord Blunkett, who went through the special needs system when attending a residential school for blind children, said ministers would have to tread carefully.
The former home secretary in Tony Blair’s government also urged the government to reassure parents that it was looking for “a meaningful replacement” for EHCPs, which guarantee more than 600,000 children and young people individual support in learning.
Blunkett said he sympathised with the challenge facing Bridget Phillipson, the education secretary, saying: “It’s absolutely clear that the government will need to do this with incredible sensitivity and with a recognition it’s going to be a bumpy road.”
He said government proposals due in the autumn to reexamine Send provision in England were not the same as welfare changes, largely abandoned last week, which were aimed at reducing spending. “They put another billion in [to Send provision] and nobody noticed,” Blunkett said, adding: “We’ve got to reduce the fear of change.”
Earlier Helen Hayes, the Labour MP who chairs the cross-party Commons education select committee, called for Downing Street to commit to EHCPs, saying this was the only way to combat mistrust among many families with Send children.
“I think at this stage that would be the right thing to do,” she told BBC Radio 4’s Today programme. “We have been looking, as the education select committee, at the Send system for the last several months. We have heard extensive evidence from parents, from organisations that represent parents, from professionals and from others who are deeply involved in the system, which is failing so many children and families at the moment.
“One of the consequences of that failure is that parents really have so little trust and confidence in the Send system at the moment. And the government should take that very seriously as it charts a way forward for reform.”
A letter to the Guardian on Monday, signed by dozens of special needs and disability charities and campaigners, warned against government changes to the Send system that would restrict or abolish EHCPs.
Labour MPs who spoke to the Guardian are worried ministers are unable to explain essential details of the special educational needs shake-up being considered in the schools white paper to be published in October.
Downing Street has refused to rule out ending EHCPs, while stressing that no decisions have yet been taken ahead of a white paper on Send provision to be published in October.
Keir Starmer’s deputy spokesperson said: “I’ll just go back to the broader point that the system is not working and is in desperate need of reform. That’s why we want to actively work with parents, families, parliamentarians to make sure we get this right.”
Speaking later in the Commons, Phillipson said there was “no responsibility I take more seriously” than that to more vulnerable children. She said it was a “serious and complex area” that “we as a government are determined to get right”.
The education secretary said: “There will always be a legal right to the additional support children with Send need, and we will protect it. But alongside that, there will be a better system with strengthened support, improved access and more funding.”
Dr Will Shield, an educational psychologist from the University of Exeter, said rumoured proposals to limit EHCPs – potentially to pupils in special schools – were “deeply problematic”.
Shield said: “Mainstream schools frequently rely on EHCPs to access the funding and oversight needed to support children effectively. Without a clear, well-resourced alternative, families will fear their children are not able to access the support they need to achieve and thrive.”
Paul Whiteman, general secretary of the National Association of Head Teachers, said: “Any reforms in this space will likely provoke strong reactions and it will be crucial that the government works closely with both parents and schools every step of the way.”
The Guardian view on special needs reform: children’s needs must be the priority as the system is redesigned | Editorial
Children with special educational needs and disabilities (Send) must be supported through the education system to fulfil their potential as fully as possible. This is the bottom line for the families of the 1.6 million children with a recognised additional learning need in England, and all those who support them. It needs to be the government’s priority too.
There is no question that the rising number of children receiving extra help has placed pressure on schools and councils. There is wide agreement that the current trajectory is not sustainable. But if plans for reform are shaped around the aim of saving money by removing entitlements, rather than meeting the needs of children by improving schools, they should be expected to fail.
If ministers did not already know this, the Save Our Children’s Rights campaign launched this week ought to help. As it stands, there is no policy of restricting access to the education, health and care plans (EHCPs) that impose a legal duty on councils to provide specified support. But ministers’ criticisms of the adversarial aspects of the current system have led families to conclude that they should prepare for an attempt to remove their enforceable rights. Christine Lenehan, who advises the government, has indicated that the scope of EHCPs could be narrowed, while stressing a commitment to consultation. Tom Rees, who chairs the department for education’s specialist group, bluntly terms it “a bad system”.
Mr Rees’s panel has had its term extended until April. The education select committee will present the conclusions of its inquiry into the Send crisis in the autumn. Both should be listened to carefully. But the education secretary, Bridget Phillipson, and her team also need to show that they are capable of engaging beyond the circle of appointed experts and parliamentarians. Parents can make their views known through constituency MPs. Their voices and perspectives need to be heard in Whitehall too.
This is a hugely sensitive policy area. There is nothing parents care more about than the opportunities provided to their children, and this concern is intensified when those children have additional needs. Some positive steps have been taken during Labour’s first year. Increased capital spending on school buildings should make a difference to in-house provision, which relies on the availability of suitable spaces. Ministers are right, too, to focus on teacher training, while inclusion has been given greater prominence in the inspection framework. As with the NHS, there is a welcome emphasis on spreading best practice.
But big questions remain. Families are fearful that accountability mechanisms are going to be removed, and want to know how the new “inclusive mainstream” will be defined and judged. Councils are concerned about what happens to their £5bn in special needs budget deficits, when the so-called statutory override expires in 2028. The concerning role of private equity in special education – which mirrors changes in the children’s social care market – also needs addressing.
Schools need to adapt so that a greater range of pupils can be accommodated. The issue is how the government manages that process. The hope must be that the lesson ministers take from their failure on welfare is that consultation on highly sensitive changes, affecting millions of lives, must be thorough. In order to make change, they must build consensus.
How AI is Transforming Education in Africa
Artificial Intelligence (AI) is reshaping industries across the globe, and education in Africa is no exception. From personalized learning platforms to AI-driven teacher training, the continent is witnessing a surge in innovative solutions tackling longstanding challenges. In this Q&A Insights piece, we dive into how AI is revolutionizing education, addressing questions from our iAfrica community about its impact, opportunities, and hurdles.
What are the biggest challenges in African education that AI can address?
Africa’s education sector faces issues like limited access to quality resources, teacher shortages, and diverse linguistic needs. AI can bridge these gaps in practical ways. For instance, AI-powered platforms like Eneza Education provide mobile-based learning in local languages, reaching students in remote areas with affordable, interactive content. Adaptive learning systems analyze student performance to tailor lessons, ensuring kids in overcrowded classrooms get personalized attention. AI also supports teacher training through virtual simulations, helping educators refine skills without costly in-person workshops.
“AI can democratize education by making high-quality resources accessible to students in rural areas.” – Dr. Aisha Mwinyi, EdTech Researcher
How is AI being used to improve access to education?
Access is a critical issue, with millions of African children out of school due to distance, poverty, or conflict. AI is stepping in with scalable solutions. Chatbots and virtual tutors, like those developed by Ustad Mobile, deliver bite-sized lessons via SMS or WhatsApp, working on basic phones for low-income communities. In Nigeria, uLesson uses AI to stream offline-capable video lessons, bypassing unreliable internet. These tools ensure learning continues in areas with limited infrastructure, from refugee camps to rural villages.
Can AI help with language barriers in education?
Absolutely. Africa’s linguistic diversity—over 2,000 languages—creates unique challenges. AI-driven translation tools, such as those integrated into Kolibri by Learning Equality, adapt content into local languages like Swahili, Yoruba, or Amharic. Speech-to-text and text-to-speech systems also help non-literate learners engage with digital materials. These innovations make education inclusive, especially for marginalized groups who speak minority languages.
What are some standout African AI education startups?
The continent is buzzing with homegrown talent. M-Shule in Kenya uses AI to deliver personalized SMS-based learning, focusing on primary school students. Chalkboard Education, operating in Ghana and Côte d’Ivoire, offers offline e-learning platforms for universities, using AI to track progress. South Africa’s Siyavula combines AI with open-source textbooks to provide math and science practice, serving millions of learners. These startups show Africa isn’t just adopting AI—it’s innovating with it.
What concerns exist about AI in education?
While the potential is huge, concerns linger. Data privacy is a big one—students’ personal information must be protected, especially in regions with weak regulations. There’s also the risk of over-reliance on tech, which could sideline human teachers. Affordability is another hurdle; AI solutions must be low-cost to scale. Experts emphasize the need for ethical AI frameworks, like those being developed by AI4D Africa, to ensure tools are culturally relevant and equitable.
“We must balance AI’s efficiency with the human touch that makes education transformative.” – Prof. Kwame Osei, Education Policy Expert
How can policymakers support AI in education?
Policymakers play a pivotal role. Investing in digital infrastructure—think affordable internet and device subsidies—is crucial. Governments should also fund local AI research, as seen in Rwanda’s Digital Skills Program, which trains youth to build EdTech solutions. Public-private partnerships can scale pilots, while clear regulations on data use build trust. Our community suggests tax incentives for EdTech startups to spur innovation.
What’s next for AI in African education?
The future is bright but demands action. AI could power virtual reality classrooms, making immersive learning accessible in underfunded schools. Predictive analytics might identify at-risk students early, reducing dropout rates. But scaling these requires collaboration—between governments, startups, and communities. As iAfrica’s Q&A Forum shows, Africans are eager to shape this future, asking sharp questions and sharing bold ideas.
Got more questions about AI in education? Drop them in our Q&A Forum and join the conversation shaping Africa’s tech-driven future.