Education
‘Cheating is off the charts’: AI tools reshape how students learn and study

The book report is now a thing of the past. Take-home tests and essays are becoming obsolete.
Student use of artificial intelligence has become so prevalent, high school and college educators say, that to assign writing outside of the classroom is like asking students to cheat.
“The cheating is off the charts. It’s the worst I’ve seen in my entire career,” says Casey Cuny, who has taught English for 23 years. Educators are no longer wondering if students will outsource schoolwork to AI chatbots. “Anything you send home, you have to assume is being AI’ed.”
The question now is how schools can adapt, because many of the teaching and assessment tools that have been used for generations are no longer effective. As AI technology rapidly improves and becomes more entwined with daily life, it is transforming how students learn and study and how teachers teach, and it’s creating new confusion over what constitutes academic dishonesty.
“We have to ask ourselves, what is cheating?” says Cuny, a 2024 recipient of California’s Teacher of the Year award. “Because I think the lines are getting blurred.”
Cuny’s students at Valencia High School in southern California now do most writing in class. He monitors student laptop screens from his desktop, using software that lets him “lock down” their screens or block access to certain sites. He’s also integrating AI into his lessons and teaching students how to use AI as a study aid “to get kids learning with AI instead of cheating with AI.”
In rural Oregon, high school teacher Kelly Gibson has made a similar shift to in-class writing. She is also incorporating more verbal assessments to have students talk through their understanding of assigned reading.
“I used to give a writing prompt and say, ‘In two weeks, I want a five-paragraph essay,’” says Gibson. “These days, I can’t do that. That’s almost begging teenagers to cheat.”
Take, for example, a once typical high school English assignment: Write an essay that explains the relevance of social class in “The Great Gatsby.” Many students say their first instinct is now to ask ChatGPT for help “brainstorming.” Within seconds, ChatGPT yields a list of essay ideas, plus examples and quotes to back them up. The chatbot ends by asking if it can do more: “Would you like help writing any part of the essay? I can help you draft an introduction or outline a paragraph!”
Students are uncertain when AI usage is out of bounds
Students say they often turn to AI with good intentions for things like research, editing or help reading difficult texts. But AI offers unprecedented temptation, and it’s sometimes hard to know where to draw the line.
College sophomore Lily Brown, a psychology major at an East Coast liberal arts school, relies on ChatGPT to help outline essays because she struggles putting the pieces together herself. ChatGPT also helped her through a freshman philosophy class, where assigned reading “felt like a different language” until she read AI summaries of the texts.
“Sometimes I feel bad using ChatGPT to summarize reading, because I wonder, is this cheating? Is helping me form outlines cheating? If I write an essay in my own words and ask how to improve it, or when it starts to edit my essay, is that cheating?”
Her class syllabi say things like: “Don’t use AI to write essays and to form thoughts,” she says, but that leaves a lot of grey area. Students say they often shy away from asking teachers for clarity because admitting to any AI use could flag them as a cheater.
Schools tend to leave AI policies to teachers, which often means that rules vary widely within the same school. Some educators, for example, welcome the use of Grammarly.com, an AI-powered writing assistant, to check grammar. Others forbid it, noting the tool also offers to rewrite sentences.
“Whether you can use AI or not depends on each classroom. That can get confusing,” says Valencia 11th grader Jolie Lahey. She credits Cuny with teaching her sophomore English class a variety of AI skills like how to upload study guides to ChatGPT and have the chatbot quiz them, and then explain problems they got wrong.
But this year, her teachers have strict “No AI” policies. “It’s such a helpful tool. And if we’re not allowed to use it that just doesn’t make sense,” Lahey says. “It feels outdated.”
Schools are introducing guidelines, gradually
Many schools initially banned use of AI after ChatGPT launched in late 2022. But views on the role of artificial intelligence in education have shifted dramatically. The term “AI literacy” has become a buzzword of the back-to-school season, with a focus on how to balance the strengths of AI with its risks and challenges.
Over the summer, several colleges and universities convened their AI task forces to draft more detailed guidelines or provide faculty with new instructions.
The University of California, Berkeley emailed all faculty new AI guidance that instructs them to “include a clear statement on their syllabus about course expectations” around AI use. The guidance offered language for three sample syllabus statements — for courses that require AI, ban AI in and out of class, or allow some AI use.
“In the absence of such a statement, students may be more likely to use these technologies inappropriately,” the email said, stressing that AI is “creating new confusion about what might constitute legitimate methods for completing student work.”

Carnegie Mellon University has seen a huge uptick in academic responsibility violations due to AI, but often students aren’t aware they’ve done anything wrong, says Rebekah Fitzsimmons, chair of the AI faculty advising committee at the university’s Heinz College of Information Systems and Public Policy.
For example, one student who is learning English wrote an assignment in his native language and used DeepL, an AI-powered translation tool, to translate his work to English. But he didn’t realize the platform also altered his language, which was flagged by an AI detector.
Enforcing academic integrity policies has become more complicated, since use of AI is hard to spot and even harder to prove, Fitzsimmons said. Faculty are allowed flexibility when they believe a student has unintentionally crossed a line, but are now more hesitant to point out violations because they don’t want to accuse students unfairly. Students worry that if they are falsely accused, there is no way to prove their innocence.
Over the summer, Fitzsimmons helped draft detailed new guidelines for students and faculty that strive to create more clarity. Faculty have been told a blanket ban on AI “is not a viable policy” unless instructors make changes to the way they teach and assess students. A lot of faculty are doing away with take-home exams. Some have returned to pen and paper tests in class, she said, and others have moved to “flipped classrooms,” where homework is done in class.
Emily DeJeu, who teaches communication courses at Carnegie Mellon’s business school, has eliminated writing assignments as homework and replaced them with in-class quizzes done on laptops in “a lockdown browser” that blocks students from leaving the quiz screen.
“To expect an 18-year-old to exercise great discipline is unreasonable,” DeJeu said. “That’s why it’s up to instructors to put up guardrails.”
Education
President-elect of Oxford Union to face disciplinary proceedings for Charlie Kirk remarks | University of Oxford

The president-elect of the Oxford Union will face disciplinary proceedings for making “inappropriate remarks” celebrating the fatal shooting of Charlie Kirk, the union has announced on social media.
George Abaraonye, a student at the University of Oxford who became president-elect of the debating society after a vote in June, posted several comments in a WhatsApp group appearing to celebrate what happened, according to the Telegraph.
This included one saying: “Charlie Kirk got shot, let’s fucking go.” Another message, purportedly sent from Abaraonye’s Instagram account, read: “Charlie Kirk got shot loool.”
The Oxford Union said on Saturday that Abaraonye had suffered racial abuse and threats since his comments were revealed in the Telegraph on Thursday.
In a statement posted on social media on Saturday, the union reiterated that it had already condemned the president-elect’s “inappropriate remarks”. The society added: “We emphasise that these are his personal views and not those of the Union, nor do they represent the values of our institution.
“At the same time, we are deeply disturbed by and strongly condemn the racial abuse and threats that George has faced in response. No individual should ever be attacked because of the colour of their skin or the community they come from. Threats to his life are abhorrent. Such rhetoric has no place online, or anywhere in society.”
The statement went on to defend the right to free speech and freedom of expression, but added that free speech “cannot and will not come at the expense of violence, intimidation, or hate”.
“The Oxford Union does not possess executive powers to summarily dismiss a president-elect. However, the complaints filed against the president-elect have been forwarded for disciplinary proceedings and will be addressed with the utmost seriousness.
“Our duty is to demonstrate to our members, the university community, alumni, and the wider public, that disagreement must be expressed through debate and dialogue, not through abuse or threats. That is the tradition we uphold, and it is the standard we will continue to set.”
On Thursday, Abaraonye said he had “reacted impulsively” to the news of Kirk’s shooting, and that the comments were “quickly deleted” after news emerged of his death.
“Those words did not reflect my values,” Abaraonye added. “Nobody deserves to be the victim of political violence … I extend my condolences to his family and loved ones.
“At the same time, my reaction was shaped by the context of Mr Kirk’s own rhetoric – words that often dismissed or mocked the suffering of others. He described the deaths of American children from school shootings as an acceptable ‘cost’ of protecting gun rights. He justified the killing of civilians in Gaza, including women and children, by blaming them collectively for Hamas. He called for the retraction of the Civil Rights Act, and repeatedly spread harmful stereotypes about LGBTQ and trans communities. These were horrific and dehumanising statements.”
Kirk and Abaraonye had met during a debate on toxic masculinity held by the Oxford Union in May, the Telegraph reported. Donald Trump, the US president, paid tribute to Kirk as a “martyr for truth and freedom” after the shooting.
Valerie Amos, the master of University College, Oxford, said on Friday that no disciplinary action would be taken against Abaraonye by the college he attends.
Amos said: “Though Mr Abaraonye’s comments are abhorrent, they do not contravene the college’s policies on free speech, or any other relevant policy. Therefore, no disciplinary action will be taken.”
Education
‘It was personal, critical’: Bristol parents’ long battle over council Send services | Special educational needs

“I’ve realised how damaging the whole thing’s been because, you know, you can’t trust people,” Jen Smith says from her home in Bristol.
Smith is one of a number of parents of children with special educational needs and disabilities (Send) who allege Bristol city council spied on them because of their online criticism of the local authority.
More than three years have passed since a leak of council correspondence containing personal details – including wedding photos – of parents of Send children, and the council has finally agreed to commission an independent investigation into the claims.
Smith and others – some of whom wish to remain unnamed – have called on the former Bristol mayor Marvin Rees – now Baron Rees of Easton – to give evidence to the investigation as they search for answers as to why they were monitored.
They want to know whether the “social media spying scandal”, as it is known in the city, was linked to the cutting of funding to the Bristol Parent Carers charity days after the allegations first surfaced.
Smith, who has a son and daughter who are autistic and has been battling for improved Send provision for years, became a member of Bristol Parent Carers in 2018 and assisted in running coffee mornings and support groups in the south of the city.
She would frequently post her frustrations with the Send system in the city on social media. “It wasn’t done in any capacity as being part of the forum,” she says.
“It was just that Send was so bad in Bristol we had to challenge it, because it was, it was just a mess.”
Her view was backed up by official reviews and reports at the time. A review into alternative learning provision commissioned by the council found a catalogue of failings, and a report by Ofsted and the Care Quality Commission found “significant areas of weakness in the local area’s practice”. “Parents and carers are overwhelmingly condemning of the Send system in Bristol because of the experiences they have had,” the regulators said.
A July 2022 article in the Bristolian, a self-proclaimed “scandal sheet”, published a leaked cache of emails and a spreadsheet of “combative” social media posts that showed officials in the council’s department for children, families and education had collated examples of social media criticism by Smith and other parent carers.
One official says they are “working hard to uncover some concrete evidence” and lists a number of examples of social media posts, as well as revealing they had been trawling personal photos of some of the members of the parent carer forum.
In one line of the email, the official says: “External comms deduced this is [redacted] as image is the same as wedding photos on [redacted]’s personal Facebook site.”
In another email, an official refers to Smith’s “duplicity”.
She says: “It was personal, critical stuff … They were just so full of themselves. It’s almost like they had this little bubble where they thought they were really important.”
The council conducted an internal “fact finding” mission in August 2022, which found there had been no “systematic monitoring” of social media – an exercise that Smith and others called a whitewash.
After a vote by its children and young people policy committee, however, the council announced last month that it would commission an independent investigation into the “historic monitoring of the social media accounts of parents and carers of Send children”.
Smith is critical of Rees, who was Labour mayor from 2016 to 2024 before the people of Bristol voted to abolish the mayoral system in favour of a committee system.
She found him “vitriolic toward Send parents”, alleging he had “a real issue with anybody speaking out whatsoever”. Rees has been contacted for comment.
Kerry Bailes, who has been a Labour councillor in Bristol since 2021, believes she was among the parents monitored when one of her tweets appeared in a subject access request – a process for individuals to ask an organisation for a copy of their personal data.
Bailes, whose son has autism and campaigned for improved Send provision before the allegations of monitoring surfaced, said she had been shocked and baffled when she learned her tweet had appeared in some of the correspondence.
“It feels like a big betrayal,” she says. “It’s like being in an abusive relationship, that you’re reliant, you’re co-dependent on that service, or that person or that group of people, and it just feels like a huge betrayal, but you can’t leave them. Because what’s going to happen to the support for your child?”
Bailes said she took part in protests outside Bristol city hall to raise the profile of the crisis in Send provision in the city.
“We took snippets of that and we put it on social media,” she says. “Our aim was to help the council help themselves. At the time, there were 250 children without a school placement, so we put up bunting outside city hall with one triangle for each child that was missing a school placement.
“Prior to 2022 the parent carer forum wasn’t what it should have been. The council weren’t really working with them. We were trying to advocate for our children, advocate as an alliance. It just seemed to rub the council up the wrong way.”
Bailes dismissed the council’s subsequent internal investigation as “patting themselves on the back, saying everything’s legal and above board”.
A spokesperson for Bristol city council said: “The children and young people policy committee is committed to inclusion and transparency and has voted to conduct an independent review into historical social media use.
“The council is also progressing with its Send and inclusion strategy, which includes investment in educational psychology services, the development of an inclusion and outreach service, and is spending over £40m to create new specialist places for children over the coming five years.”
No timetable has been set for the independent investigation.
Education
Ukraine urges ethical use of AI in education

Deputy minister urges careful use of AI in schools, warning it must support education, not replace it.
AI can help build individual learning paths for Ukraine’s 3.5 million students, but its use must remain ethical, First Deputy Minister of Education and Science Yevhen Kudriavets has said.
Speaking to UNN, Kudriavets stressed that AI can analyse large volumes of information and help students acquire the knowledge they need more efficiently. He said AI could construct individual learning trajectories faster than teachers working manually.
He warned, however, that AI should not replace the educational process and that safeguards must be found to prevent misuse.
Kudriavets also said students in Ukraine should understand the reasons behind using AI, adding that it should be used to achieve knowledge rather than to obtain grades.
The deputy minister emphasised that technology itself is neutral, and how people choose to apply it determines whether it benefits education.