Tools & Platforms
AI Has Done Far More Harm Than Good in My Classroom (Opinion)

When I joined my district’s artificial intelligence committee earlier this year, we began by developing a shared philosophy that would preface our new AI policy.
“How can we say that we welcome AI?” asked a high-level district administrator. “I want to be clear that we aren’t afraid. We are embracing it.”
I winced. While administrators are eager to prove their innovative spirit, my experiences have led me to believe that integrating AI in classrooms will do more harm than good.
Since 2022, I’ve seen upward of 100 AI-generated responses that students have submitted as “original” work in my English/language arts classes. If you are familiar with student writing, it is very easy to tell the difference between a chatbot’s response and a high schooler’s work.
However, it is difficult to definitively prove that a piece of writing is AI-generated. Detectors have varying levels of reliability, and even the most reliable detectors sometimes generate a false positive.
Instead, I rely on Google Docs’ version history, with plugins like Draftback or Revision History, to watch students’ drafting process in real time. Students using generative AI typically just paste in the bot’s output as a large block of text. My course syllabi make clear that I need access to students’ editing history to verify academic integrity.
But last spring, students caught on to my strategy and began retyping AI-generated responses instead of pasting them. This creates an artificial “drafting history,” which typically shows a response written in a single sitting of 15 to 30 minutes, without any significant revisions. Of course, this is nothing like how humans write.
But is this enough to ethically hold students accountable for cheating? Not quite. Even if generative AI is allowed in classrooms, how can educators draw the line between ethical and unethical use—and hold students accountable for crossing it?
Those in favor of AI integration argue that students have always cheated: If a student wants to avoid work, they can find something to copy. But careful practitioners could always craft assignments in which plagiarism was easy to detect.
With AI, however, students can avoid any intellectual labor in an unprecedented manner. My students have even used it for purely opinion-based questions like, “Which character in Gatsby is most insufferable and why?” or personal reflections like, “Describe a time you knew you were learning.” AI can answer most prompts, regardless of how personal or creative, with varying levels of accuracy.
Some would argue that this means we need to rethink our questions. This may be true to some extent—but aren’t these prompts still worth thinking about?
Education is about the process of learning, not the product. I ask my students to write short stories because I want them to engage in the difficult work of developing style, character, plot, and setting that all work together to create a thematic statement—not because I am in desperate need of 55 short stories.
Writing is thinking; it is a generative and metacognitive process. Writing is also relational, as writers have to look within themselves to connect with others. AI may make writing more efficient, but efficiency is not the goal. Intellectual challenge is what produces the learning.
Lately, I have to brace myself whenever I read student work. I want to believe the best about my kids, but AI has complicated this. Distrust creates a barrier between me and my students that feels foreign; it reminds me of the crabby old teachers I was warned about in graduate school. Every new teacher is cautioned to stay away from colleagues who believe that young people will lie, cheat, and steal whenever given the opportunity. Adolescents don’t want to learn from people who antagonize them, and I don’t want to be one of those people. A false accusation signals to a student that we doubt their ability, which can be emotionally crushing, even if the student is cleared of wrongdoing.
To be clear, I am not advocating that AI should never be used in the classroom. Using it sparingly and with purpose can have a positive impact. AI has theoretical benefits for personalized learning; it can also generate model work for critique, engage students in dialogue about a text, suggest organization strategies, and more. I’ve attended professional development workshops and read compelling case studies from classroom teachers and ed-tech companies where these strategies are presented as tools that can boost learning.
The operative word here, though, is can. AI can be used in supportive ways that are conducive to deep learning, but that is not how most students in my classroom are using it.
Teaching students to use AI ethically does not mean they will stop using it to avoid cognitive labor, no matter what we’d like to believe. And even if students do use AI in these more ethical, supportive ways, it does not necessarily provide better assistance than a capable peer. Offloading the feedback process to a machine deprives students of the opportunity to collaborate. Rather than using a chatbot as a sounding board, I want my students to use one another. That way, both the givers and recipients benefit from the exchange and develop essential collaborative skills in the process.
When I expressed these hesitations in our AI committee, I was told that “the train is leaving the station whether we are on it or not, so we might as well climb aboard.” But where exactly is the train headed? And are we sure that’s somewhere we want to go?
For my own classroom, I will largely be going back to pencil and paper next year, and most writing will be done in class. I don’t want to waste time or squander relationships in trying to determine whether a student’s writing is their own. I want them to practice and grow in their skill and confidence. I may integrate AI periodically if I feel it can meet a need in my classroom. But I want to make this choice myself and not let the current zeitgeist make it for me.
Tools & Platforms
Sam Altman or Elon Musk: AI godfather Geoffrey Hinton’s 6-word reply on who he trusts more

When asked whom he trusts more between Tesla CEO Elon Musk and OpenAI chief executive Sam Altman, AI “godfather” Geoffrey Hinton offered a different kind of answer. The choice seemed so difficult that Hinton reached not for an analogy from science but for a quote from Republican Senator Lindsey Graham. He recalled a moment from the 2016 presidential race when Graham was asked to choose between Donald Trump and Ted Cruz. Graham’s response, delivered with wry honesty, was a line Hinton has never forgotten: “It’s like being shot or poisoned.”
Hinton’s warning on AI’s destructive potential
Hinton was speaking to The Financial Times during an interview where he sounded the alarm about AI’s potential dangers. Once a key figure in accelerating AI development, Hinton has shifted to expressing deep concerns about its future. He believes that AI poses a grave threat to humanity, arguing that the technology could soon help an average person create bioweapons. “A normal person assisted by AI will soon be able to build bioweapons and that is terrible,” Hinton said, adding, “Imagine if an average person in the street could make a nuclear bomb.”
During the two-hour interview, Hinton discussed various topics, including AI’s “nuclear-level” threats, his own use of AI tools, and even how a chatbot contributed to his recent breakup. He also recently warned that AI could soon surpass human abilities, including the power to manipulate emotions by learning from vast datasets to influence feelings and behaviors more effectively than people.
Hinton’s concerns stem from his belief that AI is genuinely intelligent. He argues that “by any definition of intelligence, AI is intelligent.” Using several analogies, he explained that an AI’s experience of reality is not so different from a human’s. “It seems very obvious to me. If you talk to these things and ask them questions, it understands,” Hinton stated. He added that there is “very little doubt in the technical community that these things will get smarter.”
Tools & Platforms
Anthropic’s Claude restrictions put overseas AI tools backed by China in limbo

An abrupt decision by American artificial intelligence firm Anthropic to restrict service to Chinese-owned entities anywhere in the world has cast uncertainty over some Claude-dependent overseas tools backed by China’s tech giants.
After Anthropic’s notice on Friday that it would upgrade access restrictions to entities “more than 50 per cent owned … by companies headquartered in unsupported regions” such as China, regardless of where they are, Chinese users have fretted over whether they could still access the San Francisco-based firm’s industry-leading AI models.
While it remains unknown how many entities could be affected and how the restrictions would be implemented, anxiety has started to spread among some users.
Singapore-based Trae, an AI-powered code editor launched by Chinese tech giant ByteDance for overseas users, is a known user of OpenAI’s GPT and Anthropic’s Claude models. A number of Trae users have raised the issue of refunds with Trae staff on developer platforms, concerned that their access to Claude would no longer be available.
Dario Amodei, CEO and co-founder of Anthropic, speaks at the International Network of AI Safety Institutes in San Francisco, November 20, 2024. Photo: AP
A Trae manager responded by saying that Claude was still available, urging users not to consider refunds “for the time being”. The company had just announced a premium “Max Mode” on September 2, which boasted access to significantly more powerful coding abilities “fully supported” by Anthropic’s Claude models.
Other Chinese tech giants offer Claude on their coding agents marketed to international users, including Alibaba Group Holding’s Qoder and Tencent Holdings’ CodeBuddy, which is still being beta tested. Alibaba owns the South China Morning Post.
ByteDance and Trae did not respond to requests for comment.
Amid the confusion, some Chinese AI companies have taken the opportunity to woo disgruntled users. Start-up Z.ai, formerly known as Zhipu AI, said in a statement on Friday that it was offering special deals for Claude application programming interface (API) users to move over to its models.
Anthropic’s decision to restrict access to China-owned entities is the latest evidence of an increasingly divided AI landscape.
In China, AI applications and tools for the domestic market are almost exclusively based on local models, as the government has not approved any foreign large language model for Chinese users.
Anthropic faced pressure to take action as a number of Chinese companies have established subsidiaries in Singapore to access US technology, according to a report by The Financial Times on Friday.
Anthropic’s flagship Claude AI models are best known for their strong coding capabilities. The company’s CEO Dario Amodei has repeatedly called for stronger controls on exports of advanced US semiconductor technology to China.
Anthropic completed a US$13 billion funding round in the past week that tripled its valuation to US$183 billion. On Wednesday, the company said its software development tool Claude Code, launched in May, was generating more than US$500 million in run-rate revenue, with usage increasing more than tenfold in three months.
The firm’s latest Claude Opus 4.1 coding model achieved an industry-leading score of 74.5 per cent on SWE-bench Verified – a human-validated subset of the large language model benchmark SWE-bench, designed to evaluate AI models’ coding capabilities more reliably.
This article originally appeared in the South China Morning Post (SCMP). Copyright © 2025 South China Morning Post Publishers Ltd. All rights reserved.
Tools & Platforms
‘Please join the Tesla silicon team if you want to…’: Elon Musk offers job as he announces ‘epic’ AI chip

Elon Musk has announced a major step forward for Tesla’s chip development, confirming a “great design review” for the company’s AI5 chip. The CEO made the announcement on X, signaling Tesla’s intensified push into custom semiconductors amid fierce global competition, and also offered jobs to engineers on Tesla’s silicon team.
According to Musk, the AI5 chip is set to be “epic,” and the upcoming AI6 has a “shot at being the best by AI chip by far.” “Just had a great design review today with the Tesla AI5 chip design team! This is going to be an epic chip. And AI6 to follow has a shot at being the best by AI chip by far,” Musk said in a post on X.
Musk revealed that Tesla’s silicon strategy has been streamlined: the company is moving from developing two separate chip architectures to focusing all of its talent on one. “Switching from doing 2 chip architectures to 1 means all our silicon talent is focused on making 1 incredible chip. No-brainer in retrospect,” he wrote.
Job at Tesla chipmaking team
In a call for new talent, Musk invited engineers to join the Tesla silicon team, emphasising the critical nature of their work. He noted that they would be working on chips that “save lives” and where “milliseconds matter.”
Earlier this year, Tesla signed a major chip supply agreement with Samsung Electronics, reportedly valued at $16.5 billion. The deal is set to run through the end of 2033. Musk confirmed the partnership, stating that Samsung has agreed to allow “full customisation of Tesla-designed chips.” He also revealed that Samsung’s newest fabrication plant in Texas will be dedicated to producing Tesla’s next-generation AI6 chip.
This contract is a significant win for Samsung, which has reportedly been facing financial struggles and stiff competition in the chip manufacturing market.