
Regulating AI Isn’t Enough. Let’s Dismantle the Logic That Put It in Schools.




In April, Secretary of Education Linda McMahon stood onstage at a major ed-tech conference in San Diego and declared with conviction that students across the U.S. would soon benefit from “A1 teaching.” She repeated it over and over — “A1” instead of “AI.” “There was a school system that’s going to start making sure that first graders, or even pre-Ks, have A1 teaching every year. That’s a wonderful thing!” she assured the crowd.

The moment quickly went viral. Late-night hosts roasted her. A.1. Steak Sauce posted a mock advertisement: “You heard her. Every school should have access to A.1.”

Funny — until it wasn’t. Because behind the gaffe was something far more disturbing: The person leading federal education policy wants to replace the emotional and intellectual process of teaching and learning with a mechanical process of content delivery, data extraction, and surveillance masquerading as education.

This is part of a broader agenda being championed by billionaires like Bill Gates. “The A.I.s will get to that ability, to be as good a tutor as any human ever could,” Gates said at a recent conference for investors in educational technology. As one headline bluntly summarized: “Bill Gates says AI will replace doctors, teachers within 10 years.”

This isn’t just a forecast; it’s a libidinal fantasy — a capitalist dream of replacing relationships with code and scalable software, while public institutions are gutted in the name of “innovation.”

Software Is Not Intelligent

We need to stop pretending that algorithms can think — and we should stop believing that software is intelligent. While the term “AI” will sometimes be necessary in order to be understood, we should begin introducing and using more accurate language.

And no, I’m not suggesting we start calling it “A1”— unless we’re talking about how it’s being slathered on everything whether we asked for it or not. What we’re calling AI is better understood as Artificial Mimicry: a reflection without thought, articulation without a soul.

Philosopher Raphaël Millière explains that what these systems are doing is not thinking or understanding, but using what he calls “algorithmic mimicry”: sophisticated pattern-matching that mimics human outputs without possessing human cognition. He writes that large pre-trained models like ChatGPT or DALL-E 2 are more like “stochastic chameleons” — not merely parroting back memorized phrases, but blending into the style, tone, and logic of a given prompt with uncanny fluidity. That adaptability is impressive — and can be dangerous — precisely because it can so easily be mistaken for understanding.

So-called AI can be useful in certain contexts. But what we’re calling AI in schools today doesn’t think, doesn’t reason, doesn’t understand. It guesses. It copies. It manipulates syntax and patterns based on probability, not meaning. It doesn’t teach — it prompts. It doesn’t mentor — it manages.

In short, it mimics intelligence. But mimicry is not wisdom. It is not care. It is not pedagogy.
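To make the “probability, not meaning” point concrete, here is a deliberately tiny sketch of the underlying move (an illustration of mine, not drawn from any product named in this piece). It learns nothing but word-transition counts from a scrap of text, then generates fluent-looking strings by sampling from those counts. Real large language models are enormously more sophisticated, but the basic operation is the same: predict the next token from learned statistics, with no understanding anywhere in the loop.

    import random
    from collections import defaultdict

    # A toy "stochastic parrot": record which word follows which in a tiny
    # corpus, then generate text by sampling from those counts. No grammar,
    # no meaning, no understanding -- just next-word probabilities.
    corpus = (
        "students learn through dialogue and care "
        "students learn through relationships and dialogue "
        "teachers respond with care and ask questions"
    ).split()

    # Bigram transitions: each word maps to the list of words seen after it.
    transitions = defaultdict(list)
    for current, following in zip(corpus, corpus[1:]):
        transitions[current].append(following)

    def mimic(start, length=8):
        """Produce a fluent-looking string by probability alone."""
        word, output = start, [start]
        for _ in range(length):
            followers = transitions.get(word)
            if not followers:
                break
            word = random.choice(followers)  # frequency-weighted pick
            output.append(word)
        return " ".join(output)

    print(mimic("students"))  # e.g. "students learn through relationships and dialogue and care"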

Real learning, as the renowned psychologist Lev Vygotsky showed, is a social process. It happens through dialogue, relationships, and shared meaning-making. Learning unfolds in what Vygotsky called the Zone of Proximal Development — that space between what a learner can do alone and what they can achieve with the guidance of a more experienced teacher, peer, or mentor — someone who can respond with care, ask the right question, and scaffold the next step.

AI can’t do that.

It can’t sense when a student’s silence means confusion or when it means trauma. It can’t notice a spark in a student’s eyes when they connect a concept to their lived experience. It can’t see the brilliance behind a messy, not fully developed idea, or the potential in an unconventional voice. It cannot build a beloved community.


It can generate facts, ask follow-up questions, offer corrections, give summaries, or suggest next steps — but it can’t recognize the emotional weight of confusion or the quiet excitement of an intellectual breakthrough.

That work — the real work of teaching and learning — cannot be automated.

Schools Need More Instructional Assistants and Less Artificial Intelligence

AI tools like MagicSchool, Perplexity, and School.ai do offer convenience: grammar fixes, sentence rewording, tone improvements. But they also push students toward formulaic, high-scoring answers. AI nudges students toward efficient compliance, not intellectual risk; such tools teach conformity, not originality.

Recently, my son used MagicSchool’s AI chatbot, Raina, during one of his sixth-grade classes to research his project on Puerto Rico. The appeal was obvious — instant answers, no need to sift through dense texts or multiple websites. But Raina never asked the deeper questions: Why does a nation that calls itself the “land of the free” still hold Puerto Rico as a colony? How do AI systems like itself contribute to the climate crisis that is threatening the future of the island? Raina delivered tidy answers. But raising more complicated questions — and helping students wrestle with the emotional weight of the answers — is the work of a human teacher.

AI can help simplify texts or support writing, but it can also miseducate. Over time, it trains students to mimic what the algorithm deems “effective,” rather than develop their own voice or ideas. Reading becomes extraction, not connection. The soul of literature is lost when reading becomes a mechanical task, not an exchange of ideas and emotions between human beings.

Many teachers, underpaid and overwhelmed, turn to AI out of necessity.

But we have to ask: Why, in the wealthiest country in the history of the world, are class sizes so large — and resources so scarce — that teachers are forced to rely on AI instead of instructional assistants? Why aren’t we hiring more librarians to curate leveled texts or reducing class sizes so teachers can tailor learning themselves?

AI doesn’t just flatten learning — it can now monitor students’ digital behavior in deeply invasive ways. Marketed as safety tools, these systems track what students write, search, or post, even on school-issued devices taken home — extending surveillance into students’ personal lives. Instead of funding counselors, schools spend thousands on surveillance software (one New Jersey district spent $58,000). In Vancouver, Washington, a data breach exposed how much personal information, including mental health and LGBTQ+ identities, was quietly harvested. One study found almost 60 percent of U.S. students censor themselves when monitored. As Encode Justice leaders Shreya Sampath and Marisa Syed put it, students care that their “data is collected and commodified,” and that their peers “censor themselves in learning environments meant to encourage exploration.”

Ursula Wolfe-Rocca, a teacher at a low-income school in Portland, Oregon, described the current use of AI as “ad hoc,” with some teachers at her school experimenting with it and others not using it at all. While her school is still developing an official policy, she voiced concern about the AI enthusiasm among some staff and administrators, driven by “unsubstantiated hype about how AI can help close the equity gap.”

Wolfe-Rocca’s description reflects a national pattern: AI use in schools is uneven and largely unregulated, yet districts are increasingly promoting its adoption. Even without a clear policy framework, the message many educators receive is that AI is coming, and they are expected to embrace it. Yet this push often comes without serious discussion of pedagogy, ethics, or the structural inequities AI may actually deepen — especially in underresourced schools like hers.

Beware of the Digital Elixir

In today’s AI gold rush, education entrepreneurs are trading in old scripts of standardization for sleek promises of personalization — touting artificial intelligence as the cure for everything from unequal tutoring access to teacher burnout. Take Salman Khan, founder of Khan Academy, who speaks in lofty terms about AI’s potential. Khan recently created the Khanmigo chatbot tutor and described it as a way to “democratize student access to individualized tutoring,” claiming it could eventually give “every student in the United States, and eventually on the planet, a world-class personal tutor.” Khan’s new book, Brave New Words, reads like a swooning love letter to AI — an emotionless machine that, fittingly, will never love him back. It’s hard to ignore the irony of the title — an echo of Huxley’s dystopian novel Brave New World, in which individuality is erased, education is mechanized, and conformity is maintained through technological ease. But rather than treat Huxley’s vision as a warning, Khan seems to take it as a blueprint, and his book reads like a case study in missing the point.

In one example, Khan praises Khanmigo’s ability to generate a full World War II unit plan — complete with objectives and a multiple-choice classroom poll.

Students are asked to select the “most significant cause” of the war:

  • A) Treaty of Versailles
  • B) Rise of Hitler
  • C) Expansionist Axis policies
  • D) Failure of the League of Nations

But the hard truths are nowhere to be found. Khanmigo, for example, doesn’t prompt students to wrestle with the fact that Hitler openly praised the United States for its Jim Crow segregation laws, eugenics programs, and its genocide against Native Americans.

Like so many snake oil salesmen before him, Khan has pulled up to the schoolhouse door with a wagon full of digital elixirs. It’s classic EdTech hucksterism: a flashy pitch, sweeping claims about revolutionizing education, and recycled behaviorist ideas dressed up as innovation. Behaviorism — a theory that reduces learning to observable changes in behavior in response to external stimuli — treats students less as thinkers and more as programmable responders. Khan’s vision of AI chatbots replacing human tutors isn’t democratizing; it’s dehumanizing.


Far from exciting or new, these automated “solutions” follow a long tradition of behaviorist teaching technologies. As historian Audrey Watters documents in Teaching Machines, efforts to personalize learning through automation began in the 1920s and gained traction with B.F. Skinner’s teaching machines in the 1950s. But these tools often failed, built on the flawed assumption that learning is just programmed response rather than human connection.

Despite these failures, today’s tech elites are doubling down. But let’s be clear: This isn’t the kind of education they want for their own children. The wealthy get small classes, music teachers, rich libraries, arts and debate programs, and human mentors. Our kids are offered AI bots in overcrowded classrooms. It’s a familiar pattern — standardized, scripted learning for the many; creativity and care for the few. Elites claim AI will “level the playing field,” but they offload its environmental costs onto the public. Training large AI models consumes enormous amounts of energy and water and fuels the climate crisis. The same billionaires pushing AI build private compounds to shield their children from the damage their industries cause — instead of regulating tech or cutting emissions, they protect their own from both the pedagogy and the fallout of their greed.

Em Winokur is an Oregon school librarian who joined the Multnomah Education Service District’s “AI Innovators” cohort to offer a critical voice in a conversation dominated by hype and industry influence. She has seen the contradictions firsthand. “EdTech companies aren’t invested in our students’ growth or in building a more caring world,” Winokur told Truthout. “What we need isn’t more AI — it’s more teachers, support staff, and real training, especially after COVID left so many educators underprepared.”

Of course, hedge fund managers, CEOs, and the politicians they bankroll will scoff at this vision. They’ll call it impractical, unaffordable, unrealistic. They’ll argue that the economy can’t support more educators, school psychologists, smaller classes, or fully staffed school libraries. And then, without missing a beat, they’ll offer AI as the solution: cheaper, faster, easier. Theirs is a vision of a hollowed-out, mechanized imitation of education.

Beyond the Bot: Reclaiming Human Learning

Many educators and students aren’t passively accepting this AI-driven future. Youth-led groups like Encode Justice are at the forefront of the struggle to regulate AI. The Algorithmic Justice League is challenging the spread of biometric surveillance in schools, warning that facial recognition systems threaten student safety and school climate. Organizing efforts like Black Lives Matter at School and the Teach Truth movement are part of a growing refusal to let billionaires dictate the terms of learning.

AI in schools isn’t progress — it’s a sign of deeper problems with U.S. schooling, problems that reveal how far we’ve strayed from the purpose of education. For decades, policymakers and profiteers have swapped human care for high-stakes testing, scripted curriculum, and surveillance. AI isn’t the disease — it’s a symptom of a colonizer’s model of schooling that is extractive and dehumanizing rather than liberating. That means regulating AI isn’t enough — we must dismantle the logic that brought it in.

I once had a student — I’ll call him Marcus — who was a high school senior already accepted into a good college. But late in the year, his grades dropped sharply, and he was suddenly at risk of not graduating. Over time, Marcus and I built trust — especially through lessons on Black history and resistance to racism. As a Black student who had long been denied this history, he came to see that I wasn’t there to grade him, rank him, or punish him, but to fight injustice. That connection helped him open up and share with me that he was unhoused. Once I understood what he was facing, I connected him with support services and worked with his other teachers to be flexible and compassionate. He ended up passing his classes, graduating, and going on to college.

That kind of care doesn’t come from code. It comes from a human relationship — one rooted in trust, justice, and love.








The Artificial Intelligence Legal Catastrophe Inches Closer To Reality – See Generally



AI Makes Up Cases, Court Says ‘Sure, Why Not’: Judge signed off on party’s proposed order. Apparently didn’t bother to check the made-up cases.

Textualism/Originalism May Be Bankrupt, But Like Donald Trump Always Is… Ignoring Costs, Harming People Who Trust Them In Good Faith, And Barreling Forward To The Next Crisis Of Their Own Making: Justice Breyer delivers well-crafted critiques that misunderstand that proponents aren’t trying to win the argument, they’re trying to have smart people treat them like they have ideas worth engaging.

John Roberts Replies On Cue: The Chief Justice took time out of his busy schedule to clarify that people like Breyer may have detailed, powerful, constitutionally valid criticisms, but they’re losers because SCOREBOARD! SIX VOTES, SUCKAS!

Diddy Covers ‘RICO’ Suave: Prosecutors reached for racketeering. Missed.

The Definition Of Psychosis…: Speaking of doing the same thing over and over and expecting a different result, Trump appeals Perkins Coie loss.

Law Firms Exhibit Serious ‘I Got Dumped, So Let’s Get Married’ Energy: After losing 60 attorneys, Biglaw firm entertains merger talks.

Trump’s Lawyers Play Iowa Civ Pro Roulette: Team Trump’s legal eagles realized they needed to drop and refile their lawsuit. Probably a day late.

Biglaw Firm Parties Like It’s 2019: Another Biglaw firm decides what associates really need is more fluorescent lighting.

Budget Bill Limits Student Loans: If law school is harder to pay for… maybe they’ll lower tuition?





Xiaomi Founder’s Bold EV Bet Is Paying Off Where Apple’s Failed





Lei Jun, founder and chairman of Xiaomi Corp., the only tech company to have successfully diversified into carmaking, couldn’t resist.





Undervalued and Profitable: This Artificial Intelligence (AI) Stock Has Soared 73% in 2025, and It Could Still Jump Higher



Storage solutions provider Seagate Technology (STX) has registered an outstanding rally on the stock market in 2025, rising an incredible 73% year to date and beating the Nasdaq Composite index’s 7% return by a massive margin.

This impressive performance can be attributed to robust growth in the demand for storage in data centers running artificial intelligence (AI) workloads. Let’s dig into how AI is fueling Seagate’s growth and see how it could pave the way for more upside in this technology stock.


Seagate Technology is growing at an incredible pace, and it can sustain its momentum

Seagate Technology’s revenue in the first nine months of its fiscal 2025 increased almost 43% year over year to $6.65 billion. Even better, the company’s non-GAAP (adjusted) income from operations has jumped more than fourfold during this period, thanks to higher margins.

Management attributes this fantastic growth to the healthy demand for mass capacity storage in the cloud, which has created a tight supply environment and led to an increase in prices. Management remarked on the company’s April earnings call that the growing storage demand “aligns with the cloud capex investment cycle and ongoing build-out of data center infrastructure to support AI transformations.”

Specifically, 90% of the storage in large-scale data centers is done with hard drives because of their cost efficiency and scalability. With the storage requirement in data centers expected to more than double between 2024 and 2028, Seagate estimates this could push annual revenue for the data center storage market to $23 billion by 2028, up from $13 billion last year.

Seagate is in a solid position to make the most of this growth opportunity considering its 40% share of the global storage market. Not surprisingly, Seagate’s outlook for the recently concluded fiscal fourth quarter was an impressive one. The company guided for $2.4 billion in revenue at the midpoint of its range, along with $2.40 per share in earnings.

The top-line guidance is good for a 27% year-over-year increase, while earnings are on track to more than double from the prior-year period’s reading of $1.05 per share.
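As a quick sanity check on that math (a back-of-the-envelope sketch: the dollar figures are the ones quoted above, the arithmetic is mine), the guidance implies the following:

    # Back-of-the-envelope check of the fiscal Q4 guidance quoted above.
    guided_revenue = 2.4e9  # $2.4 billion revenue guidance at the midpoint
    yoy_growth = 0.27       # stated 27% year-over-year top-line increase
    implied_prior_revenue = guided_revenue / (1 + yoy_growth)
    print(f"Implied prior-year quarter revenue: ${implied_prior_revenue / 1e9:.2f}B")  # ~$1.89B

    guided_eps, prior_eps = 2.40, 1.05  # guided EPS vs. prior-year EPS
    print(f"EPS growth factor: {guided_eps / prior_eps:.2f}x")  # ~2.29x, i.e. more than double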

A solid jump in the company’s earnings points toward more gains

For the full fiscal year, Seagate could grow revenue 38%, while its adjusted earnings are on track to jump more than sixfold to $7.91 per share. Importantly, the company should be able to sustain this momentum, thanks to the tailwinds discussed above, and that sets the stage for strong returns.

[Chart: STX earnings-per-share estimates for the current fiscal year. Data by YCharts.]

The potential earnings growth combined with Seagate’s incredibly attractive valuation makes the stock a no-brainer buy. It is now trading at just 21 times trailing earnings and 16 times forward earnings estimates. The Nasdaq 100 index, meanwhile, has an average forward earnings multiple of 29, which means the stock trades at a significant discount to the tech sector overall.
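To put a number on that discount (arithmetic mine, using only the multiples quoted above):

    # Size of the discount to the Nasdaq 100's average forward multiple.
    stx_forward_pe = 16
    nasdaq_100_forward_pe = 29
    discount = 1 - stx_forward_pe / nasdaq_100_forward_pe
    print(f"Discount to the Nasdaq 100 forward P/E: {discount:.0%}")  # ~45%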

Investors looking for a fast-growing AI stock that’s also reasonably priced would do well to buy Seagate before it flies higher.

Harsh Chauhan has no position in any of the stocks mentioned. The Motley Fool has no position in any of the stocks mentioned. The Motley Fool has a disclosure policy.


