AI Insights

Regulating AI Isn’t Enough. Let’s Dismantle the Logic That Put It in Schools.


In April, Secretary of Education Linda McMahon stood onstage at a major ed-tech conference in San Diego and declared with conviction that students across the U.S. would soon benefit from “A1 teaching.” She repeated it over and over — “A1” instead of “AI.” “There was a school system that’s going to start making sure that first graders, or even pre-Ks, have A1 teaching every year. That’s a wonderful thing!” she assured the crowd.

The moment quickly went viral. Late-night hosts roasted her. A.1. Steak Sauce posted a mock advertisement: “You heard her. Every school should have access to A.1.”

Funny — until it wasn’t. Because behind the gaffe was something far more disturbing: The person leading federal education policy wants to replace the emotional and intellectual process of teaching and learning with a mechanical process of content delivery, data extraction, and surveillance masquerading as education.

This is part of a broader agenda being championed by billionaires like Bill Gates. “The A.I.s will get to that ability, to be as good a tutor as any human ever could,” Gates said at a recent conference for investors in educational technology. As one headline bluntly summarized: “Bill Gates says AI will replace doctors, teachers within 10 years.”

This isn’t just a forecast; it’s a libidinal fantasy — a capitalist dream of replacing relationships with code and scalable software, while public institutions are gutted in the name of “innovation.”

Software Is Not Intelligent

We need to stop pretending that algorithms can think — and we should stop believing that software is intelligent. While the term “AI” will sometimes be necessary in order to be understood, we should begin to introduce and use more accurate language.

And no, I’m not suggesting we start calling it “A1”— unless we’re talking about how it’s being slathered on everything whether we asked for it or not. What we’re calling AI is better understood as Artificial Mimicry: a reflection without thought, articulation without a soul.

Philosopher Raphaël Millière explains that what these systems are doing is not thinking or understanding, but using what he calls “algorithmic mimicry”: sophisticated pattern-matching that mimics human outputs without possessing human cognition. He writes that large pre-trained models like ChatGPT or DALL-E 2 are more like “stochastic chameleons” — not merely parroting back memorized phrases, but blending into the style, tone, and logic of a given prompt with uncanny fluidity. That adaptability is impressive — and can be dangerous — precisely because it can so easily be mistaken for understanding.

So-called AI can be useful in certain contexts. But what we’re calling AI in schools today doesn’t think, doesn’t reason, doesn’t understand. It guesses. It copies. It manipulates syntax and patterns based on probability, not meaning. It doesn’t teach — it prompts. It doesn’t mentor — it manages.
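To see the distinction concretely, consider a deliberately tiny sketch: a toy bigram model in Python, with invented word counts standing in for the statistics a real system learns at vastly larger scale. It produces fluent-looking output by probability alone, with no model of meaning. Nothing here is any vendor's actual code.

```python
import random

# Invented counts from a made-up corpus: how often each word followed
# the previous one. Real systems learn vastly more such statistics,
# but the core move is the same: predict by frequency, not meaning.
bigram_counts = {
    "the": {"student": 3, "teacher": 2, "answer": 1},
    "student": {"writes": 2, "asks": 1},
    "teacher": {"listens": 2, "asks": 2},
}

def next_word(word: str) -> str:
    # Sample the next word in proportion to how often it followed `word`.
    followers = bigram_counts[word]
    return random.choices(list(followers), weights=list(followers.values()), k=1)[0]

# Fluent-looking continuation, produced without any grasp of what it says.
print("the", next_word("the"))
```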

In short, it mimics intelligence. But mimicry is not wisdom. It is not care. It is not pedagogy.

Real learning, as the renowned psychologist Lev Vygotsky showed, is a social process. It happens through dialogue, relationships, and shared meaning-making. Learning unfolds in what Vygotsky called the Zone of Proximal Development — that space between what a learner can do alone and what they can achieve with the guidance of a more experienced teacher, peer, or mentor — someone who can respond with care, ask the right question, and scaffold the next step.

AI can’t do that.

It can’t sense when a student’s silence means confusion or when it means trauma. It can’t notice a spark in a student’s eyes when they connect a concept to their lived experience. It can’t see the brilliance behind a messy, not fully developed idea, or the potential in an unconventional voice. It cannot build a beloved community.

It can generate facts, follow up with questions, offer corrections, give summaries, or suggest next steps — but it can’t recognize the emotional weight of confusion or the quiet excitement of an intellectual breakthrough.

That work — the real work of teaching and learning — cannot be automated.

Schools Need More Instructional Assistants and Less Artificial Intelligence

AI tools like MagicSchool, Perplexity, and School.ai do offer convenience: grammar fixes, sentence rewording, tone improvements. But they also push students toward formulaic, high-scoring answers. AI nudges students toward efficient compliance, not intellectual risk; such tools teach conformity, not originality.

Recently, my son used MagicSchool’s AI chatbot, Raina, during one of his sixth grade classes to research his project on Puerto Rico. The appeal was obvious — instant answers, no need to sift through dense texts or multiple websites. But Raina never asked the deeper questions: Why does a nation that calls itself the “land of the free” still hold Puerto Rico as a colony? How do AI systems like itself contribute to the climate crisis that is threatening the future of the island? Raina delivered tidy answers. But raising more complicated questions — and helping students wrestle with the emotional weight of the answers — is the work of a human teacher.

AI can help simplify texts or support writing, but it can also miseducate. Over time, it trains students to mimic what the algorithm deems “effective,” rather than develop their own voice or ideas. Reading becomes extraction, not connection. The soul of literature is lost when reading becomes a mechanical task, not an exchange of ideas and emotions between human beings.

Many teachers, underpaid and overwhelmed, turn to AI out of necessity.

But we have to ask: Why, in the wealthiest country in the history of the world, are class sizes so large — and resources so scarce — that teachers are forced to rely on AI instead of instructional assistants? Why aren’t we hiring more librarians to curate leveled texts or reducing class sizes so teachers can tailor learning themselves?

AI doesn’t just flatten learning — it can now monitor students’ digital behavior in deeply invasive ways. Marketed as safety tools, these systems track what students write, search, or post, even on school-issued devices taken home — extending surveillance into students’ personal lives. Instead of funding counselors, schools spend thousands on surveillance software (one New Jersey district spent $58,000). In Vancouver, Washington, a data breach exposed how much personal information, including mental health details and LGBTQ+ identities, was quietly harvested. One study found almost 60 percent of U.S. students censor themselves when monitored. As Encode Justice leaders Shreya Sampath and Marisa Syed put it, students care that their “data is collected and commodified,” and that their peers “censor themselves in learning environments meant to encourage exploration.”

Ursula Wolfe-Rocca, a teacher at a low-income school in Portland, Oregon, described the current use of AI as “ad hoc,” with some teachers at her school experimenting with it and others not using it at all. While her school is still developing an official policy, she voiced concern about the AI enthusiasm among some staff and administrators, driven by “unsubstantiated hype about how AI can help close the equity gap.”

Wolfe-Rocca’s description reflects a national pattern: AI use in schools is uneven and largely unregulated, yet districts are increasingly promoting its adoption. Even without a clear policy framework, the message many educators receive is that AI is coming, and they are expected to embrace it. Yet this push often comes without serious discussion of pedagogy, ethics, or the structural inequities AI may actually deepen — especially in underresourced schools like hers.

Beware of the Digital Elixir

In today’s AI gold rush, education entrepreneurs are trading in old scripts of standardization for sleek promises of personalization — touting artificial intelligence as the cure for everything from unequal tutoring access to teacher burnout. Take Salman Khan, founder of Khan Academy, who speaks in lofty terms about AI’s potential. Khan recently created the Khanmigo chatbot tutor and described it as a way to “democratize student access to individualized tutoring,” claiming it could eventually give “every student in the United States, and eventually on the planet, a world-class personal tutor.”

Khan’s new book, Brave New Words, reads like a swooning love letter to AI — an emotionless machine that, fittingly, will never love him back. It’s hard to ignore the irony of Khan titling his book Brave New Words — an echo of Huxley’s dystopian novel Brave New World, where individuality is erased, education is mechanized, and conformity is maintained through technological ease. But rather than treat Huxley’s vision as a warning, Khan seems to take it as a blueprint, and his book reads like a case study in missing the point.

In one example, Khan praises Khanmigo’s ability to generate a full World War II unit plan — complete with objectives and a multiple-choice classroom poll.

Students are asked to select the “most significant cause” of the war:

  • A) Treaty of Versailles
  • B) Rise of Hitler
  • C) Expansionist Axis policies
  • D) Failure of the League of Nations

But the hard truths are nowhere to be found. Khanmigo, for example, doesn’t prompt students to wrestle with the fact that Hitler openly praised the United States for its Jim Crow segregation laws, eugenics programs, and its genocide against Native Americans.

Like so many peddlers of snake oil education “cures” before him, Khan has pulled up to the schoolhouse door with a wagon full of digital elixirs. It’s classic EdTech hucksterism: a flashy pitch, sweeping claims about revolutionizing education, and recycled behaviorist ideas dressed up as innovation. Behaviorism — a theory that reduces learning to observable changes in behavior in response to external stimuli — treats students less as thinkers and more as programmable responders. Khan’s vision of AI chatbots replacing human tutors isn’t democratizing; it’s dehumanizing.

Far from exciting or new, these automated “solutions” follow a long tradition of behaviorist teaching technologies. As historian Audrey Watters documents in Teaching Machines, efforts to personalize learning through automation began in the 1920s and gained traction with B.F. Skinner’s teaching machines in the 1950s. But these tools often failed, built on the flawed assumption that learning is just programmed response rather than human connection.

Despite these failures, today’s tech elites are doubling down. But let’s be clear: This isn’t the kind of education they want for their own children. The wealthy get small classes, music teachers, rich libraries, arts and debate programs, and human mentors. Our kids are offered AI bots in overcrowded classrooms. It’s a familiar pattern — standardized, scripted learning for the many; creativity and care for the few. Elites claim AI will “level the playing field,” but they offload its environmental costs onto the public. Training large AI models consumes enormous amounts of energy and water and fuels the climate crisis. The same billionaires pushing AI build private compounds to shield their children from the damage their industries cause — instead of regulating tech or cutting emissions, they protect their own from both the pedagogy and the fallout of their greed.

Em Winokur is an Oregon school librarian who joined the Multnomah Education Service District’s “AI Innovators” cohort to offer a critical voice in a conversation dominated by hype and industry influence. She has seen the contradictions firsthand. “EdTech companies aren’t invested in our students’ growth or in building a more caring world,” Winokur told Truthout. “What we need isn’t more AI — it’s more teachers, support staff, and real training, especially after COVID left so many educators underprepared.”

Of course, hedge fund managers, CEOs, and the politicians they bankroll will scoff at this vision. They’ll call it impractical, unaffordable, unrealistic. They’ll argue that the economy can’t support more educators, school psychologists, smaller classes, or fully staffed school libraries. And then, without missing a beat, they’ll offer AI as the solution: cheaper, faster, easier. Theirs is a vision of a hollowed-out, mechanized imitation of education.

Beyond the Bot: Reclaiming Human Learning

Many educators and students aren’t passively accepting this AI-driven future. Youth-led groups like Encode Justice are at the forefront of the struggle to regulate AI. The Algorithmic Justice League is challenging the spread of biometric surveillance in schools, warning that facial recognition systems threaten student safety and school climate. Organizing efforts like Black Lives Matter at School and the Teach Truth movement are part of a growing refusal to let billionaires dictate the terms of learning.

AI in schools isn’t progress — it’s a sign of much deeper problems with U.S. schooling that reveal how far we’ve strayed from the purpose of education. For decades, policymakers and profiteers have swapped human care for high-stakes testing, scripted curriculum, and surveillance. AI isn’t the disease — it’s a symptom of a colonizer’s model of schooling that is extractive and dehumanizing, rather than liberating. That means regulating AI isn’t enough — we must dismantle the logic that brought it in.

I once had a student — I’ll call him Marcus — who was a high school senior already accepted into a good college. But late in the year, his grades dropped sharply, and he was suddenly at risk of not graduating. Over time, Marcus and I built trust — especially through lessons on Black history and resistance to racism. As a Black student who had long been denied this history, he came to see that I wasn’t there to grade him, rank him, or punish him, but to fight injustice. That connection helped him open up and share with me that he was unhoused. Once I understood what he was facing, I connected him with support services and worked with his other teachers to be flexible and compassionate. He ended up passing his classes, graduating, and going on to college.

That kind of care doesn’t come from code. It comes from a human relationship — one rooted in trust, justice, and love.


AI Insights


U.S. Manufacturers Such as Honeywell and Generac Seek a Breakthrough in the AI Infrastructure Market: “Over $400 Billion in Data Center Infrastructure Investment This Year”

A Microsoft data center [Photo: MS]

As the artificial intelligence (AI) craze drives the expansion of data center investment, leading U.S. manufacturing companies are entering this market in search of new growth.

The Financial Times reported on the 6th (local time) that companies such as Generac, Gates Industrial, and Honeywell are targeting demand from hyperscalers with specialized equipment such as generators and cooling systems.

“Hyperscaler,” a term used mainly in the data center and cloud industries, refers to a company that operates computing infrastructure at a scale designed to handle vast amounts of data quickly and efficiently. Representative examples include Big Tech companies such as Amazon, Microsoft (MS), Google, and Meta.

Generac, reportedly the largest producer of residential generators, has jumped into the market for large data center generators in an effort to recover its stock price, which is down 75% from its 2021 high. It recently invested $130 million in large-generator production facilities and is also expanding into the electric vehicle charger and home battery markets.

Gates, which has manufactured parts for heavy equipment and trucks, has also developed new cooling pumps and pipes for data centers over the past year, because Nvidia’s latest AI chip, Blackwell, makes liquid cooling a prerequisite. “Most equipment can be adapted for data centers with a little customization,” Gates explained.

Honeywell, an industrial equipment giant, has begun targeting the market with its cooling system control solutions. On that basis, its sales of hybrid cooling controllers have recorded double-digit growth over the past 18 months.

According to market research firm Gartner, more than $400 billion is expected to be invested in building data center infrastructure around the world this year. More than 75% of that spending is expected to be concentrated among hyperscalers such as Amazon, Microsoft, Meta, and Google.




AI Insights

OpenAI says GPT-5 will unify breakthroughs from different models


OpenAI has again confirmed that it will unify multiple models into one and create GPT-5, which is expected to ship sometime in the summer.

ChatGPT currently has too many capable models for different tasks. While the models are powerful, it can be confusing because their names are so similar.

Another issue is that OpenAI maintains an “o” lineup of models for reasoning capabilities, while GPT-4o and other models handle multimodality.

With GPT-5, OpenAI plans to unify the breakthroughs in its lineup and deliver the best of both worlds.

“We’re truly excited to not just make a net new great frontier model, we’re also going to unify our two series,” says Romain Huet, OpenAI’s Head of Developer Experience.

“The breakthrough of reasoning in the O-series and the breakthroughs in multi-modality in the GPT-series will be unified, and that will be GPT-5. And I really hope I’ll come back soon to tell you more about it.”

OpenAI previously claimed that GPT-5 will also make the existing models significantly better at everything.

“GPT-5 is our next foundational model that is meant to just make everything our models can currently do better and with less model switching,” Jerry Tworek, a vice president at OpenAI, wrote in a Reddit post.

Right now, we don’t know when GPT-5 will begin rolling out to everyone, but Sam Altman suggests it’s coming in the summer.


AI Insights

Puck hires Krietzberg to cover artificial intelligence


Puck has hired Ian Krietzberg to cover artificial intelligence, primarily through a twice-weekly newsletter.

He previously was editor in chief of The Deep View, which produces a daily newsletter on artificial intelligence.

Before that, Krietzberg was a staff writer at TheStreet.com covering tech and trending news.

He is a graduate of the College of New Jersey.




