
Regulating AI Isn’t Enough. Let’s Dismantle the Logic That Put It in Schools.




In April, Secretary of Education Linda McMahon stood onstage at a major ed-tech conference in San Diego and declared with conviction that students across the U.S. would soon benefit from “A1 teaching.” She repeated it over and over — “A1” instead of “AI.” “There was a school system that’s going to start making sure that first graders, or even pre-Ks, have A1 teaching every year. That’s a wonderful thing!” she assured the crowd.

The moment quickly went viral. Late-night hosts roasted her. A.1. Steak Sauce posted a mock advertisement: “You heard her. Every school should have access to A.1.”

Funny — until it wasn’t. Because behind the gaffe was something far more disturbing: The person leading federal education policy wants to replace the emotional and intellectual process of teaching and learning with a mechanical process of content delivery, data extraction, and surveillance masquerading as education.

This is part of a broader agenda being championed by billionaires like Bill Gates. “The A.I.s will get to that ability, to be as good a tutor as any human ever could,” Gates said at a recent conference for investors in educational technology. As one headline bluntly summarized: “Bill Gates says AI will replace doctors, teachers within 10 years.”

This isn’t just a forecast; it’s a libidinal fantasy — a capitalist dream of replacing relationships with code and scalable software, while public institutions are gutted in the name of “innovation.”

Software Is Not Intelligent

We need to stop pretending that algorithms can think — and we should stop believing that software is intelligent. While using the term “AI” will sometimes be necessary in order to be understood, we should begin to introduce and use more accurate language.

And no, I’m not suggesting we start calling it “A1” — unless we’re talking about how it’s being slathered on everything whether we asked for it or not. What we’re calling AI is better understood as Artificial Mimicry: a reflection without thought, articulation without a soul.

Philosopher Raphaël Millière explains that what these systems are doing is not thinking or understanding, but using what he calls “algorithmic mimicry”: sophisticated pattern-matching that mimics human outputs without possessing human cognition. He writes that large pre-trained models like ChatGPT or DALL-E 2 are more like “stochastic chameleons” — not merely parroting back memorized phrases, but blending into the style, tone, and logic of a given prompt with uncanny fluidity. That adaptability is impressive — and can be dangerous — precisely because it can so easily be mistaken for understanding.

So-called AI can be useful in certain contexts. But what we’re calling AI in schools today doesn’t think, doesn’t reason, doesn’t understand. It guesses. It copies. It manipulates syntax and patterns based on probability, not meaning. It doesn’t teach — it prompts. It doesn’t mentor — it manages.
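
To make the “probability, not meaning” point concrete, here is a deliberately tiny sketch of next-word prediction in Python. The hand-built frequency table stands in for a trained model (real systems learn billions of such associations, but the principle is the same): the program emits whatever continuation is statistically likely, with no representation of what any word means.

```python
import random

# A toy bigram table: for each word, the counts of words observed to follow
# it in some text. It encodes statistical association only; nothing here
# represents what any of these words mean.
bigram_counts = {
    "the":     {"student": 7, "teacher": 5, "answer": 3},
    "student": {"asked": 6, "wrote": 4},
    "teacher": {"explained": 5, "asked": 3},
}

def next_word(word):
    """Sample a continuation in proportion to how often it followed `word`."""
    options = bigram_counts.get(word)
    if not options:
        return None
    return random.choices(list(options), weights=list(options.values()))[0]

# Generate a short, fluent-looking continuation by pure pattern-matching.
sentence = ["the"]
while len(sentence) < 4:
    word = next_word(sentence[-1])
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the teacher asked": plausible, not understood
```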

In short, it mimics intelligence. But mimicry is not wisdom. It is not care. It is not pedagogy.

Real learning, as the renowned psychologist Lev Vygotsky showed, is a social process. It happens through dialogue, relationships, and shared meaning-making. Learning unfolds in what Vygotsky called the Zone of Proximal Development — that space between what a learner can do alone and what they can achieve with the guidance of a more experienced teacher, peer, or mentor — someone who can respond with care, ask the right question, and scaffold the next step.

AI can’t do that.

It can’t sense when a student’s silence means confusion or when it means trauma. It can’t notice a spark in a student’s eyes when they connect a concept to their lived experience. It can’t see the brilliance behind a messy, not fully developed idea, or the potential in an unconventional voice. It cannot build a beloved community.


It can generate facts, follow up with questions, offer corrections, give summaries, or suggest next steps — but it can’t recognize the emotional weight of confusion or the quiet excitement of an intellectual breakthrough.

That work — the real work of teaching and learning — cannot be automated.

Schools Need More Instructional Assistants and Less Artificial Intelligence

AI tools like MagicSchool, Perplexity, and School.ai do offer convenience: grammar fixes, sentence rewording, tone improvements. But they also push students toward formulaic, high-scoring answers. AI nudges students toward efficient compliance, not intellectual risk; such tools teach conformity, not originality.

Recently, my son used MagicSchool’s AI chatbot, Raina, during one of his sixth grade classes to research his project on Puerto Rico. The appeal was obvious — instant answers, no need to sift through dense texts or multiple websites. But Raina never asked the deeper questions: Why does a nation that calls itself the “land of the free” still hold Puerto Rico as a colony? How do AI systems like itself contribute to the climate crisis that is threatening the future of the island? Raina delivered tidy answers. But raising more complicated questions — and helping students wrestle with the emotional weight of the answers — is the work of a human teacher.

AI can help simplify texts or support writing, but it can also miseducate. Over time, it trains students to mimic what the algorithm deems “effective,” rather than develop their own voice or ideas. Reading becomes extraction, not connection. The soul of literature is lost when reading becomes a mechanical task, not an exchange of ideas and emotions between human beings.

Many teachers, underpaid and overwhelmed, turn to AI out of necessity.

But we have to ask: Why, in the wealthiest country in the history of the world, are class sizes so large — and resources so scarce — that teachers are forced to rely on AI instead of instructional assistants? Why aren’t we hiring more librarians to curate leveled texts or reducing class sizes so teachers can tailor learning themselves?

AI doesn’t just flatten learning — it can now monitor students’ digital behavior in deeply invasive ways. Marketed as safety tools, these systems track what students write, search, or post, even on school-issued devices taken home — extending surveillance into students’ personal lives. Instead of funding counselors, schools spend thousands (like a New Jersey district’s $58,000) on surveillance software. In Vancouver, Washington, a data breach exposed how much personal information, including mental health information and LGBTQ+ identities, was quietly harvested. One study found almost 60 percent of U.S. students censor themselves when monitored. As Encode Justice leaders Shreya Sampath and Marisa Syed put it, students care that their “data is collected and commodified,” and that their peers “censor themselves in learning environments meant to encourage exploration.”

Ursula Wolfe-Rocca, a teacher at a low-income school in Portland, Oregon, described the current use of AI as “ad hoc,” with some teachers at her school experimenting with it and others not using it at all. While her school is still developing an official policy, she voiced concern about the AI enthusiasm among some staff and administrators, driven by “unsubstantiated hype about how AI can help close the equity gap.”

Wolfe-Rocca’s description reflects a national pattern: AI use in schools is uneven and largely unregulated, yet districts are increasingly promoting its adoption. Even without a clear policy framework, the message many educators receive is that AI is coming, and they are expected to embrace it. Yet this push often comes without serious discussion of pedagogy, ethics, or the structural inequities AI may actually deepen — especially in underresourced schools like hers.

Beware of the Digital Elixir

In today’s AI gold rush, education entrepreneurs are trading in old scripts of standardization for sleek promises of personalization — touting artificial intelligence as the cure for everything from unequal tutoring access to teacher burnout. Take Salman Khan, founder of Khan Academy, who speaks in lofty terms about AI’s potential. Khan recently created the Khanmigo chatbot tutor and described it as a way to “democratize student access to individualized tutoring,” claiming it could eventually give “every student in the United States, and eventually on the planet, a world-class personal tutor.” Khan’s new book, Brave New Words, reads like a swooning love letter to AI — an emotionless machine that, fittingly, will never love him back. It’s hard to ignore the irony of Khan titling his book Brave New Words — an echo of Huxley’s dystopian novel Brave New World, where individuality is erased, education is mechanized, and conformity is maintained through technological ease. But rather than treat Huxley’s vision as a warning, Khan seems to take it as a blueprint, and his book reads like a case study in missing the point.

In one example, Khan praises Khanmigo’s ability to generate a full World War II unit plan — complete with objectives and a multiple-choice classroom poll.

Students are asked to select the “most significant cause” of the war:

  • A) Treaty of Versailles
  • B) Rise of Hitler
  • C) Expansionist Axis policies
  • D) Failure of the League of Nations

But the hard truths are nowhere to be found. Khanmigo, for example, doesn’t prompt students to wrestle with the fact that Hitler openly praised the United States for its Jim Crow segregation laws, eugenics programs, and its genocide against Native Americans.

Like so many peddlers of snake oil education “cures” before him, Khan has pulled up to the schoolhouse door with a wagon full of digital elixirs. It’s classic EdTech hucksterism: a flashy pitch, sweeping claims about revolutionizing education, and recycled behaviorist ideas dressed up as innovation. Behaviorism — a theory that reduces learning to observable changes in behavior in response to external stimuli — treats students less as thinkers and more as programmable responders. Khan’s vision of AI chatbots replacing human tutors isn’t democratizing; it’s dehumanizing.


Far from exciting or new, these automated “solutions” follow a long tradition of behaviorist teaching technologies. As historian Audrey Watters documents in Teaching Machines, efforts to personalize learning through automation began in the 1920s and gained traction with B.F. Skinner’s teaching machines in the 1950s. But these tools often failed, built on the flawed assumption that learning is just programmed response rather than human connection.

Despite these failures, today’s tech elites are doubling down. But let’s be clear: this isn’t the kind of education they want for their own children. The wealthy get small classes, music teachers, rich libraries, arts and debate programs, and human mentors. Our kids are offered AI bots in overcrowded classrooms. It’s a familiar pattern — standardized, scripted learning for the many; creativity and care for the few. Elites claim AI will “level the playing field,” but they offload its environmental costs onto the public. Training large AI models consumes enormous amounts of energy and water and fuels the climate crisis. The same billionaires pushing AI build private compounds to shield their children from the damage their industries cause — instead of regulating tech or cutting emissions, they protect their own from both the pedagogy and the fallout of their greed.

Em Winokur is an Oregon school librarian who joined the Multnomah Education Service District’s “AI Innovators” cohort to offer a critical voice in a conversation dominated by hype and industry influence. She has seen the contradictions firsthand. “EdTech companies aren’t invested in our students’ growth or in building a more caring world,” Winokur told Truthout. “What we need isn’t more AI — it’s more teachers, support staff, and real training, especially after COVID left so many educators underprepared.”

Of course, hedge fund managers, CEOs, and the politicians they bankroll will scoff at this vision. They’ll call it impractical, unaffordable, unrealistic. They’ll argue that the economy can’t support more educators, school psychologists, smaller classes, or fully staffed school libraries. And then, without missing a beat, they’ll offer AI as the solution: cheaper, faster, easier. Theirs is a vision of a hollowed-out, mechanized imitation of education.

Beyond the Bot: Reclaiming Human Learning

Many educators and students aren’t passively accepting this AI-driven future. Youth-led groups like Encode Justice are at the forefront of the struggle to regulate AI. The Algorithmic Justice League is challenging the spread of biometric surveillance in schools, warning that facial recognition systems threaten student safety and school climate. Organizing efforts like Black Lives Matter at School and the Teach Truth movement are part of a growing refusal to let billionaires dictate the terms of learning.

AI in schools isn’t progress — it’s a sign of much deeper underlying problems with U.S. schooling that reveal how far we’ve strayed from the purpose of education. For decades, policy makers and profiteers have swapped human care for high-stakes testing, scripted curriculum, and surveillance. AI isn’t the disease — it’s a symptom of a colonizer’s model of schooling that is extractive and dehumanizing, rather than liberating. That means regulating AI isn’t enough — we must dismantle the logic that brought it in.

I once had a student — I’ll call him Marcus — who was a high school senior already accepted into a good college. But late in the year, his grades dropped sharply, and he was suddenly at risk of not graduating. Over time, Marcus and I built trust — especially through lessons on Black history and resistance to racism. As a Black student who had long been denied this history, he came to see that I wasn’t there to grade him, rank him, or punish him, but to fight injustice. That connection helped him open up and share with me that he was unhoused. Once I understood what he was facing, I connected him with support services and worked with his other teachers to be flexible and compassionate. He ended up passing his classes, graduating, and going on to college.

That kind of care doesn’t come from code. It comes from a human relationship — one rooted in trust, justice, and love.







AI: the key to human-centered business



Three key areas for AI deployment

In this context, there are three areas where AI can help organizations bridge cultural gaps and transcend operational constraints by focusing on amplifying human qualities. The following three brief cases illustrate how AI can:

  • Scale empathy in customer interactions
  • Dissolve knowledge silos within organizations
  • Support service delivery across linguistic and cultural boundaries

The pattern that emerges from these examples is clear: AI is at its most powerful not when it replaces humans but when it amplifies human connection.

1. Scaling empathy by bridging interpersonal divides

Many organizations build their service models to optimize cost efficiency and maximize throughput. Hitting these targets often means using strategies that consumers have come to dread: offshoring contact centers or replacing humans entirely with automated systems. Over time, this kind of approach reshapes both organizational culture and customer expectations, making empathy feel like a luxury rather than a standard. The consequences of these attitude shifts are very real: according to TCN’s 2024 survey, nearly two-thirds (63%) of Americans say they’re likely to abandon a brand after a single poor service experience – a nearly 50% increase over the past four years. At the same time, consumer expectations for empathy and responsiveness are rising. In most narratives about “the rise of the machines,” AI is the villain here, responsible for accelerating the move away from empathy and connection. Yet the truth is that, when AI is implemented thoughtfully, it can help bridge this gap by supporting warmer and more personalized customer experiences.

Here are two live examples of how AI can boost rather than dilute feelings of empathy and connection.

  • AI-powered contact center platforms like Genesys provide agents with on-screen hints about customer tone, journey stage, and emotional context as a call unfolds, then suggest phrasing for responses. On the surface, this is a technical solution to improve efficiency and global staffing flexibility. But its deeper value lies in its ability to help humans tailor responses to provide personalized customer engagement, thus scaling the emotional intelligence embedded in their customer interactions. (A simplified sketch of this hint pattern follows this list.)
  • AI can be unexpectedly effective at scaling empathy even in high-stakes settings like healthcare. The shift towards a “digital front door” for healthcare encounters in the US presents physicians with an enormous challenge: tens of thousands of patient messages arriving via Electronic Health Record inboxes every day. Many require responses that not only contain medically accurate information but that are also emotionally nuanced. A recent study from NYU found that AI-generated responses to patient messages were rated as more empathetic than those written by physicians, scoring higher on warmth, tone, and relational language. While not always as clinically precise, the AI replies were more likely to convey positivity and build connections. This suggests a powerful new role for generative tools. Instead of impersonal templated responses or terse replies from overburdened healthcare providers, AI can deliver personalized responses, relieving cognitive load on doctors while reinforcing a culture of compassion.
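
The first example above boils down to a repeatable pattern: classify the customer’s tone in real time, then surface a hint and suggested phrasing to the human agent, who makes the final call. Here is a rough illustrative sketch of that flow in Python. To be clear, this is not Genesys’s API: the keyword “classifier,” tone labels, and phrasing table below are invented stand-ins for the trained models and curated response libraries a real platform would use.

```python
# Illustrative sketch of the agent-assist pattern described above.
# Not the Genesys API: classifier, labels, and phrasings are stand-ins.

NEGATIVE_CUES = {"frustrated", "angry", "ridiculous", "cancel", "waited"}

SUGGESTED_PHRASING = {
    "upset": "I'm sorry about the trouble. Let's get this sorted out right now.",
    "neutral": "Happy to help with that. Could you tell me a bit more?",
}

def detect_tone(utterance: str) -> str:
    """Crude keyword-based tone guess; a real system would use a trained model."""
    return "upset" if set(utterance.lower().split()) & NEGATIVE_CUES else "neutral"

def agent_hint(utterance: str) -> dict:
    """What the human agent would see on screen as the call unfolds."""
    tone = detect_tone(utterance)
    return {"detected_tone": tone, "suggested_reply": SUGGESTED_PHRASING[tone]}

print(agent_hint("I've waited forty minutes and this is ridiculous"))
```

The design point is the division of labor: the machine handles detection and recall at scale, while the human decides what to actually say.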




AI helped the Nasdaq Composite clinch a perfect week



[Image: The Nasdaq MarketSite during morning trading on April 7, 2025, in New York City. Michael M. Santiago | Getty Images]

The Nasdaq Composite was an overachiever last week. After ending Friday higher — outperforming the S&P 500 and Dow Jones Industrial Average, both of which closed in the red — the tech-heavy index officially had five straight days of all-time closing highs.

On a weekly basis, the Nasdaq Composite, with its 2% advance, also pulled ahead of the S&P 500’s 1.6% increase and the Dow Jones Industrial Average’s 1% rise.

Given the Nasdaq Composite’s moniker as the “tech-heavy index” (much favored by financial writers who have to come up with many ways to describe it), there’s little surprise in saying that technology companies were the main engine for its finish at the top of the podium.

But it’s not just any tech. OpenAI seemed to be behind much of the market’s rise, suggesting the artificial intelligence narrative is still compelling to investors. Shares of Oracle soared last week largely because of a deal it had made with the artificial intelligence firm; companies associated with OpenAI, such as Broadcom and Nvidia, have previously seen their share prices lifted as well.

With a rate cut by the U.S. Federal Reserve all but certain to come this week — which would especially benefit cash-burning, debt-ridden, yet-to-be-profitable tech startups like OpenAI — the youngest of the three major U.S. indexes might continue to outshine its siblings in the near term.

What you need to know today

U.S. and China discuss trade deal in Spain. On Sunday, officials from both countries arrived in Madrid to begin a round of discussions on their trade partnership and divestment plans for TikTok — the first time the app has been on the agenda.

China’s economic slowdown deepens. The country’s retail sales and industrial output declined in August and fell short of estimates, while growth of fixed-asset investments declined on a year-to-date basis.

Europe’s initial public offering market lags behind. When Sweden’s Klarna chose to list in New York instead of Europe, it was a sign of how much the continent was being eclipsed by the U.S. and Asia for companies looking to go public.

Nasdaq Composite has a perfect week of record highs. The tech-heavy index rose 0.44% on Friday, sealing its unblemished run for the week. South Korea’s Kospi index touched a record high early Monday as the government reversed its tax-hike plan.

[PRO] Fed meeting the marquee event for this week. Investors are expecting the U.S. central bank to lower interest rates at its meeting this week, which concludes on Sept. 17. That move could turbocharge stocks.

And finally…

[Image: Two humanoid robots on display at the China Mobile booth at the Mobile World Conference in Shanghai on June 19, 2025. NurPhoto | Getty Images]

Is the humanoid robot industry ready for its ChatGPT moment?

Humanoid robots, which have made significant technological advances this year, may be on the cusp of a ChatGPT-like spike in investment and popularity — or at least, that’s what many in the industry believe.

Companies such as UBTech Robotics and Galbot have also installed robots in local factories, according to local media reports. Zhao Yuli, chief strategy officer at Galbot, said these deployments have come alongside a surge of investor interest and government support in the sector, as well as the maturation of both robotics and generative AI technology.

— Dylan Butts and Victoria Yeo




Ifá Divinity, Omoluabi Ethos, and Artificial Intelligence in the digital age: A contemporary synthesis




In an era marked by rapid digital advancement and algorithmic governance, the merging of indigenous epistemologies with emerging technologies is not just an ideal; it is a critical necessity for our future. As the leading Professor in the core field of Cybersecurity and Information Technology Management, I assert my position at the intersection of ancestral wisdom and computational intelligence. I advocate for a powerful synthesis that respects both sacred heritage and scientific innovation.

This discourse—Ifá Divinity, Omoluabi Ethos, and Artificial Intelligence in the Digital Age—urges a re-evaluation of digital ethics and our interactions with technology. By exploring Yoruba metaphysics and moral philosophy, we gain valuable insights for today’s complexities. Ifá provides a profound perspective, while the Omoluabi ethos emphasizes integrity and communal responsibility, offering a moral framework for AI development that transcends Western utilitarianism. This fosters a deeper connection between humanity and technology, highlighting our shared responsibilities in creating a just digital future.

In this synthesis, we go beyond simply digitising tradition; we elevate it. We don’t just welcome AI into our world; we shape it to embody the principles of prophetic stewardship, communal resilience, and ethical foresight. This is a rallying cry for scholars, technologists, and spiritual guardians to create a transformative new paradigm. Imagine a world where cybersecurity transcends mere technical protection to become a means of cultural preservation. Picture information management evolving from simple data governance into a powerful vessel for transmitting wisdom. In this vision, artificial intelligence becomes not just a tool but a conduit for divine intelligence, guiding us toward a brighter future.

Contextual Foundation: Ifá and the Omoluabi Ethos

The Ifá divination system is an essential pillar within Yoruba cosmology, embodying a rich and intricate framework that weaves together profound wisdom, ethical principles, and metaphysical insights. Central to this venerable tradition is the Omoluabi ideal, a guiding principle emphasising the importance of integrity, humility, and a strong sense of community responsibility. This ideal serves as a moral compass for individuals and a framework for harmonious living within society.

In addition to its spiritual and cultural significance, the Ifá system functions as a crucial repository of indigenous logic and ethical reasoning. Through its intricate rituals and interpretations, Ifá offers deep insights that resonate with contemporary discourses, particularly in our increasingly digital world. By engaging with the teachings of Ifá, individuals today can find valuable perspectives that enrich their understanding of ethics and foster a more meaningful connection to their cultural heritage.

Professor Olu Longe’s Legacy: Ifá as Computational Logic

Professor Olu Longe, the first Nigerian Professor of Computer Science, was a pioneering figure in African computer science and a respected academic renowned for his innovative thinking. He notably contributed to the integration of traditional African practices with modern technological concepts. His groundbreaking work focused on exploring the similarities between Ifá divination and algorithmic reasoning, highlighting a unique intersection of culture and technology. Professor Longe’s legacy continues to inspire many in the field.

Professor Longe conducted extensive research on the intricate mechanics of Ifá, a system deeply embedded in Yoruba culture that is traditionally used for guidance and decision-making. His work emphasised the systematic and data-driven nature of Ifá, framing it not just as a religious or spiritual practice, but as a sophisticated methodology for analysing information and deriving insights.

By establishing these connections, he decisively opened new avenues for understanding how ancient wisdom directly informs contemporary artificial intelligence principles. His contributions significantly advanced the discourse on the coexistence of traditional knowledge systems with modern computational theories, firmly positioning Ifá as a relevant model in the study of algorithms and decision support systems. Professor Longe’s legacy undeniably inspires new generations of scholars and practitioners eager to explore the powerful intersections of culture, technology, and intelligence.
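
The parallel is not merely rhetorical; it has a concrete combinatorial basis. A full Ifá cast produces two columns of four binary marks each, and those eight binary digits select one of 2^8 = 256 Odù, each of which indexes a body of verses. The short Python sketch below illustrates that indexing scheme. It is illustrative only: mark conventions and the ordering of the 16 principal Odù vary by lineage, and no code can stand in for the interpretive work of a trained diviner.

```python
import random

# A full Ifa cast yields two columns of four binary marks (single or double),
# i.e. eight binary digits, addressing 2**8 = 256 Odu. The ordering of the
# 16 principal figures below follows one common listing; it varies by lineage.
PRINCIPAL_ODU = [
    "Ogbe", "Oyeku", "Iwori", "Odi", "Irosun", "Owonrin", "Obara", "Okanran",
    "Ogunda", "Osa", "Ika", "Oturupon", "Otura", "Irete", "Ose", "Ofun",
]

def cast_column():
    """Four binary marks: 1 for a single mark, 2 for a double mark."""
    return tuple(random.choice((1, 2)) for _ in range(4))

def column_index(column):
    """Read the four marks as a 4-bit number in the range 0..15."""
    index = 0
    for mark in column:
        index = index * 2 + (mark - 1)
    return index

right, left = cast_column(), cast_column()
r, l = column_index(right), column_index(left)
print(f"Cast {right} / {left} -> Odu {r * 16 + l} of 256: "
      f"{PRINCIPAL_ODU[r]} {PRINCIPAL_ODU[l]}")
```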

Professor Ojo Emmanuel Ademola: A Visionary Pioneer of the Digital Age

Dive into the world of Professor Ojo Emmanuel Ademola, a standout thought leader who’s shaping the future in the fast-paced realm of digital innovation. With his cutting-edge insights and transformative ideas, he’s not just keeping pace with the digital revolution—he’s leading the charge!

Professor Ojo Emmanuel Ademola, the first Nigerian Professor of Cybersecurity and Information Technology Management, offers a contemporary and globally relevant viewpoint on the subject. While he does not adhere to Ifá divinity, his contributions have significant implications for the field. He is actively involved in developing cyber-ethical frameworks that align with the Omoluabi values of integrity and responsibility.

His work transcends mere digital transformation; it actively integrates strategies that honour cultural heritage while fully embracing cutting-edge technological advancements. Through his thought leadership, he masterfully bridges African intellectual traditions with global digital trends, igniting a transformative dialogue that resonates powerfully across cultures.

He asserts with conviction that we can fully embrace the profound cultural significance of Ifá within the Omoluabi framework, while confidently moving beyond its spiritual boundaries. This bold approach paves the way for vibrant, inclusive dialogues that flourish in our dynamic digital age. It’s a remarkable opportunity to celebrate diversity and forge unprecedented connections!

AI, Ethics, and Indigenous Knowledge Systems

The intersection of artificial intelligence and indigenous systems, particularly Ifá, raises important questions about incorporating human-centred values from African traditions. A key consideration is whether the Omoluabi ethos can serve as a moral foundation for governing AI.

The engagement of individuals beyond conventional systems, like Professor Ademola, is essential for defining the ethical framework of AI. Addressing these challenges requires the creation of collaborative knowledge ecosystems where scholars, technologists, and cultural custodians unite to develop frameworks that are both technically robust and deeply informed by ethical considerations and cultural sensitivities.

Solutions and Forward Pathways

The initiative confidently emphasises the integration of Afrocentric principles into AI development and ethics, highlighting the virtues of Omoluabi as essential ethical guidelines for AI models. It advocates for employing Ifá’s symbolic logic as a powerful cognitive framework that significantly enhances the context-awareness of AI systems.

This initiative goes beyond mere preservation; it’s a vibrant effort to digitise Ifá texts and the rich oral traditions of the Yoruba people. Crucially, this process is conducted with the full consent of the community and under the watchful eye of dedicated scholars. A key aspect of the project is to boost AI literacy among traditional custodians, creating a strong partnership and mutual understanding that bridges the gap between ancient wisdom and modern technology.

Get ready for a groundbreaking initiative that dares to blend Afrocentric ethics into the realms of cybersecurity and AI education! This dynamic movement not only champions innovative interdisciplinary research but also embraces an inspiring dialogue that celebrates both spiritual and secular viewpoints. It’s a thrilling opportunity to enrich the tech landscape with diverse voices and contributions, igniting a vibrant cultural exchange that transforms the way we think about technology. Excitement is in the air as we open the door to a more inclusive future!

Conclusion: Harmony Without Homogenization

In our fast-evolving digital landscape, Ifá maintains its vital role within the Omoluabi framework. Visionary thinkers like Professor Ojo Emmanuel Ademola show us how we can meaningfully engage with indigenous knowledge, respecting its essence while advancing ethical technology, even if we don’t practice its spiritual aspects. Rather than viewing this as a clash between tradition and modernity, we can embrace it as a beautiful fusion of ancestral wisdom and technological progress. This collaboration has the potential to enrich both domains, fostering a brighter, more inclusive future for all.


