Tools & Platforms
Is Big Tech Turning Education Into an AI Video Game?
(TNS) — “Why am I learning AI if it’s going to eventually take my job?” one of my students asked me at the end of the school year.
“I don’t know,” I said. “I wonder the same thing about mine.”
Students are off for the summer, but Big Tech is working hard pitching its brand to schools: marketing its products to students as “homework buddies” and “personal tutors” and to educators as “teaching assistants” and “work pals,” all while undermining the entire field of education and sending out a sea of mixed messages.
We all have reason to worry. The dizzying pace at which artificial intelligence has infiltrated schools and dominated the discourse within education has left the classroom a battleground of contradictions.
Our fears aren’t hyperbolic. Schools in Texas and Arizona are already using AI to “teach” kids with educators as mere “guides” rather than experts in their content area.
Last year, one of my seniors told me she preferred AI to her teachers “because I can talk to AI in the middle of the night, but my teachers don’t email me back until the next morning.”
In May, Luis von Ahn, CEO of the foreign language education app Duolingo, said: “It’s just a lot more scalable to teach with AI than with teachers.” Schools, he predicted, will exist mostly just for child care. And President Donald Trump’s April 23 executive order calls for the use of AI in schools, claiming the “early exposure” will spark “curiosity and creativity.”
This pressure isn’t only coming from the White House. Education websites have uncritically embraced AI at a stunning pace. Edutopia used to highlight resources for teaching literature, history, art, math and science; now it is dominated by AI “tools” marketed to burned-out, overworked educators as time savers. EdTechTeacher and Colleague.AI call AI “knowledgeable colleagues” and “friendly buddies,” shifting the focus away from teachers’ specific subject areas.
If this isn’t dizzying enough, we educators are directed, even forced, to use AI in our teaching, then criticized when we do.
What’s really happening in the classroom is this: Teachers are unable to teach the problem-solving skills kids will need as they grow up and are blamed when an entire generation is outsourcing their imaginations to Big Tech. No wonder test scores have plunged, and anxiety and depression have risen.
Yet in glossy AI advertisements paid for with the billions of dollars Big Tech is making off schools, the classroom is portrayed as a student-centered space where kids engage with personalized technology that differentiates better than teachers, as though AI were just another school supply, like the pencil cases on their desks.
The kids know it. When I teach grammar, students want to use Grammarly. When we read a book together, they say ChatGPT can summarize it for them in seconds. When I teach them any part of the writing process, they list the dozens of AI apps that are designed to “write” the essay for them. Students readily admit they use AI to cheat, but they’re constantly getting messages to use their “writing coach,” “debate partner” and “study buddy.”
It’s always been an uphill battle for educators to get kids to like school. It’s part of the profession. “It’s our job to push students, and it’s our students’ job to resist,” a mentor told me when I was a new teacher. “In the middle,” he continued, “therein lies learning.”
Wherein lies learning now? Will school become a video game packaged as, well, school?
If educators don’t teach writing, we’re told we’re not teaching students how to communicate. If we don’t teach reading, we’re told we’re not teaching them how to think critically. If we don’t teach them business skills, we’re told we’re not preparing them to enter the workforce. Now we’re being told if we don’t teach them AI, we’re not preparing them for their future that consists of what, exactly? The future that’s poised to steal their jobs?
At the end of the school year in my freshman English class, we read Erich Maria Remarque’s novel “All Quiet on the Western Front.” I asked my ninth graders to choose passages that stood out to them. Many of them chose this one: “We are forlorn like children, and experienced like old men. We are crude and sorrowful and superficial — I believe we are lost.”
They noticed the alienation the soldiers feel from themselves. I wondered if it’s how they felt, too — estranged from their own selves. Ironically, their discovery showed the whole point of reading literature — to understand oneself and the world better and to increase one’s capacity for empathy and compassion. As my mentor teacher told me decades ago, therein lies learning.
Our kids have become soldiers caught on the front lines in the battle for education, stuck in the crossfire of Big Tech and school. The classroom — a sacred space that should prioritize human learning, discovery and academic risk-taking — has become a flashpoint in America, and our kids are in the center of it.
I recently finished reading “The Road Back,” Remarque’s sequel to “All Quiet on the Western Front.” The novel dramatizes the ongoing alienation of the soldiers once they’ve returned home from war.
“Why can’t you let the kids enjoy the few years that are left to them,” Willy, one of the soldiers, pleads, “while they need still know nothing about it?”
Is the classroom going to remain a torched battleground such as the one my students read about in “All Quiet on the Western Front” — kids hunkering in the trenches of our schools while the adults fight over the eroded terrain of education? Will they become even more cut off from their own selves, just when they’re getting to know who they are?
Liz Shulman teaches English at Evanston Township High School and in the School of Education and Social Policy at Northwestern University. She is working on a book of stories from the classroom.
©2025 Chicago Tribune. Distributed by Tribune Content Agency, LLC.
How Some Nonprofits Are Turning to AI As a Tool for Good
As millions of young people worldwide increasingly rely on AI chatbots to acquire knowledge as part of their learning — and even complete assignments for them — one organization is concerned that those in developing countries without access to the tech could be put at an unfair disadvantage.
And it’s using the very technology it believes is causing this problem to fix it.
Education Above All, a nonprofit based in Qatar, believes that because most of the world’s popular AI chatbots are created in Silicon Valley, they aren’t equipped to understand the linguistic and ethnic nuances of non-English-speaking countries, creating education inequities on a global scale. But its team sees AI as a way to tackle this problem.
In January 2025, the charity teamed up with MIT, Harvard, and the United Nations Development Programme to introduce a free and open-source AI literacy program called Digi-Wise. Delivered in partnership with educators in the developing world, it encourages children to spot AI-fueled misinformation, use AI tools responsibly in the classroom, and even develop their own AI tools from scratch.
As part of this, the charity has developed its own generative AI chatbot called Ferby. It allows users to access and personalize educational resources from the Internet-Free Education Resource Bank, an online library containing hundreds of free and open-source learning materials.
Education Above All said Ferby is already being used by over 5 million Indian children to access “project-based learning” in partnership with Indian nonprofit Mantra4Change. More recently, Education Above All has embedded Ferby into the edtech platform SwiftChat, which is used by 124 million students and teachers across India.
“Ferby curates, customizes, and creates learning materials to fit local realities, so a teacher in rural Malawi can run the right science experiment as easily as a teacher in downtown Doha,” said Aishwarya Shetty, an education specialist at Education Above All. “By marrying offline ingenuity with AI convenience, we make learning local, low-resource, and always within reach, yet at scale.”
Education Above All is among a group of organizations using AI to tackle global inequality and work toward realizing the United Nations Sustainable Development Goals. Created in 2015, the UN SDGs comprise 17 social, economic, and environmental targets that serve as guidelines for nations, businesses, and individuals to follow to help achieve a more peaceful and prosperous world. Education Above All’s projects fall under SDG 4: inclusive and equitable education.
A global effort
A range of other organizations are using AI to augment and enhance their education programming.
Tech To The Rescue, a global nonprofit that connects charities with pro-bono software development teams to meet their goals, is another organization using AI in support of the UN SDGs. Last year, it launched a three-year AI-for-good accelerator program to help NGOs meet the various UN SDGs using AI.
One organization to benefit from the program is Mercy Corps, a humanitarian group that works across over 40 countries to tackle crises like poverty, the climate crisis, natural disasters, and violence. Through the accelerator, it created an AI strategy tool that helps first responders predict disasters and coordinate resources. The World Institute on Disability AI also participated in the accelerator program, creating a resource-matching system that helps organizations allocate support to people with disabilities in hours rather than weeks.
Similarly, the International Telecommunication Union — the United Nations’ digital technology agency, and one of its oldest arms — is supporting organizations using technology to achieve the UN SDGs through its AI for Good Innovation Factory startup competition. For example, an Indian applicant — a startup called Bioniks — has enabled a teenager to reclaim the ability to do simple tasks like writing and getting dressed through the use of AI-powered prosthetics.
Challenges to consider
While AI may prove to be a powerful tool for achieving the UN SDGs, it comes with notable risks. Because AI models are largely developed by American tech giants, in an industry already marked by gender and racial inequality, unconscious bias is a major flaw of AI systems.
To address this, Shetty said layered prompts for non-English users, human review of underlying AI datasets, and the creation of indigenous chatbots are paramount to achieving Education Above All’s goals.
AI models are also power-intensive, making them largely inaccessible to the populations of developing countries. That’s why Shetty urges AI companies to provide their solutions via less tech-heavy methods, like SMS, and to offer offline features so users can still access AI resources when their internet connections drop. Free, open-source access can help, too, she added.
AI as a source for good
Challenges aside, Shetty is confident that AI can be a force for good over the next few years, particularly around education. She told BI, “We are truly energized by how the global education community is leveraging AI in education: WhatsApp-based math tutors reaching off-grid learners; algorithms that optimize teacher deployment in shortage areas; personalized content engines that democratize education; chatbots that offer psychosocial support in crisis zones and more.”
But Shetty is clear that AI should augment, rather than displace, human educators. And she said the technology should only be used if it can solve challenges faced by humans and add genuine value.
“Simply put,” she said, “let machines handle the scale, let humans handle the soul, with or without AI tools.”
EU unveils AI code of practice to help businesses comply with bloc’s rules
LONDON – The European Union on Thursday released a code of practice on general purpose artificial intelligence to help the thousands of businesses in the 27-nation bloc that use the technology comply with its landmark AI rule book.
The EU code is voluntary and complements the EU’s AI Act, a comprehensive set of regulations that was approved last year and is taking effect in phases.
The code focuses on three areas: transparency requirements for providers of AI models that are looking to integrate them into their products; copyright protections; and safety and security of the most advanced AI systems.
The AI Act’s rules on general purpose artificial intelligence are set to take effect on Aug. 2. The bloc’s AI Office, under its executive Commission, won’t start enforcing them for at least a year.
General purpose AI, exemplified by chatbots like OpenAI’s ChatGPT, can do many different tasks and underpin many of the AI systems that companies are using across the EU.
Under the AI Act, uses of artificial intelligence face different levels of scrutiny depending on the level of risk they pose, with some uses deemed unacceptable banned entirely. Violations could draw fines of up to 35 million euros ($41 million), or 7% of a company’s global revenue.
Some Big Tech companies such as Meta have resisted the regulations, saying they’re unworkable, and U.S. Vice President JD Vance, speaking at a Paris summit in February, criticized “excessive regulation” of AI, warning it could kill “a transformative industry just as it’s taking off.”
More recently, more than 40 European companies, including Airbus, Mercedes-Benz, Philips and French AI startup Mistral, urged the bloc in an open letter to postpone the regulations for two years. They say more time is needed to simplify “unclear, overlapping and increasingly complex EU regulations” that put the continent’s competitiveness in the global AI race at risk.
There was no sign that Brussels was prepared to stop the clock.
“Today’s publication of the final version of the Code of Practice for general-purpose AI marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent,” the commission’s executive vice president for tech sovereignty, security and democracy, Henna Virkkunen, said in a news release.
Copyright 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.
Google announces latest AI American Infrastructure Academy cohort
Google on Thursday announced the second cohort to take part in its AI Academy American Infrastructure program, which seeks to support companies using AI to address issues such as cybersecurity, education, and transportation.
The four-month program is designed for companies at the seed to Series A stage and provides equity-free support and resources such as leadership coaching and sales training. It’s primarily virtual, but founders will eventually convene for an in-person summit at Google. Applications opened in late April of this year and closed in mid-May; selected companies had to meet competitive criteria, including having at least six months of runway and proof of traction.
Google has a pretty good track record so far of identifying notable AI startups. Alumni from the program’s first cohort last year include the government contractor Cloverleaf AI, which went on to raise a $2.8 million seed round, and Zordi, an autonomous agtech startup that had already raised $20 million from Khosla Ventures.
And it partners with some of the most significant AI companies that use its cloud.
Here were the companies selected for this latest batch:
- Attuned Intelligence — AI-powered voice agents for call centers.
- Block Harbor — cybersecurity for vehicle systems.
- CircNova — uses AI to analyze RNA for therapeutics.
- CloudRig — provides AI technology to help contractors manage schedules, production, and work plans.
- Making Space — connects employers with disabled talent and prospective employees.
- MedHaul — connects healthcare organizations, like hospitals and clinics, to non-emergency medical transportation to book rides for patients with mobility needs.
- Mpathic — automates clinical workflows and provides AI oversight to clinical trials.
- Nimblemind.ai — helps organize health data.
- Omnia Fishing — offers personalized fishing suggestions, such as where to fish and what to bring along with you.
- Otrafy — automates the process of supply management.
- Partsimony — helps companies build and manage supply chains.
- Satlyt — a computing platform to process satellite data.
- StudyFetch — offers personalized learning experiences for students, educators, and institutions.
- Tansy AI — lets users manage their health, such as tracking appointments and records.
- Tradeverifyd — helps businesses track global supply chain risk.
- Vetr Health — offers at-home veterinary care.
- Waterplan — lets businesses track water risk.
This is just one of a number of programs through which Google invests in AI startups and research. TechCrunch reported a few months ago that Google launched its inaugural AI Futures Fund initiative to back startups building with the latest AI tools from DeepMind.
Last year, Google’s charitable wing announced a $20 million commitment to researchers and scientists in AI and an AI accelerator program to give $20 million to nonprofits developing AI technology. Sundar Pichai also said the company would create a $120 million Global AI Opportunity Fund to help make AI education more accessible to people throughout the world.
Aside from this, Google has a few other notable academies seeking to help founders, including its Founders Academy and Growth Academy. A Google spokesperson told us earlier this year that its Google for Startups Founders Fund would also begin backing AI-focused startups this year.