Education
MAGA Billionaire Is Latest to Champion Anti-DEI School, This One With More AI

Bill Ackman, the finance billionaire who has long espoused support for Donald Trump, has a new preoccupation: a network of AI-fueled private schools that teach students topics at breakneck speed, the curricula for which do not include any sort of troublesome social or political ideas.
The Wall Street Journal describes the Alpha School as a “fast-growing private school that eschews lessons on diversity, equity and inclusion.” The school’s co-founder, MacKenzie Price, told the newspaper that the curriculum is designed to avoid any sort of “political, social issues” that might get “in the way” of students’ education. “We stay very much out of that,” she said.
Alpha’s educational model is distinctive: K-12 students are taught subjects over the course of two hours using “AI-enabled software.” After that, the rest of the day is parceled out among a variety of physically and socially engaging activities. The school’s website mentions a range of workshops, some of which are based around leadership, some of which involve business education, and some of which just seem to resemble playtime. The school, which was founded over a decade ago, has campuses spread throughout the country, and it plans to open a new location in Manhattan this year, WSJ reports.
What is Ackman’s role? He’s largely a brand ambassador, according to the WSJ report. The outlet notes that Ackman became interested in the school partly due to its “stance on DEI and avoidance of concepts such as the gender continuum.” Over the past several months, Ackman has been “hyping” up the school to parents he knows, and this week, he plans to appear on a panel alongside Price, the outlet writes.
You can actually imagine Alpha’s model working quite well for many subjects, but when you get to the humanities, that’s when you run into trouble. Subjects like history, art, and literature are intrinsically subjective (they require an interpretive lens), which is why they have historically presented such thorny curricular dilemmas. One person’s socially relevant tome on 19th-century race relations is another person’s anti-American woke propaganda designed to ruin the minds of our nation’s youth. How, exactly, do places like Alpha School teach children about the American novel without letting “political, social issues” get “in the way”? From the outside, that part is unclear.
One thing’s for sure: Ackman’s support for Alpha is part of a broader trend in which billionaires (particularly tech billionaires) seek to platform alternative educational models. Bill Gates has long been a cheerleader for the charter school movement. Jeff Bezos founded his own network of preschools. And then there’s Elon Musk, who, when his elite private school wasn’t cutting it for his kids, launched his own school, Ad Astra, which he helped design (if you think about it, this is sorta like homeschool for billionaires). Since then, Musk has sought to expand the school and recently opened a campus in Texas.
For decades, billionaires have also waged a not-so-secret war on America’s public school system. The school choice movement—of which places like Alpha and Ad Astra are only the latest iterations—has largely been promulgated and funded by the 1 percent. At the same time, efforts have long been made to defund the public school system. Project 2025 (which many people believe has acted as a policy blueprint for the second Trump administration) has advocated for dismantling the Department of Education, and, earlier this year, while he was still helming the Trump administration’s DOGE initiative, Musk claimed he supported abolishing the department. In February, DOGE purported to cut $1 billion in research contracts from the agency (most of DOGE’s cuts have ended up being bullshit, however).
It’s unclear how the 1 percent envisions a majority of Americans paying for this style of private education, as reports show that tuition for, say, the Alpha School, costs about $45,000 a year. Such fee structures obviously preclude a majority of the U.S. population from participation. I suppose it’s possible that the price of admission at these schools will drop eventually. Or, maybe, the plan is just to dumb the general population down with trade schools until we all become pliant, obedient workers, while the gilded class turbo-charges its offspring intellectually and weans them on an elitist worldview that precludes any sort of empathy for the have-nots. It’s unclear what the ultimate endgame is here, although I can’t say the view looks particularly rosy from the bleachers.
Opinion | Global AI war will be won in the key arena of education and training

Having taught AI and data analytics in China, I have seen the payoff: graduates join internet giants, leading electric-vehicle makers and the finance industry.
According to Norwegian Business School professor Vegard Kolbjørnsrud, six principles define how humans and AI can work together in organisations. These principles aren’t just for managers or tech executives; they form a core mindset that should be embedded in any national AI education strategy to improve productivity for professors, teachers and students.
Let’s briefly unpack each principle and how it relates to broader national competitiveness in AI education.
The first is what he calls the addition principle. Organisational intelligence grows when human and digital actors are added effectively. We need to teach citizens to migrate from low-value to higher-level tasks with AI. A nation doesn’t need every citizen to be a machine-learning engineer, but it needs most people to understand how AI augments roles in research and development, healthcare, logistics, manufacturing, finance and creative industries. Thus, governments should democratise AI by investing in platforms that reskill everyone, fast.
What counts as cheating? – NBC 5 Dallas-Fort Worth

The book report is now a thing of the past. Take-home tests and essays are becoming obsolete.
High school and college educators say student use of artificial intelligence has become so prevalent that assigning writing outside of the classroom is like asking students to cheat.
“The cheating is off the charts. It’s the worst I’ve seen in my entire career,” says Casey Cuny, who has taught English for 23 years. Educators are no longer wondering if students will outsource schoolwork to AI chatbots. “Anything you send home, you have to assume is being AI’ed.”
The question is how schools can adapt, because many of the teaching and assessment tools used for generations are no longer effective. As AI technology rapidly improves and becomes more entwined with daily life, it is transforming how students learn and study and how teachers teach, and it’s creating new confusion over what constitutes academic dishonesty.
“We have to ask ourselves, what is cheating?” says Cuny, a 2024 recipient of California’s Teacher of the Year award. “Because I think the lines are getting blurred.”
Cuny’s students at Valencia High School in Southern California now do most of their writing in class. He monitors student laptop screens from his desktop, using software that lets him “lock down” their screens or block access to certain sites. He’s also integrating AI into his lessons and teaching students how to use AI as a study aid “to get kids learning with AI instead of cheating with AI.”
In rural Oregon, high school teacher Kelly Gibson has made a similar shift to in-class writing. She also incorporates more verbal assessments to have students discuss their understanding of the assigned reading.
“I used to give a writing prompt and say, ‘In two weeks, I want a five-paragraph essay,’” says Gibson. “These days, I can’t do that. That’s almost begging teenagers to cheat.”
Take, for example, a once typical high school English assignment: Write an essay that explains the relevance of social class in “The Great Gatsby.” Many students say their first instinct is to ask ChatGPT for help “brainstorming.” Within seconds, ChatGPT yields a list of essay ideas, examples, and quotes to back them up. The chatbot ends by asking if it can do more: “Would you like help writing any part of the essay? I can help you draft an introduction or outline a paragraph!”
Students are uncertain when AI usage is out of bounds
Students say they often turn to AI with good intentions for things like research, editing or help reading difficult texts. But AI offers unprecedented temptation, and it’s sometimes hard to know where to draw the line.
College sophomore Lily Brown, a psychology major at an East Coast liberal arts school, relies on ChatGPT to help outline essays because she struggles putting the pieces together herself. ChatGPT also helped her through a freshman philosophy class, where assigned reading “felt like a different language” until she read AI summaries of the texts.
“Sometimes I feel bad using ChatGPT to summarize reading, because I wonder, is this cheating? Is helping me form outlines cheating? If I write an essay in my own words and ask how to improve it, or when it starts to edit my essay, is that cheating?”
Her class syllabi say things like: “Don’t use AI to write essays and to form thoughts,” she says, leaving a lot of grey area. Students say they often shy away from asking teachers for clarity because admitting to any AI use could flag them as cheaters.
Schools tend to leave AI policies to teachers, often meaning that rules vary widely within the same school. Some educators, for example, welcome the use of Grammarly.com, an AI-powered writing assistant, to check grammar. Others forbid it, noting the tool also offers to rewrite sentences.
“Whether you can use AI or not depends on each classroom. That can get confusing,” says Valencia 11th grader Jolie Lahey. She credits Cuny with teaching her sophomore English class various AI skills like uploading study guides to ChatGPT, having the chatbot quiz them, and then explaining problems they got wrong.
But this year, her teachers have strict “No AI” policies. “It’s such a helpful tool. And if we’re not allowed to use it that just doesn’t make sense,” Lahey says. “It feels outdated.”
Schools are introducing guidelines, gradually
Many schools initially banned use of AI after ChatGPT launched in late 2022. But views on the role of artificial intelligence in education have shifted dramatically. The term “AI literacy” has become a buzzword of the back-to-school season, with a focus on how to balance the strengths of AI with its risks and challenges.
Over the summer, several colleges and universities convened their AI task forces to draft more detailed guidelines or provide faculty with new instructions.
The University of California, Berkeley, emailed all faculty new AI guidance that instructs them to “include a clear statement on their syllabus about course expectations” regarding AI use. The guidance offered sample syllabus language for three kinds of courses: those that require AI, those that ban it in and out of class, and those that allow some AI use.
“In the absence of such a statement, students may be more likely to use these technologies inappropriately,” the email said, stressing that AI is “creating new confusion about what might constitute legitimate methods for completing student work.”
Carnegie Mellon University has seen a huge uptick in academic responsibility violations due to AI, but often students aren’t aware they’ve done anything wrong, says Rebekah Fitzsimmons, chair of the AI faculty advising committee at the university’s Heinz College of Information Systems and Public Policy.
For example, one student learning English wrote an assignment in his native language and used DeepL, an AI-powered translation tool, to translate his work to English. But he didn’t realize the platform also altered his language, which was flagged by an AI detector.
Fitzsimmons said enforcing academic integrity policies has become more complicated since the use of AI is hard to spot and even harder to prove. Faculty are allowed flexibility when they believe a student has unintentionally crossed a line, but they are now more hesitant to point out violations because they don’t want to accuse students unfairly. Students worry that there is no way to prove their innocence if they are falsely accused.
Over the summer, Fitzsimmons helped draft detailed new guidelines for students and faculty that strive to create more clarity. Faculty have been told that a blanket ban on AI “is not a viable policy” unless instructors change how they teach and assess students. Many faculty members are doing away with take-home exams. Some have returned to pen-and-paper tests in class, she said, and others have moved to “flipped classrooms,” where homework is done in class.
Emily DeJeu, who teaches communication courses at Carnegie Mellon’s business school, has eliminated writing assignments as homework and replaced them with in-class quizzes done on laptops in “a lockdown browser” that blocks students from leaving the quiz screen.
“To expect an 18-year-old to exercise great discipline is unreasonable,” DeJeu said. “That’s why it’s up to instructors to put up guardrails.”
Schools Forced to Redefine What Cheating Means Amid AI Use

Published: September 16, 2025
By India McCarty
AI makes everything easier these days, including cheating. As more students turn to the tech to help them in school, teachers have to redefine their concept of cheating on tests and papers.
“The cheating is off the charts. It’s the worst I’ve seen in my entire career,” Casey Cuny, an English teacher of 23 years, told The Associated Press. “Anything you send home, you have to assume is being AI’ed.”
EducationWeek reported that, in a survey conducted by Turnitin, “some AI use was detected in about 1 out of 10 assignments,” and that “at least 20 percent of each assignment [they reviewed] had evidence of AI use in the writing.”
Cuny continued, “We have to ask ourselves, what is cheating? Because I think the lines are getting blurred.”
Students agree, with many saying they turn to ChatGPT for help with brainstorming. However, it’s all too easy to take the chatbot up on its offer of simply writing the paper or doing the work for them.
“Sometimes I feel bad using ChatGPT to summarize reading, because I wonder, is this cheating? Is helping me form outlines cheating? If I write an essay in my own words and ask how to improve it, or when it starts to edit my essay, is that cheating?” college sophomore Lily Brown said.
She explained that there is a gray area when it comes to how teachers enforce AI restrictions — most syllabi say things like “Don’t use AI to write essays and to form thoughts,” but that leaves a lot of wiggle room for students who want to use the technology.
Now, schools are working to put detailed rules about AI use in place, hoping to cut down on cheating. The University of California, Berkeley emailed faculty with AI guidance that told them to “include a clear statement on their syllabus about course expectations” surrounding the tech.
The University of Kansas has also made its guidelines clear, with James Basham, a professor of special education and director of the school’s Center for Innovation, Design & Digital Learning, calling the rules “a foundation.”
“As schools consider forming an AI task force, for example, they’ll likely have questions on how to do that, or how to conduct an audit and risk analysis,” he explained in an interview with KU’s newspaper. “The framework can help guide them through that, and we’ll continue to build on this.”
It can be tricky to decide what’s cheating and what’s just a little extra help when it comes to AI, but as schools wise up, regulations for its use are becoming more widespread.