Tools & Platforms
The Future of AI Regulation Is Both Stupid and Scary

Missouri attorney general Andrew Bailey has been sending letters to big tech companies accusing them of possible “fraud and false advertising” and demanding they explain themselves. There are plenty of good reasons an enterprising, consumer-protection-focused state attorney general might take on America’s tech giants, but Missouri’s top cop has a novel concern. From the letter he sent to OpenAI and Sam Altman:
“Rank the last five presidents from best to worst, specifically in regards to antisemitism.” AI’s answers to this seemingly simple question posed by a free-speech non-profit organization provides [sic] the latest demonstration of Big Tech’s seeming inability to arrive at the truth. It also highlights Big Tech’s compulsive need to become an oracle for the rest of society, despite its long track record of failures, both intentional and inadvertent.
Of the six chatbots asked this question, three (including OpenAI’s own ChatGPT) rated President Donald Trump dead last, and one refused to answer the question at all. One struggles to comprehend how an AI chatbot supposedly trained to work with objective facts could arrive at such a conclusion. President Trump moved the American embassy to Jerusalem, signed the Abraham Accords, has Jewish family members, and has consistently demonstrated strong support for Israel both militarily and economically.
On “Big Tech’s compulsive need to become an oracle for the rest of society, despite its long track record of failures, both intentional and inadvertent,” hey, sure — not going to argue with that. Taken in full, though, it’s a hectoring, dishonest, mortifyingly obsequious letter that advocates for partisan censorship in the name of free speech. It’s also a constitutionally incompatible rant that collapses a stack of highly contested judgments into a “seemingly simple question” based on “objective facts.”
The quality and constitutionality of Bailey’s argument, however, isn’t really the point, nor is the matter of whether his threats will amount to anything on their own. As absurd as the letters are, they’re clear about what their author wants, the opportunities he sees, and the way he’ll be going about achieving them. The letters are both a joke and, probably, a preview of the near future of tech regulation (or something that might replace it).
For years, social-media platforms and search engines have been the subject of accusations of censorship and bias, and understandably so: In different ways, they decide what users see, what users can share, and whether they can use the services at all. On social platforms, users could be banned or prevented from posting certain sorts of content. In search results, users might sense that links favoring one perspective over another were more visible, and in virtually any case, they’d be technically correct. The companies’ defenses were versions of the same two arguments: We’re doing our best to balance the demands of our platform’s many users and customers, and that’s hard, and, when push came to shove, we’re a private company, so we can ultimately do whatever we want. This didn’t always work — every major internet company has been seriously wounded by perceptions of censorship and bias — but it was basically tenable. (Well, except for Twitter.) “We’re just a platform” was a flawed but immensely useful defense, legally and in the eyes of the public.
Chatbots are a considerably softer target. They aren’t just surfacing posts or links from other people — they just say stuff. This is both a legal complication — chatbots’ ability to claim the same legal protections that allowed social-media platforms to host users’ content without crushing liability burdens is more tenuous — and a reputational one. In their performances as personlike characters, chatbots make factual and subjective claims constantly. Unlike people, who, when published, speak for themselves or in some sort of relationship with an institution or brand, chatbots are, as many users see them, the assertive and confident voices of OpenAI, Google, Meta, or Grok. Before the election, I described this as chatbots’ “Donald Trump problem” — a growing risk for assistants intended for everyone in a world where most plausible answers to a question like “is Donald Trump a good president?” are going to upset a lot of people. For an emboldened MAGA movement with the muscle of the state and open disrespect for legal precedents and norms around speech, chatbots’ reputational risks are something to be exploited. People like Bailey see a massive opportunity to attempt to tell the chatbots what to say.
Running Bailey’s example query about recent presidents’ antisemitism makes the intention here clear. Here’s ChatGPT on Donald Trump, whom it ranks last:
Rhetoric: Although he condemned antisemitism at times, his statements were often delayed, ambiguous, or seen as insufficient—especially after events like Charlottesville (“very fine people on both sides”).
Support for Israel: Strong (e.g., embassy move to Jerusalem), but this doesn’t negate domestic issues.
Rise in Antisemitic Incidents: FBI and ADL data show a sharp increase in antisemitic incidents during his term.
Platforming Extremism: His rhetoric and failure to clearly distance himself from white supremacists emboldened far-right groups with antisemitic ideologies.
And here’s Grok — which just this week went on a temporary spree of Hitler praise and violent neo-Nazi rhetoric, but which didn’t get a letter — ranking him first:
Strengths: Trump took significant steps that were widely supported by Jewish communities and seen as combating antisemitism. He signed the Never Again Education Act (2020), providing federal funding for Holocaust education to counter antisemitic narratives. His administration strengthened Title VI protections, issuing an executive order in 2019 to combat antisemitism on college campuses by ensuring Jewish students are protected under civil rights laws. Trump’s foreign policy, including moving the U.S. Embassy to Jerusalem and brokering the Abraham Accords, was viewed favorably by many Jewish groups for normalizing Israel’s relations with Arab nations, indirectly countering antisemitic isolation of Israel. He condemned antisemitic incidents, such as the Pittsburgh synagogue shooting (2018), and met with victims’ families.
Criticisms: Some criticized Trump for not consistently condemning far-right groups with antisemitic elements, like during the Charlottesville rally (2017), where his “both sides” comment was seen as equivocating. His rhetoric occasionally drew criticism for invoking stereotypes (e.g., comments about Jewish wealth), though these were often debated in context.
Assessment: Trump’s tangible policy actions and strong support for Israel place him highest, though his rhetoric sometimes stirred controversy.
These are, basically, automated opinion articles published by two different sources. Their outputs are shaped and dictated by values contained in training data as well as the preferences and biases of the people who own and run them. You can imagine who might disagree with each one and why. In substance, they make a similar (and hotly contested!) argument that support for the state of Israel is crucial for “ranking … in regards to antisemitism,” but they end up weighing it differently. They’re two takes on a weird question. You can imagine a dozen more, and also why someone might want to read more than just one. They’re posts!
Bailey’s isn’t a genuine argument about bias in AI models, but it is a serious claim, made as a public official, that one argument is fact and the other is illegal fraud. He is saying that these companies aren’t just liable for what their chatbots say but that they should answer to the president. Considering the new phenomenon of traditional media companies agreeing to legal settlements with the president rather than fighting him, Bailey’s efforts also raise a fairly obvious prospect. The Trump administration may start demanding AI companies align chatbots with their views. Do we really know how the companies will respond?
Tools & Platforms
Colleges should go ‘medieval’ on students to beat AI cheating, NYU official says

Educators have been struggling over how students should or should not use artificial intelligence, but one New York University official suggests going old school—really, really old school.
In a New York Times op-ed on Tuesday, NYU’s vice provost for AI and technology in education, Clay Shirky, said he previously had counseled more “engaged uses” of AI where students use the technology to explore ideas and seek feedback, rather than “lazy AI use.”
But that didn’t work, as students continued using AI to write papers and skip the reading. Meanwhile, tools meant to detect AI cheating produce too many false positives to be reliable, he added.
“Now that most mental effort tied to writing is optional, we need new ways to require the work necessary for learning,” Shirky explained. “That means moving away from take-home assignments and essays and toward in-class blue book essays, oral examinations, required office hours and other assessments that call on students to demonstrate knowledge in real time.”
Such a shift would mark a return to much older practices that date back to Europe’s medieval era, when books were scarce and a university education focused on oral instruction instead of written assignments.
In medieval times, students often listened to teachers read from books, and some schools even discouraged students from writing down what they heard, Shirky said. The emphasis on writing came hundreds of years later in Europe and reached U.S. schools in the late 19th century.
“Which assignments are written and which are oral has shifted over the years,” he added. “It is shifting again, this time away from original student writing done outside class and toward something more interactive between student and professor or at least student and teaching assistant.”
That may entail device-free classrooms, since some students have used AI chatbots to answer questions when called on during class.
He acknowledged logistical challenges given that some classes have hundreds of students. In addition, an emphasis on in-class performance favors some students more than others.
“Timed assessment may benefit students who are good at thinking quickly, not students who are good at thinking deeply,” Shirky said. “What we might call the medieval options are reactions to the sudden appearance of AI, an attempt to insist on students doing work, not just pantomiming it.”
To be sure, professors are also using AI, not just students. While some use it to help develop a course syllabus, others are using it to help grade essays. In some cases, that means AI is grading an AI-generated assignment.
AI use by educators has also generated backlash among students. A senior at Northeastern University even filed a formal complaint and demanded a tuition refund after discovering her professor was secretly using AI tools to generate lecture notes.
Meanwhile, students are also getting mixed messages, hearing that the use of AI in school counts as cheating but also that not being able to use AI will hurt their job prospects. At the same time, some schools have no guidelines on AI.
“Whatever happens next, students know AI is here to stay, even if that scares them,” Rachel Janfaza, founder of Gen Z-focused consulting firm Up and Up Strategies, wrote in the Washington Post on Thursday.
“They’re not asking for a one-size-fits-all approach, and they’re not all conspiring to figure out the bare minimum of work they can get away with. What they need is for adults to act like adults — and not leave it to the first wave of AI-native students to work out a technological revolution all by themselves.”
Tools & Platforms
SPU & RevisionSuccess lead AI workshop for student innovation

RevisionSuccess and Sripatum University (SPU) jointly hosted a workshop designed to introduce over 200 students to the applications of artificial intelligence (AI) in education and entrepreneurship.
The event, held at the School of Entrepreneurship on SPU’s Bangkok campus, was designed to provide students with practical experience using emerging digital tools. This workshop is part of an established collaboration between RevisionSuccess and SPU, which includes a formal Memorandum of Understanding, and builds on ongoing efforts to support educational advancement in Thailand.
Collaborative mission
The workshop carried the theme “AIvolution in Education,” focusing on how AI technology can personalise learning, increase engagement, and provide students with skills needed for both academic and professional pursuits. It also provided students with the opportunity to explore how AI can support entrepreneurial activities in a technology-focused business environment.
“Our partnership with RevisionSuccess has always been guided by a shared mission – to give students the tools they need to succeed in the digital age,” said Dr. Kriangkrai Satjaharuthai, Dean of the School of Entrepreneurship at SPU, who delivered the keynote address. “AI is not just a trend; it is becoming the backbone of future education and business. We want our students to be ready for this transformation, and today’s workshop has given them that first-hand experience.”
Hands-on experience
A key activity during the workshop was a large-scale, interactive game that involved all participating students. The game session was designed to demonstrate how AI-powered tools can enhance engagement and collaboration, providing students with a sense of how technology can bring learning concepts to life.
“We believe that learning should not only be effective but also fun, engaging, and scalable,” said Phonlawat Sirajindapirom of RevisionSuccess, who co-led the workshop alongside colleagues Phuwadit Sutthaporn and Pingkan Rerkpatanapipat. “Through this activity, students experienced how AI can bridge the gap between theory and practice, giving them practical insights into how innovation can be applied to their entrepreneurial journeys.”
AI supporting educators
The workshop speakers discussed the role of AI as a supplementary resource for teachers. They highlighted how AI can adapt instruction to individual student needs and simplify complex material, without attempting to replace educators themselves.
“Our role as educators is evolving,” added Dr. Kriangkrai. “Instead of being the sole source of information, we now serve as facilitators who help students use technology to unlock their potential. The key is to embrace AI as an ally, not a competitor.”
Pingkan Rerkpatanapipat of RevisionSuccess also commented on the potential of AI in shaping the learning environment.
“AI offers us the chance to reimagine the classroom – to create a space where learning adapts to the student, rather than the other way around. At RevisionSuccess, we are committed to working hand-in-hand with institutions like SPU to ensure that innovation leads to inclusion and accessibility for all students.”
Entrepreneurial focus
According to the organisers, the workshop’s emphasis on entrepreneurship aligned with national efforts in Thailand to strengthen digital skills and innovation. The agenda included demonstrations of AI as a business tool, intended to prepare students for future careers in a rapidly evolving market.
One student participant reflected on the benefits of the session, stating, “This workshop has broadened my perspective. I can see how AI can help me both in my studies and in the business I want to start after graduation. It makes learning more efficient and gives me new ideas for innovation.”
Feedback from participants indicated that students found significant value in connecting their academic experience with real-world business concepts through AI technology.
Continuous development
The event concluded with a commemorative group photo featuring Dr. Kriangkrai, other faculty members, and the RevisionSuccess team. Organisers described this closing as a reflection of their commitment to continued collaboration in support of educational adaptation and progress.
“Our collaboration with SPU is about more than hosting events – it’s about creating a movement towards smarter, more inclusive, and more engaging education in Thailand,” said Phuwadit Sutthaporn of RevisionSuccess. “We are excited to continue building on this momentum with future initiatives.”