AI Research
OpenAI and Anthropic researchers decry ‘reckless’ safety culture at Elon Musk’s xAI

AI safety researchers from OpenAI, Anthropic, and other organizations are speaking out publicly against the “reckless” and “completely irresponsible” safety culture at xAI, the billion-dollar AI startup owned by Elon Musk.
The criticisms follow weeks of scandals at xAI that have overshadowed the company’s technological advances.
Last week, the company’s AI chatbot, Grok, spouted antisemitic comments and repeatedly called itself “MechaHitler.” Shortly after xAI took its chatbot offline to address the problem, it launched an increasingly capable frontier AI model, Grok 4, which TechCrunch and others found to consult Elon Musk’s personal politics for help answering hot-button issues. In the latest development, xAI launched AI companions that take the form of a hyper-sexualized anime girl and an overly aggressive panda.
Friendly joshing among employees of competing AI labs is fairly normal, but these researchers seem to be calling for increased attention to xAI’s safety practices, which they claim to be at odds with industry norms.
“I didn’t want to post on Grok safety since I work at a competitor, but it’s not about competition,” said Boaz Barak, a computer science professor currently on leave from Harvard to work on safety research at OpenAI, in a Tuesday post on X. “I appreciate the scientists and engineers @xai but the way safety was handled is completely irresponsible.”
Barak particularly takes issue with xAI’s decision to not publish system cards — industry standard reports that detail training methods and safety evaluations in a good faith effort to share information with the research community. As a result, Barak says it’s unclear what safety training was done on Grok 4.
OpenAI and Google have a spotty reputation themselves when it comes to promptly sharing system cards when unveiling new AI models. OpenAI decided not to publish a system card for GPT-4.1, claiming it was not a frontier model. Meanwhile, Google waited months after unveiling Gemini 2.5 Pro to publish a safety report. However, these companies historically publish safety reports for all frontier AI models before they enter full production.
Barak also notes that Grok’s AI companions “take the worst issues we currently have for emotional dependencies and tries to amplify them.” In recent years, we’ve seen countless stories of unstable people developing concerning relationships with chatbots, and how AI’s over-agreeable answers can push them over the edge.
Samuel Marks, an AI safety researcher with Anthropic, also took issue with xAI’s decision not to publish a safety report, calling the move “reckless.”
“Anthropic, OpenAI, and Google’s release practices have issues,” Marks wrote in a post on X. “But they at least do something, anything to assess safety pre-deployment and document findings. xAI does not.”
The reality is that we don’t really know what xAI did to test Grok 4. In a widely shared post in the online forum LessWrong, one anonymous researcher claims that Grok 4 has no meaningful safety guardrails based on their testing.
Whether that’s true or not, the world seems to be finding out about Grok’s shortcomings in real time. Several of xAI’s safety issues have since gone viral, and the company claims to have addressed them with tweaks to Grok’s system prompt.
OpenAI, Anthropic, and xAI did not respond to TechCrunch’s request for comment.
Dan Hendrycks, a safety adviser for xAI and director of the Center for AI Safety, posted on X that the company did “dangerous capability evaluations” on Grok 4. However, the results of those evaluations have not been publicly shared.
“It concerns me when standard safety practices aren’t upheld across the AI industry, like publishing the results of dangerous capability evaluations,” said Steven Adler, an independent AI researcher who previously led safety teams at OpenAI, in a statement to TechCrunch. “Governments and the public deserve to know how AI companies are handling the risks of the very powerful systems they say they’re building.”
What’s interesting about xAI’s questionable safety practices is that Musk has long been one of the AI safety industry’s most notable advocates. The billionaire leader of xAI, Tesla, and SpaceX has warned many times about the potential for advanced AI systems to cause catastrophic outcomes for humans, and he’s praised an open approach to developing AI models.
And yet, AI researchers at competing labs claim xAI is veering from industry norms around safely releasing AI models. In doing so, Musk’s startup may be inadvertently making a strong case for state and federal lawmakers to set rules around publishing AI safety reports.
There are several attempts at the state level to do so. California state Sen. Scott Wiener is pushing a bill that would require leading AI labs — likely including xAI — to publish safety reports, while New York Gov. Kathy Hochul is currently considering a similar bill. Advocates of these bills note that most AI labs publish this type of information anyway — but evidently, not all of them do it consistently.
AI models have yet to cause truly catastrophic harm in the real world, such as deaths or billions of dollars in damages. However, many AI researchers say this could become a problem in the near future given the rapid progress of AI models and the billions of dollars Silicon Valley is investing to further improve them.
But even for skeptics of such catastrophic scenarios, there’s a strong case to suggest that Grok’s misbehavior makes the products it powers today significantly worse.
Grok spread antisemitism around the X platform this week, just a few weeks after the chatbot repeatedly brought up “white genocide” in conversations with users. Musk has indicated that Grok will be more ingrained in Tesla vehicles, and xAI is trying to sell its AI models to the Pentagon and other enterprises. It’s hard to imagine that people driving Musk’s cars, federal workers protecting the U.S., or enterprise employees automating tasks will be any more receptive to these misbehaviors than users on X.
Several researchers argue that AI safety and alignment testing not only ensures that the worst outcomes don’t happen but also protects against near-term behavioral issues.
At the very least, Grok’s incidents tend to overshadow xAI’s rapid progress in developing frontier AI models that best OpenAI and Google’s technology, just a couple years after the startup was founded.
AI Research
MIT Researchers Develop AI Tool to Improve Flu Vaccine Strain Selection

Insider Brief
- MIT researchers have developed VaxSeer, an AI system that predicts which influenza strains will dominate and which vaccines will offer the best protection, aiming to reduce guesswork in seasonal flu vaccine selection.
- Using deep learning on decades of viral sequences and lab data, VaxSeer outperformed the World Health Organization’s strain choices in 9 of 10 seasons for H3N2 and 6 of 10 for H1N1 in retrospective tests.
- Published in Nature Medicine, the study suggests VaxSeer could improve vaccine effectiveness and may eventually be applied to other rapidly evolving health threats such as antibiotic resistance or drug-resistant cancers.
MIT researchers have unveiled an artificial intelligence tool designed to improve how seasonal influenza vaccines are chosen, potentially reducing the guesswork that often leaves health officials a step behind the fast-mutating virus.
The study, published in Nature Medicine, was authored by lead researcher Wenxian Shi along with Regina Barzilay, Jeremy Wohlwend, and Menghua Wu. It was supported in part by the U.S. Defense Threat Reduction Agency and MIT’s Jameel Clinic.
According to MIT, the system, called VaxSeer, was developed by scientists at MIT’s Computer Science and Artificial Intelligence Laboratory and the MIT Jameel Clinic for Machine Learning in Health. It uses deep learning models trained on decades of viral sequences and lab results to forecast which flu strains are most likely to dominate and how well candidate vaccines will work against them. Unlike traditional approaches that evaluate single mutations in isolation, VaxSeer’s large protein language model can capture the combined effects of multiple mutations and model shifting viral dominance more accurately.
“VaxSeer adopts a large protein language model to learn the relationship between dominance and the combinatorial effects of mutations,” Shi noted. “Unlike existing protein language models that assume a static distribution of viral variants, we model dynamic dominance shifts, making it better suited for rapidly evolving viruses like influenza.”
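The article does not give VaxSeer’s exact formulation, but the idea of modeling dominance as a time-varying distribution over strains can be illustrated with a toy sketch: each strain gets a score that changes over time, and dominance is the softmax of those scores. The strain names, scores, and linear time trend below are illustrative assumptions only; in the actual system the per-strain scores come from a protein language model over full sequences.

```python
import math

# Toy illustration of time-varying strain dominance (not VaxSeer's model).
# Each strain has a base score and a per-month trend; these numbers are
# invented for illustration.
strain_scores = {
    "strain_A": (0.2, 0.10),
    "strain_B": (0.5, -0.05),
    "strain_C": (0.1, 0.20),
}

def dominance(t_months):
    """Return a probability distribution over strains at time t (softmax of scores)."""
    logits = {name: base + trend * t_months for name, (base, trend) in strain_scores.items()}
    z = sum(math.exp(v) for v in logits.values())
    return {name: math.exp(v) / z for name, v in logits.items()}

# Dominance shifts over the season: strain_B leads at t=0, strain_C overtakes later.
print(dominance(0))
print(dominance(12))
```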
In retrospective tests covering ten years of flu seasons, VaxSeer’s strain recommendations outperformed those of the World Health Organization in nine of ten cases for H3N2 influenza, and in six of ten cases for H1N1, researchers said. In one notable example, the system correctly identified a strain for 2016 that the WHO did not adopt until the following year. Its predictions also showed strong correlation with vaccine effectiveness estimates reported by U.S., Canadian, and European surveillance networks.
The tool works in two parts: one model predicts which viral strains are most likely to spread, while another evaluates how effectively antibodies from vaccines can neutralize them in common hemagglutination inhibition assays. These predictions are then combined into a coverage score, which estimates the likely effectiveness of a candidate vaccine months before flu season begins.
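The article doesn’t specify how the two predictions are combined. A minimal sketch, assuming the coverage score is a dominance-weighted average of per-strain predicted effectiveness, might look like the following; the function names and weighting rule are assumptions for illustration, not the published method.

```python
# Minimal sketch of a dominance-weighted coverage score.
# `predict_dominance` and `predict_effectiveness` stand in for the two
# models described above; their names and the weighting rule are
# illustrative assumptions, not the published formulation.

def coverage_score(vaccine, strains, predict_dominance, predict_effectiveness):
    """Weight each strain's predicted vaccine effectiveness by its forecast dominance."""
    weights = [predict_dominance(s) for s in strains]
    total = sum(weights) or 1.0
    return sum(
        (w / total) * predict_effectiveness(vaccine, s)
        for w, s in zip(weights, strains)
    )

def recommend(candidates, strains, predict_dominance, predict_effectiveness):
    """Pick the candidate vaccine with the highest estimated coverage."""
    return max(
        candidates,
        key=lambda v: coverage_score(v, strains, predict_dominance, predict_effectiveness),
    )
```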
“Given the speed of viral evolution, current therapeutic development often lags behind. VaxSeer is our attempt to catch up,” Barzilay noted.
AI Research
Analysis and Trading in One

We had just been discussing new crypto projects with AI integrations, and now ChadFi has launched an AI terminal that combines analysis and trading in one. The platform is still at an early stage of development, but the company says the beta version of its AI-powered platform already implements research, analysis, and execution in a single cycle.
They present the operational sequence as Data Collection – AI Analysis – Insights Generation – Trade Execution – Feedback Loop; in other words, the analytical pipeline and the execution loop are closed within a single interface, as sketched below.
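ChadFi has not published implementation details, so the following is only a schematic sketch of what such a closed loop could look like in code; every stage name and function here is a placeholder, not the platform’s actual API.

```python
# Schematic sketch of a closed analysis-to-execution loop.
# Placeholder code only -- not ChadFi's implementation or API.

def collect_data():
    """Stand-in for pulling market, on-chain, and social data."""
    return {"price": [], "onchain": [], "social": []}

def analyze(data):
    """Stand-in for the AI analysis stage."""
    return {"signal": "neutral", "confidence": 0.0}

def generate_insights(analysis):
    """Turn raw analysis into an actionable setup."""
    return {"setup": "no-trade", "analysis": analysis}

def execute(insights):
    """Stand-in for order placement; returns the trade result."""
    return {"filled": False, "pnl": 0.0}

def feedback(result, data):
    """Feed execution results back into the next analysis cycle."""
    data["last_result"] = result
    return data

def run_cycle(data=None):
    data = collect_data() if data is None else data
    result = execute(generate_insights(analyze(data)))
    return feedback(result, data)
```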
What Is the Actual Working Stage of the Platform?
At the moment, the company lists three core components: the Deep Analysis engine, SpoonFed Setups, and All-in-One Execution. Some functions are already available in the terminal, with expanded integrations with centralized venues planned for the next release.
The Deep Analysis engine works across five data domains:
- Technical indicators for the detection of chart patterns
- Project fundamentals
- On-chain flows and address activity
- Social sentiment via X metrics
- Smart money activity
Among the specific AI Analysis functions the platform offers:
- AI-Powered Token Analysis
- Personalized Entry and Exit Recommendations
- Advanced Technical Analysis Tools
- Real-Time Market Monitoring
- Customizable Alerts and Notifications
- Sentiment Analysis
AI was helping major players with market analysis and decision-making long before it became popular and before AI-powered platforms began appearing every week. Learn more in AI in Cryptocurrency Trading: Technical Review & Market Capabilities.
The output layer produces SpoonFed Setups – predefined scenarios for entry and position management that convert observations across multiple data layers into actionable steps. These scenarios are then handed to the execution loop, and the Feedback Loop feeds the results back into the analytics workspace.
The platform also advertises real-time whale monitoring, comprehensive wallet profiling based on historical performance and behavioral characteristics, visualization of liquidity movement between addresses, protocols, and segments, and tracking of narrative rotation. These signals enter the overall pipeline and serve as one of the sources for SpoonFed Setups.
All-in-One Execution is designed for single-interface operation and supports multiple take-profit and stop-loss orders, while built-in contract safety scanners provide preliminary checks for common smart-contract risks and dangerous patterns.
To make this convenient for individual users, the interface supports layout customization and a Customizable Dashboard with watchlists 2.0 and a set of widgets for assembling data layers on one screen for a specific task.
To avoid missing important signals, the Alerts and Notifications system can be configured by conditions and delivery channels. For collaboration and social distribution, the platform supports sharing setups and interacting via X, Discord, and Telegram.
A Good Initiative, but Is It a Worthy Product?
It is too early to say. The platform needs to get through its beta stage before we can see how it actually works as a full-fledged system.
The company also does not disclose which AI models it uses, how they are trained, or how data management and model policies are handled. If the AI models turn out to have problems, one of the platform’s key functions would be undermined.
Even so, a customizable and detailed visibility toolkit showing where the activity of influential addresses is concentrated and how market focus shifts across segments can be valuable on its own, but only if data handling is implemented with genuine quality and reliability.
AI Research
Minus-AI Launches the Coolest Video Ad Agent for the AI Era

Singapore, Sept. 01, 2025 (GLOBE NEWSWIRE) — Minus-AI: The Coolest AI Video Ad Agent
Minus-AI, a Singapore-based AI-native startup, has officially launched its breakthrough platform that transforms brand information into cinematic, multi-shot video ads in just minutes. Positioned at the intersection of AI marketing, content marketing, AI video ads, and AI video generation, Minus-AI is redefining how businesses of every size create and scale their marketing.
The company proudly states its vision in one bold slogan: “Minus-AI is the coolest AI video ad agent.”
(Frame generated by Minus AI)
A Startup with Momentum
Founded in late 2024, Minus-AI immediately attracted over one million USD in angel investment from renowned figures in the global film and entertainment industry. This rapid validation underscores both the technical depth of the team and the enormous demand for next-generation AI marketing solutions.
Minus-AI’s co-founders bring complementary expertise:
Dr. Luo, who previously served as Senior Principal Scientist at Autodesk Research, brings expertise in reinforcement learning and AI-driven creativity. His collaborations with creatives have been featured at various international venues. With Minus AI, he set out on a mission to build tools that harness the power of AI to enhance creative processes.
Ms. Cai, a graduate of New York University (NYU), was the founder of one of the earliest VR education startups in China, which quickly achieved profitability. With a background bridging creative technology and business execution, she now leads product and commercialization at Minus-AI.
Together, they represent the fusion of advanced AI research and creative entrepreneurship.
The Meaning of “Minus-AI”
As Dr. Luo explains, the name Minus-AI carries a philosophy:
“Minus-AI stands for reducing meaningless labor and leaving time for what truly matters. The dash in Minus-AI is also a minus sign — cutting away the unnecessary.”
This philosophy reflects the company’s mission: to simplify the complexity of content marketing, giving businesses a direct path from idea to finished ad, without wasted effort.
(Minus-AI logo design)
Five Core Advantages of Minus-AI
1. Trendy Ideas, Done for You
Most businesses struggle to keep up with fast-moving social media trends. Minus-AI solves this by embedding hotspots and viral formats directly into its system. From concept to creative format, the platform delivers fresh ideas already tailored to your product and the cultural moment.