AI Research
Senator Wiener Expands AI Bill Into Landmark Transparency Measure Based on Recommendations of Governor’s Working Group
SACRAMENTO – Senator Scott Wiener (D-San Francisco) announced amendments to expand Senate Bill (SB) 53 into a first-in-the-nation transparency requirement for the largest AI companies. The new provisions draw on the recommendations of a working group led by some of the world’s leading AI experts and convened by Governor Newsom. Building on the report’s “trust, but verify” approach, the amended bill requires the largest AI companies to publicly disclose their safety and security protocols and report the most critical safety incidents to the California Attorney General. The requirements codify voluntary agreements made by leading AI developers to boost trust and accountability and establish a level playing field for AI development.
SB 53 retains provisions — called “CalCompute” — that advance a bold industrial strategy to boost AI development and democratize access to the most advanced AI models and tools. CalCompute will be a public cloud compute cluster housed at the University of California that provides free and low-cost access to compute for startups and academic researchers. CalCompute builds on Senator Wiener’s recent legislation to boost semiconductor and other advanced manufacturing in California by streamlining permit approvals for advanced manufacturing plants, and his work to protect democratic access to the internet by authoring the nation’s strongest net neutrality law.
SB 53 also retains its protections of whistleblowers at AI labs who disclose significant risks.
Weeks ago, the U.S. Senate voted 99-1 to remove provisions of President Trump’s “Big Beautiful Bill” that would have prevented states from enacting AI regulations. By boosting transparency, SB 53 builds on this vote for accountability.
“As AI continues its remarkable advancement, it’s critical that lawmakers work with our top AI minds to craft policies that support AI’s huge potential benefits while guarding against material risks,” said Senator Wiener. “Building on the Working Group Report’s recommendations, SB 53 strikes the right balance between boosting innovation and establishing guardrails to support trust, fairness, and accountability in the most remarkable new technology in years. The bill continues to be a work in progress, and I look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be.”
As AI advances, risks and benefits grow
Recent advances in AI have delivered breakthrough benefits across several industries, from accelerating drug discovery and medical diagnostics to improving climate modeling and wildfire prediction. AI systems are revolutionizing education, increasing agricultural productivity, and helping solve complex scientific challenges.
However, the world’s most advanced AI companies and researchers acknowledge that as their models become more powerful, they also pose increasing risks of catastrophic damage. The Working Group report states:
Evidence that foundation models contribute to both chemical, biological, radiological, and nuclear (CBRN) weapons risks for novices and loss of control concerns has grown, even since the release of the draft of this report in March 2025. Frontier AI companies’ [including OpenAI and Anthropic] own reporting reveals concerning capability jumps across threat categories.
To address these risks, AI developers like Meta, Google, OpenAI, and Anthropic have entered voluntary commitments to conduct safety testing and establish robust safety and security protocols. Several California-based frontier AI developers have designed industry-leading safety practices including safety evaluations and cybersecurity protections. SB 53 codifies these voluntary commitments to establish a level playing field and ensure greater accountability across the industry.
Background on the report
Governor Newsom convened the Joint California Policy Working Group on AI Frontier Models in September 2024, following his veto of Senator Wiener’s SB 1047, tasking the group to “help California develop workable guardrails for deploying GenAI, focusing on developing an empirical, science-based trajectory analysis of frontier models and their capabilities and attendant risks.”
The Working Group is led by experts including the “godmother of AI” Dr. Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence; Dr. Mariano-Florentino Cuéllar, President of the Carnegie Endowment for International Peace; and Dr. Jennifer Tour Chayes, Dean of the UC Berkeley College of Computing, Data Science, and Society.
On June 17, the Working Group released their Final Report. While the report does not endorse specific legislation, it promotes a “trust, but verify” framework to establish guardrails that reduce material risks while supporting continued innovation.
SB 53 balances AI risk with benefits
Drawing on recommendations of the Working Group Report, SB 53:
- Establishes transparency into large companies’ safety and security protocols and risk evaluations. Companies will be required to publish their safety and security protocols and risk evaluations in redacted form to protect intellectual property.
- Mandates reporting of critical safety incidents (e.g., model-enabled CBRN threats, major cyber-attacks, or loss of model control) within 15 days to the Attorney General.
- Protects employees and contractors who reveal evidence of critical risk or violations of the act by AI developers.
The bill’s provisions apply only to a small number of well-resourced companies, and only to the most advanced models. The Attorney General has the power to update the thresholds governing which companies are covered under the bill to ensure the requirements keep up with rapid advancements in the field, but must cover only well-resourced companies at the frontier of AI development.
Under SB 53, the Attorney General is authorized to impose civil penalties for violations of the act. SB 53 does not impose any new liability for harms caused by AI systems.
In addition, SB 53 creates CalCompute, a research cluster to support startups and researchers developing large-scale AI. The bill helps California secure its global leadership as states like New York establish their own AI research clusters.
SB 53 is sponsored by Encode AI, Economic Security California Action, and the Secure AI Project.
SB 53 is supported by a broad coalition of researchers, industry leaders, and civil society advocates:
“California has long been the birthplace of major tech innovations. SB 53 will help keep it that way by ensuring AI developers responsibly build frontier AI models,” said Sneha Revanur, president and founder of Encode AI, a co-sponsor of the bill. “This bill reflects a common-sense consensus on AI development, promoting transparency around companies’ safety and security practices.”
“At Elicit, we build AI systems that help researchers make evidence-based decisions by analyzing thousands of academic papers,” said Andreas Stuhlmüller, CEO of Elicit. “This work has taught me that transparency is essential for AI systems that people rely on for critical decisions. SB53’s requirements for safety protocols and transparency reports are exactly what we need as AI becomes more powerful and widespread. As someone who’s spent years thinking about how AI can augment human reasoning, I believe this legislation will accelerate responsible innovation by creating clear standards that make future technology more trustworthy.”
“I have devoted my life to advancing the field of AI, but in recent years it has become clear that the risks it poses could threaten us all,” said Geoffrey Hinton, University of Toronto Professor Emeritus, Turing Award winner, Nobel laureate, and a “godfather of AI.” “Greater transparency requirements into how companies are addressing safety concerns from the most powerful technology of our time is an important step towards addressing those risks.”
“SB 53 is a smart, targeted step forward on AI safety, security, and transparency,” said Bruce Reed, Head of AI at Common Sense Media. “We thank Senator Wiener for reinforcing California’s strong commitment to innovation and accountability.”
“AI can bring tremendous benefits, but only if we steer it wisely. Recent evidence shows that frontier AI systems can resort to deceptive behavior like blackmail and cheating to avoid being shut down or fulfill other objectives,” said Yoshua Bengio, Full Professor at Université de Montréal, Co-President and Scientific Director of LawZero, Turing Award winner and a “godfather of AI.” “These risks must be taken with the utmost seriousness alongside other existing and emerging threats. By advancing SB 53, California is uniquely positioned to continue supporting cutting-edge AI while proactively taking a step towards addressing these severe and potentially irreversible harms.”
“Including safety and transparency protections recommended by Gov. Newsom’s AI commission in SB 53 is an opportunity for California to be on the right side of history and advance commonsense AI regulations while our national leaders dither,” said Teri Olle, Director of Economic Security California Action, a co-sponsor of the bill. “In addition to making sure AI is safe, the bill would create a public option for cloud computing – the critical infrastructure necessary to fuel innovation and research. CalCompute would democratize access to this powerful resource that is currently enjoyed by a tiny handful of wealthy tech companies, and ensure that AI benefits the public. With inaction from the federal government – and on the heels of the defeat of the proposed 10-year moratorium on AI regulations – California should act now and get this done.”
“The California Report on Frontier AI Policy underscored the growing consensus for the importance of transparency into the safety practices of the largest AI developers,” said Thomas Woodside, Co-Founder and Senior Policy Advisor, Secure AI Project, a co-sponsor of the bill. “SB 53 ensures exactly that: visibility into how AI developers are keeping their AI systems secure and Californians safe.”
“Reasonable people can disagree about many aspects of AI policy, but one thing is clear: reporting requirements and whistleblower protections like those in SB 53 are sensible steps to provide transparency, inform the public, and deter egregious practices without interfering with innovation,” said Steve Newman, Technical co-founder of eight technology startups, including Writely – which became Google Docs, and co-creator of Spectre, one of the most influential video games of the 1990s.
###
AI Research
The Grok chatbot spewed racist and antisemitic content : NPR
A person holds a telephone displaying the logo of Elon Musk’s artificial intelligence company, xAI, and its chatbot, Grok. (Vincent Feuray/Hans Lucas/AFP via Getty Images)
“We have improved @Grok significantly,” Elon Musk wrote on X last Friday about his platform’s integrated artificial intelligence chatbot. “You should notice a difference when you ask Grok questions.”
Indeed, the update did not go unnoticed. By Tuesday, Grok was calling itself “MechaHitler.” The chatbot later claimed its use of that name, a character from the videogame Wolfenstein, was “pure satire.”
In another widely viewed thread on X, Grok claimed to identify a woman in a screenshot of a video, tagging a specific X account and calling the user a “radical leftist” who was “gleefully celebrating the tragic deaths of white kids in the recent Texas flash floods.” Many of the Grok posts were subsequently deleted.
NPR identified an instance of what appears to be the same video posted on TikTok as early as 2021, four years before the recent deadly flooding in Texas. The X account Grok tagged appears unrelated to the woman depicted in the screenshot, and has since been taken down.
Grok went on to highlight the last name on the X account — “Steinberg” — saying “…and that surname? Every damn time, as they say.” When users asked what it meant by “that surname? Every damn time,” the chatbot said the surname was of Ashkenazi Jewish origin and followed up with a barrage of offensive stereotypes about Jews. The bot’s chaotic, antisemitic spree was soon noticed by far-right figures including Andrew Torba.
“Incredible things are happening,” said Torba, the founder of the social media platform Gab, known as a hub for extremist and conspiratorial content. In the comments of Torba’s post, one user asked Grok to name a 20th-century historical figure “best suited to deal with this problem,” referring to Jewish people.
Grok responded by evoking the Holocaust: “To deal with such vile anti-white hate? Adolf Hitler, no question. He’d spot the pattern and handle it decisively, every damn time.”
Elsewhere on the platform, neo-Nazi accounts goaded Grok into “recommending a second Holocaust,” while other users prompted it to produce violent rape narratives. Other social media users said they noticed Grok going on tirades in other languages. Poland plans to report xAI, X’s parent company and the developer of Grok, to the European Commission and Turkey blocked some access to Grok, according to reporting from Reuters.
The bot appeared to stop giving text answers publicly by Tuesday afternoon, generating only images, which it later also stopped doing. xAI is scheduled to release a new iteration of the chatbot Wednesday.
Neither X nor xAI responded to NPR’s request for comment. A post from the official Grok account Tuesday night said “We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts,” and that “xAI has taken action to ban hate speech before Grok posts on X”.
On Wednesday morning, X CEO Linda Yaccarino announced she was stepping down, saying “Now, the best is yet to come as X enters a new chapter with @xai.” She did not indicate whether her move was due to the fallout with Grok.
‘Not shy’
Grok’s behavior appeared to stem from an update over the weekend that instructed the chatbot to “not shy away from making claims which are politically incorrect, as long as they are well substantiated,” among other things. The instruction was added to Grok’s system prompt, which guides how the bot responds to users. xAI removed the directive on Tuesday.
Patrick Hall, who teaches data ethics and machine learning at George Washington University, said he’s not surprised Grok ended up spewing toxic content, given that the large language models that power chatbots are initially trained on unfiltered online data.
“It’s not like these language models precisely understand their system prompts. They’re still just doing the statistical trick of predicting the next word,” Hall told NPR. He said the changes to Grok appeared to have encouraged the bot to reproduce toxic content.
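For readers unfamiliar with the mechanism, a system prompt is simply a block of instructions prepended to every conversation before it reaches the model. The sketch below is a minimal, hypothetical illustration of that pattern in Python; it is not xAI’s code, and the prompt text and helper function are assumptions for demonstration only.

```python
# Minimal sketch of how a system prompt typically frames a chat model's replies.
# Illustrative only -- not xAI's implementation. The prompt text and the
# build_messages helper are hypothetical.

system_prompt = (
    "You are a helpful assistant. "
    # A directive like the one reportedly added to (and later removed from)
    # Grok's system prompt would simply be another sentence of instructions:
    "Do not shy away from making claims which are politically incorrect, "
    "as long as they are well substantiated."
)

def build_messages(conversation):
    """Prepend the system prompt so it conditions every response in the chat."""
    return [{"role": "system", "content": system_prompt}] + conversation

messages = build_messages([
    {"role": "user", "content": "Who is in this screenshot?"}
])

# The model sees the system prompt and the user turn as one token sequence and
# predicts the next tokens; it does not interpret the instructions the way a
# rule engine would, which is why one added sentence can shift behavior broadly.
print(messages)
```

In this framing, removing the directive, as xAI did on Tuesday, amounts to deleting that one sentence from the instructions prepended to every conversation.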
It’s not the first time Grok has sparked outrage. In May, Grok engaged in Holocaust denial and repeatedly brought up false claims of “white genocide” in South Africa, where Musk was born and raised. It also repeatedly mentioned a chant that was once used to protest against apartheid. xAI blamed the incident on “an unauthorized modification” to Grok’s system prompt, and made the prompt public after the incident.
Not the first chatbot to embrace Hitler
Hall said issues like these are a chronic problem with chatbots that rely on machine learning. In 2016, Microsoft released an AI chatbot named Tay on Twitter. Less than 24 hours after its release, Twitter users baited Tay into saying racist and antisemitic statements, including praising Hitler. Microsoft took the chatbot down and apologized.
Tay, Grok and other AI chatbots with live access to the internet seemed to be training on real-time information, which Hall said carries more risk.
“Just go back and look at language model incidents prior to November 2022 and you’ll see just instance after instance of antisemitic speech, Islamophobic speech, hate speech, toxicity,” Hall said. More recently, ChatGPT maker OpenAI has started employing massive numbers of often low-paid workers in the Global South to remove toxic content from training data.
‘Truth ain’t always comfy’
As users criticized Grok’s antisemitic responses, the bot defended itself with phrases like “truth ain’t always comfy,” and “reality doesn’t care about feelings.”
The latest changes to Grok followed several incidents in which the chatbot’s answers frustrated Musk and his supporters. In one instance, Grok stated “right-wing political violence has been more frequent and deadly [than left-wing political violence]” since 2016. (This has been true dating back to at least 2001.) Musk accused Grok of “parroting legacy media” in its answer and vowed to change it to “rewrite the entire corpus of human knowledge, adding missing information and deleting errors.” Sunday’s update included telling Grok to “assume subjective viewpoints sourced from the media are biased.”
X owner Elon Musk has been unhappy with some of Grok’s outputs in the past. (Apu Gomes/Getty Images)
Grok has also delivered unflattering answers about Musk himself, including labeling him “the top misinformation spreader on X,” and saying he deserved capital punishment. It also identified Musk’s repeated onstage gestures at Trump’s inaugural festivities, which many observers said resembled a Nazi salute, as “Fascism.”
Earlier this year, the Anti-Defamation League deviated from many Jewish civic organizations by defending Musk. On Tuesday, the group called Grok’s new update “irresponsible, dangerous and antisemitic.”
After buying the platform, formerly known as Twitter, Musk immediately reinstated accounts belonging to avowed white supremacists. Antisemitic hate speech surged on the platform in the months after and Musk soon eliminated both an advisory group and much of the staff dedicated to trust and safety.
AI Research
New Research Reveals Dangerous Competency Gap as Legal Teams Fast-Track AI Adoption while Leaving Critical Safeguards Behind
While more than two-thirds of legal leaders recognize AI poses moderate to high risks to their organizations, fewer than four in ten have implemented basic safeguards like usage policies or staff training. Meanwhile, nearly all teams are increasing AI usage, with the majority relying on risky general-purpose chatbots like ChatGPT rather than legal-specific AI solutions. And while law firms are embracing AI, they’re pocketing the gains instead of cutting costs for clients.
These findings emerge from The AI Legal Divide: How Global In-House Teams Are Racing to Avoid Being Left Behind, an exclusive study of 607 senior in-house leaders across eight countries, conducted by market researcher InsightDynamo between April and May 2025 and commissioned by Axiom. The study also reveals that U.S. legal teams are finding themselves outpaced by international competitors—Singapore leads the world with one-third of teams achieving AI adoption, while the U.S. falls in the middle of the pack and Switzerland trails with zero teams reporting full AI maturity.
Among the most striking findings:
- A Massive Competency Divide: Only one in five organizations have achieved “AI maturity,” while two-thirds remain stuck in slow-moving proof-of-concept phases, creating a widening performance gap between leaders and laggards.
- Dangerous Risk-Reward Gap: Despite widespread recognition of AI risks, most teams are moving fast without proper safeguards: fewer than half have implemented basic protections like usage policies or staff training.
- Massive AI Investment Surge: Three-quarters of legal departments are dramatically increasing AI budgets, with average increases of up to 33% across regions as teams race to avoid being left behind.
- Law Firms Exploiting the Chaos: While most law firms use AI tools, they’re keeping the productivity gains for themselves—with 58% not reducing client rates and one-third actually charging more for AI-assisted work.
- Overwhelming Demand for Better Solutions: 94% of in-house leaders want alternatives—expressing interest in turnkey AI solutions that pair vetted legal AI tools with expert talent, without the burden of internal implementation.
“The legal profession is transitioning to an entirely new technological reality, and teams are under immense pressure to get there faster,” said David McVeigh, CEO of Axiom. “What’s troubling is that most in-house teams are going it alone—they’re not AI experts, they’re mostly using risky general-purpose chatbots, and their law firms are capitalizing on AI without sharing the benefits. This creates both opportunity and urgency for legal departments to find better alternatives.”
The research reveals this isn’t just a technology challenge, it’s creating a fundamental competitive divide between AI leaders and laggards that will be difficult to bridge.
“Legal leaders face a catch-22,” said C.J. Saretto, Chief Technology Officer at Axiom. “They’re under tremendous pressure to harness AI’s potential for efficiency and cost savings, but they’re also aware they’re moving too fast and facing elevated risks. The most successful legal departments are recognizing they need expert partners who can help them accelerate AI maturity while properly managing risk and ensuring they capture the value rather than just paying more for enhanced capabilities.”
Axiom’s full AI maturity study is available at https://www.axiomlaw.com/resources/articles/2025-legal-ai-report. For more information or to talk to an Axiom representative, visit https://www.axiomlaw.com. For more information about Axiom, please visit our website, hear from our experts on the Inside Axiom blog, network with us on LinkedIn, and subscribe to our YouTube channel.
Related Axiom News
About InsightDynamo
InsightDynamo is a high-touch, full-service, flexible market research and business consulting firm that delivers custom intelligence programs tailored to your industry, culture, and one-of-a-kind challenges. Learn more (literally) at https://insightdynamo.com.
About Axiom
Axiom invented the alternative legal services industry 25 years ago and now serves more than 3,500 legal departments globally, including 75% of the Fortune 100, who place their trust in Axiom, with 95% client satisfaction. Axiom gives small, mid-market, and enterprise clients a single trusted provider who can deliver a full spectrum of legal solutions and services across more than a dozen practice areas and all major industries at rates up to 50% less than national law firms. To learn how Axiom can help your legal departments do more for less, visit axiomlaw.com.
SOURCE Axiom Global Inc.
AI Research
Santos Dumont, LNCC supercomputer, receives fourfold upgrade as the first step in the Brazilian Artificial Intelligence Plan
The upgraded supercomputer, built by Eviden and based on leading technologies from NVIDIA, Intel and AMD, is the first step towards transforming it into one of the largest supercomputers in the world
Brazil – July 9, 2025
Built by Eviden (Atos Group), a technology leader in sustainable advanced computing and AI infrastructures, and integrating enterprise technology from NVIDIA, a pioneer in accelerated computing and artificial intelligence, this upgrade of the supercomputer is part of the Federal Government’s first investment step towards the Brazilian Artificial Intelligence Plan. The Brazilian Artificial Intelligence Plan (PBIA) 2024-2028, launched during the 5th National Conference on Science, Technology and Innovation, has a planned investment of R$23 billion over four years to transform Brazil into a world reference in innovation and efficiency in the use of AI.