When it comes to AI, as California goes, so goes the nation. The biggest state in the US by population is also the central hub of AI innovation for the entire globe, home to 32 of the world’s top 50 AI companies. That size and influence have given the Golden State the weight to become a regulatory trailblazer, setting the tone for the rest of the country on environmental, labor, and consumer protection regulations — and more recently, AI as well. Now, following the dramatic defeat of a proposed federal moratorium on states regulating AI in July, California policymakers see a limited window of opportunity to set the stage for the rest of the country’s AI laws.
Tools & Platforms
AI to cut paperwork to free up doctors’ time for patients

- Patients and frontline staff could see huge benefits from new AI that helps people get out of hospital more quickly and slashes bureaucracy.
- The tool is one of the Prime Minister’s AI Exemplars, a set of real-world projects using AI to make people’s lives easier and modernise services across health, justice, tax and planning.
- The group of leading projects will receive support to expand the use of their technology more quickly, helping to drive efficiencies and boost growth through the Plan for Change.
Patients could get home to family and off busy wards more quickly, thanks to game-changing AI that could help write the documents that are needed to discharge people from hospital.
The cutting-edge technology will help cut waiting lists, by giving frontline staff the precious gift of time and making care more efficient so that loved ones return to the comfort of their homes quickly. Currently being developed at Chelsea and Westminster NHS Trust, it is one of many projects to receive backing from the Prime Minister as part of the AI Exemplars programme.
The AI-assisted tool could deliver the support that NHS staff have been crying out for – helping doctors to draft discharge documents faster by extracting key details from medical records, such as diagnoses and test results, using a large language model. After a full review from a medical expert responsible for the patient, these documents are then used to discharge a patient from a ward and refer them to other care services that may be needed.
It would radically improve an outdated system that can leave patients on wards unnecessarily for hours, waiting for time-pressed doctors providing urgent care to sit down and fill in forms before they can go home. In some cases, the current system for writing discharge summaries can also inaccurately record basic patient details – like what treatment they’ve had, or changes to medication – and put them in harm’s way.
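To make the general pattern concrete, here is a minimal sketch, not the NHS tool itself: a large language model drafts structured fields from free-text clinical notes, and nothing is released without sign-off from the responsible clinician. The prompt wording, the field names and the `call_llm` stub are illustrative assumptions.

```python
# Illustrative sketch only: not the NHS tool. The pattern: an LLM drafts a
# structured discharge summary from free-text notes, and a clinician must
# approve the draft before it becomes a discharge document.
import json

PROMPT_TEMPLATE = """You are drafting a hospital discharge summary.
From the clinical notes below, extract:
- diagnoses
- key test results
- medication changes
- follow-up actions
Return JSON with exactly those four keys.

Clinical notes:
{notes}
"""

def call_llm(prompt: str) -> str:
    """Placeholder for whichever approved, information-governed model a deployment uses."""
    raise NotImplementedError("Wire up the deployment's large language model here.")

def draft_discharge_summary(notes: str) -> dict:
    """Produce a machine-drafted summary; it is a draft, not a clinical decision."""
    return json.loads(call_llm(PROMPT_TEMPLATE.format(notes=notes)))

def release_summary(draft: dict, clinician_approved: bool) -> dict:
    """Only the responsible clinician's explicit sign-off releases the draft."""
    if not clinician_approved:
        raise PermissionError("Draft requires review by the responsible clinician.")
    return draft
```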
Another project announced today, ‘Justice Transcribe’, will be transformational for probation officers, helping to transcribe and take notes in their meetings with offenders after they leave prison. The technology, which was found to halve the time officers spent organising notes between meetings and in their personal time, is set to be given to all 12,000 probation officers later this year.
Projects being announced today as part of the Prime Minister’s AI Exemplars programme are prime examples of how the government wants to use AI across the public sector to make people’s lives easier and help deliver the Plan for Change. Over the coming months, these exemplars will be developed and trialled, with those showing the most promise potentially rolled out more widely. It follows the Prime Minister’s approach that people should not spend their time on tasks that AI can do quicker and better.
Speaking on a visit to Chelsea and Westminster Hospital, Technology Secretary Peter Kyle said:
This is exactly the kind of change we need: AI being used to give doctors, probation officers and other key workers more time to focus on delivering better outcomes and speeding up vital services.
This government inherited a public sector decimated by years of under-investment and crying out for reform. These AI Exemplars show the best ways in which we’re using tech to build a smarter, more efficient state.
When we get this right across government, we’re talking about unlocking £45 billion in productivity gains – delivering our Plan for Change and investing in growth not bureaucracy.
Health and Social Care Secretary Wes Streeting said:
This potentially transformational discharge tool is a prime example of how we’re shifting from analogue to digital as part of our 10 Year Health Plan.
We’re using cutting-edge technology to build an NHS fit for the future and tackle the hospital backlogs that have left too many people waiting too long.
Doctors will spend less time on paperwork and more time with patients, getting people home to their families faster and freeing up beds for those who need them most.
The NHS Federated Data Platform, a system designed to connect IT across health and care services, is hosting the AI-assisted discharge summaries tool. This means it can hand over information to different care services in an efficient and secure way, while also making it easier to use the technology across the country if tests are successful.
Planning
The AI Exemplars programme will also include the ‘Extract’ tool, which will standardise data faster by converting decades-old, handwritten planning documents and maps into usable data in minutes. It will power new types of planning software to slash the estimated 250,000 hours that planning officers spend each year manually checking these documents.
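As a rough illustration of the task being automated, the sketch below shows a generic scan-to-data pipeline built on the open-source Tesseract OCR engine. It is not the government’s ‘Extract’ tool, and the field names and patterns are assumptions.

```python
# A generic digitisation sketch, not the government's 'Extract' tool.
# It shows the shape of the task: turn a scanned planning document into
# structured fields that planning software can query.
import re
from PIL import Image      # pip install pillow
import pytesseract         # pip install pytesseract (requires the tesseract binary)

# Hypothetical fields and patterns, chosen for illustration only.
FIELD_PATTERNS = {
    "application_ref": re.compile(r"Application\s+(?:No\.?|Reference)[:\s]+(\S+)", re.I),
    "decision_date":   re.compile(r"Date\s+of\s+Decision[:\s]+([0-9/.\-]+)", re.I),
}

def digitise_planning_document(path: str) -> dict:
    """OCR a scanned page and pull out a few structured fields."""
    text = pytesseract.image_to_string(Image.open(path))
    record = {"raw_text": text}
    for field, pattern in FIELD_PATTERNS.items():
        match = pattern.search(text)
        record[field] = match.group(1) if match else None
    return record
```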
Schools
Other technology backed by the programme, the ‘AI Content Store’, will help build more accurate AI tools that support teachers in marking work and planning lessons – ensuring they can spend more time on face-to-face teaching in the classroom and supporting the government’s mission to break down barriers to opportunity.
Justice
A further tool in the programme is ‘Justice Transcribe’. Early feedback from probation officers has shown that the technology allows them to focus on the personal, and often emotive, meetings with offenders, instead of having to interrupt to take notes and clarify details. The technology is based on ‘Minute’, part of the Humphrey package of AI tools built by government to help make the civil service more efficient.
Civil service
The suite of AI tools known as ‘Humphrey’, which helps make the civil service more efficient, is also included in the package. It comes as ‘Consult’, a tool in the package, analyses the thousands of responses any government consultation might receive in hours, before presenting policymakers and experts with interactive dashboards to explore what the public are saying directly.
Consult was the first AI tool to undergo testing against a new ‘social readiness’ standard, in which the technology was shared with members of the public to get their views on the value it adds, the strength of the safeguards in place and the risks associated with using it. Members of the public noted that Consult is well targeted to replace an “old school process” that is “archaic” and ripe for improvement with AI.
The independent report, completed after deliberative focus groups by the Centre for Collective Intelligence at Nesta, a charity focused on innovation for the public good, found that 82% of people felt positive or neutral about the use of the technology across government.
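Consult’s own implementation is not detailed here, but the sketch below shows one common way thousands of free-text consultation responses can be grouped into candidate themes for a dashboard, using off-the-shelf clustering. The approach and the cluster count are assumptions, not a description of Humphrey or Consult.

```python
# Not Consult's implementation: a minimal sketch of one common way to group
# thousands of free-text consultation responses into candidate themes that a
# dashboard could display. The number of themes is an arbitrary assumption.
from sklearn.feature_extraction.text import TfidfVectorizer   # pip install scikit-learn
from sklearn.cluster import KMeans

def group_responses(responses: list[str], n_themes: int = 8) -> dict[int, list[str]]:
    """Cluster responses by vocabulary similarity; each cluster is a candidate theme."""
    vectors = TfidfVectorizer(stop_words="english", max_features=5000).fit_transform(responses)
    labels = KMeans(n_clusters=n_themes, n_init="auto", random_state=0).fit_predict(vectors)
    themes: dict[int, list[str]] = {}
    for label, response in zip(labels, responses):
        themes.setdefault(int(label), []).append(response)
    return themes
```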
Notes to editors
With more to be announced in the coming months, AI Exemplars include:
- Justice Transcribe, Ministry of Justice.
- ‘Humphrey’, Department for Science, Innovation and Technology.
- Education Content store, Department for Education.
- AI Tax Compliance, HMRC.
- ‘Extract’ and the Digital Planning Programme, Department for Science, Innovation and Technology and Ministry of Housing, Communities and Local Government.
- ‘Minute’ for Local Government, Department for Science, Innovation and Technology.
- GOV.UK Chat, Department for Science, Innovation and Technology.
- AI for diagnostics, NHS.
Tools & Platforms
The debate behind SB 53, the California bill trying to prevent AI from building nukes

This week, the California State Assembly is set to vote on SB 53, a bill that would require transparency reports from the developers of highly powerful, “frontier” AI models. The models targeted represent the cutting-edge of AI — extremely adept generative systems that require massive amounts of data and computing power, like OpenAI’s ChatGPT, Google’s Gemini, xAI’s Grok, and Anthropic’s Claude. The bill, which has already passed the state Senate, must pass the California State Assembly before it goes to the governor to either be vetoed or signed into law.
AI can offer tremendous benefits, but as the bill is meant to address, it’s not without risks. And while there is no shortage of existing risks from issues like job displacement and bias, SB 53 focuses on possible “catastrophic risks” from AI. Such risks include AI-enabled biological weapons attacks and rogue systems carrying out cyberattacks or other criminal activity that could conceivably bring down critical infrastructure. These catastrophic risks are widespread disasters that could plausibly threaten human civilization at the local, national, or global level: the kind of AI-driven disasters that have not yet occurred, rather than already-realized, more personal harms like AI deepfakes.
Exactly what constitutes a catastrophic risk is up for debate, but SB 53 defines it as a “foreseeable and material risk” of an event, to which a frontier model meaningfully contributes, that causes more than 50 casualties or over $1 billion in damages. How fault is determined in practice would be up to the courts to interpret. It’s hard to define catastrophic risk in law when the definition is far from settled, but doing so can help us protect against both near- and long-term consequences.
By itself, a single state bill focused on increased transparency will probably not be enough to prevent devastating cyberattacks and AI-enabled chemical, biological, radiological, and nuclear weapons. But the bill represents an effort to regulate this fast-moving technology before it outpaces our efforts at oversight.
SB 53 is the third state-level bill to focus specifically on regulating AI’s catastrophic risks, after California’s SB 1047, which passed the legislature only to be vetoed by the governor, and New York’s Responsible AI Safety and Education (RAISE) Act, which recently passed the New York legislature and is now awaiting Gov. Kathy Hochul’s approval.
SB 53, which was introduced by state Sen. Scott Wiener in February, requires frontier AI companies to develop safety frameworks that specifically detail how they approach catastrophic risk reduction. Before deploying their models, companies would have to publish safety and security reports. The bill also gives them 15 days to report “critical safety incidents” to the California Office of Emergency Services, and establishes whistleblower protections for employees who come forward about unsafe model deployment that contributes to catastrophic risk. SB 53 aims to hold companies publicly accountable for their AI safety commitments, with a financial penalty up to $1 million per violation.
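For a sense of how the bill’s numbers fit together, here is an illustrative paraphrase in code of the thresholds and reporting window described above. It is a sketch of this article’s description, not the bill’s legal text, and the field names are invented.

```python
# Illustrative paraphrase of SB 53's reporting logic as described in this article,
# not the bill's legal text. Thresholds: more than 50 casualties or over $1 billion
# in damages; critical safety incidents reported within 15 days.
from dataclasses import dataclass
from datetime import date, timedelta

CASUALTY_THRESHOLD = 50
DAMAGE_THRESHOLD_USD = 1_000_000_000
REPORTING_WINDOW = timedelta(days=15)

def meets_catastrophic_definition(casualties: int, damages_usd: float) -> bool:
    """Does an event clear the bill's proxy thresholds for catastrophic impact?"""
    return casualties > CASUALTY_THRESHOLD or damages_usd > DAMAGE_THRESHOLD_USD

@dataclass
class CriticalSafetyIncident:
    occurred_on: date
    description: str

    def report_due_by(self) -> date:
        """Deadline for notifying the California Office of Emergency Services."""
        return self.occurred_on + REPORTING_WINDOW
```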
In many ways, SB 53 is the spiritual successor to SB 1047, also introduced by Wiener.
Both cover large models trained with at least 10^26 floating-point operations (FLOP), a compute threshold used in a variety of AI legislation as a marker of significant risk, and both bills strengthen whistleblower protections. Where SB 53 departs from SB 1047 is its focus on transparency and prevention.
While SB 1047 aimed to hold companies liable for catastrophic harms caused by their AI systems, SB 53 formalizes the sharing of safety frameworks, something many frontier AI companies, including Anthropic, already do voluntarily. It focuses squarely on the heavy-hitters, with its rules applying only to companies that generate $500 million or more in gross revenue.
“The science of how to make AI safe is rapidly evolving, and it’s currently difficult for policymakers to write prescriptive technical rules for how companies should manage safety,” said Thomas Woodside, the co-founder of Secure AI Project, an advocacy group that aims to reduce extreme risks from AI and is a sponsor of the bill, over email. “This light touch policy prevents backsliding on commitments and encourages a race to the top rather than a race to the bottom.”
Part of the logic of SB 53 is the ability to adapt the framework as AI progresses. The bill authorizes the California Attorney General to change the definition of a large developer after January 1, 2027, in response to AI advances.
Proponents of the bill are optimistic about its chances of being signed by the governor should it pass the legislature, which it is expected to. On the same day that Gov. Gavin Newsom vetoed SB 1047, he commissioned a working group focusing solely on frontier models. The resulting report by the group provided the foundation for SB 53. “I would guess, with roughly 75 percent confidence, that SB 53 will be signed into law by the end of September,” said Dean Ball — former White House AI policy adviser, vocal SB 1047 critic, and SB 53 supporter — to Transformer.
But several industry organizations have rallied in opposition, arguing that additional compliance regulation would be expensive, given that AI companies should already be incentivized to avoid catastrophic harms. OpenAI has lobbied against it, and the technology trade group Chamber of Progress argues that the bill would require companies to file unnecessary paperwork and stifle innovation.
“Those compliance costs are merely the beginning,” Neil Chilson, head of AI policy at the Abundance Institute, told me over email. “The bill, if passed, would feed California regulators truckloads of company information that they will use to design a compliance industrial complex.”
By contrast, Anthropic enthusiastically endorsed the bill in its current state on Monday. “The question isn’t whether we need AI governance – it’s whether we develop it thoughtfully today or reactively tomorrow,” the company explained in a blog post. “SB 53 offers a solid path toward the former.” (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI, while Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic. Neither organization has editorial input into our content.)
The debate over SB 53 ties into broader disagreements about whether states or the federal government should drive AI safety regulation. But since the vast majority of these companies are based in California, and nearly all do business there, the state’s legislation matters for the entire country.
“A federally led transparency approach is far, far, far preferable to the multi-state alternative,” where a patchwork of state regulations can conflict with each other, said Cato Institute technology policy fellow Matthew Mittelsteadt in an email. But “I love that the bill has a provision that would allow companies to defer to a future alternative federal standard.”
“The natural question is whether a federal approach can even happen,” Mittelsteadt continued. “In my opinion, the jury is out on that, but the possibility is far more likely than some suggest. It’s been less than 3 years since ChatGPT was released. That is hardly a lifetime in public policy.”
But in a time of federal gridlock, frontier AI advancements won’t wait for Washington.
The catastrophic risk divide
The bill’s focus on, and framing of, catastrophic risks is not without controversy.
The idea of catastrophic risk comes from the fields of philosophy and quantitative risk assessment. Catastrophic risks are downstream of existential risks, which threaten humanity’s actual survival or else permanently reduce our potential as a species. The hope is that if these doomsday scenarios are identified and prepared for, they can be prevented or at least mitigated.
But if existential risks are clear — the end of the world, or at least as we know it — what falls under the catastrophic risk umbrella, and the best way to prioritize those risks, depends on who you ask. There are longtermists, people focused primarily on humanity’s far future, who place a premium on things like multiplanetary expansion for human survival. They’re often chiefly concerned by risks from rogue AI or extremely lethal pandemics. Neartermists are more preoccupied with existing risks, like climate change, mosquito vector-borne disease, or algorithmic bias. These camps can blend into one another — neartermists would also like to avoid getting hit by asteroids that could wipe out a city, and longtermists don’t dismiss risks like climate change — and the best way to think of them is like two ends of a spectrum rather than a strict binary.
You can think of the AI ethics and AI safety frameworks as the near- and longtermism of AI risk, respectively. AI ethics is about the moral implications of the ways the technology is deployed in the present, including things like algorithmic bias and human rights. AI safety focuses on catastrophic risks and potential existential threats. But, as Vox’s Julia Longoria reported in the Good Robot series for Unexplainable, there are interpersonal conflicts leading these two factions to work against each other, much of which has to do with emphasis. (AI ethics people argue that catastrophic risk concerns over-hype AI capabilities and ignore AI’s impact on vulnerable people right now, while AI safety people worry that if we focus too much on the present, we won’t have ways to mitigate larger-scale problems down the line.)
But behind the question of near versus long-term risks lies another one: what, exactly, constitutes a catastrophic risk?
SB 53 initially set the standard for catastrophic risk at 100 rather than 50 casualties — similar to New York’s RAISE Act — before halving the threshold in an amendment to the bill. While the average person might consider, say, many people driven to suicide after interacting with AI chatbots to be catastrophic, such a risk is outside of the bill’s scope. (The California State Assembly just passed a separate bill to regulate AI companion chatbots by preventing them from participating in discussions about suicidal ideation or sexually explicit material.)
SB 53 focuses squarely on harms from “expert-level” frontier AI model assistance in developing or deploying chemical, biological, radiological, and nuclear weapons; committing crimes like cyberattacks or fraud; and “loss of control” scenarios where AIs go rogue, behaving deceptively to avoid being shut down and replicating themselves without human oversight. For example, an AI model could be used to guide the creation of a new deadly virus that infects millions and kneecaps the global economy.
“The 50 to 100 deaths or a billion dollars in property damage is just a proxy to capture really widespread and substantial impact,” said Scott Singer, lead author of the California Report for Frontier AI Policy, which helped inform the basis of the bill. “We do look at like AI-enabled or AI potentially [caused] or correlated suicide. I think that’s like a very serious set of issues that demands policymaker attention, but I don’t think it’s the core of what this bill is trying to address.”
Transparency is helpful in preventing such catastrophes because it can help raise the alarm before things get out of hand, allowing AI developers to correct course. And in the event that such efforts fail to prevent a mass casualty incident, enhanced safety transparency can help law enforcement and the courts figure out what went wrong. The challenge there is that it can be difficult to determine how much a model is accountable for a specific outcome, Irene Solaiman, the chief policy officer at Hugging Face, a collaboration platform for AI developers, told me over email.
“These risks are coming and we should be ready for them and have transparency into what the companies are doing,” said Adam Billen, the vice president of public policy at Encode, an organization that advocates for responsible AI leadership and safety. (Encode is another sponsor of SB 53.) “But we don’t know exactly what we’re going to need to do once the risks themselves appear. But right now, when those things aren’t happening at a large scale, it makes sense to be sort of focused on transparency.”
However, a transparency-focused bill like SB 53 is insufficient for addressing already-existing harms. When we already know something is a problem, the focus should be on mitigating it.
“Maybe four years ago, if we had passed some sort of transparency legislation like SB 53 but focused on those harms, we might have had some warning signs and been able to intervene before the widespread harms to kids started happening,” Billen said. “We’re trying to kind of correct that mistake on these problems and get some sort of forward-facing information about what’s happening before things get crazy, basically.”
SB 53 risks being both overly narrow and unclearly scoped. We have not yet faced these catastrophic harms from frontier AI models, and the most devastating risks might take us entirely by surprise. We don’t know what we don’t know.
It’s also certainly possible that models trained with less than 10^26 FLOP, which aren’t covered by SB 53, could cause catastrophic harm under the bill’s definition. The EU AI Act sets its threshold for “systemic risk” at the lower 10^25 FLOP, and there’s disagreement about the utility of computational power as a regulatory standard at all, especially as models become more efficient.
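To see what these thresholds mean in practice, the back-of-the-envelope sketch below compares a hypothetical model’s training compute against both lines. It uses the common 6 × parameters × tokens heuristic for dense transformer training compute; that heuristic and the model size are assumptions, not anything specified by either law.

```python
# Back-of-the-envelope comparison against the compute thresholds discussed above.
# The 6 * parameters * tokens rule of thumb is a common community heuristic for
# dense transformers, not something either law specifies; the model below is invented.
SB53_THRESHOLD = 1e26   # FLOP, the threshold used by SB 53 and SB 1047
EU_THRESHOLD = 1e25     # FLOP, the EU AI Act's "systemic risk" line

def approx_training_flop(parameters: float, training_tokens: float) -> float:
    return 6 * parameters * training_tokens

# A hypothetical 70-billion-parameter model trained on 15 trillion tokens:
flop = approx_training_flop(70e9, 15e12)            # about 6.3e24 FLOP
print(flop < EU_THRESHOLD, flop < SB53_THRESHOLD)   # True True: under both lines
```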
As it stands right now, SB 53 occupies a different niche from bills focused on regulating AI use in mental healthcare or data privacy, reflecting its authors’ desire not to step on the toes of other legislation or bite off more than it can reasonably chew. But Chilson, the Abundance Institute’s head of AI policy, is part of a camp that sees SB 53’s focus on catastrophic harm as a “distraction” from the real near-term benefits and concerns, like AI’s potential to accelerate the pace of scientific research or create nonconsensual deepfake imagery, respectively.
That said, deepfakes could certainly cause catastrophic harm. For instance, imagine a hyper-realistic deepfake impersonating a bank employee to commit fraud at a multibillion-dollar scale, said Nathan Calvin, the vice president of state affairs and general counsel at Encode. “I do think some of the lines between these things in practice can be a bit blurry, and I think in some ways…that is not necessarily a bad thing,” he told me.
It could be that the ideological debate around what qualifies as catastrophic risks, and whether that’s worthy of our legislative attention, is just noise. The bill is intended to regulate AI before the proverbial horse is out of the barn. The average person isn’t going to worry about the likelihood of AI sparking nuclear warfare or biological weapons attacks, but they do think about how algorithmic bias might affect their lives in the present. But in trying to prevent the worst-case scenarios, perhaps we can also avoid the “smaller,” nearer harms. If they’re effective, forward-facing safety provisions designed to prevent mass casualty events will also make AI safer for individuals.
If SB 53 passes the legislature and gets signed by Gov. Newsom into law, it could inspire other state attempts at AI regulation through a similar framework, and eventually encourage federal AI safety legislation to move forward.
How we think about risk matters because it determines where we focus our efforts on prevention. I’m a firm believer in the value of defining your terms, in law and debate. If we’re not on the same page about what we mean when we talk about risk, we can’t have a real conversation.
Tools & Platforms
The EU AI Act is Here (Is Your Data Ready to Lead?)

The accelerated adoption of AI and generative AI tools has reshaped the business landscape. With powerful capabilities now within reach, organizations are rapidly exploring how to apply AI across operations and strategy.
In fact, 93% of UK CEOs have adopted generative AI tools in the last year, and according to the latest State of AI report by McKinsey, 78% of businesses use AI in more than one business function.
With such an expansion, governing bodies are acting promptly to ensure AI is deployed responsibly, safely and ethically. For example, the EU AI Act restricts unethical practices, such as facial image scraping, and mandates AI literacy. This ensures organizations understand how their tools generate insights before acting on them. These policies aim to reduce the risk of AI misuse due to insufficient training or oversight.
In July, the EU released its final General-Purpose AI (GPAI) Code of Practice, outlining voluntary guidelines on transparency, safety and copyright for foundation models. While voluntary, companies that opt out may face closer scrutiny or more stringent enforcement. Alongside this, new phases of the act continue to take effect, with the latest compliance deadline falling in August.
This raises two critical questions for organizations. How can they utilize AI’s transformative power while staying ahead of new regulations? And how will these regulations shape the path forward for enterprise AI?
How New Regulations Are Reshaping AI Adoption
The EU AI Act is driving organizations to address longstanding data management challenges to reduce AI bias and ensure compliance. AI systems under “unacceptable risk” — those that pose a clear threat to individual rights, safety or freedoms — are already restricted under the act.
Meanwhile, broader compliance obligations for general-purpose AI systems are taking effect this year. Stricter obligations for systemic-risk models, including those developed by leading providers, follow in August 2026. With this rollout schedule, organizations must move quickly to build AI readiness, starting with AI-ready data. That means investing in trusted data foundations that ensure traceability, accuracy and compliance at scale.
In industries such as financial services, where AI is used in high-stakes decisions like fraud detection and credit scoring, this is especially urgent. Organizations must show that their models are trained on representative and high-quality data, and that the results are actively monitored to support fair and reliable decisions. The act is accelerating the move toward AI systems that are trustworthy and explainable.
Data Integrity as a Strategic Advantage
Meeting the requirements of the EU AI Act demands more than surface-level compliance. Organizations must break down data silos, especially where critical data is locked in legacy or mainframe systems. Integrating all relevant data across cloud, on-premises and hybrid environments, as well as across various business functions, is essential to improving the reliability of AI outcomes and reducing bias.
Beyond integration, organizations must prioritize data quality, governance and observability to ensure that the data used in AI models is accurate, traceable and continuously monitored. Recent research shows that 62% of companies cite data governance as the biggest challenge to AI success, while 71% plan to increase investment in governance programmes.
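As a minimal sketch of what such automated governance checks might look like in practice, the example below assumes a pandas-based pipeline; the column names, lineage fields and checks are illustrative, not a standard.

```python
# A minimal sketch of automated data-quality and lineage checks that a governance
# programme might run before a dataset feeds an AI model. Column names, lineage
# fields and thresholds are illustrative assumptions, not a standard.
import pandas as pd

REQUIRED_LINEAGE = {"source_system", "ingested_at", "owner"}

def quality_report(df: pd.DataFrame, lineage: dict) -> dict:
    """Return simple completeness, duplication and lineage checks for review."""
    return {
        "row_count": len(df),
        "null_rate_by_column": df.isna().mean().round(3).to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
        "lineage_complete": REQUIRED_LINEAGE.issubset(lineage),
    }

# Toy example: one missing customer_id and one duplicated row would both be flagged.
df = pd.DataFrame({"customer_id": [1, 2, 2, None], "credit_score": [710, 640, 640, 580]})
print(quality_report(df, {"source_system": "core-banking", "ingested_at": "2025-08-01", "owner": "risk-data"}))
```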
The lack of interpretability and transparency in AI models remains a significant concern, raising questions around bias, ethics, accountability and equity. As organizations operationalise AI responsibly, robust data and AI governance will play a pivotal role in bridging the gap between regulatory requirements and responsible innovation.
Additionally, incorporating trustworthy third-party datasets, such as demographics, geospatial insights and environmental risk factors, can help increase the accuracy of AI outcomes and strengthen fairness with additional context. This is increasingly important given the EU’s direction toward stronger copyright protection and mandatory watermarking for AI-generated content.
A More Deliberate Approach to AI
The early excitement around AI experimentation is now giving way to more thoughtful, enterprise-wide planning. Currently, only 12% of organizations report having AI-ready data. Without accurate, consistent and contextualised data in place, AI initiatives are unlikely to deliver measurable business outcomes. Poor data quality and governance limit performance and introduce risk, bias and opacity across business decisions that affect customers, operations, and reputation.
As AI systems grow more complex and agentic, capable of reasoning, taking action, and even adapting in real-time, the demand for trusted context and governance becomes even more critical. These systems cannot function responsibly without a strong data integrity foundation that supports transparency, traceability and trust.
Ultimately, the EU AI Act, alongside upcoming legislation in the UK and other regions, signals a shift from reactive compliance to proactive AI readiness. As AI adoption grows, powering AI initiatives with integrated, high-quality, and contextualised data will be key to long-term success with scalable and responsible AI innovation.