AI Insights
California Senate introduces bill to advance safety of artificial intelligence and public computing power

In what is being called groundbreaking legislation, the California Senate is weighing a bill to advance the safety of artificial intelligence and expand public computing power. SB 53 was introduced by Senator Scott Wiener (D-San Francisco) “to promote the responsible development of large-scale artificial intelligence (AI) systems,” according to a press release.
Artificial intelligence (AI) presents “substantial risks,” and experts say California is charting the path to manage them.
“The greatest innovations happen when our brightest minds have the resources they need and the freedom to speak their minds,” Senator Wiener said. “SB 53 supports the development of large-scale AI systems by providing low-cost compute to researchers and start-ups through CalCompute. At the same time, the bill also provides critical protections to workers who need to sound the alarm if something goes wrong in developing these highly advanced systems. We are still early in the legislative process, and this bill may evolve as the process continues. I’m closely monitoring the work of the Governor’s AI Working Group, as well as developments in the AI field for changes that warrant a legislative response. California’s leadership on AI is more critical than ever as the new federal Administration proceeds with shredding the guardrails meant to keep Americans safe from the known and foreseeable risks that advanced AI systems present.”
While California is home to many AI companies and researchers, SB 53 aims to secure the state’s leadership in this emerging space by providing startups with “low-cost compute and other resources to aid responsible AI development,” according to a press release.
“California has a history of making important public investments where it counts: from stem cell research to our stellar higher education system, we have led the way in using public dollars to foster the American entrepreneurial spirit,” Teri Olle, director of economic security for California Action, said. “We are proud to sponsor SB 53, a bill that would continue in this important tradition by creating the infrastructure for a public option for cloud computing needed in AI development. If we want to see AI used to promote the public good and make life better and easier for people, then we must broaden access to the computing power required to fuel innovation.”
As AI continues to grow, researchers such as “AI godfather” Professor Yoshua Bengio have released the International AI Safety Report, the first such document created by 100 independent AI experts from around the globe. SB 53 also ensures protections for whistleblowers, who have a “key window into the safety practices of large AI labs and the emergent capabilities of large-scale AI models,” according to a press release.
“With CalCompute, we’re democratizing access to the computational resources that power AI innovation,” Sunny Gandhi, vice president of political affairs at Encode, said. “And by protecting whistleblowers, we’re ensuring that security isn’t sacrificed for speed. California can be a leader by making transformative technology both more accessible and more transparent.”
Darwin Awards For AI Celebrate Epic Artificial Intelligence Fails

As the AI Darwin Awards prove, some AI ideas turn out to be far less bright than they seem.
Not every artificial intelligence breakthrough is destined to change the world. Some are destined to make you wonder, “With all this so-called intelligence flooding our lives, how could anyone think that was a smart idea?” That’s the spirit behind the AI Darwin Awards, which recognize the most spectacularly misguided uses of the technology. Submissions are open now.
The growing list of nominees includes legal briefs replete with fictional court cases, fake books attributed to real writers and an Airbnb host who manipulated images with AI to make it appear a guest owed money for damages. An introduction to the list reads:
“Behold, this year’s remarkable collection of visionaries who looked at the cutting edge of artificial intelligence and thought, ‘Hold my venture capital.’ Each nominee has demonstrated an extraordinary commitment to the principle that if something can go catastrophically wrong with AI, it probably will — and they’re here to prove it.”
A software developer named Pete — who asked that his last name not be used to protect his privacy — launched the AI Darwin Awards last month, mostly as a joke, but also as a cheeky reminder that humans ultimately decide how technology gets deployed.
Don’t Blame The Chainsaw
“Artificial intelligence is just a tool — like a chainsaw, nuclear reactor or particularly aggressive blender,” reads the website for the awards. “It’s not the chainsaw’s fault when someone decides to juggle it at a dinner party.
“We celebrate the humans who looked at powerful AI systems and thought, ‘You know what this needs? Less testing, more ambition, and definitely no safety protocols!’ These visionaries remind us that human creativity in finding new ways to endanger ourselves knows no bounds.”
The AI Darwin Awards are not affiliated with the original Darwin Awards, which famously call out people who, through extraordinarily foolish choices, “protect our gene pool by making the ultimate sacrifice of their own lives.” Now that we let machines make dumb decisions for us too, it’s only fair they get their own awards.
Who Will Take The Crown?
Among the contenders for the inaugural AI Darwin Awards winner are the lawyers who defended MyPillow CEO Mike Lindell in a defamation lawsuit. They submitted an AI-generated brief with almost 30 defective citations, misquotes and references to completely fictional court cases. A federal judge fined the attorneys for their misstep, saying they violated a federal law requiring that lawyers certify court filings are grounded in the actual law.
Another nominee: the AI-generated summer reading list published earlier this year by the Chicago Sun-Times and The Philadelphia Inquirer that contained fake books by real authors. “WTAF. I did not write a book called Boiling Point,” one of those authors, Rebecca Makkai, posted to Bluesky. Another writer, Min Jin Lee, also felt the need to issue a clarification.
“I have not written and will not be writing a novel called Nightshade Market,” the Pachinko author wrote on X. “Thank you.”
Then there’s the executive producer at Xbox Games Studios who suggested scores of newly laid-off employees should turn to chatbots for emotional support after losing their jobs, an idea that did not go over well.
“Suggesting that people process job loss trauma through chatbot conversations represents either breathtaking tone-deafness or groundbreaking faith in AI therapy — likely both,” the submission reads.
What Inspired The AI Darwin Awards?
The creator of the awards, who lives in Melbourne, Australia, and has worked in software for three decades, said he frequently uses large language models, including to craft the irreverent text for the AI Darwin Awards website. “It takes a lot of steering from myself to give it the desired tone, but the vast majority of actual content, probably 99%, is all the work of my LLM minions,” he said in an interview.
Pete got the idea for the awards as he and co-workers shared their experiences with AI on Slack. “Occasionally someone would post the latest AI blunder of the day and we’d all have either a good chuckle, or eye-roll or both,” he said.
The awards sit somewhere between reality and satire.
“AI will mean lots of good things for us all and it will mean lots of bad things,” the contest’s creator said. “We just need to work out how to try and increase the good and decrease the bad. In fact, our first task is to identify both the good and the bad. Hopefully the AI Darwin Awards can be a small part of that by highlighting some of the ‘bad.’”
He plans to invite the public to vote on candidates in January, with the winner to be announced in February.
For those who’d rather not win an AI Darwin Award, the site includes a handy guide for avoiding the dubious distinction. It includes these tips: “Test your AI systems in safe environments before deploying them globally,” “consider hiring humans for tasks that require empathy, creativity or basic common sense” and “ask ‘What’s the worst that could happen?’ and then actually think about the answer.”
Redefining speed: The AI revolution in clinical decision-making

Clinicians need one main thing: more time
As the EHR and data collection have become more robust, clinicians are spending more time on paperwork and administration. In 2024 surveys, the American Medical Association found that physicians spent an average of 13 hours per week on indirect patient care (order entry, documentation, lab interpretation) and more than seven hours on administrative tasks (prior authorization, insurance forms, meetings). Combined with direct patient care, this added up to a 57.8-hour workweek.
Ultimately, clinicians need more time with their patients and less time taking notes. They need more time to understand complex cases and less time spent searching for information. Information overload compounds the challenge: medical knowledge is doubling every 73 days, and patients are increasingly relying on multiple medications. It also takes an average of 17 years for a clinical discovery to change evidence-based practice, so clinicians need efficient ways to stay current in their areas of expertise.
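To put that doubling pace in perspective, here is a back-of-envelope sketch; it assumes a constant 73-day doubling time purely for illustration, not as a model of actual publication growth:

```python
# Back-of-envelope: how a 73-day doubling time compounds.
# The constant-exponential-growth assumption is illustrative only.
DOUBLING_DAYS = 73

def growth_factor(days: float) -> float:
    """Factor by which the body of knowledge grows over `days` days."""
    return 2 ** (days / DOUBLING_DAYS)

print(f"Growth over 1 year: {growth_factor(365):.0f}x")        # ~32x
print(f"Growth over 17 years: {growth_factor(17 * 365):.2e}x")  # astronomically large
```

At that rate, the literature grows roughly 32-fold in a single year, which is why no individual clinician can keep up unaided.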
AI can produce time savings that add up
We’re seeing a revolution in how artificial intelligence (AI) can support clinicians. As AI is introduced further into healthcare administrative work and clinical settings, there are opportunities for clinicians to spend their time more productively and meaningfully.
When we look at how AI-enabled features can save time for clinicians, the amazing thing is that it’s not massive blocks of time—like 5 or 10 minutes. It’s 10 seconds on a task, or 30 seconds here, or 45 seconds there. And the clinicians we speak with are so happy about it. AI can help speed up the little things—the couple of clicks saved—and over time, that can make a huge difference. It’s multiple moments of small savings that add up to these meaningful productivity gains.
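To see how those micro-savings compound, consider a minimal sketch; the per-task seconds, visit volume, and clinic days below are hypothetical assumptions chosen only to illustrate the arithmetic:

```python
# Illustrative arithmetic: small per-task time savings aggregated over a year.
# All inputs are hypothetical assumptions, not measured data.
SAVINGS_PER_TASK_SECONDS = [10, 30, 45]  # a few AI-assisted micro-tasks per visit
VISITS_PER_DAY = 20
CLINIC_DAYS_PER_YEAR = 250

seconds_per_visit = sum(SAVINGS_PER_TASK_SECONDS)
hours_per_year = seconds_per_visit * VISITS_PER_DAY * CLINIC_DAYS_PER_YEAR / 3600
print(f"{seconds_per_visit}s saved per visit -> about {hours_per_year:.0f} hours per year")
# 85s x 20 visits x 250 days = 425,000s, or roughly 118 hours a year
```

Even under modest assumptions like these, seconds saved per visit translate into weeks of reclaimed clinician time each year.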
So, as we find ways to further integrate UpToDate into the workflow, this is what we think about: Finding those extra moments that matter. Getting clinical information closer to the provider so they don’t have to open extra applications for decision-making. We’re looking for multiple ways to get evidence and clinical intelligence streamlined throughout the care experience and into the EHR, presenting tremendous opportunities for time savings.
The opportunities are plentiful. How can ambient and note-taking technology link to the relevant evidence-based clinical content for quick reference? How could patient interactions with chatbots ahead of a clinic visit prep the provider with relevant evidence in advance? Identifying innovative partners that can work alongside us in ambient solutions, documentation, chatbots, and more can help bring content and evidence closer to clinicians and save those seconds over time.
Time savings can bring new clinical opportunities
What can clinicians do with that saved time? Some have been concerned that GenAI tools will deteriorate clinical decision-making skills—our recent Future Ready Healthcare report showed that 57% of respondents share these concerns. But I like to think about the opportunities created through those time savings: How can AI help open up space for deeper critical thinking?
With AI saving time and supporting smaller tasks, the first thing it can do is alleviate some of the administrative burden, which is already happening. It can also expand critical thinking opportunities and provide space to consider challenges in healthcare that historically we haven’t had time to solve. It can “re-humanize medical practice” in a way that provides professional fulfillment and allows clinicians to spend more time as caregivers, rather than note-takers. When these efforts are scaled across the workforce, it can result in productivity gains and operational efficiencies across an enterprise.
AI tools need to be grounded in expert-driven evidence
As we rapidly move into the AI era, it’s easy to find tools that seem to give faster answers, especially among generative AI (GenAI) tools. But are they grounded in evidence and industry recommendations?
Keeping expert clinicians in the loop is critical—if you’ve trusted UpToDate for a while, you’ll know this is our position. Our clinical decision support is grounded not just in evidence but in the recommendations of more than 7,600 clinical practitioners and experts who curate content as new evidence emerges and provide graded recommendations to help guide decision-making, even in gray areas. Relying on clinical recommendations curated by human experts keeps the information and care guidance current and relevant. As AI is layered on top of these human-generated recommendations, clinicians can find information more efficiently—saving precious seconds with each patient.
We know this expertise matters. A 2024 Wolters Kluwer Health survey of US physicians found they were positive overall about the prospects of GenAI in clinical settings; however, 91% said that to trust the technology, they would have to know the materials it was trained on were created by doctors and medical experts. An overwhelming 89% also wanted the technology vendor to be transparent about where the information came from, who created it and how it was sourced.
The UpToDate you know and trust is entering a new era, one in line with Bud Rose’s vision of a consultative conversation with clinical experts. And we’re just getting started—join us in helping shape the next wave of healthcare innovation.
Read our vision for the future of healthcare and explore our perspectives on AI in clinical content.
Swift Tests Use of AI to Fight Cross-Border Payment Fraud

Swift conducted tests to demonstrate the potential impact of artificial intelligence in preventing cross-border payments fraud.