AI Research

Nuclear energy plan unveiled by UK and US, promising thousands of jobs

Charlotte Edwards, Business reporter, BBC News

Aerial view of Drax power station, with steam rising from its cooling towers (Getty Images)

The UK and US are set to sign a landmark agreement aimed at accelerating the development of nuclear power.

The move is expected to generate thousands of jobs and strengthen Britain’s energy security.

The agreement is due to be signed during US President Donald Trump’s state visit this week, with both sides hoping it will unlock billions in private investment.

Prime Minister Sir Keir Starmer said the two nations were “building a golden age of nuclear” that would put them at the “forefront of global innovation”.

The government has said that generating more power from nuclear can cut household energy bills, create jobs, boost energy security, and tackle climate change.

The new agreement, known as the Atlantic Partnership for Advanced Nuclear Energy, aims to make it quicker for companies to build new nuclear power stations in both the UK and the US.

It will streamline regulatory approvals, cutting the average licensing period for nuclear projects from up to four years to just two.

‘Nuclear renaissance’

The deal is also aimed at increasing commercial partnerships between British and American companies, with a number of deals set to be announced.

Key among the plans is a proposal from US nuclear group X-Energy and UK energy company Centrica to build up to 12 advanced modular nuclear reactors in Hartlepool, with the potential to power 1.5 million homes and create up to 2,500 jobs.

The broader programme could be worth up to £40bn, with £12bn focused in the north east of England.

Other plans include Last Energy and DP World working together on a micro modular reactor at London Gateway port, backed by £80m in private investment.

Elsewhere, Holtec, EDF and Tritax are also planning to repurpose the former Cottam coal-fired plant in Nottinghamshire into a nuclear-powered data centre hub.

This project is estimated to be worth £11bn and could create thousands of high-skilled construction jobs, as well as permanent jobs in long-term operations.

Beyond power generation, the new partnership includes collaboration on fusion energy research, and an end to UK and US reliance on Russian nuclear material by 2028.

Commenting on the agreement, Energy Secretary Ed Miliband said: “Nuclear will power our homes with clean, homegrown energy and the private sector is building it in Britain, delivering growth and well-paid, skilled jobs for working people.”

And US Energy Secretary Chris Wright described the move as a “nuclear renaissance”, saying it would enhance energy security and meet growing global power demands, particularly from AI and data infrastructure.

Sir Keir has previously said he wants the UK to return to being “one of the world leaders on nuclear”.

In the 1990s, nuclear power generated about 25% of the UK’s electricity but that figure has fallen to around 15%, with no new power stations built since then and many of the country’s ageing reactors due to be decommissioned over the next decade.

In November 2024, the UK and 30 other countries signed a global pledge to triple their nuclear capacity by 2050.

And earlier this year, the government announced a deal with private investors to build the Sizewell C nuclear power station in Suffolk.

Its nuclear programme also includes the UK’s first small modular reactors (SMRs), which will be built by UK firm Rolls-Royce.




AI Research

Stanford Develops Real-World Benchmarks for Healthcare AI Agents

Beyond the hype and hope surrounding the use of artificial intelligence in medicine lies the real-world need to ensure that, at the very least, AI in a healthcare setting can carry out the tasks a doctor would perform in electronic health records.

Creating benchmark standards to measure that is what drives the work of a team of Stanford researchers. While the researchers note the technology’s enormous potential to transform medicine, the tech ethos of moving fast and breaking things doesn’t work in healthcare. Ensuring that these tools can reliably perform such tasks is a prerequisite to using them to augment the care clinicians provide every day.

“Working on this project convinced me that AI won’t replace doctors anytime soon,” said Kameron Black, co-author on the new benchmark paper and a Clinical Informatics Fellow at Stanford Health Care. “It’s more likely to augment our clinical workforce.”

MedAgentBench: Testing AI Agents in Real-World Clinical Systems

Black is one of a multidisciplinary team of physicians, computer scientists, and researchers from across Stanford University who worked on the new study, MedAgentBench: A Virtual EHR Environment to Benchmark Medical LLM Agents, published in the New England Journal of Medicine AI.

Although large language models (LLMs) have performed well on the United States Medical Licensing Examination (USMLE) and at answering medical-related questions in studies, there is currently no benchmark testing how well LLMs can function as agents by performing tasks that a doctor would normally do, such as ordering medications, inside a real-world clinical system where data input can be messy. 

Unlike chatbots or LLMs, AI agents can work autonomously, performing complex, multistep tasks with minimal supervision. AI agents integrate multimodal data inputs, process information, and then utilize external tools to accomplish tasks, Black explained. 
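To make that distinction concrete, the pattern Black describes can be reduced to a simple loop: the model reads a task, chooses a tool, the harness executes it, and the observation is fed back until the model produces an answer. Below is a minimal Python sketch of that loop; the tool names and the `call_llm` helper are illustrative assumptions, not the study’s actual harness.

```python
# Minimal sketch of an LLM agent loop (illustrative only; not the
# MedAgentBench harness). The model emits JSON actions; the harness
# executes the chosen tool and feeds the observation back.
import json

def get_patient_record(patient_id: str) -> dict:
    """Hypothetical tool: fetch a patient's chart from a virtual EHR."""
    return {"id": patient_id, "labs": [], "medications": []}

TOOLS = {"get_patient_record": get_patient_record}

def run_agent(task: str, call_llm, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = json.loads(call_llm(history))  # model returns a JSON action
        if action["type"] == "answer":          # model decided it is done
            return action["content"]
        result = TOOLS[action["tool"]](**action["args"])  # run chosen tool
        history.append({"role": "tool", "content": json.dumps(result)})
    return "stopped: step budget exhausted"
```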

Overall Success Rate (SR) Comparison of State-of-the-Art LLMs on MedAgentBench

Model                       Overall SR
Claude 3.5 Sonnet v2        69.67%
GPT-4o                      64.00%
DeepSeek-V3 (685B, open)    62.67%
Gemini-1.5 Pro              62.00%
GPT-4o-mini                 56.33%
o3-mini                     51.67%
Qwen2.5 (72B, open)         51.33%
Llama 3.3 (70B, open)       46.33%
Gemini 2.0 Flash            38.33%
Gemma2 (27B, open)          19.33%
Gemini 2.0 Pro              18.00%
Mistral v0.3 (7B, open)      4.00%

While previous tests only assessed AI’s medical knowledge through curated clinical vignettes, this research evaluates how well AI agents can perform actual clinical tasks such as retrieving patient data, ordering tests, and prescribing medications. 

“Chatbots say things. AI agents can do things,” said Jonathan Chen, associate professor of medicine and biomedical data science and the paper’s senior author. “This means they could theoretically directly retrieve patient information from the electronic medical record, reason about that information, and take action by directly entering in orders for tests and medications. This is a much higher bar for autonomy in the high-stakes world of medical care. We need a benchmark to establish the current state of AI capability on reproducible tasks that we can optimize toward.”

The study tested this by evaluating whether AI agents could utilize FHIR (Fast Healthcare Interoperability Resources) API endpoints to navigate electronic health records.
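FHIR exposes EHR data through a standard REST API, which is what makes it practical to hand an agent a set of callable tools. As a rough illustration (not the study’s virtual EHR, whose endpoints aren’t detailed here), the sketch below wraps two standard FHIR R4 calls, a Patient read and an Observation search, against the public HAPI FHIR test server:

```python
# Sketch of the kind of FHIR R4 calls an agent tool might wrap.
# The base URL is a public test server, assumed for illustration.
import requests

BASE = "https://hapi.fhir.org/baseR4"

def get_patient(patient_id: str) -> dict:
    """Read one Patient resource: GET [base]/Patient/[id]."""
    r = requests.get(f"{BASE}/Patient/{patient_id}", timeout=10)
    r.raise_for_status()
    return r.json()

def recent_observations(patient_id: str, count: int = 5) -> list:
    """Search Observations for a patient, newest first."""
    params = {"patient": patient_id, "_sort": "-date", "_count": count}
    r = requests.get(f"{BASE}/Observation", params=params, timeout=10)
    r.raise_for_status()
    bundle = r.json()  # FHIR search results come back as a Bundle
    return [entry["resource"] for entry in bundle.get("entry", [])]
```

Ordering a test or medication works the same way in principle, as a POST of a ServiceRequest or MedicationRequest resource, which is where the benchmark’s higher-stakes “action” tasks come in.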

The team created a virtual electronic health record environment containing 100 realistic patient profiles (785,000 records in all, including labs, vitals, medications, diagnoses, and procedures) to test about a dozen large language models on 300 clinical tasks developed by physicians. In initial testing, the best-performing model, Claude 3.5 Sonnet v2, achieved a success rate of just under 70%.

“We hope this benchmark can help model developers track progress and further advance agent capabilities,” said Yixing Jiang, a Stanford PhD student and co-author of the paper.

Many of the models struggled with scenarios that required nuanced reasoning, involved complex workflows, or necessitated interoperability between different healthcare systems, all issues a clinician might face regularly. 

“Before these agents are used, we need to know how often and what type of errors are made so we can account for these things and help prevent them in real-world deployments,” Black said.

What does this mean for clinical care? Co-author James Zou and Dr. Eric Topol claim that AI is shifting from a tool to a teammate in care delivery. With MedAgentBench, the Stanford team has shown this is a much nearer-term reality by demonstrating that several frontier LLMs can carry out many of the day-to-day clinical tasks a physician would perform.

The team has already noticed performance improvements in the newest versions of these models. With this in mind, Black believes AI agents might be ready to handle basic “housekeeping” tasks in a clinical setting sooner than previously expected.

“In our follow-up studies, we’ve shown a surprising amount of improvement in the success rate of task execution by newer LLMs, especially when accounting for specific error patterns we observed in the initial study,” Black said. “With deliberate design, safety, structure, and consent, it will be feasible to start moving these tools from research prototypes into real-world pilots.”

The Road Ahead

Black says benchmarks like these are necessary as more hospitals and healthcare systems are incorporating AI into tasks including note-writing and chart summarization.

Accurate and trustworthy AI could also help alleviate a looming crisis, he adds. Pressed by patient needs, compliance demands, and staff burnout, healthcare providers face a worsening global staffing shortage, estimated to exceed 10 million workers by 2030.

Instead of replacing doctors and nurses, Black hopes that AI can be a powerful tool for clinicians, lessening the burden of some of their workload and bringing them back to the patient bedside. 

“I’m passionate about finding solutions to clinician burnout,” Black said. “I hope that by working on agentic AI applications in healthcare that augment our workforce, we can help offload burden from clinicians and divert this impending crisis.”

Paper authors: Yixing Jiang, Kameron C. Black, Gloria Geng, Danny Park, James Zou, Andrew Y. Ng, and Jonathan H. Chen

Read the piece in the New England Journal of Medicine AI.




AI Research

Maine police can’t investigate AI-generated child sexual abuse images

This story appears as part of a collaboration between The Maine Monitor and Maine Focus, the investigative team of the Bangor Daily News, a partnership to strengthen investigative journalism in Maine. You can show your support for this effort with a donation to The Monitor. Read more about the partnership.

A Maine man went to watch a children’s soccer game. He snapped photos of kids playing. Then he went home and used artificial intelligence to take the otherwise innocuous pictures and turn them into sexually explicit images.

Police know who he is. But there was nothing they could do, because the images are legal to possess under state law, according to Maine State Police Lt. Jason Richards, who is in charge of the Computer Crimes Unit.

While child sexual abuse material has been illegal for decades under both federal and state law, the rapid development of generative AI — which uses models to create new content based on user prompts — means Maine’s definition of those images has lagged behind other states. Lawmakers here attempted to address the proliferating problem this year but took only a partial step.

“I’m very concerned that we have this out there, this new way of exploiting children, and we don’t yet have a protection for that,” Richards said.

Two years ago, it was easy to discern when a piece of material had been produced by AI, he said. It’s now hard to tell without extensive experience. In some instances, it can take a fully clothed picture of a child and make the child appear naked in an image known as a “deepfake.” People also train AI on child sexual abuse materials that are already online.

Nationally, the rise of AI-generated child sexual abuse material is a concern. At the end of last year, the National Center for Missing and Exploited Children saw a 1,325% increase in the number of tips it received related to AI-generated materials. Such material is also turning up more often in investigations into possession of child sexual abuse materials.

On Sept. 5, a former Maine state probation officer pleaded guilty in federal court to accessing with intent to view child sexual abuse materials. When federal investigators searched the man’s Kik account, they found he had sought out the content and had at least one image that was “AI-generated,” according to court documents.

Explicit material generated by AI has rapidly become intertwined with the real thing, even as Richards’ team fields a growing number of reports. In 2020, the team received 700 tips relating to child sexual abuse materials and reports of adults sexually exploiting minors online in Maine.

By the end of 2025, Richards said he expects his team will have received more than 3,000 tips. The team can investigate only about 14% of them in any given year, and it now has to discard any material that is touched by AI.

“It’s not what could happen, it is happening, and this is not material that anyone is OK with in that it should be criminalized,” Shira Burns, the executive director of the Maine Prosecutors’ Association, said.

Across the country, 43 states have laws outlawing sexual deepfakes and 28 have banned the creation of AI-generated child sexual abuse material; 22 states have done both, according to MultiState, a government relations firm that tracks how state legislatures regulate artificial intelligence.

Rep. Amy Kuhn, D-Falmouth, proposed earlier this year that Maine join the states banning AI-generated child sexual abuse material. But lawmakers on the Judiciary Committee had concerns that the proposed legislation could raise constitutional issues.

She agreed to drop that portion of the bill for now. The version of the bill that passed expanded the state’s pre-existing law against “revenge porn” to include dissemination of altered or so-called “morphed images” as a form of harassment. But it did not label morphed images of children as child sexual abuse material.

Rep. Amy Kuhn, D-Falmouth, speaks at the Maine State House on June 27, 2023. Photo by Linda Coan O’Kresik of the Bangor Daily News.

The legislation, which was drafted chiefly by the Maine Prosecutors’ Association and the Maine Coalition Against Sexual Assault, was modeled after already enacted law in other places. Kuhn said she plans to propose the expanded definition of sexually explicit material mostly unchanged from her early version when the Legislature reconvenes in January.

Maine’s lack of a law at least labeling morphed images of children as child sexual abuse material makes the state an outlier, said Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered AI. She studies the abusive uses of AI and the intersection of legislation around AI-generated content and the Constitution.

In her research, Pfefferkorn said she’s found that most legislatures that have considered changing pre-existing laws on child sexual abuse material have at least added that morphed images of children should be considered sexually explicit material.

“It’s a bipartisan area of interest to protect children online, and nobody wants to be the person sticking their hand up and very publicly saying, ‘I oppose this bill that would essentially better protect children online,’” Pfefferkorn said.

There is also pre-existing federal law and case law that Maine can look to in drafting its own legislation, Pfefferkorn said; morphed images of children are already banned federally. While federal agencies have a role in investigating these cases, they typically handle only the most serious ones, so policing sexually explicit materials mostly falls to the states.

Come 2026, both Burns and Kuhn said they are confident that the Legislature will fix the loophole because there are plenty of model policies to follow across the country.

“We’re on the tail end of addressing this issue, but I am very confident that this is something that the judiciary will look at, and we will be able to get a version through, because it’s needed,” Burns said.




AI Research

Empowering clinicians with intelligence at the point of conversation
