AI Research
AI rollout in NHS hospitals stalled by major implementation challenges, study finds
Complex contracting, outdated IT systems, scepticism among staff, and insufficient training have slowed the rollout
The UK’s flagship programme to embed AI into NHS hospitals is facing significant hurdles, with a new study revealing that implementing the technology is far more difficult than policymakers anticipated.
Researchers from University College London (UCL), the Nuffield Trust and the University of Cambridge found that governance delays, complex contracting, outdated IT systems, scepticism among clinical staff, and insufficient training have slowed down the nationwide rollout.
Their findings, published in The Lancet's eClinicalMedicine, provide the first in-depth analysis of real-world AI deployment across NHS hospitals.
The research comes at a crucial moment for the government’s 10-year NHS plan, announced in July, which highlights digital transformation, including AI adoption, as a central pillar for improving services and patient care.
Delays and difficulties
In 2023, NHS England launched a £21 million programme to introduce AI diagnostic tools across 66 hospital trusts in England, grouped into 12 imaging networks.
The tools are designed to prioritise urgent chest scan cases, highlight abnormalities for review, and support specialist decision-making, particularly in detecting lung cancer.
But according to the study, contracting for the tools took between four and 10 months longer than expected.
By June 2025 – 18 months after contracting was supposed to be complete – a third of trusts (23 out of 66) were still not using the technology in clinical practice.
“A key problem was that clinical staff were already very busy,” said lead author Dr Angus Ramsay of UCL’s Department of Behavioural Science and Health.
“Finding time to go through the selection process was a challenge, as was supporting integration of AI with local IT systems and obtaining local governance approvals.”
Staff concerns and IT barriers
Researchers interviewed NHS staff, AI suppliers, and network teams across 10 imaging networks and six hospital trusts, observing training, governance and planning sessions.
They found that enthusiasm for AI was uneven. Some clinicians, especially senior staff, expressed concerns about accountability if AI missed a diagnosis or made a decision without sufficient clinical oversight.
Training offered to staff often did not adequately address these concerns, leading the study team to recommend early and continuous education on AI in future projects.
The NHS’s fragmented IT landscape proved another obstacle. Hospitals operate with diverse, often outdated IT systems, making it difficult to embed new tools at scale.
Procurement was also challenging: in some cases, trusts were overwhelmed by large volumes of highly technical information, raising the risk of missing crucial details.
Despite the setbacks, researchers identified factors that eased implementation.
Strong national programme leadership, resource sharing across imaging networks, and local collaboration between clinicians, IT teams, and AI suppliers all played a role in driving progress. Dedicated project managers, where available, were also critical in keeping hospitals on track.
The study, funded by the National Institute for Health and Care Research (NIHR), is one of the first to analyse large-scale AI implementation outside a lab setting.
While previous research suggested AI could boost diagnostic accuracy, reduce errors, and ease workforce burdens, the UCL-led evaluation highlights that such benefits may not be realised quickly.
The researchers call for dedicated project management in future schemes and for NHS staff to receive proper training in AI’s use and limitations.
Last month, the Department for Science, Innovation and Technology (DSIT) said it is testing a new AI tool designed to speed up patient discharges. Some other AI initiatives under way in the NHS include a safety monitoring system designed to spot potential safety scandals by analysing hospital data and a physiotherapy app that halved waiting lists for musculoskeletal treatment in Cambridgeshire and Peterborough.
Another trial is testing a “superhuman” predictive tool to assess patients’ risk of disease and early death.
AI Research
Arista touts liquid cooling, optical tech to reduce power consumption for AI networking

Both technologies – co-packaged optics (CPO) and linear pluggable optics (LPO) – will likely find a role in future AI and optical networks, experts say, as both promise to reduce power consumption and support improved bandwidth density. Each has trade-offs as well: CPOs are more complex to deploy given the amount of technology included in a CPO package, whereas LPOs promise more simplicity.
Arista co-founder Andy Bechtolsheim said that LPO can provide an additional 20% power savings over other optical forms. Early tests show good receiver performance even under degraded conditions, though transmit paths remain sensitive to reflections and crosstalk at the connector level, he added.
At the recent Hot Interconnects conference, he said: “The path to energy-efficient optics is constrained by high-volume manufacturing,” stressing that advanced optics packaging remains difficult and risky without proven production scale.
“We are nonreligious about CPO, LPO, whatever it is. But we are religious about one thing, which is the ability to ship very high volumes in a very predictable fashion,” Bechtolsheim said at the investor event. “So, to put this in quantity numbers here, the industry expects to ship something like 50 million OSFP modules next calendar year. The current shipment rate of CPO is zero, okay? So going from zero to 50 million is just not possible. The supply chain doesn’t exist. So, even if the technology works and can be demonstrated in a lab, to get to the volume required to meet the needs of the industry is just an incredible effort.”
“We’re all in on liquid cooling to reduce power, eliminating fan power, supporting the linear pluggable optics to reduce power and cost, increasing rack density, which reduces data center footprint and related costs, and most importantly, optimizing these fabrics for the AI data center use case,” Bechtolsheim added.
“So what we call the ‘purpose-built AI data center fabric’ around Ethernet technology is to really optimize AI application performance, which is the ultimate measure for the customer in both the scale-up and the scale-out domains. Some of this includes full switch customization for customers. Other cases, it includes the power and cost optimization. But we have a large part of our hardware engineering department working on these things,” he said.
AI Research
Learning by Doing: AI, Knowledge Transfer, and the Future of Skills | American Enterprise Institute

In a recent blog, I discussed Stanford University economist Erik Brynjolfsson's new study showing that young college graduates are struggling to gain a foothold in a job market shaped by artificial intelligence (AI). His analysis found that, since 2022, early-career workers in AI-exposed roles have seen employment growth lag 13 percent behind peers in less-exposed fields. At the same time, experienced workers in the same jobs have held steady or even gained ground. The conclusion: AI isn't eliminating work outright, but it is eroding the entry-level rungs that young workers depend on as they begin climbing career ladders.
The potential consequences of these findings, assuming they bear out, become clearer when read alongside Enrique Ide's recent paper, Automation, AI, and the Intergenerational Transmission of Knowledge. Ide argues that when firms automate entry-level tasks, new workers lose the opportunity to absorb tacit knowledge (the workplace norms and rhythms of team-based work that aren't necessarily written down) from experienced colleagues. Productivity gains thus accrue to seasoned workers, while would-be novices lose the hands-on training they need to build the foundation for career progress.
This short-circuiting of early-career experiences, Ide says, has macroeconomic consequences. He estimates that automating even five percent of entry-level tasks reduces long-run US output growth by 0.05 percentage points per year; at 30 percent automation, growth slows by more than 0.3 points. Over a hundred-year timeline, this would reduce total output by 20 percent relative to a world without AI automation. In other words: automating the bottom rungs might lift firms' quarterly performance, but at the cost of generational growth.
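To get a feel for how those per-year figures compound into a century-scale gap, here is a rough back-of-envelope sketch. It uses simple geometric compounding against an assumed 2 percent baseline growth rate; the `output_gap` helper and the baseline rate are illustrative assumptions, not Ide's structural model, which is why the stylized 30-percent case lands near 26 percent rather than the paper's 20.

```python
# Back-of-envelope only: simple geometric compounding, not Ide's model.
# The 2%/yr baseline growth rate is an illustrative assumption.

def output_gap(drag_pp_per_year: float, years: int = 100,
               baseline_growth: float = 0.02) -> float:
    """Fractional output shortfall after `years` when annual growth
    is reduced by `drag_pp_per_year` percentage points."""
    d = drag_pp_per_year / 100.0  # percentage points -> fraction
    g = baseline_growth
    return 1.0 - ((1 + g - d) / (1 + g)) ** years

for drag in (0.05, 0.3):  # the 5% and 30% automation cases cited above
    print(f"{drag} pp/yr drag over a century -> "
          f"~{output_gap(drag):.0%} lower output")
# Prints roughly 5% and 26%; Ide's richer model, as cited above,
# puts the 30%-automation scenario nearer 20%.
```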
This is where we need to pause and take a breath. While Ide's results sound dramatic, it is critical to remember that the dynamics and consequences of AI adoption are unpredictable, and that a century is a very long time. For instance, who would have predicted in 2022 that one of the first effects of AI automation would be to benefit less tech-savvy boomer and Gen-X managers and harm freshly minted Gen-Z coders?
Given the history of positive, automation-induced wealth and employment effects, why would this time be different?
Finally, it’s important to remember that in a dynamic, market-driven economy, skill requirements are always changing and firms are always searching for ways to improve their efficiency relative to competitors. This is doubly true as we enter the era of cognitive, as opposed to physical, automation. AI-driven automation is part of the pathway to a more prosperous economy and society for ourselves and for future generations. As my AEI colleague Jim Pethokoukis recently said, “A supposedly powerful general-purpose technology that left every firm’s labor demand utterly unchanged wouldn’t be much of a GPT.” Put another way, unless AI disrupts our economy and lives, it cannot deliver its promised benefits.
What then should we do? I believe the most important step we can take right now is to begin “stress-testing” our current workforce development policies and programs and building scenarios for how industry and government would respond should significant AI-related job disruptions occur. Such scenario planning could be shaped into a flexible “playbook” of options for policymakers, geared to the types and numbers of affected workers. No such planning occurred before the automation and trade shocks of the 1990s and 2000s, with lasting consequences for factory workers and American society. We should make sure that doesn’t happen again with AI.
Pessimism is easy and cheap. We should resist the lure of social media-monetized AI doomerism and focus on building the future we want to see by preparing for and embracing change.