
AI rollout in NHS hospitals faces major challenges



Implementing artificial intelligence (AI) in NHS hospitals is far harder than initially anticipated, with complications around governance, contracts, data collection, harmonisation with old IT systems, finding the right AI tools, and staff training, finds a major new UK study led by UCL researchers. 

Authors of the study, published in The Lancet eClinicalMedicine, say the findings should provide timely and useful learning for the UK Government, whose recent 10-year NHS plan identifies digital transformation, including AI, as a key platform to improving the service and patient experience. 

In 2023, NHS England launched a programme to introduce AI to help diagnose chest conditions, including lung cancer, across 66 NHS hospital trusts in England, backed by £21 million in funding. The trusts are grouped into 12 imaging diagnostic networks: these hospital networks mean more patients have access to specialist opinions. Key functions of these AI tools included prioritising critical cases for specialist review and supporting specialists’ decisions by highlighting abnormalities on scans.

Funded by the National Institute for Health and Care Research (NIHR), this research was conducted by a team from UCL, the Nuffield Trust, and the University of Cambridge, analysing how procurement and early deployment of the AI tools went. It is one of the first studies to analyse the real-world implementation of AI in healthcare.

Evidence from previous studies, mostly laboratory-based, suggested that AI might benefit diagnostic services by supporting decisions, improving detection accuracy, reducing errors and easing workforce burdens.

In this UCL-led study, the researchers reviewed how the new diagnostic tools were procured and set up through interviews with hospital staff and AI suppliers, identifying any pitfalls but also any factors that helped smooth the process.

They found that setting up the AI tools took longer than the programme’s leadership had anticipated. Contracting took between four and 10 months longer than planned, and by June 2025, 18 months after contracting was meant to be completed, a third (23 out of 66) of the hospital trusts were not yet using the tools in clinical practice.

Key challenges included engaging clinical staff who already had high workloads, embedding the new technology in ageing and varied NHS IT systems across dozens of hospitals, and a general lack of understanding of, and scepticism about, using AI in healthcare among staff.

The study also identified important factors that helped embed AI, including national programme leadership and local imaging networks sharing resources and expertise; high levels of commitment from hospital staff leading implementation; and dedicated project management.

The researchers concluded that while “AI tools may offer valuable support for diagnostic services, they may not address current healthcare service pressures as straightforwardly as policymakers may hope”. They recommend that NHS staff be trained in how AI can be used effectively and safely, and that dedicated project management be used to implement schemes like this in the future.

First author Dr Angus Ramsay (UCL Department of Behavioural Science and Health) said: “In July ministers unveiled the Government’s 10-year plan for the NHS, of which a digital transformation is a key platform.

“Our study provides important lessons that should help strengthen future approaches to implementing AI in the NHS.

“We found it took longer to introduce the new AI tools in this programme than those leading the programme had expected.

“A key problem was that clinical staff were already very busy – finding time to go through the selection process was a challenge, as was supporting integration of AI with local IT systems and obtaining local governance approvals. Services that used dedicated project managers found their support very helpful in implementing changes, but only some services were able to do this.

“Also, a common issue was the novelty of AI, suggesting a need for more guidance and education on AI and its implementation.

“AI tools can offer valuable support for diagnostic services, but they may not address current healthcare service pressures as simply as policymakers may hope.”

The researchers conducted their evaluation between March and September last year, studying 10 of the participating networks and focusing in depth on six NHS trusts. They interviewed network teams, trust staff and AI suppliers, observed planning, governance and training and analysed relevant documents.

Some of the imaging networks and many of the hospital trusts within them were new to procuring and working with AI.

The problems involved in setting up the new tools varied – for example, in some cases those procuring the tools were overwhelmed by a huge amount of very technical information, increasing the likelihood of key details being missed. Consideration should be given to creating a national approved shortlist of potential suppliers to facilitate procurement at local level, the researchers said.

Another problem was an initial lack of enthusiasm among some NHS staff for the new technology in this early phase, with some more senior clinical staff raising concerns about AI making decisions without clinical input and about where accountability would lie if a condition were missed. The researchers found the training offered to staff did not address these issues sufficiently across the wider workforce, hence their call for early and ongoing training on future projects.

In contrast, however, the study team found the process of procurement was supported by advice from the national team and imaging networks learning from each other. The researchers also observed high levels of commitment and collaboration between local hospital teams (including clinicians and IT) working with AI supplier teams to progress implementation within hospitals.

In this project, each hospital selected AI tools for different uses, such as X-ray or CT scanning, and for different purposes, such as prioritising urgent cases for review or identifying potential abnormalities.


“The NHS is made up of hundreds of organisations with different clinical requirements and different IT systems, and introducing any diagnostic tools that suit multiple hospitals is highly complex. These findings indicate AI might not be the silver bullet some have hoped for, but the lessons from this study will help the NHS implement AI tools more effectively.”


Professor Naomi Fulop, senior author, UCL Department of Behavioural Science and Health

Limitations

While the study has added to the very limited body of evidence on the implementation and use of AI in real-world settings, it focused on procurement and early deployment. The researchers are now studying the use of AI tools following early deployment when they have had a chance to become more embedded. Further, the researchers did not interview patients and carers and are therefore now conducting such interviews to address important gaps in knowledge about patient experiences and perspectives, as well as considerations of equity.

Journal reference:

Ramsay, A. I. G., et al. (2025). Procurement and early deployment of artificial intelligence tools for chest diagnostics in NHS services in England: a rapid, mixed method evaluation. eClinicalMedicine. doi.org/10.1016/j.eclinm.2025.103481



AI drug companies are struggling—but don’t blame the AI



Moonshot hopes of artificial intelligence being used to expedite the development of drugs are coming back down to earth. 

More than $18 billion has flooded into more than 200 biotechnology companies touting AI to expedite development, with 75 drugs or vaccines entering clinical trials, according to Boston Consulting Group. Now, investor confidence—and funding—is starting to waver.

In 2021, venture capital investment in AI drug companies reached its peak, with more than 40 deals worth about $1.8 billion. This year, there have been fewer than 20 deals worth about half of that peak sum, the Financial Times reported, citing data from PitchBook.

Some existing companies have struggled in the face of challenges. In May, biotech company Recursion shelved three of its prospective drugs in a cost-cutting effort following its merger with Exscientia, a similar biotech firm, last year. Fortune previously reported that none of Recursion’s AI-discovered compounds have reached the market as approved drugs. After a major restructuring in December 2024, biotech company BenevolentAI delisted from the Euronext Amsterdam stock exchange in March before merging with Osaka Holdings.

A Recursion spokesperson told Fortune the decision to shelve the drugs was “data-driven” and a planned outcome of its merger with Exscientia.

“Our industry’s 90% failure rate is not acceptable when patients are waiting, and we believe approaches like ours that integrate cutting-edge tools and technologies will be best positioned for long-term success,” the spokesperson said in a statement.

BenevolentAI did not respond to a request for comment.

The struggles of the industry coincide with a broader conversation around the failure of generative AI to deliver more quickly on its lofty promises of productivity and efficiency. An MIT report last month found 95% of generative AI pilots at companies failed to accelerate revenue. A U.S. Census Bureau survey this month found AI adoption in large U.S. companies has declined from its 14% peak earlier this year to 12% as of August.

But the AI technology used to help develop drugs is far different from the large language models used in most workplace initiatives and should therefore not be held to the same standards, according to Scott Schoenhaus, managing director and equity research analyst for KeyBanc Capital Markets Inc. Instead, the industry faces its own set of challenges.

“No matter how much data you have, human biology is still a mystery,” Schoenhaus told Fortune.

Macro and political factors drying up AI drug development funding

At the crux of the slowed funding and slower development results may not be the limitations of the technology itself, but rather a slew of broader factors, Schoenhaus said.

“Everyone acknowledges the funding environment has dried up,” he said. “The biotech market is heavily influenced by low interest rates. Lower interest rates equals more funding coming into biotechs, which is why we’re seeing funding for biotech at record lows over the last several years, because interest rates have remained elevated.”

It wasn’t always this way. The rise of AI in drug development owes not only to growing access to semiconductor chips, but also to technology that has made mapping the entire human genome quick and, now, cheap. In 2001, it cost more than $100 million to map the human genome. Two decades later, that undertaking cost about $1,000.

Beyond having the pandemic to thank for next-to-nothing interest rates in 2021, COVID also expedited partnerships between AI drug development startups and Big Pharma companies. In early 2022, biotechnology startup AbCellera and Eli Lilly got emergency FDA approval for a COVID antibody treatment, a tangible example of how the tech could be used to aid in drug discoveries.

But since then, there have been other industry hurdles, Schoenhaus said, including Big Pharma cutting back on research and development costs amid slowing demand, as well as uncertainty surrounding whether President Donald Trump would impose a tariff on pharmaceuticals as the U.S. and European Union tussled over a trade deal. Trump signed a memo this week threatening to ban direct-to-consumer advertising for prescription medications, theoretically driving down pharma revenues.

Limitations of AI

That’s not to say there haven’t been technological hiccups in the industry.

“There is scrutiny around the technology themselves,” Schoenhaus said. “Everyone’s waiting for these readouts to prove that.”

The next 12 months of emerging data from AI drug development startups will be critical in determining how successful these companies stand to be, Schoenhaus said. Some of the results so far have been mixed. For example, Recursion released data from a mid-stage clinical trial of a drug to treat a neurovascular condition in September last year, finding the drug was safe but that there was little evidence of how effective it was. Company shares fell double digits following the announcement. 

These companies are also limited in how they’re able to leverage AI. The drug development process takes around 10 years and is intentionally bottlenecked to ensure the safety and efficacy of the drugs in question, according to David Siderovski, chair of the University of North Texas Health Science Center’s Department of Pharmacology & Neuroscience, who has previously worked with AI drug development companies in the private sector. Biotechnology companies using AI to make these processes more efficient are usually only tackling one small part of this bottleneck, such as screening and identifying a drug-like molecule faster than was previously possible.

“There are so many stages that have to be jumped over before you can actually declare the [European Medicines Agency], or the FDA, or Health Canada, whoever it is, will designate this as a safe, approved drug to be marketed to patients out in the world,” Siderovski told Fortune. “That one early bottleneck of auditioning compounds is not the be-all and end-all of satisfying shareholders by announcing, ‘We have approval for this compound as a drug.’”

Smaller companies in the sector have also made a concerted effort to partner less with Big Pharma companies, preferring instead to build their own pipelines, even if it means no longer having access to the franchise resources of industry giants. 

“They want to be able to pursue their technology and show the validation of their platform sooner than later,” Schoenhaus said. “They’re not going to wait around for large pharma to pursue a partnered molecule. They’d rather just do it themselves and say, ‘Hey, look, our technology platform works.’”

Schoenhaus sees this strategy as a way for companies to prove themselves by perfecting the use of AI to better understand the slippery, mysterious, and still largely unknown frontier of human biology.

“It’s just a very much more complex application of AI,” he said, “hence why I think we are still seeing these companies focus on their own internal pipelines so that they can really, squarely focus their resources on trying to better their technology.”




Companies Rehire Human Workers to Fix Artificial Intelligence Generated Content After Mass Layoffs



IN A NUTSHELL
  • 🤖 Companies increasingly use AI to replace human workers, highlighting the trend of automation.
  • 🔄 Many businesses find that AI outputs lack quality, leading to a return to human expertise.
  • 👥 Freelancers like Lisa Carstens and Harsh Kumar are rehired to fix AI-generated content.
  • 💼 The evolving landscape poses questions about fair compensation for human improvements to AI work.

The integration of artificial intelligence (AI) into workplaces has become a prevalent trend, often at the expense of human employees. This shift, while aiming to optimize efficiency and cut costs, has exposed the limitations of relying solely on AI. As companies increasingly replace human roles with AI, they encounter unforeseen challenges that highlight the irreplaceable value of human expertise. The journey reveals the complex dynamics between technology adoption and workforce sustainability, raising important questions about the future of work and the role of AI in it.

AI’s Shortcomings Lead to Reemployment

While AI promises to revolutionize industries by automating tasks, its execution often falls short, leading companies to reconsider their human workforce. AI-generated outputs frequently lack the nuance and precision that human creativity and expertise bring. For instance, textual content may appear repetitive, designs might lack clarity, and AI-generated code could result in unstable applications. These deficiencies compel businesses to turn back to the very employees they had previously let go.

Lisa Carstens, an independent illustrator and designer, experienced firsthand the limitations of AI. Based in Spain, Carstens found herself rehired to fix AI-generated visuals that were, at best, superficially appealing and, at worst, unusable. She noted that many companies assumed AI could operate without human intervention, only to realize the opposite.

“There are people who understand AI’s imperfections and those who become frustrated when it doesn’t perform as expected,” Carstens explains, highlighting the delicate balance freelancers must maintain when rectifying AI’s mistakes.

The Emergence of a New Freelance Economy

AI has inadvertently given rise to a new type of freelance work focused on improving AI-generated content. Developers like Harsh Kumar, based in India, have seen a resurgence in demand for their skills as AI’s limitations become apparent. Clients who invested heavily in AI coding tools often found the results to be unsatisfactory, leading them to seek human expertise to salvage projects.

Kumar echoes the sentiment that AI can enhance productivity but cannot entirely replace human input. “Humans will remain essential for long-term projects,” he asserts, emphasizing that AI, created by humans, still requires human oversight. While work is plentiful, the nature of assignments has evolved, with a focus on refining and iterating upon AI’s initial attempts at content creation.

The Challenges of Human-AI Collaboration

The dynamic between AI and human workers is not without its challenges. While companies that over-relied on AI often seek to rehire their former employees, they also attempt to reduce compensation for these roles. The justification is that the work now involves refining existing AI-generated content rather than creating it from scratch.

This shift highlights a more integrated human-machine collaboration where both entities contribute uniquely to the final product. However, it also raises questions about fair compensation and the value of human expertise in a world increasingly influenced by AI. As companies attempt to balance cost-cutting with quality assurance, the debate over appropriate remuneration for freelance revisions of AI work continues.

AI in the Workplace: A Double-Edged Sword

While AI offers numerous advantages, such as increased efficiency and cost savings, it also presents significant challenges. Businesses must navigate the delicate balance between adopting AI technologies and maintaining a skilled human workforce. The experiences of freelancers like Carstens and Kumar underline the necessity of human oversight in ensuring AI-generated content meets industry standards.

As AI continues to evolve, companies must critically assess its role in their operations. The initial allure of AI-driven cost reductions must be weighed against the potential for subpar results and the subsequent need for human intervention. This ongoing evaluation highlights the importance of strategic planning in technology adoption, ensuring that businesses maximize AI’s benefits without compromising quality.

As AI becomes further entrenched in workplaces, companies must decide how best to leverage technology while valuing human contributions. The need for skilled professionals to enhance AI outputs underscores the irreplaceable nature of human expertise. Will businesses find a sustainable model that harmonizes technological advancements with human creativity and skill, or will the pendulum swing back toward a more human-centric approach?



