AI Research
New Study Reveals Challenges in Integrating AI into NHS Healthcare

Implementing artificial intelligence (AI) within the National Health Service (NHS) has proved a daunting endeavor, exposing challenges that policymakers and healthcare leaders had largely not anticipated. A recent peer-reviewed qualitative study by researchers at University College London (UCL) sheds light on the complexities of procuring and beginning to deploy AI technologies for diagnosing chest conditions, particularly lung cancer. The study arrives amid a broader national push to integrate digital technology into healthcare, as set out in the UK Government’s 10-year NHS plan, which identifies digital transformation as pivotal to improving service delivery and patient experience.
As AI gains traction in healthcare diagnostics, NHS England launched a major initiative in 2023 to introduce AI tools across 66 NHS hospital trusts, backed by £21 million in funding. The project aimed to establish twelve imaging diagnostic networks that would expand patient access to specialist opinions. The tools are expected to prioritize urgent cases for specialist review and to assist clinicians by flagging abnormalities in radiological scans, tasks that could ease the burden on overstretched NHS staff.
The research shows, however, that the rollout has not proceeded as swiftly as NHS leadership anticipated. Drawing on interviews with hospital personnel and AI suppliers, the UCL team found that procurement processes were unexpectedly protracted, with delays stretching four to ten months beyond initial schedules. By June 2025, 18 months after the anticipated completion date, roughly a third of the participating hospital trusts had yet to bring these AI tools into clinical use. The delay underlines a critical gap between the technological promise of AI and the operational realities facing healthcare institutions.
Compounding these challenges, clinical staff already carrying heavy workloads found it difficult to engage fully with the AI project. Many expressed skepticism about the efficacy of the technologies, rooted in concerns about how they would fit into existing workflows and whether new AI tools would be compatible with the aging IT infrastructure that varies widely across NHS hospitals. The researchers noted that many frontline workers struggled to see AI’s full potential, especially where procurement and implementation processes were overly complicated.
In addition to identifying these hurdles, the study highlighted factors that helped embed AI tools smoothly. Enthusiastic and committed local hospital teams facilitated project management, and strong national leadership was critical in guiding the transition. Hospitals that employed dedicated project managers to oversee implementation found them invaluable in navigating bureaucratic obstacles, pointing to a clear advantage in having dedicated oversight for challenging integrations.
Dr. Angus Ramsay, the study’s first author, set out the lessons of the investigation, particularly in the context of the UK’s push to digitize the NHS. The study advocates a recalibrated approach to AI implementation, one that takes account of existing pressures within the healthcare system. Ramsay noted that AI technologies, while potentially transformative, cannot be expected to resolve deep-rooted challenges within healthcare services as quickly or completely as policymakers might wish.
Throughout the evaluation, which ran from March to September of last year, the research team analyzed how different NHS trusts approached AI deployment and their varied focal points, such as X-ray and CT scanning applications. They observed both enthusiasm and reluctance among staff to adopt the new technology, with senior clinicians voicing reservations about accountability and about decision-making being handed to AI systems without adequate human oversight. This skepticism highlighted an urgent need for comprehensive training and guidance, as current onboarding processes were often inadequate for addressing staff questions and concerns.
The UCL-led analysis also found that early challenges, such as the overwhelming volume of technical information, hampered effective procurement. Many of those involved in selection struggled to distill the essential elements of intricate AI proposals. This suggests that a national shortlist of approved AI suppliers could streamline local procurement and ease the cognitive burden on procurement teams.
In some places, growing enthusiasm provided a counterbalance to the initial skepticism. The collaborative nature of the imaging networks was particularly striking: team members freely exchanged knowledge and resources, enriching the collective expertise as they navigated implementation. Hospitals with staff committed to fostering interdepartmental collaboration saw a substantial benefit, aiding the mutual learning involved in integrating AI technologies.
One of the most pressing findings was that AI is unlikely to be a “silver bullet” for the multifaceted issues confronting the NHS. The variability in clinical requirements among the many organizations that make up the NHS creates an inherently complicated landscape for introducing diagnostic tools. Professor Naomi Fulop, a senior author of the study, emphasized that this diversity of clinical needs makes it difficult to implement diagnostic systems that serve everyone effectively. Lessons from the research will inform future efforts to make AI tools more accessible while keeping the NHS responsive to its staff and patients.
Moving forward, an essential next step will involve evaluating the use of AI tools post-implementation, aiming to understand their impact once they have been fully integrated into clinical operations. The researchers acknowledge that, while they successfully captured the procurement and initial deployment stages, further investigation is necessary to assess the experiences of patients and caregivers, thereby filling gaps in understanding around equity in healthcare delivery with AI involvement.
The implications of this study are profound. It sheds light on the careful considerations necessary for introducing AI effectively within healthcare systems and underscores the urgency of educational frameworks that equip staff not just with operational knowledge but with an understanding of the philosophical, ethical, and practical nuances of AI in medicine. That understanding is pivotal as healthcare practitioners prepare for a future increasingly defined by technological integration and automation.
Faculty members involved in this transformative study, spanning various academic and research backgrounds, are poised to lead this critical discourse, attempting to bridge the knowledge gap that currently exists between technological innovation and clinical practice. As AI continues its trajectory toward becoming an integral part of healthcare, this analysis serves as a clarion call for future studies that prioritize patient experience, clinical accountability, and healthcare equity in the age of artificial intelligence.
Subject of Research: AI tools for chest diagnostics in NHS services.
Article Title: Procurement and early deployment of artificial intelligence tools for chest diagnostics in NHS services in England: A rapid, mixed method evaluation.
News Publication Date: 11-Sep-2025.
Keywords
AI, NHS, healthcare, diagnostics, technology, implementation, policy, research, patient care, digital transformation.
Tags: AI integration challenges in NHS healthcare, AI tools for urgent case prioritization, artificial intelligence in lung cancer diagnosis, complexities of AI deployment in healthcare, enhancing patient experience with AI, funding for AI in NHS hospitals, healthcare technology procurement difficulties, NHS digital transformation initiatives, NHS imaging diagnostic networks, NHS policy implications for AI technologies, role of AI in improving healthcare delivery, UCL research on AI in healthcare
AI Research
How to Scale Up AI in Government

State and local governments are experimenting with artificial intelligence but lack systematic approaches to scale these efforts effectively and integrate AI into government operations. Instead, efforts have been piecemeal and slow, leaving many practitioners struggling to keep up with the ever-evolving uses of AI for transforming governance and policy implementation.
While some state and local governments are leading in implementing the technology, AI adoption remains fragmented. Last year, some 150 state bills were considered relating to government use of AI, governors in 10 states issued executive orders supporting the study of AI for use in government operations, and 10 legislatures tasked agencies with compiling comprehensive inventories of their AI use.
Taking advantage of the opportunity presented by AI is critical as decision-makers face an increasing slate of challenging implementation problems and as technology quickly evolves and develops new capabilities. The use of AI is not without risks. Developing and adapting the necessary checks and guidance is critical but can be challenging for such dynamic technologies. Shifting from seeing AI as merely a technical capability to considering what AI technology should be asked to do can help state and local governments think more creatively and strategically. Here are some of the benefits governments are already exploring:
Administrative efficiency: Half of all states are using AI chatbots to reduce administrative burden and free staff for substantive and creative work. The Indiana General Assembly uses chatbots to answer questions about regulations and statutes. Austin, Texas, streamlines residential construction permitting with AI, while Vermont’s transportation agency inventories road signs and assesses pavement quality.
Research synthesis: AI tools help policymakers quickly access evolving best practices and evidence-based approaches. Overton’s AI platform, for example, allows policymakers to identify how existing evidence aligns with priority areas, compare policy approaches across states and nations, and match with relevant researchers and projects.
Implementation monitoring: AI fills critical gaps in program evaluation without major new investments. California’s transportation department analyzes traffic patterns to optimize highway safety and inform infrastructure investments.
Predictive modeling: AI-enabled models help test assumptions about which interventions will succeed. These models use features such as organizational characteristics, physical and contextual factors, and historical implementation data to predict the success of policy interventions, and their outputs can help tailor interventions and improve outcomes (a simplified sketch of this approach follows below). Applications include targeting health interventions to patients with modifiable risk factors, identifying lead service lines in municipal water systems, predicting flood response needs and flagging households at eviction risk.
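To make the approach concrete, here is a minimal sketch of how such a model might be trained on historical implementation records. The feature names, the synthetic data, and the scikit-learn setup are illustrative assumptions, not a description of any particular government’s system.

```python
# Illustrative sketch only: a hypothetical predictor of policy-intervention
# success trained on historical implementation data. Feature names and the
# synthetic dataset are assumptions for demonstration, not real records.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500

# Hypothetical features: organizational capacity, contextual factors,
# and a summary of past implementation performance.
X = np.column_stack([
    rng.normal(50, 15, n),    # staff_capacity (FTEs available)
    rng.uniform(0, 1, n),     # community_need_index
    rng.integers(0, 2, n),    # prior_program_in_place (0/1)
    rng.normal(0.6, 0.2, n),  # historical_completion_rate
])

# Synthetic outcome: interventions succeed more often where capacity and
# track record are stronger.
logits = 0.03 * X[:, 0] + 1.5 * X[:, 3] + 0.8 * X[:, 2] - 2.5
y = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Predicted success probabilities can be used to target or tailor interventions.
probs = model.predict_proba(X_test)[:, 1]
print("Held-out AUC:", round(roc_auc_score(y_test, probs), 3))
```

In practice the held-out evaluation step matters as much as the model itself: it shows whether the predictions generalize before they are used to steer real interventions.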
Scaling up to wider adoption in policy and practice requires proactive steps by state and local governments and attendant guidance, monitoring and evaluation:
Adaptive policy framework: AI adoption often outpaces planning, and the definition of AI is often specific to its application. States need to define AI applications by sector (health, transportation, etc.) and develop adaptive operating strategies to guide and assess its impact. Thirty states have some guidance, but comprehensive approaches require clear definitions and inventories of current use.
Funding strategies: Policymakers must identify and leverage funding streams to cover the costs of procurement and training. Federal grants like the State and Local Cybersecurity Grant Program offer potential, though current authorization expires this Sept. 30. Massachusetts’ FutureTech Act exemplifies direct state investment, authorizing $1.23 billion for IT capital projects including AI.
Smart procurement: Effective AI procurement requires partnerships with vendors and suppliers and between chief information officers and procurement specialists. Contracts must ensure ethical use, performance monitoring and continuous improvement, but few states have procurement language related to AI. Speed matters — AI purchases risk obsolescence during lengthy procurement cycles.
Training and workforce development: Both current and future state and local government workforces need AI skills. Solutions include AI training academies and literacy programs for government workers, joint training programs between professional associations, and the General Services Administration’s AI Community of Practice’s events and training. The Partnership for Public Service has recently opened up its AI Government Leadership program to state and local policymakers. Universities including Stanford and Michigan offer specialized programs for policymakers. Graduate programs in public policy, administration and law should incorporate AI governance tracks.
State AI policy development involves governor’s offices, chief information offices, security offices and legislatures. But success requires moving beyond pilot projects to systematic implementation. Governments that embrace this transition will be best positioned for future challenges. The opportunity exists now to set standards for AI-enabled governance, but it requires proactive steps in policy development, funding, procurement, workforce development and safeguards.
Joie Acosta is a senior behavioral scientist and the Global Scholar in Translation at RAND, a nonprofit, nonpartisan research institute. Sara Hughes is a senior policy researcher and the Global Scholar of Implementation at RAND and a professor of policy analysis at the RAND School of Public Policy.
Governing’s opinion columns reflect the views of their authors and not necessarily those of Governing’s editors or management.
AI Research
AI-powered search engine to help Singapore lawyers with legal research

SINGAPORE – An artificial intelligence (AI)-powered search engine is expected to accelerate legal research and free up time for subscribers to the legal research platform LawNet, who account for more than three-quarters of all lawyers working in Singapore.
Developed in collaboration with the Singapore Academy of Law, this new tool allows lawyers to ask legal research questions in natural language and receive contextual, relevant responses.
It is trained on Singapore’s legal context and supported by data such as judgments, Singapore Law Reports, legislation and books.
GPT-Legal Q&A, which has been rolled out on LawNet, was launched by Justice Kwek Mean Luck on the second day of the TechLaw.Fest on Sept 11 at the Sands Expo and Convention Centre.
The earlier GPT-Legal model launched in 2024 provided summaries of unreported court judgments, and has since been used to generate more than 15,000 of them.
“This is a game-changing feature. This new function enables lawyers to ask legal research questions in natural language, and receive contextual, relevant responses, which are generated by AI grounded in LawNet’s content,” said Justice Kwek.
“It is designed to complement traditional keyword-based search by offering a more intuitive and responsive research experience.”
For a start, the feature is focused on delivering insights on contract law, as it is a fundamental area of law that underpins many specialised fields.
“This is a significant undertaking. It involves extensive development and rigorous testing, to align technology to the demands of your work. As such, we will be rolling out this implementation in phases,” said Justice Kwek.
The model will be improved to give insights into other significant areas of law like family law and criminal law.
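As a rough illustration of the retrieval step behind this kind of grounded Q&A, the sketch below ranks a toy set of legal passages against a natural-language question; the top passages would then be passed to a language model as the context for its answer. The corpus, the TF-IDF ranking, and all names are assumptions for illustration and do not describe the actual GPT-Legal Q&A implementation.

```python
# Illustrative sketch only: matching a natural-language question against a
# corpus of legal texts before an AI model drafts a grounded answer. This is
# a generic retrieval example, not the GPT-Legal Q&A system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for indexed content such as judgments and legislation.
corpus = [
    "An offer lapses if it is not accepted within a reasonable time.",
    "Consideration must move from the promisee but need not be adequate.",
    "A contract may be discharged by frustration when performance becomes impossible.",
]

question = "When can a contract be discharged because performance is impossible?"

# Rank passages by similarity to the question; the highest-ranked passages
# would serve as the grounding context supplied to the language model.
vectorizer = TfidfVectorizer().fit(corpus + [question])
scores = cosine_similarity(
    vectorizer.transform([question]), vectorizer.transform(corpus)
)[0]

ranked = sorted(zip(scores, corpus), reverse=True)
for score, passage in ranked[:2]:
    print(f"{score:.2f}  {passage}")
```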
The Infocomm Media Development Authority has also developed an agentic AI demonstrator for the Singapore Academy of Law to help corporate secretaries arrange annual general meetings (AGMs).
Agentic AI can help to perform tasks without the need for human intervention.
The AI agent can automate tasks like looking through the schedules of directors to find a time slot for AGMs.
With the AI agent handling routine corporate secretarial duties autonomously, professionals will be freed up to focus on higher-value advisory and strategic tasks.
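The scheduling task described above essentially reduces to finding a window that is free in every director’s calendar. The following sketch shows that core interval logic in plain Python; the data format, dates, and function names are hypothetical and are not taken from IMDA’s demonstrator.

```python
# Illustrative sketch only: the interval logic behind finding a common free
# slot across directors' calendars. Data and names are hypothetical.
from datetime import datetime, timedelta

WORKDAY_START = datetime(2025, 9, 15, 9, 0)
WORKDAY_END = datetime(2025, 9, 15, 18, 0)
MEETING_LENGTH = timedelta(hours=2)

# Each director's existing commitments as (start, end) pairs.
busy = {
    "director_a": [(datetime(2025, 9, 15, 9, 30), datetime(2025, 9, 15, 11, 0))],
    "director_b": [(datetime(2025, 9, 15, 13, 0), datetime(2025, 9, 15, 14, 30))],
    "director_c": [(datetime(2025, 9, 15, 10, 0), datetime(2025, 9, 15, 12, 0))],
}

def first_common_slot(busy, start, end, length, step=timedelta(minutes=30)):
    """Return the first window of `length` within [start, end] that overlaps
    nobody's busy intervals, or None if no such window exists."""
    t = start
    while t + length <= end:
        window = (t, t + length)
        clash = any(
            busy_start < window[1] and window[0] < busy_end
            for intervals in busy.values()
            for busy_start, busy_end in intervals
        )
        if not clash:
            return window
        t += step
    return None

slot = first_common_slot(busy, WORKDAY_START, WORKDAY_END, MEETING_LENGTH)
print("Proposed AGM slot:", slot)
```

An agentic system would wrap this kind of deterministic check with steps such as pulling calendars, emailing directors and booking the room, but the slot-finding core is ordinary interval arithmetic.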
Source: The Straits Times © SPH Media Limited. Permission required for reproduction
AI Research
AI-powered research training to begin at IPE for social science scholars

Hyderabad: The Institute of Public Enterprise (IPE), Hyderabad, has launched a pioneering 10-day Research Methodology Course (RMC) focused on the application of Artificial Intelligence (AI) tools in social science research. Sponsored by the Indian Council of Social Science Research (ICSSR), Ministry of Education, Government of India, the program commenced on October 6 and will run through October 16, 2025, at the IPE campus in Osmania University.
Designed exclusively for M.Phil., Ph.D., and Post-Doctoral researchers across social science disciplines, the course aims to equip young scholars with cutting-edge AI and Machine Learning (ML) skills to enhance research quality, ethical compliance, and interdisciplinary collaboration. The initiative is part of ICSSR’s Training and Capacity Building (TCB) programme and is offered free of cost, with travel and daily allowances reimbursed as per eligibility.
The course is being organized by IPE’s Centre for Data Science and Artificial Intelligence (CDSAI), under the academic leadership of Prof. S Sreenivasa Murthy, Director of IPE and Vice-Chairman of AIMS Telangana Chapter. Dr. Shaheen, Associate Professor of Information Technology & Analytics, serves as the Course Director, while Dr. Sagyan Sagarika Mohanty, Assistant Professor of Marketing, is the Co-Director.
Participants will undergo hands-on training in Python, R, Tableau, and Power BI, alongside modules on Natural Language Processing (NLP), supervised and unsupervised learning, and ethical frameworks such as the Digital Personal Data Protection (DPDP) Act, 2023.
The curriculum also includes field visits to policy labs like T-Hub and NIRDPR, mentorship for research proposal refinement, and guidance on publishing in Scopus and ABDC-indexed journals.
Speaking about the program, Dr. Shaheen emphasized the need for social scientists to evolve beyond traditional methods and embrace computational tools for data-driven insights.
“This course bridges the gap between conventional research and emerging technologies, empowering scholars to produce impactful, ethical, and future-ready research,” she said.
Seats for the course are allocated on a first-come, first-served basis, and the last date for nominations is September 15, 2025. With its blend of technical training, ethical grounding, and publication support, the RMC at IPE aims to take a significant step toward empowering scholars to modernize social science research in India.
Interested candidates can contact: Dr Shaheen, Programme Director, at [email protected] or on mobile number 9866666620.