Tools & Platforms

As ambulance leaders turn to technology, how will the NHS navigate the ‘Wild West’ of AI?


After diagnosing the NHS as “broken”, the government has placed a big bet on tech being the key treatment for its ailing system, promising that it will become the most “AI-enabled” health system in the world.

With services facing a battle over finances, as well as a lack of staff able to meet patients’ needs, health leaders have been exploring the use of AI for some time. The evidence is already there for its use in reading patients’ scans. But, more broadly, how does the use of AI tools translate into emergency care?

Here, ambulance leaders tell The Independent about the realities of using AI in a complex, fast-paced and potentially dangerous environment.

‘We’ve got to get it right first time’

Drone guidance, traffic light prediction, diagnostic support and live language translation: these are just a few of the ways in which AI could be used within the UK’s ambulance sector.

Graham Norton, digital transformation lead for the Northern Ambulance Alliance, believes that AI will “absolutely” become an everyday tool for ambulance staff.

“There is absolutely no reason why AI will not be a routine part of the day-to-day activities across the ambulance sector. It should be,” he said.

Mr Norton and Johnny Sammut, director of digital services for the Welsh Ambulance Services University NHS Trust, agree that AI has huge potential to help health workers battling an increasingly challenging environment.

But the pair say this comes with a heavy safety warning.

“The reason that we’re different [in ambulance services, compared to the rest of the health service] is that this is genuine life and death, and a lot of the time, certainly over the phones, you can’t even eyeball the patient. So, it’s not to say there isn’t a huge enthusiasm [for AI] and huge, huge potential. But we’ve got to get it right first time,” says Mr Norton.

In areas of the NHS such as diagnostic services, AI is being used to read patient scans. But, if a concern is flagged, these readings are usually looked at afterwards by a health professional, creating a safety net.

But Mr Norton warned: “If you’re using AI at an emergency care level – I’m talking about 999 and 111 calls, for example – by the nature of what you’re trying to do, you don’t have the same level of safety net.”

Tackling health inequality

The Yorkshire Ambulance Service is currently one of a handful of trusts trialling AI within its services, with the main focus on safe AI transcription tools.

These so-called “ambient AI” tools can listen to, record and transcribe notes for paramedics on scene or for call handlers. Mr Norton said the devices could even be used to translate for patients who don’t speak English, using a Google Translate-style tool.

“If we can have AI helping us with translation and transcription, we’re going to be able to deal with real health inequality. There’s a real health inequality for people who don’t speak English as a first language,” he said.

Meanwhile, in Wales, Mr Sammut said the service was already seeing “immediate time saving benefits” from AI in reducing the admin burden on staff.

Last month, the trust soft-launched a 111 online virtual agent, similar to an AI chat function, which provides patients with a conversational way to ask about symptoms.

In a quite different application, Mr Sammut said work is under way to link AI-enabled drones with hazardous area response teams – the teams which respond to complex and major emergencies.

“So this provides situational awareness in the sky on particularly complex or dangerous scenes. We’ve got AI now embedded into technology and those drones will have things like intelligent tracking. They’ll be able to pull thermal and non-thermal imaging together and then they’re able to survey and track particular areas of a scene using AI. It develops its own situational awareness in the sky.”

The service also hopes to develop AI which can assist with predicting ambulance demand. It could also help paramedics in the field, for example by interpreting electrocardiograms (ECGs) or spotting anomalies in a patient’s skin.

“The risk of not doing this [using AI] is far greater [than the risk of doing it]. When you think about the NHS, where we are today, the burden that sits on staff and the levels of funding… to not follow through with AI is quite frankly dangerous.”

However, in such a high-risk and fast-moving area, the ambulance executive did point out some risks.

“The other thing that I’ve got in my mind at the minute is: what downstream risk do we create with AI? I’m thinking from a cybersecurity perspective. So one of the very real concerns that I do have with AI is how do we avoid, track and mitigate against AI poisoning.

“AI poisoning is whereby someone will feed one of your AI models a whole heap of fake information and fake data and… you know the price of us getting AI wrong isn’t money alone. It’s life. So if someone is able to poison those models, that is a very real risk to the public.”

News stories over the past two years, including major cybersecurity attacks on the NHS and individual hospitals, show how precarious an area this is.

In terms of risk management, Mr Norton also points out that there needs to be a way of quality-assessing AI providers.

The potential is “phenomenal”, he said, but the service must “slow down a little bit”. “You’ve got to avoid the Wild West here,” he adds.




Tools & Platforms

Duke AI program emphasizes critical thinking for job security


Duke’s AI program is spearheaded by a professor who is not just teaching: he has also built his own AI model.

Professor Jon Reifschneider says we’ve already entered a new era of teaching and learning across disciplines.

He says, “We have folks that go into healthcare after they graduate, go into finance, energy, education, etc. We want them to bring with them a set of skills and knowledge in AI, so that they can figure out: ‘How can I go solve problems in my field using AI?'”

He wants his students to become literate in AI, which is a challenge in a field he describes as a moving target. 

“I think for most people, AI is kind of a mysterious black box that can do somewhat magical things, and I think that’s very risky to think that way, because you don’t develop an appreciation of when you should use it and when you shouldn’t use it,” Reifschneider told WRAL News.

Student Harshitha Rasamsetty said she is learning the strengths and shortcomings of AI.

“We always look at the biases and privacy concerns and always consider the user,” she said.

The students in Duke’s engineering master’s programs span a range of backgrounds, countries and even ages. Jared Bailey paused his insurance career in Florida to get a handle on the AI being deployed company-wide.

He was already using AI tools when he wondered, “What if I could crack them open and adjust them myself and make them better?”

John Ernest studied engineering as an undergraduate but sought job security in AI.

“I hear news every day that AI is replacing this job, AI is replacing that job,” he said. “I came to a conclusion that I should be a part of a person building AI, not be a part of a person getting replaced by AI.”

Reifschneider thinks warnings about AI taking jobs are overblown. 

In fact, he wants his students to come away understanding that humans have a quality AI can’t replace. That’s critical thinking. 

Reifschneider says AI “still relies on humans to guide it in the right direction, to give it the right prompts, to ask the right questions, to give it the right instructions.”

“If you can’t think, well, AI can’t take you very far,” Bailey said. “It’s a car with no gas.”

Reifschneider told WRAL that he thinks children as young as elementary school students should begin learning how to use AI, when it’s appropriate to do so, and how to use it safely.

WRAL News went inside Wake County schools to see how AI is being used and what safeguards the district has in place to protect students. Watch that story Wednesday on WRAL News.




Tools & Platforms

WA state schools superintendent seeks $10M for AI in classrooms


This article originally appeared on TVW News on Sept. 11, 2025.

Washington’s top K-12 official is asking lawmakers to bankroll a statewide push to bring artificial intelligence tools and training into classrooms in 2026, even as new test data show slow, uneven academic recovery and persistent achievement gaps.

Superintendent of Public Instruction Chris Reykdal told TVW’s Inside Olympia that he will request about $10 million in the upcoming supplemental budget for a statewide pilot program to purchase AI tutoring tools — beginning with math — and fund teacher training. He urged legislators to protect education from cuts, make structural changes to the tax code and act boldly rather than leaving local districts to fend for themselves. “If you’re not willing to make those changes, don’t take it out on kids,” Reykdal said.

The funding push comes as new Smarter Balanced assessment results show gradual improvement but highlight persistent inequities. State test scores have ticked upward, and student progress rates between grades are now mirroring pre-pandemic trends. Still, higher-poverty communities are not improving as quickly as more affluent peers. About 57% of eighth graders met foundational math progress benchmarks — better than most states, Reykdal noted, but still leaving four in 10 students short of university-ready standards by 10th grade.

Reykdal cautioned against reading too much into a single exam, emphasizing that Washington consistently ranks near the top among peer states. He argued that overall college-going rates among public school students show they are more prepared than the test suggests. “Don’t grade the workload — grade the thinking,” he said.

Artificial intelligence, Reykdal said, has moved beyond the margins and into the mainstream of daily teaching and learning: “AI is in the middle of everything, because students are making it in a big way. Teachers are doing it. We’re doing it in our everyday lives.”

OSPI has issued human-centered AI guidance and directed districts to update technology policies, clarifying how AI can be used responsibly and what constitutes academic dishonesty. Reykdal warned against long-term contracts with unproven vendors, but said larger platforms with stronger privacy practices will likely endure. He framed AI as a tool for expanding customized learning and preparing students for the labor market, while acknowledging the need to teach ethical use.

Reykdal pressed lawmakers to think more like executives anticipating global competition rather than waiting for perfect solutions. “If you wait until it’s perfect, it will be a decade from now, and the inequalities will be massive,” he said.

With test scores climbing slowly and AI transforming classrooms, Reykdal said the Legislature’s next steps will be decisive in shaping whether Washington narrows achievement gaps — or lets them widen.



Paul W. Taylor is programming and external media manager at TVW News in Olympia.




Tools & Platforms

AI Leapfrogs, Not Incremental Upgrades, Are New Back-Office Approach

