For patients meeting a nurse powered by artificial intelligence (AI) technology for the first time, the process remains the same, though they are not communicating with a human.
AI nurses can quickly analyse patients’ medical records and offer recommendations and diagnoses. They can also draw on AI-powered diagnostic tools that analyse X-ray images, helping doctors better examine patient disorders.
Welcome to the world of AI, which is increasingly integrated into medical care services.
The Explainer looks at some AI technologies that could shape the future of the hospital business.
What is an AI nurse?
An AI nurse is not a physical robot, but an AI-powered application in nursing.
Many hospitals are using AI nurses to help human nurses with clinical decision-making, patient monitoring and carrying out administrative tasks.
Thonburi Healthcare Group (THG), which operates Thonburi Hospital, is interested in this technology and is applying it in tandem with smart registration for patients.
“Smart registration allows patients to register via tablets or kiosks at Thonburi Hospital,” said Pumipat Chatnoraset, chief financial officer of THG.
The software for this registration was jointly developed by THG and Agnos Health, a health tech company specialising in AI and healthcare automation.
While smart registration simplifies the registration process and reduces waiting time for patients, an AI nurse obtains preliminary information from patients, including their detailed symptoms and what medication they are taking, said Mr Pumipat.
The AI nurse uses its quick computing ability to analyse large datasets and automate tasks, enabling nurses to focus more on taking care of patients and helping doctors plan treatments.
Clinical decision support systems, also known as CDSS, are a key feature of AI nurses, helping medical staff analyse different kinds of patient data, including laboratory results, images and electronic health records. This leads to evidence-based advice for further examination as well as better care planning and medication management.
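The kind of rule-based flagging a CDSS performs can be sketched in a few lines. This is an illustrative toy only, not a real clinical system; the field names and reference ranges below are hypothetical.

```python
# Toy clinical-decision-support rule: flag out-of-range lab values.
# All thresholds and field names here are hypothetical, for illustration only.

def flag_lab_results(labs: dict) -> list:
    """Return advisory flags for lab values outside illustrative ranges."""
    flags = []
    # Hypothetical reference thresholds for two common lab tests
    if labs.get("fasting_glucose_mg_dl", 0) > 125:
        flags.append("fasting glucose above diabetic threshold; suggest HbA1c test")
    if labs.get("creatinine_mg_dl", 0) > 1.3:
        flags.append("elevated creatinine; review renal function")
    return flags

print(flag_lab_results({"fasting_glucose_mg_dl": 140, "creatinine_mg_dl": 0.9}))
```

A production CDSS would of course combine far richer data sources and validated clinical guidelines; the point here is only that such systems surface evidence-based prompts for human staff rather than making decisions themselves.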
The goal is for the technology to give nurses more time for tasks that require direct two-way communication with patients, compassion and critical thinking, according to THG.
What other AI-driven technologies are used in hospitals?
Machine learning, a subset of AI, is a technology that plays an increasing role in helping doctors with diagnostics and treatment planning as well as operational efficiency, potentially leading to more personalised and proactive care, said Mr Pumipat.
Known as ML, the technology enables systems to learn from data and enhance their performance without being explicitly programmed.
ML can identify patterns in patient data to predict diseases early and recommend personalised treatments.
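The pattern-matching idea behind such predictions can be illustrated with a toy nearest-neighbour classifier. Everything below is invented for illustration: the features, records and labels are hypothetical, and a real system would use a proper ML library trained on validated clinical data.

```python
# Minimal sketch of ML-style risk prediction: classify a new patient record
# by comparison with labelled historical records. Purely illustrative data.
import math

# Each record: (age, bmi, systolic_bp) with a label (1 = disease present)
train = [
    ((35, 22.0, 118), 0),
    ((42, 24.5, 122), 0),
    ((58, 31.0, 148), 1),
    ((63, 29.5, 155), 1),
]

def predict(patient, k=3):
    """Classify by majority vote among the k nearest training records."""
    dists = sorted(
        (math.dist(patient, features), label) for features, label in train
    )
    votes = [label for _, label in dists[:k]]
    return 1 if sum(votes) > k / 2 else 0

print(predict((60, 30.0, 150)))  # prints 1: nearest records are mostly positive
```

The same principle, scaled to millions of records and far richer features, is what lets ML models surface early-warning patterns that a human reviewer might miss.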
The technology can carry out tasks related to medical imaging and diagnostics by increasing the accuracy of X-ray, magnetic resonance imaging (MRI) and computed tomography (CT) scans.
The ability of AI to improve CT and MRI scans can also reduce radiation exposure.
In addition, AI can help medical staff better manage data interoperability between computer systems and keep track of patients’ health, said Mr Pumipat.
Data interoperability allows different systems and software to share data, enabling providers to facilitate seamless care transitions.
AI also supports the use of wearable devices by patients for remote health monitoring by hospitals. Medical staff can continually collect patient data, which may lead to early detection of health issues.
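The monitoring loop described above can be sketched as a simple threshold alert over a stream of readings. The safe range, timestamps and data source here are hypothetical; real remote-monitoring systems use clinically validated limits and per-patient baselines.

```python
# Hedged sketch of remote health monitoring: scan a stream of vital-sign
# readings and surface those outside a safe range. Illustrative bounds only.

SAFE_HEART_RATE = range(50, 121)  # beats per minute, hypothetical bounds

def monitor(readings):
    """Yield (timestamp, bpm) pairs that fall outside the safe range."""
    for timestamp, bpm in readings:
        if bpm not in SAFE_HEART_RATE:
            yield timestamp, bpm

stream = [("08:00", 72), ("08:05", 75), ("08:10", 132), ("08:15", 70)]
print(list(monitor(stream)))  # prints [('08:10', 132)]
```

In practice the alerting step would notify medical staff rather than print, which is what enables the early detection of health issues mentioned above.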
Like many industries, hospital and healthcare businesses are using high technology and innovations to develop new treatments that meet patients’ needs, he said.
“Digital transformation through the use of AI is an effective way to make medical service providers more productive and engaged with patients,” said Mr Pumipat.
Do hospitals need to invest in new technologies, particularly AI?
The investment is crucial, though it is costly and will increase the financial burden on hospitals, he said.
Digital technologies in the medical field, including AI-powered devices, can improve customer services and provide patients with quick and easy access to medical information and treatments.
The need to invest in new technologies means hospitals, which are now encountering rising costs and increasing patient demand, will face more financial pressure, said Mr Pumipat.
However, though AI-driven automation, predictive analytics and generative AI tools are expensive, they can optimise workflows and enhance physical examination, making them cost-effective in the long term, according to media reports.
AI can help reduce healthcare spending by US$200-360 billion a year, a savings of 5-10%, said the London-based Centre for Economic Policy Research.
The new technologies, including AI, are needed to support hospitals’ plans to set up “centres of excellence” to treat specific, complicated diseases, said Mr Pumipat.
These centres are important to serve growing demand from patients seeking treatments for cancer, heart and brain disorders, he said.
“Many hospitals have increased investment in special centres and modern medical equipment to serve patients who want the best treatments,” said Mr Pumipat.
Like THG, other hospitals are keen on adopting new healthcare technology and treatments to better serve patients.
Vimut Holding Hospital Co, a unit of real estate developer Pruksa Holding, earlier announced it had spent 10 million baht developing a centre of excellence for the treatment of lung disease, in response to the rise in respiratory disorders among the public.
According to the World Health Organization Strategy 2025-2028 and data from the Thai Health Promotion Foundation, Thailand faces a critical air pollution crisis from PM2.5 ultra-fine dust. Bangkok was ranked eighth in a global survey of cities suffering from the worst levels of air quality.
(TNS) — Illinois lawmakers have so far achieved mixed results in efforts to regulate the burgeoning technology of artificial intelligence, a task that butts up against moves by the Trump administration to eliminate restrictions on AI.
AI-related bills introduced during the spring legislative session covered areas including education, health care, insurance and elections. Supporters say the measures are intended to address potential threats to public safety or personal privacy and to counter any deceitful actions facilitated by AI, while not hindering innovation.
Although several of those measures failed to come to a vote, the Democratic-controlled General Assembly is only six months into its two-year term and all of the legislation remains in play. But going forward, backers will have to contend with the Trump administration's plans for approaching AI.
Days into Trump’s second term in January, his administration rescinded a 2023 executive order from Democratic President Joe Biden that emphasized the “highest urgency on governing the development and use of AI safely and responsibly.”
Trump replaced that policy with a declaration that “revokes certain existing AI policies and directives that act as barriers to American AI innovation.”
Last week, the states got a reprieve from the federal government after a provision aimed at preventing states from regulating AI was removed from the massive, Trump-backed tax breaks bill that he signed into law. Still, Democratic Illinois state Rep. Abdelnasser Rashid, who co-chaired a legislative task force on AI last year, criticized Trump’s decision to rescind Biden’s AI executive order that Rashid said “set us on a positive path toward a responsible and ethical development and deployment of AI.”
Republican state Rep. Jeff Keicher of Sycamore agreed on the need to address any potential for AI to jeopardize people’s safety. But many GOP legislators have pushed back on Democratic efforts to regulate the technology and expressed concerns such measures could hamper innovation and the ability of companies in the state to remain competitive.
“If we inhibit AI and the development that could possibly come, it’s just like we’re inhibiting what you can use metal for,” said Keicher, the Republican spokesperson for the House Cybersecurity, Data Analytics, & IT (Information Technology) Committee.
“And what we’re going to quickly see is we’re going to see the Chinese, we’re going to see the Russians, we’re going to see other countries come up without restrictions with very innovative ways to use AI,” he said. “And I’d certainly hate in this advanced technological environment to have the state of Illinois or the United States writ large behind the eight ball.”
Last December, a task force co-led by Rashid and composed of officials from Democratic Gov. JB Pritzker's administration, educators and other lawmakers compiled a report detailing some of the risks presented by AI. It addressed the emergence of generative AI, a subset of the technology that can create text, code and images.
The report issued a number of recommendations including measures to protect workers in various industries from being displaced while at the same time preparing the workforce for AI innovation.
The report built on some of the AI-related measures passed by state lawmakers in 2024, including legislation subsequently signed by Pritzker making it a civil rights violation for employers to use AI if it subjects employees to discrimination, as well as legislation barring the use of AI to create child pornography and making possession of such artificially created images a felony.
In addition to those measures, Pritzker signed a bill in 2023 to make anyone civilly liable if they alter images of someone else in a sexually explicit manner through means that include AI.
In the final days of session in late May, lawmakers without opposition passed a measure meant to prevent AI chatbots from posing as mental health providers for patients in need of therapy. The bill also prohibits a person or a business from advertising or offering mental health services unless those services are carried out by licensed professionals.
It limits the use of AI in the work of those professionals, barring them, for example, from using the technology to make “independent therapeutic decisions.” Anyone found in violation of the measure could have to pay the state as much as $10,000 in fines.
The legislation awaits Pritzker’s signature.
State Rep. Bob Morgan, a Deerfield Democrat and the main House sponsor of the bill, said the measure is necessary at a time when there’s “more and more stories of AI inappropriately and in a dangerous way giving therapeutic advice to individuals.”
“We started to learn how AI was not only ill-equipped to respond to these mental health situations but actually providing harmful and dangerous recommendations,” he said.
Another bill sponsored by Morgan, which passed through the House but didn’t come to a vote in the Senate, would prevent insurers doing business in Illinois from denying, reducing or terminating coverage solely because of the use of an artificial intelligence system.
State Sen. Laura Fine, the bill’s main Senate sponsor, said the bill could be taken up as soon as the fall veto session in October, but noted the Senate has a year and a half to pass it before a new legislature is seated.
“This is a new horizon and we just want to make sure that with the use of AI, there’s consumer protections because that’s of utmost importance,” said Fine, a Democrat from Glenview who is also running for Congress. “And that’s really what we’re focusing on in this legislation is how do we properly protect the consumer.”
Measures to address the political use of “deepfakes,” a controversial AI phenomenon in which video or still images of a face, body or voice are digitally altered to appear as another person, have so far failed to gain traction in Illinois.
The deepfake tactic has been used in attempts to influence elections. During last year’s national elections, an audio deepfake of Biden in a robocall made it sound like he was telling New Hampshire voters not to vote.
According to the task force report, legislation regulating the use of deepfakes in elections has been enacted in some 20 states. During the previous two-year Illinois legislative term, which ended in early January, three bills addressing the issue were introduced but none passed.
Rashid reintroduced one of those bills this spring, to no avail. It would have banned the distribution of deceitful campaign material if the person doing so knew the information to be false and distributed it within 90 days of an election. The bill also would prohibit a person from sharing the material if it was being done “to harm the reputation or electoral prospects of a candidate” and change the voting behavior of electors by deliberately causing them to believe the misinformation.
Rashid said hurdles to passing the bill include whether to enforce civil and criminal penalties for violators. The measure also needs to be able to withstand First Amendment challenges, which the American Civil Liberties Union of Illinois has cited as a reason for its opposition.
“I don’t think anyone in their right mind would say that the First Amendment was intended to allow the public to be deceived by political deep fakes,” Rashid, of Bridgeview, said. “But … we have to do this in a really surgical way.”
Rashid is also among more than 20 Democratic House sponsors on a bill that would bar state agencies from using any algorithm-based decision-making systems without “continuous meaningful human review” if those systems could have an impact on someone’s civil liberties or their ability to receive public assistance. The bill is meant to protect against algorithmic bias, another threat the task force report sought to address. But the bill went nowhere in the spring.
One AI-related bill backed by Rashid that did pass through the legislature and awaits Pritzker’s signature would prohibit a community college from using artificial intelligence as the sole source of instruction for students.
The bill — which passed 93-22 in the House in the final two days of session after passing 46-12 in the Senate on May 21 — would allow community college faculty to use AI to augment course instruction.
Rashid said there were “technical reasons” for not including four-year colleges and universities in Illinois in the bill but said there’d be further discussions on whether the measure would be expanded to include those schools.
While he said he knows of no incidents of AI solely replacing classroom instruction, he explained “that’s the direction things may be moving” and that “the level of experimentation with AI in the education space is significant.”
“I fully support using AI to supplement instruction and to provide students with tailored support. I think that’s fantastic,” Rashid said. “What we don’t want is during a, for example, a budget crisis, or for cost-cutting measures, to start sacrificing the quality of education by replacing instructors with AI tools.”
While Keicher backed Morgan’s mental health services AI bill, he opposed Rashid’s community college bill, saying the language was “overly broad.”
“I think it’s too restrictive,” Keicher said. “And I think it would prohibit our education institutions in the state of Illinois from being able to capitalize on the AI space to the benefit of the students that are coming through the pipeline because whether we like it or not, we’ve all seen the hologram teachers out there on the sci-fi shows that instruct our kids. At some point, 50 years, 100 years, that’s going to be reality.”
Also on the education front, lawmakers advanced a measure that would help establish guidelines for elementary and high school teachers and school administrators on how to use AI. It passed 74-34 in the House before passing 56-0 in the Senate during the final hours of spring session.
According to the legislation, which has yet to be signed by Pritzker, the guidance should include explanations of basic artificial intelligence concepts, including machine learning, natural language processing, and computer vision; specific ways AI can be used in the classroom to inform teaching and learning practices “while preserving the human relationships essential to effective teaching and learning”; and how schools can address technological bias and privacy issues.
John Sonnenberg, a former director of eLearning for the State Board of Education, said AI is transforming education at a global level and that children should therefore be prepared to learn about the integration of AI and human intelligence.
“We’re kind of working toward, not only educating kids for their future but using that technology to help in that effort to personalize learning and do all the things in education we know we should be doing but up to this point and time we didn’t have the technology and the support to do it affordably,” said Sonnenberg, who supported the legislation. “And now we do.”
© 2025 Chicago Tribune. Distributed by Tribune Content Agency, LLC.
Editor’s note: The promise and peril of artificial intelligence has captivated Washington D.C., Silicon Valley, Wall Street and Hollywood. Composer Michael Yezerski has taken a hands-on approach to it: the composer behind scores for the likes of the Oscar-winning short The Lost Thing, Blindspotting (the movie and the series), Sean Byrne’s The Devil’s Candy, this year’s Dangerous Animals and the just-released Liam Neeson starrer Ice Road: Vengeance put the tech to the test, as he details in a guest column for Deadline.
The other week at a party, I was asked by a picture editor if I am feeling the threat of AI.
I honestly replied that I am not. But then he told me that he uses AI music generators in his everyday work as a picture editor for commercials and all of a sudden, I felt threatened. I found the conversation sobering, but it spurred me to look further into the world of AI Music Generators (websites that write music for you based on a prompt). Now I have questions but I don’t have any answers.
AI Music is here and it’s here to stay. I think that much is clear.
At the moment, the technology is still nascent, and it is impressive for what it can do already (The Velvet Sundown, anyone?). But will it ever surpass human musical achievement? I have my doubts.
Michael Yezerski
Chris Prestidge
Using the AIs, I generated a raft of instrumental tracks in a variety of styles (sticking to instrumentals because they are the most applicable to my work). The electronic tracks (EDM, dance pop, etc) were quite impressive, whilst I found cinematic and classical tracks to be less so. I have to assume that this is only temporary and that the models will soon turn their focus to more complex musical structures.
I found that the AIs were able to churn out derivative dance, pop, basic rock, metal, punk with relative ease and incredible speed. Now these don’t feel human (yet) but you can’t exactly write them off either. I could see a world where certain filmmakers gravitate to some of these options. However, to my ear, they can’t yet replicate the very real energy that a live band or a real piano player would bring to the same scene and harmonically they all feel a bit odd.
I can see real value in music professionals using some of these AIs as idea generators. In certain styles, they are a quick way to get around writer’s block. Even so, all the tracks contained choices that I would never make in my own style as composer, and right now, the interfaces do not allow for the kind of changes that I would want.
Of course there are very real issues of copyright ownership and moral rights here. Whose music have these AIs been trained on? The Society of Composers & Lyricists, the Songwriters Guild of America and Music Creators North America are warning their members about the serious implications of assigning AI companies the rights to train off their own music. And right now, there is a fierce campaign in Washington aimed at curbing AI companies’ push to have all training content treated as “fair use” regardless of copyright ownership. It should be noted here that a 10-year moratorium on states passing their own laws regulating AI was removed from the budget bill before it passed last week.
I understand the desire to train on existing works. It’s almost human.
The dilemma for all composers is that we do start out by imitating the writers we admire. We are looking for the secret formula, convinced that there actually is one. But over time, the only secret that I’ve found is that there is no secret. Does anyone really know why a particular song goes viral? Or why a great score works so well that it gets used as temp music in countless successive productions? We know great music when we hear it, creating it is hard.
James Cameron recently suggested that we should be focusing on the output of these AIs and not the training. I agree to a certain extent and I worry that a picture editor, with a knowledge of music that is nowhere near that of a professional musician, may not recognize when an AI has unintentionally committed a copyright violation. I could foresee a scenario whereby a piece of music will be synced to picture, broadcast, and then called out (resulting in a tricky battle of ownership and responsibility).
Music is that most human of communications.
A language built of thousands of little mistakes, accidents and inconsistencies that, at its very best, is transformative and life-affirming to the human ear. Great music triggers an emotional response that can evoke core memories, peak experiences and foster feelings of community and intimacy with others. When I write, it’s often the happy accidents, mistakes and weird connections that end up defining the score (like in Dangerous Animals, where we really had to break the mold to find the exact sound for the “shark scream” – a combination of wailing strings, performing a difficult glissando, accompanied by analogue synths).
‘Dangerous Animals’
IFC/Shudder
So while I may start in one direction, often something unexpected happens and I end up improving on the sound based on my own cultural, historical and contextual knowledge. Will an AI ever be able to do that? Can AI innovate or only emulate?
And this is where I think composers and performers have their argument.
Can an AI spend seven months with a director honing, searching, defining and redefining a sound for their narrative masterwork (not to mention providing emotional support during that time!)? Can an AI engage interesting and unusual performers to bring the music to life like Hans Zimmer does? Can an AI take all of our contemporary cultural knowledge and turn it into song lyrics that delight and surprise us like Lin-Manuel Miranda does?
As composers, we are specialists and we have immersed ourselves in an evolving language that is thousands of years old. That language thrives on innovation and falters when it becomes stale and repetitive. AI Music Generators have made it incredibly easy to “re-create” sounds on a never-before-imagined scale.
But that is never where the goalposts were.
For me at least, I’m always looking further out.