AI Insights

AI reshapes ARDS care by predicting risk, guiding ventilation, and personalizing treatment



From early warnings to smarter ventilators, artificial intelligence is helping clinicians outpace ARDS, offering hope for more lives saved through personalized, data-driven care.

Review: Artificial intelligence and machine learning in acute respiratory distress syndrome management: recent advances. Image Credit: Design_Cells / Shutterstock

In a recent review published in the journal Frontiers in Medicine, a group of authors synthesized recent evidence on how artificial intelligence (AI) and machine learning (ML) enhance prediction, stratification, and treatment of acute respiratory distress syndrome (ARDS) across the patient journey.

Background

Every day, more than one thousand people worldwide enter an intensive care unit (ICU) with ARDS, and 35–45% of those with severe illness still die despite guideline-based ventilation and prone positioning. Conventional care works, yet it remains fundamentally supportive and cannot overcome the syndrome's striking biological and clinical heterogeneity. Meanwhile, the digital exhaust of modern ICUs, which includes continuous vital signs, electronic health records (EHRs), imaging, and ventilator waveforms, has outgrown the capabilities of unaided human cognition. AI and ML are increasingly being explored as tools that promise to transform this complexity into actionable insight. However, as the review notes, external validation, generalizability, and proof of real-world benefit remain crucial research needs: it is still unknown whether these algorithms actually improve survival, reduce disability, or lower costs.

Early Warning: Predicting Trouble Before It Starts

ML algorithms already flag patients likely to develop ARDS hours, and sometimes days, before clinical criteria are met. Convolutional neural networks (CNNs) trained on chest radiographs and ventilator waveforms, as well as gradient boosting models fed raw EHR data, have achieved area under the curve (AUC) values of up to 0.95 for detection or prediction tasks in specific settings, though performance varies across cohorts and model types. This shift from reactive diagnosis to proactive screening lets teams mobilize lung-protective ventilation, fluid stewardship, or transfer to high-acuity centers earlier, a practical advantage during coronavirus disease 2019 (COVID-19) surges when ICU beds are scarce. The review highlights that combining multiple data types (clinical, imaging, waveform, and even unstructured text) generally yields more accurate predictions. Still, real-world accuracy depends on data quality and external validation.
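The AUC figures quoted throughout these studies have a concrete meaning: AUC is the probability that a model scores a randomly chosen patient who develops ARDS higher than a randomly chosen patient who does not. For technically minded readers, a minimal Python sketch of the metric; all scores and labels below are invented for illustration, not data from any cited study:

```python
# Minimal AUC (area under the ROC curve) computation, the metric used
# to report early-warning model performance. Ties count as half a win.

def auc(labels, scores):
    """Probability that a random positive case outranks a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores from an early-warning model:
labels = [1, 1, 1, 0, 0, 0, 0]            # 1 = went on to develop ARDS
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]

print(round(auc(labels, scores), 3))       # prints 0.917
```

An AUC of 1.0 means perfect ranking and 0.5 is chance, so the 0.95 values cited above sit near the top of the scale, albeit only in the specific cohorts studied.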

Sharper Prognosis: Dynamic Risk Profiles

Once ARDS is established, knowing who is likely to deteriorate guides resource allocation and family counseling. Long short-term memory (LSTM) networks that ingest time-series vital signs and laboratory trends outperform conventional Sequential Organ Failure Assessment (SOFA) and Simplified Acute Physiology Score (SAPS II) tools; meta-analysis shows a concordance index of 0.84 versus 0.64–0.70 for the traditional scores. By continuously updating risk, these models enable clinicians to decide when to escalate to extracorporeal membrane oxygenation (ECMO) or palliative pathways, rather than relying on “worst value in 24 hours” snapshots. However, the review cautions that most current models focus on mortality risk, and broader outcome prediction (e.g., disability, quality of life) remains underexplored.

Phenotypes and Endotypes

Latent class analysis (LCA) applied to multicenter trial data revealed two reproducible inflammatory phenotypes: hyper-inflammatory, characterized by interleukin-6 surges and a 40–50% mortality rate, and hypo-inflammatory, associated with less organ failure and a roughly 20% mortality rate. Treatment responses diverge: in secondary trial analyses, high positive end-expiratory pressure (PEEP) appeared to benefit the hyper-inflammatory group yet harm the hypo-inflammatory group. Supervised gradient boosting models can now assign these phenotypes at the bedside using routine labs and vitals with an accuracy of 0.94–0.95, paving the way for phenotype-specific trials of corticosteroids, fluid strategies, or emerging biologics. The review also describes additional ARDS subtypes based on respiratory mechanics, radiology, or multi-omics data, and emphasizes that real-time bedside subtyping is a critical goal for future precision medicine.

Smarter Breathing Support

AI also refines everyday ventilation decisions. A multi-task neural network simulates how oxygenation and compliance will change 45 minutes after a PEEP adjustment, enabling virtual “test drives” instead of trial-and-error titration. Mechanical power (MP) is the energy delivered to the lung each minute and exceeds 12 Joules per minute in patients at the highest risk of ventilator-induced injury. XGBoost models individualize MP thresholds and predict ICU mortality with an AUC of 0.88. For patient-ventilator asynchrony (PVA), deep learning detectors sift through millions of breaths and achieve over 90% accuracy, promising real-time alarms or even closed-loop ventilators that self-correct harmful cycling. The review notes, however, that most PVA detection models remain offline, and real-time actionable systems are still in development.
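Mechanical power can also be estimated at the bedside from routine settings. A widely used simplified surrogate formula, attributed to Gattinoni and colleagues, can be sketched in a few lines; the ventilator settings below are invented for illustration and are not from the review:

```python
# Simplified mechanical power estimate from routine ventilator settings
# (the widely cited surrogate formula of Gattinoni and colleagues).
# The constant 0.098 converts L * cmH2O / min into joules / min.

def mechanical_power(rr, vt_l, p_peak, p_plat, peep):
    """rr: breaths/min, vt_l: tidal volume in liters,
    pressures in cmH2O. Returns an estimate in J/min."""
    return 0.098 * rr * vt_l * (p_peak - (p_plat - peep) / 2)

# Illustrative (made-up) settings for a ventilated patient:
mp = mechanical_power(rr=20, vt_l=0.45, p_peak=28, p_plat=24, peep=10)
print(round(mp, 1))   # prints 18.5, above the ~12 J/min injury threshold
```

The XGBoost models described above go a step further than a fixed 12 J/min cutoff, individualizing the threshold to each patient's mechanics.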

High Stakes Decisions: ECMO and Liberation

ECMO can salvage gas exchange but consumes significant staffing and supply resources. The hierarchical Prediction, Early Monitoring, and Proactive Triage for Extracorporeal Membrane Oxygenation (PreEMPT ECMO) deep network combines demographics, laboratory results, and minute-by-minute vital signs to forecast ECMO need up to 96 hours in advance (AUC = 0.89 at 48 hours), aiding referral timing and equitable resource utilization. At the other end of the journey, AI-based systems are being explored to predict when ventilator weaning will succeed, shortening mechanical ventilation and hospital stay in proof-of-concept studies. However, the review notes that studies of AI for weaning and extubation have generally been conducted in general ICU populations rather than ARDS-specific cohorts, and direct evidence in ARDS remains scarce. Integrating both tools could one day create a complete life-cycle decision platform, but this remains an aspirational goal.

Next Generation Algorithms and Real World Barriers

Graph neural networks (GNNs) model relationships among patients, treatments, and physiologic variables, potentially uncovering hidden risk clusters. Federated learning (FL) trains shared models across hospitals without moving protected health data, improving generalizability. Self-supervised learning (SSL) leverages billions of unlabeled waveforms to pre-train robust representations. Large language models (LLMs) and emerging multimodal variants act as orchestrators, calling specialized image or waveform models and generating human-readable plans. The review additionally highlights causal inference and reinforcement learning (RL) as promising approaches for simulating “what-if” scenarios and for developing AI agents that make sequential decisions in dynamic ICU environments. These techniques promise richer insights but still face hurdles related to data quality, interpretability, and workflow integration that must be addressed before routine clinical adoption.
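The core step of federated learning, federated averaging, is straightforward to sketch: each hospital trains on its own patients, and only the resulting model weights, never the records themselves, are pooled centrally. A toy illustration in plain Python; the hospital counts and weight values are invented:

```python
# Toy federated averaging (FedAvg): each site contributes only model
# weights, so raw patient records never leave the hospital. Sites are
# weighted by their number of local training samples, as in standard
# FedAvg.

def fed_avg(site_weights, site_sizes):
    """site_weights: one weight vector per hospital,
    site_sizes: number of local training samples at each site."""
    total = sum(site_sizes)
    dim = len(site_weights[0])
    return [
        sum(w[i] * n for w, n in zip(site_weights, site_sizes)) / total
        for i in range(dim)
    ]

# Three hypothetical hospitals, each with a locally trained 2-weight model:
weights = [[0.2, 1.0], [0.4, 0.8], [0.3, 0.6]]
sizes = [100, 300, 100]
print(fed_avg(weights, sizes))   # averaged global model, ~[0.34, 0.8]
```

Real deployments layer secure aggregation and privacy protections on top of this averaging step, but the data-stays-local principle is what makes the approach attractive for multi-hospital ICU research.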

In the area of drug discovery, the review notes that while AI has enabled target and compound identification in related lung diseases (such as idiopathic pulmonary fibrosis), the application of generative AI for ARDS-specific therapies remains largely conceptual at present.

Conclusions

To summarize, current evidence shows that AI and ML can detect ARDS earlier, stratify risk more precisely, tailor ventilation to individual lung mechanics, and guide costly therapies such as ECMO. Phenotype-aware algorithms already flag patients who benefit from, or are harmed by, high PEEP, while neural networks forecast MP-related injury and PVA in real time. Next-generation GNNs, FL, RL, causal inference, and LLMs may weave disparate data into cohesive bedside recommendations. Rigorous prospective trials, transparent reporting, and clinician-friendly interfaces remain essential to translate these digital advances into lives saved and disabilities prevented.



Artificial intelligence offering political practices advice about robocalls in Montana GOP internal spat



A version of this story first appeared in Capitolized, a weekly newsletter featuring expert reporting, analysis and insight from the editors and reporters of Montana Free Press.


The robocalls to John Sivlan’s phone this summer just wouldn’t let up. Recorded messages were coming in several times a day from multiple phone numbers, all trashing state Republican Rep. Llew Jones, a shrewd, 11-term lawmaker with an earned reputation for skirting party hardliners to pass the Legislature’s biggest financial bills, including the state budget. 

Sivlan, 80, a lifelong Republican who lives in Jones’ northcentral Montana hometown of Conrad, wasn’t amused by the general election-style attacks hitting his phone nearly a year before the next legislative primary. Jones, in turn, wasn’t impressed with the Commissioner of Political Practices’ advice that nothing could be done about the calls. The COPP polices campaigns and lobbying in Montana, and the opinion the office issued in response to Jones’ request to review the robocalls was written not by an office employee but by ChatGPT.

“They were coming in hot and heavy in July,” Sivlan said on Aug. 26 while scrolling through his messages. “There must be dozens of these.”

“Did you know that Llew Jones sides with Democrats more than any other Republican in the Montana Legislature? If he wants to vote with Democrats, Jones should at least switch parties,” the robocalls said.

“And then they list his number and tell you to call him and tell him,” Sivlan continued.

In addition to the robocalls, a string of ads running on streaming services targeted Jones. On social media, placement ads depicted Jones as the portly, white-suited county commissioner Boss Hogg from “The Dukes of Hazzard” TV comedy of the early 1980s. None of the ads or calls disclosed who was paying for them.

Jones told Capitolized that voters were annoyed by the messaging, but said most people he’s talked to weren’t buying into it. He assumes the barrage was timed to reach voters before his own campaign outreach for the June 2026 primary.

The COPP’s new AI helper concluded that only ads appearing within 60 days of an election could be regulated by the office. The ads would also have to expressly advise the public on how to vote to fall under campaign finance reporting requirements.

In the response emailed to Jones, the AI program followed its opinion with a very chipper “Would you like guidance on how to monitor or respond to such ads effectively?”

“I felt that it was OK,” Commissioner Chris Gallus said of the AI opinion provided to Jones. “There were some things that I probably would have been more thorough about. Really at this point I wanted Llew to see where we were at that time with the (AI) build-out, more than explicit instructions.”

The plan is to prepare the COPP’s AI system for the coming 2026 primary elections, at which point members of the COPP staff will review the bot’s responses and supplement when necessary. But the system is already on the commissioner’s website, offering advice based solely on Montana laws and COPP’s own data, and not on what it might scrounge from the internet, according to Gallus.

Earlier this year, the Legislature put limits on AI use by government agencies, including a requirement for government disclosure and oversight of decisions and recommendations made by AI systems. The bill, by Rep. Braxton Mitchell, R-Columbia Falls, was opposed by only a handful of lawmakers.

Gallus said the artificial intelligence system at COPP is being built by 3M Data, a vendor with prior machine-learning experience for the Red Cross and the oil companies Shell and Exxon, where its systems gathered and analyzed copious amounts of operational data. COPP has about $38,000 to work with, Gallus said.

The pre-primary battles within the Montana Republican Party are giving the COPP’s machine learning an early test, while also exposing loopholes in campaign reporting laws. 

There is no disclosure requirement for ads placed on streaming services, unlike ads on traditional radio and TV stations, cable, and satellite, whose details must be available for public inspection under Federal Communications Commission rules. The state would have to fill that gap, something the FCC and Federal Election Commission have struggled to do since 2011.

Streaming now accounts for 45% of all TV viewing, according to Nielsen, more than broadcast and cable combined. Cable viewership has declined 39% since 2021.

“When we asked KSEN (a popular local radio station) who was paying for the ads, they didn’t know,” Jones said. “People were listening on Alexa.”

Nonetheless, Jones said the robocalls are coming from within the Republican house. An effort by hardliners to purge more centrist legislators from the party has been underway since April, when the MTGOP executive board began “rescinding recognition” of the state Republican senators who collaborated with a bipartisan group of Democrats and House Republicans to pass a budget, increase teacher pay and lower taxes on primary homes.

Being Republican doesn’t require recognition by the MTGOP “e-board,” as it’s known. In June, when the party chose new leadership, newly elected Chair Art Wittich said the party would no longer stay neutral in primary elections and would look for conservative candidates to support.

Republicans who have registered campaigns for the Legislature were issued questionnaires Aug. 17 by the Conservative Governance Committee, a group chaired by Keith Regier, a former state legislator and father of a Flathead County family that has sent three members to the Montana Legislature; in 2023, Keith Regier and two of his children served in the Legislature simultaneously.

Membership for the Conservative Governance Committee and a new Red Policy Committee, which will set legislative priorities, is still a work in progress, new party spokesman Ethan Holmes said this week.

The 14 questions, which Regier informed candidates could be used to determine party support of campaigns, hit on standard Republican fare: guns, “thoughts on transgenderism,” and at what point human life starts. There was no question about willingness to follow caucus leadership. Regier’s son, Matt, was elected Senate president in late 2024 but lost control of his caucus on the first day of the legislative session in January.





“AI Is Not Intelligent at All” – Expert Warns of Worldwide Threat to Human Dignity



Credit: Shutterstock

Opaque AI systems risk undermining human rights and dignity. Global cooperation is needed to ensure protection.

The rise of artificial intelligence (AI) has changed how people interact, but it also poses a global risk to human dignity, according to new research from Charles Darwin University (CDU).

Lead author Dr. Maria Randazzo, from CDU’s School of Law, explained that AI is rapidly reshaping Western legal and ethical systems, yet this transformation is eroding democratic principles and reinforcing existing social inequalities.

She noted that current regulatory frameworks often overlook basic human rights and freedoms, including privacy, protection from discrimination, individual autonomy, and intellectual property. This shortfall is largely due to the opaque nature of many algorithmic models, which makes their operations difficult to trace.

The black box problem

Dr. Randazzo described this lack of transparency as the “black box problem,” noting that the decisions produced by deep-learning and machine-learning systems cannot be traced by humans. This opacity makes it challenging for individuals to understand whether and how an AI model has infringed on their rights or dignity, and it prevents them from effectively pursuing justice when such violations occur.

Dr. Maria Randazzo has found AI has reshaped Western legal and ethical landscapes at unprecedented speed. Credit: Charles Darwin University

“This is a very significant issue that is only going to get worse without adequate regulation,” Dr. Randazzo said.

“AI is not intelligent in any human sense at all. It is a triumph in engineering, not in cognitive behaviour.

“It has no clue what it’s doing or why – there’s no thought process as a human would understand it, just pattern recognition stripped of embodiment, memory, empathy, or wisdom.”

Global approaches to AI governance

Currently, the world’s three dominant digital powers – the United States, China, and the European Union – are taking markedly different approaches to AI, leaning on market-centric, state-centric, and human-centric models, respectively.

Dr. Randazzo said the EU’s human-centric approach is the preferred path to protect human dignity, but without a global commitment to this goal, even that approach falls short.

“Globally, if we don’t anchor AI development to what makes us human – our capacity to choose, to feel, to reason with care, to empathy and compassion – we risk creating systems that devalue and flatten humanity into data points, rather than improve the human condition,” she said.

“Humankind must not be treated as a means to an end.”

Reference: “Human dignity in the age of Artificial Intelligence: an overview of legal issues and regulatory regimes” by Maria Salvatrice Randazzo and Guzyal Hill, 23 April 2025, Australian Journal of Human Rights.
DOI: 10.1080/1323238X.2025.2483822

The paper is the first in a trilogy Dr. Randazzo will produce on the topic.






Mexico says works created by AI cannot be granted copyright



In an era when artwork is increasingly influenced and even created by artificial intelligence (AI), Mexico’s Supreme Court (SCJN) has ruled that works generated exclusively by AI cannot be registered under the copyright regime. According to the ruling, authorship belongs solely to humans.

“This resolution establishes a legal precedent regarding AI and intellectual property in Mexico,” the Copyright National Institute (INDAUTOR) said on Aug. 28 in a statement on its official X account following the SCJN’s decision.

The SCJN’s unanimous decision said that the Federal Copyright Law (LFDA) reserves authorship to humans, and that any creative invention generated exclusively by algorithms lacks a human author to whom moral rights can be attributed. 

According to the Supreme Court, automated systems do not possess the necessary qualities of creativity, originality and individuality that are considered human attributes for authorship.

“The SCJN resolved that copyright is a human right exclusive to humans derived from their creativity, intellect, feelings and experiences,” it said. 

The Supreme Court resolved that works generated autonomously by artificial intelligence do not meet the originality requirements of the LFDA. It said that those requirements are constitutional as limiting authorship to humans is “objective, reasonable and compatible with international treaties.” 

It further added that AI cannot be granted protections on the same basis as humans, since the two have intrinsically different characteristics.

What was the case about?

In August 2024, INDAUTOR denied the registration application for “Virtual Avatar: Gerald García Báez,” created with an AI dubbed Leonardo, on the basis that it lacked human intervention.

The AI-created avatar in question. (SCJN)

“The registration was denied on the grounds that the Federal Copyright Law (LFDA) requires that works be of human creation, with the characteristic of originality as an expression of the author’s individuality and personality,” INDAUTOR said. 

The applicant contested the denial, arguing that creativity should not be restricted to humans. In the applicant’s view, excluding works generated by AI violated the principles of equality, human rights and international treaties, including the United States-Mexico-Canada Agreement (USMCA) and the Berne Convention.

However, the Supreme Court clarified that such international treaties do not oblige Mexico to give copyrights to non-human entities or to extend the concept of authorship beyond what is established in the LFDA.  

Does the resolution allow registration of works generated with AI? 

Yes, provided there is a substantive and demonstrable human contribution. This means that works created in collaboration with AI, in which humans direct, select, edit or transform the result generated by AI until it is endowed with originality and a personal touch, are subject to registration before INDAUTOR. 

Intellectual property specialists consulted by the newspaper El Economista explained that to register creative work developed in collaboration with AI, it is important to document the human intervention and submit the creative process in a way that aligns with the LFDA. 

Mexico News Daily


