AI Research


FOR IMMEDIATE RELEASE                

Contact:  Ilana Nikravesh                             
Mount Sinai Press Office                              
212-241-9200                              
ilana.nikravesh@mountsinai.org

New Artificial Intelligence Model Accurately Identifies Which Atrial Fibrillation Patients Need Blood Thinners to Prevent Stroke
Mount Sinai late-breaking study could transform standard treatment course and has profound ramifications for global health

Conference: “Late Breaking Science” presentation at the European Society of Cardiology, session: AI-driven cardiovascular biomarkers and clinical decisions

Title: Graph Neural Network Automation of Anticoagulation Decision-Making

Date: Embargo lifts Monday, September 1, 4:00 pm EST

Newswise — Bottom Line: Mount Sinai researchers developed an AI model that makes individualized treatment recommendations for patients with atrial fibrillation (AF), helping clinicians decide whether to treat them with anticoagulants (blood thinner medications) to prevent stroke, the current standard of care in this population. The model offers a fundamentally new approach to clinical decision-making for AF patients and could represent a paradigm shift in the field.

In this study, the AI model recommended against anticoagulant treatment for up to half of the AF patients who otherwise would have received it based on standard-of-care tools. This could have profound ramifications for global health.

Why the study is important: AF is the most common abnormal heart rhythm, impacting roughly 59 million people globally. During AF, the top chambers of the heart quiver, which allows blood to become stagnant and form clots. These clots can then dislodge and go to the brain, causing a stroke. Blood thinners are the standard treatment for this patient population to prevent clotting and stroke; however, in some cases this medication can lead to major bleeding events.

This AI model uses the patient’s entire electronic health record to generate an individualized treatment recommendation. It weighs the risk of having a stroke against the risk of major bleeding (whether occurring spontaneously or as a result of treatment with the blood thinner). This approach to clinical decision-making is truly individualized compared with current practice, in which clinicians use risk scores and tools that estimate risk on average across the studied patient population, not for individual patients. The model instead provides a patient-level estimate of risk, which it uses to make an individualized recommendation that accounts for the benefits and risks of treatment for that person.
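
To make the net-benefit logic concrete, here is a minimal sketch in Python. It assumes a model has already produced per-patient probabilities of stroke and major bleeding with and without anticoagulation; the function, parameter names, weighting, and example values are hypothetical illustrations, not the study's actual graph neural network or its outputs.

```python
# Hypothetical net-benefit decision rule. All names and numbers here are
# illustrative assumptions; the study's model is a graph neural network
# trained on full electronic health records, not this formula.

def recommend_anticoagulation(p_stroke_untreated: float,
                              p_stroke_treated: float,
                              p_bleed_untreated: float,
                              p_bleed_treated: float,
                              bleed_weight: float = 1.0) -> bool:
    """Recommend treatment when the expected reduction in stroke risk
    outweighs the (weighted) expected increase in major-bleeding risk."""
    stroke_benefit = p_stroke_untreated - p_stroke_treated
    bleed_harm = p_bleed_treated - p_bleed_untreated
    return stroke_benefit > bleed_weight * bleed_harm

# Example with assumed patient-level probabilities:
treat = recommend_anticoagulation(p_stroke_untreated=0.040,
                                  p_stroke_treated=0.015,
                                  p_bleed_untreated=0.010,
                                  p_bleed_treated=0.025)
print("anticoagulate" if treat else "do not anticoagulate")  # anticoagulate
```

The bleed_weight parameter stands in for how a clinician or patient might value avoiding a major bleed relative to avoiding a stroke; the press release does not specify how the actual model trades these risks off.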

The study could revolutionize how clinicians treat this very common disease, minimizing both stroke and bleeding events, and it reflects a potential paradigm change in how clinical decisions are made.

Why this study is unique: This is the first known individualized AI model designed to make clinical decisions for AF patients using risk estimates derived from each patient’s actual clinical features. It computes an all-inclusive net-benefit recommendation to mitigate both stroke and bleeding.

How the research was conducted: Researchers trained the AI model on the electronic health records of 1.8 million patients, spanning 21 million doctor visits, 82 million notes, and 1.2 billion data points. The model generated a net-benefit recommendation on whether to treat each patient with blood thinners.

To validate the model, researchers tested its performance among 38,642 patients with atrial fibrillation within the Mount Sinai Health System. They also externally validated it on 12,817 patients from publicly available Stanford datasets.

Results: The model generated treatment recommendations aligned with mitigating both stroke and bleeding. It reclassified around half of the AF patients, who would have received anticoagulants under current treatment guidelines, as candidates to forgo anticoagulation.

What this study means for patients and clinicians: This study represents a new era in patient care, allowing for more personalized, tailored treatment plans for AF patients.

Quotes:  

“This study represents a profound modernization of how we manage anticoagulation for patients with atrial fibrillation and may change the paradigm of how clinical decisions are made,” says corresponding author Joshua Lampert, MD, Director of Machine Learning at Mount Sinai Fuster Heart Hospital. “This approach overcomes the need for clinicians to extrapolate population-level statistics to individuals while assessing the net benefit to the individual patient—which is at the core of what we hope to accomplish as clinicians. The model can not only compute initial recommendations, but also dynamically update recommendations based on the patient’s entire electronic health record prior to an appointment. Notably, these recommendations can be decomposed into probabilities for stroke and major bleeding, which relieves the clinician of the cognitive burden of weighing between stroke and bleeding risks not tailored to an individual patient, avoids human labor needed for additional data gathering, and provides discrete relatable risk profiles to help counsel patients.”

“This work illustrates how advanced AI models can synthesize billions of data points across the electronic health record to generate personalized treatment recommendations. By moving beyond the ‘one size fits none’ population-based risk scores, we can now provide clinicians with individual patient-specific probabilities of stroke and bleeding, enabling shared decision making and precision anticoagulation strategies that represent a true paradigm shift,” adds co-corresponding author Girish Nadkarni, MD, MPH, Chair of the Windreich Department of Artificial Intelligence and Human Health at the Icahn School of Medicine at Mount Sinai.

“Avoiding stroke is the single most important goal in the management of patients with atrial fibrillation, a heart rhythm disorder that is estimated to affect 1 in 3 adults sometime in their life,” says co-senior author Vivek Reddy, MD, Director of Cardiac Electrophysiology at the Mount Sinai Fuster Heart Hospital. “If future randomized clinical trials demonstrate that this AI model is even a fraction as effective in discriminating high- vs low-risk patients as observed in our study, the model would have a profound effect on patient care and outcomes.”

“When patients get test results or a treatment recommendation, they might ask, ‘What does this mean for me specifically?’ We created a new way to answer that question. Our system looks at your complete medical history and calculates your risk for serious problems like stroke and major bleeding prior to your medical appointment. Instead of just telling you what might happen, we show you both what and how likely it is to happen to you personally. This gives both you and your doctor a clearer picture of your individual situation, not just general statistics that may miss important individual factors,” says co-first author Justin Kauffman, Data Scientist with the Windreich Department of Artificial Intelligence and Human Health.

Mount Sinai Is a World Leader in Cardiology and Heart Surgery

Mount Sinai Fuster Heart Hospital at The Mount Sinai Hospital ranks No. 2 nationally for cardiology, heart, and vascular surgery, according to U.S. News & World Report®. It also ranks No. 1 in New York and No. 6 globally according to Newsweek’s “The World’s Best Specialized Hospitals.”  

It is part of Mount Sinai Health System, which is New York City’s largest academic medical system, encompassing seven hospitals, a leading medical school, and a vast network of ambulatory practices throughout the greater New York region. We advance medicine and health through unrivaled education and translational research and discovery to deliver care that is the safest, highest-quality, most accessible and equitable, and the best value of any health system in the nation. The Health System includes approximately 9,000 primary and specialty care physicians; 10 free-standing joint-venture centers throughout the five boroughs of New York City, Westchester, Long Island, and Florida; and 48 multidisciplinary research, educational, and clinical institutes. Hospitals within the Health System are consistently ranked by Newsweek’s® “The World’s Best Smart Hospitals” and by U.S. News & World Report’s® “Best Hospitals” and “Best Children’s Hospitals.” The Mount Sinai Hospital is on the U.S. News & World Report’s® “Best Hospitals” Honor Roll for 2025-2026.

For more information, visit https://www.mountsinai.org or find Mount Sinai on Facebook, Instagram, LinkedIn, X, and YouTube.

For more Mount Sinai artificial intelligence news, visit: https://icahn.mssm.edu/about/artificial-intelligence.   


AI Research

Artificial intelligence loses out to humans in credibility during corporate crisis responses


As artificial intelligence tools become increasingly integrated into public relations workflows, many organizations are considering whether these technologies can handle high-stakes communication tasks such as crisis response. A new study published in Corporate Communications: An International Journal provides evidence that, at least for now, human-written crisis messages are perceived as more credible and reputationally beneficial than those authored by AI systems.

The rise of generative artificial intelligence has raised questions about its suitability for replacing human labor in communication roles. In public relations, AI tools are already used for media monitoring, message personalization, and social media management. Some advocates even suggest that AI could eventually author press releases or crisis response messages.

However, prior studies have found that people often view AI-generated messages with suspicion. Despite improvements in the sophistication of these systems, the use of AI can still reduce perceptions of warmth, trustworthiness, and competence. Given the importance of credibility and trust in public relations, especially during crises, the study aimed to evaluate how the perceived source of a message—human or AI—affects how people interpret crisis responses.

The researchers also wanted to assess whether the tone or strategy of a message—whether sympathetic, apologetic, or informational—would influence perceptions. Drawing on situational crisis communication theory, they hypothesized that more accommodating responses might boost credibility and protect organizational reputation, regardless of the source.

“Our interest in understanding how people judge the credibility of AI-generated text grew out of a graduate class Ayman Alhammad (the lead author) took with Cameron Piercy (the third author). They talked about research about trust in AI during the class and the questions we posed in our study naturally grew out of that,” said study author Christopher Etheridge, an assistant professor in the William Allen White School of Journalism and Mass Communications at the University of Kansas.

To explore these questions, the researchers designed a controlled experiment using a hypothetical crisis scenario. Participants were told about a fictitious company called the Chunky Chocolate Company, which was facing backlash after a batch of its chocolate bars reportedly caused consumers to become ill. According to the scenario, the company had investigated the incident and determined that the problem was due to product tampering by an employee.

Participants were then shown one of six possible press releases responding to the crisis. The releases varied in two key ways. First, they were attributed either to a human spokesperson (“Chris Smith”) or to an AI system explicitly labeled as such. Second, the tone of the message followed one of three common strategies: informational (providing details about the incident), sympathetic (expressing empathy for affected customers), or apologetic (taking responsibility and issuing an apology).

The wording of the messages was carefully controlled to ensure consistency across versions, with only the source and emotional tone changing between conditions. After reading the message, participants were asked to rate the perceived credibility of the author, the credibility of the message, and the overall reputation of the company. These ratings were made using standardized scales based on prior research.
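
For readers who want the design spelled out, here is a minimal Python sketch of the two-by-three between-subjects setup described above. The condition labels come from the article; the random-assignment function and demo loop are assumptions for illustration, not the researchers' actual procedure.

```python
# Sketch of the 2 (source) x 3 (strategy) experimental design.
# Labels follow the article; the assignment logic is an assumption.
import itertools
import random

sources = ["human spokesperson (Chris Smith)", "AI system"]
strategies = ["informational", "sympathetic", "apologetic"]

# The six press-release conditions: each source crossed with each strategy.
conditions = list(itertools.product(sources, strategies))

def assign_condition() -> tuple[str, str]:
    """Randomly assign a participant to one of the six conditions."""
    return random.choice(conditions)

for participant in range(3):  # demo with three participants
    source, strategy = assign_condition()
    print(f"Participant {participant}: {source}, {strategy} message")
```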

The sample included 447 students enrolled in journalism and communication courses at a public university in the Midwestern United States. These participants were chosen because of their familiarity with media content and their relevance as potential future professionals or informed consumers of public relations material. Their average age was just over 20 years old, and most participants identified as white and either full- or part-time employed.

The results provided clear support for the idea that human authors are still viewed as more credible than AI. Across all three key outcomes—source credibility, message credibility, and organizational reputation—participants rated human-written messages higher than identical messages attributed to AI.

“It’s not surprising, given discussions that are taking place, that people found AI-generated content to be less credible,” Etheridge told PsyPost. “Still, capturing the data and showing it in an experiment like this one is valuable as the landscape of AI is ever-changing.”

Participants who read press releases from a human author gave an average source credibility rating of 4.40 on a 7-point scale, compared to 4.11 for the AI author. Message credibility followed a similar pattern, with human-authored messages receiving an average score of 4.82, compared to 4.38 for AI-authored versions. Finally, organizational reputation was also judged to be higher when the company’s message came from a human, with ratings averaging 4.84 versus 4.49.

These differences, while modest, were statistically significant and suggest that the mere presence of an AI label can diminish trust in a message. Importantly, the content of the message was identical across the human and AI conditions. The only change was who—or what—was said to have authored it.

In contrast, the tone or strategy of the message (apologetic, sympathetic, or informational) did not significantly influence any of the credibility or reputation ratings. Participants did perceive the tone differences when asked directly, meaning the manipulations were effective. But these differences did not translate into significantly different impressions of the author, message, or company. Even though past research has emphasized the importance of an apologetic or sympathetic tone during a crisis, this study found that source effects had a stronger influence on audience perceptions.

“People are generally still pretty wary of AI-generated messages,” Etheridge explained. “They don’t find them as credible as human-written content. In our case, news releases written by humans are more favorably viewed by readers than those written by AI. For people who are concerned about AI replacing jobs, that could be welcome news. We caution public relations agencies against overuse of AI, as it could hurt their reputation with the public when public reputation is a crucial measure of the industry.”

But as with all research, there are some caveats to consider. The study relied on a fictional company and crisis scenario, which might not fully capture real-world reactions, and participants—primarily university students—may not represent broader public attitudes due to their greater familiarity with AI. Additionally, while the study clearly labeled the message as AI-generated, real-world news releases often lack such transparency, raising questions about how audiences interpret content when authorship is ambiguous.

“We measured credibility and organizational reputation but didn’t really look at other important variables like trust or message retention,” Etheridge said. “We also may have been more transparent about our AI-generated content than a professional public relations outlet might be, but that allowed us to clearly measure responses. Dr. Alhammad is leading where our research effort might go from here. We have talked about a few ideas, but nothing solid has formed as of yet.”

The study, “Credibility and organizational reputation perceptions of news releases produced by artificial intelligence,” was authored by Ayman Alhammad, Christopher Etheridge, and Cameron W. Piercy.


AI Research

Oxford and Ellison Institute Collaborate to Integrate AI in Vaccine Research – geneonline.com


AI Research

Chinese social media firms comply with strict AI labelling law, making it clear to users and bots what’s real and what’s not


Chinese social media companies have begun requiring users to label AI-generated content uploaded to their services in order to comply with new government legislation. By law, the sites and services must now apply a watermark or other explicit indicator of AI content for users, and include metadata that web-crawling algorithms can read, making clear what was generated by a human and what was not, according to SCMP.
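
As a rough illustration of the two mechanisms the law requires, a visible indicator for users plus machine-readable metadata, here is a minimal Python sketch using the Pillow imaging library. The watermark text, placement, and metadata tag name are assumptions for illustration; the article does not describe how any platform actually implements the rule.

```python
# Hypothetical labelling step for an AI-generated image upload.
# The "ai_generated" tag name and watermark styling are assumptions.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_ai_image(src_path: str, dst_path: str) -> None:
    img = Image.open(src_path).convert("RGB")

    # Visible indicator: stamp a label in the image's corner for users.
    draw = ImageDraw.Draw(img)
    draw.text((10, 10), "AI-generated", fill="white")

    # Machine-readable indicator: embed metadata a crawler can check.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    img.save(dst_path, "PNG", pnginfo=meta)

label_ai_image("upload.png", "upload_labelled.png")
```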

Countries and companies the world over have been grappling with how to handle AI-generated content since the explosive growth of popular AI tools like ChatGPT, Midjourney, and DALL-E. After drafting the law in March, China has now implemented it, taking the lead in increasing oversight and curtailing rampant use; the labelling law makes social media companies more responsible for the content on their platforms.
