AI Research

SBU Researchers Use AI to Advance Alzheimer’s Detection


Alzheimer’s disease is one of the most urgent public health challenges for aging Americans. Nearly seven million Americans over the age of 65 are currently living with the disease, and that number is projected to nearly double by 2060, according to the Alzheimer’s Association.

Early diagnosis and continuous monitoring are crucial to improving care and extending independence, but there isn’t enough high-quality, Alzheimer’s-specific data to train artificial intelligence systems that could help detect and track the disease.

Shan Lin, associate professor of Electrical and Computer Engineering at Stony Brook University, and PhD candidate Heming Fu are working with Guoliang Xing of The Chinese University of Hong Kong to build a richer body of behavioral data on Alzheimer’s patients. Together they developed SHADE-AD (Synthesizing Human Activity Datasets Embedded with AD features), a generative AI framework designed to create synthetic, realistic data that reflects the motor behaviors of Alzheimer’s patients.

This figure provides the design overview of SHADE-AD. Training proceeds in three stages: Stage 1 learns general human actions; Stage 2 embeds AD-specific knowledge; and Stage 3 fine-tunes the model on patient-specific motion metrics.

Movements like stooped posture, reliance on armrests when standing from sitting, or slowed gait may appear subtle, but can be early indicators of the disease. By identifying and replicating these patterns, SHADE-AD provides researchers and physicians with the data required to improve monitoring and diagnosis.

Unlike existing generative models, which often rely on and output generic datasets drawn from healthy individuals, SHADE-AD was trained to embed Alzheimer’s-specific traits. The system generates three-dimensional “skeleton videos,” simplified figures that preserve details of joint motion. These 3D skeleton datasets were validated against real-world patient data, with the model proving capable of reproducing the subtle changes in speed, angle, and range of motion that distinguish Alzheimer’s behaviors from those of healthy older adults. 
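To make the skeleton representation concrete, here is a minimal illustrative sketch (not the authors’ code; the joint layout, indices, and units are assumptions) of how motion metrics like speed, joint angle, and range of motion might be computed from a skeleton sequence stored as a `(frames, joints, 3)` array of 3D joint positions:

```python
import numpy as np

# Hypothetical joint indices for illustration only.
HIP, KNEE, ANKLE = 0, 1, 2

def joint_angle(seq, a, b, c):
    """Per-frame angle (degrees) at joint b, formed by joints a-b-c."""
    v1 = seq[:, a] - seq[:, b]
    v2 = seq[:, c] - seq[:, b]
    cos = np.sum(v1 * v2, axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def gait_speed(seq, joint, fps):
    """Mean speed of one joint (position units per second) over the clip."""
    step = np.linalg.norm(np.diff(seq[:, joint], axis=0), axis=1)
    return step.mean() * fps

def range_of_motion(angles):
    """Spread between the largest and smallest joint angle in the clip."""
    return angles.max() - angles.min()
```

Metrics of this kind are what let a synthetic dataset be checked against real patient recordings: a slowed gait shows up as a lower mean joint speed, and a stooped posture as a shifted joint-angle range.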

The findings, published and presented at the 23rd ACM Conference on Embedded Networked Sensor Systems (SenSys 2025), are significant. Activity recognition systems trained with SHADE-AD’s data achieved higher accuracy across all major tasks than systems trained with traditional data augmentation or general open datasets. SHADE-AD was particularly strong at recognizing actions such as walking and standing up, which often reveal the earliest signs of decline in Alzheimer’s patients.

This figure compares the “standing up from a chair” motion of a healthy elder and an AD patient.

Lin believes this work could have a significant impact on the daily lives of older adults and their families. Technologies built on SHADE-AD could one day allow doctors to detect Alzheimer’s sooner, track disease progression more accurately, and intervene earlier with treatments and support. “If we can provide tools that spot these changes before they become severe, patients will have more options, and families will have more time to plan,” he said. 

With September recognized nationally as Healthy Aging Month, Lin sees this research as part of an effort to use technology to support older adults in living longer, healthier, and more independent lives. “Healthy aging isn’t only about treating illness, but also about creating systems that allow people to thrive as they grow older,” he said. “AI can be a powerful ally in that mission.”

— Beth Squire




AI Research

Penn State Altoona professor to launch ‘Metabytes: AI + Humanities Lunch Lab’

ALTOONA, Pa. — John Eicher, associate professor of history at Penn State Altoona, will launch the “Metabytes: AI + Humanities Lunch Lab” series on Tuesday, Oct. 7, from noon to 1 p.m. in room 102D of the Smith Building.

As artificial intelligence (AI) systems continue to advance, students need the tools to engage them not only technically, but also intelligently, ethically and creatively. The AI + Humanities Lab will serve as a cross-disciplinary space where humanistic inquiry meets cutting-edge technology, helping students ask the deeper questions that surround this emerging force. By blending hands-on experimentation with philosophical and ethical reflection, the lab aims to give students a critical edge: the ability to see AI not just as a tool, but as a cultural and intellectual phenomenon that requires serious and sober engagement.

Each session will begin with a text, image or prompt shared with an AI model. Participants will then interpret and discuss the responses as philosophical or creative expressions. These activities will ask students to grapple with questions of authority, authenticity, consciousness, choice, empathy, interpretation and what it even means to “understand.”

The lab will run each Tuesday from Oct. 7 through Nov. 18, with the exception of Oct. 14. Sessions are drop-in and open to all, and participants may bring their lunch.




AI Research

Research: Reviewers Split on Generative AI in Peer Review

A new global reviewer survey from IOP Publishing (IOPP) reveals a growing divide among reviewers in the physical sciences over the use of generative AI in peer review. The study follows a similar survey conducted last year and shows that while some researchers are beginning to embrace AI tools, others remain concerned about potential negative impacts, particularly when AI is used to assess their own work.

Currently, IOPP does not allow the use of AI in peer review as generative models cannot meet the ethical, legal, and scholarly standards required. However, there is growing recognition of AI’s potential to support, rather than replace, the peer review process.

Key Findings:

  • 41% of respondents now believe generative AI will have a positive impact on peer review (up 12 percentage points from 2024), while 37% see it as negative (up 2 points). Only 22% are neutral or unsure, down from 36% last year, indicating growing polarisation in views.
  • 32% of researchers have already used AI tools to support them with their reviews.
  • 57% would be unhappy if a reviewer used generative AI to write a peer review report on a manuscript they had co-authored, and 42% would be unhappy if AI were used to augment a peer review report.
  • 42% believe they could accurately detect an AI-written peer review report on a manuscript they had co-authored.

Women tend to feel less positive than men about the potential of AI, suggesting a gendered difference in perceptions of AI’s usefulness in peer review. Meanwhile, more junior researchers appear more optimistic about the benefits of AI than their more senior colleagues, who express greater scepticism.

When it comes to reviewer behaviour and expectations, 32% of respondents reported using AI tools in some form to support them during the peer review process. Notably, over half (53%) of those using AI said they apply it in more than one way. The most common use (21%) was editing grammar and improving the flow of text, and 13% said they use AI tools to summarise or digest articles under review, raising serious concerns about confidentiality and data privacy. A small minority (2%) admitted to uploading entire manuscripts to AI chatbots and asking them to generate a review on their behalf.


“These findings highlight the need for clearer community standards and transparency around the use of generative AI in scholarly publishing. As the technology continues to evolve, so too must the frameworks that support ethical and trustworthy peer review,” said Laura Feetham-Walker, Reviewer Engagement Manager at IOP Publishing and lead author of the study.

“One potential solution is to develop AI tools that are integrated directly into peer review systems, offering support to reviewers and editors without compromising security or research integrity. These tools should be designed to support, rather than replace, human judgment. If implemented effectively, such tools would not only address ethical concerns but also mitigate risks around confidentiality and data privacy, particularly the issue of reviewers uploading manuscripts to third-party generative AI platforms,” added Feetham-Walker.

/Public Release. This material from the originating organization/author(s) may be of a point-in-time nature and may have been edited for clarity, style and length. Mirage.News does not take institutional positions or sides; all views, positions, and conclusions expressed herein are solely those of the author(s).


