
AI Research

Studying Falls to Save Lives

For older adults, falling is a major concern. It’s the leading cause of injury for those over 65, and the consequences can be life-threatening.

“Falling itself isn’t the issue—it’s the injury that’s really harmful for older adults,” says Syracuse University exercise science professor Yaejin Moon, who lost two of her grandparents to fall-related injuries.

The experience of losing a family member, friend or neighbor to complications after a fall is all too universal. That’s why Moon and Ph.D. student Reese Michaels G’24 are using cutting-edge research techniques—combining advanced artificial intelligence (AI) video analysis with traditional lab research—to learn how people fall and how to prevent serious injury.

Analyzing Falls With AI and Custom Code

AI-powered tools like OpenPose and WHAM are replacing traditional motion-tracking markers, allowing researchers to study movement more easily in real-world settings.

Traditionally, studying human movement meant attaching motion-tracking markers to the body—a technique common in gaming, film and movement science. Today, however, advances in AI make it possible to analyze movement directly from standard video footage.

“If we take a video—even from an iPhone—and input it into the system, the AI can automatically detect key body points and track motion. We don’t need markers anymore,” explains Moon, referring to AI-based pose estimation algorithms such as OpenPose.

Working with researchers in Canada, Moon and Michaels have access to over 1,700 real-life fall videos from surveillance footage in long-term care facilities and hospitals. Using OpenPose and Michaels’ custom code, the pair tracks body position and extracts biomechanical data to identify which types of falls result in injury and to evaluate which movements protect against harm.
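
The article doesn’t publish Michaels’ code, but a minimal sketch of the first step, turning OpenPose’s per-frame output into keypoint trajectories, might look like the following. It assumes OpenPose’s standard per-frame JSON output and the BODY_25 model’s mid-hip index; the folder name is a placeholder.

```python
import json
from pathlib import Path

import numpy as np

# OpenPose typically writes one JSON file per video frame; each file has a
# "people" list whose "pose_keypoints_2d" entry is a flat
# [x0, y0, c0, x1, y1, c1, ...] array (x, y in pixels, c = confidence).
MID_HIP = 8          # assumed BODY_25 index for the mid-hip keypoint
NUM_KEYPOINTS = 25

def load_keypoints(frame_dir: str) -> np.ndarray:
    """Stack per-frame keypoints into an array of shape (n_frames, 25, 3)."""
    frames = []
    for path in sorted(Path(frame_dir).glob("*_keypoints.json")):
        data = json.loads(path.read_text())
        if not data["people"]:  # no person detected in this frame
            frames.append(np.full((NUM_KEYPOINTS, 3), np.nan))
            continue
        flat = data["people"][0]["pose_keypoints_2d"]
        frames.append(np.asarray(flat, dtype=float).reshape(-1, 3))
    return np.stack(frames)

if __name__ == "__main__":
    kp = load_keypoints("fall_video_01_json/")   # hypothetical output folder
    hip_y = kp[:, MID_HIP, 1]                    # vertical hip position (pixels)
    print(f"{len(kp)} frames; vertical hip excursion: "
          f"{np.nanmax(hip_y) - np.nanmin(hip_y):.0f} px")
```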

“It’s like having access to a black box for accidents,” Moon says. “We can analyze exactly what happened.”

Although Michaels had no prior coding experience, he took a graduate-level Python course through Syracuse’s School of Information Studies. “It was trial by fire, but I was able to write code for one of our projects, and I realized I could apply those skills in a meaningful way to research,” says Michaels, who started working with Moon in the Falk College of Sport as an exercise science master’s student two years ago.

“He can calculate things like velocity of the fall, acceleration and knee angle at the moment of impact—very specific biomechanical outcomes—all generated through his own programming,” Moon says.
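
The article doesn’t detail how those outcomes are computed, but in principle they can be derived from the keypoint trajectories with a few lines of numerical code. The sketch below is an illustration rather than Michaels’ implementation: it assumes a 30 fps camera and BODY_25 indices for the right leg, and it reports velocity and acceleration in pixel units, which a real analysis would still calibrate to meters.

```python
import numpy as np

FPS = 30.0                           # assumed camera frame rate
R_HIP, R_KNEE, R_ANKLE = 9, 10, 11   # assumed BODY_25 indices for the right leg

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at joint b (degrees) between segments b->a and b->c."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def fall_kinematics(kp: np.ndarray, impact_frame: int):
    """kp: (n_frames, 25, 3) keypoints from the loader above.

    Returns hip velocity and acceleration at the impact frame (pixels/s and
    pixels/s^2; calibration to real-world units is a separate step) plus the
    knee angle at impact.
    """
    hip_y = kp[:, R_HIP, 1]           # vertical hip position per frame
    vel = np.gradient(hip_y) * FPS    # downward velocity
    acc = np.gradient(vel) * FPS      # acceleration
    knee = joint_angle(kp[impact_frame, R_HIP, :2],
                       kp[impact_frame, R_KNEE, :2],
                       kp[impact_frame, R_ANKLE, :2])
    return vel[impact_frame], acc[impact_frame], knee
```

The same angle calculation carries over unchanged to 3D keypoints, which is part of what the newer models discussed below make possible.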

As the AI models continue to improve, the team’s research also advances. “These newer AI models can track movement in three dimensions rather than two,” Michaels explains. “That gives us much more insight into things like joint angles during a fall, which opens the door to more realistic and accurate analysis.”

“The goal is to implement this kind of technology in long-term care settings to get real-time insights into how people move and how injuries happen,” Michaels says.

Falk College professor Yaejin Moon (left) uses a special treadmill to simulate sudden loss of balance, while motion-capture cameras track how participants respond.

In the lab, the AI models are validated using a specialized treadmill that safely simulates balance loss. The treadmill can move forward, backward and side to side while participants wear a safety harness and adjust to the sudden changes in movement. Motion-capture cameras record every step and reaction.

Falls happen in three phases: the initial phase (standing or walking normally), the loss-of-balance phase (when the fall begins) and the impact phase (when the body hits the ground).
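
The article doesn’t say how those phases are identified in the videos, but as a rough illustration of the idea, they could be segmented automatically from the hip’s downward velocity using simple thresholds. The thresholds and units below are arbitrary placeholders, not validated values.

```python
import numpy as np

def segment_phases(hip_vel: np.ndarray, onset_thresh: float = 200.0,
                   decel_thresh: float = 50.0) -> dict:
    """Split a fall into initial / loss-of-balance / impact phases.

    hip_vel: downward hip velocity per frame (pixels/s, positive = downward).
    The thresholds are illustrative placeholders, not validated values.
    """
    falling = hip_vel > onset_thresh
    onset = int(np.argmax(falling)) if falling.any() else len(hip_vel) - 1
    decel = np.diff(hip_vel[onset:]) < -decel_thresh   # sharp slow-down = contact
    impact = onset + 1 + int(np.argmax(decel)) if decel.any() else len(hip_vel) - 1
    return {"initial": (0, onset),
            "loss_of_balance": (onset, impact),
            "impact": (impact, len(hip_vel) - 1)}
```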

New AI models allow researchers to track movement in 3D, greatly improving the accuracy and realism of fall analysis.

“The perturbation treadmill is used to study that second phase—the moment when balance is lost,” Moon says. “We analyze how people react to losing balance and how they try to recover.”

The research also explores dual-task conditions—how cognitive load impacts the ability to recover balance. Participants are asked to perform mental tasks, such as listing animals or counting backward from 100 by sevens, while walking. This adds a layer of realism, simulating situations where older adults might be distracted by thinking, talking or multitasking while moving.

“Do we recover balance faster when we’re focused solely on walking? Or is our response slower or different when our attention is divided?” Moon asks.

Research in the Real World

Ph.D. student Reese Michaels G’24 is the lead author of two studies—one published in Scientific Reports and another currently under review in the Journal of Biomechanics.

So, how will this ongoing research impact people’s everyday lives? Moon breaks it down into three key components: “First is understanding the mechanisms—how the body and mind work together during a fall. Second is developing intervention programs. And third is improving technology.”

Michaels, who is now in his second year of the exercise science Ph.D. program, is especially focused on improving technology.

A third-degree black belt in Taekwondo, Moon began her research by teaching older adults how to fall safely using martial arts. Now, she and Michaels are using AI tools to better understand falls and develop new ways to prevent serious injuries.

“One of our next steps is feeding outputs from pose estimation models into a machine learning algorithm that could predict impact force—how hard someone hit the ground,” explains Michaels. “That would give us a direct measure of whether a fracture or injury occurred.”
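
That predictive step isn’t built yet, so the following is only a hedged sketch of what such a pipeline could look like: pose-derived features for each recorded fall paired with a measured impact force, fed to an off-the-shelf regressor. The feature set, the made-up numbers and the choice of scikit-learn are assumptions, not the team’s method.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical training data: one row per recorded fall, with features derived
# from pose estimation (e.g., the kinematics computed earlier) and a
# ground-truth peak impact force measured in the lab. All numbers here are
# illustrative placeholders, not real data.
X = np.array([
    # [impact velocity (m/s), acceleration (m/s^2), knee angle (deg), trunk angle (deg)]
    [2.1, 9.0, 35.0, 60.0],
    [1.4, 6.5, 70.0, 40.0],
    [2.8, 9.6, 20.0, 75.0],
    [1.1, 5.9, 85.0, 30.0],
])
y = np.array([1800.0, 950.0, 2400.0, 700.0])   # peak impact force in newtons (made up)

model = GradientBoostingRegressor(random_state=0).fit(X, y)
new_fall = np.array([[1.9, 8.2, 50.0, 55.0]])  # features extracted from a new video
print("predicted impact force (N):", round(float(model.predict(new_fall)[0]), 1))
```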

The pair is also working to make their video analysis methods more generalizable. With ongoing AI advancements and more real-world video data, the team hopes to analyze situations that can’t be replicated in a lab, such as falls down a set of stairs, and to extend the work to different age and health groups.

By combining AI, biomechanics and real-world data, this research is not only advancing the study of falls but also laying the foundation for innovative solutions to prevent injuries in aging populations. As technology continues to evolve, their work promises to lead to more precise strategies that could significantly reduce the risks older adults face, ultimately improving their quality of life and safety.




AI Research

Spotlab.ai hiring AI research scientist for multimodal diagnostics and global health

In a LinkedIn post, Miguel Luengo-Oroz, co-founder and CEO of Spotlab.ai, confirmed the company is hiring an Artificial Intelligence Research Scientist. The role is aimed at early career researchers, postdoctoral candidates, and recent PhD graduates in AI.

Luengo-Oroz writes: “Are you a young independent researcher, postdoc, just finished your PhD (or on the way there) in AI and wondering what’s next? If you’re curious, ready to tackle tough scientific and technical challenges, and want to build AI for something that matters, this might be for you.”

Spotlab.ai targets diagnostics role with new hire

The position will focus on building and deploying multimodal AI solutions for diagnostics and biopharma research. Applications include blood cancers and neglected tropical diseases.

The scientist will be expected to organize and prepare biomedical datasets, train and test AI models, and deploy algorithms in real-world conditions. The job description highlights interaction with medical specialists and product managers, as well as drafting technical documentation. Scientific publications are a priority, with the candidate expected to contribute across the research cycle from experiment planning to peer review.

Spotlab.ai is looking for candidates with experience in areas such as biomedical image processing, computer vision, NLP, video processing, and large language models. Proficiency in Python and deep learning frameworks including TensorFlow, Keras, and PyTorch is required, with GPU programming experience considered an advantage.

Company positions itself in global health AI

Spotlab.ai develops multimodal AI for diagnostics and biopharma research, with projects addressing gaps in hematology, infectious diseases, and neglected tropical diseases. The Madrid-based startup team combines developers, engineers, doctors, and business managers, with an emphasis on gender parity and collaboration across disciplines.

CEO highlights global mission

Alongside the job listing, Luengo-Oroz underscored the company’s broader mission. A former Chief Data Scientist at the United Nations, he has worked on technology strategies in areas ranging from food security to epidemics and conflict prevention. He is also the inventor of MalariaSpot.org, a collective intelligence videogame for malaria diagnosis.

Luengo-Oroz writes: “Take the driver’s seat of our train (not just a minion) at key stages of the journey, designing AI systems and doing science at Champions League level from Madrid.”




AI Research

YARBROUGH: A semi-intelligent look at artificial intelligence – Rockdale Citizen


AI Research

Rice University creative writing course introduces artificial intelligence (AI)

Ian Schimmel teaches the new AI fiction course. The course invites writers to incorporate or resist the influence of AI in creative writing.

Courtesy Brandi Smith

By Abigail Chiu, 9/9/25 10:29pm

Rice is bringing generative artificial intelligence into the creative writing world with this fall’s new course, “ENGL 306: AI Fictions.” Ian Schimmel, an associate teaching professor in the English and creative writing department, said he teaches the course to help students think critically about technology and consider the ways that AI models could be used in the creative processes of fiction writing.

The course is structured for any level of writer and also includes space to both incorporate and resist the influence of AI, according to its description. 

“In this class, we never sit down with ChatGPT and tell it to write us a story and that’s that,” Schimmel wrote in an email to the Thresher. “We don’t use it to speed up the artistic process, either. Instead, we think about how to incorporate it in ways that might expand our thinking.”



Schimmel said he was stunned by the capabilities of ChatGPT when it was initially released in 2022, wondering if it truly possessed the ability to write. He said he found that the topic generated more questions than answers. 

The next logical step, for Schimmel, was to create a course centered on exploring the complexities of AI and fiction writing, with assigned readings ranging from New York Times opinion pieces critical of its usage to an AI-generated poetry collection.  

Schimmel said both students and faculty share concerns about how AI can help or harm academic progress and potentially cripple human creativity.

“Classes that engage students with AI might be some of the best ways to learn about what these systems can and cannot do,” Schimmel wrote. “There are so many things that AI is terrible at and incapable of. Seeing that firsthand is empowering. Whenever it hallucinates, glitches or makes you frustrated, you suddenly remember: ‘Oh right — this is a machine. This is nothing like me.’”

“Fear is intrinsic to anything that shakes industry like AI is doing,” Robert Gray, a Brown College senior, wrote in an email to the Thresher. “I am taking this class so that I can immerse myself in that fear and learn how to navigate these new industrial landscapes.”

The course approaches AI from a fluid perspective that evolves as the class reads and writes more with the technology, Schimmel said. Their answers to the complex ethical questions surrounding AI usage evolve with this.

“At its core, the technology is fundamentally unethical,” Schimmel wrote. “It was developed and enhanced, without permission, on copyrighted text and personal data and without regard for the environment. So in that failed historical context, the question becomes: what do we do now? Paradoxically, the best way for us to formulate and evidence arguments against this technology might be to get to know it on a deep and personal level.”

Generative AI is often criticized on ethical grounds, such as the energy and water its data centers demand and the training of models on datasets of existing copyrighted works.

Amazon- and Google-backed Anthropic recently settled a class-action lawsuit with a group of U.S. authors who accused the company of using millions of pirated books to train its Claude chatbot to respond to human prompts.

With the assistance of AI, students will be able to attempt large-scale projects that typically would not be possible within a single semester, according to the course overview. AI will accelerate the writing process for drafting a book outline, and students can “collaborate” with AI to write the opening chapters of a novel for NaNoWriMo, a worldwide writing event held every November in which participants aimed to produce a 50,000-word first draft of a novel.

NaNoWriMo, short for National Novel Writing Month, announced in spring 2025 that it was shutting down after more than 20 years. It received widespread press coverage for a statement released in 2024 that said condemnation of AI in writing “has classist and ableist undertones.” Many authors spoke out against the perceived endorsement of using generative AI for writing and the implication that disabled writers would require AI to produce work.

Each weekly class involves experimentation in dialogues and writing sessions with ChatGPT, with Schimmel and his students acknowledging the unknown and unexplored within AI and especially the visual and literary arts. Aspects of AI, from creative copyrights to excessive water usage to its accuracy as an editor, were discussed in one Friday session in the Wiess College classroom.

“We’re always better off when we pay attention to our attention. If there’s a topic (or tech) that creates worry, or upset, or raises difficult questions, then that’s a subject that we should pursue,” Schimmel wrote. “It’s in those undefined, sometimes uncomfortable places where we humans do our best, most important learning.”





