USF-developed facial analysis AI detects PTSD in kids

Diagnosing post-traumatic stress disorder (PTSD) in children presents significant challenges, as many struggle to share their experiences or explain how they feel. Artificial intelligence could help clinicians overcome those hurdles.

Researchers at the University of South Florida have combined their expertise in childhood trauma and artificial intelligence (AI) to create an objective, cost-effective tool that helps identify PTSD through facial expressions. The technology also tracks recovery, as the overarching goal is to improve pediatric and adolescent patient outcomes.

Alison Salloum, a professor at the USF School of Social Work, and Shaun Canavan, an associate professor in the Bellini College for Artificial Intelligence, Cybersecurity and Computing, lead an interdisciplinary team developing the system. They found that AI can detect distinct patterns in the facial movements of youth who have experienced trauma.

“Avoidance is a main component of PTSD, so children don’t want to talk about it,” Salloum said. “There are lots of reasons – one is just the sheer horror of what happened.”

Clinicians typically rely on subjective clinical interviews and self-reported questionnaires when diagnosing PTSD in children. Cognitive development, language skills, avoidance behaviors and emotional suppression often hinder those efforts.

Salloum also noted that children are reluctant to verbalize their experiences “because they don’t want to upset their parents any more.” Many children realize that revisiting traumatic experiences will compound the emotional toll on their parents and instead choose to compartmentalize their thoughts.

However, Salloum noticed that “child after child” exhibited intense facial expressions during virtual interviews for a clinical trial. She asked Canavan if he could systematically capture those moments to help them understand “what children are going through.”

Canavan, who specializes in facial analysis and emotion recognition, was happy to provide his technological expertise. He repurposed existing lab tools to build a new system that prioritizes patient privacy.

“I was confident it would work,” Canavan said.

Their study, published on ScienceDirect, validated his self-assurance. The first step was ensuring anonymity.

Canavan stressed that they didn’t use “raw features,” and only kept unidentifiable data on facial movements, head poses and whether the child was talking to a parent or clinician. Salloum called that a “critical piece” of the system that will foster future use.
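
The article does not publish the system’s data schema, but a de-identified record of the kind Canavan describes, where raw pixels are discarded and only facial-movement intensities, head pose and conversational context are kept, might look roughly like the following sketch; every field name here is hypothetical.

from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class DeidentifiedFrame:
    """One video frame reduced to non-identifying features (hypothetical schema)."""
    facial_movements: Dict[str, float]     # movement intensities only, no raw pixels
    head_pose: Tuple[float, float, float]  # (pitch, yaw, roll) in degrees
    speaking_to: str                       # context label: "parent" or "clinician"

frame = DeidentifiedFrame(
    facial_movements={"brow_lowerer": 0.7, "lip_press": 0.4},
    head_pose=(3.5, -12.0, 1.8),
    speaking_to="clinician",
)
print(frame.speaking_to)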

Their study is now the first to incorporate contextual PTSD classifications while preserving data privacy. The interdisciplinary team built the model after recording 18 sessions with children as they shared traumatic experiences.

Canavan’s AI analyzed more than 100 minutes of video per child, roughly 185,000 frames apiece, to extract subtle facial movements linked to emotional expression. The technology detected distinct patterns among those with PTSD, including an inability to convey emotion.
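
Those figures are consistent with an ordinary video frame rate; as a rough sanity check (the 30 frames-per-second value below is an assumption, not stated in the article):

minutes_per_child = 100   # "over 100 minutes of video per child"
assumed_fps = 30          # assumption: a typical video frame rate

frames = minutes_per_child * 60 * assumed_fps
print(frames)             # 180000, in line with the roughly 185,000 frames reported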

Salloum explained that accurately understanding symptomology will lead to better treatments, help track patient progress and determine when care should conclude. “We also want to make sure that when we’re ending treatment, that child is really back on track developmentally and not experiencing post-traumatic reactions,” she said.

“It was not surprising that our hypothesis, the way we set it up, worked,” Canavan said. “We’ve had previous experiments in other areas where we’ve shown this analysis works.”

He said AI is “absolutely another tool” for health care providers. The team now hopes to refine their system.

Rather than waiting to process recorded videos, Canavan wants to create a model that analyzes facial expressions in real time. His team would then create a user interface for clinicians that provides an immediate analysis.

Salloum said adolescents were eager to volunteer for the pilot study. “They liked the idea of technology helping.”

However, with additional funding, she hopes to test the AI on younger patients whose cognitive development makes self-reporting difficult. “For young children, we have to rely on parent assessment and interviewing the parent,” she added.

Canavan said the system could benefit adults, like combat veterans and domestic abuse survivors, who internalize PTSD symptoms. A “big part” of his research revolves around applying technology that works for one group of people to different demographics.

Canavan stressed that AI is a clinical aid rather than a substitute. Salloum noted the importance of ensuring a model is valid and accurate “before people become comfortable with it.”

“It doesn’t replace the other methods of asking questions, conversations and interviews about what the person has experienced,” she continued. “It’s really a tool.”

EU Publishes Final AI Code of Practice to Guide AI Companies

The European Commission said Thursday (July 10) that it published the final version of a voluntary framework designed to help artificial intelligence companies comply with the European Union’s AI Act.

The General-Purpose AI Code of Practice seeks to clarify legal obligations under the act for providers of general-purpose AI models such as ChatGPT, especially providers of models that pose systemic risks, such as the potential to help bad actors develop chemical or biological weapons.

The code’s publication “marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent,” Henna Virkkunen, executive vice president for tech sovereignty, security and democracy for the commission, which is the EU’s executive arm, said in a statement.

The code was developed by 13 independent experts after hearing from 1,000 stakeholders, which included AI developers, industry organizations, academics, civil society organizations and representatives of EU member states, according to a Thursday (July 10) press release. Observers from global public agencies also participated.

The EU AI Act, which was approved in 2024, is the first comprehensive legal framework governing AI. It aims to ensure that AI systems used in the EU are safe and transparent, as well as respectful of fundamental human rights.

The act classifies AI applications into risk categories — unacceptable, high, limited and minimal — and imposes obligations accordingly. Any AI company whose services are used by EU residents must comply with the act. Fines can go up to 7% of global annual revenue.
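
The act’s tiered structure and penalty ceiling are easy to illustrate; the risk categories and the 7% cap come from the article, while the helper function and revenue figure below are purely hypothetical.

# Risk tiers named in the act, from most to least restricted.
RISK_TIERS = ["unacceptable", "high", "limited", "minimal"]

FINE_CAP_RATE = 0.07  # "Fines can go up to 7% of global annual revenue."

def max_fine(global_annual_revenue_eur: float) -> float:
    """Hypothetical helper: upper bound on a fine under the 7% cap."""
    return global_annual_revenue_eur * FINE_CAP_RATE

print(max_fine(10_000_000_000))  # a made-up 10B EUR revenue -> 700000000.0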

The code is voluntary, but AI model companies that sign on will benefit from lower administrative burdens and greater legal certainty, according to the commission. The next step is for the EU’s 27 member states and the commission to endorse it.

Inside the Code of Practice

The code is structured into three core chapters: Transparency; Copyright; and Safety and Security.

The Transparency chapter includes a model documentation form, described by the commission as a “user-friendly” tool to help companies demonstrate compliance with transparency requirements.

The Copyright chapter offers “practical solutions to meet the AI Act’s obligation to put in place a policy to comply with EU copyright law.”

The Safety and Security chapter, aimed at the most advanced systems with systemic risk, outlines “concrete state-of-the-art practices for managing systemic risks.”
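
The release does not reproduce the model documentation form itself; as a purely illustrative sketch, a provider might keep a machine-readable record along these lines to track its transparency information (every field name is an assumption, not drawn from the code).

# Hypothetical, illustrative only: a minimal record tracking the kind of
# information a transparency documentation form might ask for.
model_documentation = {
    "model_name": "example-gpt",                      # made-up identifier
    "provider": "Example AI GmbH",                    # made-up provider
    "intended_uses": ["general-purpose text generation"],
    "training_data_summary": "publicly available web text (illustrative)",
    "poses_systemic_risk": False,                     # would trigger the Safety and Security chapter
    "copyright_policy_url": "https://example.com/copyright-policy",  # placeholder
}

for field, value in model_documentation.items():
    print(f"{field}: {value}")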

The drafting process began with a plenary session in September 2024 and proceeded through multiple working group meetings, virtual drafting rounds and provider workshops.

The code takes effect Aug. 2, but the commission’s AI Office will enforce the rules on new AI models after one year and on existing models after two years.

A spokesperson for OpenAI told The Wall Street Journal that the company is reviewing the code to decide whether to sign it. A Google spokesperson said the company would also review the code.

Every Blooming Thing – Technology and Artificial Intelligence in the garden

Researchers develop AI model to generate global realistic rainfall maps

Working from low-resolution global precipitation data, the spateGAN-ERA5 AI model generates high-resolution fields for the analysis of heavy rainfall events. Credit: Christian Chwala, KIT

Severe weather events, such as heavy rainfall, are on the rise worldwide. Reliable assessments of these events can save lives and protect property. Researchers at the Karlsruhe Institute of Technology (KIT) have developed a new method that uses artificial intelligence (AI) to convert low-resolution global weather data into high-resolution precipitation maps. The method is fast, efficient, and independent of location. Their findings have been published in npj Climate and Atmospheric Science.

“Heavy rainfall and flooding are much more common in many regions of the world than they were just a few decades ago,” said Dr. Christian Chwala, an expert on hydrometeorology and machine learning at the Institute of Meteorology and Climate Research (IMK-IFU), KIT’s Campus Alpin in the German town of Garmisch-Partenkirchen. “But until now the data needed for reliable regional assessments of such extreme events was missing for many locations.”

His research team addresses this problem with a new AI that can generate precise global precipitation maps from low-resolution information. The result is a unique tool for the analysis and assessment of extreme weather, even for regions with poor data coverage, such as the Global South.

For their method, the researchers use ERA5 reanalysis data that describe global precipitation at hourly intervals with a spatial resolution of about 24 kilometers. Not only was their generative AI model (spateGAN-ERA5) trained with this data, it also learned, from high-resolution weather radar measurements made in Germany, how precipitation patterns and extreme events correlate at different scales, from coarse to fine.
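
Putting the quoted resolutions side by side, the refinement the model has to deliver works out to a factor of 12 in space and 6 in time; the tile size in this small check is only illustrative.

# Refinement factors implied by the resolutions cited in the article:
# 24 km hourly input, 2 km / 10-minute output.
spatial_factor = 24 / 2    # 12x finer in each spatial dimension
temporal_factor = 60 / 10  # 6x finer in time

# Illustrative only: output shape for a hypothetical 10 x 10-cell, 6-hour input window.
in_cells, in_steps = 10, 6
out_shape = (
    int(in_steps * temporal_factor),
    int(in_cells * spatial_factor),
    int(in_cells * spatial_factor),
)
print(spatial_factor, temporal_factor, out_shape)  # 12.0 6.0 (36, 120, 120)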

“Our AI model doesn’t merely create a more sharply focused version of the input data, it generates multiple physically plausible, high-resolution maps,” said Luca Glawion of IMK-IFU, who developed the model while working on his doctoral thesis in the SCENIC research project. “Details at a resolution of 2 kilometers and 10 minutes become visible. The model also provides information about the statistical uncertainty of the results, which is especially relevant when modeling regionalized events.”
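
Because the model returns several physically plausible maps rather than a single answer, the spread across those samples can serve as the per-pixel uncertainty Glawion mentions. The snippet below shows the generic idea only; the random fields stand in for real generator output.

import numpy as np

rng = np.random.default_rng(0)

# Stand-in ensemble of generated high-resolution rain fields,
# shaped (n_samples, height, width); a real run would come from the trained model.
ensemble = rng.gamma(shape=2.0, scale=1.5, size=(16, 120, 120))

mean_field = ensemble.mean(axis=0)  # best-estimate precipitation map
spread = ensemble.std(axis=0)       # per-pixel spread as an uncertainty proxy

print(mean_field.shape, round(float(spread.mean()), 3))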

He also noted that validation with weather radar data from the United States and Australia showed that the method can be applied to entirely different climatic conditions.

Correctly assessing flood risks worldwide

With their method’s global applicability, the researchers offer new possibilities for better assessment of regional climate risks. “It’s the especially vulnerable regions that often lack the resources for detailed weather observations,” said Dr. Julius Polz of IMK-IFU, who was also involved in the model’s development.

“Our approach will enable us to make much more reliable assessments of where heavy rainfall and floods are likely to occur, even in such regions with poor data coverage.” Not only can the new AI method contribute to disaster control in emergencies, it can also help with the implementation of more effective long-term preventive measures such as flood control.

More information:
Luca Glawion et al, Global spatio-temporal ERA5 precipitation downscaling to km and sub-hourly scale using generative AI, npj Climate and Atmospheric Science (2025). DOI: 10.1038/s41612-025-01103-y
