
AI Research

Artificial intelligence used to improve speed and accuracy of autism and ADHD diagnoses


By Christiane Wisehart

It can take as long as 18 months for children with suspected autism spectrum or attention-deficit/hyperactivity disorders to get a diagnostic appointment with a psychiatrist in Indiana. But an interdisciplinary team led by an Indiana University researcher has developed a new diagnostic approach using artificial intelligence that could speed up and improve the detection of neurodivergent disorders.

Psychiatrists currently diagnose autism, ADHD and related disorders by analyzing symptoms such as communication impairments, hyperactivity or repetitive behaviors through a variety of tests and patient surveys; no widely available quantitative or biological diagnostic test exists.

“The symptoms of neurodivergent disorders are very heterogeneous; psychiatrists call them ‘spectrum disorders’ because there’s no one observable thing that tells them if a person is neurotypical or not,” said Jorge José, the James H. Rudy Distinguished Professor of Physics in the College of Arts and Sciences at IU Bloomington and member of the Stark Neuroscience Research Institute at the IU School of Medicine in Indianapolis.

That’s why José — in collaboration with an interdisciplinary team of scholars, including IU School of Medicine Distinguished Professor Emeritus John I. Nurnberger and associate professor of psychiatry Martin Plawecki — dedicated his recent research to improving diagnostic tools for children with these symptoms.

A new study on the use of artificial intelligence to quickly diagnose autism and ADHD, published July 8 in Nature’s Scientific Reports, details the latest step in his team’s development of a data-driven approach to rapidly and accurately assess neurodivergent disorders using quantitative biomarkers and biometrics.

Their method — which has the potential to diagnose autism or ADHD in as little as 15 minutes — could be used in schools to triage students who might need further care, said Khoshrav Doctor, a Ph.D. student at the University of Massachusetts Amherst and former visiting research scholar at IU who has been a member of José’s team since 2016.

Both he and José said their approach is not meant to replace the role of psychiatrists in the diagnosis and treatment of neurodivergent disorders.

“It could help as an additional tool in the clinician’s toolbelt,” Doctor said. “It also gives us the ability to see who might need the quickest intervention and direct them to providers earlier.”

Finding the biomarkers

In 2018, José published an autism study in collaboration with Rutgers, revealing that there are “movement biomarkers” that, while imperceptible to the naked eye, can be identified and measured in severity by using sensors.

José and his team instructed a group of participants to reach for a target when it appeared on a computer touch screen in front of them. Using sensors attached to participants’ hands, researchers recorded hundreds of images of micromovements per second.

The images showed that neurotypical participants moved in a measurably different way than participants with autism. The researchers were able to correlate increased randomness in movement with the participants who had previously been diagnosed with autism.
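The published biomarker is more sophisticated, but the core idea of scoring movement randomness from sensor data can be sketched simply. The example below is an illustrative stand-in, not the study's actual metric: it treats a recorded speed series as a signal and counts how often the moment-to-moment speed changes reverse direction, since smooth motion reverses rarely while jittery motion reverses constantly.

```python
import math
import random

def movement_randomness(speeds):
    """Illustrative stand-in for a movement biomarker: the fraction of
    moment-to-moment speed changes that reverse direction. Smooth motion
    reverses rarely; jittery, random motion reverses constantly."""
    fluct = [b - a for a, b in zip(speeds, speeds[1:])]
    flips = sum(1 for a, b in zip(fluct, fluct[1:]) if a * b < 0)
    return flips / (len(fluct) - 1)

# A smooth, bell-shaped reach vs. the same reach with sensor-scale jitter.
n = 500
smooth = [math.sin(math.pi * i / (n - 1)) for i in range(n)]
random.seed(0)
jittery = [s + random.gauss(0, 0.05) for s in smooth]

print(movement_randomness(smooth) < movement_randomness(jittery))  # True
```

Any measure of this kind lets severity be expressed on a continuous numerical scale rather than as a binary label, which is the property the researchers exploit.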

Improving treatment

In the years since their landmark 2018 study, José and his team have taken advantage of new high-definition kinematic Bluetooth sensors to collect information not just on the velocity of study participants' movements, but also on their acceleration, rotation and many other variables.

“We’re taking a physicist’s approach to looking at the brain and analyzing movement specifically,” said IU physics graduate student Chaundy McKeever, who recently joined José’s group. “We’re looking at how sporadic the movement of a patient is. We’ve found that, typically, the more sporadic their movement, the more severe a disorder is.”

The team also introduced the use of a specialized area of artificial intelligence known as deep learning to analyze the new measurements. Using a supervised deep-learning technique, the team studied raw movement data from participants with autism spectrum disorder, ADHD, comorbid autism and ADHD, and neurotypical development.
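The supervised setup described above can be sketched in miniature. The example below is a drastically simplified, hypothetical stand-in: the paper uses a deep network on raw kinematic recordings, whereas this sketch trains a single softmax layer on two fabricated summary features, with the same four diagnostic classes.

```python
import math
import random

# Toy stand-in for the team's supervised pipeline: a single softmax layer
# on two made-up summary features per recording. The real study trains a
# deep network on raw movement data; the four class labels match the paper.
CLASSES = ["neurotypical", "ASD", "ADHD", "ASD+ADHD"]

random.seed(1)

def synth_sample(label):
    """Fabricated 2-feature sample (randomness score, mean speed),
    with each class centered at a different, invented point."""
    centers = {"neurotypical": (0.2, 1.0), "ASD": (0.8, 0.9),
               "ADHD": (0.6, 1.4), "ASD+ADHD": (0.9, 1.3)}
    cx, cy = centers[label]
    return [cx + random.gauss(0, 0.05), cy + random.gauss(0, 0.05)]

def softmax(zs):
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

weights = [[0.0, 0.0, 0.0] for _ in CLASSES]  # per-class (w1, w2, bias)

def predict(x):
    return softmax([w[0] * x[0] + w[1] * x[1] + w[2] for w in weights])

# Supervised training: gradient descent on cross-entropy loss.
data = [(synth_sample(c), k) for k, c in enumerate(CLASSES) for _ in range(50)]
for epoch in range(200):
    random.shuffle(data)
    for x, y in data:
        p = predict(x)
        for k in range(len(CLASSES)):
            grad = p[k] - (1.0 if k == y else 0.0)
            weights[k][0] -= 0.1 * grad * x[0]
            weights[k][1] -= 0.1 * grad * x[1]
            weights[k][2] -= 0.1 * grad

acc = sum(max(range(4), key=lambda k: predict(x)[k]) == y
          for x, y in data) / len(data)
print(acc > 0.9)
```

The point of the sketch is the shape of the problem: labeled recordings in, a probability over four diagnostic categories out, so the model's confidence can double as a graded signal rather than a yes/no answer.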

This enhanced method, detailed in their July 8 Scientific Reports paper, introduced an ability to better analyze a patient’s neurodivergent disorder.

“By studying the statistics of the motion fluctuations, invisible to the naked eye, we can assess the severity of a disorder in terms of a new set of biometrics,” José said. “No psychiatrist can currently tell you how serious a condition is.”

With the added ability to assess a neurodivergent disorder’s severity, health care providers can better set up and monitor the impact of their treatments.

“Some patients will need a significant number of services and specialized treatments,” José said. “If, however, the severity of a patient’s disorder is in the middle of the spectrum, their treatments can be more minutely adjusted, will be less demanding and often can be carried out at home, making their care more affordable and easier to carry out.”






Northumbria to roll out new AI platform for staff and students


Northumbria University is to provide its students and staff with access to Claude for Education – a leading AI platform specifically tailored for higher education.

Northumbria will become only the second university in the UK, after the London School of Economics, to offer Claude for Education to its university community, joining other leading international institutions that already use the platform.

With artificial intelligence rapidly transforming many aspects of our lives, Northumbria's students and staff will now be provided with free access to many of the tools and skills they will need to succeed in the new global AI environment.

Claude for Education is a next-generation AI assistant built by Anthropic and trained to be safe, accurate and secure. It provides universities with ethical and transparent access to AI that ensures data security and copyright compliance and acts as a 24/7 study partner for students, designed to guide learning and develop critical thinking rather than providing direct answers.

Known as a UK leader in responsible AI-based research and education, Northumbria University recently launched its Centre for Responsible AI and is leading a multi-million pound UKRI AI Centre for Doctoral Training in Citizen-Centred Artificial Intelligence to train the next generation of leaders in AI development.

Professor Graham Wynn explained: “Today’s students are digitally native and recent data show many use AI routinely. They expect their universities to provide a modern, technology-enhanced education, providing access to AI tools along with clear guidance on the responsible use of AI.

“We know that the availability of secure and ethical AI tools is a significant consideration for our applicants and our investment in Claude for Education will position Northumbria as a forward-thinking leader in ethical AI innovation.

“Empowering students and staff, providing cutting-edge learning opportunities, driving social mobility and powering an inclusive economy are at the heart of everything we do. We know how important it is to eliminate digital poverty and provide equitable access to the most powerful AI tools, so our students and graduates are AI literate with the skills they need for the workplaces of the future.

“The introduction of Claude for Education will provide our students and staff with free universal access to cutting-edge AI technology, regardless of their financial circumstances.”

The University is now working with Anthropic to establish the technical infrastructure and training to roll out Claude for Education in autumn 2025.




Avalara rolls out AI tax research bot


Tax solutions provider Avalara announced the release of its newest AI offering, Avi for Tax Research, a generative AI-based solution that will now be embedded in Avalara Tax Research. The model is trained on Avalara’s own data, gathered over two decades, which the bot will use for contextually aware, data-driven answers to complex tax questions. 

“The tax compliance industry is at the dawn of unprecedented innovation driven by rapid advancements in AI,” says Danny Fields, executive vice president and chief technology officer of Avalara. “Avalara’s technology mission is to equip customers with reliable, intuitive tools that simplify their work and accelerate business outcomes.”

Avi for Tax Research, specifically, lets users instantly check the tax status of products and services with plain-language queries and receive trusted, clearly articulated responses grounded in Avalara's tax database. Users can also access real-time official guidance that supports defensible tax positions and enables proactive adaptation to evolving tax regulations, and quickly obtain precise sales tax rates tailored to specific street addresses for compliance accuracy down to the local jurisdictional level. The solution comes with an intuitive conversational interface that allows even those without tax backgrounds to use the tool.

For existing users of Avalara Tax Research, the AI solution is available now with no additional setup required. New customers can sign up for a free trial today.

The announcement comes shortly after Avalara announced new application programming interfaces for its 1099 and W-9 solutions, allowing companies to embed their compliance workflows into their existing ERP, accounting, e-commerce or marketplace platforms. An API is a type of software bridge that allows two computer systems to directly communicate with each other using a predefined set of definitions and protocols. Any software integration depends on API access to function. Avalara’s API access enables users to directly collect W-9 forms from vendors; validate tax IDs against IRS databases; confirm mailing addresses with the U.S. Postal Service; electronically file 1099 forms with the IRS and states; and deliver recipient copies from one central location. Avalara’s new APIs allow for e-filing of 1099s with the IRS without even creating a FIRE account.




Tencent improves testing of creative AI models with new benchmark


Tencent has introduced a new benchmark, ArtifactsBench, that aims to fix current problems with testing creative AI models.

Ever asked an AI to build something like a simple webpage or a chart and received something that works but has a poor user experience? The buttons might be in the wrong place, the colours might clash, or the animations feel clunky. It’s a common problem, and it highlights a huge challenge in the world of AI development: how do you teach a machine to have good taste?

For a long time, we’ve been testing AI models on their ability to write code that is functionally correct. These tests could confirm the code would run, but they were completely “blind to the visual fidelity and interactive integrity that define modern user experiences.”

This is the exact problem ArtifactsBench has been designed to solve. It's less of a test and more of an automated art critic for AI-generated code.

Getting it right, like a human would

So, how does Tencent’s AI benchmark work? First, an AI is given a creative task from a catalogue of over 1,800 challenges, from building data visualisations and web apps to making interactive mini-games.

Once the AI generates the code, ArtifactsBench gets to work. It automatically builds and runs the code in a safe and sandboxed environment.

To see how the application behaves, it captures a series of screenshots over time. This allows it to check for things like animations, state changes after a button click, and other dynamic user feedback.

Finally, it hands over all this evidence – the original request, the AI’s code, and the screenshots – to a Multimodal LLM (MLLM), to act as a judge.

This MLLM judge isn’t just giving a vague opinion; instead, it uses a detailed, per-task checklist to score the result across ten different metrics, including functionality, user experience, and even aesthetic quality. This ensures the scoring is fair, consistent, and thorough.
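The loop described above can be sketched as follows. Everything here is an assumed structure for illustration, not Tencent's implementation: the sandbox and the multimodal judge are stubbed out with placeholder functions, and only three of the ten metrics are shown.

```python
# Illustrative sketch of an ArtifactsBench-style evaluation loop (assumed
# structure, not Tencent's code): run the generated artifact in a sandbox,
# capture timed screenshots, then have a multimodal judge score a checklist.
from dataclasses import dataclass

CHECKLIST = ["functionality", "user_experience", "aesthetics"]  # paper uses ten

@dataclass
class Evidence:
    task: str          # the original creative request
    code: str          # the model's generated artifact
    screenshots: list  # frames captured while the artifact runs

def run_in_sandbox(code):
    """Placeholder for safely building and executing the artifact and
    grabbing screenshots over time (real systems would use containers
    and a headless browser). Here we just fake three frame names."""
    return [f"frame_{i}.png" for i in range(3)]

def mllm_judge(evidence, metric):
    """Placeholder for the multimodal-LLM judge, which would receive the
    task, the code, and the screenshots and return a 0-10 score."""
    return 8 if metric == "functionality" else 7

def score_artifact(task, code):
    evidence = Evidence(task, code, run_in_sandbox(code))
    per_metric = {m: mllm_judge(evidence, m) for m in CHECKLIST}
    per_metric["overall"] = sum(per_metric.values()) / len(CHECKLIST)
    return per_metric

scores = score_artifact("build an interactive bar chart", "<html>...</html>")
print(scores["overall"])
```

The per-metric checklist is the key design choice: it turns a subjective "does this look good?" judgment into a set of narrow, repeatable questions, which is what makes the automated scores comparable across models.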

The big question is, does this automated judge actually have good taste? The results suggest it does.

When the rankings from ArtifactsBench were compared to WebDev Arena, the gold-standard platform where real humans vote on the best AI creations, they matched up with a 94.4% consistency. This is a massive leap from older automated benchmarks, which only managed around 69.4% consistency.

On top of this, the framework’s judgments showed over 90% agreement with professional human developers.

Tencent evaluates the creativity of top AI models with its new benchmark

When Tencent put more than 30 of the world’s top AI models through their paces, the leaderboard was revealing. While top commercial models from Google (Gemini-2.5-Pro) and Anthropic (Claude 4.0-Sonnet) took the lead, the tests unearthed a fascinating insight.

You might think that an AI specialised in writing code would be the best at these tasks. But the opposite was true. The research found that “the holistic capabilities of generalist models often surpass those of specialized ones.”

A general-purpose model, Qwen-2.5-Instruct, actually beat its more specialised siblings, Qwen-2.5-coder (a code-specific model) and Qwen2.5-VL (a vision-specialised model).

The researchers believe this is because creating a great visual application isn’t just about coding or visual understanding in isolation and requires a blend of skills.

The researchers highlight “robust reasoning, nuanced instruction following, and an implicit sense of design aesthetics” as examples of these vital skills. These are the kinds of well-rounded, almost human-like abilities that the best generalist models are beginning to develop.

Tencent hopes its ArtifactsBench benchmark can reliably evaluate these qualities and thus measure future progress in AI’s ability to create things that are not just functional but that users actually want to use.





