
AI Research

Researchers in the U.K. and Chennai explore use of AI and social robots for early dementia detection and support


Dementia, a condition that affects millions worldwide, has become a growing concern as the global population ages. Care for dementia has traditionally relied on medication, therapy, and the support of family and caregivers. Now, however, the approach to dementia care is beginning to evolve in unexpected ways.

Researchers are exploring how technology, particularly robotics and Artificial Intelligence (AI), can play a deeper role in supporting human care.

A notable example, though still in its early stages, is the collaboration between Imperial College London and the Chennai-based Schizophrenia Research Foundation (SCARF), where a team is investigating how social robots could aid those living with dementia. The aim of this research is not only to provide companionship, but also to detect early signs of cognitive decline.

Use of social robots

The research seeks to use voice recognition and “social robots” to detect early signs of cognitive decline. Social robots are those that interact and communicate with humans by following social behaviors and rules. According to Ravi Vaidyanathan, professor in biomechatronics at Imperial College London, who leads the research, the idea is to engage people with dementia and use these interactions to monitor their cognitive health.

“We are looking at how we can use voice interactions to diagnose dementia. By collecting data over time, the AI can help doctors spot early warning signs,” Prof. Vaidyanathan explained. He believes the technology has the potential to identify changes in speech, like hesitation, difficulty finding words, or changes in inflection, that could signal the early stages of dementia. “If we can get people to engage with the robot and enjoy the interaction, we create a richer dataset that may lead to more accurate diagnostics,” he added.
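The team has not published the exact features its models use, but the kinds of markers Prof. Vaidyanathan describes can be illustrated with a minimal sketch. The function below, with hypothetical names and thresholds, computes two crude indicators from a single timed transcript: speech rate and the proportion of filler words.

```python
import re

# Minimal sketch, not the Imperial/SCARF pipeline: the team has not
# published its feature set. These are two crude hesitation markers
# commonly discussed in speech-based screening.
FILLERS = {"um", "uh", "er", "erm", "hmm"}

def speech_markers(transcript: str, duration_sec: float) -> dict:
    """Return simple hesitation markers for one recorded answer."""
    words = re.findall(r"[a-z']+", transcript.lower())
    n_words = max(len(words), 1)
    fillers = sum(w in FILLERS for w in words)
    return {
        "speech_rate_wpm": 60.0 * len(words) / duration_sec,  # slowing speech
        "filler_ratio": fillers / n_words,                    # hesitation proxy
    }

print(speech_markers("Um, I slept, uh, fine I think", duration_sec=6.0))
```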

The pilot studies conducted so far are said to have shown encouraging results. In one particular study at SCARF, a social robot was used to engage participants diagnosed with dementia. The robot initiated conversations with people about their daily lives, asking simple questions such as, “How did you sleep last night?” and “How are you feeling today?” 

Sridhar Vaitheshwaran, consultant psychiatrist and head of DEMCARES at SCARF, pointed to the positive outcomes: “People with dementia were genuinely interested in the robot and engaged in meaningful conversations. It was clear that they were interacting with it not as a machine, but as a companion.” This finding highlighted the potential for robots to alleviate feelings of isolation that many people with dementia experience.

Data for early detection

Prof. Vaidyanathan noted that the key goal of the research isn’t just keeping patients engaged but also collecting meaningful data. “We’re gathering data in real-time, so it’s not just about engaging people. It’s about how we can make sure this interaction can lead to something useful. If we can detect early signs of dementia based on these conversations, we can better equip physicians with tools for early diagnosis,” he stated.

He also stressed the significance of regular check-ins. “By having conversations with people every day, we can observe fluctuations in their speech over time, which could be early indicators of cognitive decline.”
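Again as a hedged illustration rather than the team’s published method, daily check-ins could feed a simple drift detector: keep a rolling baseline of a speech metric and flag any day that falls far below it.

```python
from statistics import mean, stdev

def flag_decline(daily_rates: list[float], baseline_days: int = 30,
                 z_threshold: float = 2.0) -> bool:
    """Flag today's value if it sits well below the rolling baseline.

    Illustrative thresholds only; a real screen would be clinically
    validated and would combine many markers, not one.
    """
    if len(daily_rates) <= baseline_days:
        return False  # not enough history for a baseline yet
    baseline = daily_rates[-(baseline_days + 1):-1]
    mu, sigma = mean(baseline), stdev(baseline)
    return sigma > 0 and (mu - daily_rates[-1]) / sigma > z_threshold

# A month of stable speech rates, then a sharp drop on the latest day.
history = [110.0] * 15 + [112.0] * 15 + [90.0]
print(flag_decline(history))  # True -> prompt a clinical follow-up
```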

Challenges in scaling up, data privacy

One of the biggest challenges in scaling this research, however, lies in the cross-cultural differences in language, speech patterns, and patient engagement. By testing the technology in different cultural contexts, the team aims to make the AI more adaptable to various accents, linguistic nuances, and communication styles.

While the potential for social robots in dementia care is exciting, the project also faces the critical issue of data privacy. 

In the context of AI, voice recordings and other personal data can raise significant concerns. Prof. Vaitheshwaran stressed the importance of ethical research practices: “I think any research, any data that we gather from people needs to be protected and it has to be ethical, and we need to be careful about what we do with the data,” adding that the Indian Council of Medical Research (ICMR) has developed comprehensive guidelines for the use of patient data.

“All of our studies undergo thorough review by our Ethics Committee to ensure that the data is handled ethically and securely,” he said.

The use of voice data, especially in healthcare settings, poses unique privacy challenges. According to the team, the data gathered is handled confidentially, and close oversight of both the research team and participants is maintained to ensure a degree of control. “But when it comes to broader implementation, managing these aspects becomes much more complex — and that’s something the research community hasn’t fully resolved yet,” said Prof. Vaidyanathan. He pointed out that as the research evolves, maintaining privacy during the broader deployment of the technology will be an ongoing concern.

Early intervention and reducing physician load

Looking ahead, Prof. Vaidyanathan said that the next step is to refine the technology for early dementia detection through AI-powered voice screening, which could alert physicians when further evaluation is needed. Ultimately, they aim to support clinicians and people with dementia through technology that is both effective and user-friendly.

“We want to move beyond just helping those already diagnosed with dementia. We aim to identify people at risk before they show any obvious symptoms.” He believes that if dementia is detected in its early stages, it could make a significant difference in managing the disease and improving patients’ quality of life.

The team also wants to explore combining voice data with other diagnostic methods, like urinary tract information or even genetic markers. “Voice interactions are one piece of the puzzle, but when combined with other diagnostic tools, we could create a more holistic approach,” Prof. Vaidyanathan said.

Published – July 02, 2025 07:35 pm IST




AI Research

Captions rebrands as Mirage, expands beyond creator tools to AI video research

Published

on


Captions, an AI-powered video creation and editing app for content creators that has secured over $100 million in venture capital to date at a valuation of $500 million, is rebranding to Mirage, the company announced on Thursday. 

The new name reflects the company’s broader ambitions to become an AI research lab focused on multimodal foundational models specifically designed for short-form video content for platforms like TikTok, Reels, and Shorts. The company believes this approach will distinguish it from traditional AI models and competitors such as D-ID, Synthesia, and Hour One.

The rebranding will also unify the company’s offerings under one umbrella, bringing together the flagship creator-focused AI video platform, Captions, and the recently launched Mirage Studio, which caters to brands and ad production.

“The way we see it, the real race for AI video hasn’t begun. Our new identity, Mirage, reflects our expanded vision and commitment to redefining the video category, starting with short-form video, through frontier AI research and models,” CEO Gaurav Misra told TechCrunch.


The sales pitch behind Mirage Studio, which launched in June, focuses on enabling brands to create short advertisements without relying on human talent or large budgets. Users simply submit an audio file, and the AI generates video content from scratch, with an AI-generated background and custom AI avatars. They can also upload selfies to create an avatar using their likeness.

What sets the platform apart, according to the company, is its ability to produce AI avatars that have natural-looking speech, movements, and facial expressions. Additionally, Mirage says it doesn’t rely on existing stock footage, voice cloning, or lip-syncing. 

Mirage Studio is available under the business plan, which costs $399 per month for 8,000 credits. New users receive 50% off the first month. 


While these tools will likely benefit brands wanting to streamline video production and save some money, they also spark concerns around the potential impact on the creative workforce. The growing use of AI in advertisements has prompted backlash, as seen in a recent Guess ad in Vogue’s July print edition that featured an AI-generated model.

Additionally, as this technology becomes more advanced, distinguishing between real and deepfake videos becomes increasingly difficult. That is a hard pill to swallow for many people, especially given how quickly misinformation can spread these days.

Mirage recently addressed its role in deepfake technology in a blog post. The company acknowledged the genuine risks of misinformation while also expressing optimism about the positive potential of AI video. It mentioned that it has put moderation measures in place to limit misuse, such as preventing impersonation and requiring consent for likeness use. 

However, the company emphasized that “design isn’t a catch-all” and that the real solution lies in fostering a “new kind of media literacy” where people approach video content with the same critical eye as they do news headlines.




AI Research

Head of UK’s Turing AI Institute resigns after funding threat



Graham Fraser, Technology reporter


Dr Jean Innes (left) pictured with Foreign Secretary David Lammy (centre) and his French counterpart Jean-Noel Barrot at a meeting in London

The chief executive of the UK’s national institute for artificial intelligence (AI) has resigned following staff unrest and a warning the charity was at risk of collapse.

Dr Jean Innes said she was stepping down from the Alan Turing Institute as it “completes the current transformation programme”.

Her position had come under pressure after the government demanded the centre change its focus to defence and threatened to pull its funding if it did not – leading to staff discontent and a whistleblowing complaint submitted to the Charity Commission.

Dr Innes, who was appointed chief executive in July 2023, said the time was right for “new leadership”.

The BBC has approached the government for comment.

The Turing Institute said its board was now looking to appoint a new CEO who will oversee “the next phase” to “step up its work on defence, national security and sovereign capabilities”.

Its work had once focused on AI and data science research in environmental sustainability, health and national security, but moved on to other areas such as responsible AI.

The government, however, wanted the Turing Institute to make defence its main priority, marking a significant pivot for the organisation.

“It has been a great honour to lead the UK’s national institute for data science and artificial intelligence, implementing a new strategy and overseeing significant organisational transformation,” Dr Innes said.

“With that work concluding, and a new chapter starting… now is the right time for new leadership and I am excited about what it will achieve.”

What happened at the Alan Turing Institute?

Founded in 2015 as the UK’s leading centre of AI research, the Turing Institute, which is headquartered at the British Library in London, has been rocked by internal discontent and criticism of its research activities.

A review last year by government funding body UK Research and Innovation found “a clear need for the governance and leadership structure of the Institute to evolve”.

At the end of 2024, 93 members of staff signed a letter expressing a lack of confidence in its leadership team.

In July, Technology Secretary Peter Kyle wrote to the Turing Institute to tell its bosses to focus on defence and security.

He said boosting the UK’s AI capabilities was “critical” to national security and should be at the core of the institute’s activities – and suggested it should overhaul its leadership team to reflect its “renewed purpose”.

He said further government investment would depend on the “delivery of the vision” he had outlined in the letter.

This followed Prime Minister Sir Keir Starmer’s commitment to increasing UK defence spending to 5% of national income by 2035, which would include investing more in military uses of AI.


Technology Secretary Peter Kyle wants the Alan Turing Institute to focus on defence

A month after Kyle’s letter was sent, staff at the Turing Institute warned the charity was at risk of collapse, after the threat to withdraw its funding.

Workers raised a series of “serious and escalating concerns” in a whistleblowing complaint submitted to the Charity Commission.

Bosses at the Turing Institute then acknowledged recent months had been “challenging” for staff.





AI Research

Global Working Group Releases Publication on Responsible Use of Artificial Intelligence in Creating Lay Summaries of Clinical Trial Results

Published

on


New publication underscores the importance of human oversight, transparency, and patient involvement in AI-assisted lay summaries.

BOSTON, Sept. 4, 2025 /PRNewswire/ — The Center for Information and Study on Clinical Research Participation (CISCRP) today announced the publication of a landmark article, “Considerations for the Use of Artificial Intelligence in the Creation of Lay Summaries of Clinical Trial Results,” in Medical Writing (Volume 34, Issue 2, June 2025). Developed by the working group Patient-focused AI for Lay Summaries (PAILS), this comprehensive document addresses both the opportunities and risks of using artificial intelligence (AI) in the development of plain language communications of clinical trial results.


Lay summaries (LS) are essential tools for translating complex clinical trial results into plain language that is clear, accurate, and accessible to patients, caregivers, and the broader community. As AI technologies evolve, they hold promise for streamlining LS creation, improving efficiency, and expanding access to trial results. However, without thoughtful integration and oversight, AI-generated content risks inaccuracies, cultural insensitivity, and loss of public trust.

For biopharma sponsors, CROs, and medical writing vendors, this framework offers clear best practices for integrating AI responsibly while maintaining compliance with EU and UK lay summary regulations and improving efficiency at scale.

Key recommendations from the working group include:

  • Human oversight is essential – AI should support, not replace, expert review to ensure accuracy, clarity, and cultural sensitivity.

  • Prompt engineering is a critical skillset – Thoughtful, specific prompts – including instructions on tone, reading level, terminology, structure, and disclaimers – can make the difference between usable and unusable drafts (see the sketch after this list).

  • Full transparency of AI involvement – Disclosing when and how AI was used builds public trust and complies with emerging regulations such as the EU Artificial Intelligence Act.

  • Robust governance frameworks – Policies should address bias, privacy, compliance, and ongoing monitoring of AI systems.

  • Patient and public involvement – Including patient perspectives in review processes improves relevance and comprehension.
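The paper itself does not prescribe a single prompt, but the prompt-engineering recommendation can be made concrete with a hypothetical template that bakes in the tone, reading-level, terminology, structure, and disclaimer instructions the group highlights. The template and function names below are illustrative, not taken from the PAILS publication.

```python
# Hypothetical template, not from the PAILS publication; it simply
# encodes the instruction categories the working group recommends.
LAY_SUMMARY_PROMPT = """\
You are drafting a lay summary of clinical trial results for the public.
Tone: neutral and non-promotional; do not overstate benefits or certainty.
Reading level: roughly ages 12-14; short sentences, active voice.
Terminology: define every medical term in plain words at first use.
Structure: 1) why the trial was done, 2) who took part, 3) what was found,
4) side effects, 5) what the results mean for future research.
Disclaimer: note that this summary is not medical advice and that a
qualified medical writer must review it before release.

Trial results to summarise:
{results_text}
"""

def build_prompt(results_text: str) -> str:
    """Fill the template for one set of trial results."""
    return LAY_SUMMARY_PROMPT.format(results_text=results_text)
```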

“This considerations document is the result of thoughtful collaboration among industry, academia, and CISCRP,” said Kimbra Edwards, Senior Director of Health Communication Services at CISCRP. “By combining human expertise with AI innovation, we can ensure that clinical trial information remains transparent, accurate, and truly patient-centered.”


