Apple’s latest AI project may be a web search tool

Apple continues to seek a foothold in the artificial intelligence race, and its next effort could bring the company into web search. Bloomberg’s Mark Gurman reports that Apple is building a search platform that it may incorporate into its AI-driven overhaul of Siri. Sources said the tool, internally called World Knowledge Answers, could also be added to the Safari web browser and the iPhone’s Spotlight search interface.

Apple’s efforts in AI have been under the microscope since the debut of Apple Intelligence at WWDC 2024. Since then, the company has appeared to founder, with its revitalized, AI-empowered Siri now not due to arrive until next spring. The proposed search tool would be part of that planned Siri re-launch.

Some core aspects of Siri are still up in the air. The company has reportedly trialed outside models to power a version of the AI assistant, although it hasn’t committed to that approach. An outside partnership for this critical feature is one path Apple could take to bolster its AI offerings. CEO Tim Cook has also said the company is open to acquisitions to pursue its current roadmap. There were even rumors that the company was considering snapping up Perplexity.

Apple has historically avoided getting involved in search, but this development could reflect how more of its potential customers are turning to AI chatbots to access information online. Particularly if the company brings an AI option to Safari, Apple might be able to compete more directly with other tech majors that offer their own branded chatbots, such as Google with Gemini or Microsoft with Copilot. It could also draw closer to parity with AI companies that are entering the browser game.



SBU Researchers Use AI to Advance Alzheimer’s Detection

Alzheimer’s disease is one of the most urgent public health challenges for aging Americans. Nearly seven million Americans over the age of 65 are currently living with the disease, and that number is projected to nearly double by 2060, according to the Alzheimer’s Association.

Early diagnosis and continuous monitoring are crucial to improving care and extending independence, but there isn’t enough high-quality, Alzheimer’s-specific data to train artificial intelligence systems that could help detect and track the disease.

Shan Lin, associate professor of Electrical and Computer Engineering at Stony Brook University, and PhD candidate Heming Fu are working with Guoliang Xing of The Chinese University of Hong Kong to build a body of data based on Alzheimer’s patients. Together they developed SHADE-AD (Synthesizing Human Activity Datasets Embedded with AD features), a generative AI framework designed to create synthetic, realistic data that reflects the motor behaviors of Alzheimer’s patients.

Design overview of SHADE-AD. The training process involves three stages: Stage 1 learns general human actions; Stage 2 embeds AD-specific knowledge; and Stage 3 fine-tunes the model on patient-specific motion metrics.
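To make that three-stage schedule concrete, here is a minimal sketch, assuming PyTorch; the model, the random stand-in data, and the learning-rate schedule are illustrative assumptions, not the actual SHADE-AD implementation.

```python
# Hypothetical sketch of a three-stage training schedule like the one
# the figure describes; data and model are toy stand-ins.
import torch
from torch import nn, optim

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 32))
loss_fn = nn.MSELoss()

def train_stage(inputs: torch.Tensor, targets: torch.Tensor, lr: float) -> None:
    """Run a few optimization steps on one stage's data."""
    opt = optim.Adam(model.parameters(), lr=lr)
    for _ in range(10):  # toy step count
        opt.zero_grad()
        loss_fn(model(inputs), targets).backward()
        opt.step()

x = torch.randn(8, 32)
# Stage 1: learn general human actions from a broad motion corpus.
train_stage(x, torch.randn(8, 32), lr=1e-3)
# Stage 2: embed AD-specific knowledge from labeled patient motion.
train_stage(x, torch.randn(8, 32), lr=1e-4)
# Stage 3: fine-tune on patient-specific motion metrics.
train_stage(x, torch.randn(8, 32), lr=1e-5)
```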

Movements like stooped posture, reliance on armrests when standing from sitting, or slowed gait may appear subtle, but can be early indicators of the disease. By identifying and replicating these patterns, SHADE-AD provides researchers and physicians with the data required to improve monitoring and diagnosis.

Unlike existing generative models, which are often trained on, and output, generic datasets drawn from healthy individuals, SHADE-AD was trained to embed Alzheimer’s-specific traits. The system generates three-dimensional “skeleton videos,” simplified figures that preserve details of joint motion. These 3D skeleton datasets were validated against real-world patient data, with the model proving capable of reproducing the subtle changes in speed, angle, and range of motion that distinguish Alzheimer’s behaviors from those of healthy older adults.
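As a rough illustration of those metrics, the sketch below computes gait speed, a joint angle, and range of motion from a hypothetical skeleton sequence stored as a (frames, joints, 3) NumPy array; the joint indices and frame rate are assumptions, not SHADE-AD’s actual data format.

```python
# Illustrative only: simple motion metrics over a 3D skeleton sequence.
import numpy as np

FPS = 30                     # assumed frame rate
HIP, KNEE, ANKLE = 0, 1, 2   # hypothetical joint indices

def gait_speed(seq: np.ndarray) -> float:
    """Mean hip speed (units per second) across the sequence."""
    steps = np.linalg.norm(np.diff(seq[:, HIP], axis=0), axis=1)
    return float(steps.mean() * FPS)

def knee_angle(frame: np.ndarray) -> float:
    """Hip-knee-ankle angle in degrees for a single frame."""
    a, b = frame[HIP] - frame[KNEE], frame[ANKLE] - frame[KNEE]
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def range_of_motion(seq: np.ndarray) -> float:
    """Spread between the largest and smallest knee angle observed."""
    angles = [knee_angle(frame) for frame in seq]
    return max(angles) - min(angles)

# Demo on random data standing in for a generated skeleton video.
seq = np.random.rand(90, 3, 3)   # ~3 seconds of frames, 3 joints, xyz
print(gait_speed(seq), range_of_motion(seq))
```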

The findings, published and presented at the 23rd ACM Conference on Embedded Networked Sensor Systems (SenSys 2025), are significant. Activity recognition systems trained with SHADE-AD’s data achieved higher accuracy across all major tasks than systems trained with traditional data augmentation or general open datasets. In particular, SHADE-AD excelled at recognizing actions like walking and standing up, which often reveal the earliest signs of decline in Alzheimer’s patients.

SHADE-AD skeleton output: comparison of the “standing up from a chair” motion between a healthy elder and an AD patient.

Lin believes this work could have a significant impact on the daily lives of older adults and their families. Technologies built on SHADE-AD could one day allow doctors to detect Alzheimer’s sooner, track disease progression more accurately, and intervene earlier with treatments and support. “If we can provide tools that spot these changes before they become severe, patients will have more options, and families will have more time to plan,” he said. 

With September recognized nationally as Healthy Aging Month, Lin sees this research as part of an effort to use technology to support older adults in living longer, healthier, and more independent lives. “Healthy aging isn’t only about treating illness, but also about creating systems that allow people to thrive as they grow older,” he said. “AI can be a powerful ally in that mission.”

— Beth Squire




Interactive apps, AI chatbots promote playfulness, reduce privacy concerns


The researchers found that interactivity enhanced perceived playfulness and users’ intention to engage with an app, which was accompanied by a decrease in privacy concerns. Surprisingly, message interactivity, which the researchers thought would increase user vigilance, instead distracted users from thinking about the personal information they may be sharing with the system, said S. Shyam Sundar, professor of media effects at Penn State. That is, the way AI chatbots operate today, building responses on a user’s prior inputs, makes individuals less likely to think about the sensitive information they may be sharing, according to the researchers.

“Nowadays, when users engage with AI agents, there’s a lot of back-and-forth conversation, and because the experience is so engaging, they forget that they need to be vigilant about the information they share with these systems,” said lead author Jiaqi Agnes Bao, an assistant professor of strategic communication at the University of South Dakota, who completed the research during her doctoral work at Penn State. “We wanted to understand how to better design an interface to make sure users are aware of their information disclosure.”

While user vigilance plays a large part in preventing the unintended disclosure of personal information, app and AI developers can balance playfulness and privacy concerns through design choices that result in win-win situations for individuals and companies alike, Bao said.

“We found that if both message interactivity and modality interactivity are designed to operate in tandem, it could cause users to pause and reflect,” she said. “So, when a user converses with an AI chatbot, a pop-up button asking the user to rate their experience or leave comments on how to improve their tailored responses can give users a pause to think about the kind of information they share with the system and help the company provide a better customized experience.”
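As a simple illustration of that suggestion, the sketch below, which is hypothetical and not from the study, interleaves a rating-and-reflection prompt into a chatbot loop every few turns; get_bot_reply and the cadence are placeholder assumptions.

```python
# Illustrative sketch: pair message interactivity (the conversation)
# with a modality-interactivity interruption (a rating prompt) that
# gives users a moment to reflect on what they are sharing.
REFLECT_EVERY = 3  # assumed cadence for the reflection prompt

def get_bot_reply(message: str) -> str:
    """Placeholder for any chatbot backend."""
    return f"(bot reply to: {message})"

def chat_loop() -> None:
    turn = 0
    while True:
        message = input("You: ")
        if message.lower() in {"quit", "exit"}:
            break
        print("Bot:", get_bot_reply(message))
        turn += 1
        if turn % REFLECT_EVERY == 0:
            # The pause the researchers describe: a prompt that nudges
            # users to notice the personal details they have shared.
            print("[Rate this answer 1-5. Note: this chat may retain "
                  "personal details you've shared.]")

if __name__ == "__main__":
    chat_loop()
```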

AI platforms’ responsibility goes beyond simply giving users the option to share or not share personal information via conversation, said study co-author Yongnam Jung, a doctoral candidate at Penn State.

“It’s not just about notifying users, but about helping them make informed choices, which is the responsible way for building trust between platforms and users,” she added.

The study builds on the team’s earlier research, which revealed similar patterns, according to the researchers. Together, they said, the two studies underscore a critical trade-off: while interactivity enhances the user experience, it highlights the benefits of the app and draws attention away from potential privacy risks.

Generative AI, for the most part and in most application domains, is based on message interactivity, which is conversational in nature, said Sundar, who is also the director of Penn State’s Center for Socially Responsible Artificial Intelligence (CSRAI). He added that this study’s finding challenges current thinking among designers that, unlike clicking and swiping tools, conversation-based tools make people more cognitively alert to negative aspects, like privacy concerns.

“In reality, conversation-based tools are turning out to be a playful exercise, and we’re seeing this reflected in the larger discourse on generative AI where there are all kinds of stories about people getting so drawn into conversations that they do things that seem illogical,” he said. “They are following the advice of generative AI tools for very high-stakes decision making. In some ways, our study is a cautionary tale for this newer suite of generative AI tools. Perhaps inserting a pop-up or other modality interactivity tools in the middle of a conversation may stem the flow of this mesmerizing, playful interaction and jerk users into awareness now and then.”




Penn State Altoona professor to launch ‘Metabytes: AI + Humanities Lunch Lab’


ALTOONA, Pa. — John Eicher, associate professor of history at Penn State Altoona, will launch the “Metabytes: AI + Humanities Lunch Lab” series on Tuesday, Oct. 7, from noon to 1 p.m. in room 102D of the Smith Building.

As artificial intelligence (AI) systems continue to advance, students need the tools to engage with them not only technically, but also intelligently, ethically and creatively. The AI + Humanities Lab will serve as a cross-disciplinary space where humanistic inquiry meets cutting-edge technology, helping students ask the deeper questions that surround this emerging force. By blending hands-on experimentation with philosophical and ethical reflection, the lab aims to give students a critical edge: the ability to see AI not just as a tool, but as a cultural and intellectual phenomenon that requires serious and sober engagement.

Each session will begin with a text, image or prompt shared with an AI model. Participants will then interpret and discuss the responses as philosophical or creative expressions. These activities will ask students to grapple with questions of authority, authenticity, consciousness, choice, empathy, interpretation and what it even means to “understand.”

The lab will run each Tuesday from Oct. 7 through Nov. 18, with the exception of Oct. 14. Sessions are drop-in and open to all, and participants may bring their lunch.


