

Bridging the emotional gap in human-AI communication


Artificial Intelligence (AI) systems have reshaped modern life in remarkable ways. From customer service chatbots that address our grievances to virtual assistants like Alexa that remind us of pending tasks on our to-do lists, AI is firmly established in our daily lives, transforming the way we work, communicate, and access information. Conversational AI, such as Alexa or Google Assistant, has made voice commands and natural-language queries commonplace, offering millions of users the convenience of speaking instead of typing.

Current AI systems, however, are not trained to recognize the emotional and nonverbal aspects of human communication, such as voice tone, facial expressions, and body language, which are key to grasping a user's full emotional state. Imagine a voice assistant that plays music to match your mood, or an AI tutor that adapts lessons to your interest and willingness to learn! Sounds fascinating, right? To bring this idea into the real world, scientists have been developing sentient or emotionally intelligent AI, often explored under the domain of affective computing, with the intention of creating systems that can interpret and respond to human emotions.

Professor Shogo Okada of the Japan Advanced Institute of Science and Technology (JAIST), who leads the Social Signal and Multimodal Interaction Laboratory there, is working on this crucial aspect of human-AI interaction. His lab studies human-AI and human-human social interaction patterns through multimodal communication signals, including language, speech signals, body language, and physiological signals such as heart rate variability, sweat gland activity, and other nervous system responses. Using these multimodal signals, collected from real-time experiments involving social interactions, the lab trains computational models that can assess human emotions accurately.

 

When AI reads between the lines (on our face)

Prof. Okada joined JAIST in 2017, after completing his MS and PhD at the Tokyo Institute of Technology, Japan. At JAIST, he began working on human-centric AI systems, and he has published several research papers on different aspects of human relations, including human group interactions. For example, from 2014 to 2016, Prof. Okada collaborated with Prof. Daniel Gatica-Perez of EPFL, Switzerland, to study how AI systems can be trained to predict personality traits. By analyzing nonverbal behavior in both one-on-one (dyadic) interviews and group conversations, the team studied traits such as leadership and the Big Five personality traits. They used pattern recognition on sensor data to infer emotional states from different types of signals occurring together: voice, body posture, and facial expressions. Using group interaction experiments, the study also analyzed how one person's nonverbal communication patterns align with those of others in the group. Explaining this further, Prof. Okada says, "In our experiments, we noted that when a person with high leadership skills started speaking, others in the group directed their gaze and attention toward him or her and stopped speaking. So, we used such paired behavioral patterns in the group to evaluate a person's level of influence over the group, or their interpersonal relations. Training AI systems with this kind of multimodal data may help us understand specific personal and interpersonal traits of people."

In recent years, research has focused on AI systems that sense human emotions, but most studies rely on tone of voice and facial expressions, observable features that humans can control to hide their true feelings. Physiological signals, such as heart rate variability and electrodermal activity (EDA), are 'unobservable' signs of emotional states that cannot be consciously controlled, and therefore reflect a user's true emotions. Exploring this multimodal sentiment, Prof. Okada's 2023 study published in IEEE Transactions on Affective Computing revealed that a combination of observable signals and unobservable physiological signals best predicted the emotional states of users. This finding may pave the way for emotionally intelligent AI systems, allowing for more natural and satisfying human-AI interaction. Prof. Okada adds that the technology has potential applications in education and in monitoring mental illness. By assessing a student's state of excitement or boredom, AI may adapt its teaching routines for better educational outcomes. Similarly, by continuously interacting with the user, AI may detect variations in the emotional states of patients with mental illnesses, helping them access timely therapeutic interventions.
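
A minimal sketch of the general recipe, combining observable and physiological feature streams so one classifier can weigh them jointly, might look as follows. This is an illustrative assumption, not the study's actual pipeline: the feature sets, dimensions, labels, and classifier choice are all hypothetical, and the synthetic random data will score at chance level.

```python
# Minimal sketch of multimodal emotion recognition via feature fusion.
# Hypothetical features and synthetic data; not the published pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200  # number of interaction segments (synthetic)

# Observable modalities: e.g., prosodic voice features, facial action units.
voice = rng.normal(size=(n, 16))
face = rng.normal(size=(n, 12))

# Unobservable physiological modalities: e.g., HRV and EDA statistics.
hrv = rng.normal(size=(n, 8))
eda = rng.normal(size=(n, 6))

# Binary emotional state per segment (e.g., engaged vs. bored), synthetic.
labels = rng.integers(0, 2, size=n)

# Early fusion: concatenate modalities into one feature vector per segment,
# letting the classifier weigh observable and physiological cues jointly.
X_obs = np.hstack([voice, face])            # observable-only baseline
X_all = np.hstack([voice, face, hrv, eda])  # observable + physiological

for name, X in [("observable only", X_obs), ("observable + physiological", X_all)]:
    score = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.2f}")
```

With real recordings, comparing the two scores is what reveals whether the physiological stream adds predictive value beyond the observable one, which is the kind of comparison the 2023 study reports.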

Yet another interesting line of work at Prof. Okada's lab uses social signal processing to develop adaptive interview strategies. "We can all agree that answering questions during interviews is not always easy! Sometimes, we do not know enough to speak at length on a certain subject, or perhaps the interviewer expects us to explain more. In 2024, we published our work on using social signal recognition techniques to sense an interviewee's willingness to speak, allowing conversational robots to adapt and change their interaction strategies," explains Prof. Okada. This enables the system to select appropriate interview questions based on the interviewee's estimated willingness, resulting in more effective interview strategies. Prof. Okada's research on multimodal signal processing also extends to educational AI that helps students improve their spoken English. This study demonstrated that using multiple communication signals together yields a more accurate assessment of speaking skills. Rather than relying on traditional assessment metrics alone, the framework identifies the specific behaviors that need attention for better spoken English, for instance, overuse of filler words (like "hmm" or "uh") or insufficient eye contact, helping both tutors and students understand what leads to improved clarity scores.

 

‘Sen Tan’ Edge of Innovation: How does JAIST facilitate this?

One thing's for sure: AI is going to play an indispensable role in all walks of human life. But Prof. Okada believes our current understanding of AI systems is still 'insufficient.' The 'sen tan' (cutting-edge) aspect of his research addresses this through interdisciplinary work on AI systems. By combining information science, psychology, linguistics, and social science, Prof. Okada intends to build empathy into the design of existing AI systems. He believes this is the only way to make AI a powerful medium for enhancing innate human capabilities.

JAIST plays a critical role in facilitating such innovations. With a campus nestled on a mountainside, far from Japan's bustling cities, JAIST provides an ideal environment for creative thinking and research. According to Prof. Okada, this idyllic setting not only gives students and faculty a peaceful space but also cuts out distractions. The relatively small student body also means better personal access to facilities such as supercomputers. The support structure at JAIST is commendable, combining the freedom to collaborate with researchers from all over the world with exceptionally supportive administrative staff who help manage research budgets, leaving scientists and students more time and energy to focus on research.

 

Inspiration for a futuristic vision of the world

Prof. Okada looks up to visionaries like Professor Geoffrey Hinton, known as the "Godfather of AI" for his work on artificial neural networks. Prof. Okada acknowledges that when it comes to sentient AI systems, opinions differ among AI researchers globally. While some scientists believe that AI systems that can feel and respond to human emotions might take over the world, Prof. Okada believes it is important to reflect on whether AI can ever cultivate the intrinsic motivation that human beings have. Moreover, contemplating how, why, and for what we use AI is critical. For example, if AI systems can positively influence human behavior by encouraging interpersonal interaction rather than social isolation, they could enhance quality of life for a large portion of the aging population, as well as for young adults who often find themselves isolated in the maze of social media.

Concluding his thoughts with a fictional analogy that inspires him, Prof. Okada says, "The way I think of sentient AI systems is inspired by the famous Japanese manga series 'Doraemon'! The companionship between Nobita Nobi, a lazy but kind-hearted 10-year-old, and Doraemon, a blue robotic cat from the 22nd century, is how I envision human-AI relationships evolving in the future. Just as Doraemon helps Nobita overcome his laziness and unlock his potential, I hope our relationship with AI systems will reflect the same kind of collaboration and personal growth."

 

***

 

About Japan Advanced Institute of Science and Technology, Japan

Founded in 1990 in Ishikawa Prefecture, the Japan Advanced Institute of Science and Technology (JAIST) was the first independent national graduate university in Japan established with its own campus to carry out research-based graduate education in advanced science and technology. The "Advanced" in JAIST's name reflects the Japanese term "sen tan," meaning "cutting-edge," representing the university's focus on being at the forefront of innovative research and education. After more than 30 years of steady progress, JAIST has become one of Japan's top-ranking universities. It aims to foster capable leaders through its advanced education and research curricula; about 40% of its alumni are international students. The university's distinctive style of graduate education ensures that students build a thorough foundation for cutting-edge research and technology. JAIST also works closely with local and overseas academic and industrial communities, promoting industry-academia collaborative research.

 

Website: https://www.jaist.ac.jp/english/






AI in PR Research: Speed That Lacks Credibility


Artificial intelligence is transforming how research is created and used in PR and thought leadership. Surveys that once took weeks to design and analyze can now be drafted, fielded and summarized in days or even hours. For communications professionals, the appeal is obvious: AI makes it possible to generate insights that keep pace with the news cycle. But does the quality of those insights hold?

In the race to move faster, an uncomfortable truth is emerging. AI may make aspects of research easier, but it also creates enormous pitfalls for the layperson. Journalists rightfully expect research to be transparent, verifiable and meaningful. This credibility cannot be compromised. Yet an overreliance on AI risks jeopardizing the very characteristics that make research such a powerful tool for thought leadership and PR.

This is where the opportunity and the risk converge. AI can help research live up to its potential as a driver of media coverage, but only if it is deployed responsibly, and never as a total substitute for skilled practitioners. Used without oversight, or by untrained but well-meaning communicators, it produces data that looks impressive on the surface but fails under scrutiny. Used wisely, it can augment and enhance the research process but never supplant it.

The Temptation: Faster, Cheaper, Scalable

AI has upended the traditional pace of research. Writing questions, cleaning data, coding open-ended responses and building reports once required days of manual effort. Now, many of these tasks can be automated.

  • Drafting: Generative models can create survey questions in seconds, offering PR teams a head start on design.
  • Fielding: AI can help identify fraudulent or bot-like responses (a simple screening heuristic is sketched after this list).
  • Analysis: Large datasets can be summarized almost instantly, and open-text responses can be categorized without armies of coders.
  • Reporting: Tools can generate data summaries and visualizations that make insights more accessible.

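To make the fielding point concrete: two widely used screening heuristics flag respondents who finish implausibly fast ("speeders") and respondents who give identical answers across items ("straight-liners"). The sketch below is a generic illustration of those heuristics, not any particular vendor's fraud detection; the column names, thresholds, and data are assumptions.

```python
# Minimal sketch of rule-based survey response screening (speeders and
# straight-liners). Column names, thresholds, and data are illustrative.
import pandas as pd

responses = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "duration_seconds": [412, 38, 365, 501],  # time to complete the survey
    "q1": [4, 5, 2, 3],                       # Likert items, scale 1-5
    "q2": [3, 5, 2, 4],
    "q3": [5, 5, 2, 2],
})
likert_items = ["q1", "q2", "q3"]

# Speeders: completed in under a third of the median duration.
median_time = responses["duration_seconds"].median()
responses["speeder"] = responses["duration_seconds"] < median_time / 3

# Straight-liners: identical answers across every Likert item.
responses["straight_liner"] = responses[likert_items].nunique(axis=1) == 1

flagged = responses[responses["speeder"] | responses["straight_liner"]]
print(flagged[["respondent_id", "speeder", "straight_liner"]])
```

Rules like these catch only the crudest fraud; the point is that a human analyst still decides the thresholds and reviews what gets dropped.
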
The acceleration is appealing. PR professionals can, in theory, generate surveys and insert data into the media conversation before a trend peaks. The opportunity is real, but it comes with a condition: speed matters only when the research holds up to scrutiny.

The Risk: Data That Doesn’t Stand Up

AI makes it possible to create research faster, but not necessarily better. Fully automated workflows often miss the standards required for earned media.

Consider synthetic respondents: artificial personas, trained on data from previous surveys, that AI generates to simulate human answers. On the surface, they provide instant answers to survey questions. But research shows they diverge from real human data once tested across different groups and contexts. The issue isn't limited to surveys. Even at the model level, AI outputs remain unreliable. OpenAI's own system card shows that despite improvements in its newest model, GPT-5 still makes incorrect claims nearly 10% of the time.

For journalists, these shortcomings are disqualifying. Reporters and editors want to know how respondents were sourced, how questions were framed and whether findings were verified. If the answer is simply “AI produced it,” credibility collapses. Worse, errors that slip into coverage can damage brand reputation. Research meant to support PR should build trust, not risk it.

Why Journalists Demand More, Not Less

The reality for PR teams is that reporters are inundated with pitches. That volume has made editors more discerning, and credible data can differentiate a pitch from the competition.

Research that earns coverage typically delivers three things:

  1. Clarity: Methods are clearly explained.
  2. Context: Results are tied to trends or issues audiences care about.
  3. Credibility: Findings are grounded in sound design and transparent analysis.

These expectations have only intensified. Public trust in media is at a historic low. Only 31% of Americans trust the news “a great deal” or “a fair amount.” At the same time, 36% have “no trust at all,” the highest level of complete distrust Gallup has recorded in more than 50 years of tracking. Reporters know this and apply greater scrutiny before publishing any research.

For PR professionals, the implication is clear: AI can speed up processes, but unless findings meet editorial standards, they will never see the light of day.

Why Human Oversight Is Indispensable

AI can process data at scale, but it cannot replicate the judgment or accountability of human researchers. Oversight matters most in four areas:

  • Defining objectives: Humans decide which questions are newsworthy or align with campaign goals and what narratives are worth testing.
  • Interpreting nuance: Machines can classify sentiment but struggle with sarcasm, cultural context and the emotional cues that shape meaningful insights.
  • Accountability: When findings are published, people – not algorithms – must explain the methods and defend the results.
  • Bias detection: AI reflects the limitations of its training data. Without human review, skewed or incomplete findings can pass as fact.

Public opinion reinforces the need for this oversight. Nearly half of Americans say AI will have a negative impact on the news they get, while only one in 10 say it will have a positive effect. If audiences are skeptical of AI-created news, journalists will be even more cautious about publishing research that lacks human validation. For PR teams, that means credibility comes from oversight: AI may accelerate the process, but only people can provide the transparency that makes research media ready.

AI as a Partner, Not a Shortcut

AI is best used strategically, as an “assistant” that enhances workflows rather than a substitute for expertise. That means:

  • Letting AI handle repetitive tasks such as transcription, always with human oversight.
  • Documenting when and how AI tools are used, to build transparency.
  • Validating AI outputs against human coders or traditional benchmarks (see the agreement sketch after this list).
  • Training teams to understand AI’s capabilities and limitations.
  • Aligning with evolving disclosure standards, such as the AAPOR Transparency Initiative.
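
One concrete way to validate AI outputs against human coders, per the third item above, is to have a person code a sample of the same items and measure chance-corrected agreement. Below is a minimal sketch using Cohen's kappa from scikit-learn; the category labels and the 0.7 rule of thumb are illustrative assumptions, not a fixed industry standard.

```python
# Minimal sketch: validate AI-coded open-ended survey responses against a
# human coder using Cohen's kappa (chance-corrected agreement).
from sklearn.metrics import cohen_kappa_score

# Categories assigned to the same 12 open-ended responses (illustrative).
human_codes = ["price", "quality", "price", "service", "quality", "other",
               "price", "service", "quality", "other", "price", "service"]
ai_codes    = ["price", "quality", "price", "service", "other",   "other",
               "price", "service", "quality", "quality", "price", "service"]

kappa = cohen_kappa_score(human_codes, ai_codes)
print(f"Cohen's kappa: {kappa:.2f}")

# A common, context-dependent rule of thumb: require kappa >= 0.7 before
# trusting the AI's coding scheme at scale.
if kappa < 0.7:
    print("Agreement too low: review the AI coding scheme before reporting.")
```

Documenting a check like this alongside the findings is exactly the kind of transparency that lets a reporter verify how the numbers were produced.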

Used this way, AI accelerates processes while preserving the qualities that make research credible. It becomes a force multiplier for human expertise, not a replacement for it.

What’s at Stake for PR Campaigns

Research has always been one of the most powerful tools for earning media. A well-executed survey can create headlines, drive thought leadership and support campaigns long after launch. But research that lacks credibility can do the opposite, damaging relationships with journalists and eroding trust.

Editors are paying closer attention to how AI is being used in PR. Some are experimenting with it themselves, while exercising caution. In Cision’s 2025 State of the Media Report, nearly three-quarters of journalists (72%) said factual errors are their biggest concern with AI-generated material, while many also worried about quality and authenticity. And although some reporters remain open to AI-assisted content if it is carefully validated, more than a quarter (27%) are strongly opposed to AI-generated press content of any kind. Those figures show why credibility cannot be an afterthought: skepticism is high, and mistakes will close doors.

The winners will be teams that integrate AI responsibly, using it to move quickly without cutting corners. They will produce findings that are timely enough to tap into news cycles and rigorous enough to withstand scrutiny. In a crowded media landscape, that balance will be the difference between earning coverage and being ignored.

Conclusion: Credibility as Currency

AI is here to stay in PR research. Its role will only expand, reshaping workflows and expectations across the industry. The question is not whether to use AI, but how to use it responsibly.

Teams that treat AI as a shortcut will see their research dismissed by the media. Teams that treat it as a partner – accelerating processes while upholding standards of rigor and transparency – will produce insights that both journalists and audiences trust.

In today’s environment, credibility is the most valuable currency. Journalists will continue to demand research that meets high standards. AI can help meet those standards, but only when guided by human judgment. The future belongs to PR professionals who prove that speed and credibility are not in conflict, but in partnership.





High Schoolers, Industry Partners, and Howard Students Open the Door to Tech at the Robotics and AI Outreach Event


Last week in Blackburn Center, Howard University welcomed middle school, high school, and college students to explore the rapidly expanding world of robotics at its second Robotics and AI Outreach Event. Teams of high school students showcased robots they built, while representatives from partners including Amazon Fulfillment Technologies, FIRST Robotics, the U.S. Navy and U.S. Army Research Laboratories, and Virginia Tech gave presentations on their latest technologies, as well as ways to get involved in high-tech research.

Across Thursday and Friday, Howard students and middle and high schoolers from across the DMV region heard from university researchers creating stories with generative AI and learned how they can get involved in STEM outreach from the Howard University Robotics Organization (HURO) and FIRST Robotics. They also viewed demonstrations of military unmanned ground vehicles and the Amazon Astro household robot. The biggest draw, however, was the robotics showcase in the East Ballroom. 

Amazon Program Manager Gerald Harris demos the Astro to students.

Over both days, middle and high school teams from across the DMV presented their robots as part of the FIRST Tech Challenge (FTC) and FIRST Robotics Competition, in which they were tasked with designing a robot within six weeks. The program is intensive and gives students a taste of a real-world engineering career: students not only design and build their entries but also engage in outreach events and raise their own funding each year.

“It’s incredible,” said Shelley Stoddard, vice president of FIRST Chesapeake. “I liken our teams to entrepreneurial startups. Each year they need to think about who they’re recruiting, how they’re recruiting; what they’re going to do for fundraising. If they want to have a brand, they create that, they manage that. We are highly encouraging of outreach because we don’t want it to be insular to just their schools or their classrooms.” 

Reaching the Next Generation of Engineers

This entrepreneurial spirit carries across the teams, such as the Ashburn, Virginia-based BeaverBots, who showed up in matching professional attire to stand out to potential recruits and investors as they presented three separate robots they’ve designed over the years — the Stubby V2, Dam Driver V1, and DemoBot — all built for lifting objects. Beyond already being skilled engineers and coders in their own right, the team has a heavy focus on getting younger children into robotics, even organizing their own events.

One of three robots designed by the BeaverBots team.

“One of the biggest things about our outreach is showing up to scrimmages and showing people we actually care about robotics and want to help kids join robotics,” said team member and high school junior Savni (last name withheld). “So, for example we’ve started a team in California, we’ve mentored [in] First Lego League, and we’ve hosted multiple scrimmages with FTC teams.”

“We also did a presentation in our local Troop 58 in Ashburn, where we showed our robot and told kids how they can get involved with FIRST,” added team vice-captain Aryan. “Along with that, a major part of our fundraising is sponsorship and matching grants. We’ve received matching grants from CVS, FabWorks, and ICF.”

This desire to pay it forward and get more people involved in engineering wasn’t limited to the teams. Members of the student-run HURO were also present, putting on a drone demo and giving lectures advocating for more young Black intellectuals to get into science and engineering. 

“Right now, we’re doing a demo of one of our drones from the drone academy,” explained senior electrical engineering major David Toler II. “It’s a program we’ve put on since 2024 as a way to enrich the community around us and educate the Black community in STEM. We not only provide free drones to high schools, but we also work hands-on with them in very one-on-one mentor styles to give them knowledge to build on themselves and understand exactly how it works, why it works, and what components are necessary.” 

Building A Strong Support Network

HURO has been involved with the event from the beginning. Event organizer and Howard professor Harry Keeling, Ph.D., credits the drone program for helping the university’s AI and robotics outreach take flight. 

“It started with the drone academy, then that expanded through Dr. Todd Shurn’s work through the Sloan Foundation in the area of gaming,” explained Keeling. “Then gaming brought us to AI, and we got more money from Amazon and finally said ‘we need to do more outreach.’” 

Since 2024, Keeling has been working to bring more young people into engineering and AI research, relying on HURO, other local universities and high schools, industry partners like Amazon, and the Department of Defense to build a strong network dedicated to local STEM outreach. As with FIRST Robotics, a large part of his motivation for these growing partnerships is preparing students for successful jobs in the industry.

“We tell our students that in this field, networking is how you accomplish career growth,” he said. “None of us knows everything about what we do, but we can have a network where we can reach out to people who know more than we do. And the stronger our network is, the more we are able to solve problems in our own personal and professional lives.” 

At next year’s event, Keeling plans to step back and allow HURO to take over more of the organizing and outreach, further bringing the next generation into leadership positions within the field. Meanwhile, he is working with faculty members across the university to bring AI into the curriculum, further demystifying the technology and ensuring Howard students are prepared for the future.

For Keeling, outreach events like this are vital to ensuring that young people feel confident in entering robotics, rather than intimidated. 

“One thing I realized is young people gravitate to what they see,” he said. “If they can’t see it, they can’t conceive it. These high schoolers [and] middle schoolers are getting a chance to rub elbows with a lot of professionals [and] understand what a roboticist ultimately might be doing in life.”

He hopes that his work eventually helps children see a future in tech as just as attainable as any other field they see on TV.

“I was talking with my daughters, and I asked them at dinner ‘what do you want to be when you grow up?’” Keeling said. “And my youngest one said astronauts, and an artist, and a cook. Now hopefully one day, one of those 275 students that were listening to my presentation will answer the question with ‘I want to be an AI expert. I want to be a roboticist.’ Because they’ve come here, they’ve seen and heard what they can do.”







Disney, Universal and Warner Bros. Discovery sue Chinese AI firm as Hollywood’s copyright battles spread


Walt Disney Co., Universal Pictures and Warner Bros. Discovery on Tuesday sued a Chinese artificial intelligence firm called MiniMax for copyright infringement, alleging its AI service generates iconic characters including Darth Vader, the Minions and Wonder Woman without the studios’ permission.

“MiniMax’s bootlegging business model and defiance of U.S. copyright law are not only an attack on Plaintiffs and the hard-working creative community that brings the magic of movies to life, but are also a broader threat to the American motion picture industry,” the companies said in their complaint, filed in U.S. District Court in Los Angeles.

The entertainment companies requested that MiniMax be restrained from further infringement. They are seeking damages of up to $150,000 per infringed work, as well as attorney fees and costs.

This is the latest in a series of copyright lawsuits that major studios have brought against AI companies over intellectual property concerns. In June, Disney and Universal Pictures sued the AI firm Midjourney for copyright infringement. Earlier this month, Warner Bros. Discovery also sued Midjourney.

Shanghai-based MiniMax operates a service called Hailuo AI, which is marketed as a “Hollywood studio in your pocket” and has used characters including the Joker and Groot in its ads without the studios’ permission, the lawsuit said. Users can type in a text prompt requesting “Star Wars’” iconic character Yoda or DC Comics’ Superman, and Hailuo AI can produce high-quality, downloadable images or video of the character, according to the document.

“MiniMax completely disregards U.S. copyright law and treats Plaintiffs’ valuable copyrighted characters like its own,” the lawsuit said. “MiniMax’s copyright infringement is willful and brazen.”

“Given the rapid advancement in technology in the AI video generation field … it is only a matter of time until Hailuo AI can generate unauthorized, infringing videos featuring Plaintiffs’ copyrighted characters that are substantially longer, and even eventually the same duration as a movie or television program,” the lawsuit said.

MiniMax did not immediately return a request for comment.

Hollywood is grappling with significant challenges, including the threat of AI, as companies consolidate and cut expenses while production costs rise. Many actors and writers, still recovering from the strikes of 2023, are scrambling to find jobs. Some believe the growth of AI threatens their livelihoods, as tech tools can replicate iconic characters from text prompts.

While some studios have sued AI companies, others are looking for ways to partner with them. Lionsgate, for example, has partnered with AI startup Runway to help with behind-the-scenes processes such as storyboarding.


