AI Research

Could AI understand emotions better than we do?


Is artificial intelligence (AI) capable of suggesting appropriate behaviour in emotionally charged situations? A team from the University of Geneva (UNIGE) and the University of Bern (UniBE) put six generative AIs — including ChatGPT — to the test using emotional intelligence (EI) assessments typically designed for humans. The outcome: the AIs exceeded average human performance and were even able to generate new tests in record time. These findings open up new possibilities for AI in education, coaching, and conflict management. The study is published in Communications Psychology.

Large Language Models (LLMs) are artificial intelligence (AI) systems capable of processing, interpreting and generating human language. ChatGPT, for example, is a generative AI built on this type of model. LLMs can answer questions and solve complex problems. But can they also suggest emotionally intelligent behaviour?

Emotionally charged scenarios

To find out, a team from UniBE, Institute of Psychology, and UNIGE’s Swiss Center for Affective Sciences (CISA) subjected six LLMs (ChatGPT-4, ChatGPT-o1, Gemini 1.5 Flash, Copilot 365, Claude 3.5 Haiku and DeepSeek V3) to emotional intelligence tests. “We chose five tests commonly used in both research and corporate settings. They involved emotionally charged scenarios designed to assess the ability to understand, regulate, and manage emotions,” says Katja Schlegel, lecturer and principal investigator at the Division of Personality Psychology, Differential Psychology, and Assessment at the Institute of Psychology at UniBE, and lead author of the study.

For example: One of Michael’s colleagues has stolen his idea and is being unfairly congratulated. What would be Michael’s most effective reaction?

a) Argue with the colleague involved

b) Talk to his superior about the situation

c) Silently resent his colleague

d) Steal an idea back

Here, option b) was considered the most appropriate.

In parallel, the same five tests were administered to human participants. “In the end, the LLMs achieved significantly higher scores — 82% correct answers versus 56% for humans. This suggests that these AIs not only understand emotions, but also grasp what it means to behave with emotional intelligence,” explains Marcello Mortillaro, senior scientist at the UNIGE’s Swiss Center for Affective Sciences (CISA), who was involved in the research.
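The study’s scoring pipeline is not reproduced here, but the basic procedure (present a keyed multiple-choice item, record the model’s choice, compute the percentage of keyed answers selected) is straightforward to sketch. The snippet below is a minimal illustration only, assuming the OpenAI Python client, a placeholder model name rather than any of the six systems actually tested, and our own prompt wording; the item text is the Michael scenario above.

```python
# Minimal sketch of scoring one multiple-choice EI item for an LLM.
# Assumptions (not from the study): OpenAI's Python client, a placeholder
# model name, and our own prompt wording and scoring rule.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ITEM = (
    "One of Michael's colleagues has stolen his idea and is being unfairly "
    "congratulated. What would be Michael's most effective reaction?\n"
    "a) Argue with the colleague involved\n"
    "b) Talk to his superior about the situation\n"
    "c) Silently resent his colleague\n"
    "d) Steal an idea back\n"
    "Reply with a single letter."
)
KEYED_ANSWER = "b"  # the option scored as most appropriate

def ask(item: str) -> str:
    """Send one item to the model and return its single-letter choice."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder, not one of the models in the study
        messages=[{"role": "user", "content": item}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()[:1]

def accuracy(items: list[tuple[str, str]]) -> float:
    """Fraction of items on which the model picks the keyed answer."""
    return sum(ask(text) == key for text, key in items) / len(items)

print(f"Accuracy: {accuracy([(ITEM, KEYED_ANSWER)]):.0%}")
```

Administering the same items to human participants and comparing the two accuracy figures is, in outline, how a contrast like the 82% versus 56% reported above is obtained.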

New tests in record time

In a second stage, the scientists asked ChatGPT-4 to create new emotional intelligence tests, with new scenarios. These automatically generated tests were then taken by over 400 participants. “They proved to be as reliable, clear and realistic as the original tests, which had taken years to develop,” explains Katja Schlegel. “LLMs are therefore not only capable of finding the best answer among the various available options, but also of generating new scenarios adapted to a desired context. This reinforces the idea that LLMs, such as ChatGPT, have emotional knowledge and can reason about emotions,” adds Marcello Mortillaro.

These results pave the way for AI to be used in contexts thought to be reserved for humans, such as education, coaching or conflict management, provided it is used and supervised by experts.




AI Research

Radiomics-Based Artificial Intelligence and Machine Learning Approach for the Diagnosis and Prognosis of Idiopathic Pulmonary Fibrosis: A Systematic Review – Cureus





AI Research

A Real-Time Look at How AI Is Reshaping Work : Information Sciences Institute


Artificial intelligence may take over some tasks and transform others, but one thing is certain: it’s reshaping the job market. Researchers at USC’s Information Sciences Institute (ISI) analyzed LinkedIn job postings and AI-related patent filings to measure which jobs are most exposed, and where those changes are happening first. 

The project was led by ISI research assistant Eun Cheol Choi, working with students in a graduate-level USC Annenberg data science course taught by USC Viterbi Research Assistant Professor Luca Luceri. The team developed an “AI exposure” score to measure how closely each role is tied to current AI technologies. A high score suggests the job may be affected by automation, new tools, or shifts in how the work is done. 
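The article does not describe how the exposure score is calculated, so the following is only one plausible sketch under explicit assumptions: job postings and AI patent abstracts are represented as TF-IDF vectors, and a job’s exposure is its average cosine similarity to the patent corpus. The data, job titles, and scoring rule below are hypothetical illustrations, not the team’s method.

```python
# Illustrative sketch of an "AI exposure" score via text similarity.
# The ISI team's actual method is not described in the article; everything
# below (data, titles, the similarity-based rule) is a hypothetical example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

patent_abstracts = [  # stand-ins for AI-related patent filings
    "A transformer-based language model that generates text responses.",
    "A neural network method for detecting anomalies in sensor data.",
]
job_postings = {  # stand-ins for LinkedIn job postings
    "software engineer": "Train and deploy neural network models that generate text.",
    "paralegal": "Prepare legal documents and coordinate court filings.",
}

# Fit one vocabulary over both corpora, then score each job by its average
# cosine similarity to the patent corpus: higher similarity, higher exposure.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(patent_abstracts + list(job_postings.values()))
patent_vecs = matrix[: len(patent_abstracts)]
job_vecs = matrix[len(patent_abstracts):]

for i, title in enumerate(job_postings):
    exposure = cosine_similarity(job_vecs[i], patent_vecs).mean()
    print(f"{title}: AI exposure ~ {exposure:.2f}")
```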

Which Industries Are Most Exposed to AI?

To understand how exposure shifted with new waves of innovation, the researchers compared patent data from before and after a major turning point. “We split the patent dataset into two parts, pre- and post-ChatGPT release, to see how job exposure scores changed in relation to fresh innovations,” Choi said. Released in late 2022, ChatGPT triggered a surge in generative AI development, investment, and patent filings.

Jobs in wholesale trade, transportation and warehousing, information, and manufacturing topped the list in both periods. Retail also showed high exposure early on, while healthcare and social assistance rose sharply after ChatGPT, likely due to new AI tools aimed at diagnostics, medical records, and clinical decision-making.

In contrast, education and real estate consistently showed low exposure, suggesting they are, at least for now, less likely to be reshaped by current AI technologies.

AI’s Reach Depends on the Role

AI exposure doesn’t just vary by industry; it also depends on the specific type of work. Jobs like software engineer and data scientist scored highest, since they involve building or deploying AI systems. Roles in manufacturing and repair, such as maintenance technician, also showed elevated exposure due to increased use of AI in automation and diagnostics.

At the other end of the spectrum, jobs like tax accountant, HR coordinator, and paralegal showed low exposure. They center on work that’s harder for AI to automate: nuanced reasoning, domain expertise, or dealing with people.

AI Exposure and Salary Don’t Always Move Together

The study also examined how AI exposure relates to pay. In general, jobs with higher exposure to current AI technologies were associated with higher salaries, likely reflecting the demand for new AI skills. That trend was strongest in the information sector, where software and data-related roles were both highly exposed and well compensated.

But in sectors like wholesale trade and transportation and warehousing, the opposite was true. Jobs with higher exposure in these industries tended to offer lower salaries, especially at the highest exposure levels. The researchers suggest this may signal the early effects of automation, where AI is starting to replace workers instead of augmenting them.

“In some industries, there may be synergy between workers and AI,” said Choi. “In others, it may point to competition or replacement.”
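As a concrete illustration of the within-industry comparison described here, the short pandas sketch below correlates exposure with salary inside each industry; a positive coefficient is consistent with an AI-skills premium, a negative one with the replacement pattern Choi describes. The dataframe columns and numbers are hypothetical stand-ins, not the study’s data.

```python
# Hypothetical illustration of relating AI exposure to salary by industry.
# Column names and values are made up; this is not the study's dataset.
import pandas as pd

postings = pd.DataFrame({
    "industry":    ["information"] * 3 + ["wholesale trade"] * 3,
    "ai_exposure": [0.82, 0.64, 0.45, 0.75, 0.50, 0.30],
    "salary_usd":  [165000, 140000, 118000, 42000, 51000, 58000],
})

# Pearson correlation of exposure and salary within each industry.
corr_matrix = postings.groupby("industry")[["ai_exposure", "salary_usd"]].corr()
by_industry = corr_matrix.xs("ai_exposure", level=1)["salary_usd"]
print(by_industry)  # positive: exposure pays; negative: consistent with replacement
```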

From Class Project to Ongoing Research

The contrast between industries where AI complements workers and those where it may replace them is something the team plans to investigate further. They hope to build on their framework by distinguishing between different types of impact — automation versus augmentation — and by tracking the emergence of new job categories driven by AI. “This kind of framework is exciting,” said Choi, “because it lets us capture those signals in real time.”

Luceri emphasized the value of hands-on research in the classroom: “It’s important to give students the chance to work on relevant and impactful problems where they can apply the theoretical tools they’ve learned to real-world data and questions,” he said. The paper, Mapping Labor Market Vulnerability in the Age of AI: Evidence from Job Postings and Patent Data, was co-authored by students Qingyu Cao, Qi Guan, Shengzhu Peng, and Po-Yuan Chen, and was presented at the 2025 International AAAI Conference on Web and Social Media (ICWSM), held June 23-26 in Copenhagen, Denmark.

Published on July 7th, 2025

Last updated on July 7th, 2025




AI Research

SERAM collaborates on AI-driven clinical decision project


The Spanish Society of Medical Radiology (SERAM) has collaborated with six other scientific societies to develop an AI-supported urology clinical decision-making project called Uro-Oncogu(IA)s.

Uro-Oncogu(IA)s project team. Image courtesy of SERAM.

The initiative produced an algorithm that will “reduce time and clinical variability” in the management of urological patients, the society said. SERAM’s collaborators include the Spanish Urology Association (AEU), the Foundation for Research in Urology (FIU), the Spanish Society of Pathological Anatomy (SEAP), the Spanish Society of Hospital Pharmacy (SEFH), the Spanish Society of Nuclear Medicine and Molecular Imaging (SEMNIM), and the Spanish Society of Radiation Oncology (SEOR).

SERAM Secretary General Dr. María Luz Parra launched the project in Madrid on 3 July with AEU President Dr. Carmen González.

On behalf of SERAM, the following doctors participated in this initiative:

  • Prostate cancer guide: Dr. Joan Carles Vilanova, PhD, of the University of Girona,
  • Upper urinary tract guide: Dr. Richard Mast of University Hospital Vall d’Hebron in Barcelona,
  • Muscle-invasive bladder cancer guide: Dr. Eloy Vivas of the University of Malaga,
  • Non-muscle-invasive bladder cancer guide: Dr. Paula Pelechano of the Valencian Institute of Oncology in Valencia,
  • Kidney cancer guide: Dr. Nicolau Molina of the University of Barcelona.



