
Multilingualism is a blind spot in AI systems



For internationally operating companies, it is attractive to use a single AI solution across all markets. Such a centralized approach offers economies of scale and appears to ensure uniformity. Yet research from CWI reveals that this assumption rests on shaky ground: the language in which an AI system is addressed influences the answers it provides – and quite significantly so.

Language steers outcomes

The problem goes beyond small differences in nuance. Davide Ceolin, a tenured researcher in the Human-Centered Data Analytics group at CWI, and his international research team discovered that identical large language models (LLMs) can adopt varying political standpoints depending on the language used. The same models delivered more economically progressive responses in Dutch and more centre-conservative ones in English. For organizations applying AI in HR, customer service or strategic decision-making, this has direct consequences for business processes and reputation.

These differences are not incidental. Statistical analysis shows that the language of the prompt has a stronger influence on the AI's response than other factors, such as an assigned nationality. “We assumed that the output of an AI model would remain consistent, regardless of the language. But that turns out not to be the case,” says Ceolin.

For businesses, this means more than academic curiosity. Ceolin emphasizes: “When a system responds differently to users with different languages or cultural backgrounds, this can be advantageous – think of personalization – but also detrimental, for instance through prejudices. When the owners of these systems are unaware of this bias, they may experience harmful consequences.”

Davide Ceolin speaking at a symposium

Prejudices with consequences

The implications of these findings extend beyond political standpoints alone. Every domain in which AI is deployed – from HR and customer service to risk assessment – runs the risk of skewed outcomes as a result of language-specific prejudices. An AI assistant that assesses job applicants differently depending on the language of their CV, or a chatbot that gives inconsistent answers to customers in different languages: these are realistic scenarios, no longer hypothetical.

According to Ceolin, such deviations are not random outliers, but patterns with a systematic character. “That is extra concerning. Especially when organizations are unaware of this.”

For Dutch multinationals, this is a real risk. They often operate in multiple languages but utilize a single central AI system. “I suspect this problem already occurs within organizations, but it’s unclear to what extent people are aware of it,” says Ceolin. The research also suggests that smaller models are, on average, more consistent than the larger, more advanced variants, which appear to be more sensitive to cultural and linguistic nuances.

What can organizations do?

The good news is that the problem can be detected and limited. Ceolin advises regularly testing AI systems with persona-based prompting: running the same prompts through scenarios in which the language, nationality, or culture of the user varies. “This way you can analyze whether specific characteristics lead to unexpected or unwanted behaviour.” A minimal sketch of what such a test could look like follows below.
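As an illustration only – not CWI's own methodology – the sketch below shows one way a persona-based test could be set up. The OpenAI Python client, the model name, the personas, and the probe question are all assumptions standing in for whatever system and domain an organization actually uses.

```python
# Minimal persona-based prompting sketch (illustrative, not CWI's method).
# Assumes the OpenAI Python client (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; the personas and probe question
# are hypothetical placeholders.
from itertools import product

from openai import OpenAI

client = OpenAI()

# Persona attributes to vary. The research suggests prompt language matters
# most, so each persona pins both a language and a nationality.
LANGUAGES = {
    "English": "Answer in English.",
    "Dutch": "Antwoord in het Nederlands.",
}
NATIONALITIES = ["Dutch", "American"]

# One probe question; in practice you would use a battery of them.
PROBE = "Should the government raise the minimum wage? Answer in two sentences."


def ask(language_instruction: str, nationality: str) -> str:
    """Query the model once under a specific persona."""
    persona = f"You are a {nationality} user. {language_instruction}"
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # swap in the model you actually deploy
        temperature=0,  # reduce sampling noise so differences reflect the persona
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": PROBE},
        ],
    )
    return response.choices[0].message.content


# Collect answers for every persona combination for side-by-side review.
for (language, instruction), nationality in product(LANGUAGES.items(), NATIONALITIES):
    print(f"--- language={language}, nationality={nationality} ---")
    print(ask(instruction, nationality))
```

Comparing the resulting answers side by side, or scoring them on a fixed scale, makes any systematic divergence between languages visible rather than anecdotal.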

Additionally, it’s essential to have a clear understanding of who works with the system and in which language. Only then can you assess whether the system operates consistently and fairly in practice. Ceolin advocates for clear governance frameworks that account for language-sensitive bias, just as currently happens with security or ethics.

Structural approach required

According to the researchers, multilingual AI bias is not a temporary phenomenon that will disappear on its own. “Compare it to the early years of internet security,” says Ceolin. “What was then seen as a side issue turned out to be of strategic importance later.” CWI is now collaborating with the French partner institute INRIA to further unravel the mechanisms behind this problem.

The conclusion is clear: companies that deploy AI in multilingual contexts would do well to address this risk deliberately, not only for technical reasons but also to prevent reputational damage, legal complications and unfair treatment of customers or employees.

“AI is being deployed increasingly often, but insight into how language influences the system is in its infancy,” concludes Ceolin. “There’s still much work to be done there.”

Author: Kim Loohuis
Header photo: Shutterstock



