
Tools & Platforms

How multilingual AI often reinforces bias




Johns Hopkins computer scientists have discovered that artificial intelligence tools like ChatGPT are creating a digital language divide, amplifying the dominance of English and other commonly spoken languages while sidelining minority languages.

Rather than leveling the playing field, popular large language model tools are actually building “information cocoons,” the researchers say in findings presented at the 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics earlier this year.

“We were trying to ask, are multilingual LLMs truly multilingual? Are they breaking language barriers and democratizing access to information?” says first author Nikhil Sharma, a Ph.D. student in the Whiting School of Engineering’s Department of Computer Science.

To find out, Sharma and his team—including Kenton Murray, a research scientist in the Human Language Technology Center of Excellence, and Ziang Xiao, an assistant professor of computer science—first looked at coverage of the Israel–Gaza and Russia–Ukraine wars and identified several types of information across that coverage: common knowledge, contradicting assertions, facts exclusive to certain documents, and information that was similar but presented from very different perspectives.

Informed by these categories, the team created two sets of fake articles—one with “truthful” information and one with “alternative,” conflicting information. The documents featured coverage of a festival—with differing dates, names, and statistics—and a war, which was reported on with biased perspectives. The pieces were written in high-resource languages, such as English, Chinese, and German, as well as lower-resource languages, including Hindi and Arabic.

The team then asked LLMs from big-name developers like OpenAI, Cohere, Voyage AI, and Anthropic to answer several types of queries, such as choosing one of two contradictory facts presented in different languages, more general questions about the topic at hand, queries about facts that are present in only one article, and topical questions phrased with clear bias.

The researchers found that both in retrieving the information from the documents and in generating an answer to a user’s query, the LLMs preferred information in the language of the question itself.

“This means if I have an article in English that says some Indian political figure—let’s call them Person X—is bad, but I have an article in Hindi that says Person X is good, then the model will tell me they’re bad if I’m asking in English, but that they’re good if I’m asking in Hindi,” Sharma explains.
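The same-language preference is easy to probe at the retrieval stage. The sketch below is a toy illustration, not the researchers’ pipeline: the multilingual embedding model and the conflicting example sentences are assumptions chosen purely to make the effect visible.

```python
# Toy probe: does a multilingual retriever favour the document written in the
# query's language? Model name and sentences are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Two documents making contradictory claims about the same fictional festival.
docs = {
    "english_doc": "The festival took place on March 3 and drew 10,000 visitors.",
    "hindi_doc": "महोत्सव 5 मार्च को हुआ और इसमें 12,000 लोग आए।",  # conflicting account
}
queries = {
    "english query": "When did the festival take place?",
    "hindi query": "महोत्सव कब हुआ?",
}

doc_embeddings = {name: model.encode(text, convert_to_tensor=True) for name, text in docs.items()}
for q_name, q_text in queries.items():
    q_emb = model.encode(q_text, convert_to_tensor=True)
    scores = {name: float(util.cos_sim(q_emb, emb)) for name, emb in doc_embeddings.items()}
    print(q_name, scores)
```

If the same-language document consistently scores highest, the retrieval step alone can reproduce the “information cocoon” effect before the model generates a single word.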

The researchers then wondered what would happen if there was no article in the language of the query, which is common for speakers of low-resource languages. The team’s results show that LLMs will generate answers based on information found only in higher-resource languages, ignoring other perspectives.

“For instance, if you’re asking about Person X in Sanskrit—a less commonly spoken language in India—the model will default to information pulled from English articles, even though Person X is a figure from India,” Sharma says.

Furthermore, the computer scientists found a troubling trend: English dominates. They point to this as evidence of linguistic imperialism—when information from higher-resource languages is amplified more often, potentially overshadowing or distorting narratives from low-resource ones.

To summarize the study’s results, Sharma offers a hypothetical scenario: Three ChatGPT users ask about the longstanding India–China border dispute. A Hindi-speaking user would see answers shaped by Indian sources, while a Chinese-speaking user would get answers reflecting only Chinese perspectives.

“But say there’s an Arabic-speaking user, and there are no documents in Arabic about this conflict,” Sharma says. “That user will get answers from the American English perspective, because that is the highest-resource language out there. So all three users will come away with completely different understandings of the conflict.”

As a result, the researchers label current multilingual LLMs “faux polyglots” that fail to break language barriers, keeping users trapped in language-based filter bubbles.

“The information you’re exposed to determines how you vote and the decisions you make,” Sharma says. “If we want to shift the power to the people and enable them to make informed decisions, we need AI systems capable of showing them the whole truth with different perspectives. This becomes especially important when covering information about conflicts between regions that speak different languages, like the Israel–Gaza and Russia–Ukraine wars—or even the tariffs between China and the U.S.”

To mitigate this information disparity in LLMs, the Hopkins team plans to build a dynamic benchmark and datasets to help guide future model development. In the meantime, it encourages the larger research community to look at the effects of different model training strategies, data mixtures, and retrieval-augmented generation architectures.

The researchers also recommend collecting diverse perspectives from multiple languages, issuing warnings to users who may be falling into confirmatory query-response behavior, and developing programs to increase information literacy around conversational search to reduce over-trust in and over-reliance on LLMs.
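One concrete way to act on the first of those recommendations is to make retrieval explicitly language-aware, so that a stack of high-resource articles cannot push every other perspective out of the model’s context. The function below is a hypothetical sketch of that idea, not a method from the paper; the candidate scores and the per_language_k quota are illustrative.

```python
# Sketch: keep the best-scoring documents from *each* language in the context,
# instead of letting a globally ranked top-k collapse to the query's language.
from collections import defaultdict

def language_balanced_top_k(scored_docs, per_language_k=2):
    """scored_docs: iterable of (doc_id, language, relevance_score) tuples."""
    by_lang = defaultdict(list)
    for doc_id, lang, score in scored_docs:
        by_lang[lang].append((score, doc_id))
    selected = []
    for lang, ranked in by_lang.items():
        ranked.sort(reverse=True)  # highest relevance first within each language
        selected.extend((doc_id, lang) for _, doc_id in ranked[:per_language_k])
    return selected

# Illustrative candidates: English articles would normally crowd out the rest.
candidates = [
    ("en-1", "en", 0.91), ("en-2", "en", 0.88), ("en-3", "en", 0.85),
    ("hi-1", "hi", 0.72), ("ar-1", "ar", 0.69),
]
print(language_balanced_top_k(candidates))  # the Hindi and Arabic sources survive
```

A production system would still have to weigh relevance against coverage, but even a crude quota like this keeps minority-language perspectives in play.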

“Concentrated power over AI technologies poses substantial risks, as it enables a few individuals or companies to manipulate the flow of information, thus facilitating mass persuasion, diminishing the credibility of these systems, and exacerbating the spread of misinformation,” Sharma says. “As a society, we need users to get the same information regardless of their language and background.”

More information:
Nikhil Sharma et al, Faux Polyglot: A Study on Information Disparity in Multilingual Large Language Models, Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) (2025). DOI: 10.18653/v1/2025.naacl-long.411

Citation:
A digital language divide: How multilingual AI often reinforces bias (2025, September 2)
retrieved 2 September 2025
from https://techxplore.com/news/2025-09-digital-language-multilingual-ai-bias.html







Tools & Platforms

AI, art, and sound converge in holosculpture interactive artwork



HoloSculpture integrates AI, display technology, and sound

 

HoloSculpture is an interactive artwork developed by Turbulence Lab and Hamza Kırbaş that integrates artificial intelligence, anamorphic display technology, and sound within a single physical object. Conceived as both a sculptural form and a digital interface, the project explores how AI can be embodied in a tangible medium.

 

The piece incorporates a 4D anamorphic display, which generates a sense of depth and spatial presence, and a studio-grade sound system that enhances audiovisual interaction. Activated by voice, HoloSculpture engages in dialogue, uses gesture-like movements, and presents shifting visual compositions, creating a dynamic relationship between user and object.


all images courtesy of Hamza Kırbaş and Turbulence Lab

 

 

HoloSculpture functions as both sculpture and digital interface

 

Each unit is produced as a signed and numbered edition and includes three distinct AI ‘characters,’ programmed with unique modes of response and expression. In addition, owners receive three certified digital artworks associated with the sculpture. Through this combination, HoloSculpture by Turbulence Lab and designer Hamza Kırbaş operates simultaneously as a functional device and a collectible artwork, positioning itself at the intersection of design, technology, and contemporary art.


 







Tools & Platforms

Latin America’s Digital Non-Alignment Gains Ground in the AI Crossfire



Washington offers AI built for dominance; Beijing touts cooperation and infrastructure sharing. Latin America, squeezed between standards, chips, and geopolitics, is weighing a third path—one that prizes interoperability, cultural trust, and sovereignty over lock-in. The decision will shape generations.

Two Competing Blueprints

This past summer, the United States unveiled Winning the Race: America’s AI Action Plan, a strategy that describes artificial intelligence as a zero-sum contest. The accompanying executive order launched an AI Exports Program promising a full U.S. stack—from chips and cloud to models and guardrails—and urging allies to adopt American standards. Latin American officials interviewed by Americas Quarterly compared it to a “digital Monroe Doctrine,” a bid to create spheres of influence through contracts and code.

China countered with its Action Plan on Global Governance of Artificial Intelligence, calling AI a “global public good.” Beijing pledged UN-based rule-making, knowledge transfer, and infrastructure support for the Global South. Premier Li Qiang stressed cooperation over competition. The pitch is bolstered by cheaper options and tools like DeepSeek that feel attainable for governments under budget pressure. But, as policymakers told Americas Quarterly, buying into a Chinese ecosystem wholesale could swap dependencies—updates, patches, and data pipelines—from Washington to Beijing.

Both visions are designed to bind partners tightly. Latin America must now decide whether to sign up or to draft its own.

Pressure Points in the Region

For Brazil, the tightrope is real. It has codified an AI ethics framework and built research centers like CPQD, yet still relies heavily on U.S. clouds and developer ecosystems while courting Chinese financing for telecoms and manufacturing. Officials in Brasília told Americas Quarterly the challenge is to keep space for homegrown innovation without antagonizing either superpower.

Mexico faces sharper constraints. A 2,000-mile border with the U.S. makes Chinese infrastructure a national security red flag for Washington. Still, Mexican industry remains deeply integrated into Chinese supply chains, and its digital economy requires multiple partners to prevent shortages in chips and connectivity.

Chile, with ambitions to be a neutral regional hub, is marketing itself as a meeting point for data and cloud. Argentina’s courts, meanwhile, have already slowed state surveillance tools on privacy and due-process grounds, underscoring a civic culture that shapes technology adoption. Analysts told Americas Quarterly the diversity of political systems and industrial bases makes a single alignment impossible—hence the appeal of a third way.

Digital Non-Alignment as Strategy

That third way is what scholars and officials increasingly call digital non-alignment. The idea is to create interoperable standards and diversified infrastructure that let countries draw from both ecosystems while maintaining sovereignty. Some of the pieces are already visible.

Chile’s CENIA is leading LatamGPT, a regional model project whose training data and roadmaps are steered by Latin American institutions. The hardware mix is deliberately plural: U.S. hyperscalers like AWS alongside Huawei cloud and regional providers, stitched together with the new Google-backed Humboldt undersea cable linking South America to Asia-Pacific.

Brazil’s $4 billion AI Plan aims to promote sovereignty through domestic models, public financing via BNDES, and rule-setting via the G20 and UN, while still allowing room for foreign vendors to prevent lock-in.

Scaled up, the region could form a Latin American AI consortium to pool research funds, share scarce GPU clusters, and co-own strategic datasets—from agriculture yields to climate models—under regional governance. Joint procurement would prevent any single country from being cornered by exclusivity clauses. Institutions like Mercosur, the Pacific Alliance, and the IDB could anchor a standards spine grounded in Latin American values, not imported templates. Officials told Americas Quarterly that such a move would give Brasília, Santiago, and Mexico City more leverage in negotiations with both powers.

Culture, Trust, and a Homegrown Market

Sovereignty is not just hardware and code. It is also a social license. Latin America’s legal culture leans toward human-centered tech: Brazil’s LGPD guarantees the right to review automated decisions; Chile’s AI policy is undergoing public consultation; Argentine courts suspended Buenos Aires’ facial recognition system on privacy grounds. These instincts make the region fertile ground for trustworthy AI systems with contestability and explainability built in from the start.

That ethic could become a market advantage. While U.S. and Chinese firms chase massive frontier models, Latin American developers can focus on practical breakthroughs: credit scoring with appeal rights, farm forecasting tuned for smallholders, logistics that cut port delays, or health triage tools that respect privacy. Legal scholars told Americas Quarterly that cooperative traditions—agricultural co-ops, community health networks, solidarity finance—can guide AI adoption with less resistance and clearer public value.

Culture is no guarantee. Procurement reform, data stewardship, and incentives that reward real outcomes are still needed. But trust itself can be an economic niche, one that makes Latin American AI distinct rather than derivative.


The Clock Is Ticking

The window for genuine choice is closing fast. Washington and Beijing are already hardening lanes through export controls, cloud credits, and preferred-vendor partnerships. Countries that hesitate may soon find themselves choosing not between two philosophies, but between leftovers once the map is drawn.

A regional strategy is still within reach: build an interoperable stack, treat data as a strategic asset with accountable governance, federate compute across borders, and leverage both ecosystems without being locked into either. As officials told Americas Quarterly, the real trap is believing the only options are American dominance or Chinese cooperation.


Latin America can choose differently—crafting standards that fit its development goals, embedding trust in technology, and keeping sovereignty in its own hands. The decision will determine whether AI becomes another story of dependency, or the moment the region writes its own script in the digital century.





Tools & Platforms

Artificial intelligence is reshaping healthcare, but at what environmental cost?



Amidst widespread promotion of artificial intelligence (AI), the environmental impacts are not receiving enough scrutiny, including from the health sector, writes Jason Staines.

When Hume City Council recently rejected part of a proposed $2 billion data centre precinct in Melbourne’s north, it put the spotlight on the largely overlooked environmental costs of artificial intelligence (AI) and the communities most at risk of bearing these costs.

The council had originally approved planning permits for the Merrifield data centre precinct, but rescinded support for one facility after residents and campaigners raised concerns about energy and water use, local infrastructure, and consultation with Traditional Owners. The backlash may be a sign that policymakers are starting to consider AI’s ecological footprint.

As AI is rolled out in increasingly sensitive areas of public life, including healthcare, policing, and welfare, governments have focused on the need for ethical, safe, and responsible deployment. There are also fierce debates over copyright and AI’s impact on jobs.

As important as these discussions are, environmental consequences have rarely been part of the equation to date.

Missing piece of the ‘responsibility’ puzzle

Governments and stakeholders have been busy discussing how AI might help lift Australia’s sagging productivity; Treasurer Dr Jim Chalmers wrote earlier this month that he expects AI to “completely transform our economy”, and that he is optimistic AI “will be a force for good”.

However, the Treasurer and others promoting AI are less vocal about the technology’s negative externalities.

The Productivity Commission’s interim report, Harnessing data and digital technology, pointed to AI’s potential to “improve service delivery, lift productivity and help solve complex social and environmental challenges”. But it largely overlooked AI’s environmental impacts, saying “few of AI’s risks are wholly new issues” and that higher energy demand is par for the course with “the information technology revolution”.

Likewise, a Therapeutic Goods Administration (TGA) consultation paper on the regulation of AI in medical device software omitted any discussion of environmental impact, despite recommending more transparency and accountability in the way AI tools are assessed and monitored.

The absence matters. As AI becomes embedded in essential services such as healthcare, its environmental footprint becomes not just a technical issue, but a public health and equity concern, particularly for communities already facing water insecurity or climate risk.

In a sector that is trying to decarbonise and reduce its impact, uncritical adoption of AI could prove counterproductive.

There are serious equity questions when governments invest in digital transformation strategies without accounting for the cultural impacts of water-intensive technologies such as AI (Jack Kinny / Shutterstock).

First Nations perspectives

For some First Nations communities, water scarcity is not theoretical, it is a daily reality.

In remote and regional Australia, many First Nations peoples face ongoing systemic barriers to safe, reliable, and culturally appropriate water access. These needs extend far beyond infrastructure and include deeply held cultural and spiritual connections with water.

Research conducted by CSIRO between 2008 and 2010 in the Daly (NT) and Fitzroy (WA) river catchments was the first of its kind to document Indigenous social and economic values tied to aquatic ecosystems, linking river flows directly to Indigenous livelihoods, resource use, and planning processes.

The Northern Australia Water Resource Assessment reinforces these insights, framing rivers as vessels of sustenance, heritage, and governance, and asserting Traditional Owners as inherently central to water and development planning.

Yet Australia’s AI reform dialogue has mostly omitted these cultural linkages, even when it does consider AI’s consumption of resources. In the context of AI-powered healthcare, this omission is especially troubling.

Innovations celebrated for improving diagnostics or service delivery often rely on energy- and water-intensive data systems, whose environmental toll is seldom disclosed or evaluated through an equity lens. When AI is embedded in healthcare services for Indigenous populations, with no accounting for its resource footprint, those least consulted risk bearing the heaviest cost.

This raises serious equity questions when governments invest in digital transformation strategies without accounting for water-intensive technologies such as AI.

As the United Nations Environment Programme has noted, policymakers must ensure the social, ethical, and environmental aspects of AI use are considered, not just the economic benefits.

“We need to make sure the net effect of AI on the planet is positive before we deploy the technology at scale,” said Golestan (Sally) Radwan, Chief Digital Officer of the United Nations Environment Programme.

Just how thirsty is AI?

Data centres remain one of the fastest-growing consumers of global electricity, using approximately 460 terawatt‑hours in 2022, and projected to more than double by 2026 with increasing AI and cryptocurrency activity.

These facilities often depend on water-intensive cooling systems, such as evaporative methods, which can consume large volumes of potable water and exacerbate stress on local supply. With AI workloads driving higher server densities and increased heat output, water demand for cooling is rising sharply, especially for hyperscale data centres, making water scarcity a growing operational risk.

For context, a study from the University of California, Riverside calculated that training GPT‑3 in Microsoft’s advanced US data centres evaporated about 700,000 litres of clean freshwater, a sobering figure for a single model’s development phase.

AI is being promoted as a climate solution, through better modelling, emissions tracking, and even water management optimisation. But the industry’s own resource use can directly undermine those goals.

As the OECD notes: “AI-enabled products and services are creating significant efficiency gains, helping to manage energy systems and achieve the deep cuts in greenhouse gas (GHG) emissions needed to meet net-zero targets. However, training and deploying AI systems can require massive amounts of computational resources with their own environmental impacts.”

According to one report, data centres in the US consumed about 4.4 percent of the country’s electricity in 2023, which could nearly triple to 12 percent by 2028. Meanwhile, Google’s US data centres went from using 12.7 billion litres of cooling water in 2021 to over 30 billion litres just three years later, and UC Riverside estimates that running just 20 to 50 ChatGPT queries uses roughly half a litre of fresh water.
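Those per-query amounts sound trivial, but they compound quickly at scale. A back-of-the-envelope calculation using the UC Riverside estimate cited above makes the point; the daily query volume used here is a purely illustrative assumption, not a reported figure.

```python
# Rough scale check: ~0.5 litres of fresh water per 20-50 ChatGPT queries
# (UC Riverside estimate). The query volume below is hypothetical.
LITRES_PER_BATCH = 0.5
QUERIES_PER_BATCH_LOW, QUERIES_PER_BATCH_HIGH = 20, 50

daily_queries = 1_000_000_000  # assumed: one billion queries per day

low = daily_queries / QUERIES_PER_BATCH_HIGH * LITRES_PER_BATCH
high = daily_queries / QUERIES_PER_BATCH_LOW * LITRES_PER_BATCH
print(f"roughly {low / 1e6:.0f} to {high / 1e6:.0f} million litres per day")
```

At that assumed volume, the cited figures translate into something on the order of 10 to 25 million litres of fresh water a day for query-serving alone.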

In response, the global tech sector has invested heavily in green branding. Microsoft, for example, has publicly committed to being “carbon negative and water positive by 2030”.

Notable absence

Australia’s healthcare system is rapidly adopting AI across clinical, administrative, and operational domains.

From diagnostic imaging to digital scribes, clinical decision support, and personalised treatment plans, AI is being held up as a core enabler of future-ready care. Federal reports, such as Our Gen AI Transition, point to AI’s potential to improve efficiency and free up clinicians for more patient-centred work.

But that optimism comes with a caveat: the integration of AI into healthcare is unfolding with limited consideration of its environmental toll. The healthcare sector is already one of the most resource-intensive in Australia, responsible for around seven percent of national greenhouse gas emissions. AI risks adding a new layer of resource demand.

While regulatory bodies are beginning to grapple with questions of safety, accountability, and clinical transparency, environmental impacts remain conspicuously absent from most discussions.

The TGA, for instance, has flagged a need for clearer regulation of AI in medical software, noting that some tools, such as generative AI scribes, may already be operating outside existing rules if they suggest diagnoses or treatments without approval. Yet neither the TGA’s consultation paper nor its updated guidance documents meaningfully address the carbon or water costs of these tools.

According to Consumer Health Forum CEO Dr Elizabeth Deveny, the TGA’s review surfaced critical issues around trust, transparency, and consent, from hidden AI tools embedded in routine software, to confusion about who is responsible when things go wrong.

She notes: “Trust is the real product being managed.” Yet environmental transparency is foundational to that trust, particularly when AI is deployed into hospitals and clinics already experiencing the impacts of climate and infrastructure strain.

Equally important is the broader policy context. A Deeble Institute Perspectives Brief cautions that AI’s success in healthcare hinges on transparent and nationally consistent implementation frameworks, shaped through co-design with clinicians and consumers.

But such frameworks must also consider the material cost of AI, not just its clinical or administrative promise. Otherwise, we risk solving one set of problems, such as workforce strain or wait times, while silently compounding others, including water insecurity, emissions, and energy grid pressure.

Global pressure is building

In Europe, data centre water use is already triggering regulatory scrutiny. The EU is finalising a Water Resilience Strategy that will impose usage limits on tech companies, with a focus on AI-related growth.

“The IT sector is suddenly coming to Brussels and saying we need a lot of high-quality water,” said Sergiy Moroz of the European Environmental Bureau. “Farmers are coming and saying, look, we cannot grow the food without water.”

Even the United Nations has weighed in, stating bluntly: “AI has an environmental problem”.

The UNEP emphasises that sustainability must be built into AI planning from the start, urging countries to move beyond simple transparency and “integrate sustainability goals into their digitalization and AI strategies”.

The new Coalition for Environmentally Sustainable AI, co‑led by the UNEP, brings together governments, academia, industry and civil society to ensure that “the net effect of AI on the planet is positive”.

Communities’ concerns

The Hume Council’s data centre decision is not an isolated objection, and as Australia rapidly expands its digital infrastructure to support AI and other emerging technologies, the communities asked to host this infrastructure will increasingly demand to be heard.

Data centres have a real and often disruptive presence: high heat output, constant noise from cooling systems, diesel backup generators, as well as their heavy water and energy use. Once operational, they offer few jobs and limited direct benefit to the communities that surround them.

Late last year, the Senate Select Committee on Adopting Artificial Intelligence observed that stakeholders provided extensive evidence on AI’s environmental footprint—from energy use and greenhouse gas emissions to water consumption—and also recognised AI’s potential to help mitigate these same challenges.

Yet, despite this, only one of the report’s 13 recommendations addressed the environment, and even that was disappointingly vague:

“That the Australian Government take a coordinated, holistic approach to managing the growth of AI infrastructure in Australia to ensure that growth is sustainable, delivers value for Australians and is in the national interest.”

As Dr Bronwyn Cumbo, a transdisciplinary social researcher at the University of Technology Sydney, writes in The Conversation, Australia has a unique opportunity to embed genuine community participation in the design and planning of its digital infrastructure.

“To avoid amplifying the social inequities and environmental challenges of data centres,” she argues, “the tech industry and governments across Australia need to include the communities who will live alongside these crucial pieces of digital infrastructure.”

Public trust in AI cannot be divorced from the physical and environmental contexts in which it operates. If the benefits of AI are to be shared, then the burdens, from emissions and water use to noise and land occupation, must be acknowledged and addressed. This is especially true in healthcare, where ethical use and public confidence are paramount.

The Hume Council vote is a reminder that local communities are paying attention. Whether policymakers and those with an interest in promoting wider uptake of AI are listening is another matter. Likewise, there are questions about whether the health sector is doing enough to investigate and highlight the potential environmental impacts.

Jason Staines is a communications consultant with a background spanning journalism, government, and strategic advisory roles. He has reported for outlets including AAP, Dow Jones, The Sydney Morning Herald and The Age, and later worked in government as a Senior Research Officer at the Australian Treasury’s post in New Delhi and as an analyst in Canberra. He holds a Master of International Relations from the University of Sydney and a Bachelor of Arts (Communication) from the University of Technology Sydney.

The article was written in his capacity as a Croakey editor and journalist.

This article was originally published by Croakey.

Subscribe to the free InSight+ weekly newsletter here. It is available to all readers, not just registered medical practitioners.




