

‘Sovereignty’ Myth-Making in the AI Race


This piece is part of “Ideologies of Control: A Series on Tech Power and Democratic Crisis,” in collaboration with Data & Society. Read more about the series here.

NVIDIA CEO Jensen Huang delivers remarks as President Donald Trump looks on during an “Investing in America” event, Wednesday, April 30, 2025, in the Cross Hall of the White House. (Official White House photo by Joyce N. Boghosian)

In late May, US President Donald Trump made an official trip to several Arab Gulf states, accompanied by more than three dozen CEOs of US-based big technology companies. The visit produced over $600 billion worth of deals and celebratory proclamations from Gulf leaders, including Saudi Crown Prince Mohammed bin Salman, that their countries would now become hubs for independent, groundbreaking AI research and development in the Middle East. In what can only be described as an ironic confluence of events, G42 (the holding company for the United Arab Emirates’ AI strategy) was one of the partners, along with NVIDIA, at a France-sponsored event to build a European AI stack, even as NVIDIA and other American tech companies were partnering with the UAE. The geopolitical era of sovereign AI is truly here.

Tech sovereignty didn’t start with AI. Initial discussions of internet sovereignty originated in China in the early 2000s and 2010s. However, given the historic global dominance of US-based big technology companies, the appetite for sovereign AI — for self-sufficiency in the development of AI technologies — only began to develop during the first Trump administration’s trade war with China in 2018. Many of the chips that US technology companies relied on were manufactured in Taiwan. As China became more belligerent towards Taiwan, concerns about global AI production grew, driven by the question of what would happen to chip supply chains in the event of an all-out conflict between Taiwan and China. During the Biden administration, increasing US chip production capacity and limiting the export of powerful GPUs to China grew to become a top national security priority. (The Trump administration has since rescinded the framework under which these controls were put in place, but has not removed the specific restrictions limiting GPU export to China.)

This intensifying adversarial relationship between the US and China, the newer and more aggressive assertion of American AI dominance by the Trump administration, and the ripple effects of these moves across Europe and the globe — which have manifested as a fear of being left behind in the AI race — have all raised the priority countries place on sovereign control of the AI stack in their AI strategies.

‘Sovereignty as a Service’ (SaaS)

Big tech companies recognize these priorities, and are themselves shaping the rhetoric of sovereign tech by, effectively, offering sovereignty as a service. This is happening at three different levels of the tech stack. Firstly, NVIDIA’s CEO has boldly declared, “Every country needs sovereign AI.” Under this imperative, the company is laying down chips and hardware infrastructure around the world, from Denmark to Thailand to New Zealand. NVIDIA describes the components comprising this global infrastructure as “AI factories,” which spin natural resources and energy into tokens of intelligence.

Secondly, cloud service providers are also getting into the SaaS game, offering sovereignty not just to national governments but also to private entities. Amazon Web Services, the foremost cloud service provider, offers an “AWS European Sovereign Cloud.” Microsoft Azure and Google Cloud also offer sovereign cloud products to private enterprises — including “sovereign” or “sovereignty” controls that encompass encryption and data localization.

And finally, at the model-building and dataset-annotation level, open-source and multilingual AI have also been touted as supporting digital and AI sovereignty. Hugging Face has described open-source AI as a “cornerstone of digital sovereignty,” forming the foundation for “autonomy, innovation, and trust” in nations around the world. Governments are funding the development of national language models: South Korea has recently announced that it will invest $735 billion in the development of “sovereign AI” using Korean language data. Together, governments and companies alike paint performance advantages of multilingual AI as sovereignty wins, promoting multilingual models as bolstering economic growth, commerce, and cultural preservation.

‘Sovereignty’ for you – control for me

An expansive view of digital sovereignty is that an entity — nation-state, regional grouping, community — should control its own digital destiny. The twist with SaaS is that the “clients” are negotiating away key aspects of their sovereignty in the process.

Consider NVIDIA. What appears to be a straightforward transaction — territory, energy, and resources in exchange for the company’s chips to build out national sovereign AI infrastructure — is complicated by NVIDIA’s other business interests. The company also provides cloud services and develops its own AI models, and these arms of the business are part of its sovereign AI package deal: NVIDIA is training Saudi Arabia’s university and government scientists to build out “physical” and “agentic” AI, and, besides laying the infrastructural groundwork in India, it is also training India’s business engineers to use the company’s AI offerings.

NVIDIA’s AI models, like its multilingual offerings, would benefit significantly from the cultural and language data already being transmitted through its infrastructure. Government and enterprise use of NVIDIA’s AI models through the company’s AI API and cloud opens opportunities for NVIDIA to siphon high-quality data from around the world to bolster its own offerings. The fact that language data extracted from these countries could improve governmental and enterprise clients’ access to high-quality multilingual models, like the Nemotron language models, provides a legitimate-seeming use that justifies the company’s collection of that data, even as the same data could enrich the company’s other models.

Finally, the company’s AI models have to be trained somewhere. Governmental lock-in to NVIDIA’s infrastructure could mean that residents not only bear the costs of national AI production, but also the costs of the company’s operations. Other AI companies, such as Meta, have already tried to structure data center utilities such that residents would foot the power bill. The rhetoric of “sovereign AI” — that this infrastructure benefits these countries and that the countries control AI production — further justifies those costs to residents. It leaves those dependent on the infrastructure in a position to accept an attractive myth, doused in technical language and the promise of national technological leadership, that buries a reality in which they may not be sovereign over their AI infrastructure at all — over how, and to what degree, their territory and resources are used to produce AI in their interests or in NVIDIA’s.

Model building and data annotation: ‘Sovereign AI’ as labor and expertise extraction

By contributing their expertise to train multilingual models — seen as prime examples of sovereign AI — translators around the world are being placed in a vulnerable and uncertain position: they are annotating data for models that supplant their own labor. The impacts of AI on translator roles are especially felt in Turkey, where translators have played a respected role in the country’s diplomatic history. And rather than empowering communities that speak low-resource languages, multilingual models that cover those languages could instead work to their detriment. Cohere, which focuses on multilingual models, has formed a partnership with Palantir, which supplies software infrastructure to entities like US Immigration and Customs Enforcement (ICE). Human language annotators have been told that they should aim to convert the machine-like responses of LLMs into more human-like responses. Yet the subtle cultural and linguistic nuances that “sovereign” multilingual models aim to capture are arguably key to resisting political oppression: culturally specific emojis and nicknames, for instance, have been used to counteract censorship. Giving surveillant entities access to this language expertise could shut down avenues for resistance and the assertion of autonomy — of sovereignty.

Finally, a number of “sovereign” multilingual models are open-sourced or built from open-source models, which have themselves been painted as supporting sovereignty. While open-source and synthetic models can be extremely worthwhile technological efforts, highlighting only these offerings can downplay and ultimately bury the ways in which these models, their language data, and the community involvement behind them serve proprietary multilingual models and more targeted business interests. It is important to remain vigilant about how the rhetoric that this labor and these models are in the service of cultural preservation can obscure less savory uses, from labor supplantation to surveillance.

‘Sovereignty’ for whom?

In the 19th century, European powers deployed build-operate-transfer schemes, or BOTs, as a tool of colonial expansion. In these schemes, private, metropolitan companies provided the capital, knowledge, and resources to construct key pieces of infrastructure — railroads, ports, canals, roads, telegraph lines — either in formal colonies, like the British in India, or in places where their government was trying to expand power and influence, like the Germans in Anatolia, the heart of the Ottoman Empire, on the eve of World War I.

Sovereignty as a service represents a modern incarnation of this colonial mode. As Laleh Khalili has written in a recent London Review of Books essay on defense contractors, this rhetoric is part of a new political economy of global politics in which traditional institutional sites of power are preserved as facades but hollowed out, turning what was formerly collective property into commodities accessed by subscription. Two decades ago, the US Department of Defense would have owned the software it operated and likely developed it in-house; now it runs corporate software, like products from Palantir, that it pays a regular subscription fee to access (and that, in Palantir’s case, it was sued into adopting). This subscription model enables continuous rent extraction and gives corporations the ability not only to update or fix the software remotely, but also to turn it off at the source when the governments or institutions beholden to it don’t act according to the corporation’s wishes. If we take seriously the problematic metaphor of an AI arms race, or of a “war” to control the 21st century, then tech companies, with their SaaS offerings, are acting as arms dealers, encouraging the illusion of a race for sovereign control while being the true powers behind the scenes.





Australia is set to get more AI data centres. Local communities need to be more involved


Data centres are the engines of the internet. These large, high-security facilities host racks of servers that store and process our digital data, 24 hours a day, seven days a week.

There are already more than 250 data centres across Australia. But more are on the way as the federal government’s plans for digital infrastructure expansion gain traction. Tech giant Amazon recently pledged to invest an additional A$20 billion in new data centres across Sydney and Melbourne, alongside the development of three solar farms in Victoria and Queensland to help power them.

The New South Wales government also recently launched a new authority to fast-track approvals for major infrastructure projects.

These developments will help cater to the surging demand for generative artificial intelligence (AI). They will also boost the national economy and increase Australia’s digital sovereignty – part of a global shift toward storing and managing data domestically under national laws.

But the everyday realities for communities living near these data centres are less rosy. One key step toward mitigating the impacts on those communities is ensuring genuine community participation in shaping how Australia’s data-centre future is developed.

The sensory experience of data centres

Data centres are large, warehouse-like facilities. Their footprint typically ranges from 10,000 to 100,000 square metres. They are set on sites with backup generators and thousands of litres of stored diesel and enclosed by high-security fencing. Fluorescent lighting illuminates them every hour of the day.

A data centre can give off heat at temperatures of 35°C to 45°C. To prevent the servers from overheating, air conditioners hum continuously. In water-cooled facilities, pipes transport gigalitres of cool water through the data centre each day to absorb the heat produced.

Data centres can place substantial strain on the local energy grid and water supply.

In some places where many data centres have been built, such as Northern Virginia in the United States and Dublin in Ireland, communities have reported rising energy and water prices. They have also reported water shortages and the degradation of valued natural and historical sites.

They have also experienced economic impacts. While data centre construction generates high levels of employment, these facilities tend to employ a relatively small number of staff when they are operating.

These impacts have prompted some communities to push back against new data centre developments. Some communities have even filed lawsuits to halt proposed projects due to concerns about water security, environmental harm and heavy reliance on fossil fuels.

A unique opportunity

To date, communities in Australia have been buffered from the impacts of data centres. This is largely because Australia has outsourced most of its digital storage and processing needs (and associated impacts) to data centres overseas.

But this is now changing. As Australia rapidly expands its digital infrastructure, the question of who gets to shape its future becomes increasingly important.

To avoid amplifying the social inequities and environmental challenges of data centres, the tech industry and governments across Australia need to include the communities who will live alongside these crucial pieces of digital infrastructure.

This presents Australia with a unique opportunity to set the standard for creating a sustainable and inclusive digital future.

A path to authentic community participation

Current planning protocols for data centres limit community input. But there are three key steps data centre developers and governments can take to ensure individual developments – and the broader data centre industry – reflect the values, priorities and aspirations of local communities.

1. Developing critical awareness about data centres

People want a greater understanding of what data centres are, and how they will affect their everyday lives.

For example, what will data centres look, sound and feel like to live alongside? How will they affect access to drinking water during the next drought? Or water and energy prices during the peak of summer or winter?

Genuinely engaging with these questions is a crucial step toward empowering communities to take part in informed conversations about data centre developments in their neighbourhoods.

2. Involving communities early in the planning process

Data centres are often designed using generic templates, with minimal adaptation to local conditions or concerns. Yet each development site has a unique social and ecological context.

By involving communities early in the planning process, developers can access invaluable local knowledge about culturally significant sites, biodiversity corridors, water-sensitive areas and existing sustainability strategies that may be overlooked in state-level planning frameworks.

This kind of local insight can help tailor developments to reduce harm, enhance benefits, and ensure local priorities are not just heard, but built into the infrastructure itself.

3. Creating more inclusive visions of Australia’s data centre industry

Communities understand the importance of digital infrastructure and are generally supportive of equitable digital access. But they want to see the data centre industry grow in ways that acknowledge their everyday lives, values and priorities.

To create a more inclusive future, governments and industry can work with communities to broaden their “clean” visions of digital innovation and economic prosperity to include the “messy” realities, uncertainties and everyday aspirations of those living alongside data centre developments.

This approach will foster greater community trust and is essential for building more complex, human-centred visions of the tech industry’s future.





Google Launches Lightweight Gemma 3n, Expanding Edge AI Efforts — Campus Technology

Google DeepMind has officially launched Gemma 3n, the latest version of its lightweight generative AI model designed specifically for mobile and edge devices — a move that reinforces the company’s emphasis on on-device computing.

The new model builds on the momentum of the original Gemma family, which has seen more than 160 million cumulative downloads since its launch last year. Gemma 3n introduces expanded multimodal support, a more efficient architecture, and new tools for developers targeting low-latency applications across smartphones, wearables, and other embedded systems.

“This release unlocks the full power of a mobile-first architecture,” said Omar Sanseviero and Ian Ballantyne, Google developer relations engineers, in a recent blog post.

Multimodal and Memory-Efficient by Design

Gemma 3n is available in two model sizes, E2B (5 billion parameters) and E4B (8 billion), with effective memory footprints similar to much smaller models — 2GB and 3GB respectively. Both versions natively support text, image, audio, and video inputs, enabling complex inference tasks to run directly on hardware with limited memory resources.

A core innovation in Gemma 3n is its MatFormer (Matryoshka Transformer) architecture, which allows developers to extract smaller sub-models or dynamically adjust model size during inference. This modular approach, combined with Mix-n-Match configuration tools, gives users granular control over performance and memory usage.
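To make the nesting idea concrete, here is a minimal, illustrative sketch of the Matryoshka principle: a smaller feed-forward block is a prefix slice of the full block’s weights, so one set of trained parameters can serve several model sizes. The dimensions, names, and slicing scheme below are invented for illustration and are not Google’s MatFormer implementation or its Mix-n-Match tooling.

```python
# Illustrative sketch of the "Matryoshka" nesting concept (not Google's code):
# a smaller feed-forward network is a prefix slice of the full network's weights.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff_full, d_ff_small = 64, 256, 128  # hypothetical sizes

# Full feed-forward weights, as they would exist after training the large model once.
W_in = rng.normal(size=(d_model, d_ff_full)) / np.sqrt(d_model)
W_out = rng.normal(size=(d_ff_full, d_model)) / np.sqrt(d_ff_full)

def feed_forward(x, w_in, w_out):
    """Standard transformer-style FFN: expand, apply a nonlinearity, project back."""
    return np.maximum(x @ w_in, 0.0) @ w_out  # ReLU for simplicity

# "Extracting" a smaller sub-model amounts to taking a prefix of the hidden dimension.
W_in_small = W_in[:, :d_ff_small]
W_out_small = W_out[:d_ff_small, :]

x = rng.normal(size=(1, d_model))
full_out = feed_forward(x, W_in, W_out)                # full-size path
small_out = feed_forward(x, W_in_small, W_out_small)   # nested, cheaper path

print(full_out.shape, small_out.shape)  # both (1, 64): same interface, less compute
```

As the article notes, the Mix-n-Match configuration tools extend this kind of control further; the point of the sketch is only that the sub-model shares, rather than duplicates, the larger model’s parameters.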

Google also introduced Per-Layer Embeddings (PLE), a technique that offloads part of the model to CPUs, reducing reliance on high-speed accelerator memory. This enables improved model quality without increasing the VRAM requirements.

Competitive Benchmarks and Performance

Gemma 3n E4B achieved an LMArena score exceeding 1300, the first model under 10 billion parameters to do so. The company attributes this to architectural innovations and enhanced inference techniques, including KV Cache Sharing, which speeds up long-context processing by reusing attention layer data.

Benchmark tests show up to a twofold improvement in prefill latency over the previous Gemma 3 model.

In speech applications, the model supports on-device speech-to-text and speech translation via a Universal Speech Model-based encoder, while a new MobileNet-V5 vision module offers real-time video comprehension on hardware such as Google Pixel devices.

Broader Ecosystem Support and Developer Focus

Google emphasized the model’s compatibility with widely used developer tools and platforms, including Hugging Face Transformers, llama.cpp, Ollama, Docker, and Apple’s MLX framework. The company also launched a MatFormer Lab to help developers fine-tune sub-models using custom parameter configurations.

“From Hugging Face to MLX to NVIDIA NeMo, we’re focused on making Gemma accessible across the ecosystem,” the authors wrote.

As part of its community outreach, Google introduced the Gemma 3n Impact Challenge, a developer contest offering $150,000 in prizes for real-world applications built on the platform.

Industry Context

Gemma 3n reflects a broader trend in AI development: a shift from cloud-based inference to edge computing as hardware improves and developers seek greater control over performance, latency, and privacy. Major tech firms are increasingly competing not just on raw power, but on deployment flexibility.

Although models such as Meta’s LLaMA and Alibaba’s Qwen3 series have gained traction in the open source domain, Gemma 3n signals Google’s intent to dominate the mobile inference space by balancing performance with efficiency and integration depth.

Developers can access the models through Google AI Studio, Hugging Face, or Kaggle, and deploy them via Vertex AI, Cloud Run, and other infrastructure services.
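For developers starting from the Hugging Face route mentioned above, the sketch below shows what loading a Gemma checkpoint with the Transformers library typically looks like. The repository name and pipeline task are assumptions to verify against the official model card; access may require accepting Google’s license on the Hub, and the multimodal (image and audio) inputs described earlier use a different pipeline than this text-only example.

```python
# Minimal sketch: text-only generation with a Gemma checkpoint via Hugging Face
# Transformers. The model ID below is an assumption; confirm the exact repository
# name and recommended usage on the official model card before relying on it.
from transformers import pipeline

MODEL_ID = "google/gemma-3n-E2B-it"  # assumed repo name; may require license acceptance

generator = pipeline(
    "text-generation",   # multimodal (image/audio) input uses a different pipeline task
    model=MODEL_ID,
    device_map="auto",   # place weights on GPU/CPU automatically (needs the accelerate package)
)

result = generator(
    "Explain in one sentence why on-device inference matters for privacy.",
    max_new_tokens=60,
)
print(result[0]["generated_text"])
```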

For more information, visit the Google site.

About the Author



John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He’s been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he’s written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].









Gelson’s adopts Upshop’s AI-powered tech


Gelson’s Markets has gone all-in on artificial intelligence with plans to deploy Upshop’s total store platform to manage forecasting, ordering, inventory, and production planning, the Austin-based tech company announced Monday.

Gelson’s, which operates 26 upscale supermarkets and one convenience store, ReCharge by Gelson’s, in Southern California, said the partnership ensures that “every location is tuned into local demand dynamics.”

The SaaS company has positioned itself as a leader in AI-powered inventory management with a suite of tools that streamline the process, including direct store delivery (DSD) future-proofing, food traceability, and food waste management, among others.

“In a competitive grocery landscape, scale isn’t everything—intelligence is,” said Ryan Adams, president and CEO of Gelson’s Markets, in a press release. “With Upshop’s embedded platform and AI-driven capabilities, we’re empowering our stores to be hyper-responsive, efficient, and focused on the guest experience. It’s how Gelson’s can compete at the highest level.”

Implementing the new technology positions Gelson’s to compete in “a market dominated by national chains,” according to Upshop.

The grocery retailer’s adoption of the platform will kick off with a focus on “eliminating food waste and optimizing fresh food production—especially within foodservice,” with the goals of reducing shrink, streamlining production, and enhancing quality, according to Upshop.


The premium grocery chain’s announcement appears to build on its recent investment in technology. In January 2024, the grocer announced a partnership with Scottsdale, Ariz.-based Clear Demand, which specializes in so-called intelligent price management and optimization (IPMO). That partnership aims to manage retail pricing strategies for the grocer.
Gelson’s was sold by TPG Capital to Tokyo-based Pan Pacific International Holdings (PPIH) in 2021.





