Tools & Platforms
Destination AI | IT Pro

The AI revolution is here, and it’s accelerating fast. Don’t get left behind. In this video, TD SYNNEX reveals its Destination AI program—your strategic guide to navigating this rapidly expanding market. Learn how to transform your business and gain a competitive edge with our all-in-one program that offers everything from expert training and certification to ongoing sales support. Join us and harness the incredible power of AI to build a future-proof business.

Tools & Platforms
Making the case for a third AI technology stack

The debate about sovereignty across digital networks, systems, and applications is not new. As early as 1996, John Perry Barlow’s “A Declaration of the Independence of Cyberspace” challenged the notion of government control over the internet. China has advocated state control over the internet for more than a decade. More recently, U.S. Vice President J.D. Vance asserted in February that the U.S. “is the leader in AI, and [the Trump] administration plans to keep it that way.” He added that “[t]he U.S. possesses all components across the full AI stack, including advanced semiconductor design, frontier algorithms, and, of course, transformational applications.”
This ambition was formalized in July through America’s AI Action Plan, which forcefully endorses the idea of an American sovereign AI stack, espousing the “need to establish American AI—from our advanced semiconductors to our models to our applications—as the gold standard for AI worldwide and ensure our allies are building on American technology.” More recently, the administration took a 10% equity stake in Intel and expressed interest in “many more [investments] like it.”
But exerting “sovereignty” along the AI technology stack (see Table 1)—including everything from upstream rare earth minerals and critical materials to specialized high-precision chip-making, cloud infrastructure, data centers, and advanced model training—is a considerable undertaking. Each stage of the stack embodies the ingenuity and expertise of skilled workers and constitutes a strategic control point with major economic, political, and security implications. Today, the U.S. and China dominate the full AI stack, leaving the rest of the world with a difficult implicit choice: align with one version of the stack or sit on the fence between the two. Unsatisfied with this choice and fearful of an AI-induced digital divide, a growing number of countries want to develop their own “sovereign AI” by gaining control over some, or all, of the key components of the AI tech stack.
Initiatives to advance sovereign AI are already underway worldwide, including in the African Union, India, Brazil, and the European Union. Recently, these efforts have taken on greater urgency, attracting a broader range of respected supporters who have drafted the contours of a well-thought-out plan. Advocates argue that control over at least part of the AI stack is necessary not only for economic competitiveness, but also for cultural and linguistic preservation, national security, and the ability to shape global norms.
Some of the loudest cries for “sovereign” AI have come from Europe. The EU’s concerns are understandable given its strategic vulnerabilities. Europe accounts for just 10% of the global microchips market. Seventy-four percent of EU member states at least partially rely on U.S. cloud providers, whereas only 14% of EU countries use Chinese providers. Just 14% use EU providers, even as Europe has pushed its homegrown cloud-services alternative, Gaia-X, to little effect. Over 80% of Europe’s overall technology stack is imported. The EU is also facing persistent brain drain as AI startups and talent increasingly migrate to American, Canadian, and Chinese ecosystems in search of capital and scale.
European concerns over digital sovereignty extend long-running debates over privacy and government surveillance. The 2013 Snowden revelations reignited tensions over transatlantic data flows, leading to legal challenges that ultimately invalidated both the original Safe Harbor agreement and its successor, Privacy Shield. These concerns were further heightened by the 2018 U.S. “Clarifying Lawful Overseas Use of Data Act” (CLOUD Act), which grants U.S. law enforcement agencies the legal authority to compel U.S. providers to provide access to data stored on their servers even if those servers are located abroad. While the European Commission (EC) was somewhat reassured by institutional responses like the Privacy and Civil Liberties Oversight Board (PCLOB), the board’s credibility has been significantly weakened by the Trump administration. Parallel to these concerns, the EU has built out a more assertive digital rulemaking agenda. The EC expanded its regulatory capacity with legislation including the Digital Services Act (DSA), the Digital Markets Act (DMA), and the AI Act and its Code of Practice, as well as enforcement actions targeting dominant U.S. technology firms. These efforts reflect many EU policymakers’ broader ambitions to shape the global digital rulebook and reduce strategic dependencies on foreign providers.
Still, for many in Europe, the push for a sovereign AI stack only became a top priority in 2025, following Vance’s speech and the changes in U.S. foreign and trade policy, including the Trump administration’s tightened semiconductor export controls, public threats to withdraw from NATO, and a more assertive posture on international technology regulations. These shifts have raised concerns about overdependence on the U.S. AI stack, which could be abruptly cut off or rapidly altered by U.S. political dynamics. Axel Voss, a German member of the European Parliament and a leading voice on data governance and AI, has stated that “we do not have a reliable U.S. partner any longer” and that Europe should develop its own “sovereign AI and secure cloud.” As Cristina Caffarra, a leading proponent of European AI sovereignty, puts it: “If our roads, water, our electricity, our trains and our airports were largely in foreign hands, we would find that unacceptable.”
A global rationale for a third AI stack
Beyond sovereignty, there is a strong global rationale for Europe charting the course for a “third AI technology stack.” It would diversify the market and stoke competition beyond the current U.S. and Chinese geographic segments, increase technical and values-based innovation, and provide countries with an alternative aligned with democratic norms and the product features consumers want, including transparency, trustworthiness, and accountability. In this sense, a European-led AI stack could differentiate itself by raising the bar on data governance policies, monitoring and reporting standards, and environmental impact.
Currently, the geopolitical landscape is often seen as dominated by two players. The United States’ technology firms hold early market dominance, and the country is deeply integrated into global economic systems, reinforced by leadership in organizations like the G7 and the Organization for Economic Cooperation and Development (OECD). China promotes its own infrastructure through programs like the Digital Silk Road and exerts geopolitical influence via BRICS and its own Global AI Governance Action Plan. A more competitive EU in the global AI industry could establish a “third path forward” rooted in democratic values and fundamental rights. While this aspiration makes good rhetoric, is it realistic?
Realistic or rhetoric?
In short, the answer is no: Maximalist visions of AI sovereignty are not realistic—not for Europe, and not for any country or region, including the United States. Despite Vance’s assertion, even the U.S. does not have complete control over the whole stack: The Taiwan Semiconductor Manufacturing Company (TSMC) produces nearly all of Nvidia’s chips. In turn, TSMC depends on Dutch firm ASML for the advanced extreme ultraviolet (EUV) lithography machines needed to make AI graphics processing unit (GPU) chips. TSMC owned more than half of the world’s EUV machines as of the end of 2023, and ASML is the exclusive supplier. These machines integrate a range of technologies including German optical systems and tin sourced globally. Throughout the AI stack, foundational technologies rely on rare metals and materials with limited sources in mines around the world.
This intricate global technology interdependence reflects decades of accumulated expertise and specialization, yielding comparative advantages that cannot be easily replicated, even in the medium term, despite U.S. efforts to “restore American semiconductor manufacturing” through policies such as America’s AI Action Plan and the CHIPS and Science Act, which invest in semiconductor factories and streamline permitting. In addition to its weakened position in digital technologies, Europe also faces what former Italian Prime Minister Mario Draghi called an “innovation gap.” EU countries must manage the costly political imperatives of remilitarization, as well as ballooning social welfare costs and budget deficits.
Developing a European-led third AI stack: confronting inconvenient truths
These pressures have forced a pragmatic shift. Even the most ardent proponents of a European-led AI stack, or a “EuroStack,” have backed off from complete, absolute sovereignty to “creat[ing] some space for European technology” and clarifying that this vision “is not about closing the EU off from the world — quite the opposite. It is about … fostering trusted international partnerships.” Politicians like European Parliamentarian Eva Maydell have gone further, telling Europeans to “sober up.”
A more realistic strategy is for the EU to control the layers of the stack where it has a comparative advantage. This would give it enough leverage to achieve strategic interdependence and secure a seat at the table. Akin to a security pact, strategic interdependence allows innovation and competition to thrive while collectively ensuring all members’ security. The EU could lead the development of a third AI stack, co-built through partnerships with “like-minded” or “third-place” countries such as Brazil, Canada, India, Japan, Kenya, Korea, Nigeria, the United Arab Emirates (UAE), and the United Kingdom, all of which have a similar strategic interest in creating a third stack more independent of China and the U.S. and have cutting-edge expertise along segments of the AI stack. Already, EuroStack proponents recognize India’s Digital Public Infrastructure as a model. Korea’s Samsung had the highest global revenue for semiconductors in 2024 and could carve out a significant niche in the market with its Mach-1 inference chips, which appear to be more power efficient than the traditional high-bandwidth memory used in Nvidia chips. Japan’s Canon and Nikon are developing nanoimprint and argon fluoride lithography that could replace EUV machines. And the U.K. is widely recognized as a leader in AI science, research, and startup innovation. Add these countries to Europe’s domestic capabilities and the contours of a credible third AI stack emerge.
While Europe already has well-cultivated ties with some of these partners, it needs to double down on developing these connections into true alliances and position itself at the epicenter of this coalition. While proponents of a EuroStack acknowledge: “…cooperation should be sought with third-party states which share common goals and may also have privileged access to certain inputs…” and “Europe can play a major role at the centre of a network of other countries of the ‘Global Majority,’” details are not provided on how to accomplish this non-trivial task. Which are the countries? How will they be organized? Why should they align with Europe instead of countries with proven AI capability, like China or the United States? These are difficult questions that need to be addressed for a third AI stack to be viable.
A European-led third AI stack that engages a coalition of countries—ideally including the United States—would be a truly positive global development, providing market diversity and competition and reinforcing democratic digital norms. To build such a coalition, Europe must leverage its existing strengths beyond diplomacy.
Europe remains home to world-class AI and science institutes and universities, which increasingly attract foreign talent—particularly as U.S. science budgets are cut and scrutiny of foreign students ramps up. That said, these institutions often remain siloed from the world of policy and business. Too many European universities operate as “ivory towers,” stuck in bureaucratic public administrations misaligned with public policy or business interests. This needs to change if Europe is to reverse the brain drain to any meaningful degree.
The same disconnect affects startups. Europe has no shortage of innovative startups and entrepreneurial leaders, but typically they are swallowed up by U.S. Big Tech before reaching scale. Why is this? It is not because they prefer the U.S. way of life or values, but because the U.S. ecosystem offers easy access to capital, essential complementary resources, and a vast integrated market. It is a one-stop shop.
Europe, by contrast, remains fragmented. Despite two decades of digital single-market efforts, each country protects its national telecom providers, and each country has its own data protection authorities and intellectual property entities. It is time for Europe to confront its “inconvenient truths.” The lack of integration limits the EU’s scale and impedes AI competitiveness. Pushing back on entrenched, politically powerful incumbents is difficult but necessary.
To confront this dynamic, mainstream European industry must play a larger role. Sectors such as automotive, finance, insurance, and luxury goods depend on AI to remain globally competitive and need to support this initiative. To the credit of third stack proponents, they recognize this need and have garnered the support of many leading industrial names. For this to be effective, it needs to go beyond political declarations arguing for public expenditures and guard against “sovereignty washing,” where corporate interests merely co-opt the sovereignty agenda to secure short-term subsidies and political influence. A durable third stack will require sustained private capital, something Europe’s venture ecosystem still lacks in depth and breadth.
Support needs to manifest itself in real financial commitments and action by these firms. Initiatives such as the private investment in “AI Gigafactories” through the InvestAI program, which seeks €20 billion for five factories, and “Buy European” procurement can help, but they are not substitutes for private capital willing to take risks at scale. European AI stack proponents are targeting an investment of €300 billion over 10 years, including a €10 billion European Sovereign Tech Fund. They seek “to liberate private initiative, not to rely on institutions and state bureaucracy.”
While this approaches the right magnitude of funding, the question remains whether it will be enough to close the gap and keep the EuroStack competitive in the near term. This spending is modest compared to the investment of global competitors. U.S. Big Tech (Apple, Amazon, Google, Meta, and Microsoft) collectively made over $1.5 trillion in revenue in 2024 alone and have plans to invest up to $320 billion in AI technologies in 2025. U.S. software companies invested €181 billion in R&D in 2023, about 10 times more than their EU counterparts. The gap is a chasm that will require massive investment to narrow.
Meanwhile, China is accelerating its AI investments through strategic subsidies, state-backed venture funds, public-private partnerships, and support for national champions. DeepSeek, a Chinese rival to companies like OpenAI and Anthropic, has benefited from substantial state support. China has invested across the entire AI stack, from chips to supercomputing to sovereign models. A third AI stack, if it is to succeed, must be viable not only as an alternative to a U.S.-only approach, but also as a counterweight to China’s expanding digital sphere.
Given the scale of the competition, developing a real alternative AI ecosystem to U.S. Big Tech or China’s model requires the coalition of countries involved in this effort to go beyond Europe and draw in powerhouses like Samsung, Nikon and Canon, Infosys and Tata, Arm Holdings, and Cohere, to name a few. A collective public-private effort is needed that extends beyond European businesses to a constellation of partner countries. Only then can sufficient funding be amassed.
Lastly, if Europe aspires to lead the development of a third AI stack, it will face a reality check on what it means to compete in the AI market with the U.S. and China. With real skin in the game, it will be harder to be too righteous. The world saw a glimpse of this in the final stage of the EU AI Act drama, as France pushed back on some of its provisions. Key elements of the act, such as the Code of Practice, have now been finalized, emerging stronger than many industry players had hoped and with sign-on from U.S. technology companies; the focus now shifts to implementation. European innovators must prove that they can create competitive products while adhering to the new regulatory regime. The U.S. AI Action Plan explicitly rejects what it calls “onerous regulation,” withdrawing prior rules on AI safety and ethics and removing references to climate, misinformation, and diversity from federal standards. While this creates room for Europe to offer a values-based alternative, such differentiation will only succeed if the resulting products and platforms remain competitive at scale.
Going global
The world would significantly benefit from a third AI stack that adheres to democratic principles and is distinct from both the Chinese state-driven and U.S. market-led models. The reality is that no one country or region by itself can achieve this in the medium term. The only viable path is a collective effort with strategic alliances, a shared governance framework, coordinated action, and real economic incentives for participation.
This collective effort should include the United States, and the stack would be strengthened by the U.S.’s dominant position across many of its elements. While some national officials may view a third stack as a threat, it is better understood as an opportunity. U.S. firms across the AI stack would benefit from an expanded market for AI systems. Nvidia and external experts estimate that sovereign AI spending could generate anywhere from $200 billion to $1 trillion in revenue for the company in the coming years. Moreover, it is in the U.S.’s geopolitical interest to offer democratic infrastructure alternatives to China’s Digital Silk Road, giving countries a genuine stake and meaningful role.
Vance stated in Paris that “America wants to partner with all of you, and we want to embark on the AI revolution before us with a spirit of openness and collaboration.” The recent U.S. AI Action Plan reiterates the desire to form an alliance, but one based on exporting the “full [U.S.] AI technology stack” to all countries willing to join. This is in stark contrast to European and other countries’ desire for more autonomy and seems to retreat from Vance’s offer to partner and collaborate. China, on the other hand, is reading the room: its “Global AI Governance Action Plan” promotes the idea to “jointly explore cutting-edge innovations in AI technology” and “promote technological cooperation.”
The U.S. should counter this and support a third AI stack as a genuine joint effort that strengthens alliances, reinforces democratic governance, reduces reliance on Chinese infrastructure, and extends AI’s benefits globally. Europe is well-positioned to lead this initiative with its diplomatic networks and scientific capacity, and the U.S. should encourage it, just as it encourages European investment in its own defense capabilities. While European diplomacy is impressive, it needs to be matched with nuts-and-bolts follow-up and a concrete implementation plan that is properly budgeted and funded. Too often in the past, well-intentioned political initiatives, like the Lisbon Agenda of 2000, which pledged to increase the R&D-to-GDP ratio from 2% to 3% by 2010, lacked follow-through. Twenty-five years later, Europe’s R&D intensity has risen only to 2.1%.
Administratively, it will be tempting to task the European Commission with standing this initiative up and creating new “institutional coordination capacity,” but its plate is already very full, and the effort would be subject to EC politics, which tend to favor a “spray and pray” approach as funds get dispersed across all the member countries.
Rather than trying to establish a new institution, the third AI stack should grow organically out of existing initiatives. One option is the Current AI initiative announced at the Paris AI Action Summit in February. While a good deliverable for the summit, the goal to develop “practical tools, global standards, and governance models” through its Open Auditing and Accountability Initiative lacks clear deadlines and publicly shared progress.
A more promising vehicle may be the Global Partnership on AI (GPAI), housed administratively in the OECD. With its multilateral foundation and broad member base of key democratic allies and partners, GPAI could build on the OECD AI Principles and G7+’s Hiroshima Code of Conduct to serve as the governance backbone for the third AI stack. The Hiroshima AI Process extends well beyond the G7, including more than 50 “friend” countries—many of them “third-place” nations—as well as the Partners’ Community, which brings in key technology companies. Coupled with the OECD’s longstanding multistakeholder model, involving civil society, organized labor, and the technical community, this networked global governance structure lays the groundwork to advance a third AI stack as a proof of concept. While ambitious, the window of opportunity is now for like-minded governments and partners to act; if they do not, the die may soon be cast.
Tools & Platforms
Why AI upskilling fails, and how tech leaders are fixing it | What IT Leaders Want, Ep. 11

That’s a great question. I think it’s important to realize that technology is constantly evolving. Upskilling isn’t a choice you have to make; it’s an imperative. Organizations must upskill, otherwise they’re getting left behind.

In terms of how Red Gate does that, I think one of the first principles we operate from is that we always try and hire curious folks, and that means people who have a thirst for learning. And you might wonder, how do you find such people, right?

And you know, that is hard. One of the simple filler questions we use is just to ask people: what’s the last book they read? What’s the last technology they played with? What makes them excited?

That can give you a great impression of whether someone has that curiosity and that mindset to learn and adapt. Another principle we try and put in place is that before you introduce a technology, you really need to understand the why of that technology.

You need to feel the problem that the technology is trying to solve. So, for example, if you’re trying to learn Kubernetes, a container orchestration framework, and you haven’t felt the problem that Kubernetes solves, it’s going to feel like an overcomplicated solution to a problem you haven’t got.

The way you can create that space for people is not to run workshops treating things in the abstract, but to give people a chance to play with that technology and run into those problems themselves, so they can discover those solutions and learn to put them into practice.

There are a few ways we try and do that. At Red Gate, we have this thing called 10% time, where we give up every Friday afternoon for people to embrace learning and development. And that might be through lightning talks.
It might be through trying to fix a particular customer issue in a new and novel way, or it might just be trying to get to grips with a new technology, with a toy application, a Slack bot that orders lunch for the team every Friday, something akin to that.
And the final way, which I think is really important for upskilling people, is to expose expert thinking. That’s really key: seeing the decision-making process in action.
And again, one of the things we’ve put in place, and it’s taken a long time to get this actually showing value, is architecture decision records.
So when people make changes to software at Red Gate, we ask them to fill in a short description of why they’re doing it, the options they considered, and why they chose the path that they chose.
I think we put this in about five years ago. Now we’ve got a library of almost 500 architecture decisions that detail why we did something, and sometimes a few years later, why we were wrong about that. And that’s brilliant.
It’s that organizational repository of knowledge that new starters can look in to understand why the decisions were made. They might be wrong. We’re still going to make wrong decisions. Everyone does, but at least you can see the thinking process underneath.
Tools & Platforms
Duke’s chief nurse exec sees pros and cons for AI in nursing
