Tools & Platforms
Ever-expanding AI continues to invade higher education – Tone Madison

Just months into the Trump administration, in a systematic process facilitated by the six Republican-appointed U.S. Supreme Court justices, democracy and the rule of law—as we have long understood these concepts—are eroding by the day. Yet capitalism is booming, less restrained than at any other time in recent memory, and tech firms are dominating portfolios. Such conditions have facilitated corporate America’s push—streamlined by President Trump—to insert AI into every square inch of the education system.
Shortly after Trump’s executive order on AI in education in April, which focused on K-12 schooling, a report titled The Blueprint For Action: Comprehensive AI Literacy For All was published. It took aim at the entire education system, with “support” from EDSAFE AI Alliance, aiEDU, and Data Science 4 Everyone. The constellation of corporate interests affiliated with these three coalitions alone is stunning.
But this is politics, of course. Like so many other corporate- and foundation-funded education-related initiatives, The Blueprint For Action was also supported by two university-based entities: the Global Science of Learning Education Network at the University of California, San Diego, and the Mary Lou Fulton College for Teaching and Learning Innovation at Arizona State University (ASU)—the latter of which educates roughly 75 percent of its students online and is home to the Center on Reinventing Public Education. The Blueprint For Action also lists over 30 separate entities as “contributors,” a list which includes corporate- and foundation-funded education groups, consulting and lobbying organizations, investors, and opaque education nonprofits.
Speaking at the ed tech Woodstock known as the ASU+GSV Summit in April, Secretary of Education Linda McMahon seemed to confuse AI with the steak sauce of my childhood, but it all makes sense to me now. The ’80s talking points are all back: the Big Bad Department of Education, a permanently failing education system, the wonders of private school choice and more tech in the schools, and corporate greed couched in terms of concern for students and our national competitiveness. I just hope we can use AI to, once and for all, win the international war for condiments.
Written primarily for educational administrators and reporters, The Blueprint For Action mentions the concept of “human flourishing” several times, a phrase that can also be found in the playbook for our current democratic dissolution, Project 2025 itself (the full title of which is Mandate for Leadership: The Conservative Promise). A brilliant stroke. Who doesn’t want our students—and all humans for that matter—to flourish? Those of us hung up on, for example, masked federal agents arresting immigrants on our streets, using alligators as prison guards, fictionalizing history, federal investigations of the President’s perceived enemies, and millions of Americans losing healthcare coverage and food assistance need to reconsider our views. Because the report affirms that the key rhetorical concept of human flourishing—”understood as a state of complete well-being, encompassing purpose, positive relationships, personal growth, and health”—is a “focus of the Trump administration.” Whew.
Given the University of Wisconsin (UW) System’s prioritization of all things tech, what I’ve named the AI-is-everything-in-education policy couldn’t have come at a better time. Recent media revelations about the UW’s profligate technology and consultant-related spending have called the System’s leaders’ priorities into question.
But the UW’s prioritizing of extravagant tech products and services over employees is hardly news. In the fall of 2023, when The Daily Cardinal published UW System President Jay Rothman’s smoking-gun email to chancellors outlining 16 “observations and takeaways” from a Chronicle of Higher Education report, the press—beginning with the Daily Cardinal itself—focused on takeaway #13, regarding campuses “shifting away from liberal arts programs to programs that are more career specific, particularly if the institution serves a large number of low income students.”
Yet the 16-point list should be read in its entirety. In addition to suggestions that campus leaders make “painful” (#2) and “difficult” (#3) budget cuts and decisions (both terms in scare quotes), Rothman also stressed the “need to invest in technology to ensure efficient operations (e.g., that the institution’s enrollment and tuition payment functions are effective)” (#7).
The corporate AI-is-everything-in-education policy comes to the rescue, providing cover for the UW to refocus the larger conversation where it wants—on the uncritical embrace of any and all technology over all else as it slowly and systematically dismantles our campuses.
A long-standing corporate tech priority for education—online schooling—has had numerous monikers in a decades-long attempt to expand in the K-12 and higher-education markets. The AI-is-everything-in-education policy—because it’s all about tech—would also be the next logical step in the permanent marketing plan for the UW’s online programs. When speaking to the legislature’s Joint Finance Committee in April, Rothman said that the UW System “was focused on making college more accessible and looking for ways to serve nontraditional students around emerging technologies like artificial intelligence.”
Given the recent closure of six two-year schools in the UW System, the Board of Regents’ authorization of tuition increases, and the official secrecy surrounding pending budget cuts at UW Madison, it’s clear that “accessible” does not mean affordable or geographically convenient. Rather, “accessibility” is yet more corporate-speak for marketing online education.
Indeed, with no expensive training or additional software, even I could read the AI-generated results of a Google search for the “benefits of online education,” a list on which “accessibility” is prominently featured.
Moreover, if I had a nickel for every time I’ve heard educational administrators and policymakers link their corporate-created priority-of-the-moment to the elusive “nontraditional” student, I could have already retired. Higher-education leaders, accepting the institution’s privatization as God-given and not an ongoing political choice, have told us that the “nontraditional student” is going to infuse our coffers with cash since about forever.
Ultimately, however, it’s not clear what Rothman was saying about AI and the education of nontraditional students. One interpretation is that the UW System is seeking to increasingly use AI to educate—at the very least—nontraditional students online. Another possibility is that the UW’s online programs should be increasingly used to educate students about AI. Or Rothman could have meant both: fewer actual humans teaching our students in an increasing number of online programs, the content of which disproportionately consists of AI-related information (whatever that would look like). Absent any clarification and given the UW’s embarrassing and increasingly expensive tech worship, I suspect President Rothman meant both.
In addition to the multitude of corporate interests aligned with the Trump administration, many other powerful actors are pushing the UW System in a tech-worshipping direction, including Wisconsin’s own Technology Council, created by state statute in 2001 as a non-profit corporation to “promote the development of high-technology businesses” in the state. Just weeks after technology interests helped deliver the White House to Trump, the Technology Council hosted an event about the future of higher education in the state, at which Rothman was featured on a panel alongside Wisconsin Technical College System President Layla Merrifield and Wisconsin Association of Independent Colleges and Universities President Eric Fulcomer.
The Technology Council describes itself as the “science and technology advisor to the Governor and the Legislature” and as an “independent, non-profit and non-partisan board.” “Independent” is a curious descriptor for the Council, however, which boasts numerous corporate sponsors, including several investment and law firms.
In 2002, the Technology Council published Vision 2020: A Model Wisconsin Economy, with “major funding” from Mason Wells Private Equity. The report, like the seemingly endless string of similar reports published by business interests ever since, falsely assumed that we can create more high-tech, high-wage jobs by educating more people and producing more sleek reports. As if more bachelor’s and advanced degrees, or certificates (another current corporate education fad), held by the population will mean fewer jobs in the occupations that dominate the labor market, such as home healthcare, warehouse work, retail, food service, and the like.
But that’s not how things work. That’s not how things can work. Nearly 70 percent of gross domestic product consists of personal consumption, and the vast majority of this consumption involves physical objects, not information, data, or knowledge. And decades ago, corporations decided to move the manufacturing of nearly everything outside of the U.S., to locations that pay workers pennies on the dollar. Technology, then, is largely a tool used to create and deliver stuff to eager consumers. Hence, the percentage of technology-focused jobs as a share of all employment has long been in the low single digits.
Maybe neither the Technology Council nor UW policymakers have high-speed internet access, so they’re unaware of the poor job prospects in the oversaturated tech sector. To give just a few recent examples, Microsoft is laying off three percent of its entire workforce just two years after it eliminated 10,000 “roles.” And it’s taking these steps despite “better-than-expected results, with $25.8 billion in quarterly net income, and an upbeat forecast in late April.” Amazon is also laying off employees in communications and sustainability, while the cybersecurity firm CrowdStrike also recently announced layoffs of five percent of its workforce.
It seems like all those high-paying, high-tech manufacturing jobs that never materialized in another Trump-backed Wisconsin fantasy, Foxconn, have also been deleted from policymakers’ memories. Foxconn was promoted as a boon for UW–Madison, and the Technology Council affirmed that the project “will help drive Wisconsin’s economy.” Yet in June of this year, the Foxconn fiasco was labeled by NBC Chicago a “mostly-abandoned stretch of four lane roads and barren fields.” I guess the “technologies that will place Wisconsin on the leading edge of an American revolution in manufacturing and health care” should be considered alongside highways jammed with self-driving cars, the 3-D printer takeover, and employee-less retail stores in the Futurist Bible, a faith-based volume consisting of perpetual predictions that can never be disproven. They just keep getting postponed.
Any discussion of the education sector’s tech fixation is not complete without mention of the seemingly endless and ever-increasing number of ed-tech firms, many of which are connected to The Blueprint For Action report. An honest discussion of the real economy—specifically, the disproportionately low-education, low-wage, non-technology-focused jobs that actually exist—runs afoul of the multitude of powerful interests that saturate our education information ecosystem.
Why, then, do we still take seriously corporate claims of an economy, or world for that matter, that can ever be dominated by technology-related activity and employment?
The AI-is-everything-in-education policy will also further the UW’s current emphasis on the T in STEM at the expense of most of the S, E, and M. The materials science and engineering program at UW-Milwaukee (UWM) recently learned this the hard way. According to the Milwaukee Journal Sentinel, UWM Chancellor Mark Mone defended eliminating the materials science program, and suggested that the university can “redeploy resources from the shuttered program to ‘growth areas,’ such as computer science and software development,” programs which have between 450 and 500 students.
Here, Chancellor Mone must be talking about programs with increasing numbers of students as opposed to fields with growing numbers of jobs in the real economy. Because there is absolutely zero evidence that technology jobs are increasing as a share of all jobs in the real economy. Zero.
But in the UW System, we don’t serve students. Like the Trump administration, we serve corporations. And today, tech corporations—and their conduit in the White House—run the show.
Many faculty in the UW System, with AFT-Wisconsin leading the way, have forcefully pointed out that education is about human relationships, a rather pedestrian observation if one talks to any student, parent, or teacher. But educational administrators, despite mostly being parents, former faculty, and—of course—students themselves, now exist in an environment completely dominated by technology interests. The current AI-obsessed education sector is yet another chapter in a decades-old tech supremacy narrative vis-à-vis education, one which continually replaces rational thought and empirical reality with corporate talking points repeated endlessly, as if to make them true through sheer repetition.
Because tech is always the priority in our corporate-dominated, consultant-driven UW System, it seems clear where its tumultuous path is headed. The UW will spend obscene sums of money on various platforms and AI consultants. AI training for faculty and staff will become omnipresent on our campuses, dictated by UW System leaders and policy.
Following Trump’s lead, there will be no public debate on prioritizing all things AI. Only administrators and UW System leaders will be involved in these far-reaching decisions, an increasing number of whom are now selected by Rothman without mandatory search committees because of a recent, extraordinarily significant yet under-appreciated Regents-backed policy. This campaign will be presented as necessary to, as Trump affirms, “demystif[y] this powerful technology,” or, in the language of The Blueprint for Action, provide “comprehensive AI literacy” for all the simple-minded, knuckle-dragging faculty who struggle to point and click.
While millions of dollars (as well as untold time, intellectual energy, and oxygen) are dumped into the AI-is-everything-in-education policy, UW campus administrators no longer have any meaningful discretion. They merely act as regional managers following austerity dictates from System headquarters in Madison, and will tell our campuses that newly vacant faculty and staff lines cannot be filled—except those in the corporate-preferred programs involving technology and, increasingly, healthcare (another major focus of the Technology Council).
The further narrowing of the UW’s educational offerings, built on the fake version of the economy promulgated largely by tech interests themselves, will continue unabated. Wisconsin students’ access to a broad range of fields at our 11 comprehensive universities—intentionally located in every corner of the state so as to provide the widest access possible, consistent with the Wisconsin Idea—will be further eroded. And let’s not forget our students’ access to in-person, human education is increasingly at risk as well.
But who needs well-informed, critical-thinking citizens prepared to work in the real economy and educated by human professors in subjects like history, politics, literature, foreign languages, the arts, and so forth, in an increasingly non-democratic country? Certainly not the Trump administration and its corporate backers in tech.
Making the case for a third AI technology stack

The debate about sovereignty across digital networks, systems, and applications is not new. As early as 1996, John Perry Barlow’s “A Declaration of the Independence of Cyberspace” challenged the notion of government control over the internet. China has advocated for the need for state control over the internet for more than a decade. More recently, U.S. Vice President J.D. Vance asserted in February that the U.S. “is the leader in AI, and [the Trump] administration plans to keep it that way.” He added that “[t]he U.S. possesses all components across the full AI stack, including advanced semiconductor design, frontier algorithms, and, of course, transformational applications.”
This ambition was formalized in July through America’s AI Action Plan, which forcefully endorses an idea of an American sovereign AI stack, espousing the “need to establish American AI—from our advanced semiconductors to our models to our applications—as the gold standard for AI worldwide and ensure our allies are building on American technology.” More recently, the administration took a 10% equity stake in Intel and expressed interest in “many more [investments] like it.”
But exerting “sovereignty” along the AI technology stack (see Table 1)—including everything from upstream rare earth minerals and critical materials to specialized high-precision chip-making, cloud infrastructure, data centers, and advanced model training—is a considerable undertaking. Each stage of the stack represents the ingenuity and expertise of skilled workers as well as strategic control points with major economic, political, and security implications. Today, the U.S. and China dominate the full AI stack, leaving the rest of the world with a difficult implicit choice: align with one version of the stack or sit on the fence between the two. Unsatisfied with this choice and fearful of an AI-induced digital divide, a growing number of countries want to develop their own “sovereign AI” by gaining control over some, or all, of the key components of the AI tech stack.
Initiatives to advance sovereign AI are already underway worldwide, including in the African Union, India, Brazil, and the European Union. Recently, these efforts have taken on greater urgency, attracting a wider number of respected supporters who have drafted the contours of a well-thought-out plan. Advocates argue control over at least part of the AI stack is necessary not only for economic competitiveness, but also for cultural and linguistic preservation, national security, and the ability to shape global norms.
Some of the loudest cries for “sovereign” AI have come from Europe. The EU’s concerns are understandable given its strategic vulnerabilities. Europe accounts for just 10% of the global microchips market. Some 74% of EU member states rely at least partially on U.S. cloud providers, while only 14% use Chinese providers and just 14% use EU providers, even as Europe has pushed its homegrown cloud services alternative, Gaia-X, to little effect. Over 80% of Europe’s overall technology stack is imported. The EU also faces persistent brain drain as AI startups and talent increasingly migrate to American, Canadian, and Chinese ecosystems in search of capital and scale.
European concerns over digital sovereignty continue long-running debates over privacy and government surveillance. The 2013 Snowden revelations reignited tensions over transatlantic data flows, leading to legal challenges that ultimately invalidated both the original Safe Harbor agreement and its successor, Privacy Shield. These concerns were further heightened by the 2018 U.S. “Clarifying Lawful Overseas Use of Data Act” (CLOUD Act), which grants U.S. law enforcement agencies the legal authority to compel U.S. providers to provide access to data stored on servers even when those servers are located abroad. While the European Commission (EC) was somewhat reassured by institutional responses like the Privacy and Civil Liberties Oversight Board (PCLOB), the board’s credibility has been significantly weakened under the Trump administration. Parallel to these concerns, the EU has built out a more assertive digital rulemaking agenda. The EC expanded its regulatory capacity with legislation including the Digital Services Act (DSA), Digital Markets Act (DMA), AI Act and Code of Practice, as well as enforcement actions targeting dominant U.S. technology firms. These efforts reflect many EU policymakers’ broader ambitions to shape the global digital rulebook and reduce strategic dependencies on foreign providers.
Still, for many in Europe, the push for a sovereign AI stack only moved to a top priority in 2025, following Vance’s speech and the changes in U.S. foreign and trade policy, including the Trump administration’s tightened semiconductor export controls, public threats to withdraw from NATO, and a more assertive posture on international technology regulations. These shifts have raised concerns about overdependence on the U.S. AI stack, which could be abruptly cut off or rapidly altered by U.S. political dynamics. Axel Voss, a German member of the European Parliament and a leading voice on data governance and AI, has stated that “we do not have a reliable U.S. partner any longer” and that Europe should develop its own “sovereign AI and secure cloud.” As Cristina Caffarra, a leading proponent of European AI sovereignty, puts it: “If our roads, water, our electricity, our trains and our airports were largely in foreign hands, we would find that unacceptable.”
A global rationale for a third AI stack
Beyond sovereignty, there is a strong global rationale for Europe charting the course for a “third AI technology stack.” It would diversify the market and stoke competition beyond the current U.S. and Chinese geographic segments, increase technical and values-based innovation, and provide countries with an alternative aligned with democratic norms and product features that consumers want, including transparency, trustworthiness, and accountability. In this sense, a European-led AI stack could differentiate itself by raising the bar on data governance policies, monitoring and reporting standards, and environmental impact.
Currently, the geopolitical landscape is often seen as dominated by two players. The United States holds early technology firm market dominance and is deeply integrated in global economic systems, reinforced by leadership in organizations like the G7 and the Organization for Economic Cooperation and Development (OECD). China promotes its own infrastructure through programs like the Digital Silk Road and exerts geopolitical influence via BRICS and its own Global AI Governance Action Plan. A more competitive EU in the global AI industry could establish a “third path forward” rooted in democratic values and fundamental rights. While this aspiration makes good rhetoric, is it realistic?
Realistic or rhetoric?
In short, the answer is no: Maximalist visions of AI sovereignty are not realistic—not for Europe, and not for any country or region, including the United States. Despite Vance’s assertion, even the U.S. does not have complete control over the whole stack: The Taiwan Semiconductor Manufacturing Company (TSMC) produces nearly all of Nvidia’s chips. In turn, TSMC depends on Dutch firm ASML for the advanced extreme ultraviolet (EUV) lithography machines needed to make AI graphics processing unit (GPU) chips. TSMC owned more than half of the world’s EUV machines as of the end of 2023, and ASML is the exclusive supplier. These machines integrate a range of technologies including German optical systems and tin sourced globally. Throughout the AI stack, foundational technologies rely on rare metals and materials with limited sources in mines around the world.
This intricate global technology interdependence reflects decades of accumulated expertise and specialization leading to comparative advantage which cannot be easily replicated, even in the medium term, despite U.S. efforts to “restore American semiconductor manufacturing” through policies such as America’s AI Action Plan and the CHIPS and Science Act that invest in semiconductor factories and streamline permitting. In addition to its weakened position in digital technologies, Europe also faces what former Italian Prime Minister Mario Draghi called an “innovation gap.” EU countries must manage the costly political imperatives of remilitarization, as well as ballooning social welfare costs and budget deficits.
Developing a European-led third AI stack: confronting inconvenient truths
These pressures have forced a pragmatic shift. Even the most ardent proponents of a European-led AI stack, or a “EuroStack,” have backed off from complete, absolute sovereignty to “creat[ing] some space for European technology” and clarifying that this vision “is not about closing the EU off from the world — quite the opposite. It is about … fostering trusted international partnerships.” Politicians like European Parliamentarian Eva Maydell have gone further, telling Europeans to “sober up.”
A more realistic strategy is for the EU to control layers of the stack where it has a comparative advantage. This would give it enough leverage to achieve strategic interdependence and secure a seat at the table. Akin to a security pact, strategic interdependence allows innovation to thrive and competition to exist, and collectively can ensure all members’ security. The EU could lead the development of a third AI stack, co-built through partnerships with “like-minded” or “third-place” countries such as Brazil, Canada, India, Japan, Kenya, Korea, Nigeria, the United Arab Emirates (UAE), and the United Kingdom, all of whom have a similar strategic interest in creating a third stack more independent of China and the U.S. and have cutting-edge expertise along segments of the AI stack. Already, EuroStack proponents recognize India’s Digital Public Infrastructure as a model. Korea’s Samsung had the highest global revenue for semiconductors in 2024 and could carve out a significant niche in the market through its Mach-1 inference chips, which appear to be more power efficient than the high-bandwidth memory used in traditional Nvidia chips. Japan’s Canon and Nikon are developing nanoimprint and argon fluoride (ArF) lithography that could replace EUV machines. And the U.K. is widely recognized as a leader in AI science, research, and startup innovation. Add these countries to Europe’s domestic capabilities and the contours of a credible third AI stack emerge.
While Europe already has well-cultivated ties with some of these partners, it needs to double down on developing these connections into true alliances and position itself at the epicenter of this coalition. While proponents of a EuroStack acknowledge: “…cooperation should be sought with third-party states which share common goals and may also have privileged access to certain inputs…” and “Europe can play a major role at the centre of a network of other countries of the ‘Global Majority,’” details are not provided on how to accomplish this non-trivial task. Which are the countries? How will they be organized? Why should they align with Europe instead of countries with proven AI capability, like China or the United States? These are difficult questions that need to be addressed for a third AI stack to be viable.
A European-led third AI stack that engages a coalition of countries—ideally including the United States—would be a truly positive global development, providing market diversity and competition and reinforcing democratic digital norms. To build such a coalition, Europe must leverage its existing strengths beyond diplomacy.
Europe remains home to world-class AI and science institutes and universities, which increasingly attract foreign talent—particularly as U.S. science budgets are cut and scrutiny of foreign students ramps up. That said, these institutions often remain siloed from the world of policy and business. Too many European universities operate as “ivory towers,” stuck in bureaucratic public administrations misaligned with public policy or business interests. This needs to change to achieve a reverse brain drain of any magnitude.
The same disconnect affects startups. Europe has no shortage of innovative startups and entrepreneurial leaders, but typically they are swallowed up by U.S. Big Tech before reaching scale. Why is this? It is not because they prefer the U.S. way of life or values, but because the U.S. ecosystem offers easy access to capital, essential complementary resources, and a vast integrated market. It is a one-stop shop.
Europe, by contrast, remains fragmented. Despite two decades of digital single-market efforts, each country protects its national telecom providers, and each country has its own data protection authorities and intellectual property entities. It is time for Europe to confront its “inconvenient truths.” The lack of integration limits the EU’s scale and impedes AI competitiveness. Pushing back on entrenched, politically powerful incumbents is difficult but necessary.
To confront this dynamic, mainstream European industry must play a larger role. Sectors such as automotive, finance, insurance, and luxury goods depend on AI to remain globally competitive and need to support this initiative. To the credit of third stack proponents, they recognize this need and have garnered the support of many leading industrial names. For this to be effective, it needs to go beyond political declarations arguing for public expenditures and guard against sovereignty washing, where corporate interests merely co-opt the sovereignty agenda to secure short-term subsidies and political influence. A durable third stack will require sustained private capital, something Europe’s venture ecosystem still lacks in depth and breadth.
Support needs to manifest itself in real financial commitments and action by these firms. Initiatives such as the private investment in “AI Gigafactories” through the InvestAI program, which seeks €20 billion for five factories, and “Buy European” procurement can help, but they are not substitutes for private capital willing to take risks at scale. European AI stack proponents are targeting an investment of €300 billion over 10 years, including a €10 billion European Sovereign Tech Fund. They seek “to liberate private initiative, not to rely on institutions and state bureaucracy.”
While this approaches the right magnitude of funding, the question remains whether it will be enough to close the gap and keep the EuroStack competitive in the near term. This spending is modest compared to the investment of global competitors. U.S. Big Tech (Apple, Amazon, Google, Meta, and Microsoft) collectively made over $1.5 trillion in revenue in 2024 alone and have plans to invest up to $320 billion on AI technologies in 2025. U.S. software companies invested €181 billion in R&D in 2023, about 10 times more than their EU counterparts. The gap is a chasm that will require massive investment to narrow.
Meanwhile, China is accelerating its AI investments through strategic subsidies, state-backed venture funds, public-private partnerships, and support for national champions. DeepSeek, a Chinese rival to companies like OpenAI and Anthropic, has benefited from substantial state support. China has invested across the entire AI stack, from chips to supercomputing to sovereign models. A third AI stack, if it is to succeed, must be viable not only as an alternative to a U.S.-only approach, but also as a counterweight to China’s expanding digital sphere.
Given the level of competition, to develop a real alternative AI ecosystem to U.S. Big Tech or China’s model, the coalition of countries involved in this effort has to go beyond Europe and draw in powerhouses like Samsung, Nikon and Canon, Infosys and Tata, Arm Holdings and Cohere, to name a few. A collective public-private effort is needed that extends beyond European businesses to a constellation of partner countries. Only then can sufficient funding be amassed.
Lastly, if Europe aspires to lead the development of a third AI stack, it will face a reality check on what it means to compete in the AI market with the U.S. and China. With real skin in the game, it will be harder to be too righteous. The world saw a glimpse of this in the final stage of the EU AI Act drama, as France pushed back on some of its provisions. Now that the EU AI Act is being implemented and key elements like the Code of Practice have been finalized, emerging stronger than many industry players had hoped and with sign-on from U.S. technology companies, the focus shifts to execution. European innovators must prove that they can create competitive products while adhering to the new regulatory regime. The U.S. AI Action Plan explicitly rejects what it calls “onerous regulation,” withdrawing prior rules on AI safety and ethics and removing references to climate, misinformation, and diversity from federal standards. While this creates room for Europe to offer a values-based alternative, such differentiation will only succeed if the resulting products and platforms remain competitive at scale.
Going global
The world would significantly benefit from a third AI stack that adheres to democratic principles and is distinct from both the Chinese state-driven and U.S. market-led models. The reality is that no one country or region by itself can achieve this in the medium term. The only viable path is a collective effort with strategic alliances, a shared governance framework, coordinated action, and real economic incentives for participation.
This collective effort should include the United States, and the stack would be strengthened by the U.S.’s dominant position across many elements of the AI stack. While some national officials may view a third stack as a threat, it is better understood as an opportunity. U.S. firms across the AI stack would benefit from an expanded market for AI systems. Nvidia and external experts estimate that sovereign AI spending could generate anywhere from $200 billion to $1 trillion in revenue for the company in the coming years. Moreover, it is in the U.S.’s geopolitical interest to offer democratic infrastructure alternatives to China’s Digital Silk Road, giving countries a genuine stake and meaningful role.
Vance stated in Paris that, “America wants to partner with all of you, and we want to embark on the AI revolution before us with a spirit of openness and collaboration.” The recent U.S. AI Action Plan reiterates the desire to form an alliance, but one based on exporting the “full [U.S.] AI technology stack” to all countries willing to join. This is in stark contrast to European and other countries’ desire for more autonomy, and it seems to retreat from Vance’s offer to partner and collaborate. China, on the other hand, is reading the room: its “Global AI Governance Action Plan” promotes efforts to “jointly explore cutting-edge innovations in AI technology” and “promote technological cooperation.”
The U.S. should counter this and support a third AI stack as a genuine joint effort that strengthens alliances, reinforces democratic governance, reduces reliance on Chinese infrastructure, and extends AI’s benefits globally. Europe is well-positioned to lead this initiative with its diplomatic networks and scientific capacity, and the U.S. should encourage it, as it would encourage European investment in its own defense capabilities. While European diplomacy is impressive, it needs to be matched with nuts-and-bolts follow-up and a concrete implementation plan that is properly budgeted and funded. Too often in the past, well-intentioned political initiatives, like the Lisbon Agenda of 2000, which pledged to increase the R&D-to-GDP ratio from 2% to 3% by 2010, lacked follow-through. Twenty-five years later, Europe’s R&D intensity has increased to only 2.1%.
Administratively, it will be tempting to task the European Commission with standing this initiative up and creating new “institutional coordination capacity,” but its plate is already very full, and the effort would be subject to EC politics, which tend to favor a “spray and pray” approach as funds get disbursed across all the member countries.
Rather than trying to establish a new institution, the third AI stack should grow organically out of existing initiatives. One option is the Current AI initiative announced at the Paris AI Action Summit in February. While a good deliverable for the summit, the goal to develop “practical tools, global standards, and governance models” through its Open Auditing and Accountability Initiative lacks clear deadlines and publicly shared progress.
A more promising vehicle may be the Global Partnership on AI (GPAI), housed administratively in the OECD. With its multilateral foundation and broad member base of key democratic allies and partners, GPAI could build on the OECD AI Principles and G7+’s Hiroshima Code of Conduct to serve as the governance backbone for the third AI stack. The Hiroshima AI Process extends well beyond the G7, including more than 50 “friend” countries—many of them “third-place” nations—as well as the Partners’ Community, which brings in key technology companies. Coupled with the OECD’s longstanding multistakeholder model, involving civil society, organized labor, and the technical community, this networked global governance structure lays the groundwork to advance a third AI stack as a proof of concept. While ambitious, the window of opportunity is now for like-minded governments and partners to act; if they do not, the die may soon be cast.
Tools & Platforms
Why AI upskilling fails, and how tech leaders are fixing it | What IT Leaders Want, Ep. 11

That’s a great question. I think it’s important to realize that technology is constantly evolving, so upskilling isn’t really a choice you have to make. It’s an imperative: organizations must upskill, otherwise they’re getting left behind.
In terms of how Red Gate does that, one of the first principles we operate from is that we always try to hire curious folks, people who have a thirst for learning. And you might wonder how you find such people, right?
And that is hard. One of the simple filter questions we use is just to ask people: what’s the last book they read? What’s the last technology they played with? What makes them excited?
That can give you a great impression of whether someone has the curiosity and the mindset to learn and adapt. Another principle we try to put in place: before you introduce a technology, you really need to understand the why of that technology.
You need to feel the problem that the technology is trying to solve. So for example, if you’re trying to learn Kubernetes, a container orchestration framework, and you haven’t felt the problem that Kubernetes solves, it’s going to feel like an overcomplicated solution to a problem you haven’t got.
The way you can create that space for people is not to run workshops that treat things in the abstract, but to give people a chance to play with the technology and run into those problems themselves, so they can discover the solutions and learn to put them into practice.
There are a few ways we try to do that. At Red Gate, we have this thing called 10% time, where we set aside every Friday afternoon for people to embrace learning and development. That might be through lightning talks.
It might be through trying to fix a particular customer issue in a new and novel way, or it might just be trying to get to grips with a new technology, with a toy application, a Slack bot that orders lunch for the team every Friday, something akin to that.
And the final way, which I think is really important for upskilling people, is to expose expert thinking. That’s really key: seeing the decision-making process in action.
And again, one of the things we’ve put in place, and it’s taken a long time to get this actually showing value, is architecture decision records.
So when people make changes to software at Red Gate, we ask them to fill in a short description of why they’re doing it, the options they considered, and why they chose the path that they chose.
I think we put this in about five years ago. Now we’ve got a library of almost 500 architecture decisions that detail why we did something, and sometimes a few years later, why we were wrong about that. And that’s brilliant.
It’s that organizational repository of knowledge that new starters can look into to understand why the decisions were made. They might be wrong. We’re still going to make wrong decisions, everyone does, but at least you can see the thinking process underneath.

Valerie Potter
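For readers unfamiliar with the practice described above, an architecture decision record is usually a short, structured document. A minimal sketch in the widely used Nygard-style format follows; the title, date, and details are invented for illustration and are not Red Gate’s actual records or template:

```markdown
# ADR 042: Store licensing data in PostgreSQL

## Status
Accepted (2023-04-12). Superseded records link forward to their replacements.

## Context
The licensing service needs transactional guarantees and ad hoc reporting.
The team already operates SQL Server, but per-instance licensing costs
make it a poor fit for many small service databases.

## Decision
Use PostgreSQL for the licensing service's datastore.

## Options considered
- SQL Server: familiar, but adds licensing cost per instance.
- DynamoDB-style document store: scales well, but weak for ad hoc reporting.
- PostgreSQL (chosen): transactional, free, well understood by the team.

## Consequences
- Team must learn PostgreSQL operational tooling (backups, tuning).
- Revisit if reporting load outgrows a single instance.
```

Kept in version control next to the code, records like this accumulate into exactly the kind of searchable decision library described in the interview.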