Tools & Platforms
Making the case for a third AI technology stack

The debate about sovereignty across digital networks, systems, and applications is not new. As early as 1996, John Perry Barlow’s “A Declaration of the Independence of Cyberspace” challenged the notion of government control over the internet. China has advocated for the need for state control over the internet for more than a decade. More recently, U.S. Vice President J.D. Vance asserted in February that the U.S. “is the leader in AI, and [the Trump] administration plans to keep it that way.” He added that “[t]he U.S. possesses all components across the full AI stack, including advanced semiconductor design, frontier algorithms, and, of course, transformational applications.”
This ambition was formalized in July through America’s AI Action Plan, which forcefully endorses an idea of an American sovereign AI stack, espousing the “need to establish American AI—from our advanced semiconductors to our models to our applications—as the gold standard for AI worldwide and ensure our allies are building on American technology.” More recently, the administration took a 10% equity stake in Intel and expressed interest in “many more [investments] like it.”
But exerting “sovereignty” along the AI technology stack (see Table 1)—including everything from upstream rare earth minerals and critical materials to specialized high-precision chip-making, cloud infrastructure, data centers, and advanced model training—is a considerable undertaking. Each stage of the stack represents the ingenuity and expertise of skilled workers as well as strategic control points with major economic, political, and security implications. Today, the U.S. and China dominate the full AI stack, leaving the rest of the world with a difficult implicit choice: align with one version of the stack or sit on the fence between the two. Unsatisfied with this choice and fearful of an AI-induced digital divide, a growing number of countries want to develop their own “sovereign AI” by gaining control over some, or all, of the key components of the AI tech stack.
Initiatives to advance sovereign AI are already underway worldwide, including in the African Union, India, Brazil, and the European Union. Recently, these efforts have taken on greater urgency, attracting a wider number of respected supporters who have drafted the contours of a well-thought-out plan. Advocates argue control over at least part of the AI stack is necessary not only for economic competitiveness, but also for cultural and linguistic preservation, national security, and the ability to shape global norms.
Some of the loudest cries for “sovereign” AI have come from Europe. The EU’s concerns are understandable given its strategic vulnerabilities. Europe accounts for just 10% of the global microchips market. Seventy-four percent of EU member states at least partially rely on U.S. cloud providers, whereas only 14% of EU countries use Chinese providers. Just 14% use EU providers even as Europe has pushed its homegrown, cloud services alternative, Gaia-X, to little effect. Over 80% of Europe’s overall technology stack is imported. The EU is also facing persistent brain drain as AI startups and talent increasingly migrate to American, Canadian, and Chinese ecosystems in search of capital and scale.
European concerns over digital sovereignty continue long-running debates over privacy and government surveillance. The 2013 Snowden revelations reignited tensions over transatlantic data flows, leading to legal challenges that ultimately invalidated both the original Safe Harbor agreement and its successor, Privacy Shield. These concerns were further heightened by the 2018 U.S. “Clarifying Lawful Overseas Use of Data Act” (CLOUD Act), which grants U.S. law enforcement agencies the legal authority to compel U.S. providers to provide access to the data stored on servers even if the servers are located abroad. While the European Commission (EC) was somewhat reassured by institutional responses like the Privacy and Civil Liberties Oversight Board (PCLOB), its credibility has been significantly weakened by the Trump administration. Parallel to these concerns, the EU has built out a more assertive digital rulemaking agenda. The EC expanded its regulatory capacity with legislation including the Digital Services Act (DSA), Digital Markets Act (DMA), AI Act and Code of Practice, as well as enforcement actions targeting dominant U.S. technology firms. These efforts reflect many EU policymakers’ broader ambitions to shape the global digital rulebook and reduce strategic dependencies on foreign providers.
Still, for many in Europe, the push for a sovereign AI stack only moved to a top priority in 2025, following Vance’s speech and the changes in U.S. foreign and trade policy, including the Trump administration’s tightened semiconductor export controls, public threats to withdraw from NATO, and a more assertive posture on international technology regulations. These shifts have raised concerns about overdependence on the U.S. AI stack, which could be abruptly cut off or rapidly altered by U.S. political dynamics. Axel Voss, a German member of the European Parliament and a leading voice on data governance and AI, has stated, “we do not have a reliable U.S. partner any longer” and that Europe should develop its own “sovereign AI and secure cloud.” A leading proponent of European AI sovereignty, Cristine Caffra, puts it: “If our roads, water, our electricity, our trains and our airports were largely in foreign hands, we would find that unacceptable.”
A global rationale for a third AI stack
Beyond sovereignty, there is a strong global rationale for Europe charting the course for a “third AI technology stack.” It would diversify and stoke market competition beyond the current geographic segments of the U.S. and China, increase technical and values-based innovation, and provide countries with an alternative aligned with democratic norms and product features that consumers want, including transparency, trustworthiness, and accountability. In this sense, a European-led AI Stack could differentiate itself by raising the bar on data governance policies, monitoring and reporting standards, and environmental impact.
Currently, the geopolitical landscape is often seen as dominated by two players. The United States holds early technology firm market dominance and is deeply integrated in global economic systems, reinforced by leadership in organizations like the G7 and the Organization for Economic Cooperation and Development (OECD). China promotes its own infrastructure through programs like the Digital Silk Road and exerts geopolitical influence via BRICS and its own Global AI Governance Action Plan. A more competitive EU in the global AI industry could establish a “third path forward” rooted in democratic values and fundamental rights. While this aspiration makes good rhetoric, is it realistic?
Realistic or rhetoric?
In short, the answer is no: Maximalist visions of AI sovereignty are not realistic—not for Europe, and not for any country or region, including the United States. Despite Vance’s assertion, even the U.S. does not have complete control over the whole stack: The Taiwan Semiconductor Manufacturing Company (TSMC) produces nearly all of Nvidia’s chips. In turn, TSMC depends on Dutch firm ASML for the advanced extreme ultraviolet (EUV) lithography machines needed to make AI graphics processing unit (GPU) chips. TSMC owned more than half of the world’s EUV machines as of the end of 2023, and ASML is the exclusive supplier. These machines integrate a range of technologies including German optical systems and tin sourced globally. Throughout the AI stack, foundational technologies rely on rare metals and materials with limited sources in mines around the world.
This intricate global technology interdependence reflects decades of accumulated expertise and specialization leading to comparative advantage which cannot be easily replicated, even in the medium term, despite U.S. efforts to “restore American semiconductor manufacturing” through policies such as America’s AI Action Plan and the CHIPS and Science Act that invest in semiconductor factories and streamline permitting. In addition to its weakened position in digital technologies, Europe also faces what former Italian Prime Minister Mario Draghi called an “innovation gap.” EU countries must manage the costly political imperatives of remilitarization, as well as ballooning social welfare costs and budget deficits.
Developing a European-led third AI stack: confronting inconvenient truths
These pressures have forced a pragmatic shift. Even the most ardent proponents of a European-led AI stack, or a “EuroStack,” have backed off from complete, absolute sovereignty to “creat[ing] some space for European technology” and clarifying that this vision “is not about closing the EU off from the world — quite the opposite. It is about … fostering trusted international partnerships.” Politicians like European Parliamentarian Eva Maydell have gone further, telling Europeans to “sober up.”
A more realistic strategy is for the EU to control layers of the stack where it has a comparative advantage. This would give it enough leverage to achieve strategic interdependence and secure a seat at the table. Akin to a security pact, strategic interdependence allows innovation to thrive and competition to exist and collectively can ensure all members’ security. The EU could lead the development of a third AI stack, co-built through partnerships with “like-minded” or “third-place” countries such as Brazil, Canada, India, Japan, Kenya, Korea, and Nigeria, the United Arab Emirates (UAE), and the United Kingdom, all of whom have a similar strategic interest in creating a third stack more independent of China and the U.S. and have cutting-edge expertise along segments of the AI stack. Already, EuroStack proponents recognize India’s Digital Public Infrastructure as a model. Korea’s Samsung had the highest global revenue for semiconductors in 2024 and could carve out a significant niche in the market through its Mach-1 inference chips that appear to be more power efficient than traditional High-Bandwidth Memory used in traditional Nvidia chips. Japan’s Canon and Nikon are developing nanoimprint and Argon-Fluoride lithography that could replace EUVs. And the U.K. is widely recognized as a leader in AI science, research, and startup innovation. Add these countries to Europe’s domestic capabilities and the contours of a credible third AI stack emerge.
While Europe already has well-cultivated ties with some of these partners, it needs to double down on developing these connections into true alliances and position itself at the epicenter of this coalition. While proponents of a EuroStack acknowledge: “…cooperation should be sought with third-party states which share common goals and may also have privileged access to certain inputs…” and “Europe can play a major role at the centre of a network of other countries of the ‘Global Majority,’” details are not provided on how to accomplish this non-trivial task. Which are the countries? How will they be organized? Why should they align with Europe instead of countries with proven AI capability, like China or the United States? These are difficult questions that need to be addressed for a third AI stack to be viable.
A European-led third AI stack that engages a coalition of countries—ideally including the United States—would be a truly positive global development, providing market diversity and competition and reinforcing democratic digital norms. To build such a coalition, Europe must leverage its existing strengths beyond diplomacy.
Europe remains home to world-class AI and science institutes and universities, which increasingly attract foreign talent—particularly as U.S. science budgets are cut and scrutiny of foreign students ramps up. This said, these institutions often remain siloed from the world of policy and business. Too many European universities operate as “ivory towers,” stuck in bureaucratic public administrations misaligned with public policy or business interests. This needs to change to achieve reverse brain-drain of any magnitude.
The same disconnect affects startups. Europe has no shortage of innovative startups and entrepreneurial leaders, but typically they are swallowed up by U.S. Big Tech before reaching scale. Why is this? It is not because they prefer the U.S. way of life or values, but because the U.S. ecosystem offers easy access to capital, essential complementary resources, and a vast integrated market. It is a one-stop shop.
Europe, by contrast, remains fragmented. Despite two decades of digital single-market efforts, each country protects its national telecom providers, and each country has its own data protection authorities and intellectual property entities. It is time for Europe to confront its “inconvenient truths.” The lack of integration limits the EU’s scale and impedes AI competitiveness. Pushing back on entrenched, politically powerful incumbents is difficult but necessary.
To confront this dynamic, mainstream European industry must play a larger role. Sectors, such as automotive, finance, insurance, and luxury goods, depend on AI to remain globally competitive and need to support this initiative. To the credit of third stack proponents, they recognize this need and have garnered the support of many leading industrial names. For this to be effective, it needs to go beyond political declarations arguing for public expenditures and guard against sovereignty washing, where corporate interests merely co-opt the sovereignty agenda to secure short-term subsidies and political influence. A durable third stack will require sustained private capital, something Europe’s venture ecosystem still lacks in depth and breadth.
Support needs to manifest itself in real financial commitments and action by these firms. Initiatives such as the private investment in “AI Gigafactories” through the InvestAI program, which seeks €20 billion for five factories, and “Buy European” procurement can help, but they are not substitutes for private capital willing to take risks at scale. European AI stack proponents are targeting an investment of €300 billion over 10 years, including a €10 billion European Sovereign Tech Fund. They seek “to liberate private initiative, not to rely on institutions and state bureaucracy.”
While this approaches the right magnitude of funding, the question remains whether it will be enough to close the gap and keep the EuroStack competitive in the near term. This spending is modest compared to the investment of global competitors. U.S. Big Tech (Apple, Amazon, Google, Meta, and Microsoft) collectively made over $1.5 trillion in revenue in 2024 alone and have plans to invest up to $320 billion on AI technologies in 2025. U.S. software companies invested €181 billion in R&D in 2023, about 10 times more than their EU counterparts. The gap is a chasm that will require massive investment to narrow.
Meanwhile, China is accelerating its AI investments through strategic subsidies, state-backed venture funds, public-private partnerships, and support for national champions. DeepSeek, a Chinese rival to companies like OpenAI and Anthropic, has benefited from substantial state support. China has invested across the entire AI stack, from chips to supercomputing to sovereign models. A third AI stack, if it is to succeed, must be viable not only as an alternative to a U.S.-only approach, but also as a counterweight to China’s expanding digital sphere.
Given the level of play, to develop a real AI alternative ecosystem to U.S. Big Tech or China’s model, the coalition of countries involved in this effort has to go beyond Europe and draw in powerhouses like Samsung, Nikon and Canon, Infosys and Tata, Arms Holding and Cohere AI, to name a few. A collective public-private effort is needed that extends beyond European businesses to a constellation of partner countries. Only then can sufficient funding be amassed.
Lastly, if Europe aspires to lead the development of a third AI stack, it will be a reality check on what it means to be in the AI market competing with the U.S and China. With real skin in the game, it will be more difficult to be too righteous. The world saw a glimpse of this in the final stage of the EU AI Act drama as France pushed back on some of its provisions. Now, as the EU AI Act is being implemented and key elements like the Code of Practice have been finalized, emerging stronger than many industry players had hoped and with sign-on from U.S. technology companies, the focus now shifts to implementation. European innovators must now prove that they can create competitive products while adhering to the new regulatory regime. The U.S. AI Action Plan explicitly rejects what it calls “onerous regulation,” withdrawing prior rules on AI safety and ethics, and removing references to climate, misinformation, and diversity from federal standards. While this creates room for Europe to offer a values-based alternative, such differentiation will only succeed if the resulting products and platforms remain competitive at scale.
Going global
The world would significantly benefit from a third AI stack that adheres to democratic principles and is distinct from both the Chinese state-driven and U.S. market-led models. The reality is that no one country or region by itself can achieve this in the medium term. The only viable path is a collective effort with strategic alliances, a shared governance framework, coordinated action, and real economic incentives for participation.
This collective effort should include the United States, and the stack would be strengthened from the U.S.’s dominant position across many elements of the AI stack. While some national officials may view a third stack as a threat, it is better understood as an opportunity. U.S. firms across the AI stack would benefit from an expanded market for AI systems. Nvidia and external experts estimate that sovereign AI spending could generate anywhere from $200 billion to $1 trillion in revenue for the company in the coming years. Moreover, it is in the U.S.’s geopolitical interest to offer democratic infrastructure alternatives to China’s Digital Silk Road, giving countries a genuine stake and meaningful role.
Vance stated in Paris that, “America wants to partner with all of you, and we want to embark on the AI revolution before us with a spirit of openness and collaboration.” The recent U.S. AI Action Plan reiterates the desire to form an alliance but one based on exporting the “full [U.S.] AI technology stack” to all countries willing to join the alliance. This is in stark contrast to European and other countries’ desire for more autonomy and seems to retreat from Vance’s offer to partner and collaborate. China on the other hand is reading the room, with its “Global AI Governance Action Plan” promoting the idea to “jointly explore cutting-edge innovations in AI technology” and “promote technological cooperation.”
The U.S. should counter this and support a third AI stack as a genuine joint effort that strengthens alliances, reinforces democratic governance, reduces reliance on Chinese infrastructure, and extends AI’s benefits globally. Europe is well-positioned to lead this initiative with its diplomatic networks and scientific capacity, and the U.S. should encourage it, as it would with investment in its own defense capabilities. While European diplomacy is impressive, it needs to be matched with nuts-and-bolts follow-up and a concrete implementation plan that is properly budgeted and funded. Too often in the past, well-intentioned political initiatives, like the Lisbon Agenda of 2000s, which pledged to increase the R&D to GDP ratio from 2% to 3% by 2010, lacked follow-through. Twenty-five years later, Europe’s R&D intensity has increased to 2.1%.
Administratively, it will be tempting to task the European Commission to stand this initiative up and create new “institutional coordination capacity,” but their plates are already very full, and it would be subject to EC politics which tend to favor a “spray and pray” approach as funds get dispersed across all the member countries.
Rather than trying to establish a new institution, the third AI stack should grow organically out of existing initiatives. One option is the Current AI initiative announced at the Paris AI Action Summit in February. While a good deliverable for the summit, the goal to develop “practical tools, global standards, and governance models” through its Open Auditing and Accountability Initiative lacks clear deadlines and publicly shared progress.
A more promising vehicle may be the Global Partnership on AI (GPAI), housed administratively in the OECD. With its multilateral foundation and broad member base of key democratic allies and partners, GPAI could build on the OECD AI Principles and G7+’s Hiroshima Code of Conduct to serve as the governance backbone for the third AI stack. The Hiroshima AI Process extends well beyond the G7, including more than 50 “friend” countries—many of them “third-place” nations—as well as the Partners’ Community, which brings in key technology companies. Coupled with the OECD’s longstanding multistakeholder model, involving civil society, organized labor, and the technical community, this networked global governance structure lays the groundwork to advance a third AI stack as a proof of concept. While ambitious, the window of opportunity is now for like-minded governments and partners to act; if they do not, the die may soon be cast.
Tools & Platforms
AI took your job — can retraining help? — Harvard Gazette

Many people worry that AI is going to take their job. But a recent survey conducted by the Federal Reserve Bank of New York found that rather than laying off workers, many AI-adopting firms are retraining their workforces to use the new technology. Yet there’s little research into whether existing job-training programs are helping workers successfully adapt to an evolving labor market.
A new working paper starts to fill that gap. A team of researchers, including doctoral candidate Karen Ni of the Harvard Kennedy School, analyzed worker outcomes after they participated in job-training programs through the U.S. government’s Workforce Innovation and Opportunity Act. Researchers looked at administrative earnings records spanning the quarters before and after workers completed training. Then they analyzed workers’ earning when transitioning from or into an occupation that was highly “AI-exposed” — a term that refers to the extent of tasks that have the potential to be automated, both in the traditional computerization sense and through generative AI technology.
Across the board, the training programs demonstrated a positive impact, with displaced workers seeing increased earnings after entering a new occupation. Still, those earnings were less for someone who targeted a high AI-exposed occupation than someone who targeted a low AI-exposed occupation.
In this edited conversation, Ni explains the role that job-training programs play as AI use is transforming the labor market.
With all the discussion around job displacement and AI, what led you to focus on retraining in particular?
When thinking about the disruptions that a new large-scale technology might have for the labor market, it’s important to understand whether it’s possible for us to help workers who might be displaced by these technologies to transition into other work. So we homed in on, OK, we know that some of these workers are being displaced. Now, what can job training services do for them? Can they improve their job prospects? Can they help them move up in terms of earnings? Is it possible to retrain some of these workers for highly AI-exposed roles?
We wanted to help document the transition and adaptability for these displaced workers, especially those who are lower income. Because then we can think about how we can support these workers, whether it be better investing in these kinds of workforce-development programs or training programs, or adapting those programs to the evolving labor market landscape.
“We wanted to help document the transition and adaptability for these displaced workers, especially those who are lower income.”
What can we learn by looking at data from government workforce development programs?
One of the big advantages of using these trainees is that it’s nationwide, and so it’s nationally representative. That allows us to take a broad look at trainees across the entire country and capture a fair bit of heterogeneity in terms of their occupations and backgrounds. For the large part, our sample captures displaced workers who tend to be lower income, making an average of $40,000 a year. Some are making big transitions from one occupation to a completely different one. We also see a fair number of people who end up going into the same types of jobs that they had before. We think those workers are likely trying to develop new skills or credentials that might be helpful to enter back into a similar occupation. Some of these people might be displaced from their occupation because of AI. But the job displacement in this sample could be for any reason, like a regional office shutting down.
Can you provide some examples of highAI-exposed careers versus low AI-exposed careers?
AI exposure refers to the extent of tasks within an occupation that could potentially be completed by a machine or a large language model. Among our sample of job trainees, some of the most common high AI-exposed occupations were customer service representatives, cashiers, office clerks. On the other end of the spectrum, the lowest AI-exposed workers tended to be manual laborers, such as movers, industrial truck drivers, or packagers.
AI retrainability by occupation
What were your main findings?
We first looked at the split before entering job training: if they were displaced from a low AI-exposed or high AI-exposed occupation. After training, we find pretty positive earnings returns across the board. However, workers who are coming from high AI-exposed jobs have, on average, 25 percent lower earnings returns after training compared to workers initially coming from low AI-exposed occupations.
Then we looked at the split after job training, if they were targeting high AI-exposed jobs or low AI-exposed jobs. If you break it down that way, we find that workers generally are better off targeting jobs that are lower AI-exposed compared to the workers who are targeting jobs that are more highly AI-exposed. Those who are targeting the high AI-exposed fields tend to face a penalty of 29 percent in terms of earnings, relative to workers who target more general skills training.
Are there any recommendations that displaced workers could take away from those findings?
I would cautiously say our findings seem to suggest that, for these AI-exposed workers going through job-training programs, going for jobs that are less AI-exposed tends to give them a better outcome. That said, the fact that we do see positive returns for all of these groups suggests that there’s probably other factors that need to be considered. For instance, what are the specific types of training that they’re receiving? What kinds of skills are they targeting? There’s an immense heterogeneity across the different job-training centers throughout the country, in terms of the quality, intensity, and even the types of occupations that they can offer services for. There’s a lot of potential for future work to consider how those factors might affect outcomes.
Also, in this case, the training program is predominantly serving displaced workers from lower parts of the income distribution. So I don’t think we can generalize across the board and say, “everyone should go do a job-training program.” We were focused on this specific population.
You also created an AI Retrainability Index to rank occupations that both prepare workers well for jobs that are more AI-exposed and also earn more than their past occupation. What did the index reveal about which occupations are most “retrainable”?
We wanted to have a way of measuring by occupation how retrainable workers are if they were to be displaced. Our index ranking shows that, depending on where they’re starting from, you might have more or less capability of being retrained for highly AI-exposed roles. The only three occupational categories that had a positive index value — meaning that we consider these to be occupations that are highly AI-retrainable — were legal, computation and mathematics, and arts, design, and media. So someone coming from a legal profession is more retrainable for high-paying, high AI-exposed roles than someone coming from, say, a customer service job.
Overall, we found that 25 to 40 percent of occupations are AI retrainable, which, to us, is surprisingly high. You might think that if someone is coming from a lower-wage job, it might be really hard to retrain them for a job that has more AI exposure. But what we found is that there may actually be a large potential for retraining.
Source link
Tools & Platforms
Check Point acquires AI security firm Lakera in push for enterprise AI protection

Check Point Software Technologies announced Monday it will acquire Lakera, a specialized artificial intelligence security platform, as entrenched cybersecurity companies continue to expand their offerings to match the generative AI boom.
The deal, expected to close in the fourth quarter of 2025, positions Check Point to offer what the company describes as an “end-to-end AI security solution.” Financial terms were not disclosed.
The acquisition reflects growing concerns about security risks as companies integrate large language models, generative AI, and autonomous agents into core business operations. These technologies introduce potential attack vectors including data exposure, model manipulation, and risks from multi-agent collaboration systems.
“AI is transforming every business process, but it also introduces new attack surfaces,” said Check Point CEO Nadav Zafrir. The company chose Lakera for its AI-native security approach and performance capabilities, he said.
Lakera, founded by former AI specialists from Google and Meta, operates out of both Zurich and San Francisco. The company’s platform provides real-time protection for AI applications, claiming detection rates above 98% with response times under 50 milliseconds and false positive rates below 0.5%.
The startup’s flagship products, Lakera Red and Lakera Guard, offer pre-deployment security assessments and runtime enforcement to protect AI models and applications. The platform supports more than 100 languages and serves Fortune 500 companies globally. The company also operates what it calls Gandalf, an adversarial AI network that has generated more than 80 million attack patterns to test AI defenses. This continuous testing approach helps the platform adapt to emerging threats.
David Haber, Lakera’s co-founder and CEO, said joining Check Point will accelerate the company’s global mission to protect AI applications with the speed and accuracy enterprises require.
Check Point already offers AI-related security through its GenAI Protect service and other AI-powered defenses for applications, cloud systems, and endpoints. The Lakera acquisition extends these capabilities to cover the full AI lifecycle, from models to data to autonomous agents.
Upon completion of the deal, Lakera will form the foundation of Check Point’s Global Center of Excellence for AI Security. The integration aims to accelerate AI security research and development across Check Point’s broader security platform.
The acquisition is another in a flurry of bigger cybersecurity companies moving to acquire AI-focused startups. Earlier this month, F5 acquired CalypsoAI, Cato Networks acquired Aim Security, and Varonis acquired SlashNext.
The deal remains subject to customary closing conditions.
Tools & Platforms
Commure to Embed Ambient AI into MEDITECH Expanse Now Mobile EHR

What You Should Know:
– Commure, a healthcare technology company, has announced the direct embedding of its Ambient technology within MEDITECH Expanse Now, the physician’s mobile application in the MEDITECH Expanse EHR platform.
– The collaboration empowers healthcare organizations using Expanse Now to streamline clinical documentation and reduce administrative burdens, which allows clinicians to focus more fully on patient care within their familiar mobile workflows. This solution is now available to early adopters, with general availability to follow.
A New Era for Clinical Documentation
Commure’s Ambient technology is designed to deliver real-time, AI-powered clinical documentation that fits naturally into the clinician’s workflow. By intelligently capturing and structuring patient-clinician conversations, the solution saves providers an average of 90 minutes per day. This helps reduce cognitive overload and enables clinicians to stay present with their patients.
Seamless Integration and Strategic Advantages
This integration is part of Commure’s comprehensive suite of ambient documentation solutions that address workforce shortages, inefficiencies, and administrative burdens. The Ambient Suite keeps clinicians in their workflow, supports customization and quality capture, and extends across various care settings, including ambulatory environments and the emergency department. Built with Commure’s revenue cycle expertise, the Ambient Suite enhances documentation quality by leveraging both clinical and financial insights.
The embedded offering in Expanse Now complements other previously released ambient mobile and web application options that also make use of deep bidirectional integration with MEDITECH Expanse. Commure works directly with clinicians and administrators to boost margins, reduce burdens, and improve patient engagement. The company integrates with over 60 EHRs and powers millions of encounters annually.Ian Shakil, Chief Strategy Officer of Commure, stated that integrating the company’s Ambient AI technology directly within MEDITECH Expanse Now is a significant step forward in their mission to transform healthcare organizations into “the most advanced, intelligent, and human-centered systems”. The integration uses gold-standard technology to securely exchange data and accurately upload the generated notes back into discrete sections of MEDITECH Expanse.
-
Business3 weeks ago
The Guardian view on Trump and the Fed: independence is no substitute for accountability | Editorial
-
Tools & Platforms1 month ago
Building Trust in Military AI Starts with Opening the Black Box – War on the Rocks
-
Ethics & Policy2 months ago
SDAIA Supports Saudi Arabia’s Leadership in Shaping Global AI Ethics, Policy, and Research – وكالة الأنباء السعودية
-
Events & Conferences4 months ago
Journey to 1000 models: Scaling Instagram’s recommendation system
-
Jobs & Careers3 months ago
Mumbai-based Perplexity Alternative Has 60k+ Users Without Funding
-
Podcasts & Talks2 months ago
Happy 4th of July! 🎆 Made with Veo 3 in Gemini
-
Education3 months ago
VEX Robotics launches AI-powered classroom robotics system
-
Education2 months ago
Macron says UK and France have duty to tackle illegal migration ‘with humanity, solidarity and firmness’ – UK politics live | Politics
-
Podcasts & Talks2 months ago
OpenAI 🤝 @teamganassi
-
Funding & Business3 months ago
Kayak and Expedia race to build AI travel agents that turn social posts into itineraries