Experts react: What Trump’s new AI Action Plan means for tech, energy, the economy, and more 


“An industrial revolution, an information revolution, and a renaissance—all at once.” That’s how the Trump administration describes artificial intelligence (AI) in its new “AI Action Plan.” Released on Wednesday, the plan calls for cutting regulations to spur AI innovation and adoption, speeding up the buildout of AI data centers, exporting AI “full technology stacks” to US allies and partners, and ridding AI systems of what the White House calls “ideological bias.” How does the plan’s approach to AI policy differ from past US policy? What impacts will it have on the US AI industry and global AI governance? What are the implications for energy and the global economy? Our experts share their human-generated responses to these burning AI questions below.  

Click to jump to an expert analysis:

Graham Brookie: A deliberative and thorough plan—but three questions arise about its implementation

Trey Herr: If the US is in an AI race, where is it going?

Trisha Ray: On international partnerships, the AI Action Plan is all sticks, few carrots

Nitansha Bansal: The plan is a step forward for the AI supply chain

Raul Brens Jr.: The US can’t lead the way on AI through dominance alone

Mark Scott: The US and EU see eye-to-eye on AI, up to a point

Ananya Kumar and Nitansha Bansal: The US plan may sound like those of the UK and EU—but the differences are critical

Esteban Ponce de León: The plan accelerates the tension between proprietary and open-source models

Joseph Webster: On energy, watch what the plan could do for the grid and batteries


A deliberative and thorough plan—but three questions arise about its implementation

We are in an era of increasing geopolitical competition, deepening interdependence, and rapid technological change. No single issue demonstrates the convergence of all three better than AI. The AI Action Plan released today reflects this reality. Throughout the first six months of the Trump administration, officials have run a thorough and deliberative policy process—one that White House officials say incorporated more than ten thousand public comments from various stakeholders, especially US industry. The resulting product provides a clear articulation of AI in terms of the tech stack that underpins it and an increasingly vast ecosystem of industry segments, stakeholders, applications, and implications.

The policy recommendations laid out in the action plan are well organized and draw connections between scientific, domestic, and international priorities. Despite the rhetoric, there is more continuity than might appear from the first Trump administration through the Biden administration to this action plan—especially in areas such as increasing investment in infrastructure and hardware fabrication, and outcompeting foreign adversaries in innovation and the human talent that underpins it. The AI Action Plan will continue to scale investment and growth in these areas. The key divergence is in governance and guardrails.

Three questions stand out regarding effective implementation of the action plan.

First, in an era of budget and staff cuts across the federal government, will there be enough government expertise and funding to realize much of the ambition of this plan? For example, cutting State Department staff focused on tech diplomacy or global norms could undercut parts of the international strategy. Budget cuts to the National Science Foundation could impact AI priorities from workforce to research and development.  

Second, how will the administration wield consolidated power, with frameworks that reward states it views as aligned and cut funding from states it sees as unaligned?

Third, beyond selling US technology, how will the United States not just compete against Chinese frameworks in global bodies, but also work collaboratively with allies and partners on AI norms? 

Given the pace of change, the United States’ success will depend on continuing to grow the AI ecosystem as a collective whole and on that ecosystem iterating faster to compete more effectively.

Graham Brookie is the Atlantic Council’s vice president for technology programs and strategy. 


If the US is in an AI race, where is it going?  

The arms race is a funny concept to apply to AI, and not just because the history of arms races is replete with countries bankrupting themselves trying to keep up with a perceived threat from abroad. The repeated emphasis on an AI “race” is still ambiguous on a crucial point—what are we racing toward?  

Consider this useful insight on arms racing in national security: “Over and over again, a promising new idea proved far more expensive than it first appeared would be the case; yet to halt midstream or refuse to try something new until its feasibility had been thoroughly tested meant handing over technical leadership to someone else.”    

Was this written about AI? No, it comes from historian William H. McNeill, writing about the British-German naval arms race at the turn of the twentieth century. The United Kingdom and Germany raced to build ever-bigger dreadnoughts in an attempt to win naval supremacy, based on the theory that the economic survival of seagoing countries would be determined by the ability to win a large, decisive naval battle. Industry played a key role in encouraging the competition and setting the terms of the debate, becoming increasingly disconnected from the needs of national security.

So, to take things back to the present, what are we racing toward when it comes to AI? The White House’s AI Action Plan does not resolve this question. The plan’s Pillar 1 offers a swath of policy ideas grounded more in treating AI as a normal technology. Pillar 2 is more narrowly focused on infrastructure but still thin on the details of implementation. Tasking the National Institute of Standards and Technology is a common refrain, and some of the previous administration’s policy priorities, such as the CHIPS Act and the Secure by Design program, have been essentially rebranded and relaunched. Pillar 3 calls for a renewed commitment to countering China in multilateral tech standards forums, a cruel irony given that the State Department office responsible for this work was shuttered in the wide-ranging layoffs announced earlier this month.

The national security of the United States and its allies is composed of more than the capability of a single cutting-edge technology. Without knowing where this race is going, it will be hard to say when we’ve won, or if it’s worth what we lose to get there.    

Trey Herr is senior director of the Cyber Statecraft Initiative (CSI), part of the Atlantic Council Technology Programs, and assistant professor of global security and policy at American University’s School of International Service.  


On international partnerships, the AI Action Plan is all sticks, few carrots 

The AI Action Plan’s strongest message is that the United States should meet, not curb, global demand for AI. To achieve this, the plan suggests a novel and ambitious approach: full-stack AI export packages through industry consortia. 

What is the AI stack? Most definitions include five layers: infrastructure, data, development, deployment, and application. Arguably, monitoring and governance is a critical sixth layer. US companies dominate components of different layers (e.g., chips, talent, cloud services, and models). But the United States’ ability to export full-stack AI solutions, the carrot in this scenario, is limited by a rather large stick: its broad export control regime, which includes the Foreign Direct Product Rule and the Export Administration Regulations.

Governance remains the layer on which the United States is weakest. The AI Action Plan does emphasize countering adversarial influence in international governance bodies, such as the Organisation for Economic Co-operation and Development, the Internet Corporation for Assigned Names and Numbers, the Group of Seven (G7), the Group of Twenty (G20), and the International Telecommunication Union. However, the plan undermines consensus-based AI governance efforts within these bodies, including with an apparent jibe at the G7 Code of Conduct. If it seeks real alignment with allies and partners, the White House must outline an affirmative vision for values-based global AI governance.

Trisha Ray is an associate director and resident fellow at the Atlantic Council’s GeoTech Center, part of the Atlantic Council Technology Programs. 


The plan is a step forward for the AI supply chain

The AI Action Plan’s focus on the full AI stack—from energy infrastructure, data centers, semiconductors, and the talent pipeline to acknowledging associated risks and cybersecurity concerns—is welcome. The plan adopts an optimistic view of open-source and open-weight AI models, and it builds in provisions to create a healthy innovation ecosystem for those models while strengthening access to compute—another positive policy realization on the part of the administration.

The administration appears to be cognizant that competitiveness in AI will not be achieved solely by onshoring the AI supply chain. Competitiveness in this ecosystem requires a multi-pronged strategy of translating domestic AI capabilities into national power faster, more efficiently, more effectively, and more economically than adversaries—driven by faster chips, smarter and more trustworthy models, a more resilient electricity grid, robust investment infrastructure, and collaboration with allies.

This emphasis on securing the full stack means that near-term policy will target not just innovation but also the location, sourcing, and trustworthiness of every component in the AI pipeline. The owners and users of AI supply chain components have much to look forward to. The new permitting reform could reshape where AI infrastructure is located; recognition of workforce and talent bottlenecks could lead to a renewed focus on skill development and training programs; and the emphasis on AI-related vulnerabilities in critical infrastructure could translate into more regular and robust information-sharing mechanisms and incident-response requirements for private sector executives.

In all, achieving AI competitiveness is an ambitious goal, and the plan sets a clear agenda for the government.

Nitansha Bansal is the assistant director of the Cyber Statecraft Initiative.


The US can’t lead the way on AI through dominance alone

The AI Action Plan makes one thing clear: the United States isn’t just trying to win the AI race—it’s trying to engineer the track unilaterally. With sweeping ambitions to export US-made chips, models, and standards, the plan signals a cutting-edge strategy to rally allies and counter China. But it also takes a big gamble. Rather than co-design AI governance with democratic allies and partners, it pushes a “buy American, trust American” model. This will likely ring hollow for countries across Europe and the Indo-Pacific that have invested heavily in building their own AI rules around transparency, climate action, and digital equity. 

There’s a lot to like in the plan’s push for infrastructure investment and workforce development, which is a necessary step toward building serious AI capacity. But its sidelining of critical safeguards and its dismissal of issues like misinformation, climate change, and diversity, equity, and inclusion continues to have a sandpaper effect on traditional partners and institutions that have invested heavily in aligning AI with public values. If US developers are pressured to walk away from those same principles, the alliance could fray and the social license to operate in these domains will inevitably suffer. 

The United States can lead the way—but not through dominance alone. An alliance is built on the stabilizing forces of trust, not tech stack supply chains or destabilizing attempts to force partners to follow one country’s standards. Building this trust will require working together to respond to the ways that AI shapes our societies, not just unilaterally fixating on its growth. 

Raul Brens Jr. is the director of the GeoTech Center. 


On energy, watch what the plan could do for the grid and batteries

Two energy elements in the AI Action Plan hold bipartisan promise: 

  1. Expanding the electricity grid. The action plan notes the United States should “explore solutions like advanced grid management technologies and upgrades to power lines that can increase the amount of electricity transmitted along existing routes.” In other words, advanced conductors, reconductoring, and dynamic line ratings (and more) are on the table. Both Republicans and Democrats likely agree that transmission and the grid received inadequate investment in the Biden years: The United States built only fifty-five miles of high-voltage lines in 2023, down from an average of 925 miles per year between 2015 and 2019. The University of Pennsylvania estimated that the Inflation Reduction Act’s energy provisions would cost $1.045 trillion from 2023 to 2032, but the bill included only $2.9 billion in direct funding for transmission.
  2. Funding “leapfrog” dual-use batteries. Next-generation battery chemistries, such as solid-state or lithium-sulfur, could enhance the capabilities of autonomous vehicles and other platforms requiring on-board inference. Virtually all autonomous passenger vehicles run on batteries, and the action plan mentions self-driving cars and logistics applications. Additionally, batteries are a critical military enabler: They are deployed in drones, electronic warfare systems, robots, diesel-electric submarines, directed energy weapons, and more. Given the bipartisan interest in autonomous vehicles and US military competition with Beijing, there may be scope for bipartisan agreement on funding “leapfrog” dual-use battery chemistries.

Joseph Webster is a senior fellow at the Atlantic Council’s Global Energy Center and the Indo-Pacific Security Initiative. 


The US and EU see eye-to-eye on AI, up to a point

Despite the ongoing transatlantic friction between Washington and Brussels, much of what was outlined by the White House aligns with what EU officials have announced in recent months. That includes efforts to reduce bureaucratic red tape to foster AI-enabled industries, the promotion of scientific research to outline a democracy-led approach to the emerging technology, and efforts to understand AI’s impact on the labor force and to upskill workers nationwide.

Yet where problems likely will arise is how Washington seeks to promote a “Make America Great Again” approach to the export of US AI technologies to allies and the wider world. Much of that focuses on prioritizing US interests, primarily against the rise of China and its indigenous AI industry, in multinational standards bodies and other global fora—at a time when the White House has significantly pulled back from previously bipartisan issues like the maintenance of an open and interoperable internet.

This dichotomy—where the United States and EU agree on separate domestic-focused AI industrial policy agendas but disagree on how those approaches are scaled internationally—will likely be a central pain point in the ongoing transatlantic relationship on technology. Finding a path forward between Washington and Brussels must now become a short-term priority at a time when both EU and US officials are threatening tariffs against each other.

Mark Scott is a senior resident fellow at the Digital Forensic Research Lab’s Democracy + Tech Initiative within the Atlantic Council Technology Programs.


The US plan may sound like those of the UK and EU—but the differences are critical

The new AI Action Plan—like its peers from the European Union (EU) and the United Kingdom—is focused on “winning the AI race” through regulatory actions to direct and promote innovation, new investments to create and advance access to crucial AI inputs, and frameworks for international engagement and leadership. Winning the AI race is, in effect, the top priority of all three AI plans, albeit pursued in different ways. While the EU’s AI Act aims to be first in creating regulatory guardrails, the US plan has a strong deregulation agenda. In a significant break from this administration’s other policy measures to ensure US dominance, the action plan moves beyond a purely domestic orientation to the international sphere, flexing the reach of traditional US notions of power. This includes international leadership in frontier technology research, development, and adoption, as well as in creating global governance standards. It is a testament to the scarcity, quality, and scale of the inputs needed for global AI dominance that even the Trump administration is thinking through its AI strategy in terms of global alignment.

Even as each jurisdiction, including the United States, seeks to position itself as the dominant player in the AI race, there is no common scoreboard for declaring a winner. Each player has devised an ambitious but distinct understanding of this “competition,” and each competition will play out through a unique combination of industrial, trade, investment, and regulatory policy tools. As the race unfolds in real time, the challenge for US policymakers is to create the rules of the game while playing it effectively. A broad range of stakeholders, including AI companies, investors, venture capitalists, safety institutes, and allied governments, seek clarity and stability. All of them will watch the implementation of the US plan closely to determine their next moves.

There are two encouraging signs in this action plan when it comes to strengthening US competitiveness:  

First, by prioritizing international diplomacy and security, the United States is positioning itself to influence the global AI playbook that will ultimately determine who reaps economic benefits from AI systems. Leading multilateral coordination on AI positions the United States to secure open markets for AI inputs, shape global adoption pathways, and protect its private sector from regulatory fragmentation and protectionism. 

Second, the plan creates a roadmap for ensuring that the United States and its allies assimilate AI capabilities faster than their adversaries. In this vein, the plan emphasizes the importance of coordinating with allies to implement and strengthen the enforcement of export controls.

Ananya Kumar is the deputy director for Future of Money at the GeoEconomics Center. 

Nitansha Bansal is the assistant director of the Cyber Statecraft Initiative. 


The plan accelerates the tension between proprietary and open-source models

The White House’s AI Action Plan explicitly frames model superiority as essential to US dominance, but this creates profound tensions within the US ecosystem itself. As better models attract more users—who, in turn, generate training data for future improvements—we may see a self-reinforcing concentration of power among a few firms. 

This dynamic creates opportunities for leading firms to set safety standards that elevate the entire industry. A clear example is Anthropic’s “race to the top,” in which competitive incentives are channeled directly into solving safety problems. When frontier labs adopt rigorous development protocols, market pressures force competitors to match or exceed those standards. However, a darker side of innovation may emerge through benchmark gaming, where the pressure to demonstrate superiority incentivizes optimizing for benchmarks rather than genuine capability, risking systems that excel at tests while underperforming in the real world.

Yet the AI Action Plan’s emphasis on open-source models highlights a more complex competitive landscape than market concentration alone suggests. Open-source strategies are not just defensive moves against domestic monopolization; they also represent offensive tactics in the global AI race, particularly as Chinese open-source models gain traction and threaten to establish alternative standards with millions of users worldwide. 

This dual-track competition between concentrated proprietary excellence and distributed open-source influence fundamentally redefines how firms must compete.  

Success now requires not only racing for capability supremacy but also strategically deciding what to keep proprietary and what to release in order to shape global standards. The plan’s call to “export American AI to allies and partners” through “full-stack deployment packages” suggests that the ultimate competitive advantage may lie not in the superiority of a single model, but in the ability to build dependent ecosystems where US AI becomes the essential infrastructure for global innovation. 

Esteban Ponce de León is a resident fellow at the DFRLab of the Atlantic Council. 



Image: An aerial view shows construction underway on a Project Stargate AI infrastructure site, a collaboration between three large tech companies – OpenAI, SoftBank, and Oracle – in Abilene, Texas, U.S., April 23, 2025. REUTERS/Daniel Cole.




AI engineers are being deployed as consultants and getting paid $900 per hour


AI engineers are being paid a premium to work as consultants to help large companies troubleshoot, adopt, and integrate AI with enterprise data—something traditional consultants may not be able to do.

PromptQL, an enterprise AI platform created by San Francisco-based developer tooling company Hasura, is doling out $900-per-hour wages to its engineers tasked with building and deploying AI agents to analyze internal company data using large language models (LLMs).

The price point reflects the “intuition” and technical skills needed to keep pace with a rapidly changing technology, Tanmai Gopal, PromptQL’s cofounder and CEO, told Fortune.

Gopal said the company’s hourly rate for AI engineers working as consultants is “aligned with the going rate that you would see for AI engineers,” but that “it feels like we should be increasing that price even more,” as customers aren’t pushing back on the price PromptQL sets.

“MBA types… are very strategic thinkers, and they’re smart people, but they don’t have an intuition for what AI can do,” Gopal said.

Gopal declined to disclose any customers that have used PromptQL to integrate AI into their businesses, but said the list includes “the largest networking company” as well as top fast food, e-commerce, grocery and food delivery tech companies, and “one of the largest B2B companies.”

Oana Iordăchescu, founder of Deep Tech Recruitment, a boutique agency focused on AI, quantum, and frontier tech talent, told Fortune that enterprises and startups are competing for senior AI engineers at “unprecedented rates,” which is leading to wage inflation.

Iordăchescu said the wages are priced “far above even Big Four consulting partners,” who often make around $400 to $600 per hour.

“Traditional management consultants can design AI strategies, but most lack the hands-on technical expertise to debug models, build pipelines, or integrate systems into legacy infrastructure,” Iordăchescu said. “AI engineers working as consultants bridge that gap. They don’t just advise, they execute.”

AI consultant Rob Howard told Fortune he wasn’t surprised at “mind-blowing numbers” like a $900-per-hour wage for AI consulting work, as he’s seen a price premium on projects that have an AI component while companies rush to adopt it into their businesses.

Howard, who is also the CEO of Innovating with AI, a program that teaches people to become AI consultants in their own right, said some of his students have sold AI trainings or two-day boot camps that net out to $400 or $500 per hour.

“The pricing for this is high in general across the market, because it’s in demand and new and relatively rare to find, you know, people who are qualified to do it,” Howard said.

A recent report published by MIT’s NANDA initiative revealed that while generative AI holds promise for enterprises, 95% of initiatives to drive rapid revenue growth failed. Aditya Challapally, the lead author of the report and a research contributor to project NANDA at MIT, previously told Fortune that the AI pilot failures did not stem from the quality of the AI models but from the “learning gap” for both tools and organizations.

“Some large companies’ pilots and younger startups are really excelling with generative AI,” Challapally told Fortune earlier this month. Startups led by 19- or 20-year-olds, for example, “have seen revenues jump from zero to $20 million in a year,” he said. 

“It’s because they pick one pain point, execute well, and partner smartly with companies who use their tools,” he added.

Jim Johnson, an AI consulting executive at AnswerRocket, told Fortune the $900-per-hour rate “makes perfect sense” when considering that companies have spent two years experimenting with AI and “have little to show for it.”

“Now the pressure’s on to demonstrate real progress, and they’re discovering there’s no easy button for enterprise AI,” Johnson said. “This premium won’t last forever, but right now companies are essentially buying insurance against joining that 95% failure statistic.”

Gopal said PromptQL’s business model of having AI engineers serve as both consultants and forward-deployed engineers (FDEs)—hybrid sales-and-engineering roles tasked with integrating AI solutions—is what makes its employees so valuable.

This new wave of AI engineer consultants is shaking up the consulting industry, Gopal said. But he sees his company as helping shift traditional consulting partnership expectations and culture. 

“The demand is there,” he said. “I think what makes it hard is that leaders, especially in some of the established companies… are kind of more used to the traditional style of consultants.” 

Gopal said the challenge for his company will be to “drive that leadership and education, and saying, ‘Folks, there is a new way of doing things.’”





ChatGPT causes outrage after it refuses to do one task for users


As the days go by, ChatGPT is becoming more and more of a useful tool for humans.

Whether it be asking for information on a subject, help in drafting an email, or even opinions on fashion choices, AI is becoming an essential part of some people’s lifestyles in 2025.

People are having genuine conversations with programmes like ChatGPT for advice on situations in life, and while it will answer almost anything you ask, there is one request it flatly refuses to carry out.

The popular AI chatbot’s capabilities seem endless, but this newly discovered barrier has driven people on social media crazy, as they cannot understand why it says no to this one request.

To nobody’s surprise, this confusion online has stemmed from a viral TikTok video.

AI is heavily relied upon by some (Getty Stock Image)

All the user did was demand that ChatGPT count to a million – but how did the chatbot respond?

“I know you just want that counting, but the truth is counting all the way to a million would literally take days,” it replied.

He kept insisting, but the bot kept turning the request down, its voice saying that it ‘isn’t really practical’, ‘even for me’.

The hilarious exchange included the bot saying that it’s ‘not really possible’ either, and that it simply won’t be able to carry out the prompt for him.

Replies included the bot repeatedly saying that it ‘understood’ and ‘heard’ what he was saying, but his frustrations grew as the clip went on.
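For what it’s worth, the bot’s claim holds up to a quick sanity check. Here is a minimal back-of-the-envelope sketch in Python, assuming a rate of one number per second (a hypothetical figure; the clip gives no actual rate):

```python
# Rough sanity check of the bot's claim that counting to a million
# "would literally take days."
# Assumption (not from the video): one number per second, nonstop.
NUMBERS_TO_COUNT = 1_000_000
SECONDS_PER_DAY = 60 * 60 * 24  # 86,400 seconds in a day

days = NUMBERS_TO_COUNT / SECONDS_PER_DAY
print(f"about {days:.1f} days")  # prints: about 11.6 days
```

Even at that optimistic rate, the count would take more than eleven days of uninterrupted output, and larger numbers take longer to say, so ‘days’ is, if anything, an understatement.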

Many have now questioned why this might be the case, as one wrote in response to the user’s anger: “I don’t even use ChatGPT and I’ll say this is a win for them. AIs should not be enablers of abusive behaviour in their users.”

Another posted: “So AI does have limits?! Or maybe it’s just going through a rough day at the office. Too many GenZ are asking about Excel and saving Word documents.”

A third claimed: “I think it’s good that AI can identify and ignore stupid time-sink requests that serve no purpose.”

ChatGPT will not count to a million (Samuel Boivin/NurPhoto via Getty Images)

Others joked that the man would be the first to be targeted in an AI uprising, while some suggested that the programme might need a higher subscription plan to count to a million.

“What it really wanted to say is the amount of time you require is higher than your subscription,” a different user said.

As long as you don’t ask the bot to count to ridiculous numbers, it looks like it can help with more or less anything else.




AI is introducing new risks in biotechnology. It can undermine trust in science


The bioeconomy is entering a defining moment. Advances in biotechnology, artificial intelligence (AI) and global collaboration are opening new frontiers in health, agriculture and climate solutions. Within reach are safe and effective vaccines and therapeutics, developed within days of a new outbreak, precision diagnostics that can be deployed anywhere and bio-based materials that replace fossil fuels.

But alongside these breakthroughs lies a challenge: the very tools that accelerate discovery can also introduce new risks of accidental release or deliberate misuse of biological agents, technologies and knowledge. Left unchecked, these risks could undermine trust in science and slow progress at a time when the world most needs solutions.

The question is not whether biotechnology will reshape our societies: it already is. The question is whether we can build a bioeconomy that is responsibly safeguarded, inclusive and resilient.

The promise and the risk

AI is transforming biotechnology at remarkable speed. Machine learning models and biological design tools can identify promising vaccine candidates, design novel therapeutic molecules and optimize clinical trials, regulatory submissions and manufacturing processes – all in a fraction of the time it once took. These advances are essential for achieving ambitious goals such as the 100 Days Mission, the effort to compress vaccine development to within 100 days of a future pandemic emerging, enabled by AI-driven tools and technologies.

The stakes extend beyond security. Without equitable access to AI-driven tools, low- and middle-income countries risk falling behind in innovation and preparedness. Without distributed infrastructure, inclusive training datasets, skilled personnel and role models, the benefits of the bioeconomy could remain concentrated in a few regions, perpetuating inequities in health security, technological opportunity and scientific progress.

Building a culture of responsibility

Technology alone cannot solve these challenges. What is required is a culture of responsibility embedded across the entire innovation ecosystem, from scientists and startups to policymakers, funders and publishers.

This culture is beginning to take shape. Some research institutions are integrating biosecurity into operational planning and training. Community-led initiatives are emerging to embed biosafety and biosecurity awareness into everyday laboratory practices. International bodies are responding as well: in 2024, the World Health Organization adopted a resolution to strengthen laboratory biological risk management, underscoring the importance of safe and secure practices amid rapid scientific progress.

The Global South is leading the way in practice. Rwanda, for instance, responded rapidly to a Marburg virus outbreak in 2024 by integrating biosecurity into national health security strategies and collaborating with global partners. Such examples demonstrate that, with political will and the right systems in place, emerging innovation ecosystems can play leadership roles in protecting communities and enabling safe participation in the global bioeconomy.

Why inclusion and equity matter

Safeguarding the bioeconomy is not only about biosecurity; it is also about inclusion. If only a handful of countries shape the rules, control the infrastructure and train the talent, innovation will remain unevenly distributed and risks will multiply.

That is why expanding AI and biotechnology capacity globally is so urgent. Distributed cloud infrastructure, diverse training datasets and inclusive training programmes can help ensure that all regions are equipped to participate. Diverse perspectives from scientists, regulators and civil society, across the Global South and Global North, are essential to evaluating risks and identifying solutions that are fair, secure and effective.

Equity is also a matter of resilience. A pandemic that spreads quickly will not wait for producer countries to supply vaccines and treatments. A bioeconomy that works for all must empower all to respond.

The way forward

The World Economic Forum, alongside partners such as CEPI and IBBIS, continues to bring together leaders from science, industry and civil society to mobilize collective action on these issues. At this year’s BIO convention, for example, a group of senior health and biosecurity leaders from industry and civil society met to discuss the foundational importance of biosecurity and biosafety for life science, to future-proof preparedness and innovation ecosystems for tomorrow’s global bioeconomy and to achieve the 100 Days Mission.

The bioeconomy stands at a crossroads. On one path, innovation accelerates solutions to humanity’s greatest challenges: pandemics, climate change and food security. On the other path, the same innovations, unmanaged, could deepen inequities and expose society to new vulnerabilities.

The choice is ours. By embedding responsibility, biosecurity and inclusive governance into today’s breakthroughs, we can secure the foundation of tomorrow’s bioeconomy.

But responsibility cannot rest with a few institutions alone. Building a secure and equitable bioeconomy requires a shared commitment across regions, sectors and disciplines.

The bioeconomy’s potential is immense. Realizing it safely will depend on the choices made now – choices that determine not just how we innovate, but how we safeguard humanity’s future.

This article is republished from World Economic Forum under a Creative Commons license. Read the original article.


