Ethics & Policy
How the US and China Are Reshaping AI Geopolitics

Welcome to The AI Ethics Brief, a bi-weekly publication by the Montreal AI Ethics Institute. We publish every other Tuesday at 10 AM ET. Follow MAIEI on Bluesky and LinkedIn.
- We examine the near-simultaneous release of competing AI governance visions, the US AI Action Plan and China’s Global AI Governance Action Plan, and explore what this superpower bifurcation means for the rest of the world.
- From this 30,000-foot view, we take a concrete look at what the US AI Action Plan means in practice through environmental concerns and state rights, showing how it treats state autonomy as a barrier rather than a cornerstone of American federalism.
- We explore privacy failures through the lens of thousands of ChatGPT conversations that became publicly searchable on Google, exposing deeply personal mental health discussions before the feature was removed and revealing the gap between user expectations and platform design.
- We share two contrasting perspectives on the UN AI for Good Summit 2025: the troubling censorship of keynote speaker Dr. Abeba Birhane, alongside the concrete policy developments highlighted in our AI Policy Corner with GRAIL at Purdue University.
- Finally, we conclude our four-part AVID blog series on red teaming as critical thinking, introducing military-derived techniques like premortems and the Five Whys to help organizations identify AI system vulnerabilities through systematic analysis.
What connects these stories: The persistent gap between stated intentions and actual practice in AI governance, whether in international cooperation, environmental protection, user privacy, or ethical discourse, revealing how institutional priorities consistently override genuine accountability.
Brief #170 Banner Image Credit: Pink Office by Jamillah Knowles & Digit, featured in Better Images of AI, licensed under CC-BY 4.0.
On July 23, 2025, the Trump administration released Winning the Race: America’s AI Action Plan, the culmination of Executive Order 14179 signed on January 23, 2025. The Action Plan outlines over 90 Federal policy actions across three pillars: Accelerating Innovation, Building American AI Infrastructure, and Leading in International AI Diplomacy and Security. It was accompanied by three Executive Orders addressing “Woke AI” prevention in the federal government, accelerating data center permitting, and promoting AI technology exports.
Three days later, on July 26, 2025, China unveiled its Global AI Governance Action Plan at the World Artificial Intelligence Conference in Shanghai. The Governance Plan proposes a 13-point roadmap for multilateral cooperation and UN-anchored standards, part of Premier Li Qiang’s call for a “global AI cooperation organization” headquartered in Shanghai. Building on President Xi Jinping’s 2023 Global AI Governance Initiative, the Governance Plan reflects China’s ambition to shape global AI governance through infrastructure development, open-source collaboration, and international cooperation.
Taken together, these near-simultaneous releases signal a clear bifurcation:
- U.S. Deregulation-First: The Action Plan explicitly dismantles Biden-era AI safeguards, removing references to bias, climate impact, and diversity, and insists federal agencies procure only “truth-seeking” and “ideologically neutral” systems. As Tech Policy Press warns in two must-read articles on this topic, this is deregulation framed as innovation, privileging speed and scale over democratic safeguards while also risking real harms from politicized AI design. Potential pitfalls include biased hiring algorithms, opaque public-sector AI tools for policing, inequitable health-care and education models, and the sidelining of climate-informed AI research.
  Related: In Brief #168, we examined how AI-powered immigration enforcement has expanded predictive policing and surveillance capabilities, eroding civil liberties protections in the process.
- China’s Multilateralism: Beijing’s Governance Plan positions China as the champion of trustworthy AI. It leverages development aid, standards harmonization, and UN forums to build soft power and set global norms, while simultaneously cultivating “homegrown alternatives” to Western AI stacks. Most notable is the Model-Chip Ecosystem Innovation Alliance, announced July 28 at the conclusion of the World Artificial Intelligence Conference in Shanghai, which unites Chinese LLM developers and domestic AI chip manufacturers, reducing reliance on foreign technology and bolstering China’s standard-setting influence.
Global Implications: The View from Canada and Beyond
For Canada and other middle powers, this bifurcation presents both opportunities and risks. The US Action Plan’s emphasis on allied cooperation through full-stack American AI export packages suggests potential benefits for countries that align with American AI infrastructure. However, the administration’s rejection of international AI safety frameworks could leave allies exposed to regulatory arbitrage and a race to the bottom in AI standards.
Canada’s own AI governance approach, emphasizing human rights, transparency, and multilateral cooperation, sits between these two approaches. The question becomes: can smaller nations maintain sovereign AI governance standards when superpowers are racing to the bottom on safety while competing for technological dominance?
The Missing Middle Ground
What’s most concerning is the risk of regulatory capture in AI governance. As discussed in the April 2025 edition of the Harvard Law Review, regulatory capture occurs when agencies created to act in the public interest instead advance commercial or political concerns of special interest groups. Both plans appear designed to serve corporate interests through different mechanisms, while genuine public interest considerations are relegated to secondary status.
The rapid succession of these announcements suggests we’re entering a new phase of AI geopolitics where governance frameworks become tools of strategic competition rather than collaborative efforts to manage shared risks. For the international community, this raises fundamental questions about whether effective AI governance is possible in an increasingly multipolar world where technological sovereignty overshadows cooperative safety measures.
Looking Forward
As AI systems become more powerful and pervasive, the stakes of this governance competition extend far beyond national competitiveness. Climate models, medical diagnoses, financial systems, and democratic processes all depend on AI systems whose development is now explicitly caught between competing ideological and geopolitical frameworks.
The next few months will reveal whether this represents a temporary divergence or a permanent fracture in global AI governance. For Canada and other nations seeking to chart an independent course, the challenge is maintaining a commitment to evidence-based, rights-respecting AI governance while navigating pressure to choose sides in the technological competition between Washington DC and Beijing.
Recommended Reading: What to make of the Trump administration’s AI Action Plan (Brookings)
Please share your thoughts with the MAIEI community:
Are we witnessing the beginning of parallel AI ecosystems, or can international cooperation on AI governance survive this moment of strategic competition?
President Trump’s aim to “do everything possible to expedite construction of all major AI infrastructure projects” includes sacrificing national land and environmental protections, as revealed in the AI Action Plan. Notably, the Action Plan provides policy recommendations such as exempting data centers from National Environmental Policy Act (NEPA) provisions, constructing data centers on federal lands, accelerating environmental permitting, and considering Clean Water Act Section 404 permits to allow data centers to discharge dredged or fill materials into water sources. Moreover, the plan seeks to “consider a state’s AI regulatory climate when making funding decisions,” and decrease funding for states with heightened regulations.
📌 MAIEI’s Take and Why It Matters:
Such actions reflect a growing “win at all costs” mentality in the US-China tech arms race, rolling back Biden-era climate policies and exposing federal lands to environmental degradation as the Trump administration works to bypass NEPA protections. Data centers, which house the physical infrastructure for AI tools, have a significant environmental impact, especially on water availability and the electricity grid. While Texas faces severe droughts, data centers in the state are drawing millions of gallons of water, with 463 million gallons used between 2023 and 2024 alone.
Furthermore, as MAIEI reported in Brief #164, data centers are undermining energy stability in New Zealand and Spain. Although the AI Action Plan seeks to “Develop a grid to match the pace of AI innovation,” new projects to expand the grid often first require comprehensive multi-year impact studies. The electricity grid is already strained; adding new pressures while the current infrastructure remains insufficient will only deepen that strain.
In Brief #167, we argued that communities should exercise their autonomy to push back against the many problematic aspects of data center development projects. States like South Carolina have done precisely this, increasing electricity costs for data centers and other significant energy consumers. The state is also considering capping data center tax incentives to protect residents from spiking electricity costs. Despite these developments, the second-largest investment in South Carolina’s history, a $2.8 billion data center in Spartanburg County, was announced this spring. South Carolina’s example shows that community-driven pushback can safeguard resident interests without deterring headline-making investments.
Yet, Trump’s AI Action Plan will decrease funding to states with environmental and data center regulations deemed incompatible with rapid development, a chilling declaration in the wake of the One Big Beautiful Bill Act provision, which sought a 10-year moratorium on state AI regulation (the Senate ultimately removed this provision). Alongside the increased demand for locally destructive and state resource-taxing data centers, federal respect for state autonomy is dwindling.
By penalizing states for exercising stronger environmental or zoning rules, the AI Action Plan flips the usual respect for states’ rights on its head, treating local autonomy as a barrier rather than a cornerstone of American federalism.
In late July 2025, thousands of ChatGPT conversations began appearing in Google search results after users clicked the platform’s “Share” feature. Nearly 4,500 conversations became publicly indexed, including discussions about deeply personal details like addiction struggles, physical abuse experiences, and severe mental health issues.
The mechanism was straightforward: when users clicked “Share” to send conversations to friends or save URLs, ChatGPT created publicly accessible links that weren’t protected from Google’s indexing. Many users appeared unaware that their conversations would become publicly searchable, presumably thinking they were sharing with a small audience.
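The mechanics here are worth making explicit: a publicly reachable URL is indexable by default unless the server opts out, for instance via an `X-Robots-Tag: noindex` response header or a robots meta tag. The sketch below is a minimal, hypothetical share-link endpoint illustrating that opt-out; it is not OpenAI’s implementation, and the route, function, and data-store names are assumptions for illustration only.

```python
# Hypothetical sketch (not OpenAI's code): a share-link endpoint that tells
# search engines not to index the shared page.
from flask import Flask, Response

app = Flask(__name__)

# Stand-in for a datastore of shared conversations, keyed by share token.
SHARED_CONVERSATIONS = {"abc123": "Example shared conversation text."}

@app.route("/share/<token>")
def shared_conversation(token: str) -> Response:
    body = SHARED_CONVERSATIONS.get(token, "Not found.")
    resp = Response(body, mimetype="text/plain")
    # Without a directive like this, any crawler that discovers the URL may
    # index it; with it, compliant crawlers such as Googlebot will not.
    resp.headers["X-Robots-Tag"] = "noindex, nofollow"
    return resp

if __name__ == "__main__":
    app.run()
```

Making the opt-out the default for shared pages, rather than leaving them indexable, is exactly the kind of privacy-by-design choice the rest of this piece argues for.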
OpenAI’s Chief Information Security Officer, Dane Stuckey, quickly announced the removal of the feature after widespread criticism, describing it as “a short-lived experiment to help people discover useful conversations.” However, OpenAI maintains that users have to explicitly select an option to make chats visible in web searches.
The timing is particularly concerning given that nearly half of Americans in a survey from earlier this year say they’ve used large language models for psychological support in the past year, with 75% seeking help with anxiety and nearly 60% for depression.
📌 MAIEI’s Take and Why It Matters:
This incident exposes a fundamental misalignment between how users understand AI privacy and how these systems actually work. While OpenAI claims users had to explicitly select search visibility, the real issue is the cognitive gap between user intent and platform design.
When someone shares a ChatGPT conversation, their mental model involves targeted sharing, like forwarding a text message. The leap from “sharing with a friend” to “making globally searchable” represents a design failure that prioritizes data exposure over user expectations. As privacy scholar Carissa Veliz noted about the incident:
“As a privacy scholar, I’m very aware that that data is not private, but of course, ‘not private’ can mean many things, and that Google is logging in these extremely sensitive conversations is just astonishing.”
This follows a troubling pattern where convenience features mask significant privacy trade-offs. Similar issues emerged with Meta’s AI systems, which began sharing user queries in public feeds. As cybersecurity analyst Rachel Tobac observed:
“Many users aren’t fully grasping that these platforms have features that could unintentionally leak their most private questions, stories, and fears.”
This incident represents more than individual privacy violations. It signals the normalization of what journalist and writer Isabelle Castro describes as our data becoming “the digital gossip of our time,” where corporations and governments collect personal information and are “poised to use something that is ours for their gain,” just as the gossip industry did in Warren and Brandeis’s era.
When AI conversations about mental health, trauma, and personal struggles become searchable public records, it exposes the most intimate aspects of users’ lives without their informed consent. Both the OpenAI and Meta AI incidents reveal a fundamental breakdown in the expectation that personal conversations, even with AI systems, maintain some boundary between private reflection and public exposure.
OpenAI CEO Sam Altman recently warned that user conversations with ChatGPT could be subpoenaed and used in court, noting, “If someone confides their most personal issues to ChatGPT, and that ends up in legal proceedings, we could be compelled to hand that over.” These incidents demonstrate why privacy must be designed into AI systems from the ground up, rather than addressed after public outcry.
Building trustworthy AI requires fundamentally rethinking how we architect these platforms to respect user mental models of privacy and provide meaningful control over personal data. As we integrate AI into intimate aspects of our lives, robust privacy protections become essential for maintaining the trust these systems need to function effectively.
While the UN’s AI for Good Global Summit 2025, held from July 8 to 11 in Geneva, brought together over 11,000 participants to discuss AI governance and launched several new initiatives, a troubling incident of censorship revealed the gap between the summit’s stated mission and its actual treatment of critical voices. Dr. Abeba Birhane, a leading AI ethics researcher, was reportedly pressured by conference organizers at the International Telecommunication Union (ITU) to remove critical content from her keynote speech.
According to a blog post published by the Artificial Intelligence Accountability Lab (AIAL) at Trinity College Dublin, Dr. Birhane was forced to take down slides, remove any mention of “Gaza,” “Palestine,” or “Israel,” and change the word “genocide” to “war crimes,” until only a single slide calling for “No AI for War Crimes” remained.
The controversy intensified when organizers allegedly gave her an ultimatum just two hours before her scheduled talk. Despite the pressure, Dr. Birhane proceeded with a heavily modified version of her presentation, later publishing both the original and censored versions of her slides to highlight the extent of the censorship at what’s supposed to be a “social good” event.
The broader global trend towards authoritarianism, censorship, and the silencing of academics and journalists who stand up for fundamental rights, the rule of law, and justice is difficult enough to confront in the current climate. But for a summit that claims “AI for Good remains firmly aligned with the collective priorities of the international community” and that “[…] it is our responsibility to ensure that no one is left behind” to then censor an invited keynote speaker who advocates for confronting difficult issues and engaging in self-reflection is doubly disheartening.
📌 MAIEI’s Take and Why It Matters:
This incident exemplifies classic ethics washing, where organizations use “social good” branding like “AI for Good” to appear ethically motivated while simultaneously suppressing actual ethical critique. The irony is stark: a summit called “AI for Good” censoring a speaker who wanted to address how AI technologies are being used to cause harm.
As we covered in Brief #159, where we highlighted Seher Shafiq’s recap of RightsCon 2025, AI systems are being weaponized in conflicts worldwide. In Gaza, AI tools are being used with dubious targeting criteria, permitting up to 15-20 civilian deaths per “target,” enabled by tech giants like Google and Amazon through Project Nimbus. Similar patterns have emerged with AI-driven persecution of Uyghurs and Facebook’s AI-enabled role in actively promoting violence against Rohingyas.
This follows a familiar playbook across corporate social responsibility initiatives:
Tech Industry Examples:
Broader Corporate Ethics Washing:
- Greenwashing at climate summits while continuing harmful practices
- Diversity initiatives focused on optics rather than systemic change
- Corporate social responsibility programs that deflect from core business harm
The pattern reveals how “social good” or “ethics” becomes branding rather than genuine accountability. Organizations create ethics initiatives that provide legitimacy, control narratives, deflect criticism, and suppress dissent when real concerns threaten business interests.
As Asma Derja of the Ethical AI Alliance put it bluntly: “You cannot claim to lead conversations on AI for Good while censoring ethical critique.”
Did we miss anything? Let us know in the comments below.
This edition of our AI Policy Corner, produced in partnership with the Governance and Responsible AI Lab (GRAIL) at Purdue University, examines conference proceedings from the UN AI for Good Global Summit 2025. While our story Behind the ‘AI for Good’ Brand: When Industry Summits Censor Dissent highlights concerning censorship at the summit, our GRAIL partner Alexander Wilhelm attended the event and provides insight into the concrete policy developments that emerged. These include the launch of an AI Standards Exchange Database covering over 700 standards, the publication of the UN Activities on AI Report identifying 729 AI projects across 53 UN entities, and UNICC’s new AI Hub for training UN employees. These initiatives reflect the summit’s stated multistakeholder approach, even as questions remain about whose voices are truly welcomed in these “AI for good” conversations.
To dive deeper, read the full article here.
In the final installment of the AVID blog series, the authors bridge strategic and tactical red teaming by introducing critical thinking tools from military doctrine, specifically, premortems and the Five Whys technique, to identify organizational vulnerabilities before they manifest.
Drawing from the U.S. Army’s Red Team Handbook, they demonstrate how premortems can help teams anticipate AI system failures by assuming failure from the outset and working backward to identify potential causes, while the Five Whys helps uncover the root purposes and risks of AI initiatives through systematic questioning across different organizational perspectives. The piece emphasizes that tactical-level red teaming should be driven by concerns uncovered through strategic analysis, using technical tools like Garak, PyRIT, and Counterfit to investigate specific risks rather than conducting unfocused adversarial testing.
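As a rough illustration of how these exercises can be made systematic, the sketch below models a premortem and a Five Whys analysis as simple Python data structures; this is our own hypothetical example of the workflow the series describes, not code from the AVID posts or from tools such as Garak, PyRIT, or Counterfit.

```python
# Hypothetical sketch of structuring a premortem and a Five Whys exercise
# for an AI red-teaming engagement; not taken from the AVID series.
from dataclasses import dataclass, field

@dataclass
class Premortem:
    """Assume the system has already failed, then work backward to causes."""
    assumed_failure: str
    candidate_causes: list[str] = field(default_factory=list)

@dataclass
class FiveWhys:
    """Repeatedly ask 'why' to move from a symptom toward a root concern."""
    initial_concern: str
    whys: list[str] = field(default_factory=list)

    def root_concern(self) -> str:
        return self.whys[-1] if self.whys else self.initial_concern

# Example premortem for a customer-support chatbot (illustrative only).
premortem = Premortem(
    assumed_failure="The chatbot leaked internal policy documents to users.",
    candidate_causes=[
        "Retrieval index included documents never meant for external use.",
        "No output filter checked responses against a sensitivity list.",
    ],
)

# Five Whys starting from one of the premortem's candidate causes.
analysis = FiveWhys(
    initial_concern=premortem.candidate_causes[0],
    whys=[
        "Why? Ingestion pulled every file from the shared drive.",
        "Why? There was no data-classification step before indexing.",
        "Why? The project plan treated data governance as out of scope.",
    ],
)

# The root concern then drives focused tactical testing (for example,
# targeted prompt-injection probes) rather than unfocused adversarial runs.
print(analysis.root_concern())
```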
Ultimately, the authors conclude that effective red teaming requires cross-organizational participation and should function as a comprehensive critical thinking exercise that strengthens AI systems through systematic analysis rather than mere technical vulnerability hunting.
Parts 1, 2 and 3 can be found here.
To dive deeper, read the full article here.
Please help us keep The AI Ethics Brief free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at montrealethics.ai/donate. Your support sustains our mission of democratizing AI ethics literacy and honours Abhishek Gupta’s legacy.
For corporate partnerships or larger contributions, please contact us at support@montrealethics.ai
Have an article, research paper, or news item we should feature? Leave us a comment below — we’d love to hear from you!
Ethics & Policy
5 interesting stats to start your week

Third of UK marketers have ‘dramatically’ changed AI approach since AI Act
More than a third (37%) of UK marketers say they have ‘dramatically’ changed their approach to AI since the introduction of the European Union’s AI Act a year ago, according to research by SAP Emarsys.
Additionally, nearly half (44%) of UK marketers say their approach to AI is more ethical than it was this time last year, while 46% report a better understanding of AI ethics, and 48% claim full compliance with the AI Act, which is designed to ensure safe and transparent AI.
The act sets out a phased approach to regulating the technology, classifying models into risk categories and setting up legal, technological, and governance frameworks which will come into place over the next two years.
However, some marketers are sceptical about the legislation, with 28% raising concerns that the AI Act will lead to the end of innovation in marketing.
Source: SAP Emarsys
Shoppers more likely to trust user reviews than influencers
Nearly two-thirds (65%) of UK consumers say they have made a purchase based on online reviews or comments from fellow shoppers, as opposed to 58% who say they have made a purchase thanks to a social media endorsement.
Sports and leisure equipment (63%), decorative homewares (58%), luxury goods (56%), and cultural events (55%) are identified as product categories where consumers are most likely to find peer-to-peer information valuable.
Accurate product information was found to be a key factor in whether a review was positive or negative. Two-thirds (66%) of UK shoppers say that discrepancies between the product they receive and its description are a key reason for leaving negative reviews, whereas 40% of respondents say they have returned an item in the past year because the product details were inaccurate or misleading.
According to research by Akeeno, purchases driven by influencer activity have also declined since 2023, with 50% reporting having made a purchase based on influencer content in 2025 compared to 54% two years ago.
Source: Akeeno
77% of B2B marketing leaders say buyers still rely on their networks
When vetting what brands to work with, 77% of B2B marketing leaders say potential buyers still look at the company’s wider network as well as its own channels.
Given the amount of content professionals are faced with, they are more likely to rely on other professionals they already know and trust, according to research from LinkedIn.
More than two-fifths (43%) of B2B marketers globally say their network is still their primary source for advice at work, ahead of family and friends, search engines, and AI tools.
Additionally, younger professionals surveyed say they are still somewhat sceptical of AI, with three-quarters (75%) of 18- to 24-year-olds saying that even as AI becomes more advanced, there’s still no substitute for the intuition and insights they get from trusted colleagues.
Since professionals are more likely to trust content and advice from peers, marketers are now investing more in creators, employees, and subject matter experts to build trust. As a result, 80% of marketers say trusted creators are now essential to earning credibility with younger buyers.
Source: LinkedIn
Business confidence up 11 points but leaders remain concerned about economy
Business leader confidence has risen from -72 in July to -61 in August, an 11-point improvement on last month.
The IoD Directors’ Economic Confidence Index, which measures business leader optimism in prospects for the UK economy, is now back to where it was immediately after last year’s Budget.
This improvement comes from several factors, including the rise in investment intentions (up from -27 in July to -8 in August), the rise in headcount expectations from -23 to -4 over the same period, and the increase in revenue expectations from -8 to 12.
Additionally, business leaders’ confidence in their own organisations is also up, standing at 1 in August compared to -9 in July.
Several factors remain a concern for business leaders: UK economic conditions were cited by 76%, up from 67% in May, while employment taxes (unchanged at 59%) and business taxes (up from 45% to 47%) continue to be of significant concern.
Source: The Institute of Directors
Total volume of alcohol sold in retail down 2.3%
The total volume of alcohol sold in retail has fallen by 2.3% in the first half of 2025 compared to the previous year, equivalent to 90 million fewer litres. Value sales are also down by 1.1% compared to the same period in 2024.
At the same time, retail sales of non-alcoholic drinks have increased by 5.5% compared to last year, while volume sales are up by 2.3%, equivalent to a further 1.5 billion litres.
As the demand for non-alcoholic beverages grows, people increasingly expect these options to be available in their local bars and restaurants, with 55% of Brits and Europeans now expecting bars to always serve non-alcoholic beer.
As well as this, there are shifts happening within the alcoholic beverages category with value sales of no and low-alcohol spirits rising by 16.1%, and sales of ready-to-drink spirits growing by 11.6% compared to last year.
Source: Circana
Ethics & Policy
AI ethics under scrutiny, young people most exposed

New reports on the rise of artificial intelligence (AI) show that incidents linked to ethical breaches have more than doubled in just two years.
At the same time, entry-level job opportunities have been shrinking, partly due to the spread of this automation.
AI is moving from the margins to the mainstream at extraordinary speed and both workplaces and universities are struggling to keep up.
Tools such as ChatGPT, Gemini and Claude are now being used to draft emails, analyse data, write code, mark essays and even decide who gets a job interview.
Alongside this rapid rollout, a March report from McKinsey, one by the OECD in July and an earlier Rand report warned of a sharp increase in ethical controversies — from cheating scandals in exams to biased recruitment systems and cybersecurity threats — leaving regulators and institutions scrambling to respond.
The McKinsey survey found that almost eight in 10 organisations now use AI in at least one business function, up from half in 2022.
While adoption promises faster workflows and lower costs, many companies deploy AI without clear policies. Universities face similar struggles, with students increasingly relying on AI for assignments and exams while academic rules remain inconsistent, it said.
The OECD’s AI Incidents and Hazards Monitor reported that ethical and operational issues involving AI have more than doubled since 2022.
Common concerns included accountability (who is responsible when AI errs), transparency (whether users understand AI decisions), and fairness (whether AI discriminates against certain groups).
Many models operated as “black boxes”, producing results without explanation, making errors hard to detect and correct, it said.
In workplaces, AI is used to screen CVs, rank applicants, and monitor performance. Yet studies show AI trained on historical data can replicate biases, unintentionally favouring certain groups.
Rand reported that AI was also used to manipulate information, influence decisions in sensitive sectors, and conduct cyberattacks.
Meanwhile, 41 per cent of professionals report that AI-driven change is harming their mental health, with younger workers feeling most anxious about job security.
LinkedIn data showed that entry-level roles in the US have fallen by more than 35 per cent since 2023, while 63 per cent of executives expected AI to replace tasks currently done by junior staff.
Aneesh Raman, LinkedIn’s chief economic opportunity officer, described this as “a perfect storm” for new graduates: hiring freezes, economic uncertainty, and AI disruption, as the BBC reported on August 26.
LinkedIn forecasts that 70 per cent of jobs will look very different by 2030.
Recent Stanford research confirmed that employment among early-career workers in AI-exposed roles has dropped 13 per cent since generative AI became widespread, while more experienced workers or less AI-exposed roles remained stable.
Companies are adjusting through layoffs rather than pay cuts, squeezing younger workers out, it found.
In Belgium, AI ethics and fairness debates have intensified following a scandal in Flanders’ medical entrance exams.
Investigators caught three candidates using ChatGPT during the test.
Separately, 19 students filed appeals, suspecting that others may have used AI unfairly after an unusually high pass rate: some 2,608 of 5,544 participants passed, though only 1,741 could enter medical school. The success rate jumped to 47 per cent from 18.9 per cent in 2024, raising concerns about fairness and potential AI misuse.
Flemish education minister Zuhal Demir condemned the incidents, saying students who used AI had “cheated themselves, the university and society”.
Exam commission chair Professor Jan Eggermont noted that the higher pass rate might also reflect easier questions, which were deliberately simplified after the previous year’s exam proved excessively difficult, as well as the record number of participants, rather than AI-assisted cheating alone.
French-speaking universities, in the other part of the country, were not affected by the scandal, as they still conduct medical entrance exams entirely on paper, an approach Demir said she was considering returning to.
Ethics & Policy
Governing AI with inclusion: An Egyptian model for the Global South

When artificial intelligence tools began spreading beyond technical circles and into the hands of everyday users, I saw a real opportunity to understand this profound transformation and harness AI’s potential to benefit Egypt as a state and its citizens. I also had questions: Is AI truly a national priority for Egypt? Do we need a legal framework to regulate it? Does it provide adequate protection for citizens? And is it safe enough for vulnerable groups like women and children?
These questions were not rhetorical. They were the drivers behind my decision to work on a legislative proposal for AI governance. My goal was to craft a national framework rooted in inclusion, dialogue, and development, one that does not simply follow global trends but actively shapes them to serve our society’s interests. The journey Egypt undertook can offer inspiration for other countries navigating the path toward fair and inclusive digital policies.
Egypt’s AI Development Journey
Over the past five years, Egypt has accelerated its commitment to AI as a pillar of its Egypt Vision 2030 for sustainable development. In May 2021, the government launched its first National AI Strategy, focusing on capacity building, integrating AI in the public sector, and fostering international collaboration. A National AI Council was established under the Ministry of Communications and Information Technology (MCIT) to oversee implementation. In January 2025, President Abdel Fattah El-Sisi unveiled the second National AI Strategy (2025–2030), which is built around six pillars: governance, technology, data, infrastructure, ecosystem development, and capacity building.
Since then, the MCIT has launched several initiatives, including training 100,000 young people through the “Our Future is Digital” programme, partnering with UNESCO to assess AI readiness, and integrating AI into health, education, and infrastructure projects. Today, Egypt hosts AI research centres, university departments, and partnerships with global tech companies—positioning itself as a regional innovation hub.
AI-led education reform
AI is not reserved for startups and hospitals. In May 2025, President El-Sisi instructed the government to consider introducing AI as a compulsory subject in pre-university education. In April 2025, I formally submitted a parliamentary request and another to the Deputy Prime Minister, suggesting that the government include AI education as part of a broader vision to prepare future generations, as outlined in Egypt’s initial AI strategy. The political leadership’s support for this proposal highlighted the value of synergy between decision-makers and civil society. The Ministries of Education and Communications are now exploring how to integrate AI concepts, ethics, and basic programming into school curricula.
From dialogue to legislation: My journey in AI policymaking
As Deputy Chair of the Foreign Affairs Committee in Parliament, I believe AI policymaking should not be confined to closed-door discussions. It must include all voices. In shaping Egypt’s AI policy, we brought together:
- The private sector, from startups to multinationals – to contribute views on regulations, data protection, and innovation.
- Civil society – to emphasise ethical AI, algorithmic justice, and protection of vulnerable groups.
- International organisations, such as the OECD, UNDP, and UNESCO – to share global best practices and experiences.
- Academic institutions – I co-hosted policy dialogues with the American University in Cairo and the American Chamber of Commerce (AmCham) to discuss governance standards and capacity development.
From recommendations to action: The government listening session
To transform dialogue into real policy, I formally requested the MCIT to host a listening session focused solely on the private sector. Over 70 companies and experts attended, sharing their recommendations directly with government officials.
This marked a key turning point, transitioning the initiative from a parliamentary effort into a participatory, cross-sectoral collaboration.
Drafting the law: Objectives, transparency, and risk-based classification
Based on these consultations, participants developed a legislative proposal grounded in transparency, fairness, and inclusivity. The proposed law includes the following core objectives:
- Support education and scientific research in the field of artificial intelligence
- Provide specific protection for individuals and groups most vulnerable to the potential risks of AI technologies
- Govern AI systems in alignment with Egypt’s international commitments and national legal framework
- Enhance Egypt’s position as a regional and international hub for AI innovation, in partnership with development institutions
- Support and encourage private sector investment in the field of AI, especially for startups and small enterprises
- Promote Egypt’s transition to a digital economy powered by advanced technologies and AI
To operationalise these objectives, the bill includes:
- Clear definitions of AI systems
- Data protection measures aligned with Egypt’s 2020 Personal Data Protection Law
- Mandatory algorithmic fairness, transparency, and auditability
- Incentives for innovation, such as AI incubators and R&D centres
- Establishment of ethics committees and training programmes for public sector staff
The draft law also introduces a risk-based classification framework, aligned with global best practices, which categorises AI systems into three tiers:
1. Prohibited AI systems – These are banned outright due to unacceptable risks, including harm to safety, rights, or public order.
2. High-risk AI systems – These require prior approval, detailed documentation, transparency, and ongoing regulatory oversight. Common examples include AI used in healthcare, law enforcement, critical infrastructure, and education.
3. Limited-risk AI systems – These are permitted with minimal safeguards, such as user transparency, labelling of AI-generated content, and optional user consent. Examples include recommendation engines and chatbots.
This classification system ensures proportionality in regulation, protecting the public interest without stifling innovation.
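To make that proportionality concrete, the sketch below shows one hypothetical way a three-tier scheme of this kind could be encoded for an internal compliance checklist; the tier names follow the draft’s categories, but the example systems and the mapping are our assumptions, not text from the bill.

```python
# Hypothetical encoding of a three-tier, risk-based AI classification,
# loosely following the categories in Egypt's draft law; examples are ours.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "banned outright due to unacceptable risk"
    HIGH_RISK = "prior approval, documentation, and ongoing oversight required"
    LIMITED_RISK = "permitted with minimal safeguards such as labelling"

# Illustrative mapping from use case to tier (assumed, for demonstration).
USE_CASE_TIERS = {
    "diagnostic triage model in a hospital": RiskTier.HIGH_RISK,
    "exam-grading assistant": RiskTier.HIGH_RISK,
    "product recommendation engine": RiskTier.LIMITED_RISK,
    "customer-service chatbot": RiskTier.LIMITED_RISK,
}

def obligations(use_case: str) -> str:
    # Default cautiously to the high-risk tier for unlisted systems.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH_RISK)
    return f"{use_case}: {tier.name} ({tier.value})"

for case in USE_CASE_TIERS:
    print(obligations(case))
```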
Global recognition: The IPU applauds Egypt’s model
The Inter-Parliamentary Union (IPU), representing over 179 national parliaments, praised Egypt’s AI bill as a model for inclusive AI governance. It highlighted that involving all stakeholders builds public trust in digital policy and reinforces the legitimacy of technology laws.
Key lessons learned
- Inclusion builds trust – Multistakeholder participation leads to more practical and sustainable policies.
- Political will matters – President El-Sisi’s support elevated AI from a tech topic to a national priority.
- Laws evolve through experience – Our draft legislation is designed to be updated as the field develops.
- Education is the ultimate infrastructure – Bridging the future digital divide begins in the classroom.
- Ethics come first – From the outset, we established values that focus on fairness, transparency, and non-discrimination.
Challenges ahead
As the draft bill progresses into final legislation and implementation, several challenges lie ahead:
- Training regulators on AI fundamentals
- Equipping public institutions to adopt ethical AI
- Reducing the urban-rural digital divide
- Ensuring national sovereignty over data
- Enhancing Egypt’s global role as a policymaker—not just a policy recipient
Ensuring representation in AI policy
As a female legislator leading this effort, it was important for me to prioritise the representation of women, youth, and marginalised groups in technology policymaking. If AI is built on biased data, it reproduces those biases. That’s why the policymaking table must be round, diverse, and representative.
A vision for the region
I look forward to seeing Egypt:
- Advance regional AI policy partnerships across the Middle East and Africa
- Embed AI ethics in all levels of education
- Invest in AI for the public good
Because AI should serve people—not control them.
Better laws for a better future
This journey taught me that governing AI requires courage to legislate before all the answers are known—and humility to listen to every voice. Egypt’s experience isn’t just about technology; it’s about building trust and shared ownership. And perhaps that’s the most important infrastructure of all.
The post Governing AI with inclusion: An Egyptian model for the Global South appeared first on OECD.AI.