
Global Voices, Local Harm—The New Digital Colonialism In AI?


From military partnerships to underpaid labor in the Global South, the systems we deploy are built on global inequities. For the channel ecosystem and its leaders, AI ethics isn’t just about code—it’s about power.


In 2020, Dr. Timnit Gebru, a prominent AI ethics researcher, was fired from Google after raising concerns about bias in large language models. This was no ordinary dismissal: her critique centered on the disproportionate impact LLMs could have on marginalized communities, and her work was sharply critical of the very models she had helped build. It is exactly the type of questioning we in the channel ecosystem need to continue.

Let’s put the AI policy challenges in context: The rapid growth of AI and the increased use of LLMs require policies that are not only strategic, but inclusive. According to the Boston Consulting Group, 74 percent of executives say AI is central to their growth strategies, yet only 15 percent have a dedicated AI ethics framework. The question becomes: why do so few of them have one?

One answer is a lack of psychological safety compounded by increasingly homogeneous teams. McKinsey data shows homogeneous teams are significantly more likely to overlook ethical red flags in product development. A lack of psychological safety makes it difficult for marginalized professionals to speak up, even when harm is evident. If you thought you’d be fired for calling out ethically gray areas, would you be like Dr. Gebru?

When Innovation Exploits Instead Of Includes

Without inclusive leaders writing AI policy, innovation risks becoming a new form of digital colonialism, whereby data, labor and culture are extracted from marginalized communities without consent, compensation or proper representation.

Case in point: In 2023, Kenyan content moderators working for OpenAI were paid less than $2 per hour to review traumatic material used to train AI models.

Globally, the disparity is stark. Less than 1 percent of AI research funding goes to institutions in Africa, Latin America or Southeast Asia. Yet these communities in the Global South and beyond bear the brunt of AI’s hidden labor.

The result? AI systems that reflect the biases and values of the powerful, not those most impacted by their deployment.

The Military-Industrial-Tech Complex

Earlier this month, the U.S. Army swore in executives from Palantir, Meta and OpenAI as Army Reserve lieutenant colonels, with no requirement of prior military experience or expertise. While the move was not entirely unprecedented, it solidifies the links between the tech sector and global military power.

These companies already extract labor from the Global South and are now formally embedded within institutions that project force globally. This becomes a double extraction:

  • First, communities provide the invisible labor and data that fuel AI.
  • Then, those same systems are adapted for surveillance, targeting and geopolitical influence—all with limited accountability to the communities they originated from.

It echoes a familiar colonial pattern: Like the British East India Company, today’s tech giants fuse economic exploitation with military power. Only now, the resources are data, attention and human cognition rather than spices or land.

Digital Colonialism Is Here

And the channel ecosystem cannot afford to ignore it. As leaders in the tech channel, we do more than sell innovation; we shape how innovation impacts the world. Knowing this, we have a responsibility to ask hard questions about where our tools come from, who they serve and who is left out or harmed.

It’s not enough to focus on performance metrics or margin optimization if the systems we deploy are built on invisible labor, cultural erasure or structural inequity. While some see AI simply as a product to deploy, those of us committed to inclusive leadership structures see it as a reflection of our priorities, our partnerships and our values.

The next wave of growth in the channel will not come from technological innovation alone. It will come from deeper trust and shared accountability combined with ethical foresight. We have the power and the collective responsibility to ensure that AI does more than scale profit; we can also ensure it does what is just.

So, the question stands: Will we be the architects of inclusive progress or silent partners in digital extraction?





Heron Financial brings out AI ethics committee and training programme


Heron Financial has unveiled an artificial intelligence (AI) ethics committee and an AI student cohort.

The mortgage and protection broker’s newly formed AI ethics committee will give employees the opportunity to offer input on how AI is utilised, with the aim of balancing innovation with transparency, fairness and accountability.

Heron Financial said the AI training will be a “combination of technical learning with ethical oversight” to ensure the “responsible integration” of the technology into Everglades, its digital platform.

As part of its AI student cohort, staff from different teams will be upskilled through an AI development programme.

Matt Coulson, the firm’s co-founder, said: “I’m proud to be launching our first AI learning and development cohort. We see AI not just as a way to enhance our operations, but as an opportunity to enhance our people.

“By investing in their skills, we’re giving them knowledge they can use in their work and daily lives. Human and technology centricity are part of our core values; it’s only natural we bring the two together.”



‘It’s about combining technology and human expertise to raise the bar’

Alongside this, Heron Financial’s Everglades platform will be embedded with AI-driven innovations comprising an AI reporting assistant, an AI adviser assistant and a prediction model, which the firm said uses basic customer details to predict the likelihood of a customer securing a mortgage.
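Heron Financial has not published technical details of the prediction model, but as a purely illustrative sketch, an approval-likelihood predictor of this general shape, assuming a scikit-learn logistic regression over hypothetical applicant fields such as income, deposit and credit score, might look like this:

```python
# Illustrative sketch only: Heron Financial has not disclosed its model.
# The field names, labels and choice of logistic regression are assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

NUMERIC = ["annual_income", "deposit", "loan_amount", "credit_score"]
CATEGORICAL = ["employment_status"]

# Scale numeric fields, one-hot encode categorical ones, then fit a
# logistic regression that outputs an approval probability.
model = Pipeline([
    ("prep", ColumnTransformer([
        ("num", StandardScaler(), NUMERIC),
        ("cat", OneHotEncoder(handle_unknown="ignore"), CATEGORICAL),
    ])),
    ("clf", LogisticRegression(max_iter=1000)),
])

def train(history: pd.DataFrame) -> None:
    # `approved` is a hypothetical 0/1 outcome column from past applications.
    model.fit(history[NUMERIC + CATEGORICAL], history["approved"])

def approval_likelihood(applicant: dict) -> float:
    # Probability of the positive (approved) class for a single applicant.
    return float(model.predict_proba(pd.DataFrame([applicant]))[0, 1])
```

Trained on historical application outcomes, a model of this kind would return a probability between 0 and 1 for each new applicant, which is the sort of signal an adviser-facing platform could surface.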

The platform aims to support prospective and existing homeowners by simplifying processes, as well as providing a referral route for clients and streamlining the sales-to-mortgage process to help sales teams and housebuilders.

Also in the pipeline is a new generation of AI ‘agents’ within Everglades, which Heron Financial said will further automate and optimise the whole adviser journey.

Coulson added: “We expect these developments to save hours in the adviser servicing journey, enabling our advisers to spend more time with clients and less time on admin. It’s about combining technology and human expertise to raise the bar for the whole industry.”

At the beginning of the year, Coulson discussed the firm’s green retrofit process in an interview with Mortgage Solutions.






Starting my AI and higher education seminar


Greetings from early September, when fall classes have begun.  Today I’d like to share information about one of my seminars as part of my long-running practice of being open about my teaching.

It’s in Georgetown University’s Learning, Design, and Technology (LDT) graduate program. I’m team-teaching it with LDT’s founding director, Professor Eddie Maloney, who first designed and led the class last year, along with some great guest presenters. The subject is the impact of AI on higher education.

Every week students dive into one topic, from pedagogy to criticism, changes in politics and economics to using LLMs to craft simulations.  During the semester students lead discussions and also teach the class about a particular AI technology.  Each student will also tackle Georgetown’s Applied Artificial Intelligence Microcredential.

[Image: Midjourney-created image of AI impacting a university.]

Almost all of the readings are online.  We have two books scheduled: B. Mairéad Pratschke, Generative AI and Education and De Kai, Raising AI: An Essential Guide to Parenting Our Future.

Here’s the syllabus:

Introduction to AI in Higher Education:
Overview of AI, its history, and current applications in academia

Signing up for tech sessions (45 min max) (pair up) and discussion leading spots

Delving deeper into LLMs
Guest speakers: Molly Chehak and Ella Csarno.
Readings:

  1. AI Tools in Society & Cognitive Offloading
  2. MIT study, Your Brain on ChatGPT. (overview, since the study is 120 pages)
  3. Allie K. Miller, Practical Guide to Prompting ChatGPT 5 Differently

Macro Impacts: Economics, Culture, Politics

A broad look at AI’s societal effects—on labor, governance, and policy—with a focus on emerging regulatory frameworks and debates about automation and democracy.

Readings:

Optional: Daron Acemoglu, “The Simple Macroeconomics of AI”

Institutional Responses

This week, we will also examine how colleges and universities are responding structurally to the rise of AI, from policy and pedagogy to strategic planning and public communication.

Reading:

How Colleges and Universities Are Grappling with AI

We consider AI’s influence on teaching and learning through institutional case studies and applied practices, with guest insights on faculty development and student experience.

Guest speaker: Eddie Watson on the AAC&U Institute on AI, Pedagogy, and the Curriculum.

Reading: 

Pedagogy, Conversations, and Anthropomorphism

Through simulations and classroom case studies, we examine the pedagogical potential and ethical complications of human-AI interaction, academic integrity, and AI as a “conversational partner.”

Readings:

AI and Ethics/Humanities

This session explores ethical and philosophical questions around AI development and deployment, drawing on work in the humanities, global ethics, and human-centered design.

Guest speaker: De Kai

Readings: selections from Raising AI: An Essential Guide to Parenting Our Future (TBD)

Critiques of AI

We engage critical perspectives on AI, focusing on algorithmic bias, epistemology, and the political economy of data, while challenging dominant narratives of inevitability and neutrality.

Readings:

Human-AI Learning

This week considers how humans and AI collaborate for learning, and what this partnership means for workforce development, education, and a sense of lifelong fulfillment.

Guest Speaker: Dewey Murdick

Readings: TBD

 

Agentic Possibilities

A close look at emerging AI systems and agents, with attention to autonomy, instructional design, and how educational tools are integrating generative AI features.

Reading:

  • Pratschke, Generative AI and Education, chapters 5-6

Future Possibilities

We explore visions for the future of AI in higher education, including utopian and dystopian framings, and ask how ethical leadership and equity might shape what comes next.

Readings:

One week with topic reserved for emerging issues

Topic, materials, and exercises to be determined by instructors and students.

Student final presentations

I’m very excited to be teaching it.


A dynamic dialogue with Southeast Asia to put the OECD AI Principles into action


A policy roundtable in Tokyo and a workshop in Bangkok deepened the dialogue between Southeast Asia and the OECD, fostering collaboration on AI governance across countries, sectors, and policy communities.

A dialogue strengthened by two dynamics

Southeast Asia is rapidly emerging as a vibrant hub for artificial intelligence. From Indonesia and Thailand to Singapore and Viet Nam, governments have launched national AI strategies, and ASEAN has published a Guide on AI Governance and Ethics to promote consistency of AI frameworks across jurisdictions.

At the same time, the OECD’s engagement with Southeast Asia is strengthening. The 2024 Ministerial Council Meeting highlighted the region as a priority for OECD global relations, coinciding with the tenth anniversary of the Southeast Asia Regional Programme (SEARP) and the initiation of accession processes for Indonesia and Thailand.

Together, these dynamics open new avenues for practical cooperation on trustworthy, safe and secure AI.

In 2025, this momentum translated into two concrete engagement initiatives: a policy roundtable in Tokyo in May and a co-creation workshop for the OECD AI Policy Toolkit in Bangkok in August. Both events shaped regional dialogues on AI governance and helped to bridge the gap between technical expertise and policy design.

Japan actively supported both initiatives, demonstrating a strong commitment to regional AI governance. At the OECD SEARP Regional Forum in Bangkok, Japan expressed hope that AI would become a new pillar of OECD–Southeast Asia cooperation, highlighting the Tokyo Policy Roundtable on AI as the first of many such initiatives. Subsequently, Japan supported the co-creation workshop in Bangkok in August, helping to ensure a regional focus and high-level engagement across Southeast Asia.

The Tokyo roundtable enabled discussions on AI in agriculture, natural disaster management, national strategies and more

On 26 May 2025, the OECD and its Tokyo Office held a regional policy roundtable, bringing together over 80 experts and policymakers from Japan, Korea, Southeast Asia, and the ASEAN Secretariat, with many more joining online. The event highlighted the importance of linking technical expertise with policy to ensure AI delivers benefits responsibly, drawing on the OECD AI Principles and Policy Observatory. Speakers from ASEAN, Thailand, and Singapore shared progress on implementing their national AI strategies.

AI’s potential came into focus through two powerful examples. In agriculture, it can speed up crop breeding and enable precision farming, while in disaster management, digital twins built from satellite and telecoms data can strengthen early warnings and damage assessments. As climate change intensifies agricultural stress and natural hazards, these cases demonstrate how AI can deliver real societal benefits—while underscoring the need for robust governance and regional cooperation, supported by OECD initiatives such as the upcoming AI Policy Toolkit.

The OECD presented activities in international AI governance, including the AI Policy Observatory, the AI Incidents and Hazards Monitor and the Reporting Framework for the G7 Hiroshima Code of Conduct for Developers of Advanced AI systems.

Bangkok co-creation workshop: testing the OECD AI Policy Toolkit

Following the Tokyo roundtable, the OECD, supported by the Foreign Ministries of Japan and Thailand, hosted the first co-creation workshop for the OECD AI Policy Toolkit on 6 August 2025 in Bangkok. Twenty senior policymakers and AI experts from across the region contributed regional perspectives to shape the tool, which will feature a self-assessment module to identify priorities and gaps, alongside a repository of proven policies and practices. The initiative, led by Costa Rica as Chair of the 2025 OECD Ministerial Council Meeting, has already gained strong backing from governments and organisations, including Japan’s Ministry of Internal Affairs and Communications.

Hands-on discussion on key challenges and practical solutions

The co-creation workshop provided a space for participants to work in breakout groups and discuss concrete challenges, explore practical solutions and design effective AI policies in key domains.

Participants identified several pressing challenges for AI governance in Southeast Asia. Designing public funding programmes for AI research and development remains difficult in an environment where technology evolves faster than policy cycles, while the need for large-scale investment continues to grow.

The scarcity of high-quality, local-language data, weak governance frameworks, limited data-sharing mechanisms, and reliance on foreign compute providers further constrain progress, alongside the shortage of locally developed AI models tailored to sectoral needs.

Participants also focused on labour market transformation, digital divides, and the need to advance AI literacy across all levels of society – from citizens to policymakers – to foster awareness of both opportunities and risks.

Participants showcased promising national initiatives, from responsible data-sharing frameworks and investment incentives for data centres and venture capital, to sectoral data platforms and local-language large language models. Countries are also rolling out capacity-building programmes to strengthen AI adoption and oversight, while seeking the right balance between regulation and innovation to foster trustworthy AI.

Across groups, participants agreed on the need to strengthen engagement, foster collaboration, and create enabling conditions for the practical deployment of AI, capacity building, and knowledge sharing.

The instruments discussed during the workshop will feed into the Toolkit’s policy repository, enabling other countries to draw on these experiences and adapt them to their national contexts.

Taking AI governance from global guidance to local practice

The Tokyo roundtable and Bangkok workshop were key milestones in building a regional dialogue on AI governance with Southeast Asian countries. By combining policy frameworks with technical demonstrations, the discussions focused on turning international guidance into practical, locally tailored measures. Southeast Asia is already shaping how global principles translate into action, and with continued collaboration, the OECD AI Policy Toolkit will provide governments in the region—and beyond—with concrete tools to design and implement trustworthy AI.

The authors would like to thank the team members who contributed to the success of these projects: Hugo Lavigne, Luis Aranda, Lucia Russo, Celine Caira, Kasumi Sugimoto, Julia Carro, Nikolas Schmidt and Takako Kitahara.
