
Ethics & Policy

The rise (or not) of AI ethics officers

Four years after the World Economic Forum (WEF) called for chief artificial intelligence (AI) ethics officers, 79% of executives say AI ethics is important to their enterprise-wide AI approach.

Outside of large technology vendors, however, the role hasn’t taken off. Is centralising that responsibility the best approach, or should organisations look to other governance models? And if you do have someone leading AI ethics, what will they actually be doing? 

For one thing, enterprises tend to shy away from calling it ethics, says Forrester vice-president Brandon Purcell: “Ethics can connote a certain morality, a certain set of norms, and multinational companies are often dealing with many different cultures. Ethics can become a fraught term even within the same country where you have polarised views on what is right, what is fair.”

Salesforce may be urging non-profits to create ethical AI strategies, but enterprises talk about responsible AI and hire for that: in the US, as of April 2025, job postings on LinkedIn for “responsible AI use architects” are up 10% year on year (YoY).

But what most organisations are looking for is AI governance, Purcell says, adding: “Some companies are creating a role for an AI governance lead; others are rightfully looking at it as a team effort, a shared responsibility across everyone who touches the AI value chain.”

Organisations want a person or a team in charge of managing AI risks, agrees Phaedra Boiniodiris, global leader for trustworthy AI at IBM Consulting, “making sure employees and vendors are held accountable for AI solutions they’re buying, using or building”. She sees roles such as AI governance lead or risk officer ensuring “accountability for AI outputs and their impact”.

Whatever the title, Bola Rotibi, chief of enterprise research at CCS Insight, says: “The role is steeped in the latest regulations, the latest insight, the latest trends – they’re going to industry discussion, they’re the home of all of that knowledge around AI ethics.”

But Gartner Fellow and digital ethicist Frank Buytendijk cautions against siloing what should be a management responsibility: “The result should not be the rest of the organisation thinking the AI ethics officer is responsible for making the right ethical decisions.

“AI is only one topic: data ethics are important too. Moving forward, the ethics of spatial computing may be even more impactful than AI ethics, so if you appoint a person with a strategic role, a broader umbrella – digital ethics – may be more interesting.”

Protecting more than data

EfficientEther founder Ryan Mangan believes that, so far, a dedicated AI ethics officer remains a unicorn: “Even cyber security still struggles for a routine board-level seat, so unless an ethics lead lands squarely in the C-suite with real veto power, the title risks being just another mid-tier badge, more myth than mandate.”

A recent survey for Dastra suggests many organisations (51%) view AI compliance as the purview of the data protection officer (DPO), although Dastra co-founder Jérôme de Mercey suggests the role needs to expand. “The most important question with AI is ‘What is the purpose and how do I manage risk?’, and that’s the same question for data processing.”

Both roles involve regulation and technical questions, communicating across the organisation, and delivering strong governance. For de Mercey, the General Data Protection Regulation’s (GDPR) concepts of fundamental rights are also key for AI ethics: “The economic and societal risk is always [pertinent] because there are people with personal data and DPOs are used to assessing this kind of risk.”

A standalone AI ethics officer isn’t feasible for smaller businesses, says Isabella Grandi, associate director of data strategy and governance at NTT Data. “In most places, responsibility for ethical oversight is still added to someone else’s job, often in data governance or legal, with limited influence. That’s fine up to a point, but as AI deployments scale, the risks get harder to manage on the side.”

DPOs, however, are unlikely to have enough expertise in AI and data science, Purcell argues. “Of course, there is no AI without data. But at the same time, today’s AI models are pre-trained on vast corpuses of data that don’t reside within a company’s four walls. [They may not know the right questions to ask] about the data that was sourced to use those models, about how those models were evaluated, about intended uses, and limitations and vulnerabilities of the models.”

Data science expertise isn’t enough either, he notes. “If we define fairness in terms of ‘the most qualified candidate gets the job’, that’s great, but we also know that there are all sorts of problems with the data used to determine who is most qualified. Maybe we have to look at the distribution of different types of applicants and acceptance rates given an algorithm. Your rank-and-file data scientist doesn’t necessarily know to ask those sorts of questions, whereas somebody who has been trained in ethics does, and can help to find the right balance for your organisation.”

The responsible AI team very often does not have somebody who is certified in AI ethics
Marisa Zalabak, Global Alliance for Digital Education and Sustainability

The remit for this role is distinct from the concerns of the DPO – or the CIO or CISO, says Gopinath Polavarapu, chief digital and AI officer at Jaggaer: “Those leaders safeguard uptime, cyber defence and lawful data use. The AI ethics lead wrestles with deeper questions – is this decision fair? Is it explainable? Does it reinforce or reduce inequality?”

Boiniodiris adds more questions: “Does this application of AI align with our company values? Who could be adversely affected? Do we fully understand the context of the data being used for this AI and was it gathered with consent? Have we communicated how this AI should be used? Are we being transparent?”

Asking what human values AI should reflect is a reminder that the role needs legal, social science, data science and ethics expertise.

“Responsible AI teams are lawyers, sometimes they’re researchers or psychologists – the responsible AI team very often does not have somebody who is certified in AI ethics,” says Marisa Zalabak, co-founder of the Global Alliance for Digital Education and Sustainability.

With more than 250 standards for ethical AI and another 750 in progress, they will need training – Zalabak recommends the Center for AI and Digital Policy while organisations build their own resources – that covers more than “the two things people think about when they think of AI ethics – bias and data privacy – because there’s a huge range of things, including multiple psychosocial impacts”.

The power to say no

While they have access to decision-makers, neither architects nor DPOs are senior enough to have sufficient impact, or to gain visibility of new projects early enough. AI ethics needs to be involved at the design stage.

“The role must sit with executive leadership – reporting to the CEO, the risk committee, or directly to the board – to pause or recalibrate any model that jeopardises fairness or safety,” Polavarapu adds.

A responsible AI lead should be at least at the level of vice-president, Purcell agrees: “Typically, if there’s a chief data officer, they sit within that organisation. If data, analytics and AI are owned by the CIO, they sit within that organisation.”

As well as visibility, they need authority. “From the very start of when an AI project is conceived, that person is involved to elucidate what should be the responsibility requirements for this, in some cases, highly consequential, high-risk use case,” says Purcell.

“They are responsible for bringing in additional stakeholders who will be impacted, to identify where potential harms might occur. They help to create and ensure adherence to best practices in the development of the system, including monitoring and observability. And then, finally, they have a say in the go/no-go evaluation of the system: does it meet the requirements we’ve set out in the beginning?”

That will involve bringing in additional stakeholders with diverse perspectives and backgrounds to test the concept of the AI system and where it could go wrong, so it can be red-teamed for those edge cases.

“To a certain extent, it’s no different to what we’ve had with other new officers like ESG officers or heads of sustainability who keep up with specific regulations surrounding that capability,” says Rotibi. “The AI ethical officer, like any other officer, should be part of a governing body that looks overall at the company’s posture, whether that be around data privacy, or whether that be around AI, and asks ‘What’s the exposure? What are the vulnerabilities for an organisation?’”

The value of an AI ethics officer lies not just in their expertise and their ability to communicate, but also in the authority they’re given. Rotibi believes that needs to be structural: “You give them governance authority and escalation channels, you give them the ability to do decision impact assessments, so that there is a level of explainability in whatever they say. And you have consequences – because if you don’t have those structures in place, it becomes wishy-washy advisory.”

Boiniodiris agrees: “AI governance teams can pull together committees, but if no one shows up to the meetings, then progress is impossible. The message that this work matters has to come from the enterprise’s highest levels, communicated not just once, but consistently, until it’s embedded in the company culture.” 

Ethics needs to be cross-functional, warns Polavarapu: “Steering committees that span compliance, data science, HR, product and engineering ensure every release is stress-tested for unintended consequences before it ships.”

But Buytendijk maintains that an AI ethics officer should chair a digital ethics advisory board that doesn’t act as a steering committee: “There should be no barrier for line or project managers to hand in their ethical dilemmas. If it is a steering committee, line and project managers lose control over their project, and that is a barrier.”

In practice, he suggests creating advisory boards with sufficient authority: “We asked the advisory boards we have been talking with about how much it happens that their recommendations are not followed, and that essentially never happens.”

Doing well by doing good

Even so, AI ethics officers are unlikely to have the power to block widespread trends with ethical impacts, such as agentic AI that automates workflows and may reduce the number of staff required.

A recent NTT Data survey shows the tensions: 75% of leaders say their organisation’s AI ambitions conflict with its corporate sustainability goals. A third of executives say responsibility matters more than innovation, another third rates innovation higher than safety, and the remaining third assigns them equal importance.

The solution may be to view AI ethics and governance not as the necessary cost of avoiding loss (of trust, reputation, customers or even money, if fines are incurred), but as proactively generating longer-term value – whether that’s recognition of industry leadership or simply doing what the business does, better.

“Responsible AI isn’t a barrier to profit, it’s actually an accelerator for innovation,” Boiniodiris says. She compares it to guardrails on a racetrack that let you go fast safely. “If you embed strong governance from the start, you create the kind of framework that lets you scale responsibly and with confidence.”

AI ethics isn’t just about compliance or even good customer relations: it’s good business and competitive differentiation. Companies embracing AI ethics audits report more than double the ROI of those that don’t demonstrate that kind of rigour. And the Center for Democracy & Technology’s report on Assessing AI is a comprehensive look at how to evaluate projects to reach those kinds of returns.

If you embed strong governance from the start, you create the kind of framework that lets you scale responsibly and with confidence
Phaedra Boiniodiris, IBM Consulting

The recent ROI of AI ethics paper from the Digital Economist builds on tools such as the Holistic Return On Ethics Framework developed by IBM and Notre Dame, and Rolls-Royce’s Aletheia Framework AI ethics checklist, to provide metrics for an ethical AI ROI calculator. Rather than treating ethical AI as a cost, the paper calls it “a sophisticated financial risk management and revenue generation strategy with measurable, substantial economic returns”.

Lead author Zalabak describes it as “the right information for somebody who could not care less about ethics – ultimately, what’s the business case?”, and she describes AI ethics as “a huge opportunity for people to be amazed by the exponential potential of good”.

A clear ethical AI framework makes a company a more attractive, less risky investment, adds JMAN Group CEO Anush Newman: “When we’re looking at potential portfolio companies, their approach to AI governance and ethics is becoming a serious consideration. A robust data strategy, which inherently includes ethical considerations, isn’t just ‘nice to have’ anymore, it’s fast becoming a necessity.”

Organisations will almost certainly need to adopt a more holistic approach to evaluating risks and harms rather than marking their own homework. AI regulations remain a patchwork, but standards can help. Many enterprise customers now require verifiable controls such as ISO/IEC 42001, which attests that an Artificial Intelligence Management System (AIMS) is operating effectively, Polavarapu notes.

The conversation has moved on from staying on the right side of regulation such as the EU AI Act to embedding AI governance throughout product lifecycles. Grandi adds that UK firms look to the AI Opportunities Action Plan and the AI Playbook for guidance – but still need the internal clarity an AI ethics officer could bring.

Purcell recommends starting by aligning AI systems with their intended outcomes – and with company values. “AI alignment doesn’t just mean doing the right thing, it means, ‘Are we meeting our objectives with AI?’, and that has a material impact on a business’s profitability. A good AI ethics officer is someone who can show where alignment with business objectives also means being responsible, doing the right thing and setting appropriate guardrails, mechanisms and practices in place.”

Effective AI governance requires principles, such as fairness, transparency and safety; policies; and practices that ensure systems follow those policies and deliver on those principles. The problem is that many companies have never set down what their principles are.

“One of the things we’ve found in research is that if you haven’t articulated your values as a company, AI will do it for you,” warns Purcell. “That’s why you need a chief AI ethics officer to codify your values and principles as a company.”

And if you need an incentive for the kind of cross-functional collaboration he admits most large enterprises are terrible at, Purcell predicts at least one organisation will suffer a major negative business outcome, such as considerably increased costs, within the next 12 months – probably from “an agentic system that has some degree of autonomy that goes off the rails”.



Heron Financial brings out AI ethics committee and training programme

Heron Financial has unveiled an artificial intelligence (AI) ethics committee and student cohort.

The mortgage and protection broker’s newly formed AI ethics committee will give employees the opportunity to offer input on how AI is used, to ensure innovation is balanced with transparency, fairness and accountability.

Heron Financial said the AI training will be a “combination of technical learning with ethical oversight” to ensure the “responsible integration” of the technology into Everglades, its digital platform.

As part of its AI student cohort, staff from different teams will be upskilled through an AI development programme.

Matt Coulson, the firm’s co-founder, said: “I’m proud to be launching our first AI learning and development cohort. We see AI not just as a way to enhance our operations, but as an opportunity to enhance our people.

“By investing in their skills, we’re giving them knowledge they can use in their work and daily lives. Human and technology centricity are part of our core values; it’s only natural we bring the two together.”



‘It’s about combining technology and human expertise to raise the bar’

Alongside this, Heron Financial’s Everglades platform will be embedded with AI-driven innovations comprising an AI reporting assistant, an AI adviser assistant and a prediction model, which it said uses basic customer details to predict the likelihood of them gaining a mortgage.

The platform aims to support prospective and existing homeowners by simplifying processes, as well as providing a referral route for clients and streamlining the sales-to-mortgage process to help sales teams and housebuilders.

Also in the pipeline is a new generation of AI ‘agents’ within Everglades, which Heron Financial said will further automate and optimise the whole adviser journey.

Coulson added: “We expect these developments to save hours in the adviser servicing journey, enabling our advisers to spend more time with clients and less time on admin. It’s about combining technology and human expertise to raise the bar for the whole industry.”

At the beginning of the year, Coulson discussed the firm’s green retrofit process in an interview with Mortgage Solutions.





Starting my AI and higher education seminar

Greetings from early September, when fall classes have begun.  Today I’d like to share information about one of my seminars as part of my long-running practice of being open about my teaching.

It’s in Georgetown University’s Learning, Design, and Technology (LDT) graduate program. I’m team-teaching it with LDT’s founding director, Professor Eddie Maloney, who first designed and led the class last year, along with some great guest presenters. The subject is the impact of AI on higher education.

Every week students dive into one topic, from pedagogy to criticism, changes in politics and economics to using LLMs to craft simulations.  During the semester students lead discussions and also teach the class about a particular AI technology.  Each student will also tackle Georgetown’s Applied Artificial Intelligence Microcredential.

Midjourney-created image of AI impacting a university.

Almost all of the readings are online. We have two books scheduled: B. Mairéad Pratschke, Generative AI and Education, and De Kai, Raising AI: An Essential Guide to Parenting Our Future.

Here’s the syllabus:

Introduction to AI in Higher Education:
Overview of AI, its history, and current applications in academia

Signing up for tech sessions (45 min max) (pair up) and discussion leading spots

Delving deeper into LLMs
Guest speakers: Molly Chehak and Ella Csarno.
Readings:

  1. AI Tools in Society & Cognitive Offloading
  2. MIT study, Your Brain on ChatGPT. (overview, since the study is 120 pages)
  3. Allie K. Miller, Practical Guide to Prompting ChatGPT 5 Differently

Macro Impacts: Economics, Culture, Politics

A broad look at AI’s societal effects—on labor, governance, and policy—with a focus on emerging regulatory frameworks and debates about automation and democracy.

Readings:

Optional: Daron Acemoglu, “The Simple Macroeconomics of AI”

Institutional Responses

This week, we will also examine how colleges and universities are responding structurally to the rise of AI, from policy and pedagogy to strategic planning and public communication.

Reading:

How Colleges and Universities Are Grappling with AI

We consider AI’s influence on teaching and learning through institutional case studies and applied practices, with guest insights on faculty development and student experience.

Guest speaker: Eddie Watson on the AAC&U Institute on AI, Pedagogy, and the Curriculum.

Reading: 

Pedagogy, Conversations, and Anthropomorphism

Through simulations and classroom case studies, we examine the pedagogical potential and ethical complications of human-AI interaction, academic integrity, and AI as a “conversational partner.”

Readings:

AI and Ethics/Humanities

This session explores ethical and philosophical questions around AI development and deployment, drawing on work in the humanities, global ethics, and human-centered design.

Guest speaker: De Kai

Readings: selections from Raising AI: An Essential Guide to Parenting Our Future (TBD)

Critiques of AI

We engage critical perspectives on AI, focusing on algorithmic bias, epistemology, and the political economy of data, while challenging dominant narratives of inevitability and neutrality.

Readings:

Human-AI Learning

This week considers how humans and AI collaborate for learning, and what this partnership means for workforce development, education, and a sense of lifelong fulfillment.

Guest Speaker: Dewey Murdick

Readings: TBD

 

Agentic Possibilities

A close look at emerging AI systems and agents, with attention to autonomy, instructional design, and how educational tools are integrating generative AI features.

Reading

  • Pratschke, Generative AI and Education, chapters 5-6

Future Possibilities

We explore visions for the future of AI in higher education, including utopian and dystopian framings, and ask how ethical leadership and equity might shape what comes next.

Readings:

One week with topic reserved for emerging issues

Topic, materials, and exercises to be determined by instructors and students.

Student final presentations

I’m very excited to be teaching it.

A dynamic dialogue with Southeast Asia to put the OECD AI Principles into action

A policy roundtable in Tokyo and a workshop in Bangkok deepened the dialogue between Southeast Asia and the OECD, fostering collaboration on AI governance across countries, sectors, and policy communities.

A dialogue strengthened by two dynamics

Southeast Asia is rapidly emerging as a vibrant hub for artificial intelligence. From Indonesia and Thailand to Singapore and Viet Nam, governments have launched national AI strategies, and ASEAN has published a Guide on AI Governance and Ethics to promote consistency of AI frameworks across jurisdictions.

At the same time, the OECD’s engagement with Southeast Asia is strengthening. The 2024 Ministerial Council Meeting highlighted the region as a priority for OECD global relations, coinciding with the tenth anniversary of the Southeast Asia Regional Programme (SEARP) and the start of the accession processes for Indonesia and Thailand.

Together, these dynamics open new avenues for practical cooperation on trustworthy, safe and secure AI.

In 2025, this momentum translated into two concrete engagement initiatives: a policy roundtable in Tokyo in May and a co-creation workshop for the OECD AI Policy Toolkit in Bangkok in August. Both events shaped regional dialogues on AI governance and helped to bridge the gap between technical expertise and policy design.

Japan actively supported both initiatives, demonstrating a strong commitment to regional AI governance. At the OECD SEARP Regional Forum in Bangkok, Japan expressed hope that AI would become a new pillar of OECD–Southeast Asia cooperation, highlighting the Tokyo Policy Roundtable on AI as the first of many such initiatives. Subsequently, Japan supported the co-creation workshop in Bangkok in August, helping to ensure a regional focus and high-level engagement across Southeast Asia.

The Tokyo roundtable enabled discussions on AI in agriculture, natural disaster management, national strategies and more

On 26 May 2025, the OECD and its Tokyo Office held a regional policy roundtable, bringing together over 80 experts and policymakers from Japan, Korea, Southeast Asia, and the ASEAN Secretariat, with many more joining online. The event highlighted the importance of linking technical expertise with policy to ensure AI delivers benefits responsibly, drawing on the OECD AI Principles and Policy Observatory. Speakers from ASEAN, Thailand, and Singapore shared progress on implementing their national AI strategies.

AI’s potential came into focus through two powerful examples. In agriculture, it can speed up crop breeding and enable precision farming, while in disaster management, digital twins built from satellite and telecoms data can strengthen early warnings and damage assessments. As climate change intensifies agricultural stress and natural hazards, these cases demonstrate how AI can deliver real societal benefits—while underscoring the need for robust governance and regional cooperation, supported by OECD initiatives such as the upcoming AI Policy Toolkit.

The OECD presented activities in international AI governance, including the AI Policy Observatory, the AI Incidents and Hazards Monitor and the Reporting Framework for the G7 Hiroshima Code of Conduct for Developers of Advanced AI systems.

Bangkok co-creation workshop: testing the OECD AI Policy Toolkit

Following the Tokyo roundtable, the OECD, supported by the Foreign Ministries of Japan and Thailand, hosted the first co-creation workshop for the OECD AI Policy Toolkit on 6 August 2025 in Bangkok. Twenty senior policymakers and AI experts from across the region contributed regional perspectives to shape the tool, which will feature a self-assessment module to identify priorities and gaps, alongside a repository of proven policies and practices. The initiative, led by Costa Rica as Chair of the 2025 OECD Ministerial Council Meeting, has already gained strong backing from governments and organisations, including Japan’s Ministry of Internal Affairs and Communications.

Hands-on discussion on key challenges and practical solutions

The co-creation workshop provided a space for participants to work in breakout groups and discuss concrete challenges, explore practical solutions and design effective AI policies in key domains.

Participants identified several pressing challenges for AI governance in Southeast Asia. Designing public funding programmes for AI research and development remains difficult in an environment where technology evolves faster than policy cycles, while the need for large-scale investment continues to grow.

The scarcity of high-quality, local-language data, weak governance frameworks, limited data-sharing mechanisms and reliance on foreign compute providers further constrain progress, alongside the shortage of locally developed AI models tailored to sectoral needs.

Participants also focused on labour market transformation, digital divides, and the need to advance AI literacy across all levels of society – from citizens to policymakers – to foster awareness of both opportunities and risks.

Participants showcased promising national initiatives, from responsible data-sharing frameworks and investment incentives for data centres and venture capital, to sectoral data platforms and local-language large language models. Countries are also rolling out capacity-building programmes to strengthen AI adoption and oversight, while seeking the right balance between regulation and innovation to foster trustworthy AI.

Across groups, participants agreed on the need to strengthen engagement, foster collaboration, and create enabling conditions for the practical deployment of AI, capacity building, and knowledge sharing.

The instruments discussed during the workshop will feed into the Toolkit’s policy repository, enabling other countries to draw on these experiences and adapt them to their national contexts.

Taking AI governance from global guidance to local practice

The Tokyo roundtable and Bangkok workshop were key milestones in building a regional dialogue on AI governance with Southeast Asian countries. By combining policy frameworks with technical demonstrations, the discussions focused on turning international guidance into practical, locally tailored measures. Southeast Asia is already shaping how global principles translate into action, and with continued collaboration, the OECD AI Policy Toolkit will provide governments in the region—and beyond—with concrete tools to design and implement trustworthy AI.

The authors would like to thank the team members who contributed to the success of these projects: Hugo Lavigne, Luis Aranda, Lucia Russo, Celine Caira, Kasumi Sugimoto, Julia Carro, Nikolas Schmidt and Takako Kitahara.
