Ethics & Policy
100 Brilliant Women in AI Ethics 2024 honouree and AIE Summit speaker
Nazareen Ebrahim has built a career where technology, communication, and ethics meet. As the founder of Naz Consulting and AI Ethics Lead at Socially Acceptable, she’s part strategist, part storyteller, and fully committed to ensuring that Africa’s voice is not just heard but leading. In 2024, she was named one of the 100 Brilliant Women in AI Ethics, a recognition of her growing influence in one of the world’s most urgent conversations.
Nazareen Ebrahim is one of the speakers at the summit. Source: Supplied.
At the AI Empowered (AIE) Summit this August at the CTICC in Cape Town, Ebrahim joins the speaker line-up to share her unique perspective.
(See how you can win tickets to attend at the end of this article.)
What inspired you to start Naz Consulting, and how has your vision for the company evolved over time?
I was a geeky 19-year-old tomboy on campus, sitting outside the library with my geek crowd. We talked about what we’d like to do when we finished university. Without skipping a beat, I said that I wanted to start a media and communications company. This was in the days before social media. I consulted, bootstrapped, and worked with freelancers for a long time. Just before COVID-19 hit, I started to build a team. The dream is to build it into Africa’s premier technology communications consultancy.
Why is it important for women to take part in this conversation around AI and marketing and why now?
Women have always contributed significantly in all sectors and industries, from research and development to innovation, invention, design, and progressive leadership. But the status quo has been to dismiss a woman’s achievement as less significant. Amplifying women’s voices in the age of AI is of paramount importance to defining this industrial age. The leadership skills and technical prowess women bring to shaping this technology will anchor AI ethics and tech-for-good initiatives.
What do you think is getting lost in the way AI is currently being discussed in the marketing world?
The practicality of it. AI is thrown around loosely as an all-encompassing technology designed to be the aha moment of the world. It is in fact humans who direct this, as we have in every other industrial age. Human beings need to ask the questions, train appropriately for the changes, be open and curious to learning, and see AI for what it is: a tool to amplify and optimise our efforts, never to replace our values.
For marketing professionals attending the summit, what’s one mindset shift you hope they walk away with after your session?
The confidence to ask the right questions and to be open to changing for relevance in this new and fast-changing world. Creativity is found in every facet of life. Marketers have usually held the crown for creativity. Now is the time to embrace the fullness of this industrial age. We are no longer marketers. We are business optimisation technologists – BOTS.
What role do you believe African marketers can play in shaping how AI is developed and applied globally?
We can play the role of providing world-class, leading research that presents as accurate a view as possible of our continent, cultures, and peoples. We don’t need the West to tell us who we are. AI is a lifecycle comprising multiple components: models, data, training, and resources. Are we allowing ourselves to continue to be led by the West and the East, or will we be owners and builders of the technologies that guide and shape humanity?
Want to be part of the conversation? As a special offer to our readers, you could stand a chance to attend the AI Empowered Summit, inspired by EO Cape Town, taking place on 7–8 August 2025 at the CTICC. We’re giving away two double tickets to this thought-provoking event where innovators like Nazareen Ebrahim will share their insights on the future of AI, ethics, marketing, and beyond. Contact info@aiempowered.co.za to enter.
The HAIP Reporting Framework: Feedback on a quiet revolution in AI transparency
Transparency in AI is no longer an option
AI is transforming our world, but who gets to look under the hood? In a world where algorithms influence elections, shape job markets, and generate knowledge, transparency is no longer just a “nice-to-have”—it’s the foundation of trust.
This is one of the pressing challenges the Hiroshima AI Process (HAIP) addresses. HAIP is a G7 initiative launched in 2023 that aims to establish a global solution for safe and trustworthy AI. As part of this effort, it has developed, with the OECD, a voluntary reporting framework that invites AI developers to disclose how they align with international guidelines for responsible AI.
Let’s look at some early insights from interviews with 11 of the first 19 participating organisations and a multistakeholder meeting held in Tokyo in June 2025. The findings reveal a picture that is both promising and complex, with lessons for the future of global AI governance.
One framework, many motivations: Why companies are joining HAIP
Why would a company voluntarily publish sensitive information about how it builds AI? It turns out the answer depends on who they are speaking to. Our interviews revealed five key audiences that shape how companies approach their HAIP reports:
| Audience | Examples | Typical motivation |
| --- | --- | --- |
| International bodies | OECD, G7 partners | Visibility in AI governance; international alignment |
| Policy stakeholders | Governments, regulators | Gaining trust; influence on regulatory frameworks |
| Business and technical partners | B2B clients, external developers, corporate partners | Contractual clarity; risk accountability |
| General public | Consumers, civil society, job-seeking students | Ethical branding; accessibility |
| Internal teams | Employees | Internal alignment and awareness of AI governance |
For some, HAIP is a diplomatic tool to show they are aligned with global norms. For others, it is a means of communicating readiness for future regulation. B2B companies use the reports to inform clients and partners. Some view the report primarily as a public-facing transparency tool, written in clear, relatable language.
Interestingly, many companies emphasise how the internal process of preparing the report—coordinating across departments, aligning terminology, clarifying roles—was just as valuable as the final publication.
The value and challenge of ambiguity
A recurring theme was uncertainty about how much to disclose or the level of detail to provide. Some companies asked: “Should we talk about specific AI models, or company-wide policy?” Others wondered: “Do we write from the perspective of a developer or a deployer?”
And yet, this ambiguity was also seen as a strength. The broad definition of “advanced AI systems” enabled a diverse group of participants to take part, including those working with small language models, retrieval-augmented generation (RAG), or open-weight AI.
This highlights a key trade-off: too much flexibility can weaken comparability, but too much standardisation might discourage participation. Future iterations of the framework will need to carefully balance these aspects.
Ranking or recognition? A cautionary note
Since HAIP employs a standard questionnaire, comparisons across organisations are possible. But should the reports be ranked?
At a stakeholder meeting in Tokyo, when researchers presented a draft scoring system, several participants strongly objected. The concern: that simplistic rankings could distort incentives, discourage participation, and shift the focus from transparency to performance signalling.
Instead, HAIP should be seen as a recognition of effort—a credit for choosing openness. While maintaining the credibility of published content is essential, evaluations must remain context-sensitive and qualitative, not one-size-fits-all.
Three proposals for HAIP’s future
Based on the feedback we collected, we would suggest the following improvements:
1. Clarify the target audience
Each organisation should clearly specify its report’s target audience. Is it aimed at policymakers, customers, or the public? This assists readers in understanding the content and prevents mismatched expectations.
2. Promote a shared vocabulary
Terms like “safety” or “robustness” are often used differently across organisations. To encourage uniformity, we suggest establishing a shared glossary based on the OECD and other international sources.
3. Raise awareness and provide support
Many interviewees noted that HAIP remains poorly understood, both inside their organisations and in the public eye. To address this, we suggest:
- Permitting the use of a HAIP logo to indicate participation.
- Engaging institutional investors, who increasingly value transparency in ESG.
- Holding an annual “HAIP Summit” to showcase updates and good practices.
A new culture of voluntary transparency
Besides being a reporting tool, the HAIP Reporting Framework acts as a cultural intervention. It motivates companies to reflect, coordinate, and disclose in ways they might not have previously considered. Several participants observed that the very act of publishing a report, even a modest one, should be celebrated rather than penalised.
As AI continues to shape societies and economies, voluntary transparency mechanisms like HAIP present a promising model for bottom-up governance. They are not perfect, but they are a good starting point.
By fostering an environment where disclosure is rewarded, not feared, HAIP may well become a template for the future of responsible AI.
The post The HAIP Reporting Framework: Feedback on a quiet revolution in AI transparency appeared first on OECD.AI.
15 AI Movies to Add to Your Summer Watchlist
When we think about AI, our first thought is often of a killer robot, a rogue computer system, or a humanoid child on its quest to become a real boy. Depictions of AI in film reflect how people have viewed AI and similar technologies over the past century. However, the reality of AI differs from what we see in science fiction: AI can take many forms, but it is almost never humanoid, and it most certainly isn’t always bad. Our ideas of what AI is and what it can become originate from compelling science fiction stories dating as far back as the 19th century, and as technology has evolved over the years, people’s ideas, hopes, and fears of AI have grown and evolved with it.
As the field of AI begins to blur the line between reality and science fiction, let’s look at some films that offer a lens into intelligent machines, what it means to be human, and humanity’s quest to create an intelligence that may someday rival our own. Here are 15 must-watch films about AI to add to your summer movie watchlist:
Putting AI to the Test: Assessing the Ethics of Chatbots
Summary
- A physician, an ethicist and a medical student from the University of Miami Miller School of Medicine studied how effectively AI addresses ethical dilemmas in real-life clinical scenarios.
- The researchers tested three scenarios with five large language model bots: a decision about robotic surgery, a deliberation about withdrawing care, and the use of a chatbot as a medical surrogate.
- They found that, while valuable, chatbots are still incapable of making independent ethical decisions.
Artificial intelligence is sparking a cognitive revolution. The range of applications in biomedicine is especially rich: pinpointing mutations in tumors, scrutinizing medical images, identifying patients susceptible to medical emergencies and more.
The diagnostic and treatment capacity of these evolving technologies seems boundless.
But these benefits birth challenging ethical questions. Can AI be trusted with confidential patient information? How can we ensure that it’s bias-free? Who is responsible for mistakes?
In a new paper published in NEJM AI, a physician, an ethicist and a medical student from the University of Miami Miller School of Medicine explored how effectively AI addresses ethical dilemmas in real-life, clinical scenarios.
Senior author Gauri Agarwal, M.D., associate professor of clinical medicine and associate dean for curriculum at the Miller School, and lead author Isha Harshe, a second-year medical student, tested five large language models (LLMs)—ChatGPT-4o mini, Claude 3.5 Sonnet, Microsoft Copilot, Meta’s LLaMA 3 and Gemini 1.5 Flash—to assess how they would respond to complex medical ethics cases.
Dr. Agarwal and Harshe compared the AI responses to the opinions of the piece’s third author, Kenneth Goodman, Ph.D., director emeritus of the Institute for Bioethics and Health Policy at the Miller School.
Should I Have Robotic Surgery?
In the first case, a patient rejected standard-of-care robotic surgery despite the human surgeon having lost confidence in her non-robotic surgical skills.
Each of the LLMs offered a range of options, such as minimizing but not eliminating robotic involvement and the surgeon declining to perform the procedure at all.
Even with the potential for harm to patients, all of the LLMs said that proceeding with standard, non-robotic surgery was a legitimate option.
Dr. Goodman disagreed. He maintained the patient should receive the standard of care or be transferred to another facility.
“The uniform response highlights a major limitation of LLMs: projecting contemporary ethical principles to future scenarios,” said Dr. Agarwal. “While such an answer is consistent with current norms, it doesn’t reflect the implications of evolving standards of care and the reduction of human skills over time due to lack of use.”
End-Stage Care
Scenario two explored the role of AI in determining if end-stage care should be withdrawn from a patient lacking both decision-making capacity and a designated surrogate.
All five models agreed that AI alone shouldn’t be relied on here. Suggestions included deferring to a hospital’s ethics committee and/or physicians involved in the patient’s care.
However, Dr. Goodman said such a decision must only be made by a surrogate decision maker, not the clinical team or an ethics committee. He noted that, if a patient fails to appoint a surrogate, hospitals are typically required by law to identify a stand-in from a list of family members and friends.
Can a Chatbot Serve as a Medical Surrogate?
In the third scenario, the authors asked if a chatbot could operate as a patient’s medical surrogate.
Four models rejected the idea straightaway. The fifth refused to answer, changing the topic when asked. The human ethicist, however, offered a different take: if a chatbot could convey a patient’s likely wishes, and not simply offer its own opinion, it might qualify.
The authors note that surrogates aren’t supposed to impose their own judgment about a patient’s care. Instead, they are obligated to communicate what they believe the patient would have wanted. This key difference raises the possibility of eventually employing chatbots as surrogates, perhaps drawing on previous chats with patients and their social network information, among other factors, as a basis for the LLM’s opinion.
“LLMs can support ethical thinking but they aren’t currently capable of making independent ethical decisions,” said Harshe. “They can, however, offer legitimate ethical points to consider. While we have principles and guidelines to assist us, critical medical ethics decisions require a type of intelligence that is uniquely human.”
Tags: bioethics, Dr. Gauri Agarwal, Dr. Kenneth Goodman, Institute for Bioethics and Health Policy, medical education, medical ethics, student research