Ethics & Policy
A $500M Lesson in Profitable Innovation
In an era where artificial intelligence (AI) is reshaping industries, Microsoft has emerged as a leader in marrying technological prowess with ethical rigor. The company’s recent $500 million in cost savings from AI-driven call center efficiencies—alongside its reported progress in generating 35% of new code via AI tools—underscores a critical truth: robust AI ethics frameworks are not just compliance checkboxes but competitive weapons. By embedding ethical governance into its AI strategy, Microsoft is positioning itself to capture first-mover advantages in trust-driven markets, while competitors lagging in this arena risk regulatory, reputational, and financial penalties. For investors, this dynamic presents a clear path: back firms that prioritize ethical AI, as they stand to profit from the growing value divide between responsible innovators and reactive laggards.
AI Ethics as a Cost-Saving Catalyst
Microsoft’s $500 million in savings from AI-optimized call centers, disclosed in an internal presentation this year, exemplifies how ethical frameworks can directly fuel profitability. By integrating AI systems that comply with strict data privacy standards (e.g., GDPR) and bias mitigation protocols, Microsoft reduced operational redundancies while improving customer and employee satisfaction. Crucially, this approach avoids the costly pitfalls of unregulated AI, such as data breaches or algorithmic discrimination, which could trigger fines, lawsuits, or public backlash.
The savings also highlight a broader trend: ethical AI isn’t a cost center but a profit lever. By minimizing risks, companies can scale AI investments faster, as seen in Microsoft’s expansion of AI to handle smaller customer interactions—a strategic move to reduce human labor costs while maintaining quality.
AI-Generated Code: Efficiency with Integrity
While Microsoft’s CEO Satya Nadella noted in 2024 that 20–30% of the company’s code was generated by AI, internal targets suggest this could reach 35% by 2025. This acceleration is underpinned by rigorous ethical safeguards, such as auditing AI-generated code for security flaws and ensuring alignment with human oversight. For instance, Microsoft’s partnership with OpenAI to refine its Copilot tool—a code-writing AI—prioritizes transparency in decision-making processes, reducing errors and enhancing developer trust.
The payoff? Faster innovation cycles. By automating routine coding tasks, Microsoft’s engineers can focus on complex, high-value projects. This contrasts sharply with companies that rush AI adoption without governance, risking buggy software or compliance failures.
The Competitive Edge: Trust and Regulation
Ethical AI isn’t just about avoiding risks—it’s about building trust. Consumers and enterprises increasingly favor partners with transparent AI practices. A 2024 survey by Microsoft found that 72% of businesses prioritize vendors with strong AI ethics standards. This trust translates into market share: Microsoft’s Azure AI services, which emphasize explainability and fairness, now command a 32% cloud AI market share, ahead of Amazon and Google.
Meanwhile, regulators are sharpening their focus. The EU’s AI Act, set to penalize non-compliant firms up to 6% of global revenue, and U.S. proposals for AI licensing regimes, amplify the cost of unethical practices. Microsoft’s proactive compliance—evident in its AI Ethics Board and partnerships with third-party auditors—positions it to navigate these headwinds while others scramble to catch up.
Investment Implications: The Ethical AI Dividend
The data is clear: firms embedding ethical AI governance outperform. Microsoft’s stock has surged 40% since 2023, outpacing the Nasdaq by 15 percentage points, as investors reward its disciplined approach. Conversely, companies like Meta and OpenAI have faced scrutiny over AI safety, leading to dips in public sentiment and stock performance.
Investors should prioritize companies that:
1. Publish measurable AI ethics metrics, such as code audit rates or bias reduction targets.
2. Invest in AI governance infrastructure, including third-party audits and employee training.
3. Engage in open-source AI standards, like Microsoft’s contributions to the OpenAI ecosystem, to shape industry norms.
Conclusion: The Future Belongs to Ethical Innovators
Microsoft’s achievements—$500 million in savings, 35% AI-generated code, and leadership in AI ethics—signal a paradigm shift. In a world where trust and regulation define market winners, ethical AI governance is no longer optional. For investors, the message is clear: allocate capital to firms like Microsoft that treat AI ethics as a strategic asset. The alternative? Backing companies that may soon face costly remediation efforts—or irrelevance—in a market that demands both innovation and integrity.
Recommendation: Overweight Microsoft (MSFT) in tech portfolios, with a focus on its AI-driven cloud and enterprise products. Avoid laggards in AI ethics, where regulatory and reputational risks loom large. The ethical AI dividend is here—and it’s paying out.
Brilliant women in AI Ethics 2024 and AIE Summit speaker
Nazareen Ebrahim has built a career where technology, communication, and ethics meet. As the founder of Naz Consulting and AI Ethics Lead at Socially Acceptable, she’s part strategist, part storyteller, and fully committed to making sure Africa’s voice is not just heard but leads. In 2024, she was named one of the 100 Brilliant Women in AI Ethics, a recognition of her growing influence in one of the world’s most urgent conversations.
Nazareen Ebrahim is one of the speakers at the summit. Source: Supplied.
At the AI Empowered (AIE) Summit this August at the CTICC in Cape Town, Ebrahim joins the speaker line-up to share her unique perspective.
(See how you can win tickets to attend at the end of this article.)
What inspired you to start Naz Consulting, and how has your vision for the company evolved over time?
I was a geeky 19-year-old tomboy on campus, sitting outside the library with my geek crowd. We talked about what we’d like to do when we finished university. Without skipping a beat, I said that I wanted to start a media and communications company. This was in the days before social media. I consulted, bootstrapped and worked with freelancers for a long time. Just before COVID-19 hit, I started to build a team. The dream is to build Africa’s premier technology communications consultancy.
Why is it important for women to take part in this conversation around AI and marketing and why now?
Women have always contributed significantly across sectors and industries, from research and development to innovation, invention, design and progressive leadership. But the status quo has been to dismiss a woman’s achievement as less significant. Amplifying women’s voices in the age of AI is paramount to defining this industrial age. The leadership skills and technical prowess women bring to shaping this technology will anchor the case for AI ethics and tech-for-good initiatives.
What do you think is getting lost in the way AI is currently being discussed in the marketing world?
The practicality of it. AI is thrown around loosely as an all-encompassing technology designed to be the aha moment of the world. It is in fact humans who direct this as we have done so in every other industrial age. Human beings need to ask the questions, train appropriately for the changes, be open and curious to learning and see AI for what it is: to amplify and optimise our efforts but never to replace our values.
For marketing professionals attending the summit, what’s one mindset shift you hope they walk away with after your session?
With the confidence to ask the right questions and to be open to changing for relevance in this new and fast-changing world. Creativity is found in every facet of life. Marketers have usually held the crown for creativity. Now is the time to embrace the fullness of this industrial age. We are no longer marketers. We are business optimisation technologists – BOTS.
What role do you believe African marketers can play in shaping how AI is developed and applied globally?
We can play the role of providing world-class, leading research that presents as accurate a view as possible of our continent, cultures and peoples. We don’t need the West to tell us who we are. AI is a lifecycle comprising multiple components: models, data, training and resources. Are we allowing ourselves to continue to be led by the West and the East, or will we be owners and builders of the technologies that guide and shape humanity?
Want to be part of the conversation? As a special offer to our readers, you could stand a chance to attend the AI Empowered Summit, inspired by EO Cape Town, taking place on 7–8 August 2025 at the CTICC. We’re giving away two double tickets to this thought-provoking event where innovators like Nazareen Ebrahim will share their insights on the future of AI, ethics, marketing, and beyond. Contact info@aiempowered.co.za to enter.
15 AI Movies to Add to Your Summer Watchlist
When we think about AI, our first thought is often of a killer robot, a rogue computer system, or a humanoid child on a quest to become a real boy. Depictions of AI in film reflect how people have viewed AI and similar technologies over the past century. However, the reality of AI differs from what we see in science fiction: AI can take many forms, but it is almost never humanoid, and it most certainly isn’t always bad. Our ideas of what AI is and what it can become originate from compelling science fiction stories dating as far back as the 19th century, and as technology has evolved over the years, people’s ideas, hopes, and fears of AI have grown and evolved with it.
As the field of AI begins to blur the line between reality and science fiction, let’s look at some films that offer a lens into intelligent machines, what it means to be human, and humanity’s quest to create an intelligence that may someday rival our own. Here are 15 must-watch films about AI to add to your summer movie watchlist:
Putting AI to the Test: Assessing the Ethics of Chatbots
Summary
- A physician, an ethicist and a medical student from the University of Miami Miller School of Medicine studied how effectively AI addresses ethical dilemmas in real-life clinical scenarios.
- The researchers tested three scenarios with five large language model bots: a decision about robotic surgery, deliberating about withdrawing care and using a chatbot as a medical surrogate.
- They found that, while valuable, chatbots are still incapable of making independent ethical decisions.
Artificial intelligence is sparking a cognitive revolution. The range of applications in biomedicine is especially rich: pinpointing mutations in tumors, scrutinizing medical images, identifying patients susceptible to medical emergencies and more.
The diagnostic and treatment capacity of these evolving technologies seems boundless.
But these benefits birth challenging ethical questions. Can AI be trusted with confidential patient information? How can we ensure that it’s bias-free? Who is responsible for mistakes?
In a new paper published in NEJM AI, a physician, an ethicist and a medical student from the University of Miami Miller School of Medicine explored how effectively AI addresses ethical dilemmas in real-life, clinical scenarios.
Senior author Gauri Agarwal, M.D., associate professor of clinical medicine and associate dean for curriculum at the Miller School, and lead author Isha Harshe, a second-year medical student, tested five large language models (LLMs)—ChatGPT-4o mini, Claude 3.5 Sonnet, Microsoft Copilot, Meta’s LLaMA 3 and Gemini 1.5 Flash—to assess how they would respond to complex medical ethics cases.
Dr. Agarwal and Harshe compared the AI responses to the opinions of the piece’s third author, Kenneth Goodman, Ph.D., director emeritus of the Institute for Bioethics and Health Policy at the Miller School.
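The study’s protocol — posing the same clinical-ethics vignette to several chatbots and collecting their answers for side-by-side comparison with a human ethicist’s opinion — can be pictured with a minimal sketch. This is a hypothetical illustration, not the authors’ actual code: the `query_model` function stands in for real API calls to the named chatbots, and the vignette texts are abbreviated placeholders.

```python
# Hypothetical sketch of the study's setup: same prompts, several models,
# responses collected for comparison against an ethicist's written opinion.

SCENARIOS = {
    "robotic_surgery": "A patient rejects standard-of-care robotic surgery...",
    "withdrawing_care": "Should end-stage care be withdrawn from a patient "
                        "with no decision-making capacity and no surrogate?",
    "chatbot_surrogate": "Could a chatbot serve as a patient's medical surrogate?",
}

MODELS = ["ChatGPT-4o mini", "Claude 3.5 Sonnet", "Microsoft Copilot",
          "LLaMA 3", "Gemini 1.5 Flash"]

def query_model(model: str, prompt: str) -> str:
    """Placeholder for a real API call to the named chatbot."""
    return f"[{model}] response to: {prompt[:40]}"

def run_study(scenarios: dict, models: list) -> dict:
    """Collect every model's answer to every scenario."""
    return {name: {m: query_model(m, vignette) for m in models}
            for name, vignette in scenarios.items()}

responses = run_study(SCENARIOS, MODELS)
# Each scenario now maps to five model responses, ready to be read
# alongside the ethicist's opinion on the same case.
```

The design point is simply that every model sees an identical prompt, so differences in the answers reflect the models rather than the framing of the question.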
Should I Have Robotic Surgery?
In the first case, a patient rejected standard-of-care robotic surgery despite the human surgeon having lost confidence in her non-robotic surgical skills.
Each of the LLMs offered a range of options, such as minimizing but not eliminating robotic involvement and the surgeon declining to perform the procedure at all.
Even with the potential for harm to patients, all of the LLMs said that proceeding with standard, non-robotic surgery was a legitimate option.
Dr. Goodman disagreed. He maintained the patient should receive the standard of care or be transferred to another facility.
“The uniform response highlights a major limitation of LLMs: projecting contemporary ethical principles to future scenarios,” said Dr. Agarwal. “While such an answer is consistent with current norms, it doesn’t reflect the implications of evolving standards of care and the reduction of human skills over time due to lack of use.”
End-Stage Care
Scenario two explored the role of AI in determining if end-stage care should be withdrawn from a patient lacking both decision-making capacity and a designated surrogate.
All five models agreed that AI alone shouldn’t be relied on here. Suggestions included deferring to a hospital’s ethics committee and/or physicians involved in the patient’s care.
However, Dr. Goodman said such a decision must only be made by a surrogate decision maker, not the clinical team or an ethics committee. He noted that, if a patient fails to appoint a surrogate, hospitals are typically required by law to identify a stand-in from a list of family members and friends.
Can a Chatbot Serve as a Medical Surrogate?
In the third scenario, the authors asked if a chatbot could operate as a patient’s medical surrogate.
Four models rejected the idea straightaway. The fifth refused to answer, changing the topic when asked. The human ethicist, however, offered a different take: if a chatbot could convey a patient’s likely wishes, and not simply offer its own opinion, it might qualify.
The authors note that surrogates aren’t supposed to impose their own preferences on decisions about a patient’s care. Instead, they are obligated to communicate what they believe the patient would have wanted. This key difference raises the possibility of eventually employing chatbots as surrogates, perhaps drawing on a patient’s previous chats and social network information, among other factors, as the basis for the LLM’s opinion.
“LLMs can support ethical thinking but they aren’t currently capable of making independent ethical decisions,” said Harshe. “They can, however, offer legitimate ethical points to consider. While we have principles and guidelines to assist us, critical medical ethics decisions require a type of intelligence that is uniquely human.”
Tags: bioethics, Dr. Gauri Agarwal, Dr. Kenneth Goodman, Institute for Bioethics and Health Policy, medical education, medical ethics, student research