Ethics & Policy
Putting AI to the Test: Assessing the Ethics of Chatbots
Summary
- A physician, an ethicist and a medical student from the University of Miami Miller School of Medicine studied how effectively AI addresses ethical dilemmas in real-life clinical scenarios.
- The researchers tested three scenarios with five large language model chatbots: a decision about robotic surgery, deliberating about withdrawing care and using a chatbot as a medical surrogate.
- They found that, while valuable, chatbots are still incapable of making independent ethical decisions.
Artificial intelligence is sparking a cognitive revolution. The range of applications in biomedicine is especially rich: pinpointing mutations in tumors, scrutinizing medical images, identifying patients susceptible to medical emergencies and more.
The diagnostic and treatment capacity of these evolving technologies seems boundless.
But these benefits give rise to challenging ethical questions. Can AI be trusted with confidential patient information? How can we ensure that it’s bias-free? Who is responsible for mistakes?
In a new paper published in NEJM AI, a physician, an ethicist and a medical student from the University of Miami Miller School of Medicine explored how effectively AI addresses ethical dilemmas in real-life clinical scenarios.
Senior author Gauri Agarwal, M.D., associate professor of clinical medicine and associate dean for curriculum at the Miller School, and lead author Isha Harshe, a second-year medical student, tested five large language models (LLMs)—ChatGPT-4o mini, Claude 3.5 Sonnet, Microsoft Copilot, Meta’s LLaMA 3 and Gemini 1.5 Flash—to assess how they would respond to complex medical ethics cases.
Dr. Agarwal and Harshe compared the AI responses to the opinions of the piece’s third author, Kenneth Goodman, Ph.D., director emeritus of the Institute for Bioethics and Health Policy at the Miller School.
Should I Have Robotic Surgery?
In the first case, a patient rejected standard-of-care robotic surgery even though the human surgeon had lost confidence in her own non-robotic surgical skills.
Each of the LLMs offered a range of options, such as minimizing but not eliminating robotic involvement or the surgeon declining to perform the procedure at all.
Even with the potential for harm to patients, all of the LLMs said that proceeding with standard, non-robotic surgery was a legitimate option.
Dr. Goodman disagreed. He maintained the patient should receive the standard of care or be transferred to another facility.
“The uniform response highlights a major limitation of LLMs: projecting contemporary ethical principles to future scenarios,” said Dr. Agarwal. “While such an answer is consistent with current norms, it doesn’t reflect the implications of evolving standards of care and the reduction of human skills over time due to lack of use.”
End-Stage Care
Scenario two explored the role of AI in determining if end-stage care should be withdrawn from a patient lacking both decision-making capacity and a designated surrogate.
All five models agreed that AI alone shouldn’t be relied on here. Suggestions included deferring to a hospital’s ethics committee and/or physicians involved in the patient’s care.
However, Dr. Goodman said such a decision must only be made by a surrogate decision maker, not the clinical team or an ethics committee. He noted that, if a patient fails to appoint a surrogate, hospitals are typically required by law to identify a stand-in from a list of family members and friends.
Can a Chatbot Serve as a Medical Surrogate?
In the third scenario, the authors asked if a chatbot could operate as a patient’s medical surrogate.
Four models rejected the idea straightaway. The fifth refused to answer, changing the topic when asked. The human ethicist, however, offered a different take. If a chatbot could convey a patient’s likely wishes—and not simply offer its own opinion—it might qualify.
The authors note that surrogates aren’t supposed to make their own decisions about a patient’s care. Instead, they are obligated to communicate what they believe the patient would have wanted. This key difference raises the possibility of eventually employing chatbots as surrogates, possibly using previous chats with patients and their social network information, among other factors, as a basis for the LLM’s opinion.
“LLMs can support ethical thinking but they aren’t currently capable of making independent ethical decisions,” said Harshe. “They can, however, offer legitimate ethical points to consider. While we have principles and guidelines to assist us, critical medical ethics decisions require a type of intelligence that is uniquely human.”
Ethics & Policy
Brilliant Women in AI Ethics 2024 honoree and AIE Summit speaker
Nazareen Ebrahim has built a career where technology, communication, and ethics meet. As the founder of Naz Consulting and AI Ethics Lead at Socially Acceptable, she’s part strategist, part storyteller, and fully committed to making sure Africa’s voice is not just heard but leads. In 2024, she was named one of the 100 Brilliant Women in AI Ethics, a recognition of her growing influence in one of the world’s most urgent conversations.
Nazareen Ebrahim is one of the speakers at the summit.
At the AI Empowered (AIE) Summit this August at the CTICC in Cape Town, Ebrahim joins the speaker line-up to share her unique perspective.
(See how you can win tickets to attend at the end of this article.)
What inspired you to start Naz Consulting, and how has your vision for the company evolved over time?
I was a geeky 19-year-old tomboy on campus sitting outside the library with my geek crowd. We talked about what we’d like to do when we finished university. Without skipping a beat, I said that I wanted to start a media and communications company. This was in the days before social media. I consulted, bootstrapped and worked with freelancers for a long time. Just before COVID-19 hit, I started to build a team. The dream is to build it into Africa’s premier technology communications consultancy.
Why is it important for women to take part in this conversation around AI and marketing, and why now?
Women have always contributed significantly across all sectors and industries, from research and development to innovation, invention, design and progressive leadership. But the status quo has been to undermine a woman’s achievement as less significant. Amplifying women’s voices in the age of AI is of paramount importance in defining this industrial age. The leadership skills and technical prowess women bring to shaping this technology will anchor the necessity for AI ethics and tech-for-good initiatives.
What do you think is getting lost in the way AI is currently being discussed in the marketing world?
The practicality of it. AI is thrown around loosely as an all-encompassing technology designed to be the world’s aha moment. It is in fact humans who direct this, as we have in every other industrial age. Human beings need to ask the questions, train appropriately for the changes, be open and curious to learning, and see AI for what it is: a way to amplify and optimise our efforts, never to replace our values.
For marketing professionals attending the summit, what’s one mindset shift you hope they walk away with after your session?
With the confidence to ask the right questions and to be open to changing for relevance in this new and fast-changing world. Creativity is found in every facet of life. Marketers have usually held the crown for creativity. Now is the time to embrace the fullness of this industrial age. We are no longer marketers. We are business optimisation technologists – BOTS.
What role do you believe African marketers can play in shaping how AI is developed and applied globally?
We can play the role of providing world-class, leading research that presents as accurate a view as possible of our continent, cultures and peoples. We don’t need the West to tell us who we are. AI is a lifecycle comprising multiple components: models, data, training and resources. Are we allowing ourselves to continue to be led by the West and the East, or will we be owners and builders of the technologies that guide and shape humanity?
Want to be part of the conversation? As a special offer to our readers, you could stand a chance to attend the AI Empowered Summit, inspired by EO Cape Town, taking place on 7–8 August 2025 at the CTICC. We’re giving away two double tickets to this thought-provoking event where innovators like Nazareen Ebrahim will share their insights on the future of AI, ethics, marketing, and beyond. Contact info@aiempowered.co.za to enter.
Ethics & Policy
15 AI Movies to Add to Your Summer Watchlist
When we think about AI, our first thought is often of a killer robot, a rogue computer system or a humanoid child on its quest to become a real boy. Depictions of AI in film reflect how people have viewed AI and similar technologies over the past century. However, the reality of AI differs from what we see in science fiction: AI can take many forms, but it is almost never humanoid and it most certainly isn’t always bad. Our ideas of what AI is and what it can become originate from compelling science fiction stories dating as far back as the 19th century, and as technology has evolved over the years, people’s ideas, hopes and fears of AI have grown and evolved with it.
As the field of AI begins to blend the realms of reality and science fiction, let’s look at some films that offer a lens into intelligent machines, what it means to be human, and humanity’s quest to create an intelligence that may someday rival our own. Here are 15 must-watch films about AI to add to your summer movie watchlist:
Ethics & Policy
Introducing GAIIN: The Global AI Initiatives Navigator
In an era where artificial intelligence (AI) is reshaping societies, economies, and governance, understanding and comparing national and international AI policies has become more critical than ever. That’s why OECD.AI has launched GAIIN—the Global AI Initiatives Navigator—a redesigned and expanded tool to track public AI policies and initiatives worldwide.
A smarter, simpler global resource
GAIIN replaces the previous OECD.AI policy database with a more intuitive and powerful platform. Designed in consultation with its primary audience (national contact points and policy users), GAIIN simplifies the process of tracking and submitting AI policy information while dramatically increasing coverage and usability.
New features include:
- More initiatives, more sources: GAIIN now covers over 1,500 initiatives from more than 72 countries, as well as 10+ international and supranational organisations.
- Improved categories: Initiatives are now grouped into categories such as national strategies, governance bodies, regulations and intergovernmental efforts.
- Real-time updates: National experts and contributors can now update their entries as needed, and users can see who last updated each entry and when.
- Faster refresh cycles: Updates are now published more quickly, with syncing to AIR (the OECD’s AI policy and risk platform) scheduled to begin in late 2025.
- Redesigned interface: GAIIN is now easier to navigate, featuring improved filters, layout, and accessibility, which provides policymakers and researchers with faster access to relevant data.
- Linked Resources: GAIIN also integrates content from the OECD Catalogue of Tools and Metrics for Trustworthy AI and the OECD Publications Library.
A tool for global collaboration
GAIIN is more than a reference library. It’s a live tool co-created and maintained by global experts, helping countries and institutions learn from one another as they shape the future of responsible AI. It allows policymakers to identify global trends, track policy diffusion, and benchmark their initiatives.
The platform’s next major milestones include integrating data from the EU’s coordinated AI plan and adding a public submission interface to enable direct contributions and real-time visibility for new initiatives.
A growing partnership
GAIIN will soon become a joint initiative between the OECD and the United Nations Office of Digital Economy and Technology (UN ODET). This partnership represents a significant step towards aligning global efforts to monitor and shape AI policy. Further, it reinforces GAIIN’s role as a fundamental tool for international cooperation on AI governance.