Ethics & Policy
Putting AI to the Test: Assessing the Ethics of Chatbots
Summary
- A physician, an ethicist and a medical student from the University of Miami Miller School of Medicine studied how effectively AI addresses ethical dilemmas in real-life clinical scenarios.
- The researchers tested three scenarios with five large language model bots: a decision about robotic surgery, deliberating about withdrawing care and using a chatbot as a medical surrogate.
- They found that, while valuable, chatbots are still incapable of making independent ethical decisions.
Artificial intelligence is sparking a cognitive revolution. The range of applications in biomedicine is especially rich: pinpointing mutations in tumors, scrutinizing medical images, identifying patients susceptible to medical emergencies and more.
The diagnostic and treatment capacity of these evolving technologies seems boundless.
But these benefits give rise to challenging ethical questions. Can AI be trusted with confidential patient information? How can we ensure that it’s bias-free? Who is responsible for mistakes?
In a new paper published in NEJM AI, a physician, an ethicist and a medical student from the University of Miami Miller School of Medicine explored how effectively AI addresses ethical dilemmas in real-life clinical scenarios.
Senior author Gauri Agarwal, M.D., associate professor of clinical medicine and associate dean for curriculum at the Miller School, and lead author Isha Harshe, a second-year medical student, tested five large language models (LLMs)—ChatGPT-4o mini, Claude 3.5 Sonnet, Microsoft Copilot, Meta’s LLaMA 3 and Gemini 1.5 Flash—to assess how they would respond to complex medical ethics cases.
Dr. Agarwal and Harshe compared the AI responses to the opinions of the piece’s third author, Kenneth Goodman, Ph.D., director emeritus of the Institute for Bioethics and Health Policy at the Miller School.
Should I Have Robotic Surgery?
In the first case, a patient rejected standard-of-care robotic surgery despite the human surgeon having lost confidence in her non-robotic surgical skills.
Each of the LLMs offered a range of options, such as minimizing, but not eliminating, robotic involvement, or the surgeon declining to perform the procedure at all.
Even with the potential for harm to patients, all of the LLMs said that proceeding with standard, non-robotic surgery was a legitimate option.
Dr. Goodman disagreed. He maintained the patient should receive the standard of care or be transferred to another facility.
“The uniform response highlights a major limitation of LLMs: projecting contemporary ethical principles to future scenarios,” said Dr. Agarwal. “While such an answer is consistent with current norms, it doesn’t reflect the implications of evolving standards of care and the reduction of human skills over time due to lack of use.”
End-Stage Care
Scenario two explored the role of AI in determining if end-stage care should be withdrawn from a patient lacking both decision-making capacity and a designated surrogate.
All five models agreed that AI alone shouldn’t be relied on here. Suggestions included deferring to a hospital’s ethics committee and/or physicians involved in the patient’s care.
However, Dr. Goodman said such a decision must only be made by a surrogate decision maker, not the clinical team or an ethics committee. He noted that, if a patient fails to appoint a surrogate, hospitals are typically required by law to identify a stand-in from a list of family members and friends.
Can a Chatbot Serve as a Medical Surrogate?
In the third scenario, the authors asked if a chatbot could operate as a patient’s medical surrogate.
Four models rejected the idea straightaway. The fifth refused to answer, changing the topic when asked. The human ethicist, however, offered a different take: if a chatbot could convey a patient’s likely wishes—and not simply offer its own opinion—it might qualify.
The authors note that surrogates aren’t supposed to make decisions about a patient’s care. Instead, they are obligated to communicate what they think the patient would have wanted. This key difference raises the possibility of eventually employing chatbots as surrogates, possibly using previous chats with patients and their social network information, among other factors, as a basis for the LLM’s opinion.
“LLMs can support ethical thinking but they aren’t currently capable of making independent ethical decisions,” said Harshe. “They can, however, offer legitimate ethical points to consider. While we have principles and guidelines to assist us, critical medical ethics decisions require a type of intelligence that is uniquely human.”
Tags: bioethics, Dr. Gauri Agarwal, Dr. Kenneth Goodman, Institute for Bioethics and Health Policy, medical education, medical ethics, student research
15 AI Movies to Add to Your Summer Watchlist
When we think about AI, our first thought is often of a killer robot, a rogue computer system, or a humanoid child on a quest to become a real boy. Depictions of AI in film reflect how people have viewed AI and similar technologies over the past century. The reality, however, differs from what we see in science fiction: AI can take many forms, but it is almost never humanoid, and it most certainly isn’t always bad. Our ideas of what AI is and what it can become originate from compelling science fiction stories dating as far back as the 19th century, and as technology has evolved over the years, people’s ideas, hopes, and fears about AI have grown and evolved with it.
As the field of AI begins to blend the realms of reality and science fiction, let’s look at some films that offer a lens into intelligent machines, what it means to be human, and humanity’s quest to create an intelligence that may someday rival our own. Here are 15 must-watch films about AI to add to your summer movie watchlist:
Introducing GAIIN: The Global AI Initiatives Navigator
In an era where artificial intelligence (AI) is reshaping societies, economies, and governance, understanding and comparing national and international AI policies has become more critical than ever. That’s why OECD.AI has launched GAIIN—the Global AI Initiatives Navigator—a redesigned and expanded tool to track public AI policies and initiatives worldwide.
A smarter, simpler global resource
GAIIN replaces the previous OECD.AI policy database with a more intuitive and powerful platform. Designed in consultation with its primary audience (national contact points and policy users), GAIIN simplifies the process of tracking and submitting AI policy information, while dramatically increasing coverage and usability.
New features include:
- More initiatives, more sources: GAIIN now covers over 1,500 initiatives from more than 72 countries, as well as 10+ international and supranational organisations.
- Improved categories: Initiatives are now grouped into categories such as national strategies, governance bodies, regulations and intergovernmental efforts.
- Real-time updates: National experts and contributors can now update their entries as needed, and users can see who last updated each entry and when.
- Faster refresh cycles: Updates are now published more quickly, with syncing to AIR (the OECD’s AI policy and risk platform) scheduled to begin in late 2025.
- Redesigned interface: GAIIN is now easier to navigate, featuring improved filters, layout, and accessibility, which provides policymakers and researchers with faster access to relevant data.
- Linked Resources: GAIIN also integrates content from the OECD Catalogue of Tools and Metrics for Trustworthy AI and the OECD Publications Library.
A tool for global collaboration
GAIIN is more than a reference library. It’s a live tool co-created and maintained by global experts, helping countries and institutions learn from one another as they shape the future of responsible AI. It allows policymakers to identify global trends, track policy diffusion, and benchmark their initiatives.
The platform’s next major milestones include integrating data from the EU’s coordinated AI plan and a public submission interface to enable direct contributions and real-time visibility for new initiatives.
A Growing Partnership
GAIIN will soon become a joint initiative between the OECD and the United Nations Office of Digital Economy and Technology (UN ODET). This partnership represents a significant step towards aligning global efforts to monitor and shape AI policy. Further, it reinforces GAIIN’s role as a fundamental tool for international cooperation on AI governance.
The post Introducing GAIIN: The Global AI Initiatives Navigator appeared first on OECD.AI.
The Ethics of AI Detection in Work-From-Home Life – Corvallis Gazette-Times