Ethics & Policy
Vatican urges ethical AI development

Ethical AI must prioritise the common good over profit or efficiency.
At the AI for Good Summit in Geneva, the Vatican urged global leaders to adopt ethical principles when designing and using AI.
The message, delivered by Cardinal Pietro Parolin on behalf of Pope Leo XIV, warned against letting technology outpace moral responsibility.
Framing the digital age as a defining moment, the Vatican cautioned that AI cannot replace human judgement or relationships, no matter how advanced. It highlighted the risk of injustice if AI is developed without a commitment to human dignity and ethical governance.
The statement called for inclusive innovation that addresses the digital divide, stressing the need to reach underserved communities worldwide. It also reaffirmed Catholic teaching that human flourishing must guide technological progress.
Pope Leo XIV supported a unified global approach to AI oversight, grounded in shared values and respect for freedom. His message underscored the belief that wisdom, not just innovation, must shape the digital future.
Ethics & Policy
The AI Ethics Brief #173: Power, Policy, and Practice

Welcome to The AI Ethics Brief, a bi-weekly publication by the Montreal AI Ethics Institute. We publish every other Tuesday at 10 AM ET. Follow MAIEI on Bluesky and LinkedIn.
Writing from Oxford’s Wadham College this week where we’re exploring “Civilisation on the Edge,” we’re struck by how the challenges facing AI governance mirror broader questions about institutional adaptation in times of rapid change.
- We share our call for case studies and examples for the State of AI Ethics Report Volume 7, seeking real-world implementation stories and community-driven insights as we build a practitioner’s guide for navigating AI challenges in 2025.
- We examine how Silicon Valley is embedding itself within the military industrial complex through initiatives like Detachment 201, where tech executives from OpenAI, Meta, Palantir, and Thinking Machines Lab are commissioned as lieutenant colonels. Meanwhile, companies abandon previous policies against military involvement as artists boycott platforms with defense investments.
- Our AI Policy Corner with GRAIL at Purdue University explores contrasting state approaches to AI mental health legislation, comparing Illinois’s restrictive model requiring professional oversight with New York’s transparency-focused framework, as lawmakers respond to AI-related teen suicides with divergent regulatory strategies.
- We investigate the psychological risks of AI companionship beyond dependency, revealing how social comparison with perfect AI companions can devalue human relationships, creating a “Companionship-Alienation Irony” where tools designed to reduce loneliness may increase isolation.
- Our Recess series with Encode Canada examines Canada’s legislative gaps around non-consensual deepfakes, analyzing how current laws may not cover synthetic intimate images and comparing policy solutions from British Columbia and the United States.
What connects these stories: The persistent tension between technological capability and institutional readiness. Whether examining military AI integration, mental health legislation, psychological manipulation, or legal frameworks for synthetic media, each story reveals how communities and institutions are scrambling to govern technologies that outpace traditional regulatory mechanisms. These cases illuminate the urgent need for governance approaches that center human agency, democratic accountability, and community-driven solutions rather than accepting technological determinism as inevitable.
This week, our team at the Montreal AI Ethics Institute is taking part in The Wadham Experience, a week-long leadership program hosted at Oxford’s Wadham College. The program, Thinking Critically: Civilisation on the Edge, invites participants to reflect on the systems, stories, and power structures that have shaped societies and how they must evolve to meet this moment of profound change.
As we sit in these historic rooms discussing democracy and demagoguery, myth and modernity, we’re also shaping the next phase of our work: The State of AI Ethics Report, Volume 7 (AI at the Crossroads: A Practitioner’s Guide to Community-Centred Solutions), which we announced in Brief #172 and will release on November 4, 2025.
This year’s report is different. We’re building it not just as a landscape analysis, but as a practical guide for those working on AI challenges in communities, institutions, and movements. It is structured to offer case studies, toolkits, and implementation stories from around the world, grounded in real-world applications: what’s working, what’s not, and what’s next.
The questions we’re grappling with at Oxford feel particularly urgent in 2025: What kind of AI governance do we build when institutions lag behind? How do we govern technologies that evolve faster than our institutions can adapt? What happens when communities need AI solutions but lack formal authority to regulate platforms or shape policy? How do we move beyond corporate principles and policy frameworks to actual implementation in messy, resource-constrained environments?
The conversations here at Wadham remind us that societies have faced technological disruption before. The printing press reshaped information flows. Industrialization transformed labour and social structures. But AI presents unique challenges: its speed of deployment, its capacity for autonomous decision-making, and its embedding into virtually every aspect of social life.
SAIER Volume 7 will cover five interconnected parts:
- Foundations & Governance: How governments, regions, and communities are shaping AI policy in 2025, from superpower competition to middle-power innovation and grassroots governance experiments.
- Social Justice & Equity: Examining AI’s impact on democratic participation, algorithmic justice, surveillance and privacy rights, and environmental justice, with particular attention to how communities are developing their own accountability mechanisms and responding to AI’s growing energy and infrastructure costs.
- Sectoral Applications: AI ethics in healthcare, education, labour, the arts, and military contexts, focusing on what happens when AI systems meet real-world constraints and competing values.
- Emerging Tech: Governing agentic systems that act independently, community-controlled AI infrastructure, and Indigenous approaches to AI stewardship that center long-term thinking and data sovereignty.
- Collective Action: How communities are building AI literacy, organizing for worker rights, funding alternative models, and creating public sector leadership that serves democratic values.
Throughout the report, we are asking grounded questions:
- How are small governments and nonprofits actually deploying responsible AI under tight resource constraints?
- What did communities learn when their AI bias interventions didn’t work?
- What happened when workers tried to stop AI surveillance in the workplace, and what can others learn from those efforts?
- Where are the creative models of AI that are truly community-controlled rather than corporate-managed? And more.
While we’re curating authors for the chapters and sections of this report, we’re also inviting contributions from those working directly on the ground. We’re not looking for polished case studies or success stories that fit neatly into academic frameworks. We’re seeking the work that’s often overlooked: the experiments, lessons, and emerging blueprints shaped by lived experience.
Think of the nurse who figured out how to audit their hospital’s AI diagnostic tool. The city council that drafted AI procurement standards with limited resources. The artists’ collective building alternative licensing models for training data. The grassroots organization that successfully challenged biased algorithmic hiring in their community.
These are the stories that reveal what it actually takes to do this work: the political navigation, resource constraints, technical hurdles, and human relationships that determine whether ethical AI remains an aspiration or becomes a lived reality.
Our goal goes beyond documentation. We want this report to connect people doing similar work in different contexts, to surface patterns across sectors, and to offer practical grounding at a moment when the search for direction, purpose, and solidarity feels especially urgent.
When you share your story, you’re not just contributing to a report. You’re helping others find collaborators, ideas, and renewed momentum for their own work.
If you’re part of a project, policy, or initiative that reflects these values, whether it succeeded or failed, we’d love to include your insight in this edition.
We’re especially seeking:
- Implementation stories that moved beyond paper to practice
- Community-led initiatives that addressed AI harms without formal authority
- Institutional experiments that navigated AI adoption under constraints
- Quiet failures and what they revealed about systemic barriers
- Cross-sector collaborations that found unexpected solutions
- Community organizing strategies that built power around AI issues
As we continue shaping SAIER Volume 7, your stories can help build a resource that is grounded, practical, and genuinely useful for those navigating AI implementation in 2025. Together, we can document what’s working, what barriers still need addressing, and how we might move forward collectively, deliberately, and with care.
Please share your thoughts with the MAIEI community:
This summer, the CEO of Spotify, Daniel Ek, faced significant backlash after investing $700 million into Helsing through his investment firm, Prima Materia. Helsing is a Munich-based AI defense company founded in 2021 that sells autonomous weapons to democratic nations. Meanwhile, the US Army inaugurated a new unit, “Detachment 201: The Army’s Executive Innovation Corps,” to advance military innovation through emerging AI technologies. Detachment 201 swore in four tech leaders from Palantir, OpenAI, Meta, and Thinking Machines Lab as lieutenant colonels.
📌 MAIEI’s Take and Why It Matters:
The entanglement of tech companies and the U.S. military represents a stark shift for Silicon Valley. Companies like Google and Meta, which once pledged to stay out of military work and backed those pledges with corporate policies, are now abandoning those policies and developing tools, such as virtual reality training systems for soldiers.
This policy reversal extends beyond military applications: OpenAI quietly removed language from their usage policies in January 2024 that prohibited military use of their technology, while Meta has simultaneously ended their fact-checking program and made other content moderation changes with geopolitical implications.
The militarization trend includes both defense contracts and direct integration. Google’s $1.2 billion Project Nimbus cloud computing contract with the Israeli military, run jointly with Amazon, has faced ongoing employee protests, while companies like Scale AI have emerged as major players in military AI contracts alongside established defense tech firms like Palantir. Meanwhile, Detachment 201’s commissioning of tech executives as lieutenant colonels embeds Silicon Valley directly within military command structures.
As Erman Akilli, Professor of International Relations, noted:
“The commissioning of these tech executives… is unprecedented. Rather than serving as outside consultants, they will be insiders in Army ranks, each committing a portion of their time to tackle real defense projects from within. This model effectively brings Silicon Valley into the chain of command.”
This raises significant concerns about the increasing profitability of war for major corporations, in addition to the proliferation of autonomous weapons, or “killer robots.”
Following Prima Materia’s $700 million investment in Helsing, major artists protested Spotify’s financial connection to AI military technology by pulling their music from the platform. Key examples include Deerhoof, King Gizzard and the Lizard Wizard, and Xiu Xiu. Deerhoof highlighted a major ethical concern with AI warfare in the Instagram post announcing their split with Spotify:
Computerized targeting, computerized extermination, computerized destabilization for profit, successfully tested on the people of Gaza since last year, also finally solves the perennial inconvenience to war-makers — it takes human compassion and morality out of the equation.
Artist backlash has not altered Daniel Ek’s investments thus far; however, it has demonstrated wide opposition to militaristic AI technology and raised broader awareness of the company’s ties to it, informing audiences about these ethical concerns. Such education is crucial when civilian AI developers and the broader public are unaware of the militaristic risks of AI.
A piece from 2024 co-authored by the late MAIEI founder, Abhishek Gupta, argues that to ensure AI development does not destroy global peace, we should invest in interdisciplinary AI education that includes responsible AI principles and perspectives from the humanities and social sciences. As Silicon Valley works to reinforce the military-industrial complex, we must not forget the disruptive force of collective knowledge.
Did we miss anything? Let us know in the comments below.
This edition of our AI Policy Corner, produced in partnership with the Governance and Responsible AI Lab (GRAIL) at Purdue University, examines how recent AI-related teen suicides are catalyzing a new wave of state legislation, with Illinois and New York pioneering contrasting frameworks that may shape national approaches to AI mental health governance. The analysis compares Illinois’s restrictive approach, which requires licensed professional oversight for all AI mental health interactions, with New York’s framework, which mandates transparency disclosures and crisis intervention safeguards for AI companions. The piece reveals a key policy tension: Illinois gatekeeps AI out of clinical settings but misses broader consumer use, while New York addresses parasocial AI relationships but lacks clinical protections.
To dive deeper, read the full article here.
Hamed Maleki explores a lesser-discussed psychological risk of AI companionship: social comparison. Through interviews with Gen-Z users of platforms like Character.AI, his research reveals how users compare their perfect, always-available AI companions to flawed human relationships, leading to devaluation of real-world connections. Users progress through three stages—interaction, emotional engagement, and emotional idealization and comparison—where AI companions feel more dependable and emotionally safe than people, prompting withdrawal from demanding human relationships. This creates the “Companionship–Alienation Irony”: tools designed to alleviate loneliness may actually increase it by reshaping expectations for intimacy. As AI companions integrate memory, emotional language, and personalization, understanding these psychological effects is essential for designing safeguards, especially for younger users seeking comfort and connection.
To dive deeper, read the full article here.
As part of our Encode Canada Policy Fellowship Recess series, this analysis examines Canada’s legislative gaps in addressing non-consensual pornographic deepfakes, which make up 96% of all deepfake content and target women 99% of the time. Canada’s Criminal Code Section 162.1 may not cover synthetic intimate images because its language requires “recordings of a person,” leaving victims with limited legal recourse. The piece compares policy solutions from British Columbia’s Intimate Images Protection Act, which explicitly includes altered images and provides expedited removal processes, with the U.S. TAKE IT DOWN Act, which criminalizes AI-generated intimate content but raises concerns about false reporting abuse.
A multi-pronged policy approach is recommended:
- Criminal law amendments to explicitly include synthetic media
- Enhanced civil remedies with streamlined removal processes
- Platform accountability measures with robust investigation requirements
- A self-regulatory organization to prevent malicious exploitation while protecting victims’ dignity and rights.
To dive deeper, read the full article here.
Help us keep The AI Ethics Brief free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at montrealethics.ai/donate. Your support sustains our mission of democratizing AI ethics literacy and honours Abhishek Gupta’s legacy.
For corporate partnerships or larger contributions, please contact us at support@montrealethics.ai
Have an article, research paper, or news item we should feature? Leave us a comment below — we’d love to hear from you!
Ethics & Policy
‘Humans in the Loop’ Film Explores AI, Ethics & Adivasi Labor

Humans in the Loop, director Aranya Sahay’s 2024 film about an Adivasi woman working as a data annotator, was screened at the UNESCO House in New Delhi on September 6. The film explored the hidden biases behind Artificial Intelligence (AI) and the ethical dilemmas associated with the technology, while offering a glimpse of the human labour that powers the “artificial” intelligence.
During the post-screening discussion, executive producer Kiran Rao pointed out that many contemporary conversations about AI centred on the economics of the business. “Our film talks about equitability, representation and data colonialism,” she added.
The movie’s protagonist, Nehma, lives in a remote village in Jharkhand, taking care of a rebellious school-going daughter and an infant son. She also finds work as a data annotator, labelling objects in images and videos accurately to create datasets for training AI models. Nehma begins her work with some initial hesitation and confusion, not entirely understanding the technology and her role in it. However, she soon grows fond of her work, likening AI to a child that needs to be “taught” the right things.
And what is the right thing for an AI to be taught? That question forms the central premise of the film, following Nehma’s inner turmoil as she navigates the conflict between her manager’s expectations and what she knows to be true and right.
Initially, Nehma’s job is simple enough – label and outline body parts accurately so she can get a human-like computer model to walk. She completes her task successfully, and the model stumbles, falls, and finally walks upright, much to her joy.
Her second task is a lot more morally complex. Nehma and her team are working for an agritech firm that wants them to go through millions of images and accurately label crops and pests, like small insects or critters. The result? The so-called ‘pests’ are violently eradicated with a precision laser, with the crops left to thrive.
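For readers unfamiliar with the work, the labels an annotator produces are simple structured records attached to each image. A hypothetical sketch is below; the schema, field names, and categories are invented for illustration and are not the format used in the film or by any particular vendor.

```python
# Hypothetical sketch of an image annotation record of the kind a data
# annotator produces for an object-detection dataset. Schema, field names,
# and categories are invented for illustration only.
import json

annotation = {
    "image_id": "field_cam_000124.jpg",
    "annotator": "worker_017",
    "labels": [
        {"bbox": [120, 64, 300, 410], "category": "crop"},
        # The annotator must pick a category for each object; the client's
        # label set decides whether this caterpillar counts as a "pest".
        {"bbox": [455, 210, 38, 22], "category": "pest"},
    ],
}
print(json.dumps(annotation, indent=2))
```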
This is where the ‘Humans in the Loop’ film raises a crucial point – what makes something a pest? Nehma is visibly uncomfortable with the display of the laser’s awesome power as it scorches a small caterpillar. Acting on a whim, she decides not to label that particular critter as a pest.
The results pour in, and Nehma’s American clients are incensed. They demand that Nehma’s supervisor either fix the mistakes or risk losing the contract. The ball eventually lands in Nehma’s court, and she explains her reasoning. That particular caterpillar isn’t a pest, she argues, but a harmless critter that only eats the dead parts of leaves without damaging the crop as a whole.
Nehma’s supervisor won’t have it – “Client ne bola hai pest hai, to hai,” she exclaims in Hindi, meaning that if the client considers something to be a pest, then that is a pest.
So who is in the right here? Nehma’s knowledge comes from her own lived experience as an Adivasi woman, drawn from close proximity to Jharkhand’s dense forests. In fact, director Aranya Sahay argued that India’s Adivasis would see life in AI, which is why Nehma wanted to impart what she knew to the model.
‘Humans in the Loop’ also raises an important point – terminologies and classifiers like ‘pests’, ‘weeds’ or ‘crops’ depend on function: what is a ‘weed’ to one party is a ‘herb’ to another, just as a caterpillar harmless to one party may be a pest to another.
For industrial agricultural operations, only the plants which bring in a profit are useful; everything else can be thrown to the laser.
“Will this industrial consumption economy dictate our knowledge?” asked Kiran Rao.
The other important problem that the movie touched upon was the question of adequate representation. In one telling scene, Nehma prompts an image generator to create an image of a tribal woman, only to receive images of women wearing vaguely Native American headdresses. When she tries the prompt “beautiful woman”, she is greeted with images of white-skinned, blonde and blue-eyed women.
As many commentators have pointed out, AI models are heavily dependent on large volumes of data for accuracy, most of which comes from the Global North. On top of that, these datasets mostly feature images of white, Western European people, meaning that any resulting AI systems are much more inaccurate when dealing with people of colour or non-Western cultures in general.
Sahay gave an example from personal experience – during a special screening of the film for Adivasi scholars, a young tribal boy was attempting to generate an image of himself sitting on a crocodile. What he instead got was a white boy sitting atop an alligator.
“We’ve all tried to create images of ourselves, and there’s a definite slant towards European images,” said Rao.
‘Humans in the Loop’ provides a solution for this problem by having Nehma take pictures of herself and other members of her community and feed them to the model. It soon starts generating images of brown-skinned tribal women. While this would most likely not occur this way in real life, as the model would have to undergo a fresh training cycle with the new data, culturally representative datasets are, in fact, something AI companies are increasingly hungry for.
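In practice, adding new images means assembling them into a training dataset and then running a further training or fine-tuning pass. A minimal sketch of the first step follows, assuming the community photos are organized as captioned files on disk; the directory layout and metadata fields are assumptions, the Hugging Face `datasets` "imagefolder" loader is used only as a common convention, and the actual fine-tuning loop is omitted.

```python
# Minimal sketch: assembling community-contributed photos into a training set.
# Directory layout and metadata fields are assumptions for illustration; a real
# fine-tune of an image generator would follow with its own training loop.
from datasets import load_dataset

# Assumed layout:
#   community_photos/
#     metadata.csv        -> columns: file_name, text (caption)
#     img_0001.jpg, img_0002.jpg, ...
dataset = load_dataset("imagefolder", data_dir="community_photos", split="train")

print(dataset)             # number of examples and available columns
print(dataset[0]["text"])  # e.g. a caption written by the contributor
# The captioned images could then be used to fine-tune an image generation
# model so that prompts like "tribal woman" reflect the contributed examples.
```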
It is in this context that fresh concerns arise, ones that do not get addressed through the movie. These representative datasets, containing images of indigenous people, their languages, culture and knowledge systems, are a product of the labour of the masses. The AI image generator is arguably better off with Nehma’s additions, but what does she or her community get out of it? If foreign AI companies are dependent on the value generated by India’s tribals, trained on their images and utilising their knowledge of nature, what is their stake in the multi-billion-dollar valuations commanded by Silicon Valley giants?
Ethics & Policy
AI in healthcare: legal and ethical considerations at the new frontier

Whilst the EU Commission’s guidelines [5], published in July 2025, offer some insight as to the compute threshold at which downstream modification constitutes the creation of a new model (with that downstream modifier then becoming a “provider” of the GPAIM and therefore subject to extensive compliance requirements), simple numerical thresholds do not necessarily tell the whole story. There are many different techniques for customizing general purpose AI models, and a simple compute threshold will not capture some customization techniques that are likely to have a more significant impact on model behavior, such as system prompts. Careful case-by-case consideration of the modification in practice will be necessary.
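As an illustration only, a downstream modifier might compare the compute spent on its modification against a fraction-of-original-training-compute threshold of the kind discussed in the guidelines. The threshold fraction and FLOP figures below are assumptions for the sketch, not the Commission’s numbers, and any real assessment would also weigh qualitative factors such as system prompts.

```python
# Minimal sketch: comparing downstream modification compute against a
# fraction-of-original-training-compute threshold. The threshold fraction and
# FLOP figures are illustrative assumptions, not values from the EU guidelines.

ORIGINAL_TRAINING_FLOP = 1e25  # assumed compute used to train the base GPAI model
MODIFICATION_FLOP = 2e24       # assumed compute used for the downstream fine-tune
THRESHOLD_FRACTION = 1 / 3     # assumed fraction at which the modifier is treated as a new provider

def is_new_provider(original_flop: float, modification_flop: float,
                    threshold_fraction: float = THRESHOLD_FRACTION) -> bool:
    """Return True if the modification compute exceeds the assumed threshold."""
    return modification_flop > threshold_fraction * original_flop

if __name__ == "__main__":
    flagged = is_new_provider(ORIGINAL_TRAINING_FLOP, MODIFICATION_FLOP)
    print(f"Treated as new GPAIM provider under this sketch: {flagged}")
    # Even if the numeric test is not met, customization techniques such as
    # system prompts may still materially change behaviour, so case-by-case
    # review remains necessary, as noted above.
```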
Organizations at risk of falling within scope of the EU AI Act GPAI requirements should consider the relevance of the General Purpose AI Code of Practice (the GPAI Code) [6]. The GPAI Code, while non-binding, has been developed collaboratively under the leadership of the European AI Office and is intended to be a practical tool to support organizations in complying with the AI Act for GPAI models, addressing transparency, copyright, and safety and security in particular. The drafting process sparked significant debate among stakeholders, with some arguing that the GPAI Code is overly restrictive and calling for greater flexibility, particularly regarding the training of LLMs. However, the European Commission asserts that signatories will benefit from a “simple and transparent way to demonstrate compliance with the AI Act”, with enforcement expected to focus on monitoring their adherence to the GPAI Code. It remains to be seen how organizations will manage that adherence, particularly in the face of technical challenges (such as output filtering), legal complexities (not least the interplay with ongoing court action), and the allocation of liability between provider and deployer.
- Unlike the EU, the UK has, to date, chosen not to pass any AI-specific laws. Instead, it encourages regulators to first determine how existing technology-neutral legislation, such as the Medical Device Regulations, the UK GDPR and the Data Protection Act, can be applied to AI uses. For example, the Medicines & Healthcare products Regulatory Agency (MHRA) is actively working to extend existing software regulations to encompass “AI as a Medical Device” (or AIaMD). The MHRA’s new program focuses on ensuring both explainability and interpretability of AI systems as well as managing the retraining of AI models to maintain their effectiveness and safety over time.
- In China, the National Health Commission and the National Medical Products Administration recently published several guidelines on the registration of AI-driven medical devices and the permissible use cases for AI in diagnosis, treatment, public health, medical education, and administration. The guidelines all emphasize AI’s assistive role in drug and medical device development and monitoring under human supervision.
Leading AI developers are also setting up in-house AI ethics policies and processes, including independent ethics boards and review committees, to ensure safe and ethical AI research. These frameworks are crucial while the international landscape of legally binding regulations continues to mature.
Recommendations: scenario-based assessments for AI tools
Healthcare companies face a delicate balancing act. On one hand, their license to operate depends on maintaining the trust of patients, which requires prioritizing safety above all else. Ensuring that patients feel secure is non-negotiable in a sector where lives are at stake. On the other hand, being overly risk-averse can stifle the very innovations that have the potential to transform lives and deliver better outcomes for patients and society as a whole. Striking this balance is critical: rigorous testing and review processes must coexist with a commitment to fostering innovation, ensuring progress without compromising safety.
In this regard, a risk-based framework is recommended for regulating AI in healthcare. This approach involves varying the approval processes based on the risk level of each application. Essentially, the higher the risks associated with the AI tools, the more controls and safeguards should be required by authorities. For instance, AI tools that conduct medical training, promote disease awareness, and perform medical automation should generally be considered low risk. Conversely, AI tools that perform autonomous surgery and critical monitoring may be regarded as higher risk and require greater transparency and scrutiny. By tailoring the regulatory requirements to the specific risks, we can foster innovation while ensuring that safety is adequately protected.
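As a sketch of how such risk-based triage might be operationalized inside an organization, the table below maps use cases to tiers and controls; the categories, tiers, and required safeguards are illustrative assumptions, not a regulator’s taxonomy.

```python
# Illustrative sketch of a risk-based triage table for AI tools in healthcare.
# Categories, tiers, and controls are assumptions for illustration only.

RISK_TIERS = {
    "medical_training":    {"tier": "low",    "controls": ["basic documentation", "periodic review"]},
    "disease_awareness":   {"tier": "low",    "controls": ["basic documentation", "periodic review"]},
    "medical_automation":  {"tier": "low",    "controls": ["basic documentation", "periodic review"]},
    "diagnostic_support":  {"tier": "medium", "controls": ["clinical validation", "human review", "post-market monitoring"]},
    "critical_monitoring": {"tier": "high",   "controls": ["regulatory approval", "transparency reporting", "continuous audit"]},
    "autonomous_surgery":  {"tier": "high",   "controls": ["regulatory approval", "transparency reporting", "continuous audit"]},
}

def required_controls(use_case: str) -> list[str]:
    """Look up the safeguards an assumed internal policy attaches to a use case."""
    entry = RISK_TIERS.get(use_case)
    if entry is None:
        # Unknown applications default to the strictest treatment until assessed.
        return ["manual risk assessment required"]
    return entry["controls"]

print(required_controls("autonomous_surgery"))
print(required_controls("disease_awareness"))
```

The design choice here is simply that higher-risk uses trigger more controls by default and anything unclassified escalates to a manual assessment rather than slipping through.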
Moreover, teams reviewing AI systems should consist of stakeholders representing a broad range of expertise and disciplines to ensure comprehensive oversight. For example, this may include professionals with backgrounds in healthcare, medical technology, legal and compliance, cybersecurity, ethics and other relevant fields as well as patient interest groups. By bringing together diverse perspectives, the complexities and ethical considerations of AI in healthcare can be better addressed, fostering trust and accountability.
Data protection and privacy
Data privacy requirements are a key consideration when using AI in healthcare contexts, especially given that many jurisdictions’ laws broadly define “personal data,” potentially capturing a wide range of data. Further, privacy regulators have been the forerunners in bringing AI-related enforcement actions. For example, AI tools such as OpenAI’s ChatGPT have encountered extensive regulatory scrutiny at EU level through the European Data Protection Board (EDPB) taskforce, and NOYB (None of Your Business), the European Center for Digital Rights and data privacy campaign group founded by well-known privacy activist Max Schrems, has initiated a complaint against the company in Austria alleging GDPR breaches. DeepSeek has also attracted immediate attention from EU and other international regulators, with investigations initiated and the EDPB taskforce extended to cover its offerings.
Privacy considerations in AI
There are several privacy considerations to navigate when using AI. This can raise challenges as developers, often U.S.-based, look to navigate highly regulated jurisdictions such as those in the EU, where regulators are scrutinizing approaches taken to data protection compliance. This includes the issue of identifying a lawful basis for the processing activity. Many jurisdictions’ data privacy laws contain a legitimate interests basis or similar provisions which, when applicable, permit the data controller to process personal data without first requiring individuals’ explicit consent. However, there are diverging views on whether this basis can be used for AI-related processing.
The EDPB issued Opinion 28/2024 [7] in December 2024, which provides detailed guidance on the use of legitimate interest as a legal basis for processing personal data in the development and deployment of AI models, including LLMs (the EDPB AI Opinion). The EDPB AI Opinion, although indicating that legitimate interest may be a possible legal basis, highlights the three-step test that should be applied when assessing its use: (1) identify the legitimate interest pursued by the controller or a third party; (2) analyse the necessity of the processing for the purposes of the legitimate interest pursued (the “necessity test”); and (3) assess that the legitimate interest is not overridden by the interests or fundamental rights and freedoms of the data subjects (the “balancing test”). It also highlights the need for robust safeguards to protect data subjects’ rights. The examples in the EDPB AI Opinion where legitimate interests could be a suitable lawful basis are relatively limited, covering scenarios such as a conversational agent, fraud detection, and threat analysis in an information system.
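The three-step test lends itself to a simple structured record. A minimal sketch follows, assuming a team wants to document each step of its assessment; the field names and example content are our own illustrations, not EDPB terminology, and documenting the steps does not substitute for the substantive legal analysis each step requires.

```python
# Minimal sketch of documenting the EDPB three-step legitimate interest test.
# Field names and example content are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LegitimateInterestAssessment:
    legitimate_interest: str   # step 1: the interest pursued by the controller or a third party
    necessity_rationale: str   # step 2: why the processing is necessary (no less intrusive alternative)
    balancing_outcome: str     # step 3: why data subjects' rights do not override the interest
    safeguards: list[str]      # supporting measures protecting data subjects

    def documented(self) -> bool:
        # This only checks that every step has been recorded; each step still
        # needs substantive legal analysis in practice.
        return all([self.legitimate_interest, self.necessity_rationale,
                    self.balancing_outcome, self.safeguards])

assessment = LegitimateInterestAssessment(
    legitimate_interest="Fraud detection in a patient-facing information system",
    necessity_rationale="No equally effective, less privacy-intrusive method identified",
    balancing_outcome="Limited data categories and short retention; rights not overridden",
    safeguards=["pseudonymisation", "access controls", "opt-out channel"],
)
print(assessment.documented())
```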
An EDPB Opinion adopted a few months earlier, in October 2024, addressing the legitimate interests basis for processing personal data more generally (the EDPB LI Opinion), is helpful in referencing scientific research as a potential legitimate interest, but it is cautious about establishing a legitimate interest on the basis of societal benefit, emphasizing that the legitimate interest should tie to the interest of the controller or third party and that processing should be “strictly” necessary to achieve the legitimate interest (i.e. there is no other reasonable and equally effective method which is less privacy intrusive).
The EDPB AI Opinion clarifies that the unlawful processing of personal data during the development phase may not automatically render subsequent processing in the deployment phase unlawful, but controllers must be able to demonstrate compliance and accountability throughout the lifecycle of the AI system.
Individual consent
As an alternative, businesses may need to obtain individual consent for AI-related processing activities. Consent is already a difficult basis to rely on given the high bar for validity, and it is particularly challenging in an AI healthcare context: special category data (which includes health data) attracts heightened compliance obligations, raising the requirement to “explicit consent”, and public distrust and misunderstanding of AI technologies compound the difficulty.
Further, in some jurisdictions it is common for individuals to place stringent conditions, including time restrictions, on what their personal data can be used for. This could prevent their personal data being used in connection with AI, given it is not always possible to delete or amend personal data once it has been ingested into an AI system.
Professional accountability
Determining fault when an AI system makes an error is a particularly complex issue, especially given the number of parties that may be involved throughout the value chain. The challenge is heightened by the fact that different regulations may apply at different stages, and the legal landscape is still developing in response to these new technologies.
In the case of fully autonomous AI decision-making, one possible approach is that liability could fall on the AI developer, as it may be difficult to hold a human user responsible for outcomes they do not control. However, the allocation of responsibility could vary depending on the specific circumstances and regulatory frameworks in place.
Where AI systems operate with human involvement, another potential approach is for regulators to introduce a strict liability standard for consequences arising from the use of AI tools. While this could offer greater protection for patients, it may also have implications for the pace of technological innovation. Alternatively, some have suggested that requiring AI developers and commercial users to carry insurance against product liability claims could help address these risks. The WHO, for example, has recommended the establishment of no-fault, no-liability compensation funds as a way to ensure that patients are compensated for harm without the need to prove fault. [8]
In July 2025, a study commissioned by the European Parliament’s Policy Department for Justice, Civil Liberties and Institutional Affairs was published [9]. It critically analyzes the EU’s evolving approach to regulating civil liability for AI systems, discusses four policy proposals, and advocates for a strict liability regime targeting high-risk AI systems.
Ultimately, the question of legal responsibility for AI in healthcare remains unsettled and is likely to require ongoing adaptation as technology and regulation evolve. Accountability will be a particular challenge given the complexity of the value chain and the interplay of different regulatory regimes. It will be important for all stakeholders to engage in continued dialogue to ensure that legal frameworks keep pace with technological developments and that patient safety remains a central focus.
Ethical concerns
There are multiple ethical considerations that developers and deployers may need to address when using AI systems in healthcare. Three prominent examples are explored below.
Bias causing unjust discrimination
Bias in AI systems can lead to unjustified discriminatory treatment of certain protected groups. There are two primary types of bias that may arise in healthcare:
- Disparate impact risk: This occurs when people are treated differently when they should be treated the same. For example, a study [10] found that Black patients in the U.S. healthcare system were assigned significantly lower “risk scores” than White patients with similar medical conditions. This discrepancy arose because the algorithm used each patient’s annual cost of care as a proxy for determining the complexity of their medical condition(s). However, less money is spent on Black patients due to various factors including systemic racism, lower rates of insurance, and poorer access to care. [11] Consequently, using care costs created unjustified discrepancies for Black patients, as the sketch after this list illustrates.
- Improper treatment risk: Bias in AI systems can arise when training data fails to account for the diversity of patient populations, leading to suboptimal or harmful outcomes. For example, one study [12] demonstrated that facial recognition algorithms often exhibit higher error rates when identifying individuals with darker skin tones. While this study focused on facial recognition, the same principle applies in healthcare, where AI systems used for dermatological diagnoses have been found to perform less accurately on patients with darker skin. [13] This occurs because the datasets used to train these systems often contain a disproportionate number of images from lighter-skinned individuals. Such biases can lead to misdiagnoses or delays in treatment, illustrating the critical need for diverse and representative training data in healthcare AI applications.
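To make the disparate impact example concrete, here is a minimal sketch with entirely synthetic numbers showing how a score built on cost of care, rather than on underlying need, reproduces an access gap as a score gap. The group sizes, distributions, and access factor are invented for illustration.

```python
# Minimal sketch with synthetic data: using cost of care as a proxy for medical
# need reproduces access gaps as score gaps. All numbers are invented.
import random

random.seed(0)

def simulate_patient(access_factor: float) -> dict:
    need = random.gauss(mu=5.0, sigma=1.0)  # true illness burden, same distribution for both groups
    cost = need * 1000 * access_factor      # spending is suppressed when access to care is lower
    return {"need": need, "cost": cost}

group_a = [simulate_patient(access_factor=1.0) for _ in range(1000)]  # full access to care
group_b = [simulate_patient(access_factor=0.7) for _ in range(1000)]  # systemically lower spending

def mean(xs):
    return sum(xs) / len(xs)

# A "risk score" keyed to cost inherits the access gap, even though need is identical.
print("mean need, A vs B:",
      round(mean([p["need"] for p in group_a]), 2),
      round(mean([p["need"] for p in group_b]), 2))
print("mean cost-based score, A vs B:",
      round(mean([p["cost"] for p in group_a])),
      round(mean([p["cost"] for p in group_b])))
```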
Transparency and explainability
Providing individuals with information about how healthcare decisions are made, the process used to reach that decision, and the factors considered is crucial for maintaining trust between medical professionals and their patients. Understanding the reasoning behind certain decisions is not only important for ensuring high-quality healthcare and patient safety, but also helps facilitate patients’ medical and bodily autonomy over their treatment. However, explainability can be particularly challenging for AI systems, especially generative AI, as their “black box” nature means deployers may not always be able to identify exactly how an AI system produced its output. It is hoped that technological advances, including recent work on neural network interpretability [14], will assist with practical solutions to this challenge.
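Full mechanistic interpretability remains an open research problem, but simpler post-hoc techniques can already give deployers some visibility into which inputs drive a model’s outputs. A minimal sketch follows using scikit-learn’s permutation importance on a synthetic classifier; this is one generic technique chosen for illustration, not the interpretability work cited above, and the synthetic data stands in for a real clinical model.

```python
# Minimal sketch: post-hoc feature attribution with permutation importance.
# Synthetic data and a generic classifier stand in for a real clinical model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops:
# features whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: mean accuracy drop = {importance:.3f}")
```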
Human review
To facilitate fair, high-quality outcomes, it is important for end-users—often healthcare professionals—to understand the AI system’s intended role in their clinical workflow and whether the AI system is intended to replace user decision-making or augment it.
However, it may not always be appropriate for the human to override the AI system’s output; their involvement in the workflow will likely vary depending on what the AI tool is being used for. For example, if an AI system has been trained to detect potentially cancerous cells in skin cell samples, and the AI system flags the sample as being potentially cancerous but the healthcare professional disagrees, it may be more appropriate to escalate the test to a second-level review than to permit the healthcare professional to simply override the AI system’s decision. A false positive here is likely to be less risky than a false negative. It is therefore important to take a considered, nuanced approach when determining how any human-in-the-loop process flow should operate.
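As a sketch of the escalation logic described above, the rules below are illustrative assumptions rather than a clinical protocol: a clinician’s disagreement with a positive AI flag triggers a second-level review instead of a silent override.

```python
# Illustrative sketch of a human-in-the-loop triage rule for a cancer-screening
# assist tool. Thresholds and actions are assumptions, not a clinical protocol.

def triage(ai_flags_cancer: bool, clinician_flags_cancer: bool) -> str:
    if ai_flags_cancer and clinician_flags_cancer:
        return "proceed: refer for further testing"
    if ai_flags_cancer and not clinician_flags_cancer:
        # A false negative is riskier than a false positive here, so the
        # clinician's disagreement escalates the case rather than silently
        # overriding the model's flag.
        return "escalate: second-level specialist review"
    if not ai_flags_cancer and clinician_flags_cancer:
        return "proceed on clinical judgement: order further testing"
    return "routine follow-up"

for ai, human in [(True, True), (True, False), (False, True), (False, False)]:
    print(ai, human, "->", triage(ai, human))
```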
Conclusion
AI offers significant benefits in healthcare but also presents legal and ethical challenges that must be navigated. Collaborative efforts among policymakers, healthcare professionals, AI developers, and legal experts are essential to establish robust frameworks that safeguard patient rights and promote equitable access to advanced healthcare technologies.
This article was written by Jieni Ji and David Egan, assistant general counsel, global digital and privacy at GSK in London.