Ethics & Policy
Will AI Save Or Harm Us? 3 Ethical Challenges For Businesses In 2025

Will we meet the ethical challenges posed by AI or succumb to them?
The top challenges facing businesses in 2025 is whether AI will be, on balance, helpful or hurtful. Most news reports about AI focus on technological issues, such as the nature of AI vulnerabilities.
But unless we use AI as a force for good, it doesn’t matter how technologically sophisticated AI becomes. The harms will overpower the advances. In other words, the ethical implications of AI are just as worthy of investigation as are the digital nuts and bolts.
Here are three ethical challenges that AI raises for businesses around the world. We’ll also take a look at how businesses, including yours, can meet these challenges.
1. Why AI safety should be your top concern
Do No Harm isn’t just for physicians and nurses!
The most fundamental ethical principle of all is Do No Harm. It applies not just to physicians and other health care workers but to leaders in the AI space and everybody else.
At the AI Safety Summit in November 2023, Dario Amodei, CEO of Anthropic, addressed this very issue. “We need both a way to frequently monitor these emerging risks, and a protocol for responding appropriately when they occur,” he said.
Although the summit took place in 2023, Amodei’s insights remain critical for 2025.
Examples of some of the harms that AI can cause
Unfair discrimination, in which AI models perpetuate unfair practices in hiring and talent retention
Reputational damage through the use of deep fakes, in which videos of real human beings are digitally manipulated to say and do things they never said or did
Dissemination of misinformation, which can happen when AI states that something is true when it isn’t, and someone posts that false or misleading information on social media
How Anthropic is meeting this challenge
Anthropic simulates adversarial attacks to identify weaknesses like biased outputs or harmful behaviors. This simulation (“red teaming”) helps Anthropic insure that its AI system is safe, reliable and resilient before the company uses it.
Does this simulation take some time? Yes. But Anthropic rightly considers it an investment in its reputation and what it owes to the people it serves.
For reflection
How will you ensure that the AI systems you use don’t cause harm to your clients, to your business’s good name and to your own reputation?
2. If you don’t manage your AI systems, someone else will
“If you don’t manage your time, someone else will,” goes a saying in time management. We could … More
When I began my career at the West Virginia University Health Sciences Center in Morgantown, I took a seminar in time management. The instructor, law professor Forest “Jack” Bowman, told us, “If you don’t manage your time, someone else will.”
That wise saying could be updated to: “If you don’t manage your AI systems, someone else will.”
Elizabeth Kelly, director of the U.S. Artificial Intelligence Safety Institute, notes that in areas like cybersecurity, it can be easy to bypass AI security measures. These workarounds are known as “jailbreaks” and tech-savvy people have no problem doing them.
Recall, in David Fincher’s The Girl with the Dragon Tattoo, the look of disbelief that Rooney Mara’s hacker Lisbeth Salander (Rooney Mara) gives Mikael Blomkvist (Daniel Craig) when he asks her about the difficulty of breaking into a computer system. And that was in 2011! (Written by Steven Zaillian, the film was based on the novel by Stieg Larsson.)
How one company and one continent manage this challenge
IBM’s Precision Regulation Policy addresses three components of AI ethics: 1) accountability, 2) transparency, and 3) fairness.
On a broader level, the European Union’s Artificial Intelligence Act (AI Act), which went into effect on August 1 this year, bans AI with unacceptable risks, like “social scoring,” in which individuals are given scores based on their behavior and actions. Such uses of AI can make it difficult for people to gain access to financial services, employment, travel, education, housing, and public benefits and thus risks violating the ethical responsibility to be fair.
For reflection
If your company is under the aegis of the AI Act, what is it doing to honor it? If your company does not have these legal requirements, how would adopting an an AI ethics policy be a quick win? (Okay, this wouldn’t be quick, but it would be a win nonetheless.)
3. What does AI mean for the future of work?
The threat to jobs that AI poses is real but not insurmountable.
Earlier we considered the ethical principle Do No Harm with respect to safety. That fundamental ethical imperative also applies to employment.
Whatever euphemism you wish to use—reduction in force, downsizing—the effect is the same. Letting loyal, hardworking employees go causes harm, even if there are financial benefits for the companies that do this.
“The IMF [International Monetary Fund] said that about 40 percent of global jobs could be affected,” Yahoo founder Andrew Yang noted earlier this year. “That’s hundreds of millions of workers around the world.”
How one company has prepared for this challenge
Seeing the writing the wall, AT&T created a $1 billion internal initiative called Future Ready. The goal was to prepare over 100,000 employees for roles that were being transformed or replaced by technology that eventually included AI.
If your company invested considerable resources in retraining you for a new position instead of sending you packing, wouldn’t that promote your loyalty to the business? AT&T’s expensive initiative was an investment in both its workforce and its reputation as a company that stands by its employees.
Just think about the positive word-of-mouth this initiative must have created among the employees AT&T retained.
For reflection
How can your organization stay current with its use of AI and promote job security?
The takeaway
Now hear (or see) this!
In 2025, businesses will have to answer the crucial question, “How can we use AI as a force for good and prevent abuse?” If your organization takes this question seriously, you will go a long way toward ensuring that your own AI systems don’t wind up like HAL 9000 from 2001: A Space Odyssey and become humanity’s worst nightmare.
Ethics & Policy
The AI Ethics Brief #172: The State of AI Ethics in 2025: What’s Actually Working

Welcome to The AI Ethics Brief, a bi-weekly publication by the Montreal AI Ethics Institute. We publish every other Tuesday at 10 AM ET. Follow MAIEI on Bluesky and LinkedIn.
-
One Question We’re Pondering: What does it actually take to move responsible AI from theory to practice, and who is doing that work when no one is watching? We explore the quiet, persistent efforts happening in classrooms, healthcare systems, cities, courtrooms, union halls, and communities that rarely make headlines but are shaping AI’s real-world impact.
-
SAIER Volume 7 Returns: We officially announce the State of AI Ethics Report (SAIER) Volume 7: “AI at the Crossroads: A Practitioner’s Guide to Community-Centred Solutions,” scheduled for November 4, 2025. After a pause since February 2022, this report focuses on practical, replicable examples of responsible AI implementation grounded in real-world experience rather than aspirational principles.
-
The Alan Turing Institute Crisis: We examine the UK’s premier AI research institute facing potential funding withdrawal unless it pivots to a national security focus, representing a significant loss for independent non-governmental AI research and raising questions about accountability in publicly funded research institutions.
-
U.S. AI Education Push: Our AI Policy Corner with GRAIL at Purdue University analyzes the April 23rd Executive Order on Advancing AI Education for American Youth, which emphasizes adoption and implementation over risk mitigation, sitting within the broader Trump administration’s AI policy framework.
-
Canadian AI Governance Insights: From the Victoria Forum 2025 and new public opinion data showing Canadians deeply divided on AI (34% beneficial vs. 36% harmful), we explore Canada’s unique position in developing democratic AI governance that moves beyond consultation toward co-creation.
What connects these stories: The recognition that responsible AI implementation and AI ethics occur not in boardrooms or policy papers, but through the daily work of people building civic competence and practical solutions at the intersection of technology and community needs.
As conversations around AI governance grow louder (see Brief #170: How the US and China Are Reshaping AI Geopolitics), what we’re hearing behind the scenes is quieter, more persistent, and perhaps more urgent. Colleagues across sectors are asking the same thing in different languages: What’s working, what isn’t, and what can we learn from both?
Not in theory or in press releases, but in classrooms trying to preserve academic integrity, in healthcare systems navigating algorithmic risk, in cities designing procurement standards, in community-led efforts resisting surveillance they never consented to, and more.
Over the past year, this question has resurfaced repeatedly for us at MAIEI: at Zurich’s Point Zero Forum, the recently held Victoria Forum in British Columbia (more on this below in Insights & Perspectives), in guest lectures at universities, and through hundreds of emails and conversations. It has also been nearly a year since the passing of our dear friend and collaborator, Abhishek Gupta, founder and principal researcher at MAIEI. In that time, this question has become increasingly persistent, and his reminder to keep moving “onwards and upwards” has guided our search for answers as we rebuild and reimagine MAIEI’s role in the community.
And yet, the answers are rarely loud. They appear in quiet experiments, shared reflections, and the daily work of people operating at the edges of institutions and at the centre of communities. This persistent need for connection and practical guidance is why we’re bringing the State of AI Ethics Report (SAIER) back, and are committed to doing so on an annual basis. Returning to our roots of building civic competence and shaping public understanding on the societal impacts of AI, the SAIER represents both a tribute to Abhishek’s legacy and a cornerstone of MAIEI’s path forward.

After a pause since February 2022, we’re officially announcing SAIER Volume 7: AI at the Crossroads: A Practitioner’s Guide to Community-Centred Solutions, scheduled for release on November 4, 2025.
Following hundreds of conversations and a close review of over 800 pieces published on the MAIEI website since 2018, one insight stood out: the field needs connection and interpretation. There’s a growing recognition that isolated efforts across sectors contain valuable knowledge that rarely gets shared or built upon, including lessons from quiet failures that never made headlines.
The world also looks fundamentally different than when Volume 6 was published in February 2022. The ChatGPT paradigm now dominates (see Brief #171: The Contradictions Defining AI’s Future for our commentary on GPT-5 and GPT-OSS), reshaping everything from student homework to healthcare diagnostics, from corporate decision-making to creative industries. In an era where foundation models are deployed before safety frameworks are in place, where open-source agents outperform flagship releases, and where community groups write their own rules amid policy gaps, the demand for practical, replicable examples for communities to adapt and adopt has become urgent.
Volume 7 is built on a simple premise: responsible AI has always been as much about capacity as it is about commitment. The gap between theoretical principles and practical implementations rarely reflects a lack of intent, but rather, missing infrastructure, institutional inertia, unclear mandates, or poorly designed incentives that collectively contribute to this challenge. The hard work often falls to those without formal authority, including local organizers, frontline workers, junior engineers, and researchers who work across silos.
We’re asking: What does responsible AI implementation look like when it’s grounded rather than aspirational? What happens when AI ethics is shaped in classrooms, courtrooms, hospitals, union halls, and local governments? Who is doing the work of making responsible AI stick through innovation, repair, adaptation, and institutional resilience?
Most importantly: What are we willing to let go of to make room for what actually works?
Volume 7 represents the MAIEI global community coming together to build civic competence by showcasing practical solutions. We’re deeply grateful to all of you, our 17,500+ AI Ethics Brief subscribers, who have made this community possible. Your engagement, questions, and shared insights continue to shape how we approach these critical conversations about AI’s role in society.
We hope this report will serve as both a practitioner’s guide for policymakers, educators, community organizers, and researchers, and an entry point for anyone seeking to understand the broader landscape of AI ethics and responsible AI implementation in 2025. It’s designed to help readers see the forest from the trees, offering both tactical guidance and strategic perspectives on where responsible AI stands today, while serving as a historical artifact for future generations to understand this pivotal moment.
As MAIEI transitions to becoming a financially sustainable organization (see Open Letter: Moving Forward Together – MAIEI’s Next Chapter, December 2024), we’re expanding our impact while keeping our work open access, because building public understanding of AI’s societal impacts shouldn’t be behind paywalls.
Paid subscribers to The AI Ethics Brief will be highlighted in the Acknowledgment page of SAIER Volume 7, unless you indicate otherwise. If you’re already a subscriber and enjoy reading this newsletter, consider upgrading to directly support this work, be recognized, and help us build the civic infrastructure for long-term impact.
For organizations committed to advancing responsible AI implementation, we’re exploring strategic partnerships for SAIER Volume 7. These collaborations allow companies and philanthropic foundations to support independent, community-centred knowledge sharing, while demonstrating a genuine commitment to AI ethics beyond corporate statements. Partnership opportunities include report sponsorship, case study collaboration, and community engagement initiatives. If your organization is interested in supporting this work, please reach out at support@montrealethics.ai
If you have case studies, policy examples, or practical insights that have been successful (or unsuccessful) in real-world applications, please reach out by responding directly to this newsletter or emailing us at support@montrealethics.ai.
We’re particularly interested in:
-
Implementation stories that moved beyond paper to practice
-
Community-led initiatives that addressed AI challenges without formal authority
-
Institutional experiments that navigated AI adoption under constraints
-
Quiet failures and the lessons learned from them
We want honest accounts of what it takes to do this work when no one is watching: the blueprints being built quietly, rigorously, and in lockstep with and for the community. We recognize that no single report can fully capture the scope of this field. That’s why we’re actively seeking diverse perspectives for Volume 7: to document what’s working, what isn’t, what often goes unseen, and where the state of AI ethics stands in 2025.
Please share your thoughts with the MAIEI community:
The Alan Turing Institute is facing significant pressure from the UK Government to pivot its focus or risk losing funding. At the end of 2024, 93 workers signed a letter expressing a lack of confidence in the leadership team. In April 2025, the charity announced “Turing 2.0,” a pivot focusing on environmental sustainability, health and national security, which would involve cutting up to a quarter of current research projects.
Following the Strategic Defense Review in June 2025, UK Secretary of State for Science and Technology Peter Kyle sent a letter to the institute in July stating that it must focus on national security or face funding withdrawal. This month, workers launched a whistleblowing complaint accusing leadership of “misusing public funds, overseeing a “toxic internal culture”, and failing to deliver on the charity’s mission.” The institute has also seen high-profile departures, including former Chief Technology Officer Jonathan Starck in May 2025, amid reports that recommendations for modernization from current Chief Executive Jean Innes have not been implemented.
📌 MAIEI’s Take and Why It Matters:
The situation at the Alan Turing Institute represents a missed opportunity. While the institute has produced valuable research on important topics, including children and AI, its current predicament raises serious questions about accountability in publicly funded research institutions.
The broader issue concerns how non-governmental citizen representation in AI research can be better protected to avoid a similar situation. From our perspective, and reflected in this analysis of the institute, accountability is key. The institute’s governance structure across multiple founding universities created challenges in establishing a unified research agenda and central operational responsibility. What has transpired transforms the institute from an independent third-party charity into, in effect, an arm of the UK government, representing a significant loss for non-governmental AI research and independent oversight in the field.
Did we miss anything? Let us know in the comments below.
This edition of our AI Policy Corner, produced in partnership with the Governance and Responsible AI Lab (GRAIL) at Purdue University, examines the April 23rd Executive Order on Advancing Artificial Intelligence Education for American Youth, which focuses on integrating AI education into K-12 learning environments. The Executive Order establishes a framework that promotes the benefits of long-term AI usage in education through five key strategies: creating an AI Education Task Force, launching the Presidential Artificial Intelligence Challenge, and fostering public-private partnerships for improving education, training educators on AI applications, and expanding registered apprenticeships in AI-related fields.
While the Order emphasizes workforce development and preparing students for an AI-driven economy, it takes a notably different approach from previous federal AI education initiatives by focusing primarily on adoption and implementation rather than addressing potential risks or safeguards. This education-focused directive sits within the broader context of the Trump administration’s AI policy framework, as outlined in the July 2025 AI Action Plan (covered in Brief #170), though it predates that comprehensive strategy by several months.
As schools begin implementing these directives, questions remain about how this approach will address concerns around equitable access, student privacy, and the potential for AI systems to perpetuate educational inequities, issues that become more pressing as AI tools become even more embedded in fundamental learning environments.
To dive deeper, read the full article here.
At the Victoria Forum 2025, co-hosted by the University of Victoria and the Senate of Canada from August 24-26 in Victoria, BC, MAIEI joined lawmakers, scholars and civic leaders to examine how Canada can shape AI governance rooted in both global competitiveness and democratic values. On a panel moderated by Senator Rosemary Moodie, MAIEI emphasized the need to move beyond consultation toward co-creation, embedding diverse public perspectives into every stage of AI system design. Drawing from MAIEI’s work on building civic competence and shaping public understanding, we framed AI as a socio-technical system, where governance must address both technical and societal impacts of AI. Key insights included Canada’s unique position between global models, the importance of inclusive policymaking that reflects lived experience, and the risks of relying on voluntary standards. The conversation highlighted that truly democratic AI governance demands more than technical fixes. It requires public participation, meaningful inclusion, and policy frameworks that reflect Canada’s social complexity.
To dive deeper, read the full article here.
A comprehensive survey by Leger, reported by Hessie Jones for Forbes, reveals that Canadians remain deeply divided on artificial intelligence, with 34% viewing AI as beneficial for society while 36% consider it harmful. The study, which tracked AI adoption from February 2023 to August 2025, shows usage has more than doubled from 25% to 57%, driven primarily by younger adults aged 18-34 (83% usage) compared to just 34% among those 55 and older.
While chatbots dominate usage at 73%, they also generate the highest concerns, with 73% of Canadians believing AI chatbots should be prohibited from children’s games and websites. The survey highlights significant privacy concerns (83%) and worries about societal dependence (83%), with Canadians primarily holding AI companies responsible for potential harms (57%) rather than users (18%) or government (11%). Notably, 46% of users worry that frequent AI use might make them “intellectually lazy or lead to a decline in cognitive skills.”
Further, [Renjie] Butalid, of Montreal AI Ethics Institute, notes that the survey findings on privacy (83% concerned) and job displacement (78% see AI as a threat to human jobs), reveal where government leadership is most needed. “These aren’t just individual consumer choices, they’re systemic issues that require coordinated policy responses. When Canadians say they want companies to regulate AI systems more, they’re really asking government to set the rules of the game. Privacy protection and workforce transition support are exactly the kind of challenges where government tone-setting through clear standards, regulations, and investment priorities can make the difference between AI serving Canadian interests or leaving communities behind.”
These insights highlight the pressing need for comprehensive governance frameworks that address both the technical and societal dimensions of AI deployment, particularly as Canada continues to develop its regulatory approach in this rapidly evolving landscape.
To dive deeper, read the full article here.
Help us keep The AI Ethics Brief free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at montrealethics.ai/donate. Your support sustains our mission of democratizing AI ethics literacy and honours Abhishek Gupta’s legacy.
For corporate partnerships or larger contributions, please contact us at support@montrealethics.ai
Have an article, research paper, or news item we should feature? Leave us a comment below — we’d love to hear from you!
Ethics & Policy
Humans Only Cafe Explores Love, AI Ethics, and Feminism in Acclaimed Shanghai Debut

Shanghai, China, Sept. 02, 2025 (GLOBE NEWSWIRE) — The new sci-fi play Humans Only Cafe debuted to an audience of nearly 300 people at Wanping Theater in Shanghai, receiving strong reviews and sparking discussions on love, AI ethics, and feminism. Written and directed by playwright and actor Angelina Guo, the play challenges audiences to reconsider the values that shape human relationships and society’s perception of technology.
At the heart of Humans Only Cafe is the concept of love. Guo believes her generation undervalues love, calling this a tragic development. Through the play, she examines the intersection of love, feminist thought, shifting societal values, and the rapid advancements in artificial intelligence. The story invites audiences to question whether machines may someday be capable of love and whether humans can love machines in return.
The play also delves into AI ethics. Guo noted that many people dismiss machines as fake and fail to see the parallels between human behavior and structured programming. By drawing this comparison, the play raises important questions about the boundaries between humans and machines, while addressing concerns over humanity’s confidence in its ability to control its own creations.
Feminism and the deconstruction of social media trends are another central theme. Guo observed that some Chinese social media users promote the idea that women need only money and not love. She interprets this as a troubling reduction of human value to wealth and status. For her, feminism is not simply about financial success but about respecting women’s choices in all forms.
The play also highlights how cultural expressions reinforce patriarchal values. Guo points out that the popular Chinese phrase for “strong woman” carries male-centered assumptions, while an equivalent for “strong man” does not exist. She argues this reflects how society celebrates women’s success using standards defined by men.
Audience members described Humans Only Cafe as a play about courage that goes beyond feminism. An important female AI character challenges beliefs about robots by showcasing individuality and the capacity to love, while a human character rejects the “strong woman” archetype to embrace a more authentic identity.
Originally created as a 15-minute short piece, the complexity of the narrative led Guo to expand it into a mid-length play. Writing began in February 2025 and took six months to complete. Plans are now underway for a UK staging. The script will also be made available for under-resourced schools through Guo’s website, where schools can request permission to stage their own productions.
Angelina Guo, a student at Sevenoaks School in the UK, is the winner of the Harvard Book Prize and the Humanities Merit Prize for Philosophy. Her work explores the intersection of feminism, technology trends, AI advancements, and evolving social values.
For more information, visit https://www.angelinagqy.com/
Sarah Lambert Email: angelinaguo20080125@gmail.com Website: https://www.angelinagqy.com/
Ethics & Policy
Tech, Power and Perspective: Davidson College Grad Wins Gates Cambridge Scholarship to Study AI Ethics Through a Global Lens

Her interest in these areas has only deepened since graduating. After her original post-graduate employer, located in New York City, announced a hiring freeze, the freshly degreed Wildcat’s plans of getting some U.S.-based work experience and then applying to graduate school changed. She instead moved back to India and began working at a social impact consulting firm advising non-profits, followed by a move to the Mumbai-based organization Point of View (POV).
“At POV, I worked on projects building knowledge at the intersection of gender, sexuality and technology, specifically in the Indian context,” Kandhari said. “This has given me a thorough understanding of how the constituencies we work with — women from low-income backgrounds, sex workers, LGBTQ people and persons with disabilities — use and are impacted by technology. Through this work, I understand there are several factors that impede digital freedoms, including the gendered digital divide, family surveillance of use of digital devices, threat of technology-facilitated gender-based violence and cyber fraud, as well as improper safeguards for privacy and data protection.”
The experience exposed Kandhari to issues around ownership of technology, the consequences of how that ownership is connected to power and contributes to inequalities in the world.
“This work has been crucial to my knowledge and understanding, and it’s made me a better researcher, specifically in a non-academic organization,” she said, “which is much different from the research I did at Davidson.”
When Kandhari graduated from Davidson, she was recognized with the W.E.B. Du Bois Award for Excellence for her grasp of theory and methodology and excellent work through independent research. This award goes to a student demonstrating the skills and priorities that are central to sociology as a field of study and arena of advocacy.
-
Business4 days ago
The Guardian view on Trump and the Fed: independence is no substitute for accountability | Editorial
-
Tools & Platforms3 weeks ago
Building Trust in Military AI Starts with Opening the Black Box – War on the Rocks
-
Ethics & Policy1 month ago
SDAIA Supports Saudi Arabia’s Leadership in Shaping Global AI Ethics, Policy, and Research – وكالة الأنباء السعودية
-
Events & Conferences3 months ago
Journey to 1000 models: Scaling Instagram’s recommendation system
-
Jobs & Careers2 months ago
Mumbai-based Perplexity Alternative Has 60k+ Users Without Funding
-
Funding & Business2 months ago
Kayak and Expedia race to build AI travel agents that turn social posts into itineraries
-
Education2 months ago
VEX Robotics launches AI-powered classroom robotics system
-
Podcasts & Talks2 months ago
Happy 4th of July! 🎆 Made with Veo 3 in Gemini
-
Education2 months ago
AERDF highlights the latest PreK-12 discoveries and inventions
-
Podcasts & Talks2 months ago
OpenAI 🤝 @teamganassi