Ethics & Policy

Can AI Solve Accent Bias in CX? The Ethics of Voice Tech


In this exclusive CX Today interview, we sit down with Sanas to explore the cutting-edge world of AI-powered accent translation.

From improving customer experience to tackling ethical concerns, we dive deep into the implications of reshaping the way we communicate.

Join us as we discuss:

  • How AI accent translation enhances global communication
  • The ethical debate around voice modification and identity
  • Real-world applications for CX and business operations
  • What AI-driven accent translation means for the future of customer experience

Subscribe for the latest insights on AI, CX, and digital innovation.





The AI Ethics Brief #172: The State of AI Ethics in 2025: What’s Actually Working


Welcome to The AI Ethics Brief, a bi-weekly publication by the Montreal AI Ethics Institute. We publish every other Tuesday at 10 AM ET. Follow MAIEI on Bluesky and LinkedIn.

❤️ Support Our Work.

  • One Question We’re Pondering: What does it actually take to move responsible AI from theory to practice, and who is doing that work when no one is watching? We explore the quiet, persistent efforts happening in classrooms, healthcare systems, cities, courtrooms, union halls, and communities that rarely make headlines but are shaping AI’s real-world impact.

  • SAIER Volume 7 Returns: We officially announce the State of AI Ethics Report (SAIER) Volume 7: “AI at the Crossroads: A Practitioner’s Guide to Community-Centred Solutions,” scheduled for November 4, 2025. After a pause since February 2022, this report focuses on practical, replicable examples of responsible AI implementation grounded in real-world experience rather than aspirational principles.

  • The Alan Turing Institute Crisis: We examine how the UK’s premier AI research institute faces potential funding withdrawal unless it pivots to a national security focus, a shift that represents a significant loss for independent, non-governmental AI research and raises questions about accountability in publicly funded research institutions.

  • U.S. AI Education Push: Our AI Policy Corner with GRAIL at Purdue University analyzes the April 23rd Executive Order on Advancing AI Education for American Youth, which emphasizes adoption and implementation over risk mitigation and sits within the Trump administration’s broader AI policy framework.

  • Canadian AI Governance Insights: From the Victoria Forum 2025 and new public opinion data showing Canadians deeply divided on AI (34% beneficial vs. 36% harmful), we explore Canada’s unique position in developing democratic AI governance that moves beyond consultation toward co-creation.

What connects these stories: The recognition that responsible AI implementation and AI ethics happen not in boardrooms or policy papers but in the daily work of people building civic competence and practical solutions at the intersection of technology and community needs.

As conversations around AI governance grow louder (see Brief #170: How the US and China Are Reshaping AI Geopolitics), what we’re hearing behind the scenes is quieter, more persistent, and perhaps more urgent. Colleagues across sectors are asking the same thing in different languages: What’s working, what isn’t, and what can we learn from both?

Not in theory or in press releases, but in classrooms trying to preserve academic integrity, in healthcare systems navigating algorithmic risk, in cities designing procurement standards, in community-led efforts resisting surveillance they never consented to, and more.

Over the past year, this question has resurfaced repeatedly for us at MAIEI: at Zurich’s Point Zero Forum, the recently held Victoria Forum in British Columbia (more on this below in Insights & Perspectives), in guest lectures at universities, and through hundreds of emails and conversations. It has also been nearly a year since the passing of our dear friend and collaborator, Abhishek Gupta, founder and principal researcher at MAIEI. In that time, this question has become increasingly persistent, and his reminder to keep moving “onwards and upwards” has guided our search for answers as we rebuild and reimagine MAIEI’s role in the community.

And yet, the answers are rarely loud. They appear in quiet experiments, shared reflections, and the daily work of people operating at the edges of institutions and at the centre of communities. This persistent need for connection and practical guidance is why we’re bringing the State of AI Ethics Report (SAIER) back, and are committed to doing so on an annual basis. Returning to our roots of building civic competence and shaping public understanding on the societal impacts of AI, the SAIER represents both a tribute to Abhishek’s legacy and a cornerstone of MAIEI’s path forward.


After a pause since February 2022, we’re officially announcing SAIER Volume 7: AI at the Crossroads: A Practitioner’s Guide to Community-Centred Solutions, scheduled for release on November 4, 2025.

Following hundreds of conversations and a close review of over 800 pieces published on the MAIEI website since 2018, one insight stood out: the field needs connection and interpretation. There’s a growing recognition that isolated efforts across sectors contain valuable knowledge that rarely gets shared or built upon, including lessons from quiet failures that never made headlines.

The world also looks fundamentally different than when Volume 6 was published in February 2022. The ChatGPT paradigm now dominates (see Brief #171: The Contradictions Defining AI’s Future for our commentary on GPT-5 and GPT-OSS), reshaping everything from student homework to healthcare diagnostics, from corporate decision-making to creative industries. In an era where foundation models are deployed before safety frameworks are in place, where open-source agents outperform flagship releases, and where community groups write their own rules amid policy gaps, the demand for practical, replicable examples for communities to adapt and adopt has become urgent.

Volume 7 is built on a simple premise: responsible AI has always been as much about capacity as it is about commitment. The gap between theoretical principles and practical implementation rarely reflects a lack of intent; more often, it reflects missing infrastructure, institutional inertia, unclear mandates, and poorly designed incentives. The hard work often falls to those without formal authority, including local organizers, frontline workers, junior engineers, and researchers who work across silos.

We’re asking: What does responsible AI implementation look like when it’s grounded rather than aspirational? What happens when AI ethics is shaped in classrooms, courtrooms, hospitals, union halls, and local governments? Who is doing the work of making responsible AI stick through innovation, repair, adaptation, and institutional resilience?

Most importantly: What are we willing to let go of to make room for what actually works?

Volume 7 represents the MAIEI global community coming together to build civic competence by showcasing practical solutions. We’re deeply grateful to all of you, our 17,500+ AI Ethics Brief subscribers, who have made this community possible. Your engagement, questions, and shared insights continue to shape how we approach these critical conversations about AI’s role in society.

We hope this report will serve as both a practitioner’s guide for policymakers, educators, community organizers, and researchers, and an entry point for anyone seeking to understand the broader landscape of AI ethics and responsible AI implementation in 2025. It’s designed to help readers see both the forest and the trees, offering tactical guidance and strategic perspectives on where responsible AI stands today, while serving as a historical artifact for future generations to understand this pivotal moment.

As MAIEI transitions to becoming a financially sustainable organization (see Open Letter: Moving Forward Together – MAIEI’s Next Chapter, December 2024), we’re expanding our impact while keeping our work open access, because building public understanding of AI’s societal impacts shouldn’t be behind paywalls.

Paid subscribers to The AI Ethics Brief will be highlighted on the Acknowledgments page of SAIER Volume 7, unless you indicate otherwise. If you’re already a subscriber and enjoy reading this newsletter, consider upgrading to directly support this work, be recognized, and help us build the civic infrastructure for long-term impact.

For organizations committed to advancing responsible AI implementation, we’re exploring strategic partnerships for SAIER Volume 7. These collaborations allow companies and philanthropic foundations to support independent, community-centred knowledge sharing, while demonstrating a genuine commitment to AI ethics beyond corporate statements. Partnership opportunities include report sponsorship, case study collaboration, and community engagement initiatives. If your organization is interested in supporting this work, please reach out at support@montrealethics.ai.

If you have case studies, policy examples, or practical insights that have been successful (or unsuccessful) in real-world applications, please reach out by responding directly to this newsletter or emailing us at support@montrealethics.ai.

We’re particularly interested in:

  • Implementation stories that moved beyond paper to practice

  • Community-led initiatives that addressed AI challenges without formal authority

  • Institutional experiments that navigated AI adoption under constraints

  • Quiet failures and the lessons learned from them

We want honest accounts of what it takes to do this work when no one is watching: the blueprints being built quietly, rigorously, and in lockstep with and for the community. We recognize that no single report can fully capture the scope of this field. That’s why we’re actively seeking diverse perspectives for Volume 7: to document what’s working, what isn’t, what often goes unseen, and where the state of AI ethics stands in 2025.

Please share your thoughts with the MAIEI community:

Leave a comment

The Alan Turing Institute is facing significant pressure from the UK Government to pivot its focus or risk losing funding. At the end of 2024, 93 workers signed a letter expressing a lack of confidence in the leadership team. In April 2025, the charity announced “Turing 2.0,” a pivot focusing on environmental sustainability, health and national security, which would involve cutting up to a quarter of current research projects.

Following the Strategic Defence Review in June 2025, UK Secretary of State for Science and Technology Peter Kyle sent a letter to the institute in July stating that it must focus on national security or face funding withdrawal. This month, workers launched a whistleblowing complaint accusing leadership of misusing public funds, overseeing a “toxic internal culture,” and failing to deliver on the charity’s mission. The institute has also seen high-profile departures, including former Chief Technology Officer Jonathan Starck in May 2025, amid reports that recommendations for modernization from current Chief Executive Jean Innes have not been implemented.

📌 MAIEI’s Take and Why It Matters:

The situation at the Alan Turing Institute represents a missed opportunity. While the institute has produced valuable research on important topics, including children and AI, its current predicament raises serious questions about accountability in publicly funded research institutions.

The broader issue concerns how non-governmental citizen representation in AI research can be better protected to avoid a similar situation. From our perspective, and as reflected in this analysis of the institute, accountability is key. The institute’s governance structure across multiple founding universities created challenges in establishing a unified research agenda and central operational responsibility. What has transpired transforms the institute from an independent third-party charity into, in effect, an arm of the UK government, representing a significant loss for non-governmental AI research and independent oversight in the field.

Did we miss anything? Let us know in the comments below.

Leave a comment

This edition of our AI Policy Corner, produced in partnership with the Governance and Responsible AI Lab (GRAIL) at Purdue University, examines the April 23rd Executive Order on Advancing Artificial Intelligence Education for American Youth, which focuses on integrating AI education into K-12 learning environments. The Executive Order establishes a framework that promotes the benefits of long-term AI usage in education through five key strategies: creating an AI Education Task Force, launching the Presidential Artificial Intelligence Challenge, fostering public-private partnerships to improve education, training educators on AI applications, and expanding registered apprenticeships in AI-related fields.

While the Order emphasizes workforce development and preparing students for an AI-driven economy, it takes a notably different approach from previous federal AI education initiatives by focusing primarily on adoption and implementation rather than addressing potential risks or safeguards. This education-focused directive sits within the broader context of the Trump administration’s AI policy framework, as outlined in the July 2025 AI Action Plan (covered in Brief #170), though it predates that comprehensive strategy by several months.

As schools begin implementing these directives, questions remain about how this approach will address equitable access, student privacy, and the potential for AI systems to perpetuate educational inequities. These concerns become more pressing as AI tools become more deeply embedded in learning environments.

To dive deeper, read the full article here.

At the Victoria Forum 2025, co-hosted by the University of Victoria and the Senate of Canada from August 24-26 in Victoria, BC, MAIEI joined lawmakers, scholars and civic leaders to examine how Canada can shape AI governance rooted in both global competitiveness and democratic values. On a panel moderated by Senator Rosemary Moodie, MAIEI emphasized the need to move beyond consultation toward co-creation, embedding diverse public perspectives into every stage of AI system design. Drawing from MAIEI’s work on building civic competence and shaping public understanding, we framed AI as a socio-technical system, where governance must address both technical and societal impacts of AI. Key insights included Canada’s unique position between global models, the importance of inclusive policymaking that reflects lived experience, and the risks of relying on voluntary standards. The conversation highlighted that truly democratic AI governance demands more than technical fixes. It requires public participation, meaningful inclusion, and policy frameworks that reflect Canada’s social complexity.

To dive deeper, read the full article here.

A comprehensive survey by Leger, reported by Hessie Jones for Forbes, reveals that Canadians remain deeply divided on artificial intelligence, with 34% viewing AI as beneficial for society while 36% consider it harmful. The study, which tracked AI adoption from February 2023 to August 2025, shows usage has more than doubled from 25% to 57%, driven primarily by younger adults aged 18-34 (83% usage) compared to just 34% among those 55 and older.

While chatbots dominate usage at 73%, they also generate the highest concerns, with 73% of Canadians believing AI chatbots should be prohibited from children’s games and websites. The survey highlights significant privacy concerns (83%) and worries about societal dependence (83%), with Canadians primarily holding AI companies responsible for potential harms (57%) rather than users (18%) or government (11%). Notably, 46% of users worry that frequent AI use might make them “intellectually lazy or lead to a decline in cognitive skills.”

Further, Renjie Butalid of the Montreal AI Ethics Institute notes that the survey findings on privacy (83% concerned) and job displacement (78% see AI as a threat to human jobs) reveal where government leadership is most needed. “These aren’t just individual consumer choices, they’re systemic issues that require coordinated policy responses. When Canadians say they want companies to regulate AI systems more, they’re really asking government to set the rules of the game. Privacy protection and workforce transition support are exactly the kind of challenges where government tone-setting through clear standards, regulations, and investment priorities can make the difference between AI serving Canadian interests or leaving communities behind.”

These insights highlight the pressing need for comprehensive governance frameworks that address both the technical and societal dimensions of AI deployment, particularly as Canada continues to develop its regulatory approach in this rapidly evolving landscape.

To dive deeper, read the full article here.

Help us keep The AI Ethics Brief free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at montrealethics.ai/donate. Your support sustains our mission of democratizing AI ethics literacy and honours Abhishek Gupta’s legacy.

For corporate partnerships or larger contributions, please contact us at support@montrealethics.ai

Have an article, research paper, or news item we should feature? Leave us a comment below — we’d love to hear from you!





Humans Only Cafe Explores Love, AI Ethics, and Feminism in Acclaimed Shanghai Debut


Shanghai, China, Sept. 02, 2025 (GLOBE NEWSWIRE) — The new sci-fi play Humans Only Cafe debuted to an audience of nearly 300 people at Wanping Theater in Shanghai, receiving strong reviews and sparking discussions on love, AI ethics, and feminism. Written and directed by playwright and actor Angelina Guo, the play challenges audiences to reconsider the values that shape human relationships and society’s perception of technology.


At the heart of Humans Only Cafe is the concept of love. Guo believes her generation undervalues love, calling this a tragic development. Through the play, she examines the intersection of love, feminist thought, shifting societal values, and the rapid advancements in artificial intelligence. The story invites audiences to question whether machines may someday be capable of love and whether humans can love machines in return.

The play also delves into AI ethics. Guo noted that many people dismiss machines as fake and fail to see the parallels between human behavior and structured programming. By drawing this comparison, the play raises important questions about the boundaries between humans and machines, while addressing concerns over humanity’s confidence in its ability to control its own creations.

Feminism and the deconstruction of social media trends are another central theme. Guo observed that some Chinese social media users promote the idea that women need only money and not love. She interprets this as a troubling reduction of human value to wealth and status. For her, feminism is not simply about financial success but about respecting women’s choices in all forms.

The play also highlights how cultural expressions reinforce patriarchal values. Guo points out that the popular Chinese phrase for “strong woman” carries male-centered assumptions, while an equivalent for “strong man” does not exist. She argues this reflects how society celebrates women’s success using standards defined by men.

Audience members described Humans Only Cafe as a play about courage that goes beyond feminism. An important female AI character challenges beliefs about robots by showcasing individuality and the capacity to love, while a human character rejects the “strong woman” archetype to embrace a more authentic identity.

Humans Only Cafe was originally created as a 15-minute short piece, but the complexity of the narrative led Guo to expand it into a mid-length play. Writing began in February 2025 and took six months to complete. Plans are now underway for a UK staging. The script will also be made available to under-resourced schools through Guo’s website, where schools can request permission to stage their own productions.

Angelina Guo, a student at Sevenoaks School in the UK, is the winner of the Harvard Book Prize and the Humanities Merit Prize for Philosophy. Her work explores the intersection of feminism, technology trends, AI advancements, and evolving social values.

For more information, visit https://www.angelinagqy.com/


Sarah Lambert
Email: angelinaguo20080125@gmail.com
Website: https://www.angelinagqy.com/





Tech, Power and Perspective: Davidson College Grad Wins Gates Cambridge Scholarship to Study AI Ethics Through a Global Lens


Kandhari’s interest in these areas has only deepened since graduating. After her original post-graduate employer, located in New York City, announced a hiring freeze, the freshly degreed Wildcat’s plans to gain U.S.-based work experience before applying to graduate school changed. She instead moved back to India and began working at a social impact consulting firm advising non-profits, followed by a move to the Mumbai-based organization Point of View (POV).

“At POV, I worked on projects building knowledge at the intersection of gender, sexuality and technology, specifically in the Indian context,” Kandhari said. “This has given me a thorough understanding of how the constituencies we work with — women from low-income backgrounds, sex workers, LGBTQ people and persons with disabilities — use and are impacted by technology. Through this work, I understand there are several factors that impede digital freedoms, including the gendered digital divide, family surveillance of use of digital devices, threat of technology-facilitated gender-based violence and cyber fraud, as well as improper safeguards for privacy and data protection.”

The experience exposed Kandhari to issues around the ownership of technology and the ways that ownership is tied to power and contributes to inequality in the world.

“This work has been crucial to my knowledge and understanding, and it’s made me a better researcher, specifically in a non-academic organization,” she said, “which is much different from the research I did at Davidson.”

When Kandhari graduated from Davidson, she was recognized with the W.E.B. Du Bois Award for Excellence for her grasp of theory and methodology and excellent work through independent research. This award goes to a student demonstrating the skills and priorities that are central to sociology as a field of study and arena of advocacy.


