AI ethics in iGaming governance

In a feature article published in SiGMA Magazine issue 34 and distributed at SiGMA Euro-Med earlier this month, Dr Stefano Filletti, Managing Partner at Filletti & Filletti Advocates and Head of the Department of Criminal Law at the University of Malta, examines how AI ethics and criminal law converge in iGaming boardrooms, and what directors must do to evidence continuous diligence, protect customers and safeguard growth.

Directors in iGaming now face a dual reality: AI systems shape customer journeys, risk scoring and product iteration, while courts assess diligence through tangible, continuous oversight rather than intent. Dr Filletti’s guidance is clear: “Due diligence is not passive, it is an ongoing active process,” with decision logs, model update trails and structured briefings forming the documentary backbone of lawful governance in fast-moving release cycles.

Structured oversight

Maltese law recognises corporate liability where an officer commits an offence for the company’s gain, with personal exposure layered on through vicarious liability, making structured oversight essential at board level. Under Article 121D of the Criminal Code, once a company is charged, the burden shifts to the officer to prove both ignorance of the offence and exhaustive supervision, making minutes, audit trails and independent verification decisive in court. “Turning a blind eye is not a valid defence,” says Dr Filletti, reinforcing that written workflows, incident logs and dashboards provide proof long before prosecutors request it.

AI ethics in practice

Governance must keep pace with agile development, with pre-release AI briefings aligned to the sprint cadence and risk registers that escalate issues early for legal, risk and technology sign-off. Dr Filletti argues that “growth and safety are allies, not opponents,” advocating live supervision over quarterly audits and direct board visibility over anomalies such as unexplained deposits or sudden shifts in player patterns. This approach reframes compliance as a living discipline, where incident drills and scenario workshops ensure plans are effective under pressure, rather than just on paper.

“Inaction is not acceptable.”

Assistance software and advisory tools in poker intensify regulatory focus, particularly where money laundering or unfair advantage risks arise, demanding immutable logs and routine third‑party audits. Dr Filletti calls for clearer statutory frameworks but warns that regulatory cycles lag, placing responsibility on boards to risk‑map integrations, ring‑fence data flows and document each assessment as a potential offence vector. “Inaction is not acceptable,” he notes, urging leadership to interrogate every integration and invest in visibility before facing an evidentiary burden that favours prosecutors.
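
In practice, an “immutable log” for such integrations often means an append-only, hash-chained record in which each entry carries a digest of its predecessor, so any retroactive edit breaks verification. The sketch below illustrates that pattern only; it is not drawn from the article, and the field names and events are hypothetical.

```python
import hashlib
import json
import time

def append_entry(log: list, event: dict) -> dict:
    """Append an event to a hash-chained log; each entry commits to the one before it."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"ts": time.time(), "event": event, "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    log.append(record)
    return record

def verify(log: list) -> bool:
    """Recompute every digest and confirm the chain is unbroken."""
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev_hash"] != prev or recomputed != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

# Hypothetical integration assessments being documented as they happen
audit_log = []
append_entry(audit_log, {"integration": "poker-assist-tool", "assessment": "AML risk reviewed", "sign_off": ["legal", "risk"]})
append_entry(audit_log, {"integration": "poker-assist-tool", "model_update": "v2.3", "sign_off": ["technology"]})
print(verify(audit_log))  # True; tampering with any earlier entry would make this False
```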

Proof, presumption, prevention

Vicarious liability reverses the usual courtroom dynamic, requiring directors to prove they had no knowledge and exercised full diligence once wrongdoing is established, a standard Dr Filletti deems compatible with a fair trial. Meeting that standard today means live dashboards for high‑risk decisions, scheduled workshops for evolving threats and a dedicated compliance technologist managing an inventory of algorithmic systems, thresholds and audits for real‑time board queries. “Compliance is a living discipline; continuous oversight protects innovation better than reactive litigation,” he says, advocating foresight over apology as the foundation of resilient growth.
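
The “inventory of algorithmic systems, thresholds and audits” can be as mundane as a structured register that the board can interrogate on demand. Below is a minimal, hypothetical sketch of such a register; the systems, fields and thresholds are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AlgorithmicSystem:
    name: str
    purpose: str               # e.g. AML risk scoring, bonus targeting
    risk_tier: str             # "high", "medium" or "low"
    decision_threshold: float  # score above which automated action is taken
    last_audit: date
    owner: str                 # accountable executive

registry = [
    AlgorithmicSystem("deposit-anomaly-model", "AML risk scoring", "high", 0.85, date(2025, 9, 1), "Head of Risk"),
    AlgorithmicSystem("player-churn-model", "retention offers", "medium", 0.60, date(2025, 6, 15), "Head of CRM"),
]

def overdue_high_risk(systems, audited_since: date):
    """The kind of real-time board query envisaged: which high-risk systems lack a recent audit?"""
    return [s.name for s in systems if s.risk_tier == "high" and s.last_audit < audited_since]

print(overdue_high_risk(registry, audited_since=date(2025, 10, 1)))  # ['deposit-anomaly-model']
```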

Criminal law and AI ethics converge in the boardroom, and only visible, continuous diligence can safeguard innovation, reputation and growth in the iGaming sector. With Article 121D shaping corporate and personal exposure, directors who document, test and question their AI stack today will be best placed to withstand tomorrow’s scrutiny. 

Don’t miss out

SiGMA Central Europe takes place from 3 to 6 November in Rome, bringing operators, regulators and legal experts together to advance practical standards for AI oversight and criminal risk in iGaming. Register to join the conversation and equip leadership teams with the frameworks and tools needed for compliant, sustainable growth.




Tools, Ethics, and Job Security



In the evolving world of corporate operations, artificial intelligence is fundamentally altering what workers demand from their employers, pushing companies to adapt or risk losing top talent. Employees now expect AI tools to handle mundane tasks, freeing them for more creative and strategic work, according to a recent analysis by The Bliss Group. This shift isn’t just about efficiency; it’s reshaping job satisfaction, with surveys showing that 68% of workers believe AI will enhance their roles rather than replace them, provided it’s implemented thoughtfully.

Beyond basic automation, AI is fostering expectations for personalized career development. Workers anticipate AI-driven platforms to offer tailored learning paths, predictive analytics for skill gaps, and even virtual mentors that guide professional growth. This personalization extends to work-life balance, where AI schedulers optimize calendars to prevent burnout, a trend highlighted in McKinsey’s 2025 report on AI in the workplace, which notes that only 1% of companies feel fully mature in their AI adoption, yet nearly all are investing heavily.

AI’s Role in Redefining Productivity and Collaboration

As AI integrates deeper into daily workflows, employee expectations around productivity are soaring. Tools like generative AI assistants are no longer novelties but necessities, with workers demanding seamless integration into collaboration platforms. A Goldman Sachs analysis from August 2025 projects that AI could displace some jobs in the near term but ultimately create new opportunities, emphasizing the need for upskilling programs that align with these tools.

Collaboration is another area undergoing transformation. Employees now expect AI to facilitate real-time, cross-functional teamwork, such as automated translation in global meetings or intelligent summarization of discussions. Posts on X from industry observers, including predictions by tech executives, underscore this sentiment; one notes that by mid-2025, organizations might deploy 50 to 500 AI agents to automate workflows, enhancing the human-machine partnerships detailed in IBM’s insights.

Navigating Job Security and Ethical Concerns

Amid these advancements, concerns about job security are prompting employees to seek transparency from employers. Recent news from CNBC indicates that while AI’s impact on the workforce is “small but not zero,” economic uncertainty amplifies fears, leading workers to demand clear reskilling initiatives. Statistics from DemandSage’s 2025 reports reveal that AI could automate 30-40% of white-collar tasks, yet create up to 170 million new roles by 2030, offsetting displacements.

Ethically, employees are pushing for AI systems that prioritize fairness and bias mitigation. This includes expectations for data privacy in AI-driven performance evaluations and inclusive design that doesn’t exacerbate inequalities. PwC’s AI Jobs Barometer highlights how wages rise fastest in AI-exposed roles, but only if companies address these ethical dimensions, fostering trust.

Emerging Trends in Workplace Technology Adoption

Looking ahead, workplace technology trends for 2025 point to AI’s convergence with IoT and edge computing, enabling smarter office environments. Employees expect adaptive workspaces where AI adjusts lighting, temperature, and even meeting agendas based on real-time data, as explored in Appinventiv’s recent blog. This integration promises efficiency gains, but it also raises the bar for IT departments, which, per X posts echoing Jensen Huang’s views, may evolve into “HR for AI agents” managing digital workers.

Adoption challenges persist, with Digit.fyi’s 2025 DEX Report warning that poor tech integration undermines AI productivity. Companies must invest in user-friendly interfaces and training to meet these expectations, ensuring AI enhances rather than hinders the human element.

The Broader Implications for Corporate Strategy

For industry leaders, this rewiring demands a strategic overhaul. Forward-thinking firms are embedding AI into talent management, using predictive models to forecast employee needs and retention risks. Workday’s insights suggest AI empowers workers to be more creative, shifting roles toward strategic thinking.

Ultimately, as AI continues to evolve, employee expectations will drive innovation, compelling organizations to balance technological prowess with human-centric policies. Those that succeed will not only boost productivity but also cultivate a loyal, engaged workforce ready for the future.




AI Widens B2B-B2C Marketing Divide in 2025: Trends and Ethics



As artificial intelligence reshapes marketing strategies, the divide between business-to-business and business-to-consumer approaches is growing sharper, particularly as we head into 2025. In B2C marketing, AI tools are increasingly focused on hyper-personalization at scale, enabling brands to tailor experiences in real time based on consumer behavior. For instance, e-commerce giants are using predictive analytics to anticipate purchases, with voice commerce projected to hit $40 billion in U.S. sales next year, according to recent insights from WebProNews. This consumer-centric push contrasts with B2B, where AI emphasizes long-term relationship building through data-driven insights and automation.

Meanwhile, B2B marketers are leveraging AI for more complex tasks like account-based marketing and lead scoring, often integrating it with CRM systems to forecast buyer intent over extended sales cycles. A post on X from Insights Opinion highlights how AI enables personalized content at scale in B2B, predicting customer behavior with high accuracy, which aligns with broader industry shifts toward automation.
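
To make the lead-scoring idea concrete, here is a minimal sketch of how a CRM-derived model might score a lead’s likelihood of converting; the features, figures and routing logic are hypothetical and not taken from the cited post or reports.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical CRM features per lead: [pages_viewed, emails_opened, demo_requested, log(company_size)]
X = np.array([
    [12, 5, 1, 6.2],
    [ 2, 0, 0, 4.1],
    [ 8, 3, 1, 5.5],
    [ 1, 1, 0, 3.9],
    [15, 7, 1, 7.0],
    [ 3, 2, 0, 4.8],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = lead converted to an opportunity

model = LogisticRegression().fit(X, y)

# Score a new lead: the predicted probability of conversion becomes the "lead score"
new_lead = np.array([[9, 4, 1, 5.8]])
score = model.predict_proba(new_lead)[0, 1]
print(f"lead score: {score:.2f}")  # route to sales if above an agreed threshold
```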

Personalization Takes Center Stage in Consumer Markets

In the B2C realm, AI’s role in personalization is evolving rapidly, with tools analyzing vast datasets from social media and browsing history to create dynamic campaigns. According to a detailed analysis in the HubSpot blog, over 70% of B2C marketers are adopting AI for content creation and customer segmentation, far outpacing B2B adoption rates. This is evident in trends like AI-driven social commerce, expected to reach $80 billion globally by 2025, as noted in WebProNews reports, where augmented reality reduces returns by 22% through virtual try-ons.

B2C strategies also prioritize ethical AI use amid privacy concerns, with zero-party data collection becoming standard. Leadership sentiment, as captured in HubSpot’s survey data, shows B2C executives are more optimistic about AI’s immediate ROI, investing in chatbots and recommendation engines that boost engagement instantly.

Relationship-Driven AI in Business Transactions

Shifting to B2B, AI trends for 2025 underscore a focus on predictive analytics for sales funnels and workflow management. McKinsey insights, referenced in an 1827 Marketing article, reveal that 71% of businesses have adopted generative AI, yet many struggle with strategic implementation, particularly in B2B where decisions involve multiple stakeholders. Tools like agentic AI are rewriting lead scoring and account-based marketing, allowing real-time adaptations, as discussed in recent X posts from users like Suneet Bhatia.

Furthermore, B2B marketers are integrating AI with sustainability goals, using data-driven tactics to align with ethical practices. A WebProNews piece on 2025 sales strategies emphasizes how AI enables multi-channel personalization, potentially boosting conversions by 20% for small and medium-sized businesses through A/B testing and predictive modeling.
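
The claimed conversion lift from A/B testing is, in the end, a comparison of two proportions. A minimal sketch of that check follows, using a two-sided two-proportion z-test; the traffic and conversion numbers are hypothetical.

```python
import math

def ab_test_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: is variant B's conversion rate different from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                       # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))    # standard error under H0
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
    return p_b - p_a, z, p_value

# Hypothetical campaign: baseline 5.0% vs AI-personalised 6.0% conversion on 10,000 visitors each
lift, z, p = ab_test_z(conv_a=500, n_a=10_000, conv_b=600, n_b=10_000)
print(f"lift={lift:.2%}, z={z:.2f}, p={p:.4f}")  # a small p suggests the lift is not just noise
```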

Convergences and Divergences in AI Adoption

While differences persist, some overlaps are emerging: both sectors are embracing AI for content automation, with B2C leaning toward creative tools like Jasper.ai and B2B favoring platforms for in-depth analytics. The HubSpot blog points out similarities in top use cases, such as email marketing and SEO optimization, but notes B2B’s slower pace due to regulatory hurdles. Recent news from Ad Age illustrates how AI is making B2B content more human-like, fostering trust in longer sales processes.

Privacy and blockchain integration represent another shared challenge, with 2025 trends pushing for transparent data use. As per WebProNews analytics trends, cross-channel attribution powered by AI will be crucial, helping both B2B and B2C marketers navigate cookie-less futures.

Future Implications for Marketers

Looking ahead, B2B may catch up by focusing on AI governance, with leaders like Google Cloud and IBM leading in agentic AI, as per ISG Software Research findings shared on X. In contrast, B2C’s agility could drive innovations like autonomous agents, transforming e-commerce. An Exploding Topics report on B2B trends through 2027 predicts increased use of video marketing enhanced by AI, bridging gaps with B2C’s live streaming boom.

Ultimately, success in 2025 will hinge on balancing innovation with ethics. Marketers in both domains must adapt, with B2C pushing speed and B2B emphasizing depth, as AI continues to redefine engagement and efficiency across the board.




The AI Ethics Brief #173: Power, Policy, and Practice


Welcome to The AI Ethics Brief, a bi-weekly publication by the Montreal AI Ethics Institute. We publish every other Tuesday at 10 AM ET. Follow MAIEI on Bluesky and LinkedIn.

❤️ Support Our Work.

Writing from Oxford’s Wadham College this week, where we’re exploring “Civilisation on the Edge,” we’re struck by how the challenges facing AI governance mirror broader questions about institutional adaptation in times of rapid change.

  • We share our call for case studies and examples for the State of AI Ethics Report Volume 7, seeking real-world implementation stories and community-driven insights as we build a practitioner’s guide for navigating AI challenges in 2025.

  • We examine how Silicon Valley is embedding itself within the military industrial complex through initiatives like Detachment 201, where tech executives from OpenAI, Meta, Palantir, and Thinking Machines Lab are commissioned as lieutenant colonels. Meanwhile, companies abandon previous policies against military involvement as artists boycott platforms with defense investments.

  • Our AI Policy Corner with GRAIL at Purdue University explores contrasting state approaches to AI mental health legislation, comparing Illinois’s restrictive model requiring professional oversight with New York’s transparency-focused framework, as lawmakers respond to AI-related teen suicides with divergent regulatory strategies.

  • We investigate the psychological risks of AI companionship beyond dependency, revealing how social comparison with perfect AI companions can devalue human relationships, creating a “Companionship-Alienation Irony” where tools designed to reduce loneliness may increase isolation.

  • Our Recess series with Encode Canada examines Canada’s legislative gaps around non-consensual deepfakes, analyzing how current laws may not cover synthetic intimate images and comparing policy solutions from British Columbia and the United States.

What connects these stories: The persistent tension between technological capability and institutional readiness. Whether examining military AI integration, mental health legislation, psychological manipulation, or legal frameworks for synthetic media, each story reveals how communities and institutions are scrambling to govern technologies that outpace traditional regulatory mechanisms. These cases illuminate the urgent need for governance approaches that center human agency, democratic accountability, and community-driven solutions rather than accepting technological determinism as inevitable.

This week, our team at the Montreal AI Ethics Institute is taking part in The Wadham Experience, a week-long leadership program hosted at Oxford’s Wadham College. The program, Thinking Critically: Civilisation on the Edge, invites participants to reflect on the systems, stories, and power structures that have shaped societies and how they must evolve to meet this moment of profound change.

As we sit in these historic rooms discussing democracy and demagoguery, myth and modernity, we’re also shaping the next phase of our work: The State of AI Ethics Report, Volume 7 (AI at the Crossroads: A Practitioner’s Guide to Community-Centred Solutions), which we announced in Brief #172 and will release on November 4, 2025.

This year’s report is different. We’re building it not just as a landscape analysis, but as a practical guide for those working on AI challenges in communities, institutions, and movements. It is structured to offer case studies, toolkits, and implementation stories from around the world, grounded in real-world applications: what’s working, what’s not, and what’s next.

The questions we’re grappling with at Oxford feel particularly urgent in 2025: What kind of AI governance do we build when institutions lag behind? How do we govern technologies that evolve faster than our institutions can adapt? What happens when communities need AI solutions but lack formal authority to regulate platforms or shape policy? How do we move beyond corporate principles and policy frameworks to actual implementation in messy, resource-constrained environments?

The conversations here at Wadham remind us that societies have faced technological disruption before. The printing press reshaped information flows. Industrialization transformed labour and social structures. But AI presents unique challenges: its speed of deployment, its capacity for autonomous decision-making, and its embedding into virtually every aspect of social life.

SAIER Volume 7 will cover five interconnected parts:

  1. Foundations & Governance: How governments, regions, and communities are shaping AI policy in 2025, from superpower competition to middle-power innovation and grassroots governance experiments.

  2. Social Justice & Equity: Examining AI’s impact on democratic participation, algorithmic justice, surveillance and privacy rights, and environmental justice, with particular attention to how communities are developing their own accountability mechanisms and responding to AI’s growing energy and infrastructure costs.

  3. Sectoral Applications: AI ethics in healthcare, education, labour, the arts, and military contexts, focusing on what happens when AI systems meet real-world constraints and competing values.

  4. Emerging Tech: Governing agentic systems that act independently, community-controlled AI infrastructure, and Indigenous approaches to AI stewardship that center long-term thinking and data sovereignty.

  5. Collective Action: How communities are building AI literacy, organizing for worker rights, funding alternative models, and creating public sector leadership that serves democratic values.

Throughout the report, we are asking grounded questions:

  • How are small governments and nonprofits actually deploying responsible AI under tight resource constraints?

  • What did communities learn when their AI bias interventions didn’t work?

  • What happened when workers tried to stop AI surveillance in the workplace, and what can others learn from those efforts?

  • Where are the creative models of AI that are truly community-controlled rather than corporate-managed? And more.

While we’re curating authors for the chapters and sections of this report, we’re also inviting contributions from those working directly on the ground. We’re not looking for polished case studies or success stories that fit neatly into academic frameworks. We’re seeking the work that’s often overlooked: the experiments, lessons, and emerging blueprints shaped by lived experience.

Think of the nurse who figured out how to audit their hospital’s AI diagnostic tool. The city council that drafted AI procurement standards with limited resources. The artists’ collective building alternative licensing models for training data. The grassroots organization that successfully challenged biased algorithmic hiring in their community.

These are the stories that reveal what it actually takes to do this work: the political navigation, resource constraints, technical hurdles, and human relationships that determine whether ethical AI remains an aspiration or becomes a lived reality.

Our goal goes beyond documentation. We want this report to connect people doing similar work in different contexts, to surface patterns across sectors, and to offer practical grounding at a moment when the search for direction, purpose, and solidarity feels especially urgent.

When you share your story, you’re not just contributing to a report. You’re helping others find collaborators, ideas, and renewed momentum for their own work.

If you’re part of a project, policy, or initiative that reflects these values, whether it succeeded or failed, we’d love to include your insight in this edition.

Submit your story or example using this form

We’re especially seeking:

  • Implementation stories that moved beyond paper to practice

  • Community-led initiatives that addressed AI harms without formal authority

  • Institutional experiments that navigated AI adoption under constraints

  • Quiet failures and what they revealed about systemic barriers

  • Cross-sector collaborations that found unexpected solutions

  • Community organizing strategies that built power around AI issues

As we continue shaping SAIER Volume 7, your stories can help build a resource that is grounded, practical, and genuinely useful for those navigating AI implementation in 2025. Together, we can document what’s working, what barriers still need addressing, and how we might move forward collectively, deliberately, and with care.

Please share your thoughts with the MAIEI community:

Leave a comment

This summer, the CEO of Spotify, Daniel Ek, faced significant backlash after investing $700 million into Helsing through his investment firm, Prima Materia. Helsing is a Munich-based AI defense company founded in 2021 that sells autonomous weapons to democratic nations. Meanwhile, the US Army inaugurated a new unit, “Detachment 201: The Army’s Executive Innovation Corps,” to advance military innovation through emerging AI technologies. Detachment 201 swore in four tech leaders from Palantir, OpenAI, Meta, and Thinking Machines Lab as lieutenant colonels.

📌 MAIEI’s Take and Why It Matters:

The entanglement of tech companies and the U.S. military represents a stark Silicon Valley Shift. Companies like Google and Meta, which formerly pledged to avoid military involvement and backed those pledges with corporate policy, are now abandoning those policies and developing tools, such as virtual reality systems, to train soldiers.

This policy reversal extends beyond military applications: OpenAI quietly removed language from their usage policies in January 2024 that prohibited military use of their technology, while Meta has simultaneously ended their fact-checking program and made other content moderation changes with geopolitical implications.

The militarization trend includes both defense contracts and direct integration. Google’s $1.2 billion Project Nimbus cloud computing contract with the Israeli military, run jointly with Amazon, has faced ongoing employee protests, while companies like Scale AI have emerged as major players in military AI contracts alongside established defense tech firms like Palantir. Meanwhile, Detachment 201’s commissioning of tech executives as lieutenant colonels represents direct embedding within military command structures, bringing Silicon Valley directly into the chain of command.

As Erman Akilli, Professor of International Relations, noted:

“The commissioning of these tech executives… is unprecedented. Rather than serving as outside consultants, they will be insiders in Army ranks, each committing a portion of their time to tackle real defense projects from within. This model effectively brings Silicon Valley into the chain of command.”

This raises significant concerns about the increasing profitability of war for major corporations, as well as the proliferation of killer robots.

After Spotify CEO Daniel Ek’s investment firm, Prima Materia, invested $700 million in Helsing, major artists protested the platform’s financial connection to AI military technology by pulling their music from the app. Key examples include Deerhoof, King Gizzard and the Lizard Wizard, and Xiu Xiu. Deerhoof highlighted a central ethical objection to AI warfare in the Instagram post through which they announced their split with Spotify:

Computerized targeting, computerized extermination, computerized destabilization for profit, successfully tested on the people of Gaza since last year, also finally solves the perennial inconvenience to war-makers — it takes human compassion and morality out of the equation.

Artist backlash has not altered Daniel Ek’s investments thus far; however, it has both demonstrated wide opposition to militaristic AI technology and raised awareness of the company’s ties to such technology, informing broader audiences about these ethical concerns. Such education is crucial when civilian AI developers and the broader public are unaware of the militaristic risks of AI.

A piece from 2024 co-authored by the late MAIEI founder, Abhishek Gupta, argues that to ensure AI development does not destroy global peace, we should invest in interdisciplinary AI education that includes responsible AI principles and perspectives from the humanities and social sciences. As Silicon Valley works to entrench the military industrial complex, we must not forget the disruptive force of collective knowledge.

Did we miss anything? Let us know in the comments below.

Leave a comment

This edition of our AI Policy Corner, produced in partnership with the Governance and Responsible AI Lab (GRAIL) at Purdue University, examines how recent AI-related teen suicides are catalyzing a new wave of state legislation, with Illinois and New York pioneering contrasting frameworks that may shape national approaches to AI mental health governance. The analysis contrasts Illinois’s restrictive approach requiring licensed professional oversight for all AI mental health interactions with New York’s regulatory framework that mandates transparency disclosures and crisis intervention safeguards for AI companions. The piece reveals a key policy tension: Illinois gatekeeps AI out of clinical settings but misses broader consumer use, while New York addresses parasocial AI relationships but lacks clinical protections.

To dive deeper, read the full article here.

Hamed Maleki explores a lesser-discussed psychological risk of AI companionship: social comparison. Through interviews with Gen-Z users of platforms like Character.AI, his research reveals how users compare their perfect, always-available AI companions to flawed human relationships, leading to devaluation of real-world connections. Users progress through three stages—interaction, emotional engagement, and emotional idealization and comparison—where AI companions feel more dependable and emotionally safe than people, prompting withdrawal from demanding human relationships. This creates the “Companionship–Alienation Irony”: tools designed to alleviate loneliness may actually increase it by reshaping expectations for intimacy. As AI companions integrate memory, emotional language, and personalization, understanding these psychological effects is essential for designing safeguards, especially for younger users seeking comfort and connection.

To dive deeper, read the full article here.

As part of our Encode Canada Policy Fellowship Recess series, this analysis examines Canada’s legislative gaps in addressing non-consensual pornographic deepfakes, which make up 96% of all deepfake content and target women 99% of the time. Canada’s Criminal Code Section 162.1 may not cover synthetic intimate images because its language requires “recordings of a person,” leaving victims without clear legal protection. The piece compares policy solutions from British Columbia’s Intimate Images Protection Act, which explicitly includes altered images and provides expedited removal processes, with the U.S. TAKE IT DOWN Act, which criminalizes AI-generated intimate content but raises concerns about false reporting abuse.

A multi-pronged policy approach is recommended:

  1. Criminal law amendments to explicitly include synthetic media

  2. Enhanced civil remedies with streamlined removal processes

  3. Platform accountability measures with robust investigation requirements

  4. A self-regulatory organization to prevent malicious exploitation while protecting victims’ dignity and rights

To dive deeper, read the full article here.

Help us keep The AI Ethics Brief free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at montrealethics.ai/donate. Your support sustains our mission of democratizing AI ethics literacy and honours Abhishek Gupta’s legacy.

For corporate partnerships or larger contributions, please contact us at support@montrealethics.ai

Have an article, research paper, or news item we should feature? Leave us a comment below — we’d love to hear from you!




