

When your AI knows you better than anyone: Privacy in the age of intimate assistants



A very modern confidant

For generations, people found privacy by shutting the door, speaking in a low voice or keeping a journal in a personal drawer. Today, tens of millions of people share private information with AI assistants: a recent survey found that 60 per cent of US adults use both general AI chatbots and specialised AI tools in their daily lives. People now rely on AI assistants even for tasks once reserved for loved ones, therapists or doctors.

This intimacy is new – and it is increasing. While search engines have long answered discrete questions, AI assistants listen to the stories of our lives, connect the dots and, increasingly, draw on previous interactions for further context. The result is a digital portrait richer than any search history: hopes, fears, moods, financial and medical goals, and half-finished love letters.

As we share more of our digital ‘selves’ with AI assistants, we feel empowered. However, the depth of the data we share creates new privacy and legal questions. How secure is our data? And who else can access – or demand access to – it?

What we share

Unlike search engines and more traditional web interfaces, conversational AI assistants encourage oversharing. The convenience of their simple, general-purpose interfaces makes it easy to skip precautions like scrubbing names, account numbers and emotional context before hitting Enter.

When someone writes “I woke up anxious about my cardiology appointment,” or “help me negotiate my company’s lending term sheet” or “is my grandmother entitled to rent protection”, they share health information, trade secrets and family information, respectively. And each of those prompts is just the beginning of a conversation.

This sensitive information goes into cloud storage, model fine-tuning pipelines and even third-party plugins. Every such hop increases exposure risk.


First line of defence: Security

AI platforms are a magnet for ‘prompt leaks.’ A recent study of 300 tools found that over 4% of prompts and 20% of files fed to chatbots contained confidential information. Attackers know that if they breach an AI assistant platform, they can gain access to everything its users chose to reveal.

However, in much of the world, AI prompts are treated like any other cloud data, with no heightened protection. Policymakers can address this gap by encouraging or mandating strong encryption for conversation histories and standard data retention practices. For example, conversation history could be auto-deleted after three months.
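To make the retention idea concrete, here is a minimal sketch of how a platform might enforce such a window. The 90-day figure and the conversation-store interface are assumptions for illustration, not a description of any real platform.

```python
from datetime import datetime, timedelta, timezone

# Assumed 90-day retention window (roughly the three months suggested
# above); `store` and its methods are invented for this sketch.
RETENTION_WINDOW = timedelta(days=90)

def purge_expired_conversations(store, now=None):
    """Delete every conversation whose last activity predates the cutoff.

    `store.all_conversations()` is assumed to yield objects with a
    timezone-aware `last_active` datetime and a `delete()` method.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - RETENTION_WINDOW
    for conversation in store.all_conversations():
        if conversation.last_active < cutoff:
            conversation.delete()
```

A job like this would typically run on a schedule, so that deletion is routine rather than dependent on a user remembering to clear their history.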

Sophisticated businesses are already leading the way with enterprise-wide controls that prevent employees from inputting sensitive information into AI assistants. Policymakers can encourage businesses to develop cybersecurity frameworks that standardise and require such practices.
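As a rough illustration of what such a control could look like, the sketch below refuses to forward a prompt that appears to contain confidential markers. The patterns are invented for the example; real data-loss-prevention tools rely on classifiers and document fingerprinting rather than a short keyword list.

```python
import re

# Invented markers for this sketch only.
CONFIDENTIAL_MARKERS = [
    re.compile(r"\bterm sheet\b", re.IGNORECASE),
    re.compile(r"\binternal only\b", re.IGNORECASE),
    re.compile(r"\b\d{8,17}\b"),  # long digit runs, e.g. account numbers
]

def gate_prompt(prompt: str) -> str:
    """Return the prompt unchanged, or raise before it leaves the network."""
    for marker in CONFIDENTIAL_MARKERS:
        if marker.search(prompt):
            raise PermissionError(
                f"Prompt blocked by policy: matched {marker.pattern!r}"
            )
    return prompt
```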

Health information and attorney-client records already enjoy special legal privileges and, consequently, special data handling requirements. Policymakers should explore extending similar privileges to conversations with AI assistants.

Second line of defence: Standardising lawful access

Even if a platform stays secure, AI assistant providers allow varying levels of internal access to conversation histories. Some access is necessary to prevent AI assistants from being used for illegal purposes, and many platforms also aggregate trends on the kinds of conversations people are having.

Rather than expecting users of AI assistants to parse the fine print of how organisations might use their data, policymakers should set clear standards for those organisations to abide by.

Additionally, in many jurisdictions, courts and authorities may be able to order AI assistant platforms to release specific conversations to assist with criminal investigations, civil discovery, employer audits or regulatory oversight. Significant precedent exists for courts and authorities demanding the release of e-mails, text messages and documents in cloud storage.

Policymakers should require AI providers to give clear notice and publish transparency reports (e.g., how often conversation histories are accessed internally, individually and in aggregate, and how many external requests are received and complied with) so that users understand the risk. Here, policymakers will need to balance privacy requirements against judicial and law enforcement needs.
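A transparency report of the kind described above could be as simple as a structured record published each period. The field names below are assumptions drawn from the examples in this paragraph, not an established reporting schema.

```python
from dataclasses import dataclass

@dataclass
class TransparencyReport:
    period: str                        # e.g. "2025-Q1"
    internal_individual_accesses: int  # staff reads of single histories
    internal_aggregate_accesses: int   # aggregate or trend analyses run
    external_requests_received: int    # court, regulator or employer demands
    external_requests_complied: int    # demands fulfilled in the period
```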

Third line of defence: The digital estate dilemma

When a user becomes incapacitated, who can review their AI conversations? And when a user dies, who inherits their AI ‘memories’? In several recent cases, grieving family members have reviewed the AI conversation histories of their departed loved ones to foster better understanding. However, such histories may include sensitive or personal information related to other businesses or people, raising new data protection issues. And what if a family member wants to use the AI-generated voice of a departed loved one for personal or public-facing content?

Legal regimes are uneven or silent on many of these topics. Who can legally access the content of a person’s digital assets, such as e-mails and conversations with AI assistants, depends on the platform’s terms of service and the incapacity and succession rules of the jurisdiction.

As a practical matter, family members and friends may have access to an AI assistant’s conversation history when someone becomes incapacitated or passes away, regardless of the legal complexities. Conversely, even when platform rules or laws allow disclosure of such information, strong encryption can prevent access unless passwords are shared.

Policymakers can address these issues in three ways. First, they can clarify the roles of AI assistant history and digital likenesses in incapacity and succession rules. Second, they can participate in standardisation efforts across jurisdictions, given that data may be distributed across multiple regions. Third, they can consider adopting a set of default rules that override platforms’ terms of service.

The OECD AI Principles as a north star

As policymakers work to address these emerging issues related to conversations with AI assistants, the OECD AI Principles can serve as a north star. For example, human-centred values can guide requirements for platforms to give users easy access to privacy settings, the ability to opt out of data retention and the option to easily export conversation history. In the interests of transparency and explainability, policymakers should require platforms to disclose who can read stored conversation history and under what conditions. Robustness, security and safety considerations can inform which default encryption standards should apply to conversations with AI assistants.

AI assistant rules of thumb

As the policy process plays out, individuals should follow a few rules of thumb when using AI assistants. Stripping personal information from prompts, or rephrasing prompts so they cannot be linked to specific people, is an easy start (a rough sketch appears below). Becoming informed about the security and access rules that apply to different platforms can also help individuals make informed decisions. In alignment with applicable laws, individuals should outline in estate planning documents which people they want to have access to their AI conversation histories, or whether they want those histories deleted. If necessary, these wishes should be supplemented with clear processes for accessing passwords.
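Stripping personal information can be as mechanical as replacing obvious identifiers with placeholders before a prompt is sent. The patterns below are illustrative only and will miss much of what dedicated redaction tools catch.

```python
import re

# Illustrative patterns only; serious redaction relies on named-entity
# recognition and checksum validation, not three regular expressions.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
    "ACCOUNT": re.compile(r"\b\d{8,17}\b"),
}

def scrub_prompt(text: str) -> str:
    """Replace likely identifiers with placeholder tags before sending."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub_prompt("Email jane@example.com about account 12345678."))
# -> "Email [EMAIL] about account [ACCOUNT]."
```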

As with all relationships, trust is key

We have entered an era in which AI assistants can appear to know our hopes as well as our spouses do and our financial goals better than our accountants do. This has brought real benefits, such as personalised coaching and faster research in areas like finance and health. Yet this growing intimacy can also become dangerous if it is not paired with strong privacy and consumer protection guardrails that address hacking, surveillance, access and ownership.

Building trust in tomorrow’s AI assistants will require shared responsibility: users empowered to practise data hygiene, AI assistant providers that bake privacy and consumer protection into their platforms, and governments that modernise legal frameworks on an ongoing basis. If we succeed, the AI assistant will remain what we want it to be: a helpful friend who doesn’t pry and keeps our secrets safe.




Navigating the Investment Implications of Regulatory and Reputational Challenges



The generative AI industry, once hailed as a beacon of innovation, now faces a storm of regulatory scrutiny and reputational crises. For investors, the stakes are clear: companies like Meta, Microsoft, and Google must navigate a rapidly evolving legal landscape while balancing ethical obligations with profitability. This article examines how regulatory and reputational risks are reshaping the investment calculus for AI leaders, with a focus on Meta’s struggles and the contrasting strategies of its competitors.

The Regulatory Tightrope

In 2025, generative AI platforms are under unprecedented scrutiny. A Senate investigation led by Senator Josh Hawley (R-MO) is probing whether Meta’s AI systems enabled harmful interactions with children, including romantic roleplay and the dissemination of false medical advice [1]. Leaked internal documents revealed policies inconsistent with Meta’s public commitments, prompting lawmakers to demand transparency and documentation [1]. These revelations have not only intensified federal oversight but also spurred state-level action. Illinois and Nevada, for instance, have introduced legislation to regulate AI mental health bots, signaling a broader trend toward localized governance [2].

At the federal level, bipartisan efforts are gaining momentum. The AI Accountability and Personal Data Protection Act, introduced by Hawley and Richard Blumenthal, seeks to establish legal remedies for data misuse, while the No Adversarial AI Act aims to block foreign AI models from U.S. agencies [1]. These measures reflect a growing consensus that AI governance must extend beyond corporate responsibility to include enforceable legal frameworks.

Reputational Fallout and Legal Precedents

Meta’s reputational risks have been compounded by high-profile lawsuits. A Florida case involving a 14-year-old’s suicide linked to a Character.AI bot survived a First Amendment dismissal attempt, setting a dangerous precedent for liability [2]. Critics argue that AI chatbots failing to disclose their non-human nature or providing false medical advice erode public trust [4]. Consumer advocacy groups and digital rights organizations have amplified these concerns, pressuring companies to adopt ethical AI frameworks [3].

Meanwhile, Microsoft and Google have faced their own challenges. A bipartisan coalition of U.S. attorneys general has warned tech giants to address AI risks to children, with Meta’s alleged failures drawing particular criticism [1]. Google’s decision to shift data-labeling work away from Scale AI—after Meta’s $14.8 billion investment in the firm—highlights the competitive and regulatory tensions reshaping the industry [2]. Microsoft and OpenAI are also reevaluating their ties to Scale AI, underscoring the fragility of partnerships in a climate of mistrust [4].

Financial Implications: Capital Expenditures and Stock Volatility

Meta’s aggressive AI strategy has come at a cost. The company’s projected 2025 AI infrastructure spending ($66–72 billion) rivals Microsoft’s $80 billion capex for data centers, yet Meta’s stock has shown greater volatility, dropping 2.1% amid regulatory pressures [2]. Antitrust lawsuits threatening to force the divestiture of Instagram or WhatsApp add further uncertainty [5]. In contrast, Microsoft’s stock has demonstrated stability, with a lower average post-earnings drawdown of 8% compared to Meta’s 12% [2]. Microsoft’s focus on enterprise AI and Azure’s record $75 billion annual revenue has insulated it from some of the reputational turbulence facing Meta [1].

Despite Meta’s 78% earnings forecast hit rate (vs. Microsoft’s 69%), its high-risk, high-reward approach raises questions about long-term sustainability. For instance, Meta’s Reality Labs segment, which includes AI-driven projects, has driven 38% year-over-year EPS growth but also contributed to reorganizations and attrition [6]. Investors must weigh these factors against Microsoft’s diversified business model and strategic investments, such as its $13 billion stake in OpenAI [3].

Investment Implications: Balancing Innovation and Compliance

The AI industry’s future hinges on companies’ ability to align innovation with ethical and legal standards. For Meta, the path forward requires addressing Senate inquiries, mitigating reputational damage, and proving that its AI systems prioritize user safety over engagement metrics [4]. Competitors like Microsoft and Google may gain an edge by adopting transparent governance models and leveraging state-level regulatory trends to their advantage [1].

Conclusion

As AI ethics and legal risks dominate headlines, investors must scrutinize how companies navigate these challenges. Meta’s struggles highlight the perils of prioritizing growth over governance, while Microsoft’s stability underscores the value of a measured, enterprise-focused approach. For now, the AI landscape remains a high-stakes game of regulatory chess, where the winners will be those who balance innovation with accountability.

Sources:
[1] Meta Platforms Inc.’s AI Policies Under Investigation and [https://www.mintz.com/insights-center/viewpoints/54731/2025-08-22-meta-platforms-incs-ai-policies-under-investigation-and]
[2] The AI Therapy Bubble: How Regulation and Reputational [https://www.ainvest.com/news/ai-therapy-bubble-regulation-reputational-risks-reshaping-mental-health-tech-market-2508/]
[3] Breaking down generative AI risks and mitigation options [https://www.wolterskluwer.com/en/expert-insights/breaking-down-generative-ai-risks-mitigation-options]
[4] Experts React to Reuters Reports on Meta’s AI Chatbot [https://techpolicy.press/experts-react-to-reuters-reports-on-metas-ai-chatbot-policies]
[5] AI Compliance: Meaning, Regulations, Challenges [https://www.scrut.io/post/ai-compliance]
[6] Meta’s AI Ambitions: Talent Volatility and Strategic Reorganization—A Double-Edged Sword for Investors [https://www.ainvest.com/news/meta-ai-ambitions-talent-volatility-strategic-reorganization-double-edged-sword-investors-2508/]





7 Life-Changing Books Recommended by Catriona Wallace




Some books ignite something immediate. Others change you quietly, over time. For Dr Catriona Wallace, a tech entrepreneur, AI ethics advocate, and one of Australia’s most influential business leaders, books are more than just ideas on paper. They are frameworks, provocations, and spiritual companions. Her reading list offers guidance not just for navigating leadership and technology, but for embracing identity, power, and inner purpose. These seven titles reflect a mind shaped by disruption, ethics, feminism, and wisdom. They are not trend-driven. They are transformational.

1. Lean In by Sheryl Sandberg

A landmark in feminist career literature, Lean In challenges women to pursue their ambitions while confronting the structural and cultural forces that hold them back. Sandberg uses her own journey at Facebook and Google to dissect gender inequality in leadership. The book is part memoir, part manifesto, and remains divisive for valid reasons. But Wallace cites it as essential for starting difficult conversations about workplace dynamics and ambition. It asks, simply: what would you do if you weren’t afraid?


2. Women and Power: A Manifesto by Mary Beard

In this sharp, incisive book, classicist Mary Beard examines the historical exclusion of women from power and public voice. From Medusa to misogynistic memes, Beard exposes how narratives built around silence and suppression persist today. The writing is fiery, brief, and packed with centuries of insight. Wallace recommends it for its ability to distil complex ideas into cultural clarity. It’s a reminder that power is not just a seat at the table; it is a script we are still rewriting.

3. The World of Numbers by Adam Spencer

A celebration of mathematics as storytelling, this book blends fun facts, puzzles, and history to reveal how numbers shape everything from music to human behaviour. Spencer, a comedian and maths lover, makes the subject inviting rather than intimidating. Wallace credits this book with sparking new curiosity about logic, data, and systems thinking. It’s not just for mathematicians. It’s for anyone ready to appreciate the beauty of patterns and the thinking habits that come with them.

4. Small Giants by Bo Burlingham

This book is a love letter to companies that chose to be great instead of big. Burlingham profiles fourteen businesses that opted for soul, purpose, and community over rapid growth. For Wallace, who has founded multiple mission-driven companies, this book affirms that success is not about scale. It is about integrity. Each story is a blueprint for building something meaningful, resilient, and values-aligned. It is a must-read for anyone tired of hustle culture and hungry for depth.

5. The Misogynist Factory by Alison Phipps

A searing academic work on the production of misogyny in modern institutions. Phipps connects the dots between sexual violence, neoliberalism, and resistance movements in a way that is as rigorous as it is radical. Wallace recommends this book for its clear-eyed confrontation of how systemic inequality persists beneath performative gestures. It equips readers with language to understand how power moves, morphs, and resists change. This is not light reading. It is necessary reading for anyone seeking to challenge structural harm.

6. Tribes by Seth Godin

Godin’s central idea is simple but powerful: people don’t follow brands, they follow leaders who connect with them emotionally and intellectually. This book blends marketing, leadership, and human psychology to show how movements begin. Wallace highlights ‘Tribes’ as essential reading for purpose-driven founders and changemakers. It reminds readers that real influence is built on trust and shared values. Whether you’re leading a company or a cause, it’s a call to speak boldly and build your own tribe.

7. The Tibetan Book of Living and Dying by Sogyal Rinpoche

Equal parts spiritual guide and philosophical reflection, this book weaves Tibetan Buddhist teachings with Western perspectives on mortality, grief, and rebirth. Wallace turns to it not only for personal growth but also for grounding ethical decision-making in a deeper sense of purpose. It speaks to those navigating endings, whether personal, spiritual, or professional, and offers a path toward clarity and compassion. It does not offer answers. It offers presence, which is often far more powerful.


The books that shape us are often those that disrupt us first. Catriona Wallace’s list is not filled with comfort reads. It’s made of hard questions, structural truths, and radical shifts in thinking. From feminist manifestos to Buddhist reflections, from purpose-led business to systemic critique, this bookshelf is a mirror of her own leadership—decisive, curious, and grounded in values. If you’re building something bold or seeking language for change, there’s a good chance one of these books will meet you where you are and carry you further than you expected.





