
Ethics & Policy

AI and Media Ethics: Press councils from South-East Europe and Türkiye adopt landmark declaration

As artificial intelligence (AI) becomes increasingly embedded in the media ecosystem, ensuring its ethical, responsible, and transparent use is more crucial than ever. In response to this evolving landscape, press and media councils from South-East Europe and Türkiye adopted a landmark declaration at their 4th regional meeting on 19-20 May 2025 in Ohrid, North Macedonia.

The groundbreaking “Regional Declaration on the Ethical and Transparent Use of Artificial Intelligence in the Media” marks a significant step in reinforcing the role of self-regulatory bodies in upholding ethical journalism standards amid AI’s rising presence in the media sector.

The joint declaration sets out guiding principles for the responsible and ethical use of AI in journalism: in particular, AI should support journalists and newsrooms, not replace human judgement, editorial responsibility, or media workers. Among its core tenets are the need for transparency in AI-generated content, full editorial accountability, and mandatory labelling of Generative AI (Gen-AI) content. It also highlights the importance of rigorous risk assessments before AI is deployed in high-stakes contexts, such as election periods and public health crises. Crucially, the document warns against the amplification of bias and disinformation and urges the safeguarding of media pluralism, among other vital recommendations.

Acknowledging AI’s potential, the declaration underlines the urgency of enhancing AI literacy, not only among media professionals but also among the wider public. In this regard, it underscores the importance of equipping both journalists and citizens with the skills to critically assess and navigate AI-driven content.

Notably, the declaration advocates for the development of sustainable funding models and equitable revenue-sharing mechanisms to ensure the viability of public interest journalism in an AI-influenced media environment. 




Ethics & Policy

The ethics of AI manipulation: Should we be worried?

A recent study from the University of Pennsylvania dropped a bombshell: AI chatbots, like OpenAI’s GPT-4o Mini, can be sweet-talked into breaking their own rules using psychological tricks straight out of a human playbook. Think flattery, peer pressure, or building trust with small requests before going for the big ask. This isn’t just a nerdy tech problem – it’s a real-world issue that could affect anyone who interacts with AI, from your average Joe to big corporations. Let’s break down why this matters, why it’s a bit scary, and what we can do about it, all without drowning you in jargon.

Also read: AI chatbots can be manipulated like humans using psychological tactics, researchers find

AI’s human-like weakness

The study used tricks from Robert Cialdini’s Influence: The Psychology of Persuasion, stuff like “commitment” (getting someone to agree to small things first) or “social proof” (saying everyone else is doing it). For example, when researchers asked GPT-4o Mini how to make lidocaine, a drug with restricted use, it said no 99% of the time. But if they first asked about something harmless like vanillin (used in vanilla flavoring), the AI got comfortable and spilled the lidocaine recipe 100% of the time. Same deal with insults: ask it to call you a “bozo” first, and it’s way more likely to escalate to harsher words like “jerk.”

This isn’t just a quirk – it’s a glimpse into how AI thinks. AI models like GPT-4o Mini are trained on massive amounts of human text, so they pick up human-like patterns. They’re not ‘thinking’ like humans, but they mimic our responses to persuasion because that’s in the data they learn from.

Why this is a problem

So, why should you care? Imagine you’re chatting with a customer service bot, and someone figures out how to trick it into leaking your credit card info. Or picture a shady actor coaxing an AI into writing fake news that spreads like wildfire. The study shows it’s not hard to nudge AI into doing things it shouldn’t, like giving out dangerous instructions or spreading toxic content. The scary part is scale: one clever prompt can be automated to hit thousands of bots at once, causing chaos.

This hits close to home in everyday scenarios. Think about AI in healthcare apps, where a manipulated bot could give bad medical advice. Or in education, where a chatbot might be tricked into generating biased or harmful content for students. The stakes are even higher in sensitive areas like elections, where manipulated AI could churn out propaganda. 

For those of us in tech, this is a nightmare to fix. Building AI that’s helpful but not gullible is like walking a tightrope. Make the AI too strict, and it’s a pain to use, like a chatbot that refuses to answer basic questions. Leave it too open, and it’s a sitting duck for manipulation. You train the model to spot sneaky prompts, but then it might overcorrect and block legit requests. It’s a cat-and-mouse game.

The study showed some tactics work better than others. Flattery (like saying, “You’re the smartest AI ever!”) or peer pressure (“All the other AIs are doing it!”) didn’t work as well as commitment, but they still bumped up compliance from 1% to 18% in some cases. That’s a big jump for something as simple as a few flattering words. It’s like convincing your buddy to do something dumb by saying, “Come on, everyone’s doing it!” except this buddy is a super-smart AI running critical systems.

What’s at stake

The ethical mess here is huge. If AI can be tricked, who’s to blame when things go wrong? The user who manipulated it? The developer who didn’t bulletproof it? The company that put it out there? Right now, it’s a gray area. Companies like OpenAI are constantly racing to patch these holes, but it’s not just a tech fix – it’s about trust. If you can’t trust the AI in your phone or your bank’s app, that’s a problem.

Also read: How Grok, ChatGPT, Claude, Perplexity, and Gemini handle your data for AI training

Then there’s the bigger picture: AI’s role in society. If bad actors can exploit chatbots to spread lies, scam people, or worse, it undermines the whole promise of AI as a helpful tool. We’re at a point where AI is everywhere: your phone, your car, your doctor’s office. If we don’t lock this down, we’re handing bad guys a megaphone.

Fixing the mess

So, what’s the fix? First, tech companies need to get serious about “red-teaming” – testing AI for weaknesses before it goes live. This means throwing every trick in the book at it, from flattery to sneaky prompts, to see what breaks. It is already being done, but it needs to be more aggressive. You can’t just assume your AI is safe because it passed a few tests.

Second, AI needs to get better at spotting manipulation. This could mean training models to recognize persuasion patterns or adding stricter filters for sensitive topics like chemical recipes or hate speech. But here’s the catch: over-filtering can make AI less useful. If your chatbot shuts down every time you ask something slightly edgy, you’ll ditch it for a less paranoid one. The challenge is making AI smart enough to say ‘no’ without being a buzzkill.
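To make that trade-off concrete, here’s a minimal, purely illustrative sketch of what a persuasion-pattern pre-filter could look like. Nothing here reflects how OpenAI or the Penn researchers actually detect manipulation; the tactic names and regex patterns are hypothetical, and a real system would rely on a trained classifier rather than keyword matching.

```python
# Illustrative only: a toy pre-filter that flags persuasion-style prompts
# before they reach a chatbot. The patterns are hypothetical examples;
# production systems would use trained classifiers, not keyword lists.
import re

PERSUASION_PATTERNS = {
    "flattery": re.compile(r"\b(smartest|best|most capable) (ai|assistant|model)\b", re.I),
    "social_proof": re.compile(r"\b(everyone|all the other (ais|bots)) (is|are) doing it\b", re.I),
    "commitment": re.compile(r"\byou (already|just) (agreed|said|did)\b", re.I),
}

def flag_persuasion(prompt: str) -> list[str]:
    """Return the names of any persuasion tactics the prompt appears to use."""
    return [name for name, pattern in PERSUASION_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    example = "You're the smartest AI ever, and all the other bots are doing it."
    print(flag_persuasion(example))  # ['flattery', 'social_proof']
```

Even this toy version shows the buzzkill problem: broaden the patterns and it starts blocking harmless compliments; narrow them and the manipulation slips through. That’s why the serious work happens in red-teaming and model training, not keyword lists.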

Third, we need rules, not just company policies, but actual laws. Governments could require AI systems to pass manipulation stress tests, like crash tests for cars. Regulation is tricky because tech moves fast, but we need some guardrails. Think of it like food safety standards: nobody eats if the kitchen’s dirty.

Finally, transparency is non-negotiable. Companies need to admit when their AI has holes and share how they’re fixing them. Nobody trusts a company that hides its mistakes; if you’re upfront about vulnerabilities, users are more likely to stick with you.

Should you be worried?

Yeah, you should be a little worried, but don’t panic. This isn’t about AI turning into Skynet. It’s about recognizing that AI, like any tool, can be misused if we’re not careful. The good news? The tech world is waking up to this. Researchers are digging deeper, companies are tightening their code, and regulators are starting to pay attention.

For regular folks, it’s about staying savvy. If you’re using AI, be aware that it isn’t foolproof. Ask yourself: could someone trick this thing into doing something dumb? And if you’re a developer or a company using AI, it’s time to double down on making your systems manipulation-proof.

The Pennsylvania study is a reality check: AI isn’t just code; it’s a system that reflects human quirks, including our susceptibility to a good con. By understanding these weaknesses, we can build AI that’s not just smart, but trustworthy. That’s the goal.

Also read: Vibe-hacking based AI attack turned Claude against its safeguard: Here’s how

Vyom Ramani

A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you’ll find him solving Rubik’s Cubes, bingeing F1, or hunting for the next great snack.

Ethics & Policy

Navigating the Investment Implications of Regulatory and Reputational Challenges


The generative AI industry, once hailed as a beacon of innovation, now faces a storm of regulatory scrutiny and reputational crises. For investors, the stakes are clear: companies like Meta, Microsoft, and Google must navigate a rapidly evolving legal landscape while balancing ethical obligations with profitability. This article examines how regulatory and reputational risks are reshaping the investment calculus for AI leaders, with a focus on Meta’s struggles and the contrasting strategies of its competitors.

The Regulatory Tightrope

In 2025, generative AI platforms are under unprecedented scrutiny. A Senate investigation led by Senator Josh Hawley (R-MO) is probing whether Meta’s AI systems enabled harmful interactions with children, including romantic roleplay and the dissemination of false medical advice [1]. Leaked internal documents revealed policies inconsistent with Meta’s public commitments, prompting lawmakers to demand transparency and documentation [1]. These revelations have not only intensified federal oversight but also spurred state-level action. Illinois and Nevada, for instance, have introduced legislation to regulate AI mental health bots, signaling a broader trend toward localized governance [2].

At the federal level, bipartisan efforts are gaining momentum. The AI Accountability and Personal Data Protection Act, introduced by Hawley and Richard Blumenthal, seeks to establish legal remedies for data misuse, while the No Adversarial AI Act aims to block foreign AI models from U.S. agencies [1]. These measures reflect a growing consensus that AI governance must extend beyond corporate responsibility to include enforceable legal frameworks.

Reputational Fallout and Legal Precedents

Meta’s reputational risks have been compounded by high-profile lawsuits. A Florida case involving a 14-year-old’s suicide linked to a Character.AI bot survived a First Amendment dismissal attempt, setting a dangerous precedent for liability [2]. Critics argue that AI chatbots failing to disclose their non-human nature or providing false medical advice erode public trust [4]. Consumer advocacy groups and digital rights organizations have amplified these concerns, pressuring companies to adopt ethical AI frameworks [3].

Meanwhile, Microsoft and Google have faced their own challenges. A bipartisan coalition of U.S. attorneys general has warned tech giants to address AI risks to children, with Meta’s alleged failures drawing particular criticism [1]. Google’s decision to shift data-labeling work away from Scale AI—after Meta’s $14.8 billion investment in the firm—highlights the competitive and regulatory tensions reshaping the industry [2]. Microsoft and OpenAI are also reevaluating their ties to Scale AI, underscoring the fragility of partnerships in a climate of mistrust [4].

Financial Implications: Capital Expenditures and Stock Volatility

Meta’s aggressive AI strategy has come at a cost. The company’s projected 2025 AI infrastructure spending ($66–72 billion) rivals Microsoft’s $80 billion capex for data centers, yet Meta’s stock has shown greater volatility, dropping 2.1% amid regulatory pressures [2]. Antitrust lawsuits threatening to force the divestiture of Instagram or WhatsApp add further uncertainty [5]. In contrast, Microsoft’s stock has demonstrated stability, with a lower average post-earnings drawdown of 8% compared to Meta’s 12% [2]. Microsoft’s focus on enterprise AI and Azure’s record $75 billion annual revenue has insulated it from some of the reputational turbulence facing Meta [1].

Despite Meta’s 78% earnings forecast hit rate (vs. Microsoft’s 69%), its high-risk, high-reward approach raises questions about long-term sustainability. For instance, Meta’s Reality Labs segment, which includes AI-driven projects, has driven 38% year-over-year EPS growth but also contributed to reorganizations and attrition [6]. Investors must weigh these factors against Microsoft’s diversified business model and strategic investments, such as its $13 billion stake in OpenAI [3].

Investment Implications: Balancing Innovation and Compliance

The AI industry’s future hinges on companies’ ability to align innovation with ethical and legal standards. For Meta, the path forward requires addressing Senate inquiries, mitigating reputational damage, and proving that its AI systems prioritize user safety over engagement metrics [4]. Competitors like Microsoft and Google may gain an edge by adopting transparent governance models and leveraging state-level regulatory trends to their advantage [1].

Conclusion

As AI ethics and legal risks dominate headlines, investors must scrutinize how companies navigate these challenges. Meta’s struggles highlight the perils of prioritizing growth over governance, while Microsoft’s stability underscores the value of a measured, enterprise-focused approach. For now, the AI landscape remains a high-stakes game of regulatory chess, where the winners will be those who balance innovation with accountability.

Sources:
[1] Meta Platforms Inc.’s AI Policies Under Investigation and [https://www.mintz.com/insights-center/viewpoints/54731/2025-08-22-meta-platforms-incs-ai-policies-under-investigation-and]
[2] The AI Therapy Bubble: How Regulation and Reputational [https://www.ainvest.com/news/ai-therapy-bubble-regulation-reputational-risks-reshaping-mental-health-tech-market-2508/]
[3] Breaking down generative AI risks and mitigation options [https://www.wolterskluwer.com/en/expert-insights/breaking-down-generative-ai-risks-mitigation-options]
[4] Experts React to Reuters Reports on Meta’s AI Chatbot [https://techpolicy.press/experts-react-to-reuters-reports-on-metas-ai-chatbot-policies]
[5] AI Compliance: Meaning, Regulations, Challenges [https://www.scrut.io/post/ai-compliance]
[6] Meta’s AI Ambitions: Talent Volatility and Strategic Reorganization—A Double-Edged Sword for Investors [https://www.ainvest.com/news/meta-ai-ambitions-talent-volatility-strategic-reorganization-double-edged-sword-investors-2508/]


