Ethics & Policy

Anupamaa’s Rupali Ganguly Lauds PM Modi For Banning Pakistani Content On OTT: Hats Off!

After banning Pakistani artists and their social media handles, the Ministry of Information and Broadcasting advised OTT platforms to limit streaming content linked to Pakistan. Hours after the order, actress Rupali Ganguly voiced her support. She praised Prime Minister Narendra Modi for taking steps to safeguard the nation’s digital space.

Rupali Ganguly Lauds PM Modi

Appreciating the government’s effort to tighten control over cross-border digital content, she wrote, “Hats off to Modi Govt for banning Pak streaming content! In times of tension, we must protect our digital borders.”

Rupali has been actively sharing and reacting to updates amid India-Pakistan tensions following the horrifying Pahalgam attack, where 26 innocent civilians lost their lives. Earlier today, she responded to the Indian Armed Forces targeting air defence radars and systems at multiple locations in Pakistan by writing, “Every tear will be avenged. #OperationSindoor is ongoing.”

Before this, she made headlines for calling out Pakistani actor Fawad Khan over his statement on Operation Sindoor, during which India successfully targeted nine terrorist bases in Pakistan and Pakistan-occupied Kashmir. She wrote, “You working in Indian films was also ‘shameful’ for us.”

Pakistani Content Banned On OTT

The Ministry of Information and Broadcasting has issued an advisory to OTT platforms, streaming services, and online publishers, asking them to avoid hosting content linked to Pakistan. As per the rules, platforms must follow the ‘Code of Ethics,’ which requires caution against sharing content that could harm India’s security, unity, or foreign relations, or that could incite violence. The advisory also reminded platforms to prevent users from uploading such harmful content. Citing the recent terrorist attack in Pahalgam, which killed several Indians and a Nepali citizen, the ministry highlighted links to Pakistan-based groups. It also instructed OTT platforms to stop streaming Pakistani web series, films, and songs.

“In the interest of national security, all OTT platforms, media streaming platforms, and intermediaries operating in India are advised to discontinue the web-series, films, songs, podcasts and other streaming media content, whether made available on a subscription-based model or otherwise, having its origins in Pakistan with immediate effect,” said the advisory.







Beyond the AI Hype: Mindful Steps Marketers Should Take Before Using GenAI

In 2025, the prestigious Cannes Lions International Festival of Creativity made an unprecedented move by stripping agency DM9 of multiple awards, including a Creative Data Lions Grand Prix, after discovering the campaigns contained AI-generated and manipulated footage that misrepresented real-world results.

The agency had used generative AI to create synthetic visuals and doctored case films, leading juries to evaluate submissions under completely false pretenses.

This was a watershed moment that exposed how desperately our industry needs to catch up with the ethical implications of the AI tools we’re all racing to adopt.

The Promethean gap is now a chasm

I don’t know about you, but the speed at which AI is evolving, before I even have time to comprehend the implications, makes me slightly nauseous with a mix of fear, excitement, and overwhelm. If you’re wondering what this feeling is, it has a name: the ‘Promethean gap’.

German philosopher Günther Anders warned us about this disparity between our power to imagine and invent new technologies and our ethical ability to understand and manage them.

But this gap has now widened into a chasm because AI developments massively outpace our ability to even think about the governance or ethics of such applications. This is precisely where Maker Lab’s expertise comes in: we are not just about the hype; we focus on responsible and effective AI integration.

In a nutshell, whilst we’ve all been busy desperately trying to keep pace with the AI hype-train (myself included), we’re still figuring out how to make the best use of GenAI, let alone having the time or headspace to digest the ethics of it all.

If you’re a fellow marketer, you might feel like ethical conduct has been a topic of debate throughout your entire career. The concerns around AI are eerily similar to what we’ve faced before:

Transparency and consumer trust: Just as we learned from digital advertising scandals, being transparent about where and how consumer data is used, both explicitly and implicitly, is crucial. But AI’s opaque nature makes it even harder for consumers to understand how their data is used and how marketing messages are tailored, creating an unfair power dynamic.

Bias and representation: Remember DDB NZ’s “Correct the Internet” campaign, which highlighted how biased online information negatively impacts women in sports? AI amplifies this issue exponentially and biased training data can lead to marketing messages that reinforce harmful stereotypes and exclude marginalised groups. Don’t even get me started on the images GenAI presents when asked about what an immigrant looks like…versus an expat, for example. Try it and see for yourself.

The power dynamic problem: Like digital advertising and personalisation, AI is a double-edged sword: it offers valuable insights into consumer behaviour, but its ethical implications depend heavily on the data it’s trained on and the intentions of those who use it. Tools are not inherently unethical, but without proper human oversight, they can become so.

The Cannes Lions controversy perfectly illustrates what happens when we prioritise innovation speed over ethical consideration, as it results in agencies creating work that fundamentally deceives both judges and consumers.

Learning from Cannes: What went wrong and how to fix it

Following the DM9 controversy, Cannes Lions implemented several reforms that every marketing organisation should consider adopting:

  • Mandatory AI disclosure: All entries must explicitly state any use of generative AI
  • Enhanced ethics agreements: Stricter codes of conduct for all participants
  • AI detection technology: Advanced tools to identify manipulated or inauthentic content
  • Ethics review committees: Expert panels to evaluate questionable submissions

These changes signal that the industry is finally taking AI ethics seriously, but we can’t wait for external bodies to police our actions. This is why we help organisations navigate AI implementation through human-centric design principles, comprehensive team training, and ethical framework development.

As marketers adopt AI tools at breakneck speed, we’re seeing familiar ethical dilemmas amplified and accelerated. It is up to us to uphold a culture of ethics within our own organisations. Here’s how:

1. Governance (Not rigid rules)

Instead of blanket AI prohibitions, establish clear ethics committees and decision-making frameworks. Create AI ethics boards that include diverse perspectives, not just tech teams, but legal, creative, strategy, and client services representatives. Develop decision trees that help teams evaluate whether an AI application aligns with your company’s values before implementation. This ensures AI is used responsibly and aligns with company values from the outset.

Actionable step: Draft an ‘AI Ethics Canvas’, a one-page framework that teams must complete before deploying any AI tool, covering data sources, potential bias, transparency requirements, and consumer impact.
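As a sketch of how such a canvas could be made concrete, the snippet below models it as a small Python record with a completeness check. The class name, field names, and example values are all hypothetical illustrations, not part of any standard framework:

```python
from dataclasses import dataclass, fields

@dataclass
class AIEthicsCanvas:
    """Hypothetical one-page 'AI Ethics Canvas' as a structured record."""
    tool_name: str
    data_sources: str        # where the training/input data comes from
    potential_bias: str      # known or suspected bias risks
    transparency: str        # how AI use will be disclosed to consumers
    consumer_impact: str     # expected effect on the end consumer

    def is_complete(self) -> bool:
        # A canvas is ready for review only when every section is filled in.
        return all(str(getattr(self, f.name)).strip() for f in fields(self))

canvas = AIEthicsCanvas(
    tool_name="GenAI image tool",
    data_sources="Vendor-licensed stock imagery",
    potential_bias="Stereotyped depictions of demographic groups",
    transparency="'AI-generated' label on all published assets",
    consumer_impact="",  # left blank, so the canvas is not yet complete
)
print(canvas.is_complete())  # False until every section is filled in
```

The point of the structure is the gate, not the fields themselves: a team cannot deploy the tool while `is_complete()` is false, which forces the ethics questions to be answered up front.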

2. Safe experimentation spaces

Create environments where teams can test AI applications with built-in ethical checkpoints. Establish sandbox environments where the potential for harm is minimised, and learning is maximised. This means creating controlled environments where AI can be tested and refined ethically, ensuring human oversight.

Actionable step: Implement ‘AI Ethics Sprints’: short, structured periods in which teams test AI tools against real scenarios while documenting ethical considerations and potential pitfalls.

3. Cross-functional culture building

Foster open dialogue about AI implications across all organisational levels and departments. Make AI ethics discussions a regular part of team meetings, not just annual compliance training.

Actionable step: Institute monthly ‘AI Ethics Coffee Chats’ or ‘meet-ups’ where team members (or anyone in the company) can share AI tools they’re using and discuss ethical questions that arise. Create a shared document where people can flag ethical concerns without judgment.

We believe that human input and iteration are what set great AI delivery apart from mere churn, and we’re in the business of equipping brands with the best talent for their evolving needs. This reflects our commitment to integrating AI ethically across all teams.

Immediate steps you can take today

1. Audit your current AI tools: List every AI application your team uses and evaluate it against basic ethical criteria like transparency, bias potential, and consumer impact.

2. Implement disclosure protocols: Develop clear guidelines about when and how you will inform consumers about AI use in your campaigns.

3. Diversify your AI training data: Actively seek out diverse data sources and regularly audit for bias in AI outputs.

4. Create feedback loops: Establish mechanisms for consumers and team members to raise concerns about AI use without fear of retribution.
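Step 1 above, the tool audit, could be started with something as simple as the sketch below, which flags any tool lacking an answer for a basic ethical criterion. The tool names, criteria keys, and values are invented for illustration:

```python
# Each entry records how a (hypothetical) AI tool answers three basic
# ethical criteria; an empty answer means the question is unresolved.
tools = [
    {"name": "Copy generator", "transparency": "disclosed in footer",
     "bias_review": "quarterly audit", "consumer_impact": "personalised ads"},
    {"name": "Image upscaler", "transparency": "",
     "bias_review": "none yet", "consumer_impact": "visual assets"},
]

criteria = ["transparency", "bias_review", "consumer_impact"]

def audit(tools, criteria):
    """Return (tool name, unanswered criteria) for every tool with gaps."""
    flagged = []
    for tool in tools:
        gaps = [c for c in criteria if not tool.get(c, "").strip()]
        if gaps:
            flagged.append((tool["name"], gaps))
    return flagged

for name, gaps in audit(tools, criteria):
    print(f"{name}: missing {', '.join(gaps)}")
# → Image upscaler: missing transparency
```

Even a spreadsheet version of this gives the same benefit: a single list of every AI application in use, with the unanswered ethical questions made visible instead of implicit.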

These are all areas where Maker Lab offers direct support. Our AI methodology extends across all areas where AI can drive measurable business impact, including creative development, media planning, client analytics, and strategic insights. We can help clients implement these steps effectively, ensuring they are not just compliant but also leveraging AI for positive impact.

The marketing industry has a trust problem: according to recent studies, consumer trust in advertising is at historic lows. The Cannes scandal and similar ethical failures only deepen this crisis.

However, companies that proactively address AI ethics will differentiate themselves in an increasingly crowded and sceptical marketplace.

Tech leaders from OpenAI’s Sam Altman to Google’s Sundar Pichai have warned that we need more regulation and awareness of the power and responsibility that comes with AI. But again, we cannot wait for regulation to catch up.

The road ahead

Our goal at Maker Lab is to ensure we’re building tools and campaigns that enhance rather than exploit the human experience. Our expertise lies in developing ethical and impactful AI solutions, as demonstrated by our commitment to human-centric design and our proven track record. For instance, we have helped our client teams transform tasks into daily automated deliverables, achieving faster turnarounds and freeing up time for more valuable, higher-quality work. We are well-equipped to guide clients in navigating the future of AI responsibly.

The Cannes Lions controversy should serve as a wake-up call because we have the power to shape how AI is used in marketing, but only if we act thoughtfully and together.

The future of marketing is about having the wisdom to use AI responsibly. The question is whether we will choose to do so.

Because in the end, the technology that serves humanity best is the technology most thoughtfully applied.








Leadership and Ethics in an AI-Driven Evolution
