Ethics & Policy
Anupamaa’s Rupali Ganguly Lauds PM Modi For Banning Pakistani Content On OTT: Hats Off!

After blocking the social media handles of Pakistani artists, the Ministry of Information and Broadcasting advised OTT platforms to limit streaming content linked to Pakistan. Hours after the order, actress Rupali Ganguly voiced her support, praising Prime Minister Narendra Modi for taking steps to safeguard the nation’s digital space.
Rupali Ganguly Lauds PM Modi
Appreciating the government’s effort to tighten control over cross-border digital content, she wrote, “Hats off to Modi Govt for banning Pak streaming content! In times of tension, we must protect our digital borders.”
Rupali has been actively sharing and reacting to updates amid India-Pakistan tensions following the horrifying Pahalgam attack, where 26 innocent civilians lost their lives. Earlier today, she responded to the Indian Armed Forces targeting air defence radars and systems at multiple locations in Pakistan by writing, “Every tear will be avenged. #OperationSindoor is ongoing.”
Before this, she made headlines for calling out Pakistani actor Fawad Khan over his statement on Operation Sindoor, during which India successfully targeted nine terrorist bases in Pakistan and Pakistan-occupied Kashmir. She wrote, “You working in Indian films was also ‘shameful’ for us.”
Pakistani Content Banned On OTT
The Ministry of Information and Broadcasting has issued an advisory to OTT platforms, streaming services, and online publishers, asking them to avoid hosting content linked to Pakistan. As per the rules, platforms must follow the ‘Code of Ethics,’ which requires caution against sharing content that could harm India’s security, unity, or foreign relations, or that could incite violence. The advisory also reminded platforms to prevent users from uploading such harmful content. Citing the recent terrorist attack in Pahalgam, which killed several Indians and a Nepali citizen, the ministry highlighted links to Pakistan-based groups. It also instructed OTT platforms to stop streaming Pakistani web series, films, and songs.
“In the interest of national security, all OTT platforms, media streaming platforms, and intermediaries operating in India are advised to discontinue the web-series, films, songs, podcasts and other streaming media content, whether made available on a subscription-based model or otherwise, having its origins in Pakistan with immediate effect,” said the advisory.
Ethics & Policy
Beyond the AI Hype: Mindful Steps Marketers Should Take Before Using GenAI

In 2025, the prestigious Cannes Lions International Festival of Creativity made an unprecedented move by stripping agency DM9 of multiple awards, including a Creative Data Lions Grand Prix, after discovering the campaigns contained AI-generated and manipulated footage that misrepresented real-world results.
The agency had used generative AI to create synthetic visuals and doctored case films, leading juries to evaluate submissions under completely false pretenses.
This was a watershed moment that exposed how desperately our industry needs to catch up with the ethical implications of the AI tools we’re all racing to adopt.
The Promethean gap is now a chasm
I don’t know about you, but the speed at which AI is evolving, before I even have time to comprehend the implications, leaves me slightly nauseous with a mix of fear, excitement, and overwhelm. If you’re wondering what this feeling is, it has a name: ‘the Promethean gap’.
German philosopher Günther Anders warned us about this disparity between our power to imagine and invent new technologies and our ethical ability to understand and manage them.
But this gap has now widened into a chasm because AI developments massively outpace our ability to even think about the governance or ethics of such applications. This is precisely where Maker Lab’s expertise comes in: we are not just about the hype; we focus on responsible and effective AI integration.
In a nutshell, whilst we’ve all been busy desperately trying to keep pace with the AI hype-train (myself included), we’re still figuring out how to make the best use of GenAI, let alone having the time or headspace to digest the ethics of it all.
For fellow marketers, you might feel like ethical conduct has been a topic of debate throughout your entire career. The concerns around AI are eerily similar to what we’ve faced before:
Transparency and consumer trust: Just as we learned from digital advertising scandals, being transparent about where and how consumer data is used, both explicitly and implicitly, is crucial. But AI’s opaque nature makes it even harder for consumers to understand how their data is used and how marketing messages are tailored, creating an unfair power dynamic.
Bias and representation: Remember DDB NZ’s “Correct the Internet” campaign, which highlighted how biased online information negatively impacts women in sports? AI amplifies this issue exponentially: biased training data can lead to marketing messages that reinforce harmful stereotypes and exclude marginalised groups. Don’t even get me started on the images GenAI presents when asked what an immigrant looks like…versus an expat, for example. Try it and see for yourself.
The power dynamic problem: Like digital advertising and personalisation, AI is a double-edged sword because it offers valuable insights into consumer behaviour, but its ethical implications depend heavily on the data it’s trained on and the intentions of those who use it. Tools are not inherently unethical, but without proper human oversight, they can become so.
The Cannes Lions controversy perfectly illustrates what happens when we prioritise innovation speed over ethical consideration, as it results in agencies creating work that fundamentally deceives both judges and consumers.
Learning from Cannes: What went wrong and how to fix it
Following the DM9 controversy, Cannes Lions implemented several reforms that every marketing organisation should consider adopting:
- Mandatory AI disclosure: All entries must explicitly state any use of generative AI
- Enhanced ethics agreements: Stricter codes of conduct for all participants
- AI detection technology: Advanced tools to identify manipulated or inauthentic content
- Ethics review committees: Expert panels to evaluate questionable submissions
These changes signal that the industry is finally taking AI ethics seriously, but we can’t wait for external bodies to police our actions. This is why we help organisations navigate AI implementation through human-centric design principles, comprehensive team training, and ethical framework development.
As marketers adopt AI tools at breakneck speed, we’re seeing familiar ethical dilemmas amplified and accelerated. It is up to us to uphold a culture of ethics within our own organisations. Here’s how:
1. Governance (not rigid rules)
Instead of blanket AI prohibitions, establish clear ethics committees and decision-making frameworks. Create AI ethics boards that include diverse perspectives, not just tech teams, but legal, creative, strategy, and client services representatives. Develop decision trees that help teams evaluate whether an AI application aligns with your company’s values before implementation. This ensures AI is used responsibly and aligns with company values from the outset.
Actionable step: Draft an ‘AI Ethics Canvas’, a one-page framework that teams must complete before deploying any AI tool, covering data sources, potential bias, transparency requirements, and consumer impact.
2. Safe experimentation spaces
Create environments where teams can test AI applications with built-in ethical checkpoints. Establish sandbox environments where the potential for harm is minimised, and learning is maximised. This means creating controlled environments where AI can be tested and refined ethically, ensuring human oversight.
Actionable step: Implement ‘AI Ethics Sprints’: short, structured periods where teams test AI tools against real scenarios while documenting ethical considerations and potential pitfalls.
3. Cross-functional culture building
Foster open dialogue about AI implications across all organisational levels and departments. Make AI ethics discussions a regular part of team meetings, not just annual compliance training.
Actionable step: Institute monthly ‘AI Ethics Coffee Chats’ or ‘meet-ups’ where team members (or anyone in the company) can share AI tools they’re using and discuss ethical questions that arise. Create a shared document where people can flag ethical concerns without judgment.
We believe that human input and iteration are what set great AI delivery apart from mere churn, and we’re in the business of equipping brands with the best talent for their evolving needs. This signifies our commitment to integrating AI ethically across all teams.
Immediate steps you can take today
1. Audit your current AI tools: List every AI application your team uses and evaluate it against basic ethical criteria like transparency, bias potential, and consumer impact.
2. Implement disclosure protocols: Develop clear guidelines about when and how you will inform consumers about AI use in your campaigns.
3. Diversify your AI training data: Actively seek out diverse data sources and regularly audit for bias in AI outputs.
4. Create feedback loops: Establish mechanisms for consumers and team members to raise concerns about AI use without fear of retribution.
These are all areas where Maker Lab offers direct support. Our AI methodology extends across all areas where AI can drive measurable business impact, including creative development, media planning, client analytics, and strategic insights. We can help clients implement these steps effectively, ensuring they are not just compliant but also leveraging AI for positive impact.
The marketing industry has a trust problem: according to recent studies, consumer trust in advertising is at historic lows. The Cannes scandal and similar ethical failures only deepen this crisis.
However, companies that proactively address AI ethics will differentiate themselves in an increasingly crowded and sceptical marketplace.
Tech leaders from OpenAI’s Sam Altman to Google’s Sundar Pichai have warned that we need more regulation and awareness of the power and responsibility that comes with AI. But again, we cannot wait for regulation to catch up.
The road ahead
Our goal at Maker Lab is to ensure we’re building tools and campaigns that enhance rather than exploit the human experience. Our expertise lies in developing ethical and impactful AI solutions, as demonstrated by our commitment to human-centric design and our proven track record. For instance, we have helped our client teams transform tasks into daily automated deliverables, thus achieving faster turnarounds, freeing up time for more valuable and quality work. We are well-equipped to guide clients in navigating the future of AI responsibly.
The Cannes Lions controversy should serve as a wake-up call because we have the power to shape how AI is used in marketing, but only if we act thoughtfully and together.
The future of marketing is not just about having powerful tools; it is about having the wisdom to use them responsibly. The question is whether we will choose to use AI ethically, because in the end, the technology that serves humanity best is the technology most thoughtfully applied.
Ethics & Policy
Leadership and Ethics in an AI-Driven Evolution

Ethics & Policy
AI ethics gaps persist in company codes despite greater access

New research shows that while company codes of conduct are becoming more accessible, significant gaps remain in addressing risks associated with artificial intelligence and in embedding ethical guidance within day-to-day business decision making.
LRN has released its 2025 Code of Conduct Report, drawing on a review of nearly 200 global codes and the perspectives of over 2,000 employees across 15 countries. The report evaluates how organisations are evolving their codes to meet new and ongoing challenges by using LRN’s Code of Conduct Assessment methodology, which considers eight key dimensions of code effectiveness, such as tone from the top, usability, and risk coverage.
Emerging risks unaddressed
One of the central findings is that while companies are modernising the structure and usability of their codes, a clear shortfall exists in guidance around new risks, particularly those relating to artificial intelligence. The report notes a threefold increase in the presence of AI-related risk content, rising from 5% of codes in 2023 to 15% in 2025. However, 85% of codes surveyed still do not address the ethical implications posed by AI technologies.
“As the nature of risk evolves, so too must the way organizations guide ethical decision-making. Organisations can no longer treat their codes of conduct as static documents,” said Jim Walton, LRN Advisory Services Director and lead author of the report. “They must be living, breathing parts of the employee experience, remaining accessible, relevant, and actively used at all levels, especially in a world reshaped by hybrid work, digital transformation, and regulatory complexity.”
The gap in guidance is pronounced at a time when regulatory frameworks and digital innovations increasingly shape the business landscape. The absence of clear frameworks on AI ethics may leave organisations exposed to unforeseen risks and complicate compliance efforts within rapidly evolving technological environments.
Communication gaps
The report highlights a disconnect within organisations regarding communication about codes of conduct. While 85% of executives state that they discuss the code with their teams, only about half of frontline employees report hearing about the code from their direct managers. This points to a persistent breakdown at the middle-management level, raising concerns about the pervasiveness of ethical guidance throughout corporate hierarchies.
Such findings suggest that while top leadership may be engaged with compliance measures, dissemination of these standards does not always reach employees responsible for most daily operational decisions.
Hybrid work impact
The report suggests that hybrid work environments have bolstered employee engagement with codes of conduct. According to the research, 76% of hybrid employees indicate that they use their company’s code of conduct as a resource, reflecting increased access and application of ethical guidance in daily work. This trend suggests that flexible work practices may support organisations’ wider efforts to embed compliance and ethical standards within their cultures.
Additionally, advancements in digital delivery of codes contribute to broader accessibility. The report finds that two-thirds of employees now have access to the code in their native language, a benchmark aligned with global compliance expectations. Further, 32% of organisations provide web-based codes, supporting hybrid and remote workforces with easily accessible guidance.
Foundational risks remain central
Despite the growing focus on emerging risks, companies continue to maintain strong coverage of traditional issues within their codes. Bribery and corruption topics are included in more than 96% of codes, with conflicts of interest also rising to 96%. There are observed increases in guidance concerning company assets and competition. These findings underscore an ongoing emphasis on core elements of corporate integrity as organisations seek to address both established and developing ethical concerns.
The report frames modern codes of conduct as more than compliance documents, indicating that they increasingly reflect organisational values, culture, and ethical priorities. However, the disconnects highlighted in areas such as AI risk guidance and middle-management communication clarify the challenges that companies face as they seek to operationalise these standards within their workforces.
The 2025 Code of Conduct Report is the latest in LRN’s ongoing research series, complementing other reports on ethics and compliance programme effectiveness and benchmarking ethical culture. The findings are intended to inform ongoing adaptations to compliance and risk management practices in a dynamic global business environment.