
Ethics & Policy

Beyond the AI Hype: Mindful Steps Marketers Should Take Before Using GenAI


In 2025, the prestigious Cannes Lions International Festival of Creativity made an unprecedented move by stripping agency DM9 of multiple awards, including a Creative Data Lions Grand Prix, after discovering the campaigns contained AI-generated and manipulated footage that misrepresented real-world results.

The agency had used generative AI to create synthetic visuals and doctored case films, leading juries to evaluate submissions under completely false pretenses.

This was a watershed moment that exposed how desperately our industry needs to catch up with the ethical implications of the AI tools we’re all racing to adopt.

The Promethean gap is now a chasm

I don’t know about you, but the speed at which AI is evolving, faster than I can comprehend its implications, leaves me feeling slightly nauseous, with a mix of fear, excitement, and overwhelm. If you’re wondering what this feeling is, it has a name: the ‘Promethean gap’.

German philosopher Günther Anders warned us about this disparity between our power to imagine and invent new technologies and our ethical ability to understand and manage them.

But this gap has now widened into a chasm because AI developments massively outpace our ability to even think about the governance or ethics of such applications. This is precisely where Maker Lab’s expertise comes in: we are not just about the hype; we focus on responsible and effective AI integration.

In a nutshell, whilst we’ve all been busy desperately trying to keep pace with the AI hype-train (myself included), we’re still figuring out how to make the best use of GenAI, let alone having the time or headspace to digest the ethics of it all.

If you’re a fellow marketer, you might feel like ethical conduct has been a topic of debate throughout your entire career. The concerns around AI are eerily similar to what we’ve faced before:

Transparency and consumer trust: Just as we learned from digital advertising scandals, being transparent about where and how consumer data is used, both explicitly and implicitly, is crucial. But AI’s opaque nature makes it even harder for consumers to understand how their data is used and how marketing messages are tailored, creating an unfair power dynamic.

Bias and representation: Remember DDB NZ’s “Correct the Internet” campaign, which highlighted how biased online information negatively impacts women in sports? AI amplifies this issue exponentially: biased training data can lead to marketing messages that reinforce harmful stereotypes and exclude marginalised groups. Don’t even get me started on the images GenAI presents when asked what an immigrant looks like… versus an expat, for example. Try it and see for yourself.

The power dynamic problem: Like digital advertising and personalisation, AI is a double-edged sword: it offers valuable insights into consumer behaviour, but its ethical implications depend heavily on the data it’s trained on and the intentions of those who use it. Tools are not inherently unethical, but without proper human oversight they can become so.

The Cannes Lions controversy perfectly illustrates what happens when we prioritise innovation speed over ethical consideration: agencies end up creating work that fundamentally deceives both judges and consumers.

Learning from Cannes: What went wrong and how to fix it

Following the DM9 controversy, Cannes Lions implemented several reforms that every marketing organisation should consider adopting:

  • Mandatory AI disclosure: All entries must explicitly state any use of generative AI
  • Enhanced ethics agreements: Stricter codes of conduct for all participants
  • AI detection technology: Advanced tools to identify manipulated or inauthentic content
  • Ethics review committees: Expert panels to evaluate questionable submissions

These changes signal that the industry is finally taking AI ethics seriously, but we can’t wait for external bodies to police our actions. This is why we help organisations navigate AI implementation through human-centric design principles, comprehensive team training, and ethical framework development.

As marketers adopt AI tools at breakneck speed, we’re seeing familiar ethical dilemmas amplified and accelerated. It is up to us to uphold a culture of ethics within our own organisations. Here’s how:

1. Governance (not rigid rules)

Instead of blanket AI prohibitions, establish clear ethics committees and decision-making frameworks. Create AI ethics boards that include diverse perspectives, not just tech teams, but legal, creative, strategy, and client services representatives. Develop decision trees that help teams evaluate whether an AI application aligns with your company’s values before implementation. This ensures AI is used responsibly and aligns with company values from the outset.

Actionable step: Draft an ‘AI Ethics Canvas’, a one-page framework that teams must complete before deploying any AI tool, covering data sources, potential bias, transparency requirements, and consumer impact.
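To make this concrete, here is a minimal sketch of what such a canvas could look like as a structured, checkable checklist. This is an illustrative Python sketch only, not Maker Lab’s actual canvas; the field names and example values are assumptions.

    from dataclasses import dataclass

    @dataclass
    class AIEthicsCanvas:
        """One-page pre-deployment checklist for an AI tool (illustrative only)."""
        tool_name: str
        data_sources: list[str]     # where the training/input data comes from
        bias_risks: list[str]       # known or suspected bias in data or outputs
        transparency_plan: str      # how AI use will be disclosed to consumers
        consumer_impact: str        # expected effect on the end consumer
        human_reviewer: str = ""    # who signs off before deployment

        def is_complete(self) -> bool:
            # The canvas only counts as complete when every section is filled in
            # and a named human has taken responsibility for the sign-off.
            return all([self.data_sources, self.bias_risks, self.transparency_plan,
                        self.consumer_impact, self.human_reviewer])

    canvas = AIEthicsCanvas(
        tool_name="Campaign copy generator",
        data_sources=["licensed stock copy", "first-party CRM data"],
        bias_risks=["under-representation of non-English markets"],
        transparency_plan="Disclose AI-assisted copy in campaign materials",
        consumer_impact="Personalised messaging; opt-out available",
        human_reviewer="Brand lead",
    )
    assert canvas.is_complete()  # gate deployment on a completed canvas

Even kept as a shared document rather than code, the point is the same: no AI tool goes live until every section is filled in and a named person has signed it off.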

2. Safe experimentation spaces

Create environments where teams can test AI applications with built-in ethical checkpoints. Establish sandboxes where the potential for harm is minimised and learning is maximised: controlled settings in which AI can be tested and refined ethically, with human oversight throughout.

Actionable step: Implement ‘AI Ethics Sprints’: short, structured periods in which teams test AI tools against real scenarios while documenting ethical considerations and potential pitfalls.

3. Cross-functional culture building

Foster open dialogue about AI implications across all organisational levels and departments. Make AI ethics discussions a regular part of team meetings, not just annual compliance training.

Actionable step: Institute monthly ‘AI Ethics Coffee Chats’ or ‘meet-ups’ where team members (or anyone in the company) can share AI tools they’re using and discuss ethical questions that arise. Create a shared document where people can flag ethical concerns without judgment.

We believe that human input and iteration are what set great AI delivery apart from mere churn, and we’re in the business of equipping brands with the best talent for their evolving needs. This signifies our commitment to integrating AI ethically across all teams.

Immediate steps you can take today

1. Audit your current AI tools: List every AI application your team uses and evaluate it against basic ethical criteria like transparency, bias potential, and consumer impact (a minimal sketch of such an audit follows this list).

2. Implement disclosure protocols: Develop clear guidelines about when and how you will inform consumers about AI use in your campaigns.

3. Diversify your AI training data: Actively seek out diverse data sources and regularly audit for bias in AI outputs.

4. Create feedback loops: Establish mechanisms for consumers and team members to raise concerns about AI use without fear of retribution.
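As flagged in step 1, here is a minimal sketch of what such an audit could look like in practice. It is an illustrative Python sketch only; the tool names, criteria, and review thresholds are assumptions, not a prescribed methodology.

    # Illustrative AI-tool inventory scored against basic ethical criteria (assumed values).
    inventory = [
        {"tool": "GenAI copywriter",    "transparency": "disclosed", "bias_risk": "medium", "consumer_impact": "high"},
        {"tool": "Lookalike modelling", "transparency": "implicit",  "bias_risk": "high",   "consumer_impact": "high"},
        {"tool": "Image upscaler",      "transparency": "disclosed", "bias_risk": "low",    "consumer_impact": "low"},
    ]

    def needs_review(entry: dict) -> bool:
        # Flag anything that is not clearly disclosed, or that combines
        # elevated bias risk with meaningful consumer impact.
        return (entry["transparency"] != "disclosed"
                or (entry["bias_risk"] in {"medium", "high"}
                    and entry["consumer_impact"] == "high"))

    for entry in inventory:
        status = "REVIEW" if needs_review(entry) else "ok"
        print(f'{entry["tool"]:<22} {status}')

A spreadsheet works just as well; what matters is that every tool is listed, scored, and revisited on a regular cadence.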

These are all areas where Maker Lab offers direct support. Our AI methodology extends across all areas where AI can drive measurable business impact, including creative development, media planning, client analytics, and strategic insights. We can help clients implement these steps effectively, ensuring they are not just compliant but also leveraging AI for positive impact.

The marketing industry has a trust problem: according to recent studies, consumer trust in advertising is at historic lows. The Cannes scandal and similar ethical failures only deepen this crisis.

However, companies that proactively address AI ethics will differentiate themselves in an increasingly crowded and sceptical marketplace.

Tech leaders from OpenAI’s Sam Altman to Google’s Sundar Pichai have warned that we need more regulation and awareness of the power and responsibility that comes with AI. But again, we cannot wait for regulation to catch up.

The road ahead

Our goal at Maker Lab is to ensure we’re building tools and campaigns that enhance rather than exploit the human experience. Our expertise lies in developing ethical and impactful AI solutions, as demonstrated by our commitment to human-centric design and our proven track record. For instance, we have helped client teams transform tasks into daily automated deliverables, achieving faster turnarounds and freeing up time for more valuable, higher-quality work. We are well-equipped to guide clients in navigating the future of AI responsibly.

The Cannes Lions controversy should serve as a wake-up call because we have the power to shape how AI is used in marketing, but only if we act thoughtfully and together.

The future of marketing is about having the wisdom to use these tools responsibly. The question is whether we will choose to use AI ethically.

Because in the end, the technology that serves humanity best is the technology that is most thoughtfully applied.






Ethics & Policy

$40 Million Series B Raised To Drive Ethical AI And Empower Publishers



ProRataAI, a company committed to building AI solutions that honor and reward the work of content creators, has announced the close of a $40 million Series B funding round. The round was led by Touring Capital, with participation from a growing network of investors who share ProRata’s vision for a more equitable and transparent AI ecosystem. This latest investment brings the company’s total funding to over $75 million since its founding just last year, and it marks a significant step forward in its mission to reshape how publishers engage with generative AI.

The company also announced the launch of Gist Answers, ProRata’s new AI-as-a-service platform designed to give publishers direct control over how AI interacts with their content. Gist Answers allows media organizations to embed custom AI search, summarization, and recommendation tools directly into their websites and digital properties. Rather than watching their content be scraped and repurposed without consent, publishers can now offer AI-powered experiences on their own terms—driving deeper engagement, longer user sessions, and more meaningful interactions with their audiences.

The platform has already attracted early-access partners representing over 100 publications, a testament to the growing demand for AI tools that respect editorial integrity and support sustainable business models. Gist Answers is designed to be flexible and intuitive, allowing publishers to tailor the AI experience to their brand’s voice and editorial standards. It’s not just about delivering answers—it’s about creating a richer, more interactive layer of discovery that keeps users engaged and informed.

Beyond direct integration, ProRata is also offering publishers the opportunity to license their content to inform Gist Answers across third-party destinations. More than 700 high-quality publications around the world have already joined this initiative, contributing to a growing network of licensed content that powers AI responses with verified, attributable information. This model is underpinned by ProRata’s proprietary content attribution technology, which ensures that every piece of content used by the AI is properly credited and compensated. In doing so, the company is building a framework where human creativity is not only preserved but actively rewarded in the AI economy.
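The mechanics of ProRata’s proprietary attribution technology are not described in the announcement, but the general principle of proportional (pro-rata) compensation it points to can be shown with a toy calculation. The publication names, attribution weights, and revenue figure below are invented purely for illustration; this is not ProRata’s algorithm.

    # Toy illustration of proportional (pro-rata) revenue sharing among cited sources.
    # NOT ProRata's proprietary technology: weights and revenue are made-up examples.
    attribution = {        # share of an AI answer attributed to each source
        "Publication A": 0.5,
        "Publication B": 0.3,
        "Publication C": 0.2,
    }
    answer_revenue = 1.00  # revenue generated by the answer (e.g. in dollars)

    payouts = {source: round(weight * answer_revenue, 4)
               for source, weight in attribution.items()}
    # -> {'Publication A': 0.5, 'Publication B': 0.3, 'Publication C': 0.2}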

Gist Answers is designed to work seamlessly with Gist Ads, ProRata’s innovative advertising platform that transforms AI-generated responses into premium ad inventory. By placing native, conversational ads adjacent to AI answers, Gist Ads creates a format that aligns with user intent and delivers strong performance for marketers. For publishers, this means new revenue streams that are directly tied to the value of their content and the engagement it drives.

ProRata’s approach stands in stark contrast to the extractive models that have dominated the early days of generative AI. The company was founded on the belief that the work of journalists, creators, and publishers is not just data to be mined—it’s a vital source of knowledge and insight that deserves recognition, protection, and compensation. By building systems that prioritize licensing over scraping, transparency over opacity, and partnership over exploitation, ProRata is proving that AI can be both powerful and principled.

How the funding will be used

With the Series B funding, ProRata plans to scale its team, expand its product offerings, and deepen its relationships with publishers and content creators around the world. The company is focused on building tools that are not only technologically advanced but also aligned with the values of the people who produce the content that fuels AI. As generative AI continues to evolve, ProRata is positioning itself as a trusted partner for publishers seeking to navigate this new landscape with confidence and integrity.

KEY QUOTES:

“Search has always shaped how people discover knowledge, but for too long publishers have been forced to give that power away. Gist Answers changes that dynamic, bringing AI search directly to their sites, where it deepens engagement, restores control, and opens entirely new paths for discovery.”

Bill Gross, CEO and founder of ProRata

“Generative AI is reshaping search and digital advertising, creating an opportunity for a new category of infrastructure to compensate content creators whose work powers the answers we are relying on daily. ProRata is addressing this inflection point with a market-neutral model designed to become the default platform for attribution and fair monetization across the ecosystem. We believe the shift toward AI-native search experiences will unlock greater value for advertisers, publishers, and consumers alike.”

Nagraj Kashyap, General Partner, Touring Capital

“As a publisher, our priority is making sure our journalism reaches audiences in trusted ways. By contributing our content to the Gist network, we know it’s being used ethically, with full credit, while also helping adopters of Gist Answers deliver accurate, high-quality responses to their readers.”

Nicholas Thompson, CEO of The Atlantic

“The role of publishers in the AI era is to ensure that trusted journalism remains central to how people search and learn. By partnering with ProRata, we’re showing how an established brand can embrace new technology like Gist Answers to deepen engagement and demonstrate the enduring value of quality journalism.”

Andrew Perlman, CEO of Recurrent, owner of Popular Science

“Search has always been critical to how our readers find and interact with content. With Gist Answers, our audience can engage directly with us and get trusted answers sourced from our reporting, strengthened by content from a vetted network of international media outlets. Engagement is higher, and we’re able to explore new revenue opportunities that simply didn’t exist before.”

Jeremy Gulban, CEO of CherryRoad Media

“We’re really excited to be partnering with ProRata. At Arena, we’re always looking for unique and innovative ways to better serve our audience, and Gist Answers allows us to adapt to new technology in an ethical way.”

Paul Edmondson, CEO of The Arena Group, owner of Parade and Athlon Sports





Ethics & Policy

Michael Lissack’s New Book “Questioning Understanding” Explores the Future of Scientific Inquiry and AI Ethics




Photo Courtesy: Michael Lissack

“Understanding is not a destination we reach, but a spiral we climb—each new question changes the view, and each new view reveals questions we couldn’t see before.”

Michael Lissack, Executive Director of the Second Order Science Foundation, cybernetics expert, and professor at Tongji University, has released his new book, “Questioning Understanding.” Now available, the book explores a fresh perspective on scientific inquiry by encouraging readers to reconsider the assumptions that shape how we understand the world.

A Thought-Provoking Approach to Scientific Inquiry

In “Questioning Understanding,” Lissack introduces the concept of second-order science, a framework that examines the uncritically examined presuppositions (UCEPs) that often underlie scientific practices. These assumptions, while sometimes essential for scientific work, may also constrain our ability to explore complex phenomena fully. Lissack suggests that by engaging with these assumptions critically, there could be potential for a deeper understanding of the scientific process and its role in advancing human knowledge.

The book features an innovative tête-bêche format, offering two entry points for readers: “Questioning → Understanding” or “Understanding → Questioning.” This structure reflects the dynamic relationship between knowledge and inquiry, aiming to highlight how questioning and understanding are interconnected and reciprocal. By offering two different entry paths, Lissack emphasizes that the journey of scientific inquiry is not linear. Instead, it’s a continuous process of revisiting previous assumptions and refining the lens through which we view the world.

The Battle Against Sloppy Science

Lissack’s work took on new urgency during the COVID-19 pandemic, when he witnessed an explosion of what he calls “slodderwetenschap”—Dutch for “sloppy science”—characterized by shortcuts, oversimplifications, and the proliferation of “truthies” (assertions that feel true regardless of their validity).

Working with colleague Brenden Meagher, Lissack identified how sloppy science undermines public trust through what he calls the “3Ts”—Truthies, TL;DR (oversimplification), and TCUSI (taking complex understanding for simple information). Their research revealed how “truthies spread rampantly during the pandemic, damaging public health communication” through “biased attention, confirmation bias, and confusion between surface information and deeper meanings”.

“COVID-19 demonstrated that good science seldom comes from taking shortcuts or relying on ‘truthies,'” Lissack notes.

“Good science, instead, demands that we continually ask what about a given factoid, label, category, or narrative affords its meaning—and then to base further inquiry on the assumptions, contexts, and constraints so revealed.”

AI as the New Frontier of Questioning

As AI technologies, including Large Language Models (LLMs), continue to influence research and scientific methods, Lissack’s work has become increasingly relevant. In his book “Questioning Understanding”, Lissack presents a thoughtful examination of AI in scientific research, urging a responsible approach to its use. He discusses how AI tools may support scientific progress but also notes that their potential limitations can undermine the rigor of research if used uncritically.

“AI tools have the capacity to both support and challenge the quality of scientific inquiry, depending on how they are employed,” says Lissack.

“It is essential that we engage with AI systems as partners in discovery—through reflective dialogue—rather than relying on them as simple solutions to complex problems.”

He stresses that while AI can significantly accelerate research, it is still important for human researchers to remain critically engaged with the data and models produced, questioning the assumptions encoded within AI systems.

With over 2,130 citations on Google Scholar, Lissack’s work continues to shape discussions on how knowledge is created and applied in modern research. His innovative ideas have influenced numerous fields, from cybernetics to the integration of AI in scientific inquiry.

Recognition and Global Impact

Lissack’s contributions to the academic world have earned him significant recognition. He was named among “Wall Street’s 25 Smartest Players” by Worth Magazine and included in the “100 Americans Who Most Influenced How We Think About Money.” His efforts extend beyond personal recognition; he advocates for a research landscape that emphasizes integrity, critical thinking, and ethical foresight in the application of emerging technologies, ensuring that these tools foster scientific progress without compromising standards.

About “Questioning Understanding”

“Questioning Understanding” provides an in-depth exploration of the assumptions that guide scientific inquiry, urging readers to challenge their perspectives. Designed as a tête-bêche edition—two books in one with dual covers and no single entry point—it forces readers to choose where to begin: “Questioning → Understanding” or “Understanding → Questioning.” This innovative format reflects the recursive relationship between inquiry and insight at the heart of his work.

As Michael explains: “Understanding is fluid… if understanding is a river, questions shape the canyon the river flows in.” The book demonstrates how our assumptions about knowledge creation itself shape what we can discover, making the case for what he calls “reflexive scientific practice”—science that consciously examines its own presuppositions.


Photo Courtesy: Michael Lissack

About Michael Lissack

Michael Lissack is a globally recognized figure in second-order science, cybernetics, and AI ethics. He is the Executive Director of the Second Order Science Foundation and a Professor of Design and Innovation at Tongji University in Shanghai. Lissack has served as President of the American Society for Cybernetics and is widely acknowledged for his contributions to the field of complexity science and the promotion of rigorous, ethical research practices.

Building on foundational work in cybernetics and complexity science, Lissack developed the framework of UnCritically Examined Presuppositions (UCEPs)—nine key dimensions, including context dependence, quantitative indexicality, and fundierung dependence, that act as “enabling constraints” in scientific inquiry. These hidden assumptions simultaneously make scientific work possible while limiting what can be observed or understood.

As Lissack explains: “Second order science examines variations in values assumed for these UCEPs and looks at the resulting impacts on related scientific claims. Second order science reveals hidden issues, problems, and assumptions which all too often escape the attention of the practicing scientist.”

Michael Lissack’s books are available through major retailers. Learn more about his work at lissack.com and the Second Order Science Foundation at secondorderscience.org.

Media Contact
Company Name: Digital Networking Agency
Phone: +1 571 233 9913
Country: United States
Website: https://www.digitalnetworkingagency.com/



