
AI Adoption Is Surging in Advertising, but Is the Industry Prepared for Responsible AI?


AI is now a regular part of marketing and advertising: more than half of marketers already use GenAI for creative content and audience targeting, while nearly all plan to expand AI use next year, especially for content development and audience engagement. But while adoption is accelerating, safeguards are not. Over 70% of marketers have encountered an AI-related incident in their advertising efforts, including hallucinations, bias, or off-brand content, yet fewer than 35% plan to increase investment in AI governance or brand integrity oversight over the next 12 months.

This research, conducted by IAB in partnership with Aymara, surveyed 125 advertising industry executives in the U.S. using the IAB Insights Engine platform powered by Attest. The data paints a striking picture: AI adoption – and with it AI-related challenges – is outpacing safeguards. And industry leaders are raising the alarm.

As AI becomes central to how brands create content and connect with audiences, the advertising industry is at an inflection point. Marketers are eager to innovate, but without clear governance they put brand trust, compliance, and long-term value at risk. Now is the time for coordinated action to prioritize shared standards, stronger tools, and responsible practices to ensure AI enhances – rather than undermines – the future of advertising.

AI Is Everywhere in Marketing, and Still Growing

AI is now part of the marketing toolkit across the board. Over half of marketers are already using it for creative content, audience targeting, and customer support, with nearly as many applying it to predictive analytics.

And usage is set to grow: 58% plan to increase AI use for creative generation in the next year, along with expanded use in chatbots, targeting, and forecasting. AI isn’t just a trend; it’s quickly becoming core to how marketing gets done.

But concerns about AI are high: incidents are already happening at an alarming rate, and a single incident can impact ROI

Marketers are well aware of the risks of AI-generated advertising. Top concerns include misinformation and deepfakes, loss of creative control, and brand integrity risks from offensive or harmful outputs. Many also worry about consumer trust, with 37% fearing audiences will distrust ads made by AI. Other concerns include bias and fairness, regulatory compliance, and the challenge of monitoring AI content at scale. Some flagged the threat of adversarial prompts, like jailbreaks that trick models into unsafe behavior.

The takeaway: AI can pose serious ethical and quality risks, and marketers know these issues can damage trust and brand reputation. That’s why over 60% support labeling AI-generated ads, with only 15% opposed—signaling a strong push for transparency as a trust safeguard.

And these aren’t future risks. AI-related issues are already affecting advertising campaigns. In the research, 70% of marketers reported at least one AI incident. Common problems included hallucinated outputs (AI-generated content that was factually incorrect, nonsensical, or fabricated), biased or inappropriate content, and off-brand or offensive material. Others saw loss of creative control and failures in regulatory compliance.

The consequences were significant: 40% had to pause or pull ads, over a third dealt with brand damage or PR issues, and nearly 30% had to conduct internal audits. Some saw wasted budgets, client complaints, or legal concerns. Only 6% said the impact was minimal.

These early missteps are a clear warning – without proper oversight, AI can scale risks as fast as it scales output.

Patchy Safeguards and a False Sense of Security

Despite growing risks, AI oversight remains inconsistent. Most teams rely on human review and brand integrity checklists, which are important but basic steps. More advanced practices, such as consulting external AI ethics experts, running red-team testing, and using automated evaluation tools, are far less common. Alarmingly, 10% of respondents either do nothing or aren’t sure how they manage AI risks.

Yet confidence remains high. Nearly 90% say they feel prepared to catch AI issues before launch. This may reflect trust in existing workflows, but given that 70% have already had incidents, it also suggests a false sense of security.

The reality: only one-third of brands, agencies, and publishers have adopted or plan to adopt any formal governance tools, leaving major gaps (IAB State of Data 2025: The Now, The Near, and The Next Evolution of AI for Media Campaigns). There’s a strong need – and opportunity – for more structured, scalable safeguards across the industry, including systems that flag risk, ensure alignment, and protect brand trust before campaigns reach the public.

Industry Calls for Standards, Tools, and Transparency

Marketers are calling for stronger AI governance. When asked what’s needed to keep AI in advertising safe and effective, top priorities included regular AI audits for bias and integrity, transparency in AI decision-making, data privacy protections, and IP safeguards for AI-created content. 

In short, marketers want tools, policies, and standards to close real governance gaps. Only 6% believe current safeguards are enough. This is an opportunity for the industry to define systems, tooling, and benchmarks that can ensure AI outputs are safe, accurate, and aligned with brand values.

Accountability and Leadership: Who’s Minding the AI?

One major challenge in AI governance is ownership. When asked who leads these efforts, responses vary: most point to executive leadership or a dedicated AI task force, marketing/creative teams, or legal/compliance teams. Some also rely on data science or MarTech teams. Only 17% of organizations currently use an external partner for AI governance, suggesting most are building internal capabilities. But not all have definitive accountability: 14% say no one owns AI governance, and others aren’t sure who does.

Without structured ownership, risks can fall through the cracks. As companies scale GenAI, it’s critical to define who is responsible, whether that is a chief AI officer, a cross-functional council, or a dedicated team. This clarity is essential to move from good intentions to workable processes for oversight, testing, and enforcement, regardless of organizational structure. Once roles are defined, organizations should make sure their third-party partners know who is responsible, to enable better collaboration, strengthen industry relationships, and help mitigate shared risks.

Third-Party Support for Governance 

While most companies currently manage AI governance in-house, there’s strong interest in external support. When asked if they’d consider a third-party solution to evaluate risks like hallucinations, bias, or off-brand content, over 90% said yes.

Many see outside expertise as a valuable safety net. One marketer said it would offer “peace of mind,” while another noted it would “reduce risk to our brand and business.” Only a few were skeptical, mostly due to confidence in internal teams or concerns about cost. But those views were rare. Marketers are eager for expert tools and guidance to ensure their GenAI use is safe, effective, and aligned with brand values. This presents a timely opportunity to partner with trusted third parties to strengthen AI oversight across the industry.

No Time to Waste: A Call to Action on Responsible AI

This survey shows an industry moving fast on AI – but still building the guardrails as it goes. Advertisers are excited about AI’s potential for content, targeting, and engagement. But many have already seen the risks firsthand: misinformation, bias, and off-brand content that damage trust and waste budget.

Marketers are sending a strong message: they want help in the form of better standards, stronger tools, and expert support to use AI responsibly. Now is the time for collective action. Brands, agencies, publishers, and platforms all have a role to play in shaping AI governance.

Here are four steps to move forward:

  1. Make AI governance a priority. Assign ownership, engage leadership, and establish a cross-functional task force if you don’t have one already. Prioritize not only who is responsible, but how responsibility is translated into day-to-day workflows, review processes, and evaluation methods.
  2. Build your best practices. Start with foundational checks like human review, policy guidelines, and bias testing, then build toward structured evaluations, automated audits (with human-in-the-loop oversight where appropriate), and continuous monitoring that can scale with content volume and model complexity; a minimal sketch of such a check follows this list.
  3. Bring in expert support to scale. Accelerate safely with trusted experts and third-party tools designed to evaluate, test, and certify AI-driven content at scale, so your team can move faster without increasing risk.
  4. Lead with transparency. Don’t just say you use AI responsibly – prove it. Build systems that track how AI is used, flag risks, and generate audit-ready records. Stay vigilant on fairness, privacy, and ethics. Consumer trust depends on it.
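
To make step 2 concrete, here is a minimal sketch, in Python, of what an automated pre-launch check with human-in-the-loop escalation might look like. The policy rules, function names, and sample copy below are hypothetical placeholders for illustration; this is not an implementation of any specific tool or standard named in this article.

    import re
    from dataclasses import dataclass, field

    @dataclass
    class AuditResult:
        passed: bool
        flags: list[str] = field(default_factory=list)

    # Hypothetical brand-policy rules; a real team would derive these from its
    # own guidelines, legal review, and bias-testing findings.
    BANNED_PHRASES = ["guaranteed cure", "risk-free"]
    DISCLOSURE_PATTERN = re.compile(r"\b(sponsored|ad)\b", re.IGNORECASE)

    def audit_ad_copy(copy: str, ai_generated: bool) -> AuditResult:
        """Run cheap automated checks; anything flagged is escalated to a human."""
        flags = []
        lowered = copy.lower()
        for phrase in BANNED_PHRASES:
            if phrase in lowered:
                flags.append(f"banned phrase: {phrase!r}")
        if ai_generated and not DISCLOSURE_PATTERN.search(copy):
            flags.append("missing AI/sponsorship disclosure label")
        return AuditResult(passed=not flags, flags=flags)

    def review_queue(candidates: list[tuple[str, bool]]) -> None:
        """Auto-approve clean copy; route anything flagged to a human reviewer."""
        for copy, ai_generated in candidates:
            result = audit_ad_copy(copy, ai_generated)
            if result.passed:
                print("AUTO-APPROVED:", copy)
            else:
                # Human-in-the-loop: keep an audit-ready record and escalate.
                print("ESCALATED TO REVIEWER:", copy, "->", result.flags)

    review_queue([
        ("Try our new blend today. (Sponsored)", True),
        ("A guaranteed cure for tired mornings!", True),
    ])

The point is the pattern rather than the specific rules: automated checks catch cheap, repeatable failures at scale, while a human reviewer makes the final call on anything flagged, which is what lets the process grow with content volume.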

The data is clear: AI is undeniably transforming advertising, but incidents are already happening, current safeguards aren’t keeping pace, and marketers need better solutions. This isn’t a future problem to solve; it is a present reality demanding immediate action if the industry is to unlock AI’s full potential. With a few practical steps, responsible AI is not only possible – it can be the norm.

Survey Methodology

This research was conducted using the IAB Insights Engine platform, powered by Attest. It surveyed 125 U.S. ad industry executives who work for companies with 50+ employees and who have active involvement in, or visibility into, how their company uses AI in advertising and marketing. The survey was fielded in July 2024.




7 Life-Changing Books Recommended by Catriona Wallace | Books



Some books ignite something immediate. Others change you quietly, over time. For Dr Catriona Wallace, tech entrepreneur, AI ethics advocate, and one of Australia’s most influential business leaders, books are more than just ideas on paper. They are frameworks, provocations, and spiritual companions. Her reading list offers guidance not just for navigating leadership and technology, but for embracing identity, power, and inner purpose. These seven titles reflect a mind shaped by disruption, ethics, feminism, and wisdom. They are not trend-driven. They are transformational.

1. Lean In by Sheryl Sandberg

A landmark in feminist career literature, Lean In challenges women to pursue their ambitions while confronting the structural and cultural forces that hold them back. Sandberg uses her own journey at Facebook and Google to dissect gender inequality in leadership. The book is part memoir, part manifesto, and remains divisive for valid reasons. But Wallace cites it as essential for starting difficult conversations about workplace dynamics and ambition. It asks, simply: what would you do if you weren’t afraid?


2. Women and Power: A Manifesto by Mary Beard

In this sharp, incisive book, classicist Mary Beard examines the historical exclusion of women from power and public voice. From Medusa to misogynistic memes, Beard exposes how narratives built around silence and suppression persist today. The writing is fiery, brief, and packed with centuries of insight. Wallace recommends it for its ability to distil complex ideas into cultural clarity. It’s a reminder that power is not just a seat at the table; it is a script we are still rewriting.

3. The World of Numbers by Adam Spencer

A celebration of mathematics as storytelling, this book blends fun facts, puzzles, and history to reveal how numbers shape everything from music to human behaviour. Spencer, a comedian and maths lover, makes the subject inviting rather than intimidating. Wallace credits this book with sparking new curiosity about logic, data, and systems thinking. It’s not just for mathematicians. It’s for anyone ready to appreciate the beauty of patterns and the thinking habits that come with them.

4. Small Giants by Bo Burlingham

This book is a love letter to companies that chose to be great instead of big. Burlingham profiles fourteen businesses that opted for soul, purpose, and community over rapid growth. For Wallace, who has founded multiple mission-driven companies, this book affirms that success is not about scale. It is about integrity. Each story is a blueprint for building something meaningful, resilient, and values-aligned. It is a must-read for anyone tired of hustle culture and hungry for depth.

5. The Misogynist Factory by Alison Phipps

A searing academic work on the production of misogyny in modern institutions. Phipps connects the dots between sexual violence, neoliberalism, and resistance movements in a way that is as rigorous as it is radical. Wallace recommends this book for its clear-eyed confrontation of how systemic inequality persists beneath performative gestures. It equips readers with language to understand how power moves, morphs, and resists change. This is not light reading. It is necessary reading for anyone seeking to challenge structural harm.

6. Tribes by Seth Godin

Godin’s central idea is simple but powerful: people don’t follow brands, they follow leaders who connect with them emotionally and intellectually. This book blends marketing, leadership, and human psychology to show how movements begin. Wallace highlights ‘Tribes’ as essential reading for purpose-driven founders and changemakers. It reminds readers that real influence is built on trust and shared values. Whether you’re leading a company or a cause, it’s a call to speak boldly and build your own tribe.

7. The Tibetan Book of Living and Dying by Sogyal Rinpoche

Equal parts spiritual guide and philosophical reflection, this book weaves Tibetan Buddhist teachings with Western perspectives on mortality, grief, and rebirth. Wallace turns to it not only for personal growth but also for grounding ethical decision-making in a deeper sense of purpose. It’s a book that speaks to those navigating endings, whether personal, spiritual, or professional, and offers a path toward clarity and compassion. It does not offer answers. It offers presence, which is often far more powerful.


The books that shape us are often those that disrupt us first. Catriona Wallace’s list is not filled with comfort reads. It’s made of hard questions, structural truths, and radical shifts in thinking. From feminist manifestos to Buddhist reflections, from purpose-led business to systemic critique, this bookshelf is a mirror of her own leadership—decisive, curious, and grounded in values. If you’re building something bold or seeking language for change, there’s a good chance one of these books will meet you where you are and carry you further than you expected.






Hyderabad: Dr. Pritam Singh Foundation hosts AI and ethics round table at Tech Mahindra



The Dr. Pritam Singh Foundation and IILM University hosted a Round Table on “Human at Core: AI, Ethics, and the Future” in Hyderabad. Leaders and academics discussed leveraging AI for inclusive growth while maintaining ethics, inclusivity, and human-centric technology.

Published Date – 30 August 2025, 12:57 PM




Hyderabad: The Dr. Pritam Singh Foundation, in collaboration with IILM University, hosted a high-level Round Table Discussion on “Human at Core: AI, Ethics, and the Future” at Tech Mahindra, Cyberabad.

The event, held in memory of the late Dr. Pritam Singh, a pioneering academic, visionary leader, and architect of transformative management education in India, brought together policymakers, business leaders, and academics to explore how India can harness artificial intelligence (AI) while safeguarding ethics, inclusivity, and human values.


In his keynote address, Padmanabhaiah Kantipudi, IAS (Retd.), Chairman of the Administrative Staff College of India (ASCI), paid tribute to Dr. Pritam Singh, describing him as a nation-builder who bridged academia, business, and governance.

The Round Table theme, Leadership: AI, Ethics, and the Future, underscored India’s opportunity to leverage AI for inclusive growth across healthcare, agriculture, education, and fintech, while ensuring technology remains human-centric and trustworthy.




AI ethics: Bridging the gap between public concern and global pursuit – Pennsylvania



(The Center Square) – Those who grew up in the 20th and 21st centuries have spent their lives in an environment saturated with cautionary tales about technology and human error, projections of ancient flood myths onto modern scenarios in which the hubris of our species brings our downfall.

They feature a point of no return, dubbed the “singularity” by Manhattan Project mathematician John von Neumann, who suggested that technology would advance to a stage after which life as we know it would become unrecognizable.

Some say with the advent of artificial intelligence, that moment has come. And with it, a massive gap between public perception and the goals of both government and private industry. While states court data center development and tech investments, polling from Pew Research indicates Americans outside the industry have strong misgivings about AI.

In Pennsylvania, giants like Amazon and Microsoft have pledged to spend billions building the high-powered infrastructure required to enable the technology. Fostering this progress is a rare point of agreement between the state’s Democratic and Republican leadership, even bringing Gov. Josh Shapiro to the same event – if not the same stage – as President Donald Trump.

Pittsburgh is rebranding itself as the “global capital of physical AI,” leveraging its blue-collar manufacturing reputation and its prestigious academic research institutions to depict the perfect marriage of code and machine. Three Mile Island is rebranding itself as Crane Clean Energy Center, coming back online exclusively to power Microsoft AI services. Some legislators are eager to turn the lights back on fossil fuel-burning plants and even build new ones to generate the energy required to feed both AI and the everyday consumers already on the grid.


At the federal level, Trump has revoked guardrails established under the Biden administration with an executive order entitled “Removing Barriers to American Leadership in Artificial Intelligence.” In July, the White House released its “AI Action Plan.”

The document reads, “We need to build and maintain vast AI infrastructure and the energy to power it. To do that, we will continue to reject radical climate dogma and bureaucratic red tape, as the Administration has done since Inauguration Day. Simply put, we need to ‘Build, Baby, Build!’”

To borrow an analogy from Shapiro’s favorite sport, it’s a full-court press, and there’s hardly a day that goes by that messaging from the state doesn’t tout the thrilling promise of the new AI era. Next week, Shapiro will be returning to Pittsburgh along with a wide array of luminaries to attend the AI Horizons summit in Bakery Square, a hub for established and developing tech companies.

According to leaders like Trump and Shapiro, the stakes could not be higher. It isn’t just a race for technological prowess — it’s an existential fight against China for control of the future itself. AI sits at the heart of innovation in fields like biotechnology, which promise to eradicate disease, address climate collapse, and revolutionize agriculture. It also sits at the heart of defense, an industry that thrives in Pennsylvania.

Yet on one point everyday citizens and AI experts agree: they want to see more government control and regulation of the technology. With the impacts of political deepfakes, algorithmic bias, and rogue chatbots already being felt, AI has far outpaced legislation, often to disastrous effect.

In an interview with The Center Square, Penn researcher Dr. Michael Kearns said that he’s less worried about autonomous machines becoming all-powerful than the challenges already posed by AI.


Kearns spends his time creating mathematical models and writing about how to embed ethical human principles into machine code. He believes that in some areas like chatbots, progress may have reached a point where improvements appear incremental for the average user. He cites the most recent ChatGPT update as evidence.

“I think the harms that are already being demonstrated are much more worrisome,” said Kearns. “Demographic bias, chatbots hurling racist invectives because they were trained on racist material, privacy leaks.”

Kearns says that a major barrier to getting effective regulatory policy is incentivizing experts to leave behind engaging work in the field as researchers and lucrative roles in tech in order to work on policy. Without people who understand how the algorithms operate, it’s difficult to create “auditable” regulations, meaning there are clear tests to pass.

Kearns pointed to ISO/IEC 42001, an international standard that focuses on process rather than outcome to guide developers in creating ethical AI. He also noted that the market itself is a strong guide: when someone gets hurt or hurts someone else using AI, it’s bad for business, incentivizing companies to do their due diligence.

He also noted crossroads where two ethical issues intersect. For instance, companies are entrusted with their users’ personal data. If policing misuse of the product requires an invasion of privacy, like accessing information stored on the cloud, there’s only so much that can be done.

OpenAI recently announced that it is scanning user conversations for concerning statements and escalating them to human teams, who may contact authorities when deemed appropriate. For some, the idea of alerting the police to someone suffering from mental illness is a dangerous breach. Still, it demonstrates the calculated risks AI companies have to take when faced with reports of suicide, psychosis, and violence arising out of conversations with chatbots.

Kearns says that even with the imperative for self-regulation, he expects more stumbling blocks before real improvement is seen in the absence of regulation. He cites watchdogs like the investigative journalists at ProPublica, who in 2016 demonstrated machine bias against Black people in programs used to inform criminal sentencing.

Kearns noted that the “headline risk” is not the same as enforceable regulation and mainly applies to well-established companies. For the most part, a company with a household name has an investment in maintaining a positive reputation. For others just getting started or flying under the radar, however, public pressure can’t replace law.

One area of AI concern that has been widely explored in the media is the use of AI by those who make and enforce the law. Kearns said, for his part, he’s found “three-letter agencies” to be “among the most conservative of AI adopters just because of the stakes involved.”

In Pennsylvania, AI is used by the state police force.

In an email to The Center Square, PSP Communications Director Myles Snyder wrote, “The Pennsylvania State Police, like many law enforcement agencies, utilizes various technologies to enhance public safety and support our mission. Some of these tools incorporate AI-driven capabilities. The Pennsylvania State Police carefully evaluates these tools to ensure they align with legal, ethical, and operational considerations.”

PSP was unwilling to discuss the specifics of those technologies.

AI is also used by the U.S. military and other militaries around the world, including those of Israel, Ukraine, and Russia, who are demonstrating a fundamental shift in the way war is conducted through technology.

In Gaza, the Lavender AI system was used to identify and target individuals connected with Hamas, allowing human agents to approve strikes with acceptable numbers of civilian casualties, according to Israeli intelligence officials who spoke to The Guardian on the matter. Analysis of AI use in Ukraine calls for a nuanced understanding of the way the technology is being used and ways in which it should be regulated by international bodies governing warfare in the future.

Then there are the less tangible concerns. Along with the long-looming “jobpocalypse,” many fear that offloading our day-to-day lives into the hands of AI may deplete our sense of meaning. Students using AI may fail to learn. Workers using AI may feel purposeless. Relationships with or grounded in AI may lead to disconnection.

Kearns acknowledged that there would be disruption to navigate in the classroom and workplace, but said AI would also provide opportunities for people who previously may not have been able to gain entrance into challenging fields.

As for outsourcing joy, he asked: “If somebody comes along with a robot that can play better tennis than you and you love playing tennis, are you going to stop playing tennis?”


