Anti-AI Explained: Why Resistance to AI Is Growing
As artificial intelligence continues to advance and grow more prevalent, so too does public concern around where it all might lead. The majority of U.S. adults surveyed by Pew Research Center said AI will have a negative effect on the country over the next 20 years — and that it’s more likely to harm them than benefit them.

What Is the Anti-AI Movement?

The anti-AI movement opposes artificial intelligence due to numerous concerns, such as the spread of disinformation, elimination of jobs, violation of copyright laws and AI’s potential use in mass surveillance or autonomous weapons. While some believe AI will inevitably harm the human race, others advocate for greater transparency, regulation and ethical safeguards.

While AI has the potential to cure diseases and solve complex global problems ranging from climate change to homelessness, it also presents some serious risks. Deepfake audio, images and videos are already being used to spread disinformation, influence elections, run phishing scams and create non-consensual explicit material. In the wrong hands, AI could be used to develop biological weapons, launch complex cyberattacks or institute mass surveillance programs.

AI could also cause widespread unemployment and economic inequality, with several analysts estimating that hundreds of millions of jobs will be lost to automation. There are also environmental costs: Data centers consume large quantities of water and electricity, emit massive amounts of carbon and require the mining of rare earth minerals. Never mind the fact that some of the top researchers in the field believe AI could eventually outwit humans and kill us all.

These wide-ranging concerns form the basis of a growing anti-AI sentiment percolating in homes, offices and online communities around the world. In this article, we’ll introduce the communities of artists, activists and researchers who are taking a stand against artificial intelligence through protests, advocacy and technological advancements of their own. 

 

What Is the Anti-AI Movement? 

The anti-AI movement is a diverse constituency. Some are motivated by the desire to protect their intellectual property and their livelihood, while others are concerned about the potential for discrimination, human rights abuses and a superintelligent AI that tries to eliminate humans altogether. 

Some of the most unified anti-AI voices are the artists, writers and musicians whose work is being scraped by generative AI tools — and then regurgitated as original creations without any credit or compensation. Hollywood screenwriters, record labels, newspapers, authors and visual artists have all sued AI companies for violating their copyright and trademark rights.

When it comes to AI in weapons of war, the International Committee of the Red Cross, the secretary-general of the United Nations and Stop Killer Robots, a coalition of more than 70 non-governmental organizations, have all voiced opposition to autonomous weapons systems that lack human oversight and accountability.

Social justice organizations have called attention to AI bias and its impact on marginalized communities. For example, the Algorithmic Justice League raises awareness about racial bias in facial recognition systems and predictive policing technologies.

There are several protest groups that have coalesced around their shared concerns about AI. One such group, called PauseAI, has lobbied politicians for an international treaty that would halt the development of large-scale AI models until meaningful safety regulations are implemented. The idea of pausing AI development gained momentum in 2023, when thousands of signatories — including prominent AI researchers like Yoshua Bengio and Stuart Russell, and tech figures like Elon Musk and Steve Wozniak — called for a temporary halt until shared safety protocols could be adopted.

Joep Meindertsma, the founder of PauseAI, told Built In that AI could eventually possess superintelligence, becoming “smart enough to spread itself across the internet, steer countries towards its own goals, invent new weapons and manipulate people on a massive scale.”

“We need to make sure that our knowledge of AI safety outpaces our knowledge of AI capabilities, and that requires regulations on an international level,” Meindertsma said. “We need a global treaty that pauses this race, until we know how to retain control. And even if we know how to control them, we need to think about what kind of society we want. How will we distribute the benefits? Who gets to control this thing, and say how it operates? We need to seriously start to think about these questions, because if we won’t, some AI company CEO will end up controlling our world.”

Another anti-AI protest group called Stop AI believes a pause doesn’t go far enough. It’s calling for a permanent ban on the pursuit of artificial general intelligence (AGI) — a theoretical type of AI that will be able to perform any intellectual task a human can. 

“AI is dangerous and AI safety is an illusion,” the group states on its website. “There is no way to prove experimentally that the AGI will never want something that will lead to our extinction, similar to how we have caused the extinction of many less intelligent species.”

Concerns about a superintelligent AI taking over the world have been voiced by several prominent figures in the industry — most notably Geoffrey Hinton, who quit his job at Google in 2023 to warn people about the existential risks of AI. Nicknamed the “godfather of AI” for his pioneering role in the development of neural networks, Hinton cautions that superintelligent systems could overpower (or even manipulate) humans, jeopardizing the future of the human race as a whole. AI researchers Eliezer Yudkowsky and Nate Soares go further, arguing in their book “If Anyone Builds It, Everyone Dies” that the technology will inevitably lead to our extinction. Yudkowsky has advocated for shutting down AI development altogether.

Related Reading: What Is Responsible AI?

 

Anti-AI Examples

AI has evolved with unprecedented speed, outpacing efforts to regulate or control a technology that is already impacting many professions. In an attempt to regain some control, groups have pushed back with lawsuits, protests and advocacy efforts.

The Entertainment Industry Pushes Back

The entertainment industry has been one of the most vocal and organized in its opposition to AI. For example, the Writers Guild of America went on strike in 2023 in part to limit the use of artificial intelligence in the screenwriting process. After months of negotiations, the guild secured a contract that prohibits studios from using generative AI to replace human writers. The contract also states that writers can use AI with the studio’s consent, but they cannot be forced to use it.

Musicians are pushing back, too. Record labels have sued AI music-generation startups Udio and Suno for copyright infringement, and both companies are reportedly negotiating a licensing deal with the labels. Meanwhile, more than 200 musical artists signed an open letter demanding that tech companies stop developing AI music generators that could be used to devalue or replace human musicians. Tennessee also passed a law — titled the Ensuring Likeness, Voice and Image Security (ELVIS) Act — that makes it illegal to replicate a singer’s voice without their consent. 

Visual Artists Speak Out

For visual artists, the threat of artificial intelligence has been especially visible — and many of them have begun revolting against the generated art that is inundating online communities like DeviantArt, ArtStation and Pinterest. When DeviantArt launched a text-to-image generation tool called DreamUp in 2022, users were particularly angry that their work was being used to train it. Artists could remove their work from the training pool, but the opt-out process was onerous. In response to the criticism, DeviantArt removed all user artwork from its training data by default.

Other platforms have faced similar backlash. When ArtStation began hosting AI-generated art, users flooded the site with an anti-AI emblem in solidarity. ArtStation removed the protest images, claiming they violated the website’s terms of service. In the wake of this backlash, a new portfolio site called Cara launched, promising to filter out AI images until the “rampant ethical and data privacy issues around datasets are resolved via regulation.”

Artists and Authors Sue AI Companies

While some artists are organizing at the platform level, others are taking their fight to the courtroom. In January 2023, three artists filed a class action lawsuit against Stability AI, Midjourney, DeviantArt and Runway AI, accusing them of copyright and trademark infringement. The case was allowed to proceed after a federal court rejected the companies’ attempts to get it dismissed. On the corporate side of the art world, Getty Images — which licenses images to publishers, businesses and other clients — sued Stability AI for allegedly using more than 12 million of its photos to train Stable Diffusion, its image generator. Both cases are still ongoing as of August 2025.

Authors have mounted similar lawsuits. In a class action lawsuit filed in 2024, authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson accused Anthropic of stealing their books to train its chatbot Claude. In the first major copyright infringement ruling in an AI case, a federal judge sided partly with Anthropic, ruling that its use of lawfully purchased books fell under fair use protections, as it had created an original, “transformative” work separate from the source material. The judge noted, however, that the company also trained its model on roughly 7 million pirated books, a separate matter that will go to trial in December 2025.

Beyond the arts, media organizations are also taking a stand against AI companies that scrape and summarize their content without permission or compensation. In one of the most high-profile cases to date, The New York Times sued OpenAI and Microsoft for copyright infringement. The lawsuit was allowed to proceed after a judge denied the defendants’ motion to dismiss. While that case remains ongoing as of August 2025, The Times has negotiated a separate licensing deal with Amazon worth at least $20 million per year, according to The Wall Street Journal.

These lawsuits and licensing deals come as online publishers scramble to adapt to tools like Google’s AI Overview, which automatically summarize content scraped from other websites. Publishers have seen devastating declines in organic search traffic as a result, with one study estimating the traffic to a top-ranked article could drop nearly 80 percent if it was preceded by an AI overview.

Brands Say No to AI Content

It’s not just creators and publishers, either. Some big brands are distancing themselves from AI-generated content as well. For example, Dove, a personal care brand known for featuring everyday women instead of models in its ads, pledged in 2024 to never perpetuate false beauty standards with “digital distortion” or AI-generated content. Other brands, like Lego and L’Oreal, have made similar pledges to limit — or stop outright — the use of generative AI in their ads.

Related Reading: AI-Generated Content and Copyright Law: What We Know

 

Legal and Regulatory Efforts

The U.S. federal government has taken a laissez-faire approach to AI regulation thus far. On his first day in office, President Donald Trump revoked former President Joe Biden’s executive order offering a framework for future AI regulations. Six months later, Trump unveiled America’s AI Action Plan, which pledged to rescind or revise any regulations that “unnecessarily hinder AI development or deployment.” It also said federal agencies should withhold funding from states that pass “burdensome AI regulations,” but clarified it would “not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.” Trump bolstered the plan with executive orders accelerating the permitting process for major AI infrastructure projects and barring the federal government from buying “ideologically biased” AI tools.

On the state level, hundreds of AI regulation bills have been proposed in recent years. Colorado passed a law requiring AI developers to “use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination” in systems used to make “consequential decisions,” like those governing access to education, employment and government services. California has adopted several AI regulations as well, including a law requiring AI companies to document the data used to train their AI models. In Utah, politicians passed a law requiring companies in state-regulated occupations to inform customers when they are interacting with a chatbot or other generative AI tool.

On the global stage, the European Union was the first governmental entity to adopt enforceable AI regulations in 2024, when the European Parliament passed the Artificial Intelligence Act. The legislation calls for increased transparency, documentation and oversight of technologies based on their level of risk. These regulations are applicable to any company that wants to launch a product in Europe, but they do not apply to military uses. 

These are the four tiers of risk outlined in the legislation:

  1. AI systems with “unacceptable risks” include real-time biometric scanning in public places (with limited law enforcement exceptions), technologies that could manipulate people and “social scoring” systems that classify people based on personal characteristics. These AI systems have been banned since February 2025.
  2. AI systems with “high risks” are those that could have an impact on health, safety or fundamental rights. This could include technologies used for public infrastructure, educational assessment, employee recruitment, law enforcement and border control management. These systems must establish a risk management system, conduct data governance, provide technical documentation and allow human oversight beginning in August 2026.
  3. AI systems with limited risks, like AI-generated audio, images, video and text, must be detectable as AI-generated. AI systems that generate deepfake audio, images and videos must disclose that the content was artificially generated.
  4. AI systems with minimal risks, like video games and spam filters, are unregulated.

The developers of general purpose AI models like ChatGPT must comply with the EU’s Copyright Directive, and they must provide technical documentation, instructions for use and a summary about the content used for training. If a general purpose AI model presents a “systemic risk,” the developer must also conduct adversarial testing, track and report serious incidents and ensure cybersecurity protections.

While the EU law is relatively comprehensive compared to other countries, Meindertsma, who is Dutch, said the law is primarily focused on AI usage — not AI training regulations that could prevent the creation of superintelligence.

Related Reading: What Is AI Governance?

 

Opt-Out Tools and Technical Responses to AI

Website owners who are tired of AI models scraping their sites for content and bogging down their servers have thrown a wrench into AI training with AI tarpits, poison pills and masking techniques.

Site owners should theoretically be able to prevent AI models from scraping their content through opt-out lists, or by embedding instructions in robots.txt files that tell crawlers which sections of a website they are allowed to access. Not all AI companies honor those signals, however.
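
To make the mechanism concrete, here is a minimal Python sketch of how a compliant crawler is supposed to interpret a robots.txt file. GPTBot and CCBot are real AI-crawler user agents, but the robots.txt rules and URLs below are illustrative assumptions, not any particular site’s policy:

```python
# A minimal sketch of the robots.txt opt-out mechanism, using only the
# Python standard library. The rules below block two AI training crawlers
# while allowing everything else.
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a site owner might publish to opt out of AI scraping.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler checks before fetching; a non-compliant one simply doesn't.
print(parser.can_fetch("GPTBot", "https://example.com/article"))     # False
print(parser.can_fetch("Googlebot", "https://example.com/article"))  # True
```

The catch is that robots.txt is an honor system: nothing technically stops a crawler from skipping that check and downloading the page anyway, which is exactly why the tools below exist.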

Data Poisoning

Arguing that the current power dynamic leaves artists defenseless against plagiarism, researchers at the University of Chicago developed Nightshade, a “prompt-specific poison attack” that shows AI models a different image than the one humans see. Where a human might see a cow lying in a field, the AI model will see a brown leather purse lying in grass. If the model digests enough images of cows depicted as handbags, it will become increasingly convinced that cows have handles, pockets and zippers, corrupting the model’s training data. The Nightshade team hopes the increased cost of scraping poisoned data will incentivize AI companies to respect opt-out requests and license images from creators.
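
Nightshade’s code is not reproduced here, but the core idea — optimizing a small, nearly invisible pixel perturbation so that a feature extractor “sees” a different concept — can be sketched in a few lines of PyTorch. Everything below is a simplified illustration: the tiny random encoder stands in for a real image-feature model, and none of the names or numbers come from Nightshade itself.

```python
# A simplified, hypothetical sketch of a prompt-specific poison attack:
# nudge an image's pixels so a feature extractor maps it to a different
# concept, while keeping the change small enough to be barely visible.
# This is NOT Nightshade's actual code; the random encoder is a stand-in.
import torch
import torch.nn as nn

torch.manual_seed(0)
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))  # stand-in feature model

cow_image = torch.rand(1, 3, 32, 32)    # what humans see: a "cow"
purse_image = torch.rand(1, 3, 32, 32)  # what the model should see: a "purse"
target_features = encoder(purse_image).detach()

delta = torch.zeros_like(cow_image, requires_grad=True)  # the poison perturbation
optimizer = torch.optim.Adam([delta], lr=0.01)
epsilon = 0.05  # visibility budget: keep the perturbation nearly invisible

for step in range(200):
    optimizer.zero_grad()
    poisoned = (cow_image + delta).clamp(0, 1)
    # Pull the poisoned image's features toward the "purse" concept.
    loss = nn.functional.mse_loss(encoder(poisoned), target_features)
    loss.backward()
    optimizer.step()
    delta.data.clamp_(-epsilon, epsilon)  # enforce the visibility bound

# To a person the poisoned image still looks like the original; to the
# encoder, its features now resemble the target concept. Enough such
# samples in a training set skew what a model learns about "cow."
```

Glaze, described next, applies the same principle to a different target: instead of pushing an image toward another object, it pushes the perceived artistic style away from the artist’s own.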

Style Cloaking

The University of Chicago researchers also developed Glaze, a tool designed to prevent “style mimicry,” which is when an AI model generates an image in the style of a specific artist. Style mimicry not only reduces demand for an artist’s work but also floods the web with cheap knock-offs that damage the artist’s reputation.

Similar to Nightshade, Glaze alters the image digested by AI. An artist known for realistic charcoal portraits, for example, can cloak their art so that a web crawler sees each image as an abstract painting. The AI model will no longer have a sense of that particular artist’s style, and future prompts to generate an image in the style of that artist will yield abstract paintings that bear no resemblance to the artist’s signature charcoal portraits.

AI Tarpits

Other developers have created AI tarpits, which are webpages encoded into a website to jam up web crawlers and waste their resources. Tarpits like Nepenthes and Iocaine lure web crawlers into pages populated with meaningless gibberish from an automated text generator, leading them into a series of endless links to nonsense websites. This distracts the web crawler from accessing content elsewhere on the website while also poisoning its training data.
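
The tarpit concept is simple enough to sketch with Python’s standard library. The hypothetical server below answers every request with deterministic gibberish and a handful of links to more generated pages; real tarpits like Nepenthes and Iocaine are more elaborate, but the trap works the same way:

```python
# A minimal, hypothetical AI tarpit: every URL returns a slow page of
# gibberish containing links to more generated pages, so a crawler that
# ignores robots.txt wanders an endless maze instead of real content.
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ["lorem", "zephyr", "quark", "bramble", "ostinato", "fennel", "gloaming"]

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        rng = random.Random(self.path)  # same path -> same page, so it looks static
        gibberish = " ".join(rng.choices(WORDS, k=200))
        links = " ".join(
            f'<a href="/page/{rng.randrange(10**9)}">more</a>' for _ in range(5)
        )
        time.sleep(2)  # waste the crawler's time as well as its bandwidth
        body = f"<html><body><p>{gibberish}</p>{links}</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), TarpitHandler).serve_forever()
```

The delay matters as much as the maze: every second a crawler spends in the tarpit is a second it is not scraping real content, and the gibberish it carries home degrades its training data.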

While most AI companies have developed countermeasures for data poisoning, the University of Chicago researchers behind Nightshade say their technology can evolve to keep pace with those efforts. Tarpit developers are more measured about their tools: Iocaine creator Gergely Nagy, for instance, cautions website owners that his tool will not keep the bots away. Instead, it poisons them in the hope that they eventually go away for good.

“If we all do it, they won’t have anything to crawl,” he said on his website. 

Related Reading: How to Fight AI-Generated Fake News — With AI

What does “anti-AI” mean?

Anti-AI refers to opposition to the training, deployment or use of artificial intelligence tools. People with anti-AI attitudes may be motivated by a wide variety of concerns, like the violation of copyright laws, the use of lethal autonomous weapons or the fear of a superintelligent AI exterminating the human race.

Why are people against AI?

People are against AI because it may eliminate jobs, spread disinformation and amplify societal biases. Creative professionals are opposed to their work being used to train a technology that aims to displace them, and human rights groups are worried about its potential role in mass surveillance and autonomous lethal weapons. People may have concerns about the way AI is being trained and used without opposing the technology itself.

Can AI be regulated without stopping innovation?

AI is a complex topic to regulate, but it’s possible to create guidelines that don’t halt innovation. The European Union’s AI Act may require additional transparency, documentation, governance and oversight, for example, but these regulations also provide legal certainty and could increase public trust in AI technologies. European companies may be at a relative disadvantage against American companies, though, as the U.S. has not adopted federal AI regulations. This is why AI safety experts advocate for international cooperation in AI regulation.




Franchise AiQ™ Hosts Exclusive Denver Meetup: “AI for Franchises”



Greenwood Village, Colorado, Sept. 11, 2025 (GLOBE NEWSWIRE) —

Franchise AiQ™, the AI-powered marketing and lead activation platform for franchisors and franchisees, announced today that it will host an in-person educational event, AI for Franchises: Revolutionize Your Franchise with the Power of AI, on Wednesday, September 24, 2025 from 6:00 PM to 8:00 PM MDT at Venture X Denver Tech Center.

This high-impact session is designed to help franchise owners, multi-unit operators, and industry professionals understand how Artificial Intelligence (AI) is transforming the way franchises operate, grow, and compete.

The franchise industry is at a crossroads. Customer expectations are evolving, digital-first competitors are rising, and operational costs continue to climb. Many franchise owners are asking how they can deliver consistent service, scale efficiently, and still compete in local markets. The answer increasingly involves AI.

Artificial Intelligence is no longer a “nice-to-have” tool reserved for large corporations. It is now an essential resource for everyday franchise operations. Owners who educate themselves on AI and begin leveraging it in marketing, sales, and customer service are positioning their businesses to thrive in the years ahead. Those who delay risk falling behind competitors that embrace automation, personalization, and data-driven decision-making.

This Meetup will show attendees exactly how to make AI work for their franchise model, combining education with real-world strategies.

What franchise owners will gain from attending: Participants will walk away with practical insights, including:

Proven tactics to streamline operations and reduce unnecessary costs

Ways to personalize and enhance customer interactions using AI-driven tools

Actionable strategies to apply AI for marketing, sales, and decision-making

Real examples of how forward-thinking franchises are using AI to increase profitability

Networking opportunities with other franchise professionals committed to growth

The choice of Denver as the host city underscores the region’s growing role in business innovation and technology adoption. The Venture X Denver Tech Center provides a collaborative backdrop for entrepreneurs and franchise owners to explore how AI can be a game-changer in their industry.

Expert insights from Franchise AiQ™

Lane Houk, Co-Founder of Franchise AiQ™, will lead the session. With years of experience in digital marketing and AI-powered systems for multi-location brands, Houk is passionate about equipping franchise leaders with tools to succeed.

“Franchise owners cannot afford to ignore AI. This is not about replacing people, it is about empowering teams to deliver more with less,” said Houk. “Our goal with this event is to demystify AI and provide a roadmap for owners to adopt it strategically. When franchises learn how to use AI to automate follow-up, optimize local search, and engage leads effectively, the results can be transformative.”

About Franchise AiQ™

Franchise AiQ™ empowers franchisors with an AI-driven marketing and lead activation system that ensures every franchisee maximizes conversions and local visibility without extra workload. Powered by ScaleSynth AI™, the platform automates lead follow-up, engagement, and Google Business Profile optimization across all locations. The result is consistent branding, scalable growth, and real-time performance insights, allowing franchisees to focus on running their businesses while franchisors gain a centralized engine for market dominance.

Learn more at https://franchiseaiq.com

Event Details

Title: AI for Franchises: Revolutionize Your Franchise with the Power of AI

Date/Time: Wednesday, September 24, 2025, 6:00–8:00 PM MDT

Location: Venture X Denver Tech Center, 6400 S Fiddlers Green Circle, Ste 300, Greenwood Village, CO

Registration: AI for Franchises Meetup Event: https://www.meetup.com/ai-for-franchises/events/310526027/?eventOrigin=group_upcoming_events

Media Contact

Franchise AiQ™
Attn: Media Relations
6400 S Fiddlers Green Cir, Ste 300-22
Greenwood Village, CO 80111
Phone: (833) 987-3247
Email: info@franchiseaiq.com
Website: https://franchiseaiq.com

###

For more information about Franchise AiQ, contact the company here:

Franchise AiQ
Lane Houk
(833) 987-3247
info@franchiseaiq.com
6400 S Fiddlers Green Cir
Ste 300-22
Greenwood Village, CO 80111


            




FTC Probes AI Chatbots’ Impact on Child Safety



The Federal Trade Commission (FTC) is investigating the effect of artificial intelligence (AI) chatbots on children and teens.

The commission announced Thursday (Sept. 11) that it was issuing orders to seven providers of AI chatbots in search of information on how those companies measure and monitor potentially harmful impacts of the technology on young people.

The companies in question are Google, Character.AI, Instagram, Meta, OpenAI, Snap and xAI.

“AI chatbots may use generative artificial intelligence technology to simulate human-like communication and interpersonal relationships with users,” the FTC said in a news release. “AI chatbots can effectively mimic human characteristics, emotions and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots.”

According to the release, the FTC wants to know what measures, if any, these companies have taken to determine the safety of their chatbots when serving as companions.

It is also seeking information on how the companies limit the products’ use by children and teens, mitigate potential negative effects on them, and inform users and parents of the risks associated with the products.

“The FTC is interested in particular on the impact of these chatbots on children and what actions companies are taking to mitigate potential negative impacts, limit or restrict children’s or teens’ use of these platforms, or comply with the Children’s Online Privacy Protection Act Rule,” the news release added.

As noted here last week when reports of the FTC’s efforts first emerged, some companies have already tried to address this issue.

For instance, OpenAI has said it would add teen accounts that can be monitored by parents. Character.AI has made similar changes, and Meta has added restrictions for people under 18 who use its AI products.

Those reports came the same day First Lady Melania Trump hosted a meeting of the White House Task Force on Artificial Intelligence Education. In a news release issued before the event, Trump said the rise of AI must be managed responsibly.

“During this primitive stage, it is our duty to treat AI as we would our own children—empowering, but with watchful guidance,” Trump said. “We are living in a moment of wonder, and it is our responsibility to prepare America’s children.”

Meanwhile, Character.AI CEO Karandeep Anand said last month he foresees a future where people have AI friends.

“They will not be a replacement for your real friends, but you will have AI friends, and you will be able to take learnings from those AI-friendly conversations into your real-life conversations,” Anand told the Financial Times.






General Counsel’s Job Changing as More Companies Adopt AI



The general counsel’s role is evolving to include more conversations around policy and business direction, as more companies deploy artificial intelligence, panelists at a University of California Berkeley conference said Thursday.

“We are not just lawyers anymore. We are driving a lot of the policy conversations, the business conversations, because of the geopolitical issues going on and because of the regulatory, or lack thereof, framework for products and services,” said Lauren Lennon, general counsel at Scale AI, a company that uses data to train AI systems.

Scattered regulation and fraying international alliances are also redefining the general counsel’s job, panelists …


