Anti-AI Explained: Why Resistance to AI Is Growing

As artificial intelligence continues to advance and grow more prevalent, so too does public concern around where it all might lead. The majority of U.S. adults surveyed by Pew Research Center said AI will have a negative effect on the country over the next 20 years — and that it’s more likely to harm them than benefit them.
The anti-AI movement opposes artificial intelligence due to numerous concerns, such as the spread of disinformation, elimination of jobs, violation of copyright laws and AI’s potential use in mass surveillance or autonomous weapons. While some believe AI will inevitably harm the human race, others advocate for greater transparency, regulation and ethical safeguards.
While AI has the potential to cure diseases and solve complex global problems ranging from climate change to homelessness, it also presents some serious risks. Deepfake audio, images and videos are already being used to spread disinformation, influence elections, run phishing scams and create non-consensual explicit material. In the wrong hands, AI could be used to develop biological viruses, launch complex cyberattacks or institute mass surveillance programs.
AI could also cause widespread unemployment and economic inequality, with several analysts estimating hundreds of millions of jobs will be lost due to automation. There are also environmental costs: Data centers consume large quantities of water and electricity, emit massive amounts of carbon and require the mining of rare earth minerals. Never mind the fact that some of the top researchers in the field believe AI could eventually outwit humans and kill us all.
These wide-ranging concerns form the basis of a growing anti-AI sentiment percolating in homes, offices and online communities around the world. In this article, we’ll introduce the communities of artists, activists and researchers who are taking a stand against artificial intelligence through protests, advocacy and technological advancements of their own.
What Is the Anti-AI Movement?
The anti-AI movement is a diverse coalition. Some members are motivated by the desire to protect their intellectual property and their livelihoods, while others are concerned about the potential for discrimination, human rights abuses and a superintelligent AI that tries to eliminate humans altogether.
Some of the most unified anti-AI voices are the artists, writers and musicians whose work is being scraped by generative AI tools — and then regurgitated as original creations without any credit or compensation. Hollywood screenwriters, record labels, newspapers, authors and visual artists have all sued AI companies for violating their copyright and trademark rights.
When it comes to weapons of war, the International Committee of the Red Cross, the secretary general of the United Nations and Stop Killer Robots, a coalition of more than 70 non-governmental organizations, have all voiced opposition to autonomous weapons systems that lack human oversight and accountability.
Social justice organizations have called attention to AI bias and its impact on marginalized communities. For example, the Algorithmic Justice League raises awareness about racial bias in facial recognition systems and predictive policing technologies.
Several protest groups have coalesced around shared concerns about AI. One such group, called PauseAI, has lobbied politicians for an international treaty that would halt the development of large-scale AI models until meaningful safety regulations are implemented. The idea of pausing AI development gained momentum in 2023, when thousands of prominent AI researchers, like Yoshua Bengio and Stuart Russell, and tech leaders, like Elon Musk and Steve Wozniak, called for a temporary stop until shared safety protocols could be adopted.
Joep Meindertsma, the founder of PauseAI, told Built In that AI could eventually possess superintelligence, becoming “smart enough to spread itself across the internet, steer countries towards its own goals, invent new weapons and manipulate people on a massive scale.”
“We need to make sure that our knowledge of AI safety outpaces our knowledge of AI capabilities, and that requires regulations on an international level,” Meindertsma said. “We need a global treaty that pauses this race, until we know how to retain control. And even if we know how to control them, we need to think about what kind of society we want. How will we distribute the benefits? Who gets to control this thing, and say how it operates? We need to seriously start to think about these questions, because if we won’t, some AI company CEO will end up controlling our world.”
Another anti-AI protest group called Stop AI believes a pause doesn’t go far enough. It’s calling for a permanent ban on the pursuit of artificial general intelligence (AGI) — a theoretical type of AI that will be able to perform any intellectual task a human can.
“AI is dangerous and AI safety is an illusion,” the group states on its website. “There is no way to prove experimentally that the AGI will never want something that will lead to our extinction, similar to how we have caused the extinction of many less intelligent species.”
Concerns about a superintelligent AI taking over the world have been voiced by several prominent figures in the industry — most notably Geoffrey Hinton, who quit his job at Google in 2023 to warn people about the existential risks of AI. Nicknamed the “godfather of AI” for his pioneering role in the development of neural networks, Hinton cautions that superintelligent systems could overpower (or even manipulate) humans, jeopardizing the future of the human race as a whole. AI researchers Eliezer Yudkowsky and Nate Soares think the technology will inevitably lead to our extinction, writing a book titled “If Anyone Builds It, Everyone Dies.” Yudkowsky has advocated for shutting down AI development altogether.
Anti-AI Examples
AI has evolved with unprecedented speed, outpacing efforts to regulate or control a technology that is already impacting many professions. In an attempt to regain some control, groups have pushed back with lawsuits, protests and advocacy efforts.
The Entertainment Industry Pushes Back
The entertainment industry has been one of the most vocal and organized in its opposition to AI. For example, the Writers Guild of America went on strike in 2023 in part to limit the use of artificial intelligence in the screenwriting process. After months of negotiations, the guild managed to secure a contract that prohibits studios from using generative AI to replace human writers. The contract also states writers can use AI with the studio’s consent, but they cannot be forced to use it.
Musicians are pushing back, too. Record labels have sued AI music-generation startups Udio and Suno for copyright infringement, and both companies are reportedly negotiating a licensing deal with the labels. Meanwhile, more than 200 musical artists signed an open letter demanding that tech companies stop developing AI music generators that could be used to devalue or replace human musicians. Tennessee also passed a law — titled the Ensuring Likeness, Voice and Image Security (ELVIS) Act — that makes it illegal to replicate a singer’s voice without their consent.
Visual Artists Speak Out
For visual artists, the threat of artificial intelligence has been especially visible — and many of them have begun revolting against the generated art that is inundating online communities like DeviantArt, ArtStation and Pinterest. When DeviantArt launched a text-to-image generation tool called DreamUp in 2022, users were particularly angry that their work was being used to train it. Artists could remove their work from the training pool, but the opt-out process was cumbersome. In response to the criticism, DeviantArt removed all user artwork from its training data by default.
Other platforms have faced similar backlash. When ArtStation began hosting AI-generated art, users flooded the site with an anti-AI emblem in solidarity. ArtStation removed the protest images, claiming they violated the website’s terms of service. In the wake of this backlash, a new portfolio site called Cara launched, promising to filter out AI images until the “rampant ethical and data privacy issues around datasets are resolved via regulation.”
Artists and Authors Sue AI Companies
While some artists are organizing at the platform level, others are taking their fight to the courtroom. In January 2023, three artists filed a class action lawsuit against Stability AI, Midjourney, DeviantArt and Runway AI, accusing them of copyright and trademark infringement. The case was allowed to proceed after a federal court rejected the companies’ attempts to get it dismissed. On the corporate side of the art world, Getty Images — which licenses images to publishers, businesses and other clients — sued Stability AI for allegedly using more than 12 million of its photos to train Stable Diffusion, its image generator. Both cases are still ongoing as of August 2025.
Authors have mounted similar lawsuits. In a class action lawsuit filed in 2024, authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson accused Anthropic of stealing their books to train its chatbot Claude. In the first major copyright infringement ruling in an AI case, a federal judge sided partly with Anthropic, ruling that its use of lawfully purchased books fell under fair use protections, as it had created an original, “transformative” work separate from the source material. The judge noted, however, that the company also trained its model on roughly 7 million pirated books, a separate matter that will go to trial in December 2025.
Beyond the arts, media organizations are also taking a stand against AI companies that scrape and summarize their content without permission or compensation. In one of the most high-profile cases to date, The New York Times sued OpenAI and Microsoft for copyright infringement. The lawsuit was allowed to proceed after a judge denied the defendants’ motion to dismiss. While that case remains ongoing as of August 2025, The Times has negotiated a separate licensing deal with Amazon worth at least $20 million per year, according to The Wall Street Journal.
These lawsuits and licensing deals come as online publishers scramble to adapt to tools like Google’s AI Overviews, which automatically summarize content scraped from other websites. Publishers have seen devastating declines in organic search traffic as a result, with one study estimating that traffic to a top-ranked article could drop nearly 80 percent when it is preceded by an AI overview.
Brands Say No to AI Content
It’s not just creators and publishers, either: Some big brands are distancing themselves from AI-generated content as well. For example, Dove, a personal care brand known for featuring everyday women instead of models in its ads, pledged in 2024 to never perpetuate false beauty standards with “digital distortion” or AI-generated content. Other brands like Lego and L’Oréal have made similar pledges to limit, or stop outright, the use of generative AI in their ads.
Legal and Regulatory Efforts
The U.S. federal government has taken a laissez-faire approach to AI regulation thus far. On his first day in office, President Donald Trump revoked former President Joe Biden’s executive order offering a framework for future AI regulations. Six months later, Trump unveiled America’s AI Action Plan, which pledged to rescind or revise any regulations that “unnecessarily hinder AI development or deployment.” It also said federal agencies should withhold funding from states that passed “burdensome AI regulations,” but clarified it would “not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.” Trump bolstered the plan with executive orders accelerating the permitting process for major AI infrastructure projects and barring the federal government from buying “ideologically biased” AI tools.
On the state level, hundreds of AI regulation bills have been proposed in recent years. Colorado passed a law requiring developers of AI systems used to make “consequential decisions,” like access to education, employment and government services, to “use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.” California has adopted several AI regulations as well, including a law requiring AI companies to document the data used to train their AI models. In Utah, politicians passed a law requiring companies in state-regulated occupations to inform customers when they are interacting with a chatbot or other generative AI tool.
On the global stage, the European Union was the first governmental entity to adopt enforceable AI regulations in 2024, when the European Parliament passed the Artificial Intelligence Act. The legislation calls for increased transparency, documentation and oversight of technologies based on their level of risk. These regulations are applicable to any company that wants to launch a product in Europe, but they do not apply to military uses.
These are the four tiers of risk outlined in the legislation:
- AI systems with “unacceptable risks” include real-time biometric scanning in public places (with limited law enforcement exceptions), technologies that could manipulate people and “social scoring” systems that classify people based on personal characteristics. These AI systems have been banned since February 2025.
- AI systems with “high risks” are those that could have an impact on health, safety or fundamental rights. This could include technologies used for public infrastructure, educational assessment, employee recruitment, law enforcement and border control management. These systems must establish a risk management system, conduct data governance, provide technical documentation and allow human oversight beginning in August 2026.
- AI systems with “limited risks,” like AI-generated audio, images, video and text, must be detectable as AI-generated. AI systems that generate deepfake audio, images and videos must disclose that the content was artificially generated.
- AI systems with “minimal risks,” like video games and spam filters, are unregulated.
The developers of general purpose AI models like ChatGPT must comply with the EU’s Copyright Directive, and they must provide technical documentation, instructions for use and a summary of the content used for training. If a general purpose AI model presents a “systemic risk,” the developer must also conduct adversarial testing, track and report serious incidents and ensure cybersecurity protections.
While the EU law is relatively comprehensive compared to other countries, Meindertsma, who is Dutch, said the law is primarily focused on AI usage — not AI training regulations that could prevent the creation of superintelligence.
Opt-Out Tools and Technical Responses to AI
Website owners who are tired of AI models scraping their sites for content and bogging down their servers have thrown a wrench into AI training with AI tarpits, poison pills and masking techniques.
Website owners should theoretically be able to prevent AI models from scraping their sites through opt-out lists, or by embedding instructions in robots.txt files that tell crawlers which sections of the site they are allowed to access. Not all AI companies honor those signals, however.
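For illustration, a robots.txt file aimed at turning away AI-training crawlers might look like the sketch below. GPTBot (OpenAI), Google-Extended (Google’s AI-training control) and CCBot (Common Crawl) are crawler tokens published by their operators, but honoring these directives is entirely voluntary on the crawler’s part.

```
# Ask AI-training crawlers to stay out entirely
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# All other crawlers may access the whole site
User-agent: *
Disallow:
```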
Data Poisoning
Arguing that this power dynamic has left artists defenseless against plagiarism, researchers at the University of Chicago developed Nightshade, a “prompt-specific poison attack” that shows AI models an image that is different from the one humans see. Where a human might see a cow lying in a field, the AI model will see a brown leather purse lying in grass. If the model digests enough images of cows depicted as handbags, it will become increasingly convinced that cows have handles, pockets and zippers, corrupting the AI model’s training data. The Nightshade team hopes the increased costs of scraping poisoned data will incentivize AI companies to respect opt-out requests and license images from creators.
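The underlying idea can be sketched in a few lines of Python. This is a toy illustration under loud assumptions, not Nightshade’s actual algorithm: a real attack optimizes against the targeted model’s image encoder, while a random linear stand-in “encoder” is used here. Still, it shows the basic move of nudging an image’s pixels so its machine-readable features drift toward a different concept while the visible change stays small.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 64, 64, 3
DIM = H * W * C

# Hypothetical stand-in for the targeted model's image feature extractor.
ENC = rng.normal(size=(16, DIM)) / np.sqrt(DIM)

def features(img):
    return ENC @ img.ravel()

cow = rng.uniform(size=(H, W, C))                    # placeholder "cow" photo
purse_feats = features(rng.uniform(size=(H, W, C)))  # target "purse" features

# Gradient descent on the pixels: pull the image's features toward the target
# concept while a proximity penalty keeps the visible change small.
poisoned, lr, lam = cow.copy(), 0.5, 0.05
for _ in range(200):
    diff = features(poisoned) - purse_feats
    grad = (ENC.T @ diff).reshape(cow.shape) + lam * (poisoned - cow)
    poisoned = np.clip(poisoned - lr * grad, 0.0, 1.0)  # stay a valid image

print("mean pixel change:", float(np.abs(poisoned - cow).mean()))
print("feature gap to 'purse':", float(np.linalg.norm(features(poisoned) - purse_feats)))
```

After the loop, the image still looks almost identical to the original to a human, but the toy encoder now maps it close to the “purse” features, which is the property a poisoned training example exploits.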
Style Cloaking
The University of Chicago researchers also developed Glaze, a tool designed to prevent “style mimicry,” which is when an AI model generates an image in the style of a specific artist. Style mimicry not only reduces demand for an artist’s work; it also floods the web with cheap knockoffs that damage the artist’s reputation.
Similar to Nightshade, Glaze alters the image digested by AI. An artist known for realistic charcoal portraits can cloak their art so that a web crawler sees each image as an abstract painting. The AI model will no longer have a sense of that particular artist’s style, and future prompts to generate an image in the style of that artist will yield abstract paintings that bear no resemblance to the artist’s signature charcoal portraits.
AI Tarpits
Other developers have created AI tarpits: pages embedded in a website to jam up web crawlers and waste their resources. Tarpits like Nepenthes and Iocaine lure web crawlers into pages populated with meaningless gibberish from an automated text generator, then lead them through an endless series of links to more nonsense pages. This distracts the web crawler from accessing content elsewhere on the website while also poisoning its training data.
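A tarpit can be surprisingly small. The sketch below is a hypothetical, minimal version in Python using Flask, not the actual code of Nepenthes or Iocaine: every URL under /trap/ returns deterministic gibberish plus links to ten more trap pages, so a crawler that follows links never runs out of them.

```python
import hashlib
import random

from flask import Flask  # assumes Flask is installed

app = Flask(__name__)
WORDS = ["lorem", "ipsum", "gazebo", "turnip", "vortex", "quince", "maple"]

@app.route("/trap/", defaults={"path": ""})
@app.route("/trap/<path:path>")
def trap(path):
    # Seed on the path so each URL serves stable-looking nonsense.
    seed = int.from_bytes(hashlib.sha256(path.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    paragraph = " ".join(rng.choices(WORDS, k=200))
    # Every page links to more bottomless pages, so the crawler never escapes.
    links = " ".join(
        f'<a href="/trap/{rng.randrange(10**9)}">more</a>' for _ in range(10)
    )
    return f"<html><body><p>{paragraph}</p>{links}</body></html>"

if __name__ == "__main__":
    app.run(port=8080)
```

Real tarpits add refinements this sketch omits, such as deliberately slow responses to tie up crawler connections and exclusion rules so human visitors and well-behaved bots never stumble in.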
While most AI companies have developed countermeasures for data poisoning, the University of Chicago researchers behind Nightshade say the technology can evolve to keep pace with those efforts. Tarpit developers like Iocaine creator Gergely Nagy, meanwhile, caution that their tools will not keep the bots away. Instead, Nagy’s tool poisons them in the hope that they eventually go away for good.
“If we all do it, they won’t have anything to crawl,” he said on his website.
What does “anti-AI” mean?
Anti-AI refers to opposition to the training, deployment or use of artificial intelligence tools. People with anti-AI attitudes may be motivated by a wide variety of concerns, like the violation of copyright laws, the use of lethal autonomous weapons or the fear of a superintelligent AI exterminating the human race.
Why are people against AI?
People are against AI because it may eliminate jobs, spread disinformation and amplify societal biases. Creative professionals are opposed to their work being used to train a technology that aims to displace them, and human rights groups are worried about its potential role in mass surveillance and autonomous lethal weapons. People may have concerns about the way AI is being trained and used without opposing the technology itself.
Can AI be regulated without stopping innovation?
AI is a complex topic to regulate, but it’s possible to create guidelines that don’t halt innovation. The European Union’s AI Act may require additional transparency, documentation, governance and oversight, for example, but these regulations also provide legal certainty and could increase public trust in AI technologies. European companies may be at a relative disadvantage against American companies, though, as the U.S. has not adopted federal AI regulations. This is why AI safety experts advocate for international cooperation in AI regulation.
Pittsburgh’s AI summit: five key takeaways

The push for artificial intelligence-related investments in Western Pennsylvania continued Thursday with a second conference that brought together business leaders and elected officials.
Not in attendance this time was President Donald Trump, who headlined a July 15 celebration of AI opportunity at Carnegie Mellon University.
This time, Gov. Josh Shapiro, U.S. Sen. David McCormick and others converged on Bakery Square in Larimer to emphasize emerging public-private initiatives in anticipation of growing data center development and other AI-related infrastructure, including power plants.
Here’s what speakers and attendees at the summit were saying.
AI is not a fad
As regional leaders and business investors consider their options, BNY Mellon CEO Robin Vince warned against failing to take AI seriously.
“The way to get left behind in the next 10 years is to not care about AI,” Vince said.
“AI is transforming everything,” said Selin Song during Thursday’s event. As president of Google Customer Solutions, Song said the company’s recent investment of $25 million across the Pennsylvania-New Jersey-Maryland (PJM) grid region will help give the state’s more than 1 million small businesses access to AI training.
Google isn’t the only game in town
Shapiro noted that Amazon recently announced plans to spend at least $20 billion to establish multiple high-tech cloud computing and AI innovation campuses across the state.
“This is a generational change,” Shapiro said, calling it the largest private sector investment in Pennsylvania’s history. “This is our next chapter in innovative growth. It all fits together. This new investment is beyond data center 1.0 that we saw in Virginia.”
Fracking concerns elevated
With all of the plans for new power-hungry data centers, some are concerned that the AI push will create more environmental destruction. Outside the summit, Food & Water Watch Pennsylvania cautioned that the interest in AI development is a “Trojan horse” for more natural gas fracking. Amid President Donald Trump’s attempts to dismantle wind and solar power, alternatives to natural gas appear limited.
Nuclear ready for its moment
But one possible alternative was raised at the AI conference by Westinghouse Electric Company’s interim CEO Dan Summer.
The Pittsburgh-headquartered company is leading a renewed push into nuclear energy, with plans to build a number of its AP1000 reactors to help meet growing energy demand.
Summer said that the company is partnering with Google, allowing them to leverage Google’s AI capabilities “with our nuclear operations to construct new nuclear here.”
China vs. ‘heroes’
Underlying much of the AI activity: concerns about China’s work in the field.
“With its vast resources, enormous capital, energy, workforce, the Chinese government is leveraging its resources to beat the United States in AI development,” said Nazak Nikakhtar, a national security and international trade attorney who chaired one of the panels Thursday.

Speaking to EQT CEO Toby Rice and Groq executive Ian Andrews, Nikakhtar outlined some of the challenges she saw in U.S. development of AI technology compared with China’s.
“We are attempting to leverage, now, our own resources, albeit in some respects much more limited vis-a-vis what China has, to accelerate AI leadership here in the United States and beat China,” she said. “But we’re somewhat constrained by the resources we have, by our population, by workforce, capital.”
Rice said in response that the natural resources his company is extracting will help power the country’s ability to compete with China.
Rice drew a link between the 9/11 terror attacks 24 years earlier and the “urgency” of competing with China in AI.
“People are looking to take down American economies,” Rice said. “And we have heroes. Never forget. And I do believe that us winning this race against China in AI is going to be one of the most heroic things we’re going to do.”
Eric Jankiewicz is PublicSource’s economic development reporter and can be reached at ericj@publicsource.org or on Twitter @ericjankiewicz.
Commanders vs. Packers props, SportsLine Machine Learning Model AI picks, bets: Jordan Love Over 223.5 yards

The NFL Week 2 schedule gets underway with a Thursday Night Football matchup between NFC playoff teams from a year ago. The Washington Commanders battle the Green Bay Packers beginning at 8:15 p.m. ET from Lambeau Field. Second-year quarterback Jayden Daniels led the Commanders to a 21-6 opening-day win over the New York Giants, completing 19 of 30 passes for 233 yards and one touchdown. Jordan Love, meanwhile, helped propel the Packers to a dominating 27-13 win over the Detroit Lions in Week 1. He completed 16 of 22 passes for 188 yards and two touchdowns.
NFL prop bettors will likely target the two young quarterbacks with NFL prop picks, in addition to proven playmakers like Deebo Samuel, Romeo Doubs and Zach Ertz. Green Bay’s Jayden Reed has been dealing with a foot injury, but still managed to haul in a touchdown pass in the opener, while Austin Ekeler (shoulder) does not carry an injury designation for TNF. The Packers enter as a 3-point favorite with Green Bay at -172 on the money line, while the over/under is 49 points. Before betting any Commanders vs. Packers props for Thursday Night Football, you need to see the Commanders vs. Packers prop predictions powered by SportsLine’s Machine Learning Model AI.
AI Predictions and AI Ratings, built by SportsLine’s Data Science team using cutting-edge artificial intelligence and machine learning techniques, are generated for each player prop.
For Packers vs. Commanders NFL betting on Thursday Night Football, the Machine Learning Model has evaluated the NFL player prop odds and provided Commanders vs. Packers prop picks. You can only see the Machine Learning Model player prop predictions for Washington vs. Green Bay here.
Top NFL player prop bets for Commanders vs. Packers
After analyzing the Commanders vs. Packers props and examining the dozens of NFL player prop markets, SportsLine’s Machine Learning Model says Packers quarterback Love goes Over 223.5 passing yards (-112 at FanDuel). Love passed for 224 or more yards in eight games a year ago, despite an injury-filled season. In 15 regular-season games in 2024, he completed 63.1% of his passes for 3,389 yards and 25 touchdowns with 11 interceptions. Additionally, Washington allowed an average of 240.3 passing yards per game on the road last season.
In a 30-13 win over the Seattle Seahawks on Dec. 15, he completed 20 of 27 passes for 229 yards and two touchdowns. Love completed 21 of 28 passes for 274 yards and two scores in a 30-17 victory over the Miami Dolphins on Nov. 28. The model projects Love to pass for 259.5 yards, giving this prop bet a 4.5 rating out of 5. See more NFL props here, and new users can also target the FanDuel promo code, which offers $300 in bonus bets if their first $5 bet wins.
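SportsLine does not publish how its ratings are computed, but a rough, hypothetical sketch shows why a 259.5-yard projection makes a 223.5-yard line attractive. Assuming passing yards are roughly normally distributed around the projection, with a guessed standard deviation, the implied probability of the over can be estimated in a few lines of Python:

```python
# Illustrative only: SportsLine's model and rating formula are not public.
# The normal distribution and the standard deviation below are assumptions.
from statistics import NormalDist

projection = 259.5  # model's projected passing yards (from the article)
line = 223.5        # sportsbook prop line
sigma = 60.0        # assumed game-to-game spread in passing yards (a guess)

p_over = 1 - NormalDist(mu=projection, sigma=sigma).cdf(line)
print(f"Implied P(over {line}) = {p_over:.0%}")  # about 73% under these assumptions

# At -112 odds, the breakeven probability is 112 / (112 + 100), about 53%,
# so a 73% estimate would imply value on the over -- under these assumptions.
```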
How to make NFL player prop bets for Washington vs. Green Bay
In addition, the SportsLine Machine Learning Model says another star sails past his total and has nine additional NFL props that are rated four stars or better. You need to see the Machine Learning Model analysis before making any Commanders vs. Packers prop bets for Thursday Night Football.
Which Commanders vs. Packers prop bets should you target for Thursday Night Football? Visit SportsLine now to see the top Commanders vs. Packers props, all from the SportsLine Machine Learning Model.
Adobe Says Its AI Sales Are Coming in Strong. But Will It Lift the Stock?

Adobe (ADBE) just reported record quarterly revenue driven by artificial intelligence gains. Will it revive confidence in the stock?
The creative software giant late Thursday posted adjusted earnings per share of $5.31 on revenue that jumped 11% year-over-year to a record $5.99 billion in the fiscal third quarter, above analysts’ estimates compiled by Visible Alpha, as AI revenues topped company targets.
CEO Shantanu Narayen said that with third-quarter revenue driven by AI, Adobe has already surpassed its “AI-first” revenue goals for the year, leading the company to boost its outlook. The company said it now anticipates full-year adjusted earnings of $20.80 to $20.85 per share and revenue of $23.65 billion to $23.7 billion, up from adjusted earnings of $20.50 to $20.70 on revenue of $23.50 billion to $23.6 billion previously.
Shares of Adobe were rising in late trading Thursday. But they’ve had a tough year so far, with the stock down more than 20% for 2025 through Thursday’s close amid worries about the company’s AI progress and growing competition.
Wall Street is optimistic. The shares finished Thursday a bit below $351, while the mean price target tracked by Visible Alpha, above $461, represents a premium of more than 30%. Most of the analysts tracking the stock have “buy” ratings.
But even that target represents a degree of caution in the context of recent highs. The shares were above $600 in February 2024.