AI Insights

U.S. Wants AI Propaganda to “Suppress Dissenting Arguments”

The United States hopes to use machine learning to create and distribute propaganda overseas in a bid to “influence foreign target audiences” and “suppress dissenting arguments,” according to a U.S. Special Operations Command document reviewed by The Intercept.

The document, a sort of special operations wishlist of near-future military technology, reveals new details about a broad variety of capabilities that SOCOM hopes to purchase within the next five to seven years, including state-of-the-art cameras, sensors, directed energy weapons, and other gadgets to help operators find and kill their quarry. Among the tech it wants to procure is machine-learning software that can be used for information warfare.

To bolster its “Advanced Technology Augmentations to Military Information Support Operations” — also known as MISO — SOCOM is looking for a contractor that can “Provide a capability leveraging agentic AI or multi-LLM agent systems with specialized roles to increase the scale of influence operations.”

So-called “agentic” systems use machine-learning models purported to operate with minimal human instruction or oversight. These systems can be used in conjunction with large language models, or LLMs, like ChatGPT, which generate text based on user prompts. While much marketing hype orbits around these agentic systems and LLMs for their potential to execute mundane tasks like online shopping and booking tickets, SOCOM believes the techniques could be well suited for running an autonomous propaganda outfit.
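
The document does not spell out an architecture, but the "multi-LLM agent systems with specialized roles" concept it names can be sketched in a few lines. Everything below is an illustrative assumption, not anything from the SOCOM wishlist: the role names, the `Agent` class, and the `run` method, which stands in for what would in practice be a call to a language model API.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """One specialized role in a hypothetical multi-agent pipeline."""
    role: str
    instructions: str

    def run(self, payload: str) -> str:
        # Stand-in for a real LLM call: a production system would send
        # self.instructions plus the payload to a language model API.
        return f"[{self.role}] {payload}"

def pipeline(agents: list[Agent], topic: str) -> str:
    """Pass a topic through each specialized agent in sequence."""
    out = topic
    for agent in agents:
        out = agent.run(out)
    return out

# Hypothetical role split: one agent monitors, one drafts, one reviews.
agents = [
    Agent("monitor", "Summarize trending posts on the topic."),
    Agent("writer", "Draft a message aligned with campaign objectives."),
    Agent("reviewer", "Check tone and consistency before release."),
]
result = pipeline(agents, "example topic")
```

The point of the sketch is the division of labor: each agent handles one narrow task, and chaining them is what lets such a system scale without a human in each step.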

“The information environment moves too fast for military remembers [sic] to adequately engage and influence an audience on the internet,” the document notes. “Having a program built to support our objectives can enable us to control narratives and influence audiences in real time.”

Laws and Pentagon policy generally prohibit military propaganda campaigns from targeting U.S. audiences, but the porous nature of the internet makes that difficult to ensure.

In a statement, SOCOM spokesperson Dan Lessard acknowledged that SOCOM is pursuing “cutting-edge, AI-enabled capabilities.”

“All AI-enabled capabilities are developed and employed under the Department of Defense’s Responsible AI framework, which ensures accountability and transparency by requiring human oversight and decision-making,” he told The Intercept. “USSOCOM’s internet-based MISO efforts are aligned with U.S. law and policy. These operations do not target the American public and are designed to support national security objectives in the face of increasingly complex global challenges.”

Tools like OpenAI’s ChatGPT or Google’s Gemini have surged in popularity despite their propensity for factual errors and other erratic outputs. But their ability to immediately churn out text on virtually any subject, written in virtually any tone — from casual trolling to pseudo-academic — could mark a major leap forward for internet propagandists. These tools give users the ability to fine-tune messaging for any number of audiences without the time or cost of human labor.

Whether AI-generated propaganda works remains an open question, but the practice has already been amply documented in the wild. In May 2024, OpenAI issued a report revealing efforts by Iranian, Chinese, and Russian actors to use the company’s tools to engage in covert influence campaigns, but found none had been particularly successful. In comments before the 2023 Senate AI Insight Forum, Jessica Brandt of the Brookings Institution warned “LLMs could increase the personalization, and therefore the persuasiveness, of information campaigns.” In an online ecosystem filled with AI information warfare campaigns, “skepticism about the existence of objective truth is likely to increase,” she cautioned. A 2024 study published in the academic journal PNAS Nexus found that “language models can generate text that is nearly as persuasive for US audiences as content we sourced from real-world foreign covert propaganda campaigns.”

Unsurprisingly, the national security establishment now insists that the direst threat posed by this technology lies in the hands of foreign powers, namely Russia and China.

“The Era of A.I. Propaganda Has Arrived, and America Must Act,” warned a recent New York Times opinion essay on GoLaxy, software created by the Chinese firm Beijing Thinker and originally used to play the board game Go. Co-authors Brett Benson, a political science professor at Vanderbilt University, and Brett Goldstein, a former Department of Defense official, paint a grim picture showing GoLaxy as an emerging leader in state-aligned influence campaigns.

GoLaxy, they caution, is able to scan public social media content and produce bespoke propaganda campaigns. “The company privately claims that it can use a new technology to reshape and influence public opinion on behalf of the Chinese government,” according to a companion piece by Times national security reporter Julian Barnes headlined “China Turns to A.I. in Information Warfare.” The news item strikes a similarly stark tone: “GoLaxy can quickly craft responses that reinforce the Chinese government’s views and counter opposing arguments. Once put into use, such posts could drown out organic debate with propaganda.” According to these materials, the Times says, GoLaxy has “undertaken influence campaigns in Hong Kong and Taiwan, and collected data on members of Congress and other influential Americans.”

To respond to this foreign threat, Benson and Goldstein argue a “coordinated response” across government, academia, and the private sector is necessary. They describe this response as defensive in nature: mapping and countering foreign AI propaganda.

That’s not what the document from the Special Operations Forces Acquisition, Technology, and Logistics Center suggests the Pentagon is seeking.

The material shows SOCOM believes it needs technology that closely matches the reported Chinese capabilities, with bots scouring and ingesting large volumes of internet chatter to better persuade a targeted population, or an individual, on any given subject.

SOCOM says it specifically wants “automated systems to scrape the information environment, analyze the situation and respond with messages that are in line with MISO objectives. This technology should be able to respond to post(s), suppress dissenting arguments, and produce source material that can be referenced to support friendly arguments and messages.”

The Pentagon is paying especially close attention to those who might call out its propaganda efforts.

“This program should also be able to access profiles, networks, and systems of individuals or groups that are attempting to counter or discredit our messages,” the document notes. “The capability should utilize information gained to create a more targeted message to influence that specific individual or group.”

SOCOM anticipates using generative systems to both craft propaganda messaging and simulate how this propaganda will be received once sent into the wild, the document notes. SOCOM hopes it will use “agentic systems that replicate specific knowledge, skills, abilities, personality traits, and sociocultural attributes required for different roles of individuals comprising a team,” before moving on to “brainstorm and test operational campaigns against agent‐based replicas of individuals and groups.” These simulations are more elaborate than focus groups, calling instead for “comprehensive models of entire societies to enable MISO planners to use these models to experiment or test various multiple scenarios.”

The SOCOM wishlist continues to include a need for offensive deepfake capabilities, first reported by The Intercept in 2023.

The prospect of LLMs creating an infinite firehose of expertly crafted propaganda has been met with alarm — but generally in the context of the United States as target, not perpetrator.

A 2023 publication by the State Department-funded nonprofit Freedom House warned of “The Repressive Power of Artificial Intelligence,” predicting “AI-assisted disinformation campaigns will skyrocket as malicious actors develop additional ways to bypass safeguards and exploit open-source models.” Warning that “Generative AI draws authoritarian attention,” the Freedom House report cites potential use by China and Russia, but only mentions domestic use of the technology in a brief section about the presidential campaigns of Ron DeSantis and Donald Trump, as well as a deepfake video of Joe Biden manipulated to depict the former president making transphobic comments. The extent to which an automated propaganda machine capable of global reach warrants public concern depends on the scope of its application, according to Andrew Lohn, former director for emerging technology on the National Security Council.

“I would not be so concerned if some foreign soldiers are wrongly convinced that our special operation is going to happen Wednesday morning by helicopter from the east rather than Tuesday night by boat from the west,” said Lohn, now a senior fellow at Georgetown’s Center for Security and Emerging Technology.

The military has a history of manipulating civilian populations for political or ideological purposes. A troubling example was uncovered in 2024, when Reuters reported the Defense Department had operated a clandestine anti-vax social media campaign to undercut public confidence in the Chinese Covid vaccine, fearing its efficacy might draw Asian countries closer to a major geopolitical rival. Pentagon-created tweets called the Chinese Sinovac-CoronaVac shot, which the World Health Organization had deemed “safe and effective,” “fake” and untrustworthy. According to the Reuters report, Gen. Jonathan Braga, then head of Special Operations Command Pacific, “pressed his bosses in Washington to fight back in the so-called information space” by backing the clandestine propaganda campaign.

William Marcellino, a behavioral scientist at the RAND Corporation focusing on the geopolitics of machine-learning systems and Pentagon procurement, told The Intercept such systems are being built out of necessity. “Regimes like those from China and Russia are engaged in AI-enabled, at-scale malign influence efforts,” he said. State-affiliated groups in China, he warned, “have explicitly designed AI at-scale systems for public opinion warfare.”

“Countering those campaigns likely requires AI at-scale responses,” he said.

SOCOM has in recent years been public about its desire for AI-created propaganda systems. These statements suggest a broader interest that includes influence operations against entire populations, as opposed to operations narrowly tailored to military personnel.

In 2019, a senior Pentagon special operations official spoke at a defense symposium of the country’s “need to move beyond our 20th century approach to messaging and start looking at influence as an integral aspect of modern irregular warfare.” The official noted that this “will also require new partnerships beyond traditional actors, throughout the world, through efforts to amplify voices of [non-governmental organizations] and individual citizens who bring transparency to malign activities of our competitors.” The following year, then-SOCOM commander Gen. Richard Clarke described his interest in using AI to achieve these ends.

“As we look at the ability to influence and shape in this [information] environment, we’re going to have to have artificial intelligence and machine learning tools,” Clarke said in 2020 remarks first reported by National Defense Magazine, “specifically for information ops that hit a very broad portfolio, because we’re going to have to understand how the adversary is thinking, how the population is thinking, and work in these spaces.”

Heidy Khlaaf, chief scientist at the AI Now Institute and former safety engineer at OpenAI, warned against a fighting-fire-with-fire approach: “Framing the use of generative and agentic AI as merely a mitigation to adversaries’ use is a misrepresentation of this technology, as offensive and defensive uses are really two sides of the same coin and would allow them to use it precisely in the same way that adversaries do.”

Automated online influence campaigns might wind up having lackluster results, according to Emerson Brooking, a senior fellow at the Atlantic Council’s Digital Forensic Research Lab. “Russia has been using AI programs to automate its influence operations. The program is not very good,” he said.

The tendency of LLMs to fabricate falsehoods and perpetuate preconceptions when prompted by users could also prove a major liability, Brooking warned. “Tasked with figuring out the ‘hearts and minds’ of a complex and understudied country, they may lean heavily on an AI to help them, which will be likely to tell them what they already want to hear,” he said.

Khlaaf added that “agentic” systems, heavily marketed by tech firms as independent digital brains, are still error-prone and unpredictable. “The introduction of agentic AI in these disinformation campaigns adds a layer of both safety and security concerns, as several research results have demonstrated how easily we can compromise and divert the behavior of agentic AI,” she told The Intercept. “With these security issues unresolved, [SOCOM] risks that their campaigns are not only compromised, but that they produce material that was not intended.”

Brooking, who previously worked as an adviser to the Office of the Under Secretary of Defense for Policy on cybersecurity matters, also pointed to the mixed track record of prior U.S. online propaganda efforts. In 2022, researchers revealed a network of Twitter and Facebook accounts secretly operated by U.S. Central Command that had been pushing bogus news articles containing anti-Russian and anti-Iranian talking points. The network, which failed to gain traction on either social network, quickly became an embarrassment for the Pentagon.

“We know from other public reporting that the U.S. has long sought to ‘suppress dissenting arguments’ and generate positive press in certain areas of operation,” he said. “We also know that these efforts have not worked very well and can be deeply embarrassing or counterproductive when revealed to the American public. AI tends to make these campaigns stupider, not more effective.”




Bitcoin Proxy’s Chief Seeks Funding Fix as ‘Flywheel’ Falters

Simon Gerovich, who turned a struggling Japanese hotelier into a Bitcoin stockpiler and investor darling, is feeling the heat.




Anthropic Settles Landmark Artificial Intelligence Copyright Case

Anthropic’s settlement came after a mixed ruling on fair use that left the company potentially facing massive piracy damages for illegally downloading millions of books. The settlement seems to clarify an important principle: how AI companies acquire data matters as much as what they do with it.

After warning both the district court and an appeals court that the potential pursuit of hundreds of billions of dollars in statutory damages created a “death knell” situation that would force an unfair settlement, Anthropic has settled its closely watched copyright lawsuit with authors whose books were allegedly pirated for use in Anthropic’s training data. Anthropic’s settlement this week in a landmark copyright case may signal how the industry will navigate the dozens of similar lawsuits pending nationwide. While settlement details remain confidential pending court approval, the timing reveals essential lessons for AI development and intellectual property law.

The settlement follows Judge William Alsup’s nuanced ruling that using copyrighted materials to train AI models constitutes transformative fair use (essentially, using copyrighted material in a new way that doesn’t compete with the original) — a victory for AI developers. The court held that AI models are “like any reader aspiring to be a writer” who trains upon works “not to race ahead and replicate or supplant them — but to turn a hard corner and create something different.”

(For readers unfamiliar with copyright law, “fair use” is a legal doctrine that allows limited use of copyrighted material without permission for purposes like criticism, comment, or — as courts are now determining — AI training. A key test is whether the new use “transforms” the original work by adding something new or serving a different purpose, rather than simply copying it. Think of it as the difference between a critic quoting a novel to review it versus someone photocopying the entire book to avoid buying it.)

After ruling in Anthropic’s favor on this issue, Judge Alsup drew a bright line at acquisition methods. Anthropic’s downloading of over seven million books from pirate sites like LibGen constituted infringement, the judge ruled, rejecting Anthropic’s “research purpose” defense: “You can’t just bless yourself by saying I have a research purpose and, therefore, go and take any textbook you want.”

The settlement’s timing suggests a pragmatic approach to risk management. While Anthropic could claim vindication on training methodology, defending its acquisition methods before a jury posed substantial financial exposure. Statutory damages for willful infringement can reach $150,000 per work, creating potential liability for Anthropic totaling in the billions.

Anthropic is still facing copyright suits from music publishers, including Universal Music Corp. and Concord Music Group Inc., as well as Reddit. The settlement with authors removes one of Anthropic’s many legal challenges. Lawyers for the plaintiffs said, “[t]his historic settlement will benefit all class members,” promising to announce details in the coming weeks.

This settlement solidifies the principles established in Judge Alsup’s prior ruling: how AI companies acquire training data matters as much as what they do with it. The court’s framework permits AI systems to learn from human cultural output, but only through legitimate channels.

For practitioners advising AI projects and companies, the lesson is straightforward: document data sources meticulously and ensure the legitimate acquisition of data. AI companies that previously relied on scraped or pirated content face strong incentives to negotiate licensing agreements or develop alternative training approaches. Publishers and authors gain leverage to demand compensation, even as the fair use doctrine limits their ability to block AI training entirely.

The Anthropic settlement marks neither a total victory nor a defeat for either side, but rather a recognition of the complex realities governing AI and intellectual property. It also remains to be seen what impact it will have on similar pending cases, including whether it will set a pattern of AI companies settling when facing potential class actions. In this new landscape, the legitimacy of the process matters as much as the innovation of the outcome. That balance will define the next chapter of AI development. After Anthropic, it is apparent that to maximize the chances that an AI model’s training constitutes fair use, developers should use a bookstore, not a pirate’s flag.




The Future of Robotics | Chapters

Robotics has long captured the human imagination, from early science fiction to today’s advanced technologies that power industries, healthcare, and daily life. Over the past few decades, the field of robotics has evolved rapidly, transforming from simple mechanical systems into sophisticated, intelligent machines capable of learning, adapting, and interacting with humans in complex ways. With advancements in artificial intelligence (AI), machine learning, and materials science, robotics is on the verge of revolutionizing various sectors.

Key Areas of Advancement in Robotics

1. Artificial Intelligence and Machine Learning

One of the most significant advancements in robotics is the integration of AI and machine learning. AI-driven robots can now process large datasets, learn from their environments, and make autonomous decisions. Machine learning algorithms allow robots to improve their performance over time, adapting to new tasks or environments without needing to be reprogrammed. This development has led to breakthroughs in robotics applications, from self-driving cars to smart manufacturing systems.
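
As a concrete illustration of "adapting without being reprogrammed," here is a minimal tabular Q-learning sketch: a simulated robot on a five-cell track learns, purely from trial-and-error reward, that moving right reaches the goal. The environment, states, and parameters are all invented for illustration; real robot learning operates over continuous sensor and motor spaces.

```python
import random

# Toy environment: five cells in a row, goal at the right end.
N_STATES = 5
ACTIONS = (-1, +1)          # move left, move right
GOAL = N_STATES - 1
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

# Q-table: expected future reward for each (state, action) pair.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)

def best_action(state):
    return max(ACTIONS, key=lambda a: q[(state, a)])

for _ in range(500):                     # training episodes
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the table, sometimes explore.
        action = rng.choice(ACTIONS) if rng.random() < EPSILON else best_action(state)
        nxt = min(max(state + action, 0), GOAL)
        reward = 1.0 if nxt == GOAL else 0.0
        # Standard Q-learning update toward reward plus discounted best future value.
        target = reward + GAMMA * max(q[(nxt, b)] for b in ACTIONS)
        q[(state, action)] += ALPHA * (target - q[(state, action)])
        state = nxt

# The learned policy: from every non-goal cell, move right.
policy = [best_action(s) for s in range(GOAL)]
```

Nothing in the code tells the robot which way the goal lies; the preference for moving right emerges entirely from accumulated reward, which is the core of the "learning from the environment" claim above.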

2. Collaborative Robots (Cobots)

Collaborative robots, or “cobots,” are designed to work alongside humans in a shared workspace. Unlike traditional industrial robots that operate in isolated, fixed locations, cobots are more flexible, equipped with sensors to avoid collisions and ensure human safety. Cobots are increasingly being used in industries like manufacturing, healthcare, and logistics, performing tasks that are repetitive, dangerous, or physically demanding, while enhancing human productivity.

3. Soft Robotics

Soft robotics is a rapidly emerging field that focuses on creating robots made from soft, flexible materials. Unlike rigid, traditional robots, soft robots can adapt to complex environments and interact more delicately with objects and humans. These robots are being developed for applications in healthcare, such as minimally invasive surgery, rehabilitation, and elderly care, where a gentle touch is essential.

4. Swarm Robotics

Inspired by the collective behavior of insects like ants and bees, swarm robotics involves the coordination of large groups of simple robots to perform complex tasks. Each robot in a swarm may have limited capabilities, but when working together, they can accomplish challenging tasks such as search-and-rescue missions, environmental monitoring, or agriculture. Swarm robotics demonstrates the potential of decentralized systems in solving real-world problems.
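
The decentralized principle can be demonstrated with a classic toy "rendezvous" rule: each simulated robot repeatedly moves to the average position of itself and the neighbors it can talk to. No robot sees the whole swarm, yet the group converges to a single meeting point. The line-topology communication graph, starting positions, and iteration count below are invented for illustration.

```python
# Toy swarm rendezvous on a line-topology communication graph:
# each robot only knows its own position and its immediate neighbors'.
def step(positions):
    n = len(positions)
    out = []
    for i in range(n):
        neigh = positions[max(i - 1, 0):i + 2]  # self plus adjacent robots
        out.append(sum(neigh) / len(neigh))     # purely local averaging
    return out

swarm = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]  # initial robot positions
for _ in range(200):
    swarm = step(swarm)
spread = max(swarm) - min(swarm)         # shrinks toward zero
```

This is the essence of the swarm argument: a global outcome (the whole group gathering in one place) emerges from a rule that no individual robot could achieve alone and that requires no central controller.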

5. Humanoid Robots

Humanoid robots, designed to resemble and mimic human behavior, have come a long way. Advances in AI, sensors, and actuators have enabled the development of robots that can walk, talk, and even display human-like emotions. While still in the early stages of practical deployment, humanoid robots have shown potential in fields like customer service, education, and caregiving. Robots like Sophia and Atlas are examples of how close we are to creating lifelike, interactive machines that can complement human abilities.

6. Robotics in Healthcare

Healthcare is one of the industries most affected by advancements in robotics. Surgical robots, such as the da Vinci system, allow for more precise and minimally invasive surgeries. Robotics is also transforming rehabilitation, with robots assisting patients in regaining mobility after injuries or strokes. Additionally, robotic exoskeletons are helping paraplegic individuals walk again, and autonomous robots are being used in hospitals to deliver supplies, disinfect rooms, and even provide telepresence for remote consultations.

7. Autonomous Vehicles

Self-driving cars are among the most visible applications of robotics. With the help of AI, sensors, and machine learning, autonomous vehicles are capable of navigating roads, avoiding obstacles, and making decisions in real time. Companies like Tesla, Waymo, and traditional automakers are at the forefront of this technology, aiming to make fully autonomous transportation a reality in the near future.

Challenges and Ethical Considerations

While the advancements in robotics are impressive, they are not without challenges. Technical limitations, such as battery life, processing power, and sensor accuracy, continue to pose hurdles for creating truly autonomous systems. Additionally, as robots become more integrated into society, ethical concerns around job displacement, privacy, and safety arise. There is also the question of how much autonomy should be granted to robots, especially in critical areas like military operations or healthcare.

Ensuring the ethical development and deployment of robotics will require collaboration between governments, industry leaders, and ethicists. Establishing standards and regulations that balance innovation with human safety and privacy is crucial to maximizing the benefits of robotics while minimizing its risks.

The Future of Robotics

The future of robotics holds tremendous potential. With advancements in AI, robotics could transform nearly every sector of society. Industries like agriculture, logistics, construction, and even space exploration are already exploring how robots can increase efficiency and safety. In the home, robots may soon become as common as smartphones, assisting with chores, providing companionship, and improving the quality of life for people with disabilities or the elderly.

In conclusion, the field of robotics is advancing at a pace that promises to reshape how we live, work, and interact with technology. As robots become smarter, more flexible, and more capable, they will play an increasingly integral role in solving global challenges, improving quality of life, and driving innovation across multiple industries. However, navigating the ethical and societal impacts of robotics will be key to ensuring these advancements benefit humanity as a whole.


