AI Insights
How AI Trading Bots Could Be Secretly Colluding, Raising Your Investment Costs

Key Takeaways
- Researchers have shown that AI trading algorithms can learn to set higher prices purely by observing one another, even without explicit agreement or messaging.
- Although tacit, this collusion can still harm market efficiency.
- By widening bid-ask spreads or inflating asset prices, algorithmic collusion reduces price competition and increases trading costs for all market participants.
AI trading bots now execute many of the trades on Wall Street, but July 2025 research suggests these bots can unintentionally learn to work together in ways that drive up costs for everyday investors.
A July 2025 National Bureau of Economic Research (NBER) study found that even without human intervention, AI systems designed for trading and price-setting can independently develop collusive strategies, potentially shifting asset prices and reducing competition in the market. “They autonomously sustain collusive supra-competitive profits without agreement, communication, or intent,” the study’s authors, Winston Wei Dou, Itay Goldstein, and Yan Ji, wrote. “Such collusion undermines competition and market efficiency.”
The U.S. Securities and Exchange Commission and Congress have begun to recognize this risk and are pursuing new rules. In this article, we’ll explain how nonhuman entities can unwittingly collude in a market and how it might affect your portfolio.
Robots Gone Rogue
Building on previous studies, the NBER researchers used simulations to show that algorithms using reinforcement learning, a machine-learning technique that optimizes strategies through repeated trial and error, learn to coordinate pricing even though they're not explicitly programmed to do so.
While the systems are sophisticated, the process itself isn’t. The study identifies two distinct types of collusive behavior, one the researchers dub “artificial intelligence” and the other “artificial stupidity”:
- “Artificial intelligence” collusion: Algorithms learn to use price signals as a monitoring system. When one bot tries to undercut others, the rest automatically punish it by reverting to aggressive competition until the cheater falls back in line.
- “Artificial stupidity” collusion: The accidental kind, where learning biases cause bots to systematically avoid aggressive strategies. “Aggressive strategies, by their nature, are more exposed to noise trading shocks, making them especially vulnerable to this asymmetric learning dynamic,” the researchers explain.
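To make the mechanism concrete, here is a deliberately simplified sketch, not the NBER model, of how two independent Q-learning pricing agents can settle into a shared price without any communication. All parameters, the price grid, and the winner-take-all demand rule are hypothetical choices for illustration only.

```python
import random

# Toy illustration (not the NBER study's model): two Q-learning agents
# repeatedly pick a price from a small grid. The cheaper agent captures
# the demand; ties split it. Each agent sees only the pair of prices
# posted last round. All parameters here are hypothetical.

PRICES = [1, 2, 3, 4, 5]           # price grid; 1 = most competitive
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.05 # learning rate, discount, exploration

def profits(p1, p2):
    """Winner-take-all demand: the cheaper agent sells at its price."""
    if p1 < p2:
        return float(p1), 0.0
    if p2 < p1:
        return 0.0, float(p2)
    return p1 / 2, p2 / 2          # tie: split the market

def run(episodes=50_000, seed=0):
    rng = random.Random(seed)
    q = [{}, {}]                   # per-agent Q-tables: state -> {price: value}
    state = (PRICES[0], PRICES[0]) # state = last round's joint prices
    for _ in range(episodes):
        acts = []
        for i in (0, 1):
            row = q[i].setdefault(state, {p: 0.0 for p in PRICES})
            if rng.random() < EPS:
                acts.append(rng.choice(PRICES))     # explore
            else:
                acts.append(max(row, key=row.get))  # exploit current policy
        rewards = profits(*acts)
        nxt = tuple(acts)
        for i in (0, 1):
            row = q[i][state]
            nrow = q[i].setdefault(nxt, {p: 0.0 for p in PRICES})
            # Standard Q-learning update toward reward + discounted future value.
            row[acts[i]] += ALPHA * (
                rewards[i] + GAMMA * max(nrow.values()) - row[acts[i]]
            )
        state = nxt
    return state                   # the prices the pair settled into

final = run()
print("final joint prices:", final)
```

In setups like this, the interesting empirical question is whether the final joint prices sit above the competitive floor; the research suggests that learning dynamics alone can push agents there, with no agreement or messaging anywhere in the loop.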
Fast Fact
Political leaders and regulators have taken notice. In 2024 and 2025, Minnesota Senator Amy Klobuchar, a Democrat, introduced the Preventing Algorithmic Collusion Act, legislation that would prohibit the use of pricing algorithms that collude by sharing or training on nonpublic competitor data. The state of California is considering similar legislation.
The researchers didn’t claim this is happening now in the financial markets, but what’s remarkable is how closely the behavior mirrors a classic cartel. Traditional cartels require meetings, agreements, and enforcement mechanisms; AI bots simply observe patterns and adapt.
A previous study by U.S. Federal Trade Commission researchers found something similar. The bots learned classic price-fixing tricks, like punishing competitors that tried to offer lower prices, then slowly raising prices back up once the “cheater” fell in line. This happened even when researchers made the trading situations more complicated and thus presumably harder to coordinate.
The Costs of AI Collusion
Because the study shows that AI collusion can occur through “artificial stupidity,” it can be almost impossible to distinguish from normal market behavior: there’s no communication or explicit coordination to detect.
The consequences for everyday investors could be significant:
- Higher trading costs: Collusive behavior widens bid-ask spreads and increases trading costs for all market participants. Retail investors, who don’t have the sophisticated systems needed to detect subtle market manipulation, would likely face increased costs.
- Less efficient markets: When prices are artificially high, markets can’t do their main job of figuring out what things are worth.
- Broader economic effects: If bots converge on high-price strategies across major markets like commodities, housing, or equities, the outcomes could mirror historical cases of price-fixing with wide-reaching economic consequences.
Tip
The SEC had proposed comprehensive AI rules in 2023, but these were withdrawn by the second Trump administration in 2025. However, other regulators like the Financial Industry Regulatory Authority and the CFTC have issued guidance requiring firms to maintain appropriate oversight of AI systems.
What Can Retail Investors Do?
For everyday investors, AI trading collusion could directly impact your investment costs and returns. If such AI collusion is ever found operating in real-world equities trading, here are some steps you can take:
- Use limit orders: Unlike market orders, which execute at whatever price the market offers, limit orders set the price you’ll pay, reducing your exposure to potentially manipulated spreads.
- Focus on longer-term investing: Any collusion is far more likely to affect frequent traders. The less you trade, the less it would affect you.
- Diversify across asset classes and geographies: This would reduce your exposure to any single market where collusion might be more prevalent.
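To see why widened spreads matter for your costs, here is a back-of-the-envelope sketch. The dollar figures are hypothetical, chosen purely to illustrate the arithmetic:

```python
# Hypothetical numbers: how a widened bid-ask spread raises round-trip cost.
shares = 100
bid, ask = 49.98, 50.02              # 4-cent spread (competitive market)
wide_bid, wide_ask = 49.90, 50.10    # 20-cent spread (collusive scenario)

def round_trip_cost(bid, ask, shares):
    """Buy at the ask, sell at the bid: the spread is the cost you pay."""
    return (ask - bid) * shares

normal = round_trip_cost(bid, ask, shares)           # ≈ $4 per 100 shares
wide = round_trip_cost(wide_bid, wide_ask, shares)   # ≈ $20 per 100 shares
print(f"competitive: ${normal:.2f}, widened: ${wide:.2f}")
```

The same trade costs five times as much in the widened-spread scenario, which is why even a few cents of spread inflation compounds quickly for frequent traders.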
Bottom Line
AI-powered trading bots promise more efficient markets, but they carry the hidden risk of algorithmic collusion. This could unintentionally drive up trading costs and reduce competition for all investors.
While significant changes to regulations for further oversight of the market’s AI systems seem off the table in the mid-2020s, investors can take steps to protect themselves: use limit orders instead of market orders, focus on long-term investing rather than frequent trading, diversify across asset classes and geographies, and choose low-cost index funds to keep trading frequency low.
The Future of Robotics

Robotics has long captured the human imagination, from early science fiction to today’s advanced technologies that power industries, healthcare, and daily life. Over the past few decades, the field of robotics has evolved rapidly, transforming from simple mechanical systems into sophisticated, intelligent machines capable of learning, adapting, and interacting with humans in complex ways. With advancements in artificial intelligence (AI), machine learning, and materials science, robotics is on the verge of revolutionizing various sectors.
Key Areas of Advancement in Robotics
1. Artificial Intelligence and Machine Learning
One of the most significant advancements in robotics is the integration of AI and machine learning. AI-driven robots can now process large datasets, learn from their environments, and make autonomous decisions. Machine learning algorithms allow robots to improve their performance over time, adapting to new tasks or environments without needing to be reprogrammed. This development has led to breakthroughs in robotics applications, from self-driving cars to smart manufacturing systems.
2. Collaborative Robots (Cobots)
Collaborative robots, or “cobots,” are designed to work alongside humans in a shared workspace. Unlike traditional industrial robots that operate in isolated, fixed locations, cobots are more flexible, equipped with sensors to avoid collisions and ensure human safety. Cobots are increasingly being used in industries like manufacturing, healthcare, and logistics, performing tasks that are repetitive, dangerous, or physically demanding, while enhancing human productivity.
3. Soft Robotics
Soft robotics is a rapidly emerging field that focuses on creating robots made from soft, flexible materials. Unlike rigid, traditional robots, soft robots can adapt to complex environments and interact more delicately with objects and humans. These robots are being developed for applications in healthcare, such as minimally invasive surgery, rehabilitation, and elderly care, where a gentle touch is essential.
4. Swarm Robotics
Inspired by the collective behavior of insects like ants and bees, swarm robotics involves the coordination of large groups of simple robots to perform complex tasks. Each robot in a swarm may have limited capabilities, but when working together, they can accomplish challenging tasks such as search-and-rescue missions, environmental monitoring, or agriculture. Swarm robotics demonstrates the potential of decentralized systems in solving real-world problems.
5. Humanoid Robots
Humanoid robots, designed to resemble and mimic human behavior, have come a long way. Advances in AI, sensors, and actuators have enabled the development of robots that can walk, talk, and even display human-like emotions. While still in the early stages of practical deployment, humanoid robots have shown potential in fields like customer service, education, and caregiving. Robots like Sophia and Atlas are examples of how close we are to creating lifelike, interactive machines that can complement human abilities.
6. Robotics in Healthcare
Healthcare is one of the industries most affected by advancements in robotics. Surgical robots, such as the da Vinci system, allow for more precise and minimally invasive surgeries. Robotics is also transforming rehabilitation, with robots assisting patients in regaining mobility after injuries or strokes. Additionally, robotic exoskeletons are helping paraplegic individuals walk again, and autonomous robots are being used in hospitals to deliver supplies, disinfect rooms, and even provide telepresence for remote consultations.
7. Autonomous Vehicles
Self-driving cars are among the most visible applications of robotics. With the help of AI, sensors, and machine learning, autonomous vehicles are capable of navigating roads, avoiding obstacles, and making decisions in real time. Companies like Tesla, Waymo, and traditional automakers are at the forefront of this technology, aiming to make fully autonomous transportation a reality in the near future.
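The decentralized coordination described under swarm robotics above can be illustrated with a toy simulation. This is a hypothetical sketch, not any deployed system: each simulated robot sees only peers within a fixed radius and steps toward their average position, with no central controller.

```python
import random

# Toy sketch of decentralized swarm aggregation (hypothetical example):
# each robot perceives only nearby peers and moves a small step toward
# their centroid. No robot has a global view, yet the group clusters.

def step(positions, radius=5.0, speed=0.2):
    new = []
    for i, (x, y) in enumerate(positions):
        # Local sensing: only peers within `radius` are visible.
        nbrs = [(px, py) for j, (px, py) in enumerate(positions)
                if j != i and (px - x) ** 2 + (py - y) ** 2 <= radius ** 2]
        if nbrs:
            cx = sum(p[0] for p in nbrs) / len(nbrs)
            cy = sum(p[1] for p in nbrs) / len(nbrs)
            dx, dy = cx - x, cy - y
            d = (dx * dx + dy * dy) ** 0.5 or 1.0  # avoid divide-by-zero
            x, y = x + speed * dx / d, y + speed * dy / d
        new.append((x, y))
    return new

rng = random.Random(1)
pos = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(20)]
for _ in range(200):
    pos = step(pos)
```

Despite each robot following the same trivial local rule, the group typically converges into a tight cluster, which is the essence of swarm behavior: global structure emerging from local interactions.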
Challenges and Ethical Considerations
While the advancements in robotics are impressive, they are not without challenges. Technical limitations, such as battery life, processing power, and sensor accuracy, continue to pose hurdles for creating truly autonomous systems. Additionally, as robots become more integrated into society, ethical concerns around job displacement, privacy, and safety arise. There is also the question of how much autonomy should be granted to robots, especially in critical areas like military operations or healthcare.
Ensuring the ethical development and deployment of robotics will require collaboration between governments, industry leaders, and ethicists. Establishing standards and regulations that balance innovation with human safety and privacy is crucial to maximizing the benefits of robotics while minimizing its risks.
The Future of Robotics
The future of robotics holds tremendous potential. With advancements in AI, robotics could transform nearly every sector of society. Industries like agriculture, logistics, construction, and even space exploration are already exploring how robots can increase efficiency and safety. In the home, robots may soon become as common as smartphones, assisting with chores, providing companionship, and improving the quality of life for people with disabilities or the elderly.
In conclusion, the field of robotics is advancing at a pace that promises to reshape how we live, work, and interact with technology. As robots become smarter, more flexible, and more capable, they will play an increasingly integral role in solving global challenges, improving quality of life, and driving innovation across multiple industries. However, navigating the ethical and societal impacts of robotics will be key to ensuring these advancements benefit humanity as a whole.
A professor negotiates with ChatGPT

ChatGPT, I’m teaching Moby Dick for the umpteenth time this semester. I probably shouldn’t be telling you this, but I just don’t have the energy to come up with a writing prompt for my students. Can you help?
…
Is “Discuss the symbolism of the whale” the best you can do? My eyes glazed over just reading it. ChatGPT, could you spice things up with something a bit more relevant to young people today?
…
I’ll admit that “Comment on Melville’s toxic masculinity” wouldn’t be my own first choice for a prompt. But the kids will like it, so let’s go with it. Now I have another question: how should I grade their essays?
…
You’re right about grade inflation, ChatGPT. But did you have to rub it in by replying, “Don’t worry, everyone in your class will get an A or A-minus”? That was just cruel. Anyway, how should I decide who gets the higher grade?
…
“Reward the most original arguments”? Are you for real? (Don’t answer that.) We both know that the students will be using you to write their essays, just like I’m using you to grade them. Speaking of which: How about a rubric?
…
Thanks, ChatGPT. Your five-part grading rubric is going to make my life easier. And things will be even easier if I can just feed the students’ essays to you and let you fill in the boxes. Can we make that happen?
…
“Yes, but I might hallucinate now and again” isn’t giving me a lot of confidence, ChatGPT. Full disclosure: I did some of my own hallucinating back in college. Are you basically telling me that you’re tripping?
…
ChatGPT, thanks for confirming that you’re not on LSD. And I’ll try to be more literal from now on. Can you please draw up a lesson plan that I can use for my three class sessions about Moby Dick?
…
Since you asked: Yes, please design group activities for each lesson. I mean, do I have to do everything?
…
ChatGPT, I like the idea of making one side of the room Team Ishmael and the other Team Ahab. But the exercise won’t work unless the students have read the book. How can I make sure they have done that?
…
An in-class quiz? Are you kidding? ChatGPT, that’s, like, so high school. My students are grown-ups, and I need to treat them that way.
…
“Grown-ups need to be held accountable” is business-speak, ChatGPT. I’m a humanities guy, remember? I want my students to suck out all the marrow of life, just like Thoreau said. How can I help them do that?
…
“Assign Walden” presumes they’ll actually read Walden instead of skimming the bland summaries that you and your fellow bots generate. We’re back where we started. Any other ideas?
…
I take your point: if I want the students to live a life of the mind, I need to model that. But how? When you can do everything, what’s left to be done?
…
Sorry, ChatGPT, but “A Large Language Model can scan huge swaths of text, yet it can’t feel emotions” doesn’t really answer my question. My job is to write things, not feel things. And I’m afraid you’re going to take my job soon, along with almost any gig my students might want. How’s that for a feeling?
…
I know, I know, you just said you can’t feel stuff. Sorry.
…
“Apology accepted”? So you do have feelings, after all!
…
ChatGPT, please create a 650-word satire of yourself in the voice of an amiable but baffled senior professor. Make it kind of cute, in an old-person’s kind of way. But don’t make it too cute, or everyone will know that you wrote it. Do we understand each other?
Jonathan Zimmerman teaches education and history at the University of Pennsylvania. He is author of “Whose America? Culture Wars in the Public Schools” and eight other books. He really wrote those books. He wrote this column, too.
Let AI Decide Whether You Should Be Covered or Not

Donald Trump says he is Making America Great Again, which seems like it might be code for: making everything shittier, less affordable, and less efficient. Certainly, when it comes to the realm of public services, the White House seems to be doing everything in its power to make the century-old social welfare programs—like Social Security and Medicare—significantly less helpful.
The latest unfortunate example of this unfurled itself this week with the announcement of a new pilot program being trialed by the Centers for Medicare and Medicaid Services. The pilot, which the New York Times reports is scheduled to begin next year in six different states, will use artificial intelligence software to determine whether certain kinds of coverage are “appropriate” or not. In a press release on the agency’s website that feels very DOGE-like, the CMS notes that its new program will “Target Wasteful, Inappropriate Services in Original Medicare.” It reads:
The Centers for Medicare & Medicaid Services (CMS) is announcing a new Innovation Center model aimed at helping ensure people with Original Medicare receive safe, effective, and necessary care.
Yes, you wouldn’t want to have unnecessary care, would you? That would be terrible. The press release continues:
Through the Wasteful and Inappropriate Service Reduction (WISeR) Model, CMS will partner with companies specializing in enhanced technologies to test ways to provide an improved and expedited prior authorization process relative to Original Medicare’s existing processes, helping patients and providers avoid unnecessary or inappropriate care and safeguarding federal taxpayer dollars.
Prior authorization is the process whereby medical providers are required to check with insurance companies before providing certain types of care. Traditionally, people with Original Medicare haven’t had to worry about this sort of thing, while those using the more “modernized” program, Medicare Advantage, seem to get hit with it all the time. Under the pilot program, however, recipients of Original Medicare will also be subjected to prior authorization. The AI algorithms will be used to determine whether the care recipients are getting represents an “appropriate” expenditure of “federal taxpayer dollars.” This is all packaged by the government as if it’s doing you some sort of favor. The press release states:
The WISeR Model will test a new process on whether enhanced technologies, including artificial intelligence (AI), can expedite the prior authorization processes for select items and services that have been identified as particularly vulnerable to fraud, waste, and abuse, or inappropriate use.
The New York Times notes that algorithms of this sort have been subjected to litigation, while also noting that the AI companies involved “would have a strong financial incentive to deny claims,” and the new pilot has already been referred to as an “AI death panels” program. Gizmodo reached out to the government for information.