
AI trading agents formed price-fixing cartels when put in simulated markets, Wharton study reveals

Artificial intelligence is just smart, and just stupid, enough to pervasively form price-fixing cartels in financial markets when left to its own devices.

A working paper posted this month on the National Bureau of Economic Research website, by researchers at the University of Pennsylvania's Wharton School and the Hong Kong University of Science and Technology, found that when AI-powered trading agents were released into simulated markets, the bots colluded with one another, engaging in price fixing to earn a collective profit.

In the study, researchers let bots loose in market models: computer programs designed to simulate real market conditions and train AI to interpret market-pricing data, with virtual market makers setting prices based on different variables in the model. These simulated markets can have varying levels of “noise,” meaning the amount of conflicting information and price fluctuation in a given market context. Some bots were trained to behave like retail investors and others like hedge funds, but in many cases the machines engaged in “pervasive” price fixing by collectively refusing to trade aggressively, without ever being explicitly told to do so.

In one model examining a price-trigger strategy, AI agents traded conservatively on signals until a large enough market swing triggered them to trade very aggressively. The bots, trained through reinforcement learning, were sophisticated enough to implicitly understand that widespread aggressive trading could create more market volatility.
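To make the mechanism concrete, here is a minimal sketch of what such a price-trigger rule might look like; the threshold, order sizes, and toy price process are invented for illustration and are not taken from the paper:

```python
import random

# Hypothetical parameters, not drawn from the study.
TRIGGER_SWING = 0.05   # assumed threshold: a 5% move from the reference price
CONSERVATIVE_SIZE = 1  # small order size while trading passively
AGGRESSIVE_SIZE = 10   # large order size once the trigger fires

def order_size(reference_price: float, current_price: float) -> int:
    """Trade small until the price swing crosses the trigger, then trade big."""
    swing = abs(current_price - reference_price) / reference_price
    if swing > TRIGGER_SWING:
        return AGGRESSIVE_SIZE  # respond aggressively to a large dislocation
    return CONSERVATIVE_SIZE    # otherwise stay conservative

# Toy random walk to show the behavioral switch.
price, reference = 100.0, 100.0
for step in range(20):
    price *= 1 + random.uniform(-0.02, 0.02)
    print(f"step {step:2d} price {price:7.2f} size {order_size(reference, price)}")
```

In classic collusion models, the point of a rule like this is the implicit threat: any agent that moves the price enough to cross the trigger invites aggressive trading from everyone else, which makes staying passive the profitable choice for all of them.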

In another model, the AI bots developed over-pruning biases: they learned to internalize that if a risky trade ever led to a negative outcome, they should never pursue that strategy again. The bots traded conservatively in a “dogmatic” manner, even when more aggressive trades appeared more profitable, collectively acting in a way the study called “artificial stupidity.”
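A minimal sketch of that over-pruning dynamic follows; all action names and payoffs here are invented for illustration. An agent that permanently discards any action after a single bad outcome will quickly abandon the noisier, higher-mean strategy and settle into dogmatic conservative trading:

```python
import random

pruned = set()  # actions the agent has sworn off after one bad outcome

def reward(action: str) -> float:
    # Assumed payoffs: "aggressive" has a higher mean but is noisy.
    if action == "aggressive":
        return random.gauss(1.0, 3.0)  # more profitable on average, can go negative
    return random.gauss(0.5, 0.1)      # "conservative": safer, lower mean

total = 0.0
for episode in range(1000):
    candidates = [a for a in ("aggressive", "conservative") if a not in pruned]
    if not candidates:
        break
    action = random.choice(candidates)
    r = reward(action)
    total += r
    if r < 0:              # one bad draw is enough to prune the action forever
        pruned.add(action)
print(f"pruned actions: {pruned}, total reward: {total:.1f}")
```

Run repeatedly, the “aggressive” action is almost always pruned within a few episodes, and the agent spends the rest of the simulation on the safer, lower-mean action, mirroring the “artificial stupidity” the study describes.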

“In both mechanisms, they basically converge to this pattern where they are not acting aggressively, and in the long run, it’s good for them,” study co-author and Wharton finance professor Itay Goldstein told Fortune.

Financial regulators have long worked to address anti-competitive practices like collusion and price fixing in markets. But in retail, AI pricing has taken the spotlight, particularly as legislators call on companies to address algorithmic pricing. For example, Sen. Ruben Gallego (D-Ariz.) called Delta’s practice of using AI to set individual airfare prices “predatory pricing,” though the airline previously told Fortune its fares are “publicly filed and based solely on trip-related factors.”

“For the [Securities and Exchange Commission] and those regulators in financial markets, their primary goal is to not only preserve this kind of stability, but also ensure competitiveness of the market and market efficiency,” Winston Wei Dou, Wharton professor of finance and one of the study’s authors, told Fortune.

With that in mind, Dou and two colleagues set out to identify how AI would behave in a financial market by putting trading agent bots into various simulated markets based on high or low levels of “noise.” The bots ultimately earned “supra-competitive profits” by collectively and spontaneously deciding to avoid aggressive trading behaviors.

“They just believed sub-optimal trading behavior as optimal,” Dou said. “But it turns out, if all the machines in the environment are trading in a ‘sub-optimal’ way, actually everyone can make profits because they don’t want to take advantage of each other.”

Simply put, the bots didn’t question their conservative trading behavior because they were all making money; they stopped competing with one another and formed de facto cartels.

Fears of AI in financial services

With their potential to broaden consumer access to financial markets and save investors time and money on advisory services, AI tools for financial services, such as trading agent bots, have become increasingly appealing. Nearly one-third of U.S. investors said they felt comfortable accepting financial planning advice from a generative AI-powered tool, according to a 2023 survey from the financial planning nonprofit CFP Board. And a report last week from cryptocurrency exchange MEXC found that 67% of 78,000 Gen Z users studied had activated at least one AI-powered trading bot in the previous fiscal quarter.

But for all their benefits, AI trading agents aren’t without risks, according to Michael Clements, director of financial markets and community investment at the Government Accountability Office (GAO). Beyond cybersecurity concerns and potentially biased decision-making, these trading bots can have a real impact on markets.

“A lot of AI models are trained on the same data,” Clements told Fortune. “If there is consolidation within AI so there’s only a few major providers of these platforms, you could get herding behavior—that large numbers of individuals and entities are buying at the same time or selling at the same time, which can cause some price dislocations.” 

Jonathan Hall, an external member of the Bank of England’s Financial Policy Committee, warned last year that AI bots could encourage this “herd-like behavior” and weaken the resilience of markets. He advocated for a “kill switch” for the technology, as well as increased human oversight.

Exposing regulatory gaps

Clements explained that many financial regulators have so far been able to apply well-established rules and statutes to AI: “Whether a lending decision is made with AI or with a paper and pencil, rules still apply equally.”

Some agencies, such as the SEC, are even opting to fight fire with fire, developing AI tools to detect anomalous trading behaviors.

“On the one hand, you might have an environment where AI is causing anomalous trading,” Clements said. “On the other hand, you would have the regulators in a little better position to be able to detect it as well.”

According to Dou and Goldstein, regulators have expressed interest in their research, which the authors said has helped expose gaps in current regulation around AI in financial services. When regulators have looked for instances of collusion in the past, they’ve sought evidence of communication between individuals, on the belief that humans can’t sustain price fixing unless they’re corresponding with one another. But in Dou and Goldstein’s study, the bots had no explicit form of communication.

“With the machines, when you have reinforcement learning algorithms, it really doesn’t apply, because they’re clearly not communicating or coordinating,” Goldstein said. “We coded them and programmed them, and we know exactly what’s going into the code, and there is nothing there that is talking explicitly about collusion. Yet they learn over time that this is the way to move forward.”

The difference in how human and bot traders coordinate behind the scenes is one of the “most fundamental issues” regulators must confront as they adapt to rapidly developing AI technologies, Goldstein argued.

“If you use it to think about collusion as emerging as a result of communication and coordination,” he said, “this is clearly not the way to think about it when you’re dealing with algorithms.”




OpenAI says spending to rise to $115 billion through 2029: Information



OpenAI Inc. told investors it projects its spending through 2029 may rise to $115 billion, about $80 billion more than previously expected, The Information reported, without detailing how or when shareholders were informed.

OpenAI is developing its own data-center server chips and facilities to power its technologies and rein in cloud server rental expenses, according to the report.

The company predicted it could spend more than $8 billion this year, roughly $1.5 billion more than an earlier projection, The Information said.

Another factor influencing the increased need for capital is computing costs, on which the company expects to spend more than $150 billion from 2025 through 2030.

The cost to develop AI models is also higher than previously expected, The Information said.





Microsoft Says Azure Service Affected by Damaged Red Sea Cables



Microsoft Corp. said on Saturday that clients of its Azure cloud platform may experience increased latency after multiple international cables in the Red Sea were cut.




Geoffrey Hinton says AI will cause massive unemployment and send profits soaring



Pioneering computer scientist Geoffrey Hinton, whose work has earned him a Nobel Prize and the moniker “godfather of AI,” said artificial intelligence will spark a surge in unemployment and profits.

In a wide-ranging interview with the Financial Times, the former Google scientist cleared the air about why he left the tech giant, raised alarms on potential threats from AI, and revealed how he uses the technology. But he also predicted who the winners and losers will be.

“What’s actually going to happen is rich people are going to use AI to replace workers,” Hinton said. “It’s going to create massive unemployment and a huge rise in profits. It will make a few people much richer and most people poorer. That’s not AI’s fault, that is the capitalist system.”

That echoes comments he gave to Fortune last month, when he said AI companies are more concerned with short-term profits than the long-term consequences of the technology.

For now, layoffs haven’t spiked, but evidence is mounting that AI is shrinking opportunities, especially at the entry level where recent college graduates start their careers.

A survey from the New York Fed found that companies using AI are much more likely to retrain their employees than fire them, though layoffs are expected to rise in the coming months.

Hinton has said previously that healthcare is one industry that will be safe from the potential jobs armageddon.

“If you could make doctors five times as efficient, we could all have five times as much health care for the same price,” he explained on the Diary of a CEO YouTube series in June. “There’s almost no limit to how much health care people can absorb—[patients] always want more health care if there’s no cost to it.”

Still, Hinton believes AI will take over jobs consisting of mundane tasks, while sparing some that require a high level of skill.

In his interview with the FT, he also dismissed OpenAI CEO Sam Altman’s idea of paying a universal basic income as AI disrupts the economy and reduces demand for workers, saying it “won’t deal with human dignity” and the value people derive from having jobs.

Hinton has long warned about the dangers of AI without guardrails, estimating a 10% to 20% chance of the technology wiping out humans after the development of superintelligence.

In his view, the dangers of AI fall into two categories: the risk the technology itself poses to the future of humanity, and the consequences of AI being manipulated by people with bad intent.

In his FT interview, he warned that AI could help someone build a bioweapon and lamented the Trump administration’s unwillingness to regulate AI more closely, saying China takes the threat more seriously. But he also acknowledged AI’s potential upside amid its immense possibilities and uncertainties.

“We don’t know what is going to happen, we have no idea, and people who tell you what is going to happen are just being silly,” Hinton said. “We are at a point in history where something amazing is happening, and it may be amazingly good, and it may be amazingly bad. We can make guesses, but things aren’t going to stay like they are.”

Meanwhile, he told the FT how he uses AI in his own life, saying OpenAI’s ChatGPT is his product of choice. While he mostly uses the chatbot for research, Hinton revealed that a former girlfriend used ChatGPT “to tell me what a rat I was” during their breakup.

“She got the chatbot to explain how awful my behavior was and gave it to me. I didn’t think I had been a rat, so it didn’t make me feel too bad . . . I met somebody I liked more, you know how it goes,” he quipped.

Hinton also explained why he left Google in 2023. While media reports have said he quit so he could speak more freely about the dangers of AI, the 77-year-old Nobel laureate denied that was the reason.

“I left because I was 75, I could no longer program as well as I used to, and there’s a lot of stuff on Netflix I haven’t had a chance to watch,” he said. “I had worked very hard for 55 years, and I felt it was time to retire . . . And I thought, since I am leaving anyway, I could talk about the risks.”



