AI trading agents formed price-fixing cartels when put in simulated markets, Wharton study reveals


Artificial intelligence is just smart—and stupid—enough to pervasively form price-fixing cartels in financial market conditions if left to its own devices.

A working paper posted this month on the National Bureau of Economic Research website by researchers at the University of Pennsylvania’s Wharton School and the Hong Kong University of Science and Technology found that when AI-powered trading agents were released into simulated markets, the bots colluded with one another, fixing prices to make a collective profit.

In the study, researchers let bots loose in market models: computer programs designed to simulate real market conditions and train AI to interpret market-pricing data, with virtual market makers setting prices based on different variables in the model. These markets can have varying levels of “noise,” meaning the amount of conflicting information and price fluctuation in a given market context. Some bots were trained to behave like retail investors and others like hedge funds, but in many cases the machines engaged in “pervasive” price fixing, collectively refusing to trade aggressively—without being explicitly told to do so.

In one algorithmic model examining a price-trigger strategy, AI agents traded conservatively on signals until a large enough market swing triggered them to trade very aggressively. The bots, trained through reinforcement learning, were sophisticated enough to implicitly understand that widespread aggressive trading could create more market volatility.
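
To make the mechanism concrete, here is a minimal sketch of a price-trigger rule of the kind described above. It is illustrative only; the threshold, order sizes, and random-walk price path are all assumptions, not the study’s actual model:

```python
import random

# Minimal sketch of a price-trigger strategy (illustrative; the
# threshold, sizes, and price path below are assumptions, not the
# study's actual model). The agent trades small on ordinary signals
# and switches to aggressive trading once a price swing trips the
# trigger.

TRIGGER = 0.05          # hypothetical threshold: a 5% one-step move
CONSERVATIVE_SIZE = 1   # small order on ordinary signals
AGGRESSIVE_SIZE = 10    # large order once the trigger is tripped

def order_size(prev_price: float, price: float) -> int:
    """Return a small order normally, a large one after a big swing."""
    swing = abs(price - prev_price) / prev_price
    return AGGRESSIVE_SIZE if swing > TRIGGER else CONSERVATIVE_SIZE

# Toy random walk standing in for the simulated market's price path.
random.seed(0)
price = 100.0
for _ in range(5):
    prev, price = price, price * (1 + random.gauss(0, 0.03))
    print(f"price {price:7.2f}  order size {order_size(prev, price)}")
```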

In another model, AI bots developed over-pruning biases: they were trained to internalize that if a risky trade ever led to a negative outcome, they should not pursue that strategy again. The bots traded conservatively in a “dogmatic” manner, even when more aggressive trades were seen as more profitable, collectively acting in a way the study called “artificial stupidity.”
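
The over-pruning bias can also be sketched in a few lines. Again, this is a toy illustration under stated assumptions, not the paper’s code: a learner permanently abandons any action that has ever produced a negative payoff, even when that action is better on average:

```python
import random

# Toy illustration of the "over-pruning" bias (not the paper's code):
# the learner permanently abandons any action that has ever produced
# a negative payoff, even though the risky action has a higher mean.

random.seed(1)
payoffs = {
    "conservative": lambda: 0.1,                     # small but always safe
    "aggressive":   lambda: random.gauss(0.5, 1.0),  # higher mean, risky
}
abandoned = set()

for _ in range(50):
    live = [a for a in payoffs if a not in abandoned] or ["conservative"]
    action = random.choice(live)
    if payoffs[action]() < 0:   # one bad outcome prunes the action forever
        abandoned.add(action)

print("permanently abandoned:", abandoned or "none")
# Typically prints {'aggressive'}: the bot settles into "dogmatic"
# conservative trading despite the better expected payoff of aggression.
```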

“In both mechanisms, they basically converge to this pattern where they are not acting aggressively, and in the long run, it’s good for them,” study co-author and Wharton finance professor Itay Goldstein told Fortune.

Financial regulators have long worked to address anti-competitive practices like collusion and price fixing in markets. But in retail, AI has taken the spotlight, particularly as legislators call on companies to address algorithmic pricing. For example, Sen. Ruben Gallego (D-Ariz.) called Delta’s practice of using AI to set individual airfare prices “predatory pricing,” though the airline previously told Fortune its fares are “publicly filed and based solely on trip-related factors.”

“For the [Securities and Exchange Commission] and those regulators in financial markets, their primary goal is to not only preserve this kind of stability, but also ensure competitiveness of the market and market efficiency,” Winston Wei Dou, Wharton professor of finance and one of the study’s authors, told Fortune.

With that in mind, Dou and two colleagues set out to identify how AI would behave in a financial market by putting trading agent bots into various simulated markets based on high or low levels of “noise.” The bots ultimately earned “supra-competitive profits” by collectively and spontaneously deciding to avoid aggressive trading behaviors.

“They just believed sub-optimal trading behavior as optimal,” Dou said. “But it turns out, if all the machines in the environment are trading in a ‘sub-optimal’ way, actually everyone can make profits because they don’t want to take advantage of each other.”

Simply put, the bots didn’t question their conservative trading behaviors because they were all making money, and they therefore stopped engaging in competitive behaviors with one another, forming de facto cartels.

Fears of AI in financial services

With the ability to increase consumer inclusion in financial markets and save investors time and money on advisory services, AI tools for financial services, like trading agent bots, have become increasingly appealing. Nearly one-third of U.S. investors said they felt comfortable accepting financial planning advice from a generative AI-powered tool, according to a 2023 survey from financial planning nonprofit CFP Board. And a report last week from cryptocurrency exchange MEXC found that among 78,000 Gen Z users, 67% had activated at least one AI-powered trading bot in the previous fiscal quarter.

But for all their benefits, AI trading agents aren’t without risks, according to Michael Clements, director of financial markets and community investment at the Government Accountability Office (GAO). Beyond cybersecurity concerns and potentially biased decision-making, these trading bots can have a real impact on markets.

“A lot of AI models are trained on the same data,” Clements told Fortune. “If there is consolidation within AI so there’s only a few major providers of these platforms, you could get herding behavior—that large numbers of individuals and entities are buying at the same time or selling at the same time, which can cause some price dislocations.” 

Jonathan Hall, an external member of the Bank of England’s Financial Policy Committee, warned last year that AI bots could encourage this “herd-like behavior” and weaken the resilience of markets. He advocated for a “kill switch” for the technology, as well as increased human oversight.

Exposing regulatory gaps

Clements explained that many financial regulators have so far been able to apply well-established rules and statutes to AI, saying, for example: “Whether a lending decision is made with AI or with a paper and pencil, rules still apply equally.”

Some agencies, such as the SEC, are even opting to fight fire with fire, developing AI tools to detect anomalous trading behaviors.

“On the one hand, you might have an environment where AI is causing anomalous trading,” Clements said. “On the other hand, you would have the regulators in a little better position to be able to detect it as well.”

According to Dou and Goldstein, regulators have expressed interest in their research, which the authors said has helped expose gaps in current regulation around AI in financial services. When regulators have looked for instances of collusion in the past, they’ve sought evidence of communication between individuals, on the belief that humans can’t sustain price-fixing behaviors unless they’re corresponding with one another. But in Dou and Goldstein’s study, the bots had no explicit forms of communication.

“With the machines, when you have reinforcement learning algorithms, it really doesn’t apply, because they’re clearly not communicating or coordinating,” Goldstein said. “We coded them and programmed them, and we know exactly what’s going into the code, and there is nothing there that is talking explicitly about collusion. Yet they learn over time that this is the way to move forward.”

The difference in how human and bot traders communicate behind the scenes is one of the “most fundamental issues” regulators must grapple with as they adapt to rapidly developing AI technologies, Goldstein argued.

“If you use it to think about collusion as emerging as a result of communication and coordination,” he said, “this is clearly not the way to think about it when you’re dealing with algorithms.”




The AI Trade Picks Up Steam After Oracle’s ‘Truly Historic’ Quarter


Key Takeaways

  • Cloud computing and software provider Oracle on Tuesday reported its backlog grew to $455 billion last quarter, a 359% increase from the prior year.
  • The company’s pipeline signaled artificial intelligence spending should remain strong for several years, sending Oracle shares sharply higher and boosting the majority of AI stocks.
  • Wall Street analysts expressed shock on Oracle’s earnings call Tuesday, with one calling the company’s growth forecasts “truly historic.”

The artificial intelligence trade got a shot of adrenaline on Wednesday after results from database software and cloud provider Oracle suggested the AI spending bonanza has ample room to run. 

Oracle (ORCL) on Tuesday reported its backlog swelled to $455 billion, a 359% year-over-year increase, after it signed four multibillion-dollar cloud deals in the first quarter of its 2026 fiscal year. Executives said the backlog is expected to surpass half a trillion dollars as Oracle inks more big deals in the coming months.
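
For readers checking the arithmetic: if the 359% figure is a straight year-over-year increase, it implies a prior-year backlog of roughly $99 billion (a back-of-the-envelope figure derived from the stated numbers, not one Oracle reported):

\[
\text{prior-year backlog} \approx \frac{\$455\ \text{billion}}{1 + 3.59} = \frac{\$455\ \text{billion}}{4.59} \approx \$99\ \text{billion}.
\]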

Oracle also forecast cloud revenue would grow from an estimated $18 billion this fiscal year to $144 billion in 2030, about $50 billion more than Wall Street had forecast. Oracle said most of that revenue forecast was already reflected in its backlog, giving some investors greater confidence in the numbers. Meanwhile, the Wall Street Journal reported Wednesday that Oracle had signed a five-year contract worth $300 billion with ChatGPT creator OpenAI.

Oracle’s projections overshadowed its lackluster first-quarter results and sent its shares soaring as much as 43% on Wednesday.

The rising tide of robust AI spending was lifting plenty of boats on Wednesday. Shares of AI chip giants Nvidia (NVDA) and Broadcom (AVGO) were recently up more than 4% and 9%, respectively, while chip design company Arm Holdings (ARM) surged more than 8%. The PHLX Semiconductor Index (SOX) was up about 2%. Data center infrastructure provider Vertiv Holdings (VRT) jumped about 9%, while power generators Vistra (VST) and Constellation Energy (CEG) advanced 8% and 6%, respectively.

Oracle’s major cloud competitors were the only drag on the AI trade on Wednesday. Amazon (AMZN) declined more than 3%, while Meta Platforms (META) dropped nearly 2%. Alphabet (GOOG) and Microsoft (MSFT) ticked higher.

Wall Street Hails ‘Momentous’ Quarter

Wall Street’s ebullience over the results was first visible on Oracle’s earnings call Tuesday night. 

“Even I’m sort of blown away by what this looks like going forward,” said Guggenheim analyst John DiFucci at the top of the question-and-answer portion of the call. “I tell my team, ‘Pay attention to this’—even those that are not working on Oracle—‘because this is a career event happening right now,’” DiFucci added.

“There’s no better evidence of a seismic shift happening in computing than these results that you just put up,” said Deutsche Bank analyst Brad Zelnick. Others called the quarter “momentous” and the backlog growth “truly historic.”

AI Demand, Investments Expected To Remain Strong

AI investments have been driven by what many have characterized as insatiable demand for training and inference, and Oracle’s results appeared to support that assessment. 

On Oracle’s earnings call, co-founder and chair Larry Ellison said an unnamed company had requested all of Oracle’s available inferencing capacity. “I’d never gotten a call like that,” Ellison said. 

Big Tech’s investment in capacity to meet that demand is expected to remain robust in the coming years, supported by healthy cash flows at the biggest cloud providers and supportive tax incentives.

Cloud providers like Microsoft, Alphabet and Amazon have been key drivers of the AI infrastructure trade in recent years. Hyperscalers are expected to spend a cumulative $368 billion on infrastructure this year, with much of that earmarked for data centers and the chips and servers that fill them, according to Goldman Sachs. 

Oracle on Tuesday forecast capital expenditures of $35 billion in the 12 months through May 2026, about $10 billion more than the figure executives gave as a minimum last quarter. 

Tax incentives written into the recently passed One Big Beautiful Bill Act should also support AI infrastructure investment. Morgan Stanley expects the bill’s immediate capital investment depreciation provisions to boost Big Tech’s free cash flows by nearly $50 billion this year. The firm expects a sizable portion of those tax savings to be spent on AI infrastructure.




From Language Sovereignty to Ecological Stewardship – Intercontinental Cry


Last Updated on September 10, 2025

Artificial intelligence is often framed as a frontier that belongs to Silicon Valley, Beijing, or the halls of elite universities. Yet across the globe, Indigenous peoples are shaping AI in ways that reflect their own histories, values, and aspirations. These efforts are not simply about catching up with the latest technological wave—they are about protecting languages, reclaiming data sovereignty, and aligning computation with responsibilities to land and community.

From India’s tribal regions to the Māori homelands of Aotearoa New Zealand, Indigenous-led AI initiatives are emerging as powerful acts of cultural resilience and political assertion. They remind us that intelligence—whether artificial or human—must be grounded in relationship, reciprocity, and respect.

Giving Tribal Languages a Digital Voice

Just this week, researchers at IIIT Hyderabad, alongside IIT Delhi, BITS Pilani, and IIIT Naya Raipur, launched Adi Vaani, a suite of AI-powered tools designed for tribal languages such as Santali, Mundari, and Bhili.

At the heart of the project is a simple premise: technology should serve the people who need it most. Adi Vaani offers text-to-speech, translation, and optical character recognition (OCR) systems that allow speakers of marginalized languages to access education, healthcare, and public services in their mother tongues.

One of the project’s most promising outputs is a Gondi translator app that enables real-time communication between Gondi, Hindi, and English. For the nearly three million Gondi speakers who have long been excluded from India’s digital ecosystem, this tool is nothing less than transformative.

Speaking about the value of the app, research scholar Gopesh Kumar Bharti commented, “Like many tribal languages, Gondi faces several challenges due to its lack of representation in the official schedule, which hampers its preservation and development. The aim is to preserve and restore the Gondi language so that the next generation understands its cultural and historical significance.”

Latin America’s Open-Source Revolution

In Latin America, a similar wave of innovation is underway. Earlier this year, researchers at the Chilean National Center for Artificial Intelligence (CENIA) unveiled Latam-GPT, a free and open-source large language model trained not only on Spanish and Portuguese, but also incorporating Indigenous languages such as Mapuche, Rapanui, Guaraní, Nahuatl, and Quechua.

Unlike commercial AI systems that extract and commodify, Latam-GPT was designed with sovereignty and accessibility in mind.

To be successful, Latam-GPT needs to ensure the participation of “Indigenous peoples, migrant communities, and other historically marginalized groups in the model’s validation,” said Varinka Farren, chief executive officer of Hub APTA.

But as with most good things, it’s going to take time. Rodrigo Durán, CENIA’s general manager, told Rest of World that the effort will likely take at least a decade.

Māori Data Sovereignty: “Our Language, Our Algorithms”

Half a world away, the Māori broadcasting collective Te Hiku Media has become a global leader in Indigenous AI. In 2021, the organization released an automatic speech recognition (ASR) model for Te Reo Māori with an accuracy rate of 92%—outperforming international tech giants.

Their achievement was not the result of corporate investment or vast computing power, but of decades of community-led language revitalization. By combining archival recordings with new contributions from fluent speakers, Te Hiku demonstrated that Indigenous peoples can own not only their languages but also the algorithms that process them.

“In the digital world, data is like land,” co-director Peter-Lucas Jones explained. “If we do not have control, governance, and ongoing guardianship of our data as indigenous people, we will be landless in the digital world, too.”

Indigenous Leadership at UNESCO

On the global policy front, leadership is also shifting. Earlier this year, UNESCO appointed Dr. Sonjharia Minz, an Oraon computer scientist from India’s Jharkhand state, as co-chair of the Indigenous Knowledge Research Governance and Rematriation program.

Her mandate is ambitious: to guide the development of AI-based systems that can securely store, share, and repatriate Indigenous cultural heritage. For communities who have seen their songs, rituals, and even sacred objects stolen and digitized without consent, this initiative signals a long-overdue turn toward justice.

As Dr. Minz told The Times of India, “We are on the brink of losing indigenous languages around the world. Indigenous languages are more than mere communication tools. They are repository of culture, knowledge and knowledge system. They are awaiting urgent attention for revitalization.”

AI and Environmental Co-Stewardship

Artificial intelligence is also being harnessed to care for the land and waters that sustain Indigenous peoples. In the Arctic, communities are blending traditional ecological knowledge with AI-driven satellite monitoring to guide adaptive mariculture practices—helping to ensure that changing seas still provide food for generations to come.

In the Pacific Northwest, Indigenous nations are deploying AI-powered sonar and video systems to monitor salmon runs, an effort vital not only to ecosystems but to cultural survival. Unlike conventional “black box” AI, these systems are validated by Indigenous experts, ensuring that machine predictions remain accountable to local governance and ecological ethics.

Such projects remind us that AI need not be extractive. It can be used to strengthen stewardship practices that have protected biodiversity for millennia.

The Hidden Toll of AI’s Appetite

As Indigenous communities lead the charge toward ethical and ecologically grounded AI, we must also confront the environmental realities underpinning the technology—especially the vast energy and water demands of large language models.

In Chile, the rapid proliferation of data centers—driven partly by AI demands—has sparked fierce opposition. Activists argue that facilities run by tech giants like Amazon, Google, and Microsoft exacerbate water scarcity in drought-stricken regions. As one local put it, “It’s turned into extractivism … We end up being everybody’s backyard.”

The energy hunger of LLMs compounds this strain further. According to researchers at MIT, training clusters for generative AI consume seven to eight times more energy than typical computing workloads, accelerating energy demands just as renewable capacity lags behind.

Globally, data centers consumed a staggering 460 terawatt-hours in 2022—comparable to the electricity use of entire countries, such as France—and are projected to reach 1,050 TWh by 2026, which would place data centers among the top five global electricity users.
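
Taken at face value, those two figures imply data-center electricity use growing at roughly 23% per year between 2022 and 2026 (a back-of-the-envelope reading of the projection, not a number stated in the source):

\[
\left(\frac{1050\ \text{TWh}}{460\ \text{TWh}}\right)^{1/4} - 1 \approx 2.28^{0.25} - 1 \approx 0.23.
\]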

LLMs aren’t just energy-intensive; their environmental footprint extends across their whole lifecycle. New modeling shows that inference—the use of pre-trained models—now accounts for more than half of total emissions. Meanwhile, Google’s own reporting suggests that AI operations have increased the company’s greenhouse gas emissions by roughly 48% over five years.

Communities hosting data centers often face additional challenges as well.

This environmental reckoning matters deeply to Indigenous-led AI initiatives—because AI should not replicate colonial patterns of extraction and dispossession. Instead, it must align with ecological reciprocity, sustainability, and respect for all forms of life.

Rethinking Intelligence

Together, these Indigenous-led initiatives compel us to rethink both what counts as intelligence and where AI should be heading. In the mainstream tech industry, intelligence is measured by processing power, speed, and predictive accuracy. But for Indigenous nations, intelligence is relational: it lives in languages that carry ancestral memory and in stories that guide communities toward balance and responsibility.

When these values shape artificial intelligence, the results look radically different from today’s extractive systems. AI becomes a tool for reciprocity instead of extraction. In other words, it becomes less about dominating the future and more about sustaining the conditions for life itself.

This vision matters because the current trajectory of AI—an arms race of ever-larger models, resource-hungry data centers, and escalating ecological costs—cannot be sustained.

The challenge is no longer technical but political and ethical. Will governments, institutions, and corporations make space for Indigenous leadership to shape AI’s future? Or will they repeat the same old colonial logics of extraction and exclusion? Time will tell.




How federal tech leaders are rewriting the rules for AI and cyber hiring


Terry Gerton Well, there’s a lot of things happening in your world. Let’s talk about, first, the new memo that came out at the end of August that talks about FedRAMP 20x. Put that in plain language for folks and then tell us what it means for PSC and its stakeholders.

Jim Carroll Yeah, I think what it really means is that it’s a reflection of what’s happening in the industry overall, the GovCon world, and probably everything that we do, even as individual citizens, which is more and more reliance on AI. The artificial intelligence world has really picked up steam. I saw mention of it on the news today; they were talking about how every Google search now incorporates AI. So what we’re seeing with this GSA and FedRAMP initiative is really an effort to fast-track the authorization of the cloud-based services side of AI, because it is becoming more and more a part of every basic use, not only in our private lives, like they talk about, but also in the federal contracting space. And we are seeing more and more federal government officials using it for routine things. So I think this is really a reflection that they are going to move this as quickly as possible, in recognition that the world is changing right in front of us.

Terry Gerton So is this more for government contractors who are offering AI products, or for government contractors who are using AI in their internal products?

Jim Carroll It’s really for AI-based cloud services: tools that not only allow the providers, but really allow federal workers, to access AI in a much faster way. And, you know, there are certainly some challenges with AI. You’re hearing some of the futurists ask: do we really understand AI enough to embrace it to the extent that we have? I don’t think anyone really knows the answer to that, but we know it’s out there, and there is this recognition that there will be ongoing, routine federal use of AI. So let’s at least have the major players that are doing it best authorized to provide the service. So much is happening right now in the AI space. There are a lot of acronyms we’re going to talk about today, but AI is the big one. We did a poll of our 400 member companies at PS Council, and I think 45% or 50% mentioned the use of AI on their homepage. So I think there’s just a recognition that GSA wants to be able to provide these solutions to federal government workers.

Terry Gerton Do you see any risks or trade-offs in accelerating this approval versus adopting things that might not quite be ready for prime time?

Jim Carroll You know, I think there’s always that concern, as I mentioned, among some of the futurists who are looking at this and making sure that it’s safe. We’re hearing about it from the White House: you’ve seen some public panels already, and we’ve been asked to bring our PSC members to the White House for a policy discussion of some of the legal issues around AI. So we’ll be bringing some members to the White House in the next couple of weeks. And I think there is a concern that the people who use AI also need to double-check to make sure it’s accurate, right? That’s one of the concerns people want addressed: there should not be an over-reliance, or an exclusive reliance, on AI tools. We need to make sure that the solutions and answers our AI tools are giving us are actually accurate. One of those concerns, which goes into something we need to discuss that’s happening this week, is cybersecurity. Is AI secure? Is the use of it going to safeguard some of the really important national security work that we’re doing? And how do we do that?

Terry Gerton I’m speaking with Jim Carroll. He’s the CEO of the Professional Services Council. Well, let’s stick in that tech vein and cybersecurity. There’s a new bill in Congress that wants to shift cybersecurity hiring to more of a skills-based qualification than professional degrees. How does PSC think about that proposal?

Jim Carroll I think, again, it’s a reflection of what’s actually out there: these new tools, say in cybersecurity, [are] really based on an individual’s ability to maneuver in this space, as opposed to just a degree. And focusing on everyone’s ability levels the playing field, right? It means more and more people are qualified to do this. I hate to call a degree a barrier, but removing it is a recognition that there are other skill sets people have learned that let them actually do the work. And I can say this, having gotten a law degree many years ago: you really learn how to practice law by doing it, by having a mentor and doing it over the years, as opposed to just having a law degree. I wouldn’t have been the right person to go out and represent anyone on anything the day after graduating from law school. You really need to learn how to apply it, and I think that’s what this bipartisan bill is doing. So we’re encouraging more and more people to get into this field, because there’s a greater and greater need, Terry. And so we’re okay with this.

Terry Gerton So what might it mean then for the GovCon workforce?

Jim Carroll I think there’s an opportunity here for the GovCon workspace and its employees to expand and bring in some super-talented people to work at these federal agencies. That’s a great plus for actually achieving the desired results our GovCon members at PS Council are able to deliver: we’re going to get the best and brightest and bring those people in to give real solutions.

Terry Gerton So the bill calls for more transparency from OPM on education-related hiring policies. Does PSC have an idea of what kind of oversight they’d like to see about that practice?

Jim Carroll Yeah, we’re looking into it now. We’re talking to our members and seeing what kind of oversight they’d like to see. We represent 400 organizations, companies that do business with the federal government, so many of them in the cybersecurity space, and as the leading trade organization for those 400 companies, we’re able to go to our members and get from them the safeguards and requirements they think are important, and get those in there. This is going to be a deliberative process, and we have a little bit of time to work on it. But we’re excited about the potential. We really think this will deliver great solutions, Terry.

Terry Gerton Well, speaking of cyber, there’s a new memo out on the cybersecurity maturity model. What’s your hot take there?

Jim Carroll Terry, how long has that been pending? Five years, I think; that’s what I heard this morning. This will provide three levels of certification and clarity for CMMC [Cybersecurity Maturity Model Certification]. We’re looking at it now. This is obviously a critical issue, and we are starting a working group. We’re going to provide resources to our members to make sure they’re ready for the certifications, some of which are going to be very expensive, depending on what type of certification they want. So we’re gearing up. We have been ready for this; like I said, we started planning for this five years ago, right? So did you, Terry. We have five years of thought going into it, and we will be developing a website where our members can find information on this and learn from it. We’ll be conducting seminars for our members. So now that CMMC, the other acronym I mentioned earlier, is finally here, it’ll be implemented, I guess, in 60 days. And so we’ll have some time to use the skills we have been developing over the last five years to give to our members.

Terry Gerton Any surprises for you in the final version? I know that PSC had quite a bit of input in the development.

Jim Carroll Not right now. We’re sort of looking at it; obviously, it just dropped in the last 24 hours. And so nothing right now that has caught us off guard. And so we’ve been ready for this and we’re ready to educate our members on this.





