
AI Research

Prediction: Oracle Will Surpass Amazon, Microsoft, and Google to Become the Top Cloud for Artificial Intelligence (AI) by 2031



Key Points

  • Oracle is on its way to becoming the best cloud for AI and high-performance computing.

  • Oracle's multicloud offering is cutting networking complexity and reducing data transfer latency.

  • OpenAI will need to raise capital or generate cash flow to afford its $300 billion cloud deal with Oracle.

On Sept. 10, Oracle (NYSE: ORCL) stock popped 36% in response to a massive increase in customer orders for Oracle’s cloud services.

Oracle forecasts that revenue from its Oracle Cloud Infrastructure (OCI) segment could grow from around $10 billion in its last fiscal year (fiscal 2025) to $18 billion in its current fiscal year (fiscal 2026), $32 billion in fiscal 2027, $73 billion in fiscal 2028, $114 billion in fiscal 2029, and $144 billion in fiscal 2030 (Oracle's fiscal years end May 31).


For context, Amazon Web Services (AWS) generated over $60 billion in net sales in the first half of 2025, an annualized run rate of roughly $120 billion. Microsoft, which just wrapped up its fiscal 2025, reported $106 billion in Intelligent Cloud revenue. And Alphabet's Google Cloud generated $26 billion in revenue in the first half of 2025. This means OCI is forecast to exceed the current size of Google Cloud within three years, Microsoft's Intelligent Cloud segment within four years, and AWS within five years.
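For readers who want to check the math, here is a minimal sketch in Python that reproduces the implied growth rates and crossover years from the figures quoted above (the revenue numbers are the article's; the run-rate annualization simply doubles first-half figures):

```python
# Oracle's forecast OCI revenue by fiscal year, in billions of dollars
# (figures from the article).
oci_forecast = {2026: 18, 2027: 32, 2028: 73, 2029: 114, 2030: 144}
prior_year = 10  # fiscal 2025, approximate

# Year-over-year growth implied by the forecast.
prev = prior_year
for year, revenue in sorted(oci_forecast.items()):
    growth = (revenue / prev - 1) * 100
    print(f"FY{year}: ${revenue}B ({growth:.0f}% growth)")
    prev = revenue

# Rivals' current annualized run rates, in billions (first-half figures
# doubled; Microsoft's number is its Intelligent Cloud segment).
rivals = {"Google Cloud": 26 * 2, "Microsoft Intelligent Cloud": 106, "AWS": 60 * 2}
for name, run_rate in rivals.items():
    crossover = min(y for y, r in oci_forecast.items() if r > run_rate)
    print(f"OCI forecast first exceeds {name} (~${run_rate}B) in FY{crossover}")
```

Running this reproduces the timeline above: Google Cloud's run rate is passed in fiscal 2028, Microsoft's Intelligent Cloud in fiscal 2029, and AWS in fiscal 2030.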

Here’s why Oracle is winning cloud contracts from leading artificial intelligence (AI) companies like OpenAI, and why the company could become the top cloud for AI within the next five years.


The future of cloud computing

Oracle's push into cloud infrastructure is arguably the boldest bet in the company's history. Oracle isn't cutting corners, either; it is bringing dozens of data centers online in just a few years. It has built 34 multicloud data centers and should have another 37 online in less than a year.

These multicloud data centers are unique because they allow an organization to use services and run workloads across two or more cloud providers, such as AWS, Microsoft Azure, Google Cloud, and OCI, all of which can work with the Oracle database. The idea is to let customers select the best cloud service for each task.

AWS, Azure, and Google Cloud all have multicloud strategies too, but the big difference is that Oracle is embedding native versions of its infrastructure (Oracle Autonomous Database and Exadata Database Service) inside the big three clouds to boost performance and decrease latency. Examples include Oracle Database@AWS, Oracle Database@Azure, and Oracle Database@Google Cloud. The "big three" are more about managing workloads across clouds than about integrating them natively.
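In practice, the pitch is that an application already running in one of the big three clouds can reach a co-located Oracle database with ordinary tooling. Here is a minimal, hypothetical sketch using the python-oracledb driver; the credentials and connection string are placeholders, not real Oracle Database@AWS values:

```python
# Hypothetical sketch: an app running in AWS querying a co-located
# Oracle Autonomous Database via Oracle Database@AWS. The user,
# password, and DSN below are illustrative placeholders.
import oracledb

connection = oracledb.connect(
    user="app_user",
    password="app_password",  # in practice, fetch from a secrets manager
    dsn="example-adb.region.example.com/my_service",  # placeholder DSN
)

with connection.cursor() as cursor:
    cursor.execute("SELECT sysdate FROM dual")  # trivial round-trip query
    print(cursor.fetchone())

connection.close()
```

Because the database runs inside the same provider's data center, that round trip stays on the local network instead of crossing between clouds, which is where the latency reduction comes from.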

The buildout of OCI as a formidable alternative to the big three, paired with Oracle's ultra-modern data centers, puts Oracle on the cutting edge of data center workflows. According to Oracle, OCI can deliver 50% better price performance and complete high-performance computing workloads 3.5 times faster than the previous generation of its infrastructure.

Race to the clouds

Oracle is purpose-building its cloud from scratch specifically for AI, whereas the bulk of what AWS, Microsoft Azure, and Google Cloud handle is non-AI work, like basic compute and storage, databases and analytics, and networking. So while Oracle will likely become the biggest cloud for AI if it hits its fiscal 2030 OCI revenue target of $144 billion, it may still be a smaller cloud by total revenue than the more established giants.

Still, Oracle is achieving milestones that are impossible to ignore, laying the foundation for it to become the go-to cloud for AI. It exited its most recent quarter with a 359% increase in contract backlog, bringing the total to $455 billion. Reports indicate that Oracle landed a multiyear, $300 billion contract with OpenAI. To afford that deal, OpenAI will need to raise capital or start generating far more cash flow.

On Sept. 11, two days after Oracle reported earnings, OpenAI and Microsoft released a joint statement on transitioning OpenAI from a pure nonprofit to a nonprofit that owns a majority stake in a public benefit corporation (PBC). A PBC is a for-profit corporation with mission-based guardrails: it aims to generate profits while advancing a stated mission. OpenAI's transition could allow it to raise billions more in funding, which would presumably help fund its deal with Oracle even if OpenAI isn't generating positive free cash flow.

Having OpenAI as the cornerstone of Oracle's backlog has its pros and cons. On the one hand, it demonstrates that one of the most cutting-edge AI companies recognizes the value of what Oracle is building. On the other, it adds concentration risk to Oracle's projections: if OpenAI's plans don't pan out, Oracle's forecast could fall apart.

A high-risk, high-potential-reward AI play

Oracle is attracting massive deals thanks to its multicloud partnerships with the big three cloud players. It has also built an attractive pricing model for customers specifically looking for high-performance computing to train AI models.

With customers lining up at the door, including a crown jewel in OpenAI, all Oracle has to do now is scale its infrastructure. It has become the best restaurant in town, with reservations booked years in advance. The demand is undeniable, especially given that these are multibillion-dollar, multiyear contracts.

Given Oracle's extremely pricey valuation, investors should only consider the stock if they have a high risk tolerance and a long-term time horizon, and believe that Oracle's multicloud offering will be the premier option for AI customers. If that thesis plays out, Oracle will likely be worth considerably more in the future than it is today, even after the stock has nearly doubled over the last year and more than quadrupled over the last three years.


Daniel Foelber has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet, Amazon, Microsoft, and Oracle. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.




AI Research

(Policy Address 2025) HK earmarks HK$3B for AI research and talent recruitment – The Standard (HK)






AI Research

[2506.08171] Worst-Case Symbolic Constraints Analysis and Generalisation with Large Language Models


By Daniel Koh and 4 other authors

Abstract: Large language models (LLMs) have demonstrated strong performance on coding tasks such as generation, completion, and repair, but their ability to handle complex symbolic reasoning over code remains underexplored. We introduce the task of worst-case symbolic constraints analysis, which requires inferring the symbolic constraints that characterise worst-case program executions; these constraints can be solved to obtain inputs that expose performance bottlenecks or denial-of-service vulnerabilities in software systems. We show that even state-of-the-art LLMs (e.g., GPT-5) struggle when applied directly to this task. To address this challenge, we propose WARP, an innovative neurosymbolic approach that computes worst-case constraints on smaller concrete input sizes using existing program analysis tools, and then leverages LLMs to generalise these constraints to larger input sizes. Concretely, WARP comprises: (1) an incremental strategy for LLM-based worst-case reasoning, (2) a solver-aligned neurosymbolic framework that integrates reinforcement learning with SMT (Satisfiability Modulo Theories) solving, and (3) a curated dataset of symbolic constraints. Experimental results show that WARP consistently improves performance on worst-case constraint reasoning. Leveraging the curated constraint dataset, we use reinforcement learning to fine-tune a model, WARP-1.0-3B, which significantly outperforms size-matched and even larger baselines. These results demonstrate that incremental constraint reasoning enhances LLMs' ability to handle symbolic reasoning and highlight the potential for deeper integration between neural learning and formal methods in rigorous program analysis.
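To make the "solver-aligned" idea concrete, here is a hedged illustration (our sketch of the general approach, not the paper's code) of checking a proposed generalisation against a tool-derived constraint using the Z3 SMT solver, with insertion sort's strictly decreasing worst-case input as the running example:

```python
# Illustrative sketch of the verification step: worst-case constraints
# derived at a small input size are compared against an LLM-proposed
# generalisation, instantiated at that same size, using Z3.
from z3 import And, Ints, Not, Solver, unsat

# Tool-derived worst-case constraint for insertion sort at n = 3
# (the worst case is a strictly decreasing input).
x0, x1, x2 = Ints("x0 x1 x2")
tool_constraint_n3 = And(x0 > x1, x1 > x2)

# Proposed generalisation: x[i] > x[i+1] for all i < n - 1.
def generalised(xs):
    return And([xs[i] > xs[i + 1] for i in range(len(xs) - 1)])

proposed_n3 = generalised([x0, x1, x2])

# The two formulas agree iff their biconditional is valid, i.e. its
# negation is unsatisfiable.
solver = Solver()
solver.add(Not(tool_constraint_n3 == proposed_n3))
print("generalisation matches at n = 3:", solver.check() == unsat)
```

If the check passes at the small sizes where ground truth is available, the generalised constraint can then be instantiated at larger n to produce candidate worst-case inputs.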

Submission history

From: Daniel Koh
[v1] Mon, 9 Jun 2025 19:33:30 UTC (1,462 KB)
[v2] Tue, 16 Sep 2025 10:35:33 UTC (1,871 KB)





AI Research

SIFThinker: Spatially-Aware Image Focus for Visual Reasoning


By Zhangquan Chen and 6 other authors

Abstract: Current multimodal large language models (MLLMs) still face significant challenges in complex visual tasks (e.g., spatial understanding, fine-grained perception). Prior methods have tried to incorporate visual reasoning; however, they fail to leverage attention correction with spatial cues to iteratively refine their focus on prompt-relevant regions. In this paper, we introduce SIFThinker, a spatially-aware "think-with-images" framework that mimics human visual perception. Specifically, SIFThinker enables attention correction and image-region focusing by interleaving depth-enhanced bounding boxes and natural language. Our contributions are twofold: First, we introduce a reverse-expansion-forward-inference strategy that facilitates the generation of interleaved image-text chains of thought for process-level supervision, which in turn leads to the construction of the SIF-50K dataset. Second, we propose GRPO-SIF, a reinforced training paradigm that integrates depth-informed visual grounding into a unified reasoning pipeline, teaching the model to dynamically correct and focus on prompt-relevant regions. Extensive experiments demonstrate that SIFThinker outperforms state-of-the-art methods in spatial understanding and fine-grained visual perception, while maintaining strong general capabilities, highlighting the effectiveness of our method. Code: this https URL.
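To make the "interleaved depth-enhanced bounding boxes and natural language" idea concrete, here is a minimal, hypothetical sketch of such a chain as a data structure (the field names and values are our assumptions based on the abstract, not the authors' schema):

```python
# Hypothetical sketch of an interleaved image-text chain of thought
# with depth-enhanced bounding boxes. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class FocusStep:
    thought: str                             # natural-language reasoning
    bbox: tuple[float, float, float, float]  # (x1, y1, x2, y2), normalised
    depth: float                             # estimated region depth, metres

# A toy two-step chain: the model first attends to a plausible region,
# then corrects its focus using a spatial (depth) cue.
chain = [
    FocusStep("The mug might be at the table edge.", (0.55, 0.60, 0.80, 0.85), 1.8),
    FocusStep("Correction: the mug sits behind the laptop.", (0.30, 0.35, 0.45, 0.55), 2.6),
]

for step in chain:
    print(f"[depth {step.depth:.1f} m] bbox={step.bbox} -> {step.thought}")
```

The property this structure captures is that each reasoning step carries an explicit, checkable spatial claim, which is what allows attention correction to be supervised at the process level rather than only at the final answer.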

Submission history

From: Zhangquan Chen
[v1] Fri, 8 Aug 2025 12:26:20 UTC (5,223 KB)
[v2] Thu, 14 Aug 2025 10:34:22 UTC (5,223 KB)
[v3] Sun, 24 Aug 2025 13:04:46 UTC (5,223 KB)
[v4] Tue, 16 Sep 2025 09:40:13 UTC (5,223 KB)




