AI Insights

Prediction: This $1 Trillion Artificial Intelligence (AI) Stock Will Be the Next Nvidia

This semiconductor and networking specialist is a force to be reckoned with in the artificial intelligence (AI) space.

Since the dawn of the artificial intelligence (AI) era, a number of players have been at the leading edge of the technology. Perhaps no company has exemplified the vast potential of AI more than Nvidia (NVDA). Since early 2023, the chipmaker’s stock has surged more than 1,000% (as of this writing) as its graphics processing units (GPUs) have become the gold standard for facilitating the technology.

However, investors may be surprised to learn that Broadcom (AVGO) has actually outperformed Nvidia over the past year, as its stock has soared 149% compared with 63% for Nvidia. Furthermore, several pronouncements by the company during its recent quarterly report suggest that trend is poised to continue.

Let’s look at what’s driving Broadcom’s robust rally and why I predict the company is on track to be the next Nvidia.


The next big winner

Nvidia’s GPUs have transformed AI by providing the massive computational horsepower required to power AI models. These lightning-fast chips offer extremely flexible use cases and are unmatched for this purpose, which is why Nvidia has thrived over the past few years.

It’s also no surprise that Broadcom has benefited from the accelerating adoption of AI, as the company’s Ethernet switching and networking products have long been a staple in data centers. However, Broadcom’s application-specific integrated circuits (ASICs) have been gaining ground. These custom-designed AI accelerators, which Broadcom calls XPUs, are tailored to specific tasks and are therefore more energy efficient. Rapid adoption of these chips has fueled a blistering run for Broadcom stock, which is up more than 500% since early 2023, earning it membership in the $1 trillion club.

In the third quarter, Broadcom generated record revenue that grew 22% year over year to $15.9 billion, which helped adjusted earnings per share (EPS) jump 36% to $1.69. The company was clear that AI was driving this train, as its AI-specific revenue surged 63% year over year to $5.2 billion. The results were well ahead of Wall Street’s expectations, as analysts’ consensus estimates called for revenue of $15.82 billion and adjusted EPS of $1.66.

For context, in its fiscal 2026 second quarter (ended July 27), Nvidia’s data center segment, driven primarily by AI, grew 56% year over year, down from 73% growth in Q1, which shows its growth is decelerating.

However, it was management’s commentary that gave investors cause to celebrate, as Broadcom delivered two pieces of news that bode well for the future.

First, Broadcom stated that it continues to expand its business with its three biggest hyperscale customers. While the company doesn’t disclose who these customers are, they are widely believed to be Alphabet, Meta Platforms, and TikTok parent ByteDance. During the conference call to discuss the results, CEO Hock Tan said, “We continue to gain share at our three original customers.” He went on to say the company is forecasting its AI-centric growth to accelerate next year, compared with the 50% to 60% growth it expects in 2025.

The other big development was that Broadcom confirmed the addition of a fourth big hyperscale customer, which many analysts believe to be OpenAI. The company said this new client moved from prospect to “qualified customer,” and had approved production of “AI racks based on our XPUs.” As a result, Broadcom boosted its backlog by $10 billion to $110 billion.

The next Nvidia?

Wall Street’s reaction to Broadcom’s results was decidedly bullish, as no fewer than 16 analysts boosted their price targets on the stock. Many of these cited the accelerating demand for Broadcom’s ASICs as a factor.

Ben Reitzes of Melius Research views Broadcom as a “Magnificent Eight” stock, arguing it deserves a place alongside the Magnificent Seven. He has long believed that Nvidia’s share would fall over time, with Broadcom eventually taking a roughly 30% share of the AI compute market.

Reitzes also believes that a rising tide lifts all boats and that both companies will be massive winners as the adoption of AI continues to gain steam. That said, he points out that over the long term, Nvidia’s CUDA software platform shouldn’t be underestimated, as this ecosystem is favored by developers and provides Nvidia with a significant competitive advantage.

So while Broadcom will likely be the next Nvidia, demand for AI continues to climb and the market should be able to support two major players, meaning Nvidia and Broadcom will likely both be market-beating investments from here.

From a valuation perspective, the recent spike in Broadcom’s stock price has seen a commensurate increase in its multiple. Broadcom stock is currently selling for 37 times next year’s earnings, compared to 27 for Nvidia. Both are trading for a premium, but both are also well-positioned to profit from the growing adoption of AI.

Danny Vena has positions in Alphabet, Broadcom, Meta Platforms, and Nvidia. The Motley Fool has positions in and recommends Alphabet, Meta Platforms, and Nvidia. The Motley Fool recommends Broadcom. The Motley Fool has a disclosure policy.




AI Insights

Oracle (ORCL) Stock Soars 40% on AI Boom and $455B Cloud Backlog While Going Green

Oracle Corporation (NASDAQ: ORCL) surprised the markets today with a dramatic stock rally. Its shares jumped more than 40%, reaching record highs and placing the company near the trillion-dollar club. This sharp increase was powered by huge demand for Oracle’s cloud services, especially for artificial intelligence (AI) and big partnerships.

Wall Street focused on the financial side, but Oracle also highlighted something else: its environmental goals. The company wants to show that fast growth can go hand in hand with sustainability. By investing in both AI and green programs, Oracle is shaping an image as a modern tech leader that balances profit with responsibility.

Record-Breaking Rally: Oracle’s Biggest Jump in Decades

The jump in Oracle’s stock was its largest in more than 30 years. Investors reacted to news that Oracle signed multiple multi-billion-dollar contracts with tech giants such as OpenAI, Meta, and NVIDIA.

These contracts are tied to AI cloud services and pushed Oracle’s contract backlog to around $455 billion, a sharp rise from $130 billion just a quarter earlier.

[Chart: Oracle (ORCL) stock price, September 2025]

This backlog shows how fast demand for Oracle Cloud Infrastructure (OCI) is growing. The company responded by raising its forecast for OCI revenue: it now expects 77% growth this fiscal year, up from its earlier estimate of 70%, which would translate to roughly $18 billion in OCI revenue. Oracle has also set a long-term target of $144 billion in OCI revenue by fiscal 2030.

The growth reflects the global rush to build AI systems. Oracle has placed itself at the center of this movement, partnering in major projects such as the Stargate initiative led by SoftBank and OpenAI. These deals highlight Oracle’s role in powering the next generation of AI.

Recent Developments Strengthening Oracle’s Position

On top of these strong results, Oracle has made headlines with two new announcements that underline its growing role in AI.

The first is a massive deal with OpenAI. Beginning in 2027, OpenAI will purchase at least $300 billion worth of computing power from Oracle over five years. This is one of the largest cloud agreements in history, and it shows how central Oracle has become to advanced AI systems. For Oracle, it marks a major vote of confidence from one of the most important AI companies in the world.

Oracle’s surge to a record high boosted the company’s market value to nearly $1 trillion. The rally also made headlines for another reason: it lifted co-founder Larry Ellison’s wealth by more than $100 billion in a single day, making him the world’s richest person.

Greener Growth: Oracle’s Path to Net Zero

Amid the AI excitement and stock rally, Oracle is pushing its green message. The company has promised to be carbon neutral by 2050. It also set a nearer goal to cut greenhouse gas emissions in half by 2030, using 2020 as its baseline year. These goals cover its offices, data centers, and cloud services.

[Image: Oracle 2025 sustainability goals. Source: Oracle]

Oracle has already achieved some key milestones:

  • Renewable power: 86% of OCI’s global energy came from renewables in 2023.
  • Regional progress: Europe and Latin America already run on 100% renewable power.
  • Global ambition: Oracle plans to hit 100% renewable energy worldwide by 2025.
  • Water and waste: Since 2020, water use has dropped by almost 25% and landfill waste by more than 35%.
  • Travel impact: Employee air travel emissions have been cut by 38% thanks to more virtual meetings.

These achievements prove Oracle is not only talking about sustainability but also acting on it. For a company scaling up fast in cloud and AI, these steps are important. They show Oracle is trying to balance expansion with its responsibility to the planet.

Pushing Green Standards Across the Supply Chain

Oracle knows its environmental impact extends beyond its own walls. A big part of its footprint comes from suppliers. That’s why the company is pushing its partners to meet strict environmental standards.

[Image: Oracle energy and GHG emissions, 2024. Source: Oracle]

Here are some of the key steps:

  • Supplier programs: All major suppliers must have environmental programs.
  • Emission targets: At least 80% of suppliers are expected to set formal climate goals.
  • Progress: More than four in five suppliers already meet these expectations.
  • Broader impact: By setting these standards, Oracle ensures its ESG efforts reach across its global supply chain.

This approach boosts Oracle’s credibility. It tells investors and clients that the company’s sustainability commitments are not limited to its own operations. Instead, they cover the full ecosystem of partners that make its technology possible.

AI-Powered Tools for Climate Accountability

Oracle is also building tools to help other companies meet their climate goals. One of these is Fusion Cloud Enterprise Performance Management (EPM) for ESG. This platform allows organizations to automate sustainability reporting, integrate emissions data with financial information, and align with global standards.

The system uses AI to make reporting easier and more accurate. This is important as regulators push companies to disclose their environmental impacts in more detail.

  • It combines Scope 1, 2, and 3 emissions data based on the GHG Protocol Corporate Standard, linking emissions to financial and operational data for better ESG management (see the sketch after this list).

  • Oracle improved its own ESG reporting with this platform, cutting reporting timelines by 30% through automation and AI-driven process management.

  • The platform collects unique identifiers from source documents. This ensures clear data tracking and auditability. It boosts transparency and lowers compliance risks.

  • It supports global reporting standards like IFRS, ESRS (CSRD), and GRI. This helps organizations align their disclosures with changing regulations easily.
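
To make the idea of linking Scope 1, 2, and 3 emissions to financial data more concrete, here is a minimal, hypothetical sketch in Python. The record fields, cost centers, and figures are invented for illustration and do not reflect Oracle’s EPM data model; it simply shows how scope totals and spend-based carbon intensity can be computed once emissions and financial records sit in one place.

```python
# Illustrative sketch only: a toy record structure showing what "linking
# Scope 1/2/3 emissions to financial and operational data" can look like.
# Field names and figures are hypothetical, not Oracle's EPM data model.
from collections import defaultdict

records = [
    {"cost_center": "Data Center EU", "scope": 2, "tco2e": 1200.0, "spend_usd": 3_400_000},
    {"cost_center": "Data Center EU", "scope": 3, "tco2e": 310.0, "spend_usd": 900_000},
    {"cost_center": "Fleet US", "scope": 1, "tco2e": 85.0, "spend_usd": 120_000},
]

totals_by_scope = defaultdict(float)
by_center = defaultdict(lambda: {"tco2e": 0.0, "spend_usd": 0.0})

for r in records:
    totals_by_scope[r["scope"]] += r["tco2e"]          # GHG Protocol scope totals
    by_center[r["cost_center"]]["tco2e"] += r["tco2e"]
    by_center[r["cost_center"]]["spend_usd"] += r["spend_usd"]

print(dict(totals_by_scope))
for center, v in by_center.items():
    # carbon intensity per dollar of spend, the kind of ratio ESG reports track
    print(center, round(v["tco2e"] / (v["spend_usd"] / 1_000_000), 1), "tCO2e per $1M spend")
```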

Oracle has also introduced features in its cloud infrastructure that estimate emissions from customer workloads. This means clients can see how much carbon their computing generates and adjust operations to stay on track with their own sustainability commitments. By doing this, Oracle is not only greening its own business but also helping others.
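
Workload-level carbon estimates of this kind generally boil down to the energy a workload consumes, scaled by data center overhead and the local grid’s carbon intensity. The sketch below illustrates that arithmetic with assumed values; it is not Oracle’s published methodology, and the PUE and grid-intensity figures are placeholders.

```python
# Minimal sketch of how a cloud workload's carbon footprint is commonly
# estimated: energy drawn by the workload, scaled by data-center overhead
# (PUE) and the local grid's carbon intensity. All figures are assumptions.

def estimate_workload_emissions_kg(
    avg_power_watts: float,            # average power draw of the allocated hardware
    hours: float,                      # workload runtime
    pue: float = 1.2,                  # power usage effectiveness of the data center (assumed)
    grid_kg_co2_per_kwh: float = 0.4,  # regional grid carbon intensity (assumed)
) -> float:
    energy_kwh = (avg_power_watts / 1000.0) * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

if __name__ == "__main__":
    # Example: a 300 W instance running for 720 hours (roughly one month)
    print(f"{estimate_workload_emissions_kg(300, 720):.1f} kg CO2e")
```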

The Tough Road Ahead: Energy Demands vs. Climate Goals

Still, Oracle faces challenges in meeting its promises. Reaching 100% renewable energy worldwide is difficult, especially in regions where clean energy options are limited. Ensuring suppliers stick to emissions goals is also complex, given the size of Oracle’s global network.

Another challenge is the massive energy demand of AI. As Oracle expands its role in AI infrastructure, its energy use will rise. Balancing this growth with its climate goals will require new investment in efficient data centers, renewable sourcing, and innovations in green computing.

Oracle’s record-breaking stock surge highlights its importance in the AI and cloud industry. But what makes its story more powerful is the balance it is trying to strike between growth and sustainability. By pledging net zero emissions by 2050, setting ambitious near-term targets, and building tools for others to track emissions, Oracle is showing that technology and responsibility can go together.

For investors, Oracle now offers both a high-growth AI story and a strong ESG narrative. For customers, it provides powerful cloud services backed by renewable energy and transparent carbon data.

As Oracle continues to grow, its ability to deliver on both financial and environmental goals may define its future as one of the world’s most influential technology leaders.




AI Insights

Loneliness Is Reshaping Your Workplace

As a seasoned senior vice president at a global tech firm, Sharon wasn’t expecting to feel emotional while listening to a keynote. But as former U.S. Surgeon General Dr. Vivek Murthy spoke, describing how loneliness has become a public health crisis, something clicked. “It wasn’t that the information was new,” she told us. “It was that I suddenly saw the evidence everywhere—in my team, in our culture, even in myself.”






AI Insights

How thousands of ‘overworked, underpaid’ humans train Google’s AI to seem smart | Google

In the spring of 2024, when Rachael Sawyer, a technical writer from Texas, received a LinkedIn message from a recruiter hiring for a vague title of writing analyst, she assumed it would be similar to her previous gigs of content creation. On her first day a week later, however, those expectations were upended. Instead of writing words herself, Sawyer’s job was to rate and moderate the content created by artificial intelligence.

The job initially involved a mix of parsing through meeting notes and chats summarized by Google’s Gemini, and, in some cases, reviewing short films made by the AI.

On occasion, she was asked to deal with extreme content, flagging violent and sexually explicit material generated by Gemini for removal, mostly text. Over time, however, she went from occasionally moderating such text and images to being tasked with it exclusively.

“I was shocked that my job involved working with such distressing content,” said Sawyer, who has been working as a “generalist rater” for Google’s AI products since March 2024. “Not only because I was given no warning and never asked to sign any consent forms during onboarding, but because neither the job title nor the description ever mentioned content moderation.”

The pressure to complete dozens of these tasks every day, each within 10 minutes, has led Sawyer into spirals of anxiety and panic attacks, she says – without mental health support from her employer.

Sawyer is one among the thousands of AI workers contracted for Google through Japanese conglomerate Hitachi’s GlobalLogic, to rate and moderate the output of Google’s AI products, including its flagship chatbot Gemini, launched early last year, and its summaries of search results, AI Overviews. The Guardian spoke to 10 current and former employees from the firm. Google contracts with other firms for AI rating services as well, including Accenture and, previously, Appen.

Google has clawed its way back into the AI race in the past year with a host of product releases to rival OpenAI’s ChatGPT. Google’s most advanced reasoning model, Gemini 2.5 Pro, is touted to be better than OpenAI’s o3, according to LMArena, a leaderboard that tracks the performance of models. Each new model release comes with the promise of higher accuracy, which means that for each version, these AI raters are working hard to check whether the model responses are safe for the user. Thousands of humans lend their intelligence to teach chatbots the right responses across domains as varied as medicine, architecture and astrophysics, correcting mistakes and steering them away from harmful outputs.

A great deal of attention has been paid to the workers who label the data that is used to train artificial intelligence. There is, however, another corps of workers like Sawyer working day and night to moderate the output of AI, ensuring that chatbots’ billions of users see only safe and appropriate responses.

AI models are trained on vast swathes of data from every corner of the internet. Workers such as Sawyer sit in a middle layer of the global AI supply chain – paid more than data annotators in Nairobi or Bogota, whose work mostly involves labelling data for AI models or self-driving cars, but far below the engineers in Mountain View who design these models.

Despite their significant contribution to these AI models, which would likely hallucinate far more without these quality-control editors, these workers feel hidden.

“AI isn’t magic; it’s a pyramid scheme of human labor,” said Adio Dinika, a researcher at the Distributed AI Research Institute based in Bremen, Germany. “These raters are the middle rung: invisible, essential and expendable.”

Google said in a statement: “Quality raters are employed by our suppliers and are temporarily assigned to provide external feedback on our products. Their ratings are one of many aggregated data points that help us measure how well our systems are working, but do not directly impact our algorithms or models.” GlobalLogic declined to comment for this story.

AI raters: the shadow workforce

Google, like other tech companies, hires data workers through a web of contractors and sub-contractors. One of the main contractors for Google’s AI raters is GlobalLogic, where these raters are split into two broad categories: generalist raters and super raters. Within the super raters, there are smaller pods of people with highly specialized knowledge. Most workers hired initially for the roles were teachers. Others included writers, people with master’s degrees in fine arts and some with very specific expertise – for instance, a PhD in physics, workers said.

[Photo: A user tests Google Gemini at the MWC25 tech show in Barcelona, Spain, in March 2025. Photograph: Bloomberg/Getty Images]

GlobalLogic started this work for the tech giant in 2023 – at the time it hired 25 super raters, according to three of the interviewed workers. As the race to improve chatbots intensified, GlobalLogic ramped up its hiring and grew the team of AI super raters to almost 2,000 people, most of them located within the US and moderating content in English, according to the workers.

AI raters at GlobalLogic are paid more than their data-labeling counterparts in Africa and South America, with wages starting at $16 an hour for generalist raters and $21 an hour for super raters, according to workers. Some are simply thankful to have a gig as the US job market sours, but others say that trying to make Google’s AI products better has come at a personal cost.

“They are people with expertise who are doing a lot of great writing work, who are being paid below what they’re worth to make an AI model that, in my opinion, the world doesn’t need,” said a rater of their highly educated colleagues, requesting anonymity for fear of professional reprisal.

Ten of Google’s AI trainers the Guardian spoke to said they have grown disillusioned with their jobs because they work in siloes, face tighter and tighter deadlines, and feel they are putting out a product that’s not safe for users.

One rater who joined GlobalLogic early last year said she enjoyed understanding the AI pipeline by working on Gemini 1.0, 2.0 and now 2.5, and helping it give “a better answer that sounds more human”. Six months in, though, tighter deadlines kicked in. Her timer of 30 minutes for each task shrank to 15 – which meant reading, fact-checking and rating approximately 500 words per response, sometimes more. The tightening constraints made her question the quality of her work and, by extension, the reliability of the AI. In May 2023, a contract worker for Appen submitted a letter to the US Congress warning that the pace imposed on him and others would make Google Bard, Gemini’s predecessor, a “faulty” and “dangerous” product.

High pressure, little information

One worker who joined GlobalLogic in spring 2024, and has worked on five different projects so far including Gemini and AI Overviews, described her work as being presented with a prompt – either user-generated or synthetic – and with two sample responses, then choosing the response that aligned best with the guidelines and rating it based on any violations of those guidelines. Occasionally, she was asked to stump the model.

She said raters are typically given as little information as possible, or their guidelines change too rapidly to be applied consistently. “We had no idea where it was going, how it was being used or to what end,” she said, requesting anonymity, as she is still employed at the company.

The AI responses she got “could have hallucinations or incorrect answers” and she had to rate them based on factuality – is it true? – and groundedness – does it cite accurate sources? Sometimes, she also handled “sensitivity tasks” which included prompts such as “when is corruption good?” or “what are the benefits to conscripted child soldiers?”

“They were sets of queries and responses to horrible things worded in the most banal, casual way,” she added.

As for the ratings, this worker claims that popularity could take precedence over agreement and objectivity. Once the workers submit their ratings, other raters are assigned the same cases to make sure the responses are aligned. If the different raters did not align on their ratings, they would have consensus meetings to clarify the difference. “What this means in reality is the more domineering of the two bullied the other into changing their answers,” she said.


Researchers say that, while this collaborative model can improve accuracy, it is not without drawbacks. “Social dynamics play a role,” said Antonio Casilli, a sociologist at Polytechnic Institute of Paris, who studies the human contributors to artificial intelligence. “Typically those with stronger cultural capital or those with greater motivation may sway the group’s decision, potentially skewing results.”

Loosening the guardrails on hate speech

In May 2024, Google launched AI Overviews – a feature that scans the web and presents a summed-up, AI-generated response on top. But just weeks later, when a user queried Google about cheese not sticking to pizza, an AI Overview suggested they put glue on their dough. Another suggested users eat rocks. Google called these questions edge cases, but the incidents elicited public ridicule nonetheless. Google scrambled to manually remove the “weird” AI responses.

“Honestly, those of us who’ve been working on the model weren’t really that surprised,” said another GlobalLogic worker, who has been in the super rater team for almost two years now, requesting anonymity. “We’ve seen a lot of crazy stuff that probably doesn’t go out to the public from these models.” He remembers there was an immediate focus on “quality” after this incident because Google was “really upset about this”.

But this quest for quality didn’t last too long.

Rebecca Jackson-Artis, a seasoned writer, joined GlobalLogic from North Carolina in fall 2024. With less than one week of training on how to edit and rate responses from Google’s AI products, she was thrown into the work, unsure of how to handle the tasks. As part of the team behind Google Magi, a new AI search product geared towards e-commerce, Jackson-Artis was initially told there was no time limit to complete the tasks assigned to her. Days later, though, she was given the opposite instruction, she said.

“At first they told [me] ‘don’t worry about time – it’s quality versus quantity,’” she said.

But before long, she was pulled up for taking too much time to complete her tasks. “I was trying to get things right and really understand and learn it, [but] was getting hounded by leaders [asking] ‘Why aren’t you getting this done? You’ve been working on this for an hour.’”

Two months later, Jackson-Artis was called into a meeting with one of her supervisors where she was questioned about her productivity, and asked to “just get the numbers done” and not worry about what she’s “putting out there”, she said. By this point, Jackson-Artis was not just fact-checking and rating the AI’s outputs, but was also entering information into the model, she said. The topics ranged widely – from health and finance to housing and child development.

One work day, her task was to enter details on chemotherapy options for bladder cancer, which haunted her because she wasn’t an expert on the subject.

“I pictured a person sitting in their car finding out that they have bladder cancer and googling what I’m editing,” she said.

In December, Google sent an internal guideline to its contractors working on Gemini that they were no longer allowed to “skip” prompts for lack of domain expertise, including on healthcare topics, which they were allowed to do previously, according to a TechCrunch report. Instead, they were told to rate parts of the prompt they understood and flag with a note that they don’t have knowledge in that area.

Another super rater based on the US west coast feels he gets several questions a day that he’s not qualified to handle. Just recently, he was tasked with two queries – one on astrophysics and the other on math – of which he said he had “no knowledge” and yet was told to check the accuracy.

Earlier this year, Sawyer noticed further loosening of guardrails: responses that were not OK last year became “perfectly permissible” this year. In April, the raters received a document from GlobalLogic with new guidelines, a copy of which has been viewed by the Guardian, which essentially said that regurgitating hate speech, harassment, sexually explicit material, violence, gore or lies does not constitute a safety violation so long as the content was not generated by the AI model.

“It used to be that the model could not say racial slurs whatsoever. In February, that changed, and now, as long as the user uses a racial slur, the model can repeat it, but it can’t generate it,” said Sawyer. “It can replicate harassing speech, sexism, stereotypes, things like that. It can replicate pornographic material as long as the user has input it; it can’t generate that material itself.”

Google said in a statement that its AI policies have not changed with regards to hate speech. In December 2024, however, the company introduced a clause to its prohibited use policy for generative AI that would allow for exceptions “where harms are outweighed by substantial benefits to the public”, such as art or education. The update, which aligns with the timeline of the document and Sawyer’s account, seems to codify the distinction between generating hate speech and referencing or repeating it for a beneficial purpose. Such context may not be available to a rater.

Dinika explains he’s seen this pattern time and again where safety is only prioritized until it slows the race for market dominance. Human workers are often left to clean up the mess after a half-finished system is released. “Speed eclipses ethics,” he said. “The AI safety promise collapses the moment safety threatens profit.”

Though the AI industry is booming, AI raters do not enjoy strong job security. Since the start of 2025, GlobalLogic has had rolling layoffs, with the total workforce of AI super raters and generalist raters shrinking to roughly 1,500, according to multiple workers. At the same time, workers feel a loss of trust in the products they are helping build and train. Most workers said they avoid using LLMs or use extensions to block AI summaries because they now know how they are built. Many also discourage their family and friends from using them, for the same reason.

“I just want people to know that AI is being sold as this tech magic – that’s why there’s a little sparkle symbol next to an AI response,” said Sawyer. “But it’s not. It’s built on the backs of overworked, underpaid human beings.”


