
AI Research

This Artificial Intelligence (AI) Stock Could Hit a $2 Trillion Valuation by 2028



  • Broadcom’s solid growth in the past year has brought the company’s market cap to $1.3 trillion.

  • Its healthy revenue growth, massive addressable market, and improving customer base could help it deliver more upside.

  • Broadcom stock is expensive right now, but it can justify its premium valuation and eventually hit a $2 trillion market cap.



Broadcom (NASDAQ: AVGO) has become a key player in the artificial intelligence (AI) chip market thanks to its application-specific integrated circuits (ASICs). ASICs are gaining tremendous traction among cloud service providers and hyperscalers because of their cost-effectiveness and performance advantages over general-purpose chips such as graphics processing units (GPUs).

This explains why Broadcom stock has shot up an impressive 65% in the past year, which is significantly higher than the 30% gain for AI chip pioneer Nvidia during this period. This impressive surge has brought Broadcom’s market cap to roughly $1.3 trillion, and it wouldn’t be surprising to see this semiconductor stock enter the $2 trillion market cap club in the next three years.

Let’s look at the reasons why Broadcom seems capable of hitting that milestone by 2028.


Broadcom released its fiscal 2025 second-quarter results (for the three months ended May 4) last month. The chipmaker’s revenue in the first six months of the fiscal year increased by 22% from the year-ago period to $29.9 billion. Strong sales of the company’s AI chips played a central role in driving that growth.

Broadcom’s AI chip revenue jumped 77% year over year in the fiscal first quarter, followed by a 46% increase in Q2. The company has sold $8.5 billion worth of AI chips in the first half of the year, which means that it is getting nearly 30% of its top line from this segment. Broadcom expects to sell $5.1 billion worth of AI chips in the current quarter, which would be a 60% jump from the year-ago period.

So, the company is on track to register a healthy jump in AI revenue compared to the previous fiscal year, when it sold $12.2 billion worth of AI chips. Importantly, Broadcom’s AI revenue still has tremendous room for growth, thanks to a couple of factors.

First, the adoption of custom AI processors is increasing at a healthy pace. Major cloud computing companies such as Microsoft, Alphabet’s Google, and Amazon, as well as AI giants such as OpenAI, are turning to custom chips to deliver cutting-edge performance to their customers at reasonable prices. Microsoft, for instance, released two in-house chips late last year to speed up AI workloads and improve the security of its data center infrastructure.

Google, on the other hand, revealed its Ironwood custom AI inferencing processor three months ago, delivering a significant increase in performance over its previous chips with the aim of running AI workloads in a cost-effective manner. Meanwhile, OpenAI is reportedly working with Broadcom to finalize the design of its custom AI chip.

Broadcom’s client list for its custom chips now includes the likes of Meta Platforms, ByteDance, Alphabet, and OpenAI. The company is reportedly going to design chips for xAI, Oracle, and Apple as well. All these customers should expand Broadcom’s annual serviceable addressable market well beyond the $60 billion to $90 billion range that the company is forecasting by fiscal 2027.

The second reason why Broadcom is on track to win big from the custom AI processor market is its solid market share in this space. The company reportedly controls 70% of this lucrative end market, and its growing customer base should allow it to sustain this healthy share in the future.

Not surprisingly, investment banking firm TD Cowen estimates that Broadcom’s AI chip revenue could grow to $50 billion a year in 2027, which would be more than four times the revenue it generated from this segment last year. That could be sufficient for the company to get to a $2 trillion market cap. Here’s why.

Broadcom finished fiscal 2024 with $51.6 billion in revenue, $12.2 billion of which came from AI. If the company’s revenue from all other segments remains flat and it indeed generates $50 billion in AI revenue by 2027, its annual revenue could jump to just over $89 billion within the next three years. This is almost in line with what analysts are anticipating.

Chart: AVGO revenue estimates for the current fiscal year. Data by YCharts.

The new AI customers that Broadcom is bringing on board could help it do even better. But even if the company manages only $89 billion in sales after three years and maintains its current price-to-sales ratio of 22.4, its market cap would hit almost $2 trillion. That points toward roughly 60% gains from current levels.
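The back-of-envelope math above can be checked in a few lines of Python. All figures come from the article itself; the flat non-AI revenue and the constant price-to-sales ratio are the article's stated assumptions, not forecasts:

```python
# Back-of-envelope check of the article's projection (all figures in billions of USD).
fy2024_revenue = 51.6        # Broadcom's fiscal 2024 revenue
fy2024_ai_revenue = 12.2     # ...of which AI chip revenue
projected_ai_revenue = 50.0  # TD Cowen's estimate for annual AI revenue in 2027

# Assume all non-AI revenue stays flat, per the article's scenario.
non_ai_revenue = fy2024_revenue - fy2024_ai_revenue
projected_revenue = non_ai_revenue + projected_ai_revenue
print(f"Projected fiscal 2027 revenue: ${projected_revenue:.1f}B")

# Apply the current price-to-sales ratio quoted in the article.
ps_ratio = 22.4
projected_market_cap = projected_revenue * ps_ratio / 1000  # in trillions
print(f"Implied market cap: ${projected_market_cap:.2f}T")
```

This reproduces the article's "just over $89 billion" revenue figure and a market cap just above $2 trillion; it is illustrative arithmetic, not an investment model.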

Of course, Broadcom is trading at a premium valuation right now, and that seems justified considering the pace at which its AI revenue and clientele are growing. So, investors looking to buy an AI growth stock can still consider buying Broadcom even after the impressive gains that it has delivered in the past year, as it seems built for more upside over the next three years.

Before you buy stock in Broadcom, consider this:

The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Broadcom wasn’t one of them. The 10 stocks that made the cut could produce monster returns in the coming years.

Consider when Netflix made this list on December 17, 2004… if you invested $1,000 at the time of our recommendation, you’d have $699,558!* Or when Nvidia made this list on April 15, 2005… if you invested $1,000 at the time of our recommendation, you’d have $976,677!*

Now, it’s worth noting Stock Advisor’s total average return is 1,060% — a market-crushing outperformance compared to 180% for the S&P 500. Don’t miss out on the latest top 10 list, available when you join Stock Advisor.


*Stock Advisor returns as of June 30, 2025

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool’s board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool’s board of directors. Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool’s board of directors. Harsh Chauhan has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet, Amazon, Apple, Meta Platforms, Microsoft, Nvidia, and Oracle. The Motley Fool recommends Broadcom and recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.

Prediction: This Artificial Intelligence (AI) Stock Could Hit a $2 Trillion Valuation by 2028 was originally published by The Motley Fool





Here’s how doctors say you should ask AI for medical help



The Dose: What should I know about asking ChatGPT for health advice?

Family physician Dr. Danielle Martin doesn’t mince words about artificial intelligence. 

“I don’t think patients should use ChatGPT for medical advice. Period,” said Martin, chair of the University of Toronto’s department of family and community medicine. 

Still, with roughly 6.5 million Canadians without a primary care provider, she acknowledges that physicians can’t stop patients from turning to chatbots powered by large language models (LLMs) for health answers. 

Martin isn’t alone in her concerns. Physician groups like the Ontario Medical Association and research from institutions like the Sunnybrook Health Sciences Centre all caution patients against relying on AI for medical advice. 

A 2025 study comparing 10 popular chatbots, including ChatGPT, DeepSeek and Claude, found “a strong bias in many widely used LLMs towards overgeneralizing scientific conclusions, posing a significant risk of large-scale misinterpretations of research findings.”

Martin and other experts believe most patients would be better served by using telehealth options available across Canada, such as dialling 811 in most provinces.

But she also told The Dose host Dr. Brian Goldman that if they do choose to use chatbots, they can help reduce the risk of harm by avoiding open-ended questions and restricting AI-generated answers to credible sources.

Learning to ask the right questions

Unlike traditional search engines, which provide users with links to reputable sources, chatbots like Gemini, Claude and ChatGPT generate their own answers to users’ questions based on patterns in the data they were trained on.

Martin says a key challenge is figuring out how much of an AI-generated answer to a medical question is or isn’t essential information.

If you ask a chatbot something like, “I have a red rash on my leg, what could it be?” you could be given a “dump of information” which can do more harm than good.

“My concern is that the average busy person isn’t going to be able to read and process all of that information,” she said.

Danielle Martin is a family physician and chair of the department of family and community medicine at the University of Toronto. (Craig Chivers/CBC)

What’s more, if a patient asks “What do I need to know about lupus?”, for example, they “probably don’t know enough yet about lupus to be able to screen out or recognize the stuff that actually doesn’t make sense,” said Martin.

Martin says patients are often better served by asking chatbots for help finding reliable sources, like official government websites. 

Instead of asking, “Should I get this year’s flu shot?” a better question would be, “What are the most reliable websites to learn more about this year’s flu shot?”

Be careful following treatment advice

Martin says that patients shouldn’t rely on solutions recommended by AI — like purchasing topical creams for rashes — without consulting a medical expert. 

For symptoms like rashes, which may have many possible causes, Martin recommends speaking to a health-care worker rather than asking an AI at all. 

Some people also worry that an AI chatbot might talk patients out of consulting real-life physicians, but family physician Dr. Onil Bhattacharrya says that outcome is less likely than some may fear.

“Generally the tools are … slightly risk-averse, so they might push you to more likely seek care than not,” said Bhattacharrya, director of Women’s College Hospital’s institute for health system solutions and virtual care. 

Bhattacharrya is interested in how technology can support clinical care, and says artificial intelligence could be a way to democratize access to medical expertise. 

He uses tools like OpenEvidence which compiles information from medical journals and gives answers that are accessible to most health professionals.

WATCH | How doctors are using AI in the exam room — and why it could become the norm: 


The Quebec government says it’s launching a pilot project involving artificial intelligence transcription tools for health-care professionals, with an increasing number of them saying the tools cut down the time they spend filling out paperwork.

Still, Bhattacharrya recognizes that it can be more challenging for patients to determine the reliability of medical advice from an AI.

“As a doctor, I can critically appraise that information,” but it isn’t always easy for patients to do the same, he said.

Bhattacharrya also said chatbots can suggest treatment options that are available in some countries but not Canada, since many of them draw from American medical literature.

Despite her hesitations, Martin acknowledges there are some things an AI can do better than human physicians — like recalling a long list of possible conditions associated with a symptom. 

“On a good day, we’re best at identifying the things that are common and the things that are dangerous,” she said. 

“I would imagine that if you were to ask the bot, ‘What are all of the possible causes of low platelets?’ or whatever, it would probably include six things on the list that I have forgotten about because I haven’t seen or heard about them since third year medical school.”

Can patients with chronic conditions benefit from AI?

For his part, Bhattacharrya also sees AI as a way to empower people to improve their health literacy. 

A chatbot can help patients with chronic conditions looking for general information in simple language, though he cautions against “exploring nonspecific symptoms and their implications.”

WATCH | People are turning to AI chatbots for emotional support: 


Warning: Mention of suicide and self-harm. Millions of people, especially teens, are finding companionship and emotional support in AI chatbots, according to a kids’ digital safety nonprofit. But health and technology experts say artificial intelligence isn’t properly designed for these scenarios and could do more harm than good.

“In primary care we see a large number of people with nonspecific symptoms,” he said. 

“I have not tested this, but I suspect the chatbots are not great at saying ‘I don’t know what is causing this but let’s just monitor it and see what happens.’ That’s what we say as family doctors much of the time.”






As they face conflicting messages about AI, some advice for educators on how to use it responsibly



When it comes to the rapid integration of artificial intelligence into K-12 classrooms, educators are being pulled in two very different directions.

One prevailing media narrative stokes such profound fears about the emerging strengths of artificial intelligence that it could lead one to believe it will soon be “game over” for everything we know about good teaching. At the same time, a sweeping executive order from the White House and tech-forward education policymakers paint AI as “game on” for designing the educational system of the future.

I work closely with educators across the country, and as I’ve discussed AI with many of them this spring and summer, I’ve sensed a classic “approach-avoidance” dilemma — an emotional stalemate in which they’re encouraged to run toward AI’s exciting new capabilities while also made very aware of its risks.

Even as educators are optimistic about AI’s potential, they are cautious and sometimes resistant to it. These conflicting urges to approach and avoid can be paralyzing.

Related: A lot goes on in classrooms from kindergarten to high school. Keep up with our free weekly newsletter on K-12 education.

What should responsible educators do? As a learning scientist who has been involved in AI since the 1980s and who conducts nationally funded research on issues related to reading, math and science, I have some ideas.

First, it is essential to keep teaching students core subject matter — and to do that well. Research tells us that students cannot learn critical thinking or deep reasoning in the abstract. They have to reason and critique on the basis of deep understanding of meaningful, important content. Don’t be fooled, for example, by the notion that because AI can do math, we shouldn’t teach math anymore.

We teach students mathematics, reading, science, literature and all the core subjects not only so that they will be well equipped to get a job, but because these are among the greatest, most general and most enduring human accomplishments.

You should use AI when it deepens learning of the instructional core, but you should also ignore AI when it’s a distraction from that core.

Second, don’t limit your view of AI to a focus on either teacher productivity or student answer-getting.

Instead, focus on your school’s “portrait of a graduate” — highlighting skills like collaboration, communication and self-awareness as key attributes that we want to cultivate in students.

Much of what we know in the learning sciences can be brought to life when educators focus on those attributes, and AI holds tremendous potential to enrich those essential skills. Imagine using AI not to deliver ready-made answers, but to help students ask better, more meaningful questions — ones that are both intellectually rigorous and personally relevant.

AI can also support student teams by deepening their collaborative efforts — encouraging the active, social dimensions of learning. And rather than replacing human insight, AI can offer targeted feedback that fuels deeper problem-solving and reflection.

When used thoughtfully, AI becomes a catalyst — not a crutch — for developing the kinds of skills that matter most in today’s world.

In short, keep your focus on great teaching and learning. Ask yourself: How can AI help my students think more deeply, work together more effectively and stay more engaged in their learning?

Related: PROOF POINTS: Teens are looking to AI for information and answers, two surveys show

Third, seek out AI tools and applications that are not just incremental improvements, but let you create teaching and learning opportunities that were impossible to deliver before. And at the same time, look for education technologies that are committed to managing risks around student privacy, inappropriate or wrong content and data security.

Such opportunities for a “responsible breakthrough” will be a bit harder to find in the chaotic marketplace of AI in education, but they are there and worth pursuing. Here’s a hint: They don’t look like popular chatbots, and they may arise not from the largest commercial vendors but from research projects and small startups.

For instance, some educators are exploring screen-free AI tools designed to support early readers in real-time as they work through physical books of their choice. One such tool uses a hand-held pointer with a camera, a tiny computer and an audio speaker — not to provide answers, but to guide students as they sound out words, build comprehension and engage more deeply with the text.

I am reminded: Strong content remains central to learning, and AI, when thoughtfully applied, can enhance — not replace — the interactions between young readers and meaningful texts without introducing new safety concerns.

Thus, thoughtful educators should continue to prioritize core proficiencies like reading, math, science and writing — and use AI only when it helps develop the skills and abilities prioritized in their desired portrait of a graduate. By adopting ed-tech tools that are focused on novel learning experiences and committed to student safety, educators will lead us to a responsible future for AI in education.

Jeremy Roschelle is the executive director of Digital Promise, a global nonprofit working to expand opportunity for every learner.

Contact the opinion editor at opinion@hechingerreport.org.

This story about AI in the classroom was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter.

The Hechinger Report provides in-depth, fact-based, unbiased reporting on education that is free to all readers. But that doesn’t mean it’s free to produce. Our work keeps educators and the public informed about pressing issues at schools and on campuses throughout the country. We tell the whole story, even when the details are inconvenient. Help us keep doing that.

Join us today.






Now Artificial Intelligence (AI) for smarter prison surveillance in West Bengal – The CSR Journal





