
AI Insights

Meet the Artificial Intelligence (AI) Stock That’s Crushing Nvidia on the Market in 2025



This cybersecurity company’s improving revenue pipeline points toward more stock price upside.

The fast-growing adoption of artificial intelligence (AI) technology in the past three years has been a tailwind for Nvidia (NVDA). The company enjoyed an early-mover advantage in this market thanks to its graphics processing units (GPUs), which have played a central role in the training of popular AI models.

However, it looks like investors’ appetite for Nvidia stock may be fading. It has appreciated just 32% so far in 2025 despite sustaining healthy growth levels. Factors such as restrictions on sales of its chips to China and the potential impact of tariffs on Nvidia’s business seem to be weighing on the stock. So, if you’re looking for an alternative to capitalize on the fast-growing adoption of AI, now would be a good time to take a closer look at this cybersecurity specialist that has outperformed Nvidia so far this year.


The proliferation of AI in the cybersecurity market is turning out to be a tailwind for this company

Zscaler (ZS), a cloud-based cybersecurity company, has seen its stock price jump 59% in 2025. It is primarily known for zero-trust security solutions that help customers verify the identity of users and devices accessing their networks. The zero-trust security market is projected to grow at an annual pace of almost 17% through 2030, generating more than $92 billion in annual revenue by the end of the decade, according to Grand View Research.

The good part is that Zscaler is growing faster than the zero-trust security market itself. Its revenue in the recently concluded fiscal year 2025 (which ended on July 31) increased 23% to $2.7 billion. Looking ahead, Zscaler could keep outpacing the market thanks to its strategy of offering customers tools that protect AI apps, ensure secure access to those apps, and safeguard large language models (LLMs), among other capabilities.

Additionally, Zscaler offers agentic AI cybersecurity solutions that speed up the process of identifying the causes of IT outages, taking corrective measures, and troubleshooting. Importantly, these offerings are growing at a nice pace: annual recurring revenue (ARR) from its agentic security operations increased an impressive 85% year over year, while its agentic AI operations grew 58% last year.

With the adoption of agentic AI in cybersecurity expected to clock a compound annual growth rate (CAGR) of 34% through 2033, hitting an annual revenue of $322 billion at the end of the forecast period, Zscaler seems to be in a solid position to accelerate its growth in the long run.
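The forecast's endpoint and growth rate can be sanity-checked with simple compounding math. The sketch below is illustrative, not from the source, and it assumes the 34% CAGR compounds over the eight years from 2025 through 2033:

```python
# Back-of-the-envelope check of the agentic AI security forecast cited above.
# Assumption: the 34% CAGR compounds over the 8 years from 2025 through 2033.
end_value_billion = 322.0   # forecast annual revenue in 2033
cagr = 0.34                 # compound annual growth rate
years = 8                   # assumed compounding window

implied_start_billion = end_value_billion / (1 + cagr) ** years
print(f"Implied market size today: about ${implied_start_billion:.0f} billion")  # ~$31 billion
```

In other words, the forecast implies roughly a tenfold expansion of the market over the period, which is the backdrop for the growth thesis that follows.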

Even better, the company is already building a healthy long-term revenue pipeline thanks to its focus on fast-growing niches such as AI. This is evident from the 31% spike in its remaining performance obligations (RPO) last quarter to $5.8 billion. That’s more than double the revenue it generated in the latest fiscal year.

Since RPO represents the value of a company's contracted backlog, its faster growth compared to the 21% increase in quarterly revenue suggests that Zscaler is winning new business faster than it can fulfill it.
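The backlog math above can be verified with a quick sketch (all figures come from the article; the script itself is illustrative):

```python
# Quick check of the backlog figures cited above (all numbers from the article).
rpo_billion = 5.8        # remaining performance obligations, latest quarter
revenue_billion = 2.7    # fiscal 2025 revenue
rpo_growth = 0.31        # year-over-year RPO growth, latest quarter
revenue_growth = 0.21    # year-over-year quarterly revenue growth

coverage = rpo_billion / revenue_billion
print(f"RPO covers {coverage:.2f}x annual revenue")                  # 2.15x
print(f"Backlog outgrowing revenue: {rpo_growth > revenue_growth}")  # True
```

The coverage ratio above 2x is what supports the article's "more than double" claim, and the growth gap is the signal that bookings are outpacing recognized revenue.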

That's why there is a good chance its growth rate picks up in the future, and why it may make sense to buy this stock while it trades at a relatively attractive valuation.

Zscaler’s growth could exceed Wall Street’s expectations

Though analysts expect Zscaler to deliver robust double-digit growth over the next three fiscal years, they forecast a slower pace than its fiscal 2025 performance.

ZS revenue estimates for the current fiscal year (data by YCharts).

But what's worth noting is that Zscaler's consensus revenue estimates have moved higher of late. That's not surprising given the improvement in the company's RPO. Moreover, the outsized long-term opportunity in AI-focused cybersecurity niches could help Zscaler deliver much stronger growth than analysts expect.

That's why it makes sense to buy Zscaler while it is trading at 16 times sales. Though that's not exactly cheap next to the U.S. technology sector's average sales multiple of 8.5, it is much lower than Nvidia's price-to-sales ratio of 25. What's more, Zscaler's growth a couple of years out is expected to exceed Nvidia's, as the latter's growth could taper off given its high revenue base.
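Those multiples can be compared directly (figures are from the article; the comparison script is illustrative):

```python
# Price-to-sales multiples cited in the article.
ps_multiples = {"Zscaler": 16.0, "US tech sector average": 8.5, "Nvidia": 25.0}

premium_vs_sector = ps_multiples["Zscaler"] / ps_multiples["US tech sector average"]
discount_vs_nvidia = 1 - ps_multiples["Zscaler"] / ps_multiples["Nvidia"]
print(f"Zscaler trades at {premium_vs_sector:.1f}x the sector average")  # 1.9x
print(f"...but at a {discount_vs_nvidia:.0%} discount to Nvidia")        # 36%
```

So the stock carries a premium to the broad sector but a sizable discount to Nvidia, which is the crux of the relative-value argument here.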

NVDA revenue estimates for the current fiscal year (data by YCharts).

That’s why investors looking for a reasonably valued AI stock that has the potential to deliver robust gains in the long run can consider going long Zscaler even after the healthy gains that it has clocked this year.

Harsh Chauhan has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Nvidia and Zscaler. The Motley Fool has a disclosure policy.





Concordia-led research program is Indigenizing artificial intelligence | Education



An initiative steered by Concordia researchers is challenging the conversation around the direction of artificial intelligence (AI). It charges that the current trajectory is inherently biased against non-Western modes of thinking about intelligence, especially those originating from Indigenous cultures. As a way of decolonizing the future of AI, the researchers have created the Abundant Intelligences research program: an international, multi-institutional and interdisciplinary program that seeks to rethink how we conceive of AI. The driving concept behind it is the incorporation of Indigenous knowledge systems to create an inclusive, robust concept of intelligence and intelligent action, and to embed that into existing and future technologies.

The full concept is described in a recent paper for the journal AI & Society.

 “Artificial intelligence has inherited conceptual and intellectual ideas from past formulations of intelligence that took on certain colonial pathways to establish itself, such as emphasizing a kind of industrial or production focus,” says Ceyda Yolgörmez, a postdoctoral fellow with Abundant Intelligences and one of the paper’s authors.

This scarcity mindset, they write, contributed to resource exploitation and extraction that extended a legacy of Indigenous erasure, one that still influences discussion around AI today, adds lead author Jason Edward Lewis, a professor in the Department of Design and Computation Arts and the University Research Chair in Computational Media and the Indigenous Future Imaginary. “The Abundant Intelligences research program is about deconstructing the scarcity mindset and making room for many kinds of intelligence and ways we might think about it.”

The researchers believe this alternative approach can create an AI that is oriented toward human thriving, that preserves and supports Indigenous languages, addresses pressing environmental and sustainability issues, re-imagines public health solutions and more.

Relying on local intelligence

The community-based research program is directed from Concordia in Montreal but much of the local work will be done by individual research clusters (called pods) across Canada, in the United States and in New Zealand.

The pods will be anchored to Indigenous-centred research and media labs at Western University in Ontario, the University of Lethbridge in Alberta, the University of Hawai'i-West O'ahu, Bard College in New York and Massey University in New Zealand.

They bring together Indigenous knowledge-holders, cultural practitioners, language keepers, educational institutions and community organizations with research scientists, engineers, artists and social scientists to develop new computational practices fitted to an Indigenous-centred perspective.

The researchers are also partnering with AI professionals and industry researchers, believing that the program will open new avenues of research and propose new research questions for mainstream AI research. “For example, how do you build a rigorous system out of a small amount of resource data like different Indigenous languages?” asks Yolgörmez.  “How do you make multi-agent systems that are robust, recognize and support non-human actors and integrate different sorts of activities within the body of a single system?”

Lewis asserts that their approach is both complementary and alternative to mainstream AI research, particularly regarding data sets like Indigenous languages that are much smaller than the ones currently being used by industry leaders. “There is a commitment to working with data from Indigenous communities in an ethical way, compared to simply scraping the internet,” he says. “This yields minuscule amounts of data compared to what the larger companies are working with, but it presents the potential to innovate different approaches when working with small languages. That can be useful to researchers who want to take a different approach than the mainstream.

“This is one of the strengths of the decolonial approach: it’s one way to get out of this tunnel vision belief that there is only one way of doing things.”

Hēmi Whaanga, professor at Massey University in New Zealand, also contributed to the paper.

Read the cited paper: “Abundant intelligences: placing AI within Indigenous knowledge frameworks.”

— By Patrick Lejtenyi

Concordia University








Eckerd College launches new minor in AI studies



It couldn’t have come at a better time. Students have become more and more reliant on AI for coursework, and national studies are sending up warning signals about the new and creative ways students are utilizing AI to complete assignments.

“AI is definitely a balancing act that I think so many of us in higher education are dealing with,” says Ramsey-Tobienne, who also oversees the College Academic Honor Council. “As professors, we have to decide how, if and when to use it, and we need to help our students develop into critical consumers of AI. Indeed, critical AI literacy is really the foundation of so much of what we’re doing in the minor.

“For better or worse, AI is not going anywhere,” Ramsey-Tobienne adds. “And I think we do ourselves a disservice if we’re not helping our students to understand how to navigate this new AI world.” 





AI drug companies are struggling—but don’t blame the AI



Moonshot hopes of artificial intelligence being used to expedite the development of drugs are coming back down to earth. 

More than $18 billion has flooded into more than 200 biotechnology companies touting AI to expedite development, with 75 drugs or vaccines entering clinical trials, according to Boston Consulting Group. Now, investor confidence—and funding—is starting to waver.

In 2021, venture capital investment in AI drug companies reached an apex, with more than 40 deals worth about $1.8 billion. This year, there have been fewer than 20 deals worth about half that peak sum, the Financial Times reported, citing data from PitchBook.

Some existing companies have struggled in the face of challenges. In May, biotech company Recursion tabled three of its prospective drugs in a cost-cutting effort following last year's merger with Exscientia, a similar biotech firm. Fortune previously reported that none of Recursion's AI-discovered compounds have reached the market as approved drugs. After a major restructuring in December 2024, biotech company BenevolentAI delisted from the Euronext Amsterdam stock exchange in March before merging with Osaka Holdings.

A Recursion spokesperson told Fortune the decision to shelve the drugs was “data-driven” and a planned outcome of its merger with Exscientia.

“Our industry’s 90% failure rate is not acceptable when patients are waiting, and we believe approaches like ours that integrate cutting-edge tools and technologies will be best positioned for long-term success,” the spokesperson said in a statement.

BenevolentAI did not respond to a request for comment.

The struggles of the industry coincide with a broader conversation around the failure of generative AI to deliver more quickly on its lofty promises of productivity and efficiency. An MIT report last month found 95% of generative AI pilots at companies failed to accelerate revenue. A U.S. Census Bureau survey this month found AI adoption in large U.S. companies has declined from its 14% peak earlier this year to 12% as of August.

But the AI used to help develop drugs differs greatly from the large language models behind most workplace initiatives and should therefore not be held to the same standards, according to Scott Schoenhaus, managing director and equity research analyst at KeyBanc Capital Markets Inc. Instead, the industry faces its own set of challenges.

“No matter how much data you have, human biology is still a mystery,” Schoenhaus told Fortune.

Macro and political factors drying up AI drug development funding

At the crux of the slowed funding and slower development results may not be the limitations of the technology itself, but rather a slew of broader factors, Schoenhaus said.

“Everyone acknowledges the funding environment has dried up,” he said. “The biotech market is heavily influenced by low interest rates. Lower interest rates equals more funding coming into biotechs, which is why we’re seeing funding for biotech at record lows over the last several years, because interest rates have remained elevated.”

It wasn't always this way. The rise of AI in drug development owes not only to growing access to semiconductor chips, but also to technology that has made mapping the entire human genome quick and now cheap. In 2001, it cost more than $100 million to map a human genome. Two decades later, that undertaking cost about $1,000.

Beyond having the pandemic to thank for next-to-nothing interest rates in 2021, COVID also expedited partnerships between AI drug development startups and Big Pharma companies. In early 2022, biotechnology startup AbCellera and Eli Lilly received emergency FDA authorization for a COVID-19 antibody treatment, a tangible example of how the tech could aid drug discovery.

But since then, there have been other industry hurdles, Schoenhaus said, including Big Pharma cutting back on research and development costs amid slowing demand, as well as uncertainty surrounding whether President Donald Trump would impose a tariff on pharmaceuticals as the U.S. and European Union tussled over a trade deal. Trump signed a memo this week threatening to ban direct-to-consumer advertising for prescription medications, theoretically driving down pharma revenues.

Limitations of AI

That’s not to say there haven’t been technological hiccups in the industry.

“There is scrutiny around the technologies themselves,” Schoenhaus said. “Everyone’s waiting for these readouts to prove that.”

The next 12 months of emerging data from AI drug development startups will be critical in determining how successful these companies stand to be, Schoenhaus said. Some of the results so far have been mixed. For example, Recursion released data from a mid-stage clinical trial of a drug to treat a neurovascular condition in September last year, finding the drug was safe but that there was little evidence of how effective it was. Company shares fell double digits following the announcement. 

These companies are also limited in how they're able to leverage AI. Drug development takes about 10 years and is intentionally bottlenecked to ensure the safety and efficacy of the drugs in question, according to David Siderovski, chair of the University of North Texas Health Science Center's Department of Pharmacology & Neuroscience, who has previously worked with AI drug development companies in the private sector. Biotechnology companies using AI to make these processes more efficient are usually tackling only one small part of this bottleneck, such as screening and identifying a drug-like molecule faster than previously possible.

“There are so many stages that have to be jumped over before you can actually declare the [European Medicines Agency], or the FDA, or Health Canada, whoever it is, will designate this as a safe, approved drug to be marketed to patients out in the world,” Siderovski told Fortune. “That one early bottleneck of auditioning compounds is not the be-all and end-all of satisfying shareholders by announcing, ‘We have approval for this compound as a drug.’”

Smaller companies in the sector have also made a concerted effort to partner less with Big Pharma companies, preferring instead to build their own pipelines, even if it means no longer having access to the franchise resources of industry giants. 

“They want to be able to pursue their technology and show the validation of their platform sooner than later,” Schoenhaus said. “They’re not going to wait around for large pharma to pursue a partnered molecule. They’d rather just do it themselves and say, ‘Hey, look, our technology platform works.’”

Schoenhaus sees this strategy as a way for companies looking to prove themselves by perfecting the use of AI to better understand the slippery, mysterious, and still greatly unknown frontier of human biology.

“It’s just a very much more complex application of AI,” he said, “hence why I think we are still seeing these companies focus on their own internal pipelines so that they can really, squarely focus their resources on trying to better their technology.”


