
AI Research

2 Popular Artificial Intelligence (AI) Stocks to Sell Before They Fall 47% and 62%, According to Wall Street Analysts

AI stocks Palantir and CoreWeave have rocketed higher this year, but certain analysts expect shareholders to sustain major losses in the coming months.

Palantir Technologies (PLTR -1.10%) and CoreWeave (CRWV -1.40%) shares have increased 109% and 175%, respectively, this year. But some Wall Street analysts think the popular artificial intelligence stocks are likely to crash in the next 12 months, as detailed below:

  • Brent Thill at Jefferies recently set a price target of $60 per share on Palantir. That implies 62% downside from its current share price of $158.
  • Keith Weiss at Morgan Stanley recently set a price target of $58 per share on CoreWeave. That implies 47% downside from its current share price of $110.
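
As a quick check on those numbers, an implied downside figure is simply the gap between the current share price and the target price, divided by the current price. A minimal sketch in Python, using only the prices quoted above:

    # Sanity check on the implied downside figures quoted above.
    def implied_downside(current_price: float, target_price: float) -> float:
        """Percentage decline implied by an analyst price target."""
        return (current_price - target_price) / current_price * 100

    print(f"Palantir:  {implied_downside(158, 60):.0f}% implied downside")   # ~62%
    print(f"CoreWeave: {implied_downside(110, 58):.0f}% implied downside")   # ~47%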

Here’s what investors should know about Palantir and CoreWeave.


Palantir Technologies: 62% implied downside

Palantir develops analytics software for commercial and government customers. Its core platforms (Foundry and Gotham) help organizations manage and make sense of complex data. The company also develops an artificial intelligence platform called AIP, which lets clients apply large language models to analytics workflows and build generative AI applications.

Importantly, Palantir has distinguished itself with an ontology-based software architecture. An ontology is a framework that links digital information to real-world assets to facilitate better decision-making. The software also captures operational outcomes and feeds that information back to the ontology, creating a feedback loop that produces deeper insights over time.

Mark Giarelli at Morningstar explains, “The core ontology function and value proposition is that Palantir not only organizes and displays data, but it also creates prioritized, ranked data that can be quickly understood and interacted with, ultimately automating real-world efficiency gains.” He expects Palantir’s addressable market to reach $1.4 trillion by 2033.

Jefferies analyst Brent Thill acknowledges Palantir is successfully executing on a massive opportunity. “I think the company is incredibly well run. It’s a fundamentally sound story. But the valuation doesn’t make any sense,” he told Barron’s when the stock traded under $100 per share in May. The stock now trades at $158 per share, so it stands to reason that Thill thinks the valuation is even more outrageous today.

Indeed, Palantir currently trades at 126 times sales, making it the most expensive stock in the S&P 500 by a long shot. The next most richly valued company is Texas Pacific Land at 31 times sales. That means Palantir’s share price could fall 75% and it would still be the most expensive stock in the S&P 500.
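
The arithmetic behind that last claim: a price-to-sales multiple scales linearly with the share price, so the question is how far the price can fall before Palantir’s multiple drops below the runner-up’s. A rough Python check, using only the multiples quoted above:

    # How far could Palantir's share price fall before it stops being the most
    # expensive stock in the S&P 500? Uses the multiples quoted in this article.
    palantir_ps = 126   # Palantir price-to-sales
    runner_up_ps = 31   # Texas Pacific Land price-to-sales

    max_drop = 1 - runner_up_ps / palantir_ps
    print(f"Maximum drop before losing the top spot: ~{max_drop:.0%}")   # ~75%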

The market is enamored with Palantir for good reason. Some analysts even think its data analytics tools could become as foundational as Salesforce’s CRM software. But the current valuation is a clear source of downside risk. I think the stock could drop 60%+ if future growth fails to meet expectations, so shareholders should keep their positions very small.

CoreWeave: 47% implied downside

CoreWeave provides cloud infrastructure and software services. Its platform (called a GPU cloud) is purpose-built for artificial intelligence and other demanding workloads. Research company SemiAnalysis recently ranked CoreWeave as the best GPU cloud on the market, awarding it higher scores than peers like Amazon, Microsoft, and Alphabet’s Google.

CoreWeave reported impressive first-quarter financial results. Revenue increased 420% to $981 million and adjusted operating income (which excludes stock-based compensation and interest payments) increased 550% to $162 million. However, the company reported a non-GAAP net loss of $150 million (up from $24 million last year) as interest payments on its $8.7 billion in debt cut deeply into profitability.
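
Those growth rates also imply how small the year-ago base was. A rough back-calculation from the figures quoted above (derived estimates, not reported numbers):

    # Back out the approximate year-ago base implied by the stated growth rates.
    # These are derived estimates from the article's figures, not reported data.
    q1_revenue = 981          # $ millions, after +420% growth
    q1_adj_op_income = 162    # $ millions, after +550% growth

    prior_revenue = q1_revenue / (1 + 4.20)
    prior_adj_op_income = q1_adj_op_income / (1 + 5.50)
    print(f"Implied year-ago revenue: ~${prior_revenue:.0f} million")                        # ~$189M
    print(f"Implied year-ago adjusted operating income: ~${prior_adj_op_income:.0f} million")  # ~$25M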

Morgan Stanley analyst Keith Weiss said CoreWeave’s strong financial results and ability to win major clients like OpenAI validate its leadership position in what could be a $360 billion market by 2028. However, he also expressed concern about the recent acceleration in its AI infrastructure buildout, which will increase cash burn through higher interest payments and capital expenditures in the near term.

CoreWeave’s initial public offering (IPO) took place in March, so the company has only been public for four months. Limited historical data, coupled with heavy AI infrastructure spending, makes it difficult to forecast how profitable CoreWeave will be. That uncertainty makes the stock very hard to value today, which explains the wide range of Wall Street price targets: $32 per share at the low end to $185 per share at the high end.

The stock currently trades at 21 times sales. That looks expensive when only six stocks in the S&P 500 have higher valuations. But CoreWeave’s sales are expected to increase at 129% annually through 2026, which makes the multiple more tolerable. I doubt shares will drop 47%, but it would be prudent to keep positions in this stock very small until the company is closer to profitability.
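
To see why rapid growth makes the multiple more tolerable: if sales really do compound at roughly 129% a year and the share price stays flat, the price-to-sales ratio compresses quickly. A hedged sketch assuming the 21x multiple and 129% growth rate quoted above hold (actual results will differ):

    # Illustrative only: forward price-to-sales if sales grow 129% annually and
    # the share price does not move. Assumes the figures quoted above.
    ps_multiple = 21.0
    annual_growth = 1.29

    for year in (1, 2):
        forward_ps = ps_multiple / (1 + annual_growth) ** year
        print(f"Year {year}: ~{forward_ps:.1f}x forward price-to-sales")   # ~9.2x, then ~4.0x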

Trevor Jennewine has positions in Palantir Technologies. The Motley Fool has positions in and recommends Alphabet, Jefferies Financial Group, Microsoft, Palantir Technologies, and Salesforce. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.




AI Research

(Policy Address 2025) HK earmarks HK$3B for AI research and talent recruitment – The Standard (HK)


AI Research

[2506.08171] Worst-Case Symbolic Constraints Analysis and Generalisation with Large Language Models


By Daniel Koh and 4 other authors

Abstract: Large language models (LLMs) have demonstrated strong performance on coding tasks such as generation, completion and repair, but their ability to handle complex symbolic reasoning over code still remains underexplored. We introduce the task of worst-case symbolic constraints analysis, which requires inferring the symbolic constraints that characterise worst-case program executions; these constraints can be solved to obtain inputs that expose performance bottlenecks or denial-of-service vulnerabilities in software systems. We show that even state-of-the-art LLMs (e.g., GPT-5) struggle when applied directly on this task. To address this challenge, we propose WARP, an innovative neurosymbolic approach that computes worst-case constraints on smaller concrete input sizes using existing program analysis tools, and then leverages LLMs to generalise these constraints to larger input sizes. Concretely, WARP comprises: (1) an incremental strategy for LLM-based worst-case reasoning, (2) a solver-aligned neurosymbolic framework that integrates reinforcement learning with SMT (Satisfiability Modulo Theories) solving, and (3) a curated dataset of symbolic constraints. Experimental results show that WARP consistently improves performance on worst-case constraint reasoning. Leveraging the curated constraint dataset, we use reinforcement learning to fine-tune a model, WARP-1.0-3B, which significantly outperforms size-matched and even larger baselines. These results demonstrate that incremental constraint reasoning enhances LLMs’ ability to handle symbolic reasoning and highlight the potential for deeper integration between neural learning and formal methods in rigorous program analysis.
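
The paper’s own pipeline is not reproduced here, but its final step, handing worst-case symbolic constraints to an SMT solver to recover a concrete adversarial input, can be illustrated with a minimal Z3 sketch. The constraint used below (a strictly descending array, the textbook worst case for insertion sort) is an assumed toy example, not one taken from the paper:

    # Minimal illustration of solving a worst-case symbolic constraint with an
    # SMT solver (Z3's Python bindings, package "z3-solver"). The constraint
    # "input is strictly descending" is a toy worst case for insertion sort.
    from z3 import Ints, Solver, And, sat

    n = 6
    xs = Ints(" ".join(f"x{i}" for i in range(n)))   # symbolic input elements

    solver = Solver()
    solver.add(And([xs[i] > xs[i + 1] for i in range(n - 1)]))   # strictly descending
    solver.add(And([xs[i] >= 0 for i in range(n)]))              # keep values readable
    solver.add(And([xs[i] <= 100 for i in range(n)]))

    if solver.check() == sat:
        model = solver.model()
        print("Worst-case input:", [model[x].as_long() for x in xs])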

Submission history

From: Daniel Koh
[v1] Mon, 9 Jun 2025 19:33:30 UTC (1,462 KB)
[v2] Tue, 16 Sep 2025 10:35:33 UTC (1,871 KB)




AI Research

SIFThinker: Spatially-Aware Image Focus for Visual Reasoning


By Zhangquan Chen and 6 other authors

Abstract: Current multimodal large language models (MLLMs) still face significant challenges in complex visual tasks (e.g., spatial understanding, fine-grained perception). Prior methods have tried to incorporate visual reasoning, however, they fail to leverage attention correction with spatial cues to iteratively refine their focus on prompt-relevant regions. In this paper, we introduce SIFThinker, a spatially-aware “think-with-images” framework that mimics human visual perception. Specifically, SIFThinker enables attention correcting and image region focusing by interleaving depth-enhanced bounding boxes and natural language. Our contributions are twofold: First, we introduce a reverse-expansion-forward-inference strategy that facilitates the generation of interleaved image-text chains of thought for process-level supervision, which in turn leads to the construction of the SIF-50K dataset. Besides, we propose GRPO-SIF, a reinforced training paradigm that integrates depth-informed visual grounding into a unified reasoning pipeline, teaching the model to dynamically correct and focus on prompt-relevant regions. Extensive experiments demonstrate that SIFThinker outperforms state-of-the-art methods in spatial understanding and fine-grained visual perception, while maintaining strong general capabilities, highlighting the effectiveness of our method. Code: this https URL.
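
As a concrete illustration of what an interleaved image-text chain with depth-enhanced bounding boxes might look like as data, here is a minimal sketch; the field names and example values are assumptions made for illustration, not the schema used in the paper:

    # Minimal sketch of an interleaved image-text reasoning chain with
    # depth-enhanced bounding boxes. Field names and values are illustrative
    # assumptions, not the representation defined in the paper.
    from dataclasses import dataclass
    from typing import List, Union

    @dataclass
    class DepthBox:
        x1: float   # normalized box corners
        y1: float
        x2: float
        y2: float
        depth: float   # relative depth estimate (smaller = closer to camera)

    ChainStep = Union[str, DepthBox]   # free-form text or a region to focus on

    chain: List[ChainStep] = [
        "The prompt asks which mug is closer to the camera.",
        DepthBox(0.10, 0.40, 0.35, 0.80, depth=0.4),   # left mug, nearer
        DepthBox(0.60, 0.35, 0.85, 0.75, depth=0.9),   # right mug, farther
        "The left region has the smaller depth estimate, so the left mug is closer.",
    ]

    for step in chain:
        print(step)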

Submission history

From: Zhangquan Chen
[v1] Fri, 8 Aug 2025 12:26:20 UTC (5,223 KB)
[v2] Thu, 14 Aug 2025 10:34:22 UTC (5,223 KB)
[v3] Sun, 24 Aug 2025 13:04:46 UTC (5,223 KB)
[v4] Tue, 16 Sep 2025 09:40:13 UTC (5,223 KB)


