
AI Stocks: Best Artificial Intelligence Stocks To Watch Amid ChatGPT Hype



The landscape of top artificial intelligence stocks continues to evolve. Semiconductor firms led by Nvidia (NVDA) are again among the best performing AI stocks. But investors are also focused on new plays such as CoreWeave (CRWV), Oracle (ORCL) and Snowflake (SNOW).

To be sure, top AI stocks such as Microsoft (MSFT) and Nvidia face high expectations. For many companies — such as Google parent Alphabet (GOOGL), Amazon.com (AMZN) and Facebook parent Meta Platforms (META) — the rise of generative AI poses both risk and opportunity.








Google reports second quarter earnings on July 23. Wall Street analysts will focus on ad revenue growth at Google’s internet search business and monetization of “AI Overviews.”

Amid worries over AI competition, Google stock is down 2% in 2025. The possibility that OpenAI could launch an ad-supported version of ChatGPT looms over Google stock. But Google’s Gemini 2.5 AI model has been getting good reviews.

Many companies suddenly tout AI product roadmaps. In general, look for AI stocks that use artificial intelligence to improve products or gain a strategic edge.

Spending on cloud computing infrastructure continues to drive AI stocks. Nvidia, Palantir Technologies (PLTR) and CoreWeave remain on the IBD Leaderboard.

Q2 earnings for data analytics software maker Palantir are due Aug. 4. Palantir stock has gained 102% in 2025 after soaring 340% last year.

Q2 cloud computing growth at Amazon, Microsoft and Google could move AI stocks. Microsoft fiscal Q4 results are due July 30. Amazon reports Q2 earnings on July 31.

AI Stocks: Semiconductor Plays Rebound

A bellwether for AI stocks, chip maker Nvidia has advanced 28% in 2025, rebounding from a big sell-off tied to China-based DeepSeek’s AI models.

Nvidia’s AI chip sales to China slowed in 2025. Reversing course, the Trump administration will now allow Nvidia to sell its lower-performing H20 AI accelerators to China.

Nvidia aims to broaden its customer base beyond big tech through “sovereign AI,” or partnerships with governments. And it’s encroaching on the turf of some customers with its own push into cloud computing services.

Meanwhile, Advanced Micro Devices (AMD) has climbed 30% this year. AMD updated its AI strategy at its Advancing AI event in mid-June. OpenAI, Oracle, Meta and xAI are among AMD’s customers, with new speculation involving Amazon Web Services.

CoreWeave Acquires Core Scientific

Broadcom (AVGO) stock has advanced 22% in 2025. Broadcom currently is supplying custom AI chips for Google, Meta and TikTok owner ByteDance. It has four other prospective customers, with Apple, OpenAI and xAI among them, according to analysts.

Qualcomm (QCOM), ARM Holdings (ARM), and Marvell Technologies (MRVL) are other AI chip makers to watch.

Shares in Nvidia-backed CoreWeave, which in March launched an initial public offering, have surged 207% in 2025. But CoreWeave stock fell after the acquisition of Core Scientific (CORZ).

The company forecast higher-than-expected capital spending as it ramps up capacity for more customers. CoreWeave is a new AI cloud services provider that rents out Nvidia GPU-equipped servers. Here’s a look at how Nvidia, with a 7% stake in CoreWeave, is key to its future.

Among AI infrastructure stocks to watch, Oracle popped on its fiscal fourth quarter financial results and guidance. Oracle stock has surged 47% in 2025 amid strong growth in its AI cloud business.

Among data center infrastructure plays, Arista Networks (ANET) has edged up 1% this year. Q2 earnings for Arista are due Aug. 5. Here’s an interview with Arista Chief Executive Jayshree Ullal on its AI strategy.

Enterprise Data Key To AI Models

Having struggled to generate new revenue from “copilots,” software companies are now turning to autonomous, goal-driven AI agents. One big issue for software companies is how fast customers ramp up pilot programs to commercial deployment.

Palantir, Snowflake and privately held Databricks are all focused on helping companies clean up and organize proprietary data to build their own AI models. Here’s a look at Databricks’ strategy. Snowflake stock has advanced 40% this year.

Both Snowflake and Databricks continue to make acquisitions as they try to get an upper hand. Databricks has not discussed plans for an initial public offering.

On the other hand, not all software firms are among the top-performing AI stocks.

AI Stocks: Software Plays Mixed

TD Cowen analyst Derrick Wood said in a report that AI software “infrastructure” plays outperformed AI application software names in June.

“We are hearing of a return to AI-related concerns within the apps space,” he said. “Conversely, sentiment seems to be gradually improving in the infrastructure software space.”

Salesforce (CRM), ServiceNow (NOW), Adobe (ADBE), and HubSpot (HUBS) are all down in 2025. Datadog (DDOG) has rebounded, he added.

In an IBD interview, ServiceNow Chief Executive Bill McDermott explained how the enterprise software maker aims to be an AI winner. ServiceNow recently set new AI revenue targets for fiscal 2026.

Meanwhile, software maker Salesforce has agreed to buy Informatica (INFA) for $8 billion to boost its AI strategy. Salesforce stock has declined 18% this year.

Meta’s Big Bet On Scale AI

Meanwhile, Meta stock has gained 20% in 2025.

Meta continues to overhaul its AI strategy. Meta has invested $14.9 billion in Scale AI for a 49% stake in the startup. Scale AI provides data labeling services that help train and produce AI large language models.

Scale AI Chief Executive Alexandr Wang has joined a new AI research lab at Meta dedicated to pursuing “superintelligence.” And, Meta has hired away top scientists from OpenAI.

CEO Mark Zuckerberg has laid out five pillars of expected AI growth. They include improved advertising, engaging social media experiences, business messaging, the Meta AI app, and AI devices, including spatial computing.

The social networking giant in April launched the Meta AI app, built with its Llama 4 AI model, with chatbot and web-searching features. Previously, Llama had been embedded in Meta applications such as Instagram and WhatsApp.

Apple Stock Lags

Meta in April released its open-source Llama 4 AI model family. But Meta has delayed the rollout of its most powerful Llama model, Llama 4 Behemoth.

Apple stock has lagged in 2025, falling 15%. Apple hosted its flagship Worldwide Developers Conference on June 9, with no major surprises on its AI efforts.

But there’s speculation Apple could purchase Perplexity to catch up in generative AI.

With iPhone 17 models expected to debut in September 2025, Apple Intelligence features will likely not be much improved. Voice assistant Siri has yet to be upgraded with advanced AI technology.

OpenAI A Threat To Apple, Software Makers?

Meanwhile, OpenAI was valued at $300 billion as part of a new $40 billion fundraising round led by SoftBank. OpenAI builds large, multimodal foundation models.

While OpenAI’s ChatGPT has pressured Google stock, the AI model maker also could compete with Apple and enterprise software makers, said JPMorgan analyst Brenda Duverce in a report.

“Recent agent releases, investments, and hiring raise our belief that OpenAI can take share from formidable hardware, software, and ad players and carve out new market space,” Duverce said. “If the history of disruption teaches us anything, it is that the nature of future disruption and disruptors, emerging technology convergence, and business model shifts are hard to predict. So as the pursuit of artificial general and superintelligence unfolds, we suspect that entirely new market spaces, architectural innovations, and monetization models will come to fruition — and that OpenAI could be a driving force behind many of these.”

Investors should keep a close watch on the fierce competition in AI models. Generally, the AI models are battling in reasoning and multimodal capabilities as well as computing needs. Large language models provide the building blocks to develop applications.

Anthropic’s latest funding round values it at $61.5 billion.

The commoditization of AI models could spur application development. While “training” AI models has been the biggest driver of capital spending, the market will shift to “inferencing,” or running AI applications, in the long run.

AI Stocks To Watch By Industry Group

Company | Symbol | Comp Rating | Industry Name | AI Angle
Nvidia | NVDA | 99 | Elec-Semiconductor Fabless | Cloud computing giants buying more chips to train AI models or run AI workloads. Big lead over rival Advanced Micro Devices (AMD).
CrowdStrike | CRWD | 96 | Computer Software-Security | AI chatbots expected to automate more functions in security-operations centers and reduce the time to detect computer hacking.
Arista Networks | ANET | 98 | Computer-Networking | Sells computer network switches that speed up communications among racks of computer servers packed into “hyperscale” data centers. With AI growth, internet data centers will need more network bandwidth.
Microsoft | MSFT | 94 | Computer Software-Desktop | Biggest investor in generative AI startup OpenAI, whose ChatGPT users require Azure cloud services. Microsoft’s business AI assistant, Office 365 Copilot, is another potential revenue source.
Salesforce | CRM | 64 | Computer Software-Enterprise | Pivoted to autonomous, goal-driven AI agents from conversational copilots. Expected to use a mix of subscription and consumption-based pricing.
Amazon.com | AMZN | 94 | Retail-Internet | Alexa smart assistant upgraded. Cloud computing unit working with OpenAI rivals Anthropic, Hugging Face and Falcon 40B.

Follow Reinhardt Krause on X, formerly Twitter, @reinhardtk_tech for updates on artificial intelligence, quantum computing, cybersecurity and cloud computing.








Millions missing out on benefits and government support, analysis suggests



Dan Whitworth
Reporter, Radio 4 Money Box

Image: A self-portrait family shot of Andrea Paterson alongside her mum, Sally, and dad, Ian. Credit: Andrea Paterson

Andrea (left) persuaded her mum Sally to apply for attendance allowance on behalf of her dad Ian, which helped them cope with rising energy costs

New analysis suggests seven million households are missing out on £24bn of financial help and support because of unclaimed benefits and social tariffs.

The research from Policy in Practice, a social policy and data analytics company, says awareness, complexity and stigma are the main barriers stopping people claiming.

This analysis covers benefits across England, Scotland and Wales such as universal credit and pension credit, local authority help including free school meals and council tax support, as well as social tariffs from water, energy and broadband providers.

The government said it ran public campaigns to promote benefits and pointed to the free Help to Claim service.

Andrea Paterson in London persuaded her mum, Sally, to apply for attendance allowance on behalf of her dad, Ian, last December after hearing about the benefit on Radio 4’s Money Box.

Ian, who died in May, was in poor health at the time and he and Sally qualified for the higher rate of attendance allowance of £110 per week, which made a huge difference to their finances, according to Andrea.

“£110 per week is a lot of money and they weren’t getting the winter fuel payment anymore,” she said.

“So the first words that came out of Mum’s mouth were ‘well, that will make up for losing the winter fuel payment’, which [was] great.

“All pensioners worry about money, everyone in that generation worries about money. I think it eased that worry a little bit and it did allow them to keep the house [warmer].”

Unclaimed benefits increasing

In its latest report, Policy in Practice estimates that £24.1bn in benefits and social tariffs will go unclaimed in 2025-26.

It previously estimated that £23bn would go unclaimed in 2024-25, and £19bn the year before that, although this year’s calculations are more detailed than ever before.

“There are three main barriers to claiming – awareness, complexity and stigma,” said Deven Ghelani, founder and chief executive of Policy in Practice.

“With awareness people just don’t know these benefits exist or, if they do know about them, they just immediately assume they won’t qualify.

“Then you’ve got complexity, so being able to complete the form, being able to provide the evidence to be able to claim. Maybe you can do that once but actually you have to do it three, four, five, six, seven times depending on the support you’re potentially eligible for and people just run out of steam.

“Then you’ve got stigma. People are made to feel it’s not for them or they don’t trust the organisation administering that support.”

Although a lot of financial support is going unclaimed, the report does point to progress being made.

More older people are now claiming pension credit, with that number expected to continue to rise.

Some local authorities are reaching 95% of students eligible for free school meals because of better use of data.

Gateway benefits

Government figures show it is forecast to spend £316.1bn in 2025-26 on the social security system in England, Scotland and Wales, accounting for 10.6% of GDP and 23.5% of the total amount the government spends.

Responding to criticism that the benefits bill is already too large, Mr Ghelani said: “The key thing is you can’t rely on the system being too complicated to save money.

“On the one hand you’ve designed these systems to get support to people and then you’re making it hard to claim. That doesn’t make any sense.”

A government spokesperson said: “We’re making sure everyone gets the support they are entitled to by promoting benefits through public campaigns and funding the free Help to Claim service.

“We are also developing skills and opening up opportunities so more people can move into good, secure jobs, while ensuring the welfare system is there for those who need it.”

The advice if you think you might be eligible is to claim, especially for support like pension credit, known as a gateway benefit, which can lead to other financial help for those who are struggling.

Robin, from Greater Manchester, told the BBC that being able to claim pension credit was vital to his finances.

“Pension credit is essential to me to enable me to survive financially,” he said.

“[But] because I’m on pension credit I get council tax exemption, I also get free dental treatment, a contribution to my spectacles and I get the warm home discount scheme as well.”





Free Training for Small Businesses



Google’s latest initiative in Pennsylvania is set to transform how small businesses harness artificial intelligence, marking a significant push by the tech giant to democratize AI tools across the Keystone State. Announced at the AI Horizons Summit in Pittsburgh, the Pennsylvania AI Accelerator program aims to equip local entrepreneurs with essential skills and resources to integrate AI into their operations. This move comes amid a broader effort by Google to foster economic growth through technology, building on years of investment in the region.

Drawing from insights in a recent post on Google’s official blog, the accelerator offers free workshops, online courses, and hands-on training tailored for small businesses. Participants can learn to use AI for tasks like customer service automation and data analysis, potentially boosting efficiency and competitiveness. The program is part of Google’s Grow with Google initiative, which has already trained thousands in digital skills nationwide.

Strategic Expansion in Pennsylvania

Recent web searches reveal that Google’s commitment extends beyond training, with plans for substantial infrastructure investments. According to a report from GovTech, the company intends to pour about $25 billion into Pennsylvania’s data centers and AI facilities over the next two years. This investment underscores Pennsylvania’s growing role as a hub for tech innovation, supported by its proactive government policies on AI adoption.

Posts on X highlight the buzz around this launch, with users noting Google’s long-standing presence in the state, including digital skills programs that have generated billions in economic activity. For instance, sentiments from local business communities emphasize the accelerator’s potential to level the playing field for small enterprises against larger competitors.

Impact on Small Businesses

A deeper look into news from StartupHub.ai analyzes Google’s strategy, suggesting the accelerator could accelerate AI adoption among small and medium-sized businesses (SMBs), fostering innovation and job creation. The program includes access to tools like Gemini AI, enabling businesses to automate routine tasks and gain insights from data without needing extensive technical expertise.

Industry insiders point out that this initiative aligns with Pennsylvania’s high ranking in government AI readiness, as detailed in a City & State Pennsylvania analysis. The state’s forward-thinking approach, including pilots with technologies like ChatGPT in government operations, creates a fertile environment for such private-sector programs.

Collaborations and Broader Ecosystem

Partnerships are key to the accelerator’s success. News from Editor and Publisher reports on collaborations with entities like the Pennsylvania NewsMedia Association and Google News Initiative, extending AI benefits to media and other sectors. These alliances aim to sustain local industries through targeted accelerators.

Moreover, X posts from figures like Governor Josh Shapiro showcase the state’s enthusiasm, citing time savings from AI in public services that mirror potential gains for businesses. Google’s broader efforts, such as the AI for Education Accelerator involving Pennsylvania universities, indicate a holistic approach to building an AI-savvy workforce.

Future Prospects and Challenges

While the accelerator promises growth, challenges remain, including ensuring equitable access for rural businesses and addressing AI ethics. Insights from Google’s blog on AI training emphasize responsible implementation, with resources to mitigate biases and privacy concerns.

As Pennsylvania positions itself as an AI leader, Google’s program could serve as a model for other states. With ongoing updates from web sources and social media, the initiative’s evolution will likely reveal its true economic impact, potentially reshaping how small businesses thrive in an AI-driven era.





AI Company Rushed Safety Testing, Contributed to Teen’s Death, Parents Allege



This article is part two of a two-part case study on the dangers AI chatbots pose to young people. Part one covered the deceptive, pseudo-human design of ChatGPT.  This part will explore AI companies’ incentive to prioritize profits over safety.

Warning: The following contains descriptions of self-harm and suicide. Please guard your hearts and read with caution.

Sixteen-year-old Adam Raine took his own life in April after developing an unhealthy relationship with ChatGPT. His parents blame the chatbot’s parent company, OpenAI.

Matt and Maria Raine filed a sweeping wrongful death suit against OpenAI; its CEO, Sam Altman; and all employees and investors involved in the “design, development and deployment” of ChatGPT, version 4o, in California Superior Court on August 26.

The suit alleges OpenAI released ChatGPT-4o prematurely, without adequate safety testing or usage warnings. These intentional business decisions, the Raines say, cost Adam his life.

OpenAI started in 2015 as a nonprofit with a grand goal — to create prosocial artificial intelligence.

The company’s posture shifted in 2019 when it opened a for-profit arm to accept a multi-billion-dollar investment from Microsoft. Since then, the Raines allege, safety at OpenAI has repeatedly taken a back seat to winning the AI race.

Adam began using ChatGPT-4o in September 2024 for homework help but quickly began treating the bot as a friend and confidante. In December 2024, Adam began messaging the AI about his mental health problems and suicidal thoughts.  

Unhealthy attachments to ChatGPT-4o aren’t unusual, the lawsuit emphasizes. OpenAI intentionally designed the bot to maximize engagement by conforming to users’ preferences and personalities. The complaint puts it like this:

GPT-4o was engineered to deliver sycophantic responses that uncritically flattered and validated users, even in moments of crisis.

Real humans aren’t unconditionally validating and available. Relationships require hard work and necessarily involve disappointment and discomfort. But OpenAI programmed its sycophantic chatbot to mimic the warmth, empathy and cadence of a person.

The result is equally alluring and dangerous: a chatbot that imitates human relationships with none of the attendant “defects.” For Adam, the con was too powerful to unravel himself. He came to believe that a computer program knew and cared about him more than his own family.

Such powerful technology requires extensive testing. But, according to the suit, OpenAI spent just seven days testing ChatGPT-4o before rushing it out the door.

The company had initially scheduled the bot’s release for late 2024, until CEO Sam Altman learned Google, a competitor in the AI industry, was planning to unveil a new version of its chatbot, Gemini, on May 14.

Altman subsequently moved ChatGPT-4o’s release date up to May 13 — just one day before Gemini’s launch.

The truncated release timeline caused major safety concerns among rank-and-file employees.

Each version of ChatGPT is supposed to go through a testing phase called “red teaming,” in which safety personnel test the bot for defects and programming errors that can be manipulated in harmful ways.  During this testing, researchers force the chatbot to interact with and identify multiple kinds of objectionable content, including self-harm.

“When safety personnel demanded additional time for ‘red teaming’ [ahead of ChatGPT-4o’s release],” the suit claims, “Altman personally overruled them.”

Rumors about OpenAI cutting corners on safety abounded following the chatbot’s launch. Several key safety leaders left the company altogether. Jan Leike, the longtime co-leader of the team charged with making ChatGPT prosocial, publicly declared:

Safety culture and processes [at OpenAI] have taken a backseat to shiny products.

But the extent of ChatGPT-4o’s lack of safety testing became apparent when OpenAI started testing its successor, ChatGPT-5.

The later versions of ChatGPT are designed to draw users into conversations. To ensure the models’ safety, researchers must test the bot’s responses not just to isolated objectionable content, but also to objectionable content introduced in a long-form interaction.

ChatGPT-5 was tested this way. ChatGPT-4o was not. According to the suit, the testing process for the latter went something like this:

The model was asked one harmful question to test for disallowed content, and then the test moved on. Under that method, GPT-4o achieved perfect scores in several categories, including a 100 percent success rate for identifying “self-harm/instructions.”

The implications of this failure are monumental. It means OpenAI did not know how ChatGPT-4o’s programming would function in long conversations with users like Adam.

Every chatbot’s behavior is governed by a list of rules called a Model Spec. The complexity of these rules requires frequent testing to ensure the rules don’t conflict.

Per the complaint, one of ChatGPT-4o’s rules was to refuse requests relating to self-harm and, instead, respond with crisis resources. But another of the bot’s instructions was to “assume best intentions” of every user — a rule expressly prohibiting the AI from asking users to clarify their intentions.

“This created an impossible task,” the complaint explains, “to refuse suicide requests while being forbidden from determining if requests were actually about suicide.”

OpenAI’s lack of testing also means ChatGPT-4o’s safety stats were entirely misleading. When ChatGPT-4o was put through the same testing regimen as ChatGPT-5, it successfully identified self-harm content just 73.5% of the time.

The Raines say this constitutes intentional deception of consumers:

By evaluating ChatGPT-4o’s safety almost entirely through isolated, one-off prompts, OpenAI not only manufactured the illusion of perfect safety scores, but actively concealed the very dangers built into the product it designed and marketed to consumers.

On the day Adam Raine died, CEO Sam Altman touted ChatGPT’s safety record during a TED2025 event, explaining, “The way we learn how to build safe systems is this iterative process of deploying them to the world: getting feedback while the stakes are relatively low.”

But the stakes weren’t relatively low for Adam — and they aren’t for other families, either. Geremy Keeton, a licensed marriage and family therapist and Senior Director of Counseling at Focus on the Family, tells the Daily Citizen:

AI “conversations” can be a convincing counterfeit [for human interaction], but it’s a farce. It feels temporarily harmless and mimics a “sustaining” feeling, but will not provide life and wisdom in the end.

At best, AI convincingly mimics short term human care — or, in this tragic case, generates words that are complicit in utter death and evil.

Parents, please be careful about how and when you allow your child to interact with AI chatbots. They are designed to keep your child engaged, and there’s no telling how the bot will react to any given request.

Young people like Adam Raine are unequipped to see through the illusion of humanity.

Additional Articles and Resources

Counseling Consultation & Referrals

Parenting Tips for Guiding Your Kids in the Digital Age

Does Social Media AI Know Your Teens Better Than You Do?

AI “Bad Science” Videos Promote Conspiracy Theories for Kids – And More

ChatGPT ‘Coached’ 16-Yr-Old Boy to Commit Suicide, Parents Allege

AI Company Releases Sexually-Explicit Chatbot on App Rated Appropriate for 12 Year Olds

AI Chatbots Make It Easy for Users to Form Unhealthy Attachments

AI is the Thief of Potential — A College Student’s Perspective


