
AI Research

Which Is the Better Artificial Intelligence (AI) Stock to Buy Right Now: CoreWeave or Nvidia?



  • Both CoreWeave and Nvidia are growth superstars.

  • Both AI stocks look expensive at first glance, but their valuations need to be assessed in light of their growth prospects.

  • Which stock is the better pick to buy right now depends on your take on what’s going to happen with AI demand.


A hot IPO stock more than triples in its first few months on the market. A former high-flying stock rebounds from a steep sell-off to become the world’s first $4 trillion company. Those are the stories for CoreWeave (NASDAQ: CRWV) and Nvidia (NASDAQ: NVDA) this year.

Investors who saw the potential in these two artificial intelligence (AI) stocks have reaped tremendous rewards. But which is the better pick to buy now?


Anyone who has followed Nvidia for a while has become accustomed to sizzling growth. The graphics processing unit (GPU) maker’s momentum continues. In the first quarter of fiscal 2026, Nvidia reported revenue of $44.1 billion, up 69% year over year.

The main negative of Nvidia’s Q1 results was that its gross margin fell by nearly 18 percentage points year over year. As a result, earnings increased by 26%, a much slower pace than revenue. That’s still an impressive jump, though.
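As a rough sketch of the arithmetic, here is how a steep margin decline mutes earnings growth relative to revenue growth. The numbers below are made up purely for illustration and are not Nvidia’s actual financials; they are chosen only to mirror the shape of the result (revenue up 69%, earnings up a much smaller percentage).

```python
# Hypothetical figures, not Nvidia's reported numbers.
rev_prior, rev_now = 100.0, 169.0        # revenue grows 69% year over year
margin_prior, margin_now = 0.40, 0.30    # profit margin falls 10 percentage points

earn_prior = rev_prior * margin_prior    # prior-year earnings
earn_now = rev_now * margin_now          # current earnings on a thinner margin

rev_growth = rev_now / rev_prior - 1     # 0.69
earn_growth = earn_now / earn_prior - 1  # ~0.27, well below revenue growth

print(f"Revenue growth: {rev_growth:.0%}, earnings growth: {earn_growth:.0%}")
```

Even with a big top-line jump, a few points of margin compression is enough to cut the earnings growth rate roughly in half.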

There should be good news ahead for Nvidia. Its new Blackwell chips are selling hand over fist. The company expects increased profitability for these GPUs to drive gross margins higher. In addition, the U.S. government is allowing Nvidia to sell its H20 GPUs to China. The previous restriction against such sales led to a $4.5 billion charge that weighed on gross margins.

Meanwhile, CoreWeave reported revenue of $981.6 million in its first quarter as a publicly traded company, reflecting jaw-dropping year-over-year growth of 420%. Demand for CoreWeave’s AI cloud infrastructure is so great that the company is scrambling to keep up with it.

CoreWeave’s revenue backlog stood at $25.9 billion at the end of Q1. This figure included $11.2 billion from the company’s deal with ChatGPT creator OpenAI.

However, CoreWeave remains unprofitable. And the capital spending required to expand capacity to support soaring demand is causing the company’s bottom line to worsen. The picture looks better with adjusted earnings before interest, taxes, depreciation, and amortization (EBITDA), though. CoreWeave’s adjusted EBITDA jumped 477% year over year in Q1 to $606 million.

Typically, the biggest knock against sizzling growth stocks is that their valuations can sometimes be too hot to handle. At first glance, this might seem to be true with both CoreWeave and Nvidia.





The Smartest Artificial Intelligence (AI) Stocks to Buy With $1,000



AI investing is still one of the most promising trends on the market.

Buying artificial intelligence (AI) stocks after the run they’ve had over the past few years may seem silly. However, the reality is that many of these companies are still experiencing rapid growth and anticipate even greater gains on the horizon.

By investing now, you can get in on the second wave of AI investing success before it hits. While it won’t be nearly as lucrative as the first round that occurred from 2023 to 2024, it should still provide market-beating results, making these stocks great buys now.


AI Hardware: Taiwan Semiconductor and Nvidia

The demand for AI computing power appears to be insatiable. All of the AI hyperscalers are spending record amounts on building data centers in 2025, and they project to spend even more in 2026. This bodes well for companies supplying the products that fill those data centers with the computing power needed to process AI workloads.

Two of my favorites in this space are Nvidia (NASDAQ: NVDA) and Taiwan Semiconductor Manufacturing (NYSE: TSM). Nvidia makes graphics processing units (GPUs), which have been the primary computing muscle for AI workloads so far. Thousands of GPUs are connected in clusters because of their ability to process many calculations in parallel, creating a powerful computing machine designed for training and running AI workloads.

Inside these GPUs are chips produced by Taiwan Semiconductor, the world’s leading contract chip manufacturer. TSMC also supplies chips to Nvidia’s competitors, such as Advanced Micro Devices, so it’s playing both sides of the arms race. This is a great position to be in, and it has led to impressive growth for TSMC.

Both Taiwan Semiconductor and Nvidia are capitalizing on massive data center demand, and have the growth to back it up. In Q2 FY 2026 (ending July 27), Nvidia’s revenue increased by 56% year over year. Taiwan Semiconductor’s revenue rose by 44% in its corresponding Q2, showcasing the strength of both of these businesses.

With data center demand only expected to increase, both of these companies make for smart buys now.

AI Hyperscalers: Amazon, Alphabet, and Meta Platforms

The AI hyperscalers are companies that spend a significant amount of money on AI computing capacity for internal use and to provide tools for consumers. Three major players in this space are Amazon (NASDAQ: AMZN), Alphabet (NASDAQ: GOOGL), and Meta Platforms (NASDAQ: META).

Amazon makes this list due to the boost its cloud computing division, Amazon Web Services (AWS), is experiencing. Cloud computing is benefiting from the AI arms race because it allows clients to rent computing power from companies that have more resources than they do. AWS is the market leader in this space, and it is a huge part of Amazon’s business. Despite making up only 18% of Q2 revenue, it generated 53% of Amazon’s operating profits. AWS is a significant beneficiary of AI and is helping drive the stock higher.

Alphabet also has a cloud computing wing in Google Cloud, and it’s developing one of the highest-performing generative AI models: Gemini. Alphabet has integrated Gemini into nearly all of its products, including its most important, Google Search.

By integrating generative AI into traditional Google Search, Alphabet has navigated a shift that many investors feared would be the end for Google. That hasn’t been the case, and Alphabet’s impressive 12% growth in Google Search revenue in Q2 supports it. Despite its strong growth, Alphabet is by far the cheapest stock on this list, trading for less than 21 times forward earnings.

AMZN PE Ratio (Forward) data by YCharts
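Forward P/E, the valuation metric cited above, is simply the share price divided by the consensus earnings-per-share estimate for the coming year. A minimal sketch, using hypothetical numbers rather than any actual quote or analyst estimate:

```python
def forward_pe(share_price: float, estimated_forward_eps: float) -> float:
    """Forward P/E: price divided by estimated next-twelve-months EPS."""
    return share_price / estimated_forward_eps

# Hypothetical example: a $200 stock with $10 of estimated forward EPS
# trades at 20 times forward earnings.
print(forward_pe(200.0, 10.0))  # 20.0
```

Because the denominator is an estimate, forward P/E is only as reliable as the earnings forecast behind it, which is why growth prospects matter so much when comparing these stocks.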

With Alphabet’s strong competitive position and cheap valuation, it’s an excellent stock to buy now.

To round out this list, Meta Platforms is another smart pick. The parent company of social media platforms Facebook and Instagram generates nearly all of its revenue from ads. As a result, it’s investing significant resources into improving how AI designs and targets ads, and it’s already seeing results: AI has increased the amount of time users spend on Facebook and Instagram, and it’s driving more ad conversions.

We’re just scratching the surface of what AI can do for Meta’s business, and with Meta spending a significant amount of money on top AI talent, it should be able to convert that into some substantial business wins.

AI is a significant boost for the world’s largest companies, and I wouldn’t be surprised to see them outperform the broader market in the coming year as a result.

Keithen Drury has positions in Alphabet, Amazon, Meta Platforms, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool has positions in and recommends Alphabet, Amazon, Meta Platforms, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool has a disclosure policy.






Therapists say AI can help them help you, but some see privacy concerns



Therapists in Winnipeg have started using artificial intelligence-powered tools to listen in and transcribe sessions, which some say helps provide better patient care — but it’s raising concerns around privacy and ethical risks for some patients and experts.

Wildwood Wellness Therapy director Gavin Patterson has been using a tool called Clinical Notes AI at his Osborne Village practice for the past 11 months to summarize and auto-generate patient assessments and treatment plans with the click of a button.

Once he has consent from clients to use the software, he turns it on and it transcribes the sessions in real time. Patterson said it’s improving care for his 160 patients.

“Notes before were good, but now it’s so much better,” he said. “When I’m working with clients one-on-one, I’m able to free myself of writing down everything and be fully present in the conversation.”

Patterson sees up to 10 patients daily, making it difficult to remember every session in detail. But AI lets him capture the entire appointment.

“It gives me a lot of brain power back, and it helps me deliver a higher product of service,” he said.

The software also cuts down on the time it would normally take to write clinical notes, letting him provide care to more patients on average.

Tools like Clinical Notes AI can listen to, and transcribe, therapy sessions in real time. (Jeff Stapleton/CBC)

Once patient notes are logged, Patterson said the transcripts from the session are deleted.

As an extra layer of security, he makes sure to record only information the AI absolutely needs.

“I don’t record the client’s name,” he said. “There’s no identifying marks within the note,” which is intended to protect patients from possible security breaches.

But 19-year-old Rylee Gerrard, who has been going to therapy for years, says while she appreciates that an AI-powered tool can help therapists with their heavy workloads, she has concerns about privacy.

“I don’t trust that at all,” said Gerrard, noting she shares “very personal” details in therapy. Her therapist does not currently use AI, she says.

Rylee Gerrard, 19, says she has concerns about privacy when it comes to AI use in therapy. (Travis Golby/CBC )

“I just don’t know where they store their information, I don’t know who owns that information … where all of that is kind of going,” Gerrard said, adding she’s more comfortable knowing that her therapist is the only person with details from her sessions.

Unlike the tool Patterson uses, Jane, a platform used by some clinics in Winnipeg, can record audio and video of a patient’s session as well as produce transcriptions.

Recordings are stored until a clinician deletes them, and even then they remain in the system for another seven days before being permanently erased, according to the software’s website.

CBC News reached out to the company multiple times asking about its security protocols but didn’t receive a reply prior to publication. A section on security on the Jane website says it has a team whose “top priority is to protect your sensitive data.”

Caution, regulation needed: privacy expert 

Ann Cavoukian, the executive director of Global Privacy and Security by Design Centre — an organization that helps people protect their personal data — says there are privacy risks when AI is involved.

“AI can be accessed by so many people in an unauthorized manner,” said Cavoukian, a former privacy commissioner for the province of Ontario.

“This is the most sensitive data that exists, so you have to ensure that no unauthorized third parties can gain access to this information.”

Privacy expert Ann Cavoukian says the use of AI increases the risk of sensitive information ending up in the wrong hands. (Dave MacIntosh/CBC)

She says most, if not all, AI transcription technologies used in health care lack adequate security measures to protect against external access to data, leaving sensitive information vulnerable.

“You should have it in your back pocket, meaning in your own system — your personal area where you get personal emails … and you are in control,” she said.

In Manitoba, AI scribes used in health care or therapy settings have no provincial regulations, according to a statement from the province.

Cavoukian said she understands the workload strain therapists face, but thinks the use of AI in therapy should be met with caution and regulated.

“This is the ideal time, right now, to embed privacy protective measures into the AI from the outset,” she said.

She wants governments and health-care systems to proactively create regulations to protect sensitive information from getting into the wrong hands.

“That can cause enormous harm to individuals,” she said. “That’s what we have to stop.”

Recording sessions not a new technology

The concept of recording therapy sessions is not new to Peter Bieling, a clinical psychologist and a professor of psychiatry and behavioural neurosciences at McMaster University in Hamilton. Therapists have been doing that in other ways, with the consent of patients, for years, he said.

“It used to be old magnetic tape and then it was cassettes,” said Bieling, adding there was always the risk of recordings falling into the wrong hands.

He understands the apprehension around the use of AI in therapy, but encourages people to see it as what it is — a tool and an updated version of what already exists.

Clinical psychologist Peter Bieling agrees there are concerns around AI and security, but says its use could improve patient care. (CBC)

The use of scribing tools will not change therapy sessions, nor will it replace therapists, he said — artificial intelligence cannot diagnose a patient or send in documentation, he noted, so practitioners still have the final say.

“Electronic health records have been making recommendations and suggestions, as have the authors of guidelines and textbook writers, for many years,” said Bieling.

But like Cavoukian, he believes more regulations are needed to guide the use of AI. Failing to implement those may lead to problems in the future, he said.

“These agencies are way too late and way too slow,” said Bieling.

For now, Cavoukian advises patients to advocate for themselves.

“When they go in for therapy or any kind of medical treatment, they should ask right at the beginning, ‘I want to make sure my personal health information is going to be protected. Can you tell me how you do that?'”

Asking those types of questions may put pressure on systems to regulate the AI they use, she said.







YouTube secretly uses AI to enhance videos without creator consent



YouTube acknowledged on August 20, 2025, that it has been using artificial intelligence to modify videos without informing content creators or obtaining their consent. The revelation came after months of creator complaints about strange visual artifacts appearing in their content, prompting an official response from the platform’s liaison team.

According to YouTube’s head of editorial and creator liaison Rene Ritchie, the company has been “running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise, and improve clarity in videos during processing.” The modifications have been affecting video content since at least June 2025, based on social media complaints dating back to that period.

The AI enhancement system processes videos automatically during upload, making subtle changes that creators described as unwelcome. Rick Beato, a music YouTube creator with over five million subscribers, noticed his appearance looked unusual in recent videos. “I was like ‘man, my hair looks strange’. And the closer I looked it almost seemed like I was wearing makeup,” Beato said in a BBC interview published August 24, 2025.

Rhett Shull, another music YouTuber who investigated the issue, posted a video on the subject that accumulated over 500,000 views. “If I wanted this terrible over-sharpening I would have done it myself. But the bigger thing is it looks AI-generated,” Shull explained. “I think that deeply misrepresents me and what I do and my voice on the internet.”

The technical implementation uses machine learning algorithms to modify various visual elements during video processing. According to creator reports, the system sharpens certain areas while smoothing others, defines wrinkles in clothing more prominently, and can cause distortions in body parts like ears. These modifications occur without any notification or option for creators to opt out of the enhancement process.

YouTube’s response emphasized the distinction between different AI technologies. Ritchie stated the platform uses “traditional machine learning” rather than generative AI, drawing a line between enhancement algorithms and content creation systems. However, according to Samuel Woolley, the Dietrich chair of disinformation studies at the University of Pittsburgh, this represents a “misdirection” since machine learning constitutes a subfield of artificial intelligence.

The timing of these revelations coincides with broader industry discussions about AI transparency in content modification. Earlier in 2025, YouTube implemented mandatory disclosure requirements for AI-generated content, effective May 25, requiring creators to label synthetically altered material. The company also clarified its monetization policies regarding AI content in July 2025.

Content creators have expressed particular concern about the lack of consent in the modification process. Unlike smartphone cameras that offer users control over AI enhancement features, YouTube’s system operates without creator knowledge or approval. The modifications potentially impact how audiences perceive creator content, raising questions about artistic integrity and authenticity.

The broader implications extend beyond individual creator concerns. According to Woolley, the practice represents how “AI is increasingly a medium that defines our lives and realities.” The professor argues that when companies modify content without disclosure, it risks “blurring the lines of what people can trust online.”

Some YouTubers have reported their content being mistaken for AI-generated material due to the visual artifacts introduced by the enhancement system. This creates additional challenges for creators who must now address audience questions about whether their content is authentic, despite the modifications being applied without their knowledge.

The technical implementation appears limited to YouTube Shorts, the platform’s short-form video feature designed to compete with TikTok. YouTube has not disclosed how many creators or videos have been affected by the experimental enhancement system, describing it only as affecting “select” content.

YouTube’s approach differs significantly from other platform modifications. Traditional video compression and quality adjustments maintain the original content’s essential character, while the AI enhancement system actively modifies visual elements to meet algorithmic preferences for image quality.


The controversy highlights growing tensions between platform optimization and creator autonomy. While YouTube positions the enhancements as quality improvements similar to smartphone camera processing, creators argue the modifications fundamentally alter their artistic intent without permission.

Industry experts note this development occurs amid increasing scrutiny of AI implementation in content platforms. Recent months have seen similar controversies involving Samsung’s AI-enhanced moon photography and Netflix’s apparent AI remastering of 1980s television content, raising broader questions about transparency in automated content modification.

The marketing community faces significant implications as nearly 90% of advertisers plan to use AI for video advertisement creation by 2026, according to an Interactive Advertising Bureau report from July 2025. YouTube’s undisclosed modifications could affect how branded content appears to audiences, potentially impacting campaign effectiveness and brand representation.

Content authenticity has become increasingly important as artificial intelligence capabilities advance. The platform’s decision to implement enhancements without disclosure contradicts growing industry emphasis on transparency, particularly given YouTube’s own requirements for creators to label AI-generated content.

YouTube has not responded to questions about whether creators will receive options to control AI modifications of their content. The platform stated it will “continue to take creator and viewer feedback into consideration as we iterate and improve on these features,” but provided no timeline for potential policy changes.

The revelation underscores the evolving relationship between AI technology and creative content. As machine learning systems become more sophisticated, platforms face decisions about implementation transparency and creator control over automated modifications.

Some creators remain supportive of YouTube’s experimentation approach despite the controversy. Beato, while initially concerned about the modifications, acknowledged that “YouTube is constantly working on new tools and experimenting with stuff. They’re a best-in-class company, I’ve got nothing but good things to say. YouTube changed my life.”

The debate reflects broader questions about AI’s role in mediating reality. According to Jill Walker Rettberg, a professor at the Center for Digital Narrative at the University of Bergen, the situation raises fundamental questions about authenticity. “With algorithms and AI, what does this do to our relationship with reality?” Rettberg noted.

Current industry developments suggest this controversy may influence future platform policies. Google’s broader AI integration across advertising platforms and increasing automation in content delivery systems indicate that similar transparency questions will likely emerge across multiple platforms.

The YouTube situation demonstrates the challenges platforms face in balancing technological innovation with creator expectations and user trust. As AI capabilities continue advancing, the industry must address fundamental questions about consent, transparency, and the preservation of creative authenticity in an increasingly automated content ecosystem.


PPC Land explains

Artificial Intelligence (AI): Advanced computational systems that simulate human intelligence processes, including learning, reasoning, and pattern recognition. In YouTube’s case, AI refers to machine learning algorithms that automatically analyze and modify video content during processing. The technology operates without human intervention, making decisions about visual enhancements based on predetermined parameters and training data from millions of video samples.

Machine Learning: A subset of artificial intelligence that enables systems to automatically learn and improve from experience without explicit programming. YouTube’s enhancement system uses machine learning to identify visual elements requiring modification, such as blurry areas or noise patterns. The algorithms continuously refine their processing capabilities based on video analysis, though creators cannot influence these automated decisions.

YouTube Shorts: The platform’s short-form video feature launched to compete with TikTok, supporting videos up to 60 seconds in length. Shorts represents a significant portion of YouTube’s content strategy, with the AI enhancement experiment currently limited to this format. The feature processes millions of uploads daily, making automated quality improvements technically necessary but raising questions about creator consent and content authenticity.

Content Enhancement: The automated process of improving video quality through algorithmic modifications including denoising, sharpening, and clarity adjustments. YouTube’s enhancement system operates during video upload processing, applying changes before content reaches viewers. These modifications can alter the original artistic intent, creating tension between technical quality improvements and creative authenticity that creators specifically intended.

Creator Consent: The principle that content creators should have control over modifications made to their original work. YouTube’s AI enhancement system operates without explicit creator permission, raising ethical questions about platform authority over user-generated content. This lack of consent contrasts with smartphone camera features that allow users to enable or disable AI enhancements before recording.

Visual Artifacts: Unintended visual distortions or anomalies introduced by AI processing systems that can make content appear artificially generated. Creators reported various artifacts including unusual sharpening effects, skin texture modifications, and facial feature distortions. These artifacts can undermine creator credibility when audiences mistake enhanced content for AI-generated material, affecting creator-audience trust relationships.

Content Authenticity: The principle that digital content should accurately represent the creator’s original intent without undisclosed modifications. YouTube’s enhancement system challenges authenticity by altering visual elements without creator knowledge or audience notification. Authenticity concerns become particularly important as AI-generated content becomes more prevalent, requiring clear distinctions between original and modified material.

Platform Transparency: The obligation for technology companies to clearly communicate their content modification practices to users. YouTube’s undisclosed AI enhancement experiment violated transparency principles by implementing changes without creator notification. Transparency becomes crucial as platforms increasingly use AI systems that can fundamentally alter user content, affecting both creator rights and audience expectations.

Monetization Policies: Rules governing how creators can earn revenue from their content on YouTube, including guidelines about AI usage and content authenticity. Recent policy updates require disclosure of synthetic content while maintaining eligibility for creators using AI tools appropriately. These policies attempt to balance innovation with authenticity requirements, though enforcement complexity increases as AI capabilities advance.

Digital Marketing: The practice of promoting products and services through online platforms, increasingly incorporating AI-generated content and automated optimization. YouTube’s AI modifications affect marketing campaigns by potentially altering how branded content appears to audiences. Marketing professionals must now consider how platform-level AI enhancements might impact campaign messaging and brand representation, adding complexity to content strategy development.

Five Ws Summary

Who: YouTube, the Google-owned video platform, along with content creators Rick Beato and Rhett Shull who first reported the modifications. YouTube’s head of editorial Rene Ritchie provided the official response.

What: YouTube has been using artificial intelligence to automatically enhance videos without creator consent, applying modifications that unblur, denoise, and improve clarity. The changes create visual artifacts that some creators describe as making their content appear AI-generated.

When: The modifications have been occurring since at least June 2025, with YouTube confirming the practice on August 20, 2025, after months of creator complaints.

Where: The AI enhancements affect YouTube Shorts, the platform’s short-form video feature, with modifications applied during the video processing stage after upload.

Why: According to YouTube, the enhancements aim to improve video quality and viewer experience, similar to smartphone camera processing. However, creators argue the modifications occur without consent and potentially damage their artistic integrity and audience trust.




