
AI Insights

This Artificial Intelligence (AI) Stock Looks Set for a Second-Half Comeback



SoundHound AI, one of the most popular artificial intelligence (AI) stocks of last year, has fallen from grace in 2025 … so far.

What if I told you that there is an artificial intelligence (AI) stock that outperformed both Nvidia and Palantir Technologies last year?

That’s right: In 2024, shares of SoundHound AI (SOUN) rose by 836%, handily outperforming some of the biggest darlings of the AI narrative.

This year has been a different story, however. Through the first half of the year, SoundHound AI stock plummeted by 46%.

Let’s break down what piqued investors’ interest in SoundHound AI to begin with. From there, I’ll detail some of the factors that inspired investors to run for the hills during the first six months of the year.

While SoundHound AI may look like a relic of the past, smart investors understand the company is positioned to capitalize on two emerging themes within the broader AI realm.

Is now a good time to invest in the SoundHound AI sell-off? Read on to find out.

What fueled SoundHound’s rise to begin with?

SoundHound AI burst onto the scene after securities filings revealed that Nvidia held a small equity position in the company. The mere affiliation with Nvidia — AI’s biggest darling so far — was enough to get investors creating all sorts of narratives about how the two companies could be working together.

SoundHound AI operates in a unique and rather under-the-radar pocket of the AI realm. Through the power of natural language processing (NLP), SoundHound has built voice-powered AI assistants, not unlike Amazon’s Alexa or Apple’s Siri, that are used across many different industries.

Unfortunately for those who invested in SoundHound AI at its peak late last year, the stock suffered an intense sell-off throughout the first half of the year. Somewhat ironically, a key factor behind the nosedive was Nvidia. New regulatory filings revealed that the chip king had exited its position in SoundHound AI, likely inspiring widespread skepticism and a bearish sentiment around the AI voice developer.


What could be in store for SoundHound AI during the second half of the year?

One of the biggest use cases for SoundHound AI’s voice assistants is in the automotive industry. The company has partnered with a number of leading auto manufacturers, including Stellantis, Hyundai, Honda, and Lucid.

The value add here is that voice assistants can be integrated into infotainment and navigation systems inside vehicles, offering drivers a new level of convenience. According to a study commissioned by SoundHound AI, these systems represent a $35 billion opportunity for the automotive industry.

I think autonomous driving could soon become a more mainstream AI use case. SoundHound AI may have a lucrative opportunity to expand its existing footprint in the automotive industry by becoming a key partner in developing smart operating systems for vehicles.

Alphabet’s self-driving taxi unit, Waymo, is currently available in five major metropolitan areas and completes more than 250,000 paid rides per week. Moreover, Waymo’s partnership with Uber helps deepen the strategic value autonomous driving presents for major industries such as ride-hailing and delivery. Lastly, Tesla finally joined the party following the launch of its Robotaxi service in Austin, Texas, last month.

As the autonomous vehicle landscape transitions from a period of research and development to the early phases of monetization and commercial scale, SoundHound AI looks uniquely poised to take advantage of the opportunity.

Is SoundHound AI stock a buy right now?

Although SoundHound AI’s share price has plummeted, the company’s valuation is still stretched. Per the chart below, SoundHound AI boasts a price-to-sales (P/S) multiple of 42. For context, that’s about where leading internet stocks peaked during the dot-com bubble of the late 1990s.

SOUN P/S ratio chart; data by YCharts.

I bring these dynamics up to stress that even though SoundHound AI stock might look “cheap,” the underlying valuation suggests that shares are still hovering around bubble levels — and that’s even after a near-50% decline in the stock price.
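As a side note on the arithmetic behind that multiple, price-to-sales is simply market capitalization divided by trailing-twelve-month revenue. Here's a minimal sketch; the dollar figures below are hypothetical placeholders, not SoundHound AI's actual financials:

```python
# Price-to-sales (P/S) multiple: market capitalization divided by
# trailing-twelve-month (TTM) revenue.
def price_to_sales(market_cap: float, ttm_revenue: float) -> float:
    """Return the P/S multiple for a given market cap and TTM revenue."""
    return market_cap / ttm_revenue

# Hypothetical placeholder figures: a $4.2B market cap on $100M of
# trailing revenue works out to a P/S multiple of 42.
print(price_to_sales(4_200_000_000, 100_000_000))  # 42.0
```

The takeaway from the formula is that a falling share price alone doesn't make a stock cheap: if revenue stays small, the multiple can remain elevated even after a steep decline.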

While I remain curious about SoundHound’s ability to benefit from the rise of autonomous driving and the integration of AI-powered voice software, I think the stock fits squarely into the speculative category.

Although shares could rebound during the second half of the year, any rally would likely be driven more by narrative-driven buying than by concrete opportunities. With that said, I see SoundHound AI as more of a trade than a long-term “buy-and-hold” investment opportunity at this time.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool’s board of directors. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool’s board of directors. Adam Spatacco has positions in Alphabet, Amazon, Apple, Nvidia, Palantir Technologies, and Tesla. The Motley Fool has positions in and recommends Alphabet, Amazon, Apple, Nvidia, Palantir Technologies, Tesla, and Uber Technologies. The Motley Fool recommends Stellantis. The Motley Fool has a disclosure policy.





Hamm Institute hosts American Energy + AI Initiative event, spotlighting urgent needs to power AI



Friday, September 12, 2025

Media Contact:
Dara McBee | Hamm Institute for American Energy | 580-350-7248 | dara.mcbee@hamminstitute.org

Leaders gathered at the Hamm Institute for American Energy at Oklahoma State University on Thursday to focus on one urgent question: how will
the United States power the rise of artificial intelligence?

Discussion of the roundtable report from the American Energy + AI Initiative
underscored emerging top-tier priorities and highlighted the speed and scale required
to align energy systems with rapid advances in data and technology.

Keynote speaker Mark P. Mills, Hamm Institute Distinguished Scholar, described how
data centers and AI applications will drive a surge in demand unlike anything before.
Mills is leading the Hamm Institute’s core research for the American Energy + AI Initiative,
which is shaping the policy and investment agenda to ensure that America can meet
this challenge.

OSU President Jim Hess also gave remarks, emphasizing the university’s leadership
in advancing applied research and preparing the next generation of energy leaders.

A featured panel brought together a mix of perspectives from across energy and technology.
Panelists included Harold Hamm, founder of the Hamm Institute and Chairman Emeritus
of Continental Resources; Caroline Cochran, co-founder of Oklo; Takajiro Ishikawa,
CEO of Mitsubishi Heavy Industries; and E.Y. Easley, research fellow at SK Innovation.

Together, they explored how natural gas, nuclear innovation, global supply chains
and international collaboration can help meet the moment.

“The United States has the resources it needs. What it lacks is speed, certainty and
alignment,” said Ann Bluntzer Pullin, executive director of the Hamm Institute. “This
initiative is about turning urgency into action so that America and its allies can
lead in both energy and AI.”


The initiative has already convened roundtables in Washington, D.C.; Denver; and Palo
Alto, California. Each conversation has surfaced priorities that move beyond analysis
toward action, including timely permitting, stronger demand signals to unlock investment,
reforms to unclog grid interconnection, and deeper coordination with allies on fuels
and technology.

The initiative’s next steps will focus on solidifying and advancing these emerging
top-tier priorities. The Hamm Institute will continue to convene leaders and deliver
research that accelerates solutions to meet AI’s energy demand.





FTC investigating AI ‘companion’ chatbots amid growing concern about harm to kids



The Federal Trade Commission has launched an investigation into seven tech companies over potential harms their artificial intelligence chatbots could cause to children and teenagers.

The inquiry focuses on AI chatbots that can serve as companions, which “effectively mimic human characteristics, emotions, and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots,” the agency said in a statement Thursday.

The FTC sent order letters to Google parent company Alphabet; Character.AI; Instagram and its parent company, Meta; OpenAI; Snap; and Elon Musk’s xAI. The agency wants information about whether and how the firms measure the impact of their chatbots on young users and how they protect against and alert parents to potential risks.

The investigation comes amid rising concern around AI use by children and teens, following a string of lawsuits and reports accusing chatbots of complicity in suicide deaths, sexual exploitation and other harms to young people. That includes one lawsuit against OpenAI and two against Character.AI that remain ongoing, even as the companies say they are continuing to build out additional features to protect users from harmful interactions with their bots.

Broader concerns have also surfaced that even adult users are building unhealthy emotional attachments to AI chatbots, in part because the tools are often designed to be agreeable and supportive.

At least one online safety advocacy group, Common Sense Media, has argued that AI “companion” apps pose unacceptable risks to children and should not be available to users under the age of 18. Two California state bills related to AI chatbot safety for minors, including one backed by Common Sense Media, are set to receive final votes this week and, if passed, will reach California Gov. Gavin Newsom’s desk. The US Senate Judiciary Committee is also set to hold a hearing next week entitled “Examining the Harm of AI Chatbots.”

“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” FTC Chairman Andrew Ferguson said in the Thursday statement. “The study we’re launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children.”

In particular, the FTC’s orders seek information about how the companies monetize user engagement, generate outputs in response to user inquiries, develop and approve AI characters, use or share personal information gained through user conversations and mitigate negative impacts to children, among other details.

Google, Snap and xAI did not immediately respond to requests for comment.

“Our priority is making ChatGPT helpful and safe for everyone, and we know safety matters above all else when young people are involved. We recognize the FTC has open questions and concerns, and we’re committed to engaging constructively and responding to them directly,” OpenAI spokesperson Liz Bourgeois said in a statement. She added that OpenAI has safeguards such as notifications directing users to crisis helplines and plans to roll out parental controls for minor users.

After the parents of 16-year-old Adam Raine sued OpenAI last month alleging that ChatGPT encouraged their son’s death by suicide, the company acknowledged its safeguards may be “less reliable” when users engage in long conversations with the chatbots and said it was working with experts to improve them.

Meta declined to comment directly on the FTC inquiry. The company said it is currently limiting teens’ access to only a select group of its AI characters, such as those that help with homework. It is also training its AI chatbots not to respond to teens’ mentions of sensitive topics such as self-harm or inappropriate romantic conversations and to instead point to expert resources.

“We look forward to collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space’s rapidly evolving technology,” Jerry Ruoti, Character.AI’s head of trust and safety, said in a statement. He added that the company has invested in trust and safety resources such as a new under-18 experience on the platform, a parental insights tool and disclaimers reminding users that they are chatting with AI.

If you are experiencing suicidal, substance use or other mental health crises please call or text the new three-digit code at 988. You will reach a trained crisis counselor for free, 24 hours a day, seven days a week. You can also go to 988lifeline.org.

(The-CNN-Wire™ & © 2025 Cable News Network, Inc., a Time Warner Company. All rights reserved.)





Concordia-led research program is Indigenizing artificial intelligence | Education



An initiative steered by Concordia researchers is challenging the conversation around the direction of artificial intelligence (AI). It charges that the current trajectory is inherently biased against non-Western modes of thinking about intelligence — especially those originating from Indigenous cultures. As a way of decolonizing the future of AI, they have created the Abundant Intelligences research program: an international, multi-institutional and interdisciplinary program that seeks to rethink how we conceive of AI. The driving concept behind it is the incorporation of Indigenous knowledge systems to create an inclusive, robust concept of intelligence and intelligent action, and how that can be embedded into existing and future technologies.

The full concept is described in a recent paper for the journal AI & Society.

 “Artificial intelligence has inherited conceptual and intellectual ideas from past formulations of intelligence that took on certain colonial pathways to establish itself, such as emphasizing a kind of industrial or production focus,” says Ceyda Yolgörmez, a postdoctoral fellow with Abundant Intelligences and one of the paper’s authors.

They write that this scarcity mindset contributed to resource exploitation and extraction that has extended a legacy of Indigenous erasure that influences discussion around AI to this day, adds lead author Jason Edward Lewis. The professor in the Department of Design and Computation Arts is also the University Research Chair in Computational Media and the Indigenous Future Imaginary. “The Abundant Intelligences research program is about deconstructing the scarcity mindset and making room for many kinds of intelligence and ways we might think about it.”

The researchers believe this alternative approach can create an AI that is oriented toward human thriving, that preserves and supports Indigenous languages, addresses pressing environmental and sustainability issues, re-imagines public health solutions and more.

Relying on local intelligence

The community-based research program is directed from Concordia in Montreal but much of the local work will be done by individual research clusters (called pods) across Canada, in the United States and in New Zealand.

The pods will be anchored to Indigenous-centred research and media labs at Western University in Ontario, the University of Lethbridge in Alberta, the University of Hawai’i—West Oahu, Bard College in New York and Massey University in New Zealand.

They bring together Indigenous knowledge-holders, cultural practitioners, language keepers, educational institutions and community organizations with research scientists, engineers, artists and social scientists to develop new computational practices fitted to an Indigenous-centred perspective.

The researchers are also partnering with AI professionals and industry researchers, believing that the program will open new avenues of research and propose new research questions for mainstream AI research. “For example, how do you build a rigorous system out of a small amount of resource data like different Indigenous languages?” asks Yolgörmez.  “How do you make multi-agent systems that are robust, recognize and support non-human actors and integrate different sorts of activities within the body of a single system?”

Lewis asserts that their approach is both complementary and alternative to mainstream AI research, particularly regarding data sets like Indigenous languages that are much smaller than the ones currently being used by industry leaders. “There is a commitment to working with data from Indigenous communities in an ethical way, compared to simply scraping the internet,” he says. “This yields miniscule amounts of data compared to what the larger companies are working with, but it presents the potential to innovate different approaches when working with small languages. That can be useful to researchers who want to take a different approach than the mainstream.

“This is one of the strengths of the decolonial approach: it’s one way to get out of this tunnel vision belief that there is only one way of doing things.”

Hēmi Whaanga, professor at Massey University in New Zealand, also contributed to the paper.

Read the cited paper: “Abundant intelligences: placing AI within Indigenous knowledge frameworks.”

— By Patrick Lejtenyi

Concordia University

@ConcordiaUnews





