

Better Buy in 2025: SoundHound AI, or This Other Magnificent Artificial Intelligence Stock?

SoundHound AI (SOUN) is a leading developer of conversational artificial intelligence (AI) software, and its revenue is growing at a lightning-fast pace. Its stock soared by 835% in 2024 after Nvidia revealed a small stake in the company, although the chip giant has since sold its entire position.

DigitalOcean (DOCN) is another up-and-coming AI company. It operates a cloud computing platform designed specifically for small and mid-sized businesses (SMBs), with a growing portfolio of AI services that includes data center infrastructure and a new tool that lets those customers build custom AI agents.

With the second half of 2025 officially underway, which stock is the better buy between SoundHound AI and DigitalOcean?


The case for SoundHound AI

SoundHound AI has amassed an impressive customer list that includes automotive giants like Hyundai and Kia, as well as quick-service restaurant chains like Chipotle and Papa John’s. All of them use SoundHound’s conversational AI software to deliver new and unique experiences for their customers.

Automotive manufacturers are integrating SoundHound’s Chat AI product into their new vehicles, where it can teach drivers how to use different features or answer questions about gas mileage and even the weather. Manufacturers can customize Chat AI’s personality to suit their brand, which differentiates the user experience from the competition.

Restaurant chains use SoundHound’s software to autonomously take customer orders in-store, over the phone, and in the drive-thru. They also use the company’s voice-activated virtual assistant tool called Employee Assist, which workers can consult whenever they need instructions for preparing a menu item or help understanding store policies.

SoundHound generated $84.7 million in revenue during 2024, an 85% increase from the previous year. Management’s latest guidance suggests the company could deliver $167 million in revenue during 2025, which would represent accelerated growth of 97%. SoundHound also has an order backlog worth over $1.2 billion that it expects to convert into revenue over the next six years, which should support further growth.

But there are a couple of caveats. First, SoundHound continues to lose money at the bottom line. Its non-GAAP (adjusted) loss was $69.1 million in 2024, plus a further $22.3 million in the first quarter of 2025 (ended March 31). The company has only $246 million in cash on hand, so it can’t afford to keep losing money at this pace forever. Eventually, it will have to cut costs and sacrifice some of its revenue growth to achieve profitability.
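The arithmetic behind those figures is simple enough to check. Below is a minimal Python sketch that reproduces the implied 2025 growth rate and, under the simplifying assumption (ours, not company guidance) that the first-quarter burn rate stays flat, estimates how many quarters the current cash pile would cover.

```python
# Back-of-the-envelope check of the figures above. Inputs come from the article;
# the flat quarterly burn rate is an assumption for illustration only.

revenue_2024 = 84.7          # $ millions, 2024 revenue
revenue_2025_guided = 167.0  # $ millions, 2025 guidance

growth = (revenue_2025_guided / revenue_2024 - 1) * 100
print(f"Implied 2025 revenue growth: {growth:.0f}%")  # ~97%

cash_on_hand = 246.0   # $ millions
q1_2025_burn = 22.3    # $ millions, non-GAAP loss in Q1 2025

runway_quarters = cash_on_hand / q1_2025_burn
print(f"Runway at a flat Q1 burn rate: {runway_quarters:.0f} quarters "
      f"(~{runway_quarters / 4:.1f} years)")  # ~11 quarters, just under 3 years
```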

The second caveat is SoundHound’s valuation, which we’ll explore further in a moment.

The case for DigitalOcean

The cloud computing industry is dominated by trillion-dollar tech giants like Amazon and Microsoft, but they mostly design their services for large organizations with deep pockets. SMB customers don’t really move the needle for those giants, which leaves an enormous gap in the cloud market for players like DigitalOcean.

DigitalOcean offers transparent pricing, attentive customer service, and a simple dashboard, a combination that suits SMBs with limited resources. The company is now helping those customers tap into the AI revolution in a cost-efficient way with a growing portfolio of services.

DigitalOcean operates data centers filled with graphics processing units (GPUs) from leading suppliers like Nvidia and Advanced Micro Devices, and it offers fractional capacity, which means its customers can access between one and eight chips. This is ideal for small workloads like deploying an AI customer service chatbot on a website.

Earlier this year, DigitalOcean launched a new platform called GenAI, where its clients can create and deploy custom AI agents. These agents can do almost anything, whether an SMB needs them to analyze documents, detect fraud, or even autonomously onboard new employees. The agents are built on the latest third-party large language models from leading developers like OpenAI and Meta Platforms, so SMBs know they are getting the same technology as some of their largest competitors.

DigitalOcean expects to generate $880 million in total revenue during 2025, which would represent modest growth of 13% compared to the prior year. However, during the first quarter, the company said its AI revenue surged by an eye-popping 160%. Management doesn’t disclose exactly how much revenue is attributable to its AI services, but it says demand for GPU capacity continues to outstrip supply, which suggests that outsized growth is likely to continue for now.

Unlike SoundHound AI, DigitalOcean is highly profitable. It generated $84.5 million in generally accepted accounting principles (GAAP) net income during 2024, which was up by a whopping 335% from the previous year. It carried that momentum into 2025, with its first-quarter net income soaring by 171% to $38.2 million.

The verdict

For me, the choice between SoundHound AI and DigitalOcean mostly comes down to valuation. SoundHound AI stock trades at a sky-high price-to-sales (P/S) ratio of 41.4, making it even more expensive than Nvidia, one of the highest-quality companies in the world. DigitalOcean stock, on the other hand, trades at a modest P/S ratio of just 3.5, near its cheapest level since the company went public in 2021.

[Chart: SOUN P/S ratio, data by YCharts]

We can also value DigitalOcean based on its earnings, which can’t be said for SoundHound because the company isn’t profitable. DigitalOcean stock is trading at a price-to-earnings (P/E) ratio of 26.2, which makes it much cheaper than larger cloud providers like Amazon and Microsoft (although they also operate a host of other businesses):

[Chart: MSFT P/E ratio, data by YCharts]
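For readers who want to see how these multiples are put together, here is a minimal sketch of the two formulas. The market caps and trailing financials below are hypothetical placeholders chosen only so the outputs land near the quoted ratios; they are not the companies’ actual figures.

```python
# Minimal sketch of the valuation ratios discussed above. All inputs are
# hypothetical placeholders (in $ millions) that roughly back out the cited
# multiples; they are not actual market data.

def price_to_sales(market_cap: float, trailing_revenue: float) -> float:
    """P/S ratio: market capitalization / trailing-12-month revenue."""
    return market_cap / trailing_revenue

def price_to_earnings(market_cap: float, trailing_net_income: float) -> float:
    """P/E ratio: market capitalization / trailing-12-month net income."""
    return market_cap / trailing_net_income

soun_ps = price_to_sales(market_cap=4_200, trailing_revenue=102)        # ~41x
docn_ps = price_to_sales(market_cap=2_650, trailing_revenue=760)        # ~3.5x
docn_pe = price_to_earnings(market_cap=2_650, trailing_net_income=101)  # ~26x

print(f"SOUN P/S ~ {soun_ps:.1f}, DOCN P/S ~ {docn_ps:.1f}, DOCN P/E ~ {docn_pe:.1f}")
```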

SoundHound’s rich valuation might limit further upside in the near term. When we combine that with the company’s steep losses at the bottom line, its stock simply doesn’t look very attractive right now, which might be why Nvidia sold it. DigitalOcean stock looks like a bargain in comparison, and it has legitimate potential for upside from here thanks to the company’s surging AI revenue and highly profitable business.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool’s board of directors. Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool’s board of directors. Anthony Di Pizio has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Advanced Micro Devices, Amazon, Chipotle Mexican Grill, DigitalOcean, Meta Platforms, Microsoft, and Nvidia. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft, short January 2026 $405 calls on Microsoft, and short June 2025 $55 calls on Chipotle Mexican Grill. The Motley Fool has a disclosure policy.






Diagnosing facial synkinesis using artificial intelligence to advance facial palsy care

Over the past decade, a plethora of software applications has emerged in the field of patient medical care, supporting the diagnosis and management of various clinical conditions [14,15,20,21,22]. Our study contributes to this evolving field by introducing a novel application for holistic synkinesis diagnosis and leveraging the power of convolutional neural networks (CNNs) to analyze images of periocular regions.

The development and validation of our CNN-based model for diagnosing facial synkinesis in FP patients mark a significant advancement in automated medical diagnostics. Our model demonstrated a high degree of accuracy (98.6%) in distinguishing between healthy individuals and those with synkinesis, with an F1-score of 98.4%, precision of 100%, and recall of 96.9%. These metrics highlight the model’s robustness and reliability, rendering it a valuable tool for clinicians. The confusion matrix analysis provided further insight into the model’s performance, revealing only one misclassification among the 71 test images. These metrics echo findings from previous work on diagnosing sequelae of FP. For example, our group reported comparable metrics for CNN-based assessment of lagophthalmos: using a training set of 826 images, the validation accuracy was 97.8% over the span of 64 epochs [17]. Another study leveraged a CNN to automatically identify (peri-)ocular pathologies such as enophthalmos with an accuracy of 98.2%, underscoring the potential of neural networks for diagnosing facial conditions [23]. Such tools can broaden access to FP diagnostics, reducing time-to-diagnosis and effectively triaging patients to the appropriate treatment pathway (e.g., conservative therapy, cross-face nerve grafts) [2,3,14,24]. Overall, our CNN adds another highly accurate diagnostic tool for reliably detecting facial pathologies, especially in FP patients.
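To make those headline metrics concrete, the short sketch below recomputes them from a confusion matrix that is consistent with the reported results (one false negative among 71 test images). The exact split of synkinesis versus healthy test images is our assumption for illustration, not a figure taken from the study.

```python
# Recomputing accuracy, precision, recall and F1 from a confusion matrix that
# matches the reported results (one misclassification out of 71 test images).
# The 32-vs-39 class split is an assumed, illustrative breakdown.

tp, fn = 31, 1   # synkinesis images: correctly flagged vs. missed (assumed)
fp, tn = 0, 39   # healthy images: none flagged as synkinesis (assumed)

total = tp + fn + fp + tn                            # 71 test images
accuracy = (tp + tn) / total                         # 0.986
precision = tp / (tp + fp)                           # 1.000
recall = tp / (tp + fn)                              # 0.969
f1 = 2 * precision * recall / (precision + recall)   # 0.984

print(f"accuracy={accuracy:.1%}  precision={precision:.1%}  "
      f"recall={recall:.1%}  F1={f1:.1%}")
```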

Another strength of our CNN lies in its user-friendliness and rapid processing and training times. The mean image processing time was 24 ± 11 ms, and the overall training time was 14.4 min. The development of a lightweight, dockerized web application further enhanced the model’s practicality and accessibility. In addition, the total development cost of the CNN was only $311. Such parameters have been identified as key factors for impactful AI research and effective integration into clinical workflows [25,26,27]. More precisely, the short training times may pave the way toward additional AI-supported diagnostic tools in FP care to detect common short- and long-term complications of FP (e.g., ectropion, hemifacial tissue atrophy). The easy-to-use and cost-effective web application may facilitate clinical use by healthcare providers in low- and middle-income countries, where the incidence and prevalence of FP are higher than in high-income countries [28]. To facilitate the download and use of our algorithm, we (i) uploaded the code to GitHub (San Francisco, USA), (ii) integrated the code into an application, and (iii) recorded an instructional video that details the different steps. Healthcare providers in low- and middle-income countries only require an internet connection to install the application. The instructional video then guides them through the steps to set up the application and start screening patients. Our application is free to use, and the number of daily screens is not limited. The rapid processing times also carry the potential to increase screening throughput, further broadening access to FP care and reducing waiting times for FP patients [3]. Collectively, the CNN represents a rapid, user-friendly, and cost-effective tool.
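Purely as an illustration of what such a lightweight screening endpoint could look like, the sketch below wraps a trained image classifier in a small web service and reports a per-image processing time. The framework (FastAPI), model format, file name, input size, and route are all assumptions made for this example; the authors’ actual application is the one distributed through their GitHub repository and instructional video.

```python
# Hypothetical sketch of a minimal screening endpoint, NOT the authors' actual
# application. Framework, model file, input size and output format are assumed.

import time
from io import BytesIO

import torch
from fastapi import FastAPI, File, UploadFile
from PIL import Image
from torchvision import transforms

app = FastAPI()
model = torch.jit.load("synkinesis_cnn.pt").eval()  # assumed TorchScript artifact

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # assumed input resolution
    transforms.ToTensor(),
])

@app.post("/screen")
async def screen(image: UploadFile = File(...)):
    """Return a synkinesis probability and the per-image processing time."""
    start = time.perf_counter()
    img = Image.open(BytesIO(await image.read())).convert("RGB")
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
        prob = torch.softmax(logits, dim=1)[0, 1].item()  # assumes class 1 = synkinesis
    return {
        "synkinesis_probability": round(prob, 3),
        "processing_ms": round((time.perf_counter() - start) * 1000, 1),
    }
```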

While our study presents promising results, it is not without limitations. The relatively small sample size, especially for the validation and test sets, suggests the need for further validation with larger and more diverse (i.e., multi-center, multi-racial, multi-surgeon) datasets to ensure the model’s robustness and generalizability. Additionally, the model’s ability to distinguish synkinesis from other facial conditions was not evaluated in this study, representing an area for future research. Moreover, integrating our model into clinical practice will require careful consideration of various factors, including user training, data privacy, and the ethical implications of automated diagnostics. Ensuring that clinicians are adequately trained to use the model and interpret its results is essential for maximizing its benefits. Robust data privacy measures must also be implemented to protect sensitive patient information, particularly when using web-based applications. Thus, further validation is essential before clinical implementation. In a broader context, various AI- and machine-learning-powered tools have shown promising outcomes in pre-clinical studies and small patient samples (e.g., face transplantation, facial reanimation) [29,30,31,32]. However, these tools remain to be investigated in larger-scale trials and integrated into standard clinical workup. Cross-disciplinary efforts are therefore needed to bridge the gap from bench to bedside and to fuel translational efforts.






In creating an ad, using AI for scenes – but not people – may retain consumer trust

Image-generative artificial intelligence makes ad creation faster and cheaper, but there is a catch, according to new research from Virginia Commonwealth University: trust can plummet when AI-generated visuals depict service providers in industries where relationships matter.

So, how can AI help service marketers without compromising trust?

A study coauthored by César Zamudio, Ph.D., associate professor of marketing in the VCU School of Business, determined that selective AI use in ad creation is key.

“When tangible elements — like a doctor’s office environment — are AI-generated, but the service provider’s image is a real picture, trust and ad effectiveness are restored,” Zamudio said. “The takeaway? Use AI where it counts, and let the human element shine.”

This balance is crucial for small businesses as they market themselves. The smart move is to use AI to generate backgrounds, office settings or equipment — but keep real people in their ads. This way, businesses can still benefit from AI’s speed and cost savings without losing consumer confidence.

“Our research offers a simple, actionable strategy: Use AI for settings, not people,” Zamudio said. “This approach can help you cut ad costs without cutting credibility, giving you a real edge to beat bigger brands.”

The research is also relevant to consumers, who can use it to help navigate AI-driven marketing.

“Not all AI ads are misleading,” Zamudio said, “but knowing what’s real — and what’s not — can shape your trust in a brand. … Our study reveals how AI in advertising shapes trust, helping you stay informed, skeptical and aware in today’s digital marketing landscape.”

Smart AI use is key, he said. “Brands can harness AI’s efficiency without losing credibility by keeping real people front and center in service ads. Marketers can use this research to successfully walk the tightrope between innovation and consumer confidence.”

This is especially important for services, where ads help make intangible offerings feel real and trustworthy. With AI disclosures becoming more common due to government and industry pressures, businesses need to know how to design AI-driven ads that maintain consumer trust.

Zamudio’s study, “Service Ads in the Era of Generative AI: Disclosures, Trust and Intangibility,” was co-authored with colleagues from Missouri State University and Longwood University and was published recently in the Journal of Retailing and Consumer Services.