
AI Research

Billionaire David Tepper Piled Into Nvidia, TSMC, and Intel, and Sold Shares of the No. 1 Artificial Intelligence (AI) Stock Among Billionaire Fund Managers

  • Form 13Fs, filed quarterly, allow investors to track which stocks Wall Street’s savviest investors have been buying and selling.

  • The stock market’s tariff-induced swoon in early April provided an opportunity for billionaire David Tepper to go bargain hunting in the artificial intelligence (AI) space.

  • However, Appaloosa’s 13F shows Tepper reduced his fund’s stake in a top AI holding for a number of other billionaire investors.


For three years, the evolution of artificial intelligence (AI) hardware and software solutions has dominated the newswires on Wall Street — and with good reason. Based on one estimate from the analysts at PwC, AI can add $15.7 trillion to global gross domestic product by 2030. This is a big enough pie that a long list of companies can benefit.

However, Wall Street’s savviest money managers have sent mixed signals regarding the companies on the leading edge of the AI revolution.


No later than 45 calendar days following the end of a quarter, institutional investors with at least $100 million in assets under management (AUM) are required to file Form 13F with the Securities and Exchange Commission. This filing provides an easy-to-understand layout of which stocks, exchange-traded funds (ETFs), and select options Wall Street’s top-tier fund managers purchased and sold in the latest quarter (in this case, the June-ended quarter).

Appaloosa’s billionaire money manager, David Tepper, who oversees north of $6.4 billion in AUM, has been an especially active investor in the artificial intelligence arena. During the second quarter, Tepper greenlit the purchase of Nvidia (NASDAQ: NVDA), Taiwan Semiconductor Manufacturing (NYSE: TSM), which is best known as “TSMC,” and Intel (NASDAQ: INTC), but was a decisive seller of a “magnificent” stock that’s the undisputed favorite AI company of billionaire fund managers.

If there was a theme to billionaire David Tepper’s buying activity, at least in relation to the tech sector, during the second quarter, it was “AI hardware.” Leading businesses responsible for the brains of AI-accelerated data centers were undeniably on the menu for Appaloosa’s boss:

  • Nvidia: 1,450,000 shares purchased (483% increase from March 31, 2025).

  • TSMC: 755,000 shares purchased (280% increase from March 31, 2025).

  • Intel: 8,000,000 shares purchased (new position).

The buying activity in Nvidia is especially eye-opening considering that Tepper had overseen a 97% reduction in his fund’s position in less than two years, when accounting for Nvidia’s historic 10-for-1 stock split in June 2024.

Perhaps the leading catalyst of this buying activity was Wall Street’s tariff-induced mini-crash in early April. President Trump unveiled his tariff and trade policy on April 2, which included a 10% global tariff and introduced higher “reciprocal tariffs” on select countries. Initially, these tariffs spooked the market and led to a historic multiday sell-off.

However, it’s been a green light for the bulls since President Trump announced a 90-day pause on reciprocal tariffs on April 9. Investors who piled into beaten-down AI stocks have benefited greatly, and they were able to nab high-growth stocks at forward-year multiples that hadn’t been seen in many quarters, if not years.

Additionally, the respective outlooks for Nvidia and TSMC are robust. Nvidia’s Hopper (H100) and Blackwell graphics processing units (GPUs) have accounted for a significant percentage of the GPUs deployed in enterprise data centers.

Meanwhile, TSMC is rapidly expanding its chip-on-wafer-on-substrate (CoWoS) capacity to meet seemingly insatiable corporate demand for AI-GPUs. TSMC’s CoWoS is a necessary technology for the packaging of high-bandwidth memory in AI-accelerated data centers.

As for Intel, it may have stood out to Tepper as a value-oriented buy. Intel has been trading below its book value and was awarded nearly $7.9 billion in CHIPS Act funding by the Biden administration in 2024 to construct chip fabrication plants domestically. An eventual transformation that’ll see Intel grow into one of the world’s leading chip foundries, coupled with its legacy cash flow from central processing units, may allow the company to reinvent itself over time.


On the other end of the spectrum, Tepper’s Appaloosa completely exited five positions (excluding options) and pared down 16 others during the June-ended quarter. This includes selling 150,000 shares of social media titan and “Magnificent Seven” member Meta Platforms (NASDAQ: META), which reduced Appaloosa’s position by 27% in just three months.

What makes this selling activity such a head-scratcher is there’s not another AI stock that billionaire asset managers favor more than Meta. As of the end of March, Meta was the No. 1 holding for four of Wall Street’s savviest money managers, and a prized position in the portfolios of other billionaire investors.

Arguably the most logical reason to sell 150,000 shares of Meta Platforms is simply to lock in gains. Between late 2022 and the second quarter of 2025, Meta stock rallied from sub-$100 to well north of $600 per share. Meta is Tepper’s second longest-tenured holding (held since the first quarter of 2016), so there was a viable reason to take some chips off the table.

It’s also plausible that Appaloosa’s billionaire boss is concerned about the health of the U.S. and/or global economy.

Though Meta Platforms is investing in and incorporating AI solutions into its operations, nearly 98% of its net sales can be traced back to advertising. Ad revenue tends to be highly cyclical, with businesses not shy about paring back their marketing budgets at the first signs of trouble. Concerns about the domestic rate of inflation, as well as recent weakness in the jobs market, are potential catalysts that could foreshadow weakness in the U.S. economy.

But it’s far likelier that Tepper will eventually regret selling more than a quarter of his fund’s stake in Meta Platforms.

No other social media company has come particularly close to attracting as many people as Meta does daily. During the month of June, its family of apps, which includes Facebook, Instagram, WhatsApp, Threads, and Facebook Messenger, averaged 3.48 billion daily users. This is more than enough eyeballs for Meta to command exceptional pricing power for ad placement.

We’re also seeing early evidence that Mark Zuckerberg’s company is successfully incorporating AI solutions into its ad platform. Giving businesses access to generative AI solutions has allowed them to tailor their messages to users, which in turn can improve ad click-through rates. This only serves to solidify Meta’s premium ad-pricing power.

While Meta Platforms’ stock isn’t as cheap as it was three years ago, its accelerated growth rate more than makes up for its modest premium. Its forward price-to-earnings (P/E) ratio of less than 25 is reasonable considering the many ways AI can expand a revenue base that already has a high floor, thanks to its advertising operations.


Sean Williams has positions in Intel and Meta Platforms. The Motley Fool has positions in and recommends Intel, Meta Platforms, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool recommends the following options: short August 2025 $24 calls on Intel. The Motley Fool has a disclosure policy.

Billionaire David Tepper Piled Into Nvidia, TSMC, and Intel, and Sold Shares of the No. 1 Artificial Intelligence (AI) Stock Among Billionaire Fund Managers was originally published by The Motley Fool




AI Research

Researchers train AI to diagnose heart failure in rural patients using low-tech electrocardiograms

WVU computer scientists are training AI models to diagnose heart failure using data generated by low-tech equipment widely available in rural Appalachian medical practices. Credit: WVU/Micaela Morrissette

Concerned about the ability of artificial intelligence models trained on data from urban demographics to make the right medical diagnoses for rural populations, West Virginia University computer scientists have developed several AI models that can identify signs of heart failure in patients from Appalachia.

Prashnna Gyawali, assistant professor in the Lane Department of Computer Science and Electrical Engineering at the WVU Benjamin M. Statler College of Engineering and Mineral Resources, said heart failure—a chronic, persistent condition in which the heart cannot pump enough blood to meet the body’s need for oxygen—is one of the most pressing national and global health issues, and one that hits rural regions of the U.S. especially hard.

Despite the outsized impact of heart failure on rural populations, AI models are currently being trained to diagnose the disease using data representing patients from urban and suburban areas like Stanford, California, Gyawali said.

“Imagine Jane Doe, a 62-year-old woman living in a rural Appalachian community,” he suggested. “She has limited access to specialty care, relies on a small local clinic, and her lifestyle, diet and health history reflect the realities of her environment: high physical labor, minimal preventive care, and increased exposure to environmental risk factors like coal dust or poor air quality. Jane begins to experience fatigue and shortness of breath—symptoms that could point to heart failure.

“An AI system, trained primarily on data from urban hospitals in more affluent, coastal areas, evaluates Jane’s lab results. But because the system was not trained on patients who share Jane’s socioeconomic and environmental context, it fails to recognize her condition as urgent or abnormal,” Gyawali said. “This is why this work matters. By training AI models on data from West Virginia patients, we aim to ensure people like Jane receive accurate diagnoses, no matter where they live or how their lives differ from national averages.”

The researchers identified the AI models that were most accurate at diagnosing heart failure in an anonymized sample of more than 55,000 patients who received medical care in West Virginia. They also pinpointed the exact parameters for providing the AI models with data that most enhanced diagnostic accuracy. The findings appear in Scientific Reports, a Nature portfolio journal.

Doctoral student Alina Devkota emphasized they trained the AI models to work from patients’ electrocardiogram results, rather than the echocardiogram readings typical for patient data from urban areas.

Electrocardiograms rely on round electrodes stuck to the patient’s torso to record electrical signals from the heart. According to Devkota, they don’t require specialized equipment or training to operate, but they still provide valuable insights into heart function.

“One of the criteria to diagnose heart failure is by measuring the ‘ejection fraction,’ or how much blood is pumped out of the heart with every beat, and the gold standard for doing that is with echocardiography, which uses ultrasound to create images of the heart and the blood flowing through its valves,” she said.

“But echocardiography is expensive, time-consuming and often unavailable to patients in the very same rural Appalachian states that have the highest prevalence of heart failure across the nation. West Virginia, for example, ranks first in the U.S. for the prevalence of heart attack, but many West Virginians don’t have local access to high-tech echocardiograms. They do have access to inexpensive electrocardiograms, so we tested whether AI models could use electrocardiogram readings to predict a patient’s ejection fraction.”

Devkota, Gyawali and their colleagues trained several AI models on patient records from 28 hospitals across West Virginia. The AI models used either “deep learning,” which relies on multilayered neural networks, or “non-deep learning,” which relies on simpler algorithms, to analyze the patient records and draw conclusions.

The researchers found the models, particularly one called ResNet, did best at correctly predicting a patient’s ejection fraction based on data from 12-lead electrocardiograms, with the results suggesting that a larger dataset for training would yield even better results. They also found that providing the AI models with specific “leads,” or combinations of data from different electrode pairs, affected how accurate the models’ ejection fraction predictions were.
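For readers curious what this looks like in code, below is a minimal, hypothetical sketch of the general approach described: a small 1D residual network that regresses ejection fraction from a 12-lead ECG tensor. The architecture, layer sizes, and names are illustrative assumptions, not the study’s actual model.

```python
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    """One residual block over the time axis of an ECG signal."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=7, padding=3)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=7, padding=3)
        self.bn1 = nn.BatchNorm1d(channels)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection

class EcgEfRegressor(nn.Module):
    """Maps a 12-lead ECG of shape (batch, 12, samples) to a scalar ejection fraction."""
    def __init__(self, hidden: int = 32, n_blocks: int = 4):
        super().__init__()
        self.stem = nn.Conv1d(12, hidden, kernel_size=15, stride=2, padding=7)
        self.blocks = nn.Sequential(*[ResBlock1d(hidden) for _ in range(n_blocks)])
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(hidden, 1)
        )

    def forward(self, x):
        return self.head(self.blocks(self.stem(x))).squeeze(-1)

# Synthetic stand-in: 10 seconds of a 12-lead ECG sampled at 500 Hz, batch of 4.
ecg = torch.randn(4, 12, 5000)
model = EcgEfRegressor()
print(model(ecg).shape)  # torch.Size([4]) -- one predicted ejection fraction per patient
```

In the study’s setting, the regression targets would come from echocardiography-derived ejection fraction labels in the patient records.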

Gyawali said while AI models are not yet being used in clinical settings due to reliability concerns, training an AI to successfully estimate ejection fraction from electrocardiogram signals could soon give clinicians an edge in protecting patients’ cardiac health.

“Heart failure affects more than six million Americans today, and factors like our aging population mean the risk is growing rapidly—approximately 1 in 4 people alive today will experience heart failure during their lifetimes. The prevalence is even higher in rural Appalachia, so it’s critical the people here do not continue to be overlooked.”

Additional WVU contributors to the research included Rukesh Prajapati, graduate research assistant; Amr El-Wakeel, assistant professor; Donald Adjeroh, professor and chair for computer science; and Brijesh Patel, assistant professor in the WVU Health Sciences School of Medicine.

More information:
AI analysis for ejection fraction estimation from 12-lead ECG, Scientific Reports (2025). DOI: 10.1038/s41598-025-97113-0

Citation:
Researchers train AI to diagnose heart failure in rural patients using low-tech electrocardiograms (2025, August 31)
retrieved 31 August 2025
from https://medicalxpress.com/news/2025-08-ai-heart-failure-rural-patients.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.






AI Research

How Traditional Search Engines Are Evolving

Generative artificial intelligence is not just improving search; it’s revolutionizing the entire concept of information retrieval.

Traditional search engines operated on a simple premise: match keywords to web pages, then rank results. This approach often left users frustrated, forcing them to refine queries multiple times or dig through numerous links to find specific information.

Generative AI has shattered this paradigm. Modern search platforms now interpret natural language queries with unprecedented sophistication. Instead of returning lists of links, they provide direct, contextual answers synthesized from multiple sources. Users can ask follow-up questions, request clarifications, or explore topics within the same conversation.

Consider the difference: searching “climate change Morocco agriculture” traditionally yields thousands of links. An AI-powered search engine provides an immediate, comprehensive overview of climate impacts on Moroccan agriculture, complete with specific data and regional variations – all while citing sources transparently.

The Technology Behind the Magic

Large language models (LLMs) trained on vast datasets enable machines to understand and generate human-like text. When integrated with real-time web crawling, they create “retrieval-augmented generation” (RAG) systems that combine internet knowledge with AI analysis. It’s like having a research assistant that instantly reads thousands of documents and provides tailored summaries.
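As a toy sketch of the RAG pattern (an illustration only, not any particular product’s implementation), the snippet below ranks a handful of passages by naive keyword overlap, then assembles the winners into a prompt; the generate() step is stubbed, since a real system would call whatever LLM API it uses.

```python
# A toy retrieval-augmented generation (RAG) loop. The corpus, the scoring
# function, and generate() are illustrative assumptions, not a real product's API.
documents = [
    "Morocco's agriculture is increasingly strained by drought and heat.",
    "CoWoS is an advanced chip-packaging technology from TSMC.",
    "Retrieval-augmented generation grounds LLM answers in retrieved text.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def generate(prompt: str) -> str:
    """Stub: in a real system this would call an LLM API."""
    return f"[LLM answer conditioned on a prompt of {len(prompt)} characters]"

def rag_answer(query: str) -> str:
    # Combine retrieved context with the question before generation.
    context = "\n".join(retrieve(query, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

print(rag_answer("How does climate change affect agriculture in Morocco?"))
```

Production systems replace the keyword overlap with dense vector search over embeddings, but the retrieve-then-generate shape is the same.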

Major Players Reshape the Landscape

Google has integrated AI into its core search through Search Generative Experience (SGE), essentially rebuilding search from the ground up. Microsoft’s ChatGPT integration transformed Bing from an also-ran to a legitimate competitor overnight. Meanwhile, new players like Perplexity AI have emerged as pure “AI answer engines,” bypassing traditional search entirely.

Impact on Users and Businesses

The benefits for users are transformative. Complex research tasks that once required hours now take minutes through conversational interactions. This democratization particularly benefits users in developing regions with limited digital literacy or bandwidth constraints.

For businesses, traditional SEO strategies focused on keywords are becoming obsolete. Success now requires creating authoritative, well-sourced content that AI systems can understand and cite. Companies must focus on becoming trusted information sources rather than gaming search algorithms.

Voice search capabilities have dramatically improved, making information accessible to users with disabilities or those in hands-free situations. Educational applications are equally impressive, with AI search engines serving as sophisticated tutoring systems.

Challenges and Concerns

Significant challenges remain. AI systems can generate confident-sounding but incorrect information – a phenomenon called “hallucination.” Privacy concerns arise as conversational search engines collect more detailed behavioral data than traditional keyword systems.

The concentration of AI capabilities among few major companies raises concerns about information diversity and potential bias. When a handful of AI models influence how billions access information, fairness and accuracy become critical issues.

The Future of Information Access

Emerging trends include multimodal search capabilities interpreting images, videos, and audio alongside text. Real-time integration promises search engines providing up-to-the-minute data on rapidly changing situations. IoT integration will enable contextual search considering your location, time, and current activity.

For the MENA region, including Morocco, this AI revolution presents unique opportunities. Local businesses creating high-quality, authoritative content in Arabic and French can gain unprecedented visibility in AI search results. The technology also addresses linguistic diversity challenges, as AI systems become sophisticated at handling multiple languages and cultural contexts.

As we stand at this inflection point, the blue link era is ending. The age of conversational AI search promises faster, more accurate, and more intuitive access to human knowledge than ever before. For users worldwide, this transformation represents not just technological progress, but a fundamental shift in how we interact with information itself.




AI Research

The Machine Learning Lessons I’ve Learned This Month

Most days in machine learning are the same.

Coding, waiting for results, interpreting them, returning to coding. Plus, some intermediate presentations of one’s progress. But things mostly being the same does not mean that there’s nothing to learn. Quite the contrary! Two to three years ago, I started a daily habit of writing down lessons that I learned from my ML work. Looking back through some of the lessons from this month, I found three practical lessons that stand out:

  1. Keep logging simple
  2. Use an experimental notebook
  3. Keep overnight runs in mind

Keep logging simple

For years, I used Weights & Biases (W&B)* as my go-to experiment logger. In fact, I was once in the top 5% of all active users. The stats in the figure below tell me that, at that time, I had trained close to 25,000 models, used a cumulative 5,000 hours of compute, and run more than 500 hyperparameter searches. I used it for papers, for big projects like weather prediction with large datasets, and for tracking countless small-scale experiments.

My once upon a time stats of using W&B for experiment logging. Image by the author.

And W&B really is a great tool: if you want beautiful dashboards and are collaborating** with a team, W&B shines. Until recently, while reconstructing data from trained neural networks, I ran multiple hyperparameter sweeps, and W&B’s visualization capabilities were invaluable. I could directly compare reconstructions across runs.

But I realized that for most of my research projects, W&B was overkill. I rarely revisited individual runs, and once a project was done, the logs just sat there; I never touched them again. When I later refactored the data reconstruction project mentioned above, I thus explicitly removed the W&B integration. Not because anything was wrong with it, but because it wasn’t necessary.

Now, my setup is much simpler. I just log selected metrics to CSV and text files, writing directly to disk. For hyperparameter searches, I rely on Optuna. Not even the distributed version with a central server — just local Optuna, saving study states to a pickle file. If something crashes, I reload and continue. Pragmatic and sufficient (for my use cases).
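For the curious, here is roughly what such a minimal setup can look like. This is a sketch under my own assumptions (the file names and the toy objective are mine), not the author’s actual code:

```python
import csv
import pickle
from pathlib import Path

import optuna

METRICS_FILE = Path("metrics.csv")  # hypothetical file names
STUDY_FILE = Path("study.pkl")

def log_metrics(step: int, loss: float, accuracy: float) -> None:
    """Append one row of selected metrics; write a header on first use."""
    new_file = not METRICS_FILE.exists()
    with METRICS_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["step", "loss", "accuracy"])
        writer.writerow([step, loss, accuracy])

def objective(trial: optuna.Trial) -> float:
    # Stand-in objective; a real one would train and evaluate a model.
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    return (lr - 1e-3) ** 2

# Reload the study if a previous run crashed or was interrupted.
if STUDY_FILE.exists():
    study = pickle.loads(STUDY_FILE.read_bytes())
else:
    study = optuna.create_study(direction="minimize")

study.optimize(objective, n_trials=20)
STUDY_FILE.write_bytes(pickle.dumps(study))
print("Best params:", study.best_params)
```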

The key insight here is this: logging is not the work. It’s a support system. Spending 99% of your time deciding on what you want to log — gradients? weights? distributions? and at which frequency? — can easily distract you from the actual research. For me, simple, local logging covers all needs, with minimal setup effort.

Maintain experimental lab notebooks

In December 1939, William Shockley wrote down an idea in his lab notebook: replace vacuum tubes with semiconductors. Roughly 20 years later, Shockley and two colleagues at Bell Labs were awarded the Nobel Prize in Physics for the invention of the modern transistor.

While most of us aren’t writing Nobel-worthy entries into our notebooks, we can still learn from the principle. Granted, machine learning laboratories don’t have the chemicals or test tubes we envision when we think of a laboratory. Instead, our labs are often our computers; the same device I use to write these lines has trained countless models over the years. And these labs are inherently portable, especially when we develop remotely on high-performance compute clusters. Even better, thanks to highly skilled administrative staff, these clusters run 24/7 — so there’s always time to run an experiment!

But the question is, which experiment? Here, a former colleague introduced me to the idea of maintaining a lab notebook, and lately I’ve returned to it in the simplest form possible. Before starting long-running experiments, I write down:

what I’m testing, and why I’m testing it.

Then, when I come back later — usually the next morning — I can immediately see which results are ready and what I had hoped to learn. It’s simple, but it changes the workflow. Instead of just “rerun until it works,” these dedicated experiments become part of a documented feedback loop. Failures are easier to interpret. Successes are easier to replicate.
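In code form, the habit can be as light as this; a sketch with an assumed file name and entry format, not a prescribed tool:

```python
from datetime import datetime
from pathlib import Path

NOTEBOOK = Path("lab_notebook.md")  # hypothetical file name

def log_experiment(what: str, why: str) -> None:
    """Append a timestamped what/why entry before launching a run."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    with NOTEBOOK.open("a") as f:
        f.write(f"\n## {stamp}\n- What: {what}\n- Why: {why}\n")

log_experiment(
    what="Rerun ablation without data augmentation, ResNet baseline",
    why="Check whether Friday's bug fix changes the augmentation gains",
)
```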

Run experiments overnight

That’s a small but painful lesson that I (re-)learned this month.

On a Friday evening, I discovered a bug that might affect my experiment results. I patched it and reran the experiments to validate. By Saturday morning, the runs had finished — but when I inspected the results, I realized I had forgotten to include a key ablation. Which meant … another full day of waiting.

In ML, overnight time is precious. For us programmers, it’s rest. For our experiments, it’s work. If we don’t have an experiment running while we sleep, we’re effectively wasting free compute cycles.

That doesn’t mean you should run experiments just for the sake of it. But whenever there is a meaningful one to launch, the evening is the perfect time to start it. Clusters are often under-utilized, resources become available more quickly, and — most importantly — you will have results to analyse the next morning.

A simple trick is to plan this deliberately. As Cal Newport mentions in his book “Deep Work”, good workdays start the night before. If you know tomorrow’s tasks today, you can set up the right experiments in time.


* That isn’t bashing W&B (it would have been the same with, e.g., MLflow); rather, it’s a plea for users to evaluate what their project goals are, and then spend the majority of their time pursuing those goals with utmost focus.

** Footnote: mere collaboration is, in my eyes, not enough to warrant such shared dashboards. The insight you gain from a shared tool needs to outweigh the time spent setting it up.


