AI Insights
91% of Jensen Huang’s $4.3 Billion Stock Portfolio at Nvidia Is Invested in Just 1 Artificial Intelligence (AI) Infrastructure Stock

Key Points
- Most of the stocks Nvidia and CEO Jensen Huang invest in are strategic partners or companies that can expand the AI ecosystem.
- For the AI sector to thrive, it will need plenty of supporting data centers and other AI infrastructure.
- One stock Nvidia is heavily invested in also happens to be one of its customers: a first mover in the AI-as-a-service space.
Nvidia (NASDAQ: NVDA), the largest company in the world by market cap, is widely known as the artificial intelligence (AI) chip king and the main pick-and-shovel play powering the AI revolution. But at that scale, and with that much money coming in, Nvidia has plenty of operations and divisions beyond its core business.
For instance, Nvidia, which is run by CEO Jensen Huang, invests its own capital in publicly traded stocks, most of which are tied to the company itself or the broader AI ecosystem. At the end of the second quarter, Nvidia owned six stocks collectively valued at about $4.3 billion, and 91% of that portfolio was invested in just one AI infrastructure stock.
A unique relationship
Nvidia has long had a relationship with AI data center company CoreWeave (NASDAQ: CRWV), having been a key supplier of hardware that drives the company’s business. CoreWeave builds data centers specifically tailored to meet the needs of companies looking to run AI applications.
These data centers are also equipped with hardware from Nvidia, including the company’s latest graphics processing units (GPUs), which help to train large language models. Clients can essentially rent the necessary hardware to run AI applications from CoreWeave, which saves them the trouble of having to build out and run their own infrastructure. CoreWeave’s largest customer by far is Microsoft, which makes up roughly 60% of the company’s revenue, but CoreWeave has also forged long-term deals with OpenAI and IBM.
Nvidia and CoreWeave’s partnership dates back to at least 2020 or 2021, and Nvidia also invested in the company’s initial public offering earlier this year. Wall Street analysts say it’s unusual to see a large supplier participate in a customer’s IPO. But Nvidia may see it as a key way to bolster the AI sector because meeting future AI demand will require a lot of energy and infrastructure.
CoreWeave is certainly seeing demand. On the company’s second-quarter earnings call, management said its contract backlog has grown to over $30 billion and includes previously discussed contracts with OpenAI, as well as other new potential deals with a range of different clients from start-ups to larger companies. Customers have also been increasing the length of their contracts with CoreWeave.
“In short, AI applications are beginning to permeate all areas of the economy, both through start-ups and enterprise, and demand for our cloud AI services is aggressively growing. Our cloud portfolio is critical to CoreWeave’s ability to meet this growing demand,” CoreWeave’s CEO Michael Intrator said on the company’s earnings call.
Is CoreWeave a buy?
Due to the demand CoreWeave is seeing from the market, the company has been aggressively expanding its data centers to increase its total capacity. To do this, CoreWeave has taken on significant debt, which the capital markets seem more than willing to fund.
At the end of the second quarter, current debt (due within 12 months) stood at about $3.6 billion, up roughly $1.2 billion year over year, and long-term debt had grown to about $7.4 billion, up roughly $2 billion. That has hit the income statement hard: interest expense for the first six months of 2025 exceeded $530 million, up from roughly $107 million in the same period of 2024.
CoreWeave reported a loss of $1.73 per share for the first six months of the year, an improvement from the $2.23-per-share loss in the same period of 2024. Still, investors have expressed concern about growing competition in the AI-as-a-service space. They also question whether CoreWeave has a real moat, given its concentrated customer base and reliance on suppliers. For instance, while CoreWeave has a strong partnership with Nvidia, nothing prevents others in the space from forging similar partnerships. Additionally, CoreWeave’s main customers, like Microsoft, could choose to build their own data centers and infrastructure in-house.
CoreWeave also carries a market cap of over $47 billion while still losing significant money, a valuation that works out to roughly 10 times forward sales. In fairness, the company grew revenue 276% year over year through the first half of the year. It all boils down to whether CoreWeave can maintain its first-mover advantage and whether the AI addressable market keeps growing the way it has.
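For readers who want to see where that multiple comes from, here is a minimal back-of-envelope sketch. It assumes the standard definition of forward price-to-sales (market cap divided by expected next-twelve-month revenue); the implied revenue figure is an inference from the article's round numbers, not a reported one.

```python
# Rough back-of-envelope check of the valuation math quoted above.
# These are the article's round figures, not reported company guidance.
market_cap = 47e9   # ~$47 billion market capitalization
forward_ps = 10     # ~10x forward price-to-sales

# Forward P/S = market cap / expected next-twelve-month revenue, so the
# implied forward revenue under these round numbers is:
implied_forward_revenue = market_cap / forward_ps
print(f"Implied forward revenue: ${implied_forward_revenue / 1e9:.1f} billion")  # ~$4.7 billion
```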
I think investors can buy the stock for the more speculative part of their portfolio. The high dependence on industry growth and reliance on debt prevent me from recommending a large position at this time.
Bram Berkowitz has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends International Business Machines, Microsoft, and Nvidia. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.
Disclaimer: For information purposes only. Past performance is not indicative of future results.
AI Insights
Examining the Evolving Landscape of Medical AI

I. Glenn Cohen discusses the risks and rewards of using artificial intelligence in health care.
In a discussion with The Regulatory Review, I. Glenn Cohen offers his thoughts on the regulatory landscape of medical artificial intelligence (AI), the evolving ways in which patients may encounter AI in the doctor’s office, and the risks and opportunities of a rapidly evolving technological landscape.
The use of AI in the medical field poses new challenges and tremendous potential for scientific and technological advancement. Cohen highlights how AI is increasingly integrated into health care through tools such as ambient scribing and speaks to some of the ethical concerns around data bias, patient privacy, and gaps in regulatory oversight, especially for underrepresented populations and institutions lacking resources. He surveys several of the emerging approaches to liability for the use of medical AI and weighs the benefits and risks of permitting states to create their own AI regulations in the absence of federal oversight. Despite the challenges facing regulators and clinicians looking for ways to leverage these new technologies, Professor Cohen is optimistic about AI’s potential to expand access to care and improve health care quality.
A leading expert on bioethics and the law, Cohen is the James A. Attwood and Leslie Williams Professor of Law at Harvard Law School. He is an elected member of the National Academy of Medicine. He has addressed the Organisation for Economic Co-operation and Development, members of the U.S. Congress, and the National Assembly of the Republic of Korea on medical AI policy, as well as the North Atlantic Treaty Organization on biotechnology and human advancement. He has provided bioethical advising and consulting to major health care companies.
The Regulatory Review is pleased to share the following interview with I. Glenn Cohen.
The Regulatory Review: In what ways is the average patient today most likely to encounter artificial intelligence (AI) in the health care setting?
Cohen: Part of it will depend on what we mean by “AI.” In a sense, using Google Maps to get to the hospital is the most common use, but that’s probably not what you have in mind. I think one very common use we are already seeing deployed in many hospitals is ambient listening or ambient scribing. I wrote an article on that a few months ago with some colleagues. Inbox management—drafting initial responses to patient queries that physicians are meant to look over—is another way that patients may encounter AI soon. Finally, in terms of more direct usage in clinical care, AI involvement in radiology is one of the more typical use cases. I do want to highlight your use of “encounter,” which is importantly ambiguous between “knowingly” or “unknowingly” encounter. As I noted several years ago, patients may never be told about AI’s involvement in their care. That is even more true today.
TRR: Are some patient populations more likely to encounter or benefit from AI than others?
Cohen: Yes. There are a couple of ethically salient ways to press this point. First, because of contextual bias, those who are closer demographically or in other ways to the training data sets are more likely to benefit from AI. I often note that, as a middle-aged Caucasian man living in Boston, I am well-represented in most training data sets in a way that, say, a Filipino-American woman living in rural Arkansas may not be. There are many other forms of bias, but this form of missing data bias is pretty straightforward as a barrier to receiving the benefits from AI.
Second, we have to follow the money. Absent charitable investment, what gets built depends on what gets paid for. That may mean, to use the locution of my friend and co-author W. Nicholson Price II, that AI may be directed primarily toward “pushing frontiers”—making excellent clinicians in the United States even better—rather than “democratizing expertise”—taking pretty mediocre physician skills and scaling up access to them via AI to improve access across the world and in parts of the United States without good access to health care.
Third, ethically and safely implementing AI requires significant evaluation, which requires expertise and imposes costs. Unless there are good clearinghouses for expertise or other interventions, this evaluation is something that leading academic medical centers can do, but many other kinds of facilities cannot.
TRR: What risks does the use of AI in the medical context pose to patient privacy? How should regulators address such challenges?
Cohen: Privacy definitely can be put at risk by AI. There are a couple of ways that come to mind. One is just the propensity to share information that AI invites. Take, for example, large language models such as ChatGPT. If you are a hospital system getting access for your clinicians, you are going to want to get a sandboxed instance that does not share queries back to OpenAI. Otherwise, there is a concern you may have transmitted protected information in violation of the Health Insurance Portability and Accountability Act (HIPAA), as well as your ethical obligations of confidentiality. But if the hospital system makes it too cumbersome to access the LLM, your clinicians are going to start using their phones to access it, and there go your HIPAA protections. I do not want to make it sound like this is a problem unique to medical AI. In one of my favorite studies—now a bit dated—someone rode in elevators at a hospital and recorded the number of privacy and other violations.
A different problem posed by AI in general is that it worsens a problem I sometimes call data triangulation: the ability to reidentify users by stitching together our presence in multiple data sets, even if we are not directly identified in some of the sensitive data sets. I have discussed this issue in an article, where I include a good illustrative real-life example involving Netflix.
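As a rough illustration of the triangulation mechanism Cohen describes (a fabricated toy example, not drawn from his article or the Netflix case), the following Python sketch shows how two datasets that each omit direct identifiers for the sensitive records can still be joined on shared quasi-identifiers to re-identify people.

```python
# Toy illustration of "data triangulation": the clinical table has no names,
# but joining it to a public table on quasi-identifiers (ZIP code, birth year,
# sex) links names to diagnoses. All data here are fabricated.
import pandas as pd

# A "de-identified" clinical dataset: no names, but quasi-identifiers remain.
clinical = pd.DataFrame({
    "zip": ["02138", "02139", "02139"],
    "birth_year": [1971, 1985, 1985],
    "sex": ["M", "F", "M"],
    "diagnosis": ["hypertension", "asthma", "diabetes"],
})

# A public dataset (e.g., a voter roll) that does include names.
public = pd.DataFrame({
    "name": ["A. Smith", "B. Jones", "C. Lee"],
    "zip": ["02138", "02139", "02139"],
    "birth_year": [1971, 1985, 1985],
    "sex": ["M", "F", "M"],
})

# Stitching the two together re-identifies every clinical record.
reidentified = public.merge(clinical, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```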
As for solutions, although I think there is space for improving HIPAA—a topic I have discussed along with the sharing of data with hospitals—I have not written specifically about AI privacy legislation in any great depth.
TRR: What are some emerging best practices for mitigating the negative effects of bias in the development and use of medical AI?
Cohen: I think the key starting point is to be able to identify biases. Missing data bias is a pretty obvious one to spot, though it is often hard to fix if you do not have resources to try to diversify the population represented in your data set. Even if you can diversify, some communities might be understandably wary of sharing information. But there are also many harder-to-spot biases.
For example, measurement or classification bias occurs when practitioner bias is translated into what ends up in the data set. In practice, this may look like women being less likely than men to receive lipid-lowering medications and procedures in the hospital, despite being more likely to present with hypertension and heart failure. Label bias is particularly easy to overlook; it occurs when the outcome variable is differentially ascertained or has a different meaning across groups. A paper published in Science by Ziad Obermeyer and several coauthors has justifiably become the locus classicus example.
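To make the label-bias mechanism concrete, here is a short, fabricated numerical sketch in the spirit of that example (not the actual Obermeyer analysis): spending is used as the training label for health need, but one group generates less spending at the same level of need, so selecting patients on the spending proxy under-serves that group.

```python
# Toy illustration of label bias: the measurable outcome (spending) is used as
# a proxy label for the outcome we care about (health need), but the proxy is
# systematically lower for one group at the same level of need.
# All numbers are fabricated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)      # group membership: 0 or 1
need = rng.gamma(2.0, 1.0, n)      # true (unobserved) health need, same distribution in both groups

# Group 1 generates ~30% less spending at the same level of need
# (e.g., due to differential access), so spending is a biased label.
spending = need * np.where(group == 1, 0.7, 1.0)

# A program that enrolls the top 10% of patients ranked by the spending proxy
# will under-enroll group 1 relative to its true need.
cutoff = np.quantile(spending, 0.9)
for g in (0, 1):
    mask = group == g
    print(f"group {g}: mean true need = {need[mask].mean():.2f}, "
          f"share enrolled via spending proxy = {(spending[mask] >= cutoff).mean():.1%}")
```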
A lot of the problem is in thinking very hard at the front end about design and what is feasible given the data and expertise you have. But that is no substitute for auditing on the back end, because even very well-intentioned designs may produce biased results. I often recommend a paper by Lama H. Nazer and several coauthors, published in PLOS Digital Health, to folks as a summary of the different kinds of bias.
All that said, I often finish talks by saying, “If you have listened carefully, you have learned that medical AI often makes errors, is bad at explaining how it is reaching its conclusion and is a little bit racist. The same is true of your physician, though. The real question is what combination of the two might improve on those dimensions we care about and how to evaluate it.”
TRR: You have written about the limited scope of the U.S. Food and Drug Administration (FDA) in regulating AI in the medical context. What health-related uses of AI currently fall outside of the FDA’s regulatory authority?
Cohen: Most is the short answer. I would recommend a paper written by my former post-doc and frequent coauthor, Sara Gerke, which does a nice job of walking through it. But the punchline is: if you are expecting medical AI to have been FDA-reviewed, your expectations are almost always going to be disappointed.
TRR: What risks, if any, are associated with the current gaps in FDA oversight of AI?
Cohen: The FDA framework for drugs is aimed at showing safety and efficacy. With devices, the way that review is graded by device classes means that some devices slip through because they can point to a predicate device—in an AI context, sometimes quite unrelated—or because they are classified as general wellness products rather than devices. Then there is the stuff that FDA never sees—most of it. For all these products, there are open questions about safety and efficacy. All that said, some would argue that the FDA premarket approval process is a bad fit for medical AI. These critics may defend FDA’s lack of review by comparing it to areas such as innovation in surgical techniques or medical practices, where FDA largely does not regulate the practice of medicine. Instead, we rely on licensure of physicians and tort law to do a lot of the work, as well as on in-house review processes. My own instinct as to when to be worried—to give a lawyerly answer—is that it depends. Among other things, it depends on what non-FDA indicia of quality we have, what is understood by the relevant adopters about how the AI works, what populations it does or does not work for, what is tracked or audited, what the risk level in the worst-case scenario looks like, and who, if anyone, is doing the reviewing.
TRR: You have written in the past about medical liability for harms caused to patients by faulty AI. In the current technological and legal landscape, who should be liable for these injuries?
Cohen: Another lawyerly answer: it’s complicated, and the answer will be different for different kinds of AI. Physicians are ultimately responsible for the medical decision, and there is a school of thought that treats AI as just another tool, such as an MRI machine, and suggests that physicians are responsible even if the AI is faulty.
The reality is that few reported cases have succeeded against physicians, for a myriad of reasons detailed in a paper published last year by Michelle M. Mello and Neel Guha. W. Nicholson Price II and I have focused on two other legs of the stool in the paper you asked about: hospital systems and developers. In general, and this may be more understandable given that tort liability for hospital systems is not all that common, it seems to me that most policy analyses place too little emphasis on the hospital system as a potential locus of responsibility. We suggest “the application of enterprise liability to hospitals—making them broadly liable for negligent injuries occurring within the hospital system—with an important caveat: hospitals must have access to the information needed for adaptation and monitoring. If that information is unavailable, we suggest that liability should shift from hospitals to the developers keeping information secret.”
Elsewhere, I have also mused as to whether this is a good space for traditional tort law at all and whether instead we ought to have something more like the compensation schemes we see for vaccine injuries or workers’ compensation. In those schemes, we would have makers of AI pay into a fund that could pay for injuries without showing fault. Given the cost and complexity of proving negligence and causation in cases involving medical AI, this might be desirable.
TRR: The U.S. Senate rejected adding a provision to the recently passed “megalaw” that would have set a 10-year moratorium on any state enforcing a law or regulation affecting “artificial intelligence models,” “artificial intelligence systems,” or “automated decision systems.” What are some of the pros and cons of permitting states to develop their own AI regulations?
Cohen: This is something I have not written about, so I am shooting from the hip here. Please take it with an even larger grain of salt than what I have said already. The biggest con to state regulation is that it is much harder for an AI maker to develop something subject to differential standards or rules in different states. One can imagine the equivalent of impossibility-preemption type effects: state X says do this, state Y says do the opposite. But even short of that, it will be difficult to design a product to be used nationally if there are substantial variations in the standards of liability.
On the flip side, this is a feature of tort law and choice of law rules for all products, so why should AI be so different? And unlike physical goods that ship in interstate commerce, it is much easier to geolocate and either alter or disable AI running in states with different rules if you want to avoid liability.
On the pro side for state legislation, if you are skeptical that the federal government is going to be able to do anything in this space—or anything you like, at least—due to the usual pathologies of Congress, plus lobbying from AI firms, action by individual states might be attractive. States have innovated in the privacy space. The California Consumer Privacy Act is a good example. For state-based AI regulation, maybe there is a world where states fulfill the Brandeisian ideal of laboratories of experimentation that can be used to develop federal law.
Of course, a lot of this will depend on your prior beliefs about federalism. People often speak about the “Brussels Effect,” relating to the effects of the General Data Protection Regulation on non-European privacy practices. If a state the size of California were to pass legislation with very clear rules that differ from what companies do now, we might see a similar California effect, with companies conforming nationwide to those standards. This is particularly true given that much of U.S. AI development is centered in California. One’s views about whether that is good or bad depend not only on the content of those rules but also on one’s views of what American federalism should look like.
TRR: Overall, what worries you most about the use of AI in the medical context? And what excites you the most?
Cohen: There is a lot that worries me, but the incentives are number one. What gets built is a function of what gets paid for. We may be giving up on some of what has the highest ethical value, the democratization of expertise and improving access, for lack of a business model that supports it. Government may be able to step in to some extent as a funder or for reimbursement, but I am not that optimistic.
Although your questions have led me to the worry side of the house, I am actually pretty excited. Much of what is done in medicine is unanalyzed, or at least not rigorously so. Even the very best clinicians have limited experience, and even if they read the leading journals, go to conferences, and complete other standard means of continuing education for physicians, the amount of information they can synthesize is orders of magnitude smaller than that of AI. AI may also allow scaling of the delivery of some services in a way that can serve underrepresented people in places where providers are scarce.
AI Insights
AI and machine learning for engineering design | MIT News

Artificial intelligence optimization offers a host of benefits for mechanical engineers, including faster and more accurate designs and simulations, improved efficiency, reduced development costs through process automation, and enhanced predictive maintenance and quality control.
“When people think about mechanical engineering, they’re thinking about basic mechanical tools like hammers and … hardware like cars, robots, cranes, but mechanical engineering is very broad,” says Faez Ahmed, the Doherty Chair in Ocean Utilization and associate professor of mechanical engineering at MIT. “Within mechanical engineering, machine learning, AI, and optimization are playing a big role.”
In Ahmed’s course, 2.155/156 (AI and Machine Learning for Engineering Design), students use tools and techniques from artificial intelligence and machine learning for mechanical engineering design, focusing on the creation of new products and addressing engineering design challenges.
Cat Trees to Motion Capture: AI and ML for Engineering Design
Video: MIT Department of Mechanical Engineering
“There’s a lot of reason for mechanical engineers to think about machine learning and AI to essentially expedite the design process,” says Lyle Regenwetter, a teaching assistant for the course and a PhD candidate in Ahmed’s Design Computation and Digital Engineering Lab (DeCoDE), where research focuses on developing new machine learning and optimization methods to study complex engineering design problems.
First offered in 2021, the class has quickly become one of the Department of Mechanical Engineering (MechE)’s most popular non-core offerings, attracting students from departments across the Institute, including mechanical engineering, civil and environmental engineering, aeronautics and astronautics, the MIT Sloan School of Management, and nuclear and computer science, along with cross-registered students from Harvard University and other schools.
The course, which is open to both undergraduate and graduate students, focuses on the implementation of advanced machine learning and optimization strategies in the context of real-world mechanical design problems. From designing bike frames to city grids, students participate in contests related to AI for physical systems and tackle optimization challenges in a class environment fueled by friendly competition.
Students are given challenge problems and starter code that “gave a solution, but [not] the best solution …” explains Ilan Moyer, a graduate student in MechE. “Our task was to [determine], how can we do better?” Live leaderboards encourage students to continually refine their methods.
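The flavor of those challenges can be sketched with a toy optimization loop. This is a hypothetical illustration with a made-up objective and parameters, not the course's actual starter code: start from a provided baseline design and try to beat its score, the way the leaderboard rewards.

```python
# Hypothetical sketch of a design-optimization challenge: improve on a given
# baseline design via simple random search. The objective is a made-up
# stand-in (e.g., for mass or drag), not an actual course problem.
import numpy as np

def score(design: np.ndarray) -> float:
    """Toy objective to minimize."""
    return float(np.sum((design - 0.3) ** 2) + 0.1 * np.sum(np.abs(design)))

rng = np.random.default_rng(42)
baseline = np.zeros(8)                # the provided "works, but not best" solution
best, best_score = baseline, score(baseline)

for _ in range(5_000):                # perturb the current best and keep improvements
    candidate = best + rng.normal(0.0, 0.05, size=best.shape)
    new_score = score(candidate)
    if new_score < best_score:
        best, best_score = candidate, new_score

print(f"baseline score: {score(baseline):.3f} -> improved score: {best_score:.3f}")
```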
Em Lauber, a system design and management graduate student, says the process gave space to explore applications of what students were learning and to practice the skill of “literally how to code it.”
The curriculum incorporates discussions on research papers, and students also pursue hands-on exercises in machine learning tailored to specific engineering issues including robotics, aircraft, structures, and metamaterials. For their final project, students work together on a team project that employs AI techniques for design on a complex problem of their choice.
“It is wonderful to see the diverse breadth and high quality of class projects,” says Ahmed. “Student projects from this course often lead to research publications, and have even led to awards.” He cites the example of a recent paper, titled “GenCAD-Self-Repairing,” that went on to win the American Society of Mechanical Engineers Systems Engineering, Information and Knowledge Management 2025 Best Paper Award.
“The best part about the final project was that it gave every student the opportunity to apply what they’ve learned in the class to an area that interests them a lot,” says Malia Smith, a graduate student in MechE. Her project used “markered motion captured data” to predict ground forces for runners, an effort she called “really gratifying” because it worked so much better than expected.
Lauber took the framework of a “cat tree” design, with different modules of poles, platforms, and ramps, to create customized solutions for individual cat households, while Moyer created software that designs a new type of 3D printer architecture.
“When you see machine learning in popular culture, it’s very abstracted, and you have the sense that there’s something very complicated going on,” says Moyer. “This class has opened the curtains.”
AI Insights
OpenAI says spending to rise to $115 billion through 2029: Information

OpenAI Inc. told investors that its spending through 2029 could rise to $115 billion, about $80 billion more than previously projected, The Information reported, without providing details on how and when shareholders were informed.
OpenAI is developing its own data center server chips and facilities to power its technology, in an effort to rein in cloud server rental expenses, according to the report.
The company predicted it could spend more than $8 billion this year, roughly $1.5 billion more than an earlier projection, The Information said.
Another factor influencing the increased need for capital is computing costs, on which the company expects to spend more than $150 billion from 2025 through 2030.
The cost to develop AI models is also higher than previously expected, The Information said.