
AI Insights

22% of Warren Buffett’s $285 Billion Portfolio Is Invested in These 2 “Magnificent Seven” Artificial Intelligence (AI) Stocks

Berkshire Hathaway CEO Warren Buffett is one of history’s most successful investors, and his expertise in identifying fantastic long-term opportunities has delivered incredible returns for his company’s shareholders. Notably, Buffett was famously averse to investing in the tech sector for most of his tenure as Berkshire’s leader, but that has changed a lot in recent years.

While Buffett has long been cautious about tech companies because of the complexity of many of the underlying businesses, Berkshire’s approach to the sector has shifted significantly over the last decade. In fact, just two top companies in the artificial intelligence (AI) space account for roughly 22% of Berkshire’s $279 billion in public stock holdings as of this writing. Read on for a closer look at how Buffett and Berkshire are approaching tech-sector investing and AI trends right now.

Image source: The Motley Fool.

1. Apple stock: 21.2% of Berkshire’s portfolio

Keith Noonan (Apple): Warren Buffett has famously said that his favorite holding period for a stock is “forever,” but it’s still not unusual to see Berkshire Hathaway make significant adjustments to its exposure to individual companies in its portfolio. Even so, the investment conglomerate’s moves to trim its position in Apple (AAPL 0.52%) have been eye-catching, particularly because its biggest selling came just as the AI revolution was heating up.

At the peak, Berkshire held 915 million shares of Apple stock, and its investment in the tech giant sometimes accounted for more than half of its total public stock holdings. Berkshire still holds 300 million Apple shares, but it sold roughly 10 million shares in Q4 2023 and another 605 million across last year’s trading. Apple remains Berkshire’s top stock holding at roughly 21.2% of its portfolio, but the moves to reduce its exposure still raise some questions.

Berkshire has also made some big selling moves with Bank of America, Chevron, and other stocks that have been mainstays in its stock portfolio. The moves appear to reflect valuation concerns about the broader market — and it’s possible that macroeconomic and geopolitical risk factors have influenced Berkshire’s strategy.

Even so, it’s also a realistic possibility that Berkshire’s analysts have seen some warning signs when it comes to Apple’s position in the AI race. While Nvidia has scored massive wins thanks to demand for its AI-focused graphics processing units and Microsoft has seen major demand tailwinds connected to the rise of artificial intelligence software, Apple’s victories in the category have been more muted.

The company’s iPhone hardware is still its biggest performance driver, but the rollout of the Apple Intelligence software platform has yet to move the needle in a big way. In fact, Apple Intelligence wound up indirectly hurting iPhone 16 sales in China because Apple had not secured a local partner to collaborate on the software and satisfy Chinese regulatory requirements. As a result, the iPhone 16 line launched without support for Apple Intelligence at a time when Chinese customers were seeking AI-enabled devices and already showing a preference for domestic brands. There’s still a good chance that Apple will land big wins in the AI space, but the company has some proving to do.

2. Amazon stock: 0.7% of Berkshire’s portfolio

Jennifer Saibil (Amazon): Amazon (AMZN 1.62%) makes up a tiny percentage of the Berkshire Hathaway portfolio at just 0.7%, but it presents incredible opportunities.

Amazon is the largest cloud services company in the world, with 30% of the market, according to Statista. It holds a strong lead over the next-largest competitor, Microsoft Azure, which has 23% of the market.

To keep that lead and stay ahead in the game, Amazon is investing more than $100 billion in its AI platform in 2025 alone. It has already launched thousands of features and services to meet demand at every point on the scale, from large clients creating their own custom large language models to small-business clients that need easy, plug-in solutions. It partners with AI chip giant Nvidia, but it’s also releasing its own cheaper options for budget-conscious clients.

Amazon already has several premier developer tools in what it calls the middle layer between fully custom and plug-in. Bedrock gives developers many options to customize LLMs for their specific purposes, and SageMaker can create code from prompts, debug, and more.

CEO Andy Jassy has stressed several times that 85% of IT spending still happens on-premises and that a shift to the cloud is coming. That should create a windfall for Amazon, which is in the best position to benefit from that shift. “AI represents, for sure, the biggest opportunity since cloud and probably the biggest technology shift and opportunity in business since the internet,” he said. He envisions AI becoming a core component of every app being developed, much like storage and databases. As more clients look to benefit from the generative AI revolution, Amazon is drawing more business to AWS for its regular cloud services, too.

Even though Amazon is already the second-largest U.S. company by sales and the fourth by market cap, it has tons of opportunity in AI, and it may only be a matter of time before it reaches the No. 1 spot in both.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool’s board of directors. Bank of America is an advertising partner of Motley Fool Money. Jennifer Saibil has positions in Apple. Keith Noonan has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Amazon, Apple, Bank of America, Berkshire Hathaway, Chevron, Microsoft, and Nvidia. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.




Artificial Intelligence Cheating – goSkagit


AI-powered hydrogel dressings transform chronic wound care

As chronic wounds such as diabetic ulcers, pressure ulcers, and articular wounds continue to challenge global healthcare systems, a team of researchers from China has introduced a promising innovation: AI-integrated conductive hydrogel dressings for intelligent wound monitoring and healing.

This comprehensive review, led by researchers from China Medical University and Northeastern University, outlines how these smart dressings combine real-time physiological signal detection with artificial intelligence, offering a new paradigm in personalized wound care.

Why it matters:

  • Real-time monitoring: Conductive hydrogels can track key wound parameters such as temperature, pH, glucose levels, pressure, and even pain signals, providing continuous, non-invasive insights into wound status.
  • AI-driven analysis: Machine learning algorithms (e.g., CNN, KNN, ANN) process sensor data to predict healing stages, detect infections early, and guide treatment decisions with high accuracy (up to 96%).
  • Multifunctional integration: These dressings not only monitor but also actively promote healing through electroactivity, antibacterial properties, and drug release capabilities.
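The AI-driven analysis described above can be illustrated with a toy example. The sketch below is purely hypothetical and is not from the review: it applies a minimal k-nearest-neighbors classifier (one of the algorithm families the authors mention, alongside CNNs and ANNs) to made-up (temperature, pH, glucose) sensor readings to label a wound’s status.

```python
# Hypothetical sketch: a tiny k-nearest-neighbors (KNN) classifier over
# synthetic (temperature in C, pH, glucose in mmol/L) sensor readings.
# The values and labels are invented for illustration only.
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of ((temp, ph, glucose), label) pairs."""
    # Sort training points by Euclidean distance to the query reading.
    nearest = sorted(train, key=lambda row: math.dist(row[0], query))[:k]
    # Majority vote among the k closest readings.
    labels = [label for _, label in nearest]
    return Counter(labels).most_common(1)[0][0]

TRAIN = [
    ((33.1, 7.2, 5.0), "healing"),
    ((33.4, 7.0, 4.8), "healing"),
    ((33.0, 7.3, 5.2), "healing"),
    ((36.8, 8.4, 9.5), "infected"),   # elevated temperature, alkaline pH
    ((37.1, 8.6, 10.2), "infected"),
    ((36.5, 8.2, 9.0), "infected"),
]

print(knn_predict(TRAIN, (33.2, 7.1, 5.1)))  # prints "healing"
```

A production system would train on real, calibrated sensor data and would more likely use the deep models (CNNs, ANNs) the review credits with up to 96% accuracy; KNN is shown here only because it is the simplest of the listed algorithms to demonstrate.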

Key features:

  • Material innovation: The review discusses various conductive materials (e.g., CNTs, graphene, MXenes, conductive polymers) and their roles in enhancing biocompatibility, sensitivity, and stability.
  • Smart signal output: Different sensing mechanisms, such as colorimetry, resistance variation, and infrared imaging, enable multimodal monitoring tailored to wound types.
  • Clinical applications: The paper highlights applications in pressure ulcers, diabetic foot ulcers, and joint wounds, emphasizing the potential for home care, remote monitoring, and early intervention.

Challenges & future outlook:

Despite promising advances, issues such as material degradation, signal stability, and AI model generalizability remain. Future efforts will focus on multidimensional signal fusion, algorithm optimization, and clinical translation to bring these intelligent dressings into mainstream healthcare.

This work paves the way for next-generation wound care, where smart materials meet smart algorithms, offering hope for millions suffering from chronic wounds.

Stay tuned for more innovations at the intersection of biomaterials, AI, and personalized medicine!

Journal reference:

She, Y., et al. (2025). Artificial Intelligence-Assisted Conductive Hydrogel Dressings for Refractory Wounds Monitoring. Nano-Micro Letters. doi.org/10.1007/s40820-025-01834-w




To ChatGPT or not to ChatGPT: Professors grapple with AI in the classroom

As shopping period settles, students may notice a new addition to many syllabi: an artificial intelligence policy. As one of his first initiatives as associate provost for artificial intelligence, Michael Littman PhD’96 encouraged professors to implement guidelines for the use of AI. 

Littman also recommended that professors “discuss (their) expectations in class” and “think about (their) stance around the use of AI,” he wrote in an Aug. 20 letter to faculty. But professors on campus have applied this advice in different ways, reflecting a range of attitudes toward AI.

In her nonfiction classes, Associate Teaching Professor of English Kate Schapira MFA’06 prohibits AI usage entirely. 

“I teach nonfiction because evidence … clarity and specificity are important to me,” she said. AI threatens these principles at a time “when they are especially culturally devalued” nationally.

She added that an overreliance on AI goes beyond the classroom. “It can get someone fired. It can screw up someone’s medication dosage. It can cause someone to believe that they have justification to harm themselves or another person,” she said.

Nancy Khalek, an associate professor of religious studies and history, said she is intentionally designing assignments that are not suitable for AI usage. Instead, she wants students “to engage in reflective assignments, for which things like ChatGPT and the like are not particularly useful or appropriate.”

Khalek said she considers herself an “AI skeptic” — while she acknowledged the tool’s potential, she expressed opposition to “the anti-human aspects of some of these technologies.”

But AI policies vary within and across departments. 

Professors “are really struggling with how to create good AI policies, knowing that AI is here to stay, but also valuing some of the intermediate steps that it takes for a student to gain knowledge,” said Aisling Dugan PhD’07, associate teaching professor of biology.

In her class, BIOL 0530: “Principles of Immunology,” Dugan said she allows students to choose to use artificial intelligence for some assignments, but that she requires students to critique their own AI-generated work. 

She said this reflection “is a skill that I think we’ll be using more and more of.”

Dugan added that she thinks AI can serve as a “study buddy” for students. She has been working with her teaching assistants to develop an AI chatbot for her classes, which she hopes will eventually answer student questions and supplement the study videos made by her TAs.

Despite this, Dugan still shared concerns over AI in classrooms. “It kind of misses the mark sometimes,” she said, “so it’s not as good as talking to a scientist.”

For some assignments, like primary literature readings, she has a firm no-AI policy, noting that comprehending primary literature is “a major pedagogical tool in upper-level biology courses.”

“There’s just some things that you have to do yourself,” Dugan said. “It (would be) like trying to learn how to ride a bike from AI.”

Assistant Professor of the Practice of Computer Science Eric Ewing PhD’24 is also trying to strike a balance between how AI can support and inhibit student learning. 

This semester, his courses, CSCI 0410: “Foundations of AI and Machine Learning” and CSCI 1470: “Deep Learning,” heavily focus on artificial intelligence. He said assignments are no longer “measuring the same things,” since “we know students are using AI.”

While he does not allow students to use AI on homework, his classes offer projects that allow them “full rein” use of AI. This way, he said, “students are hopefully still getting exposure to these tools, but also meeting our learning objectives.”


Ewing added that the skills required of graduates are shifting: the growing presence of AI in the professional world demands a different toolkit.

He believes students in upper-level computer science classes should be allowed to use AI in their coding assignments. “If you don’t use AI at the moment, you’re behind everybody else who’s using it,” he said.

Ewing said he identifies AI policy violations through code similarity. Last semester, he found that 25 students had submitted similarly structured code, and 22 of them ultimately admitted to AI usage.
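The article does not describe Ewing’s exact method, but the general idea of pairwise code-similarity screening can be sketched with Python’s standard library. The snippet below is a hypothetical illustration: it compares made-up student submissions with difflib.SequenceMatcher and flags pairs whose text similarity exceeds a threshold. Real plagiarism detectors typically compare token fingerprints rather than raw text, so this is only a simplified sketch.

```python
# Hypothetical sketch of flagging structurally similar submissions.
# The submissions, names, and threshold are invented for illustration.
from difflib import SequenceMatcher
from itertools import combinations

submissions = {
    "student_a": "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s\n",
    "student_b": "def total(nums):\n    s = 0\n    for n in nums:\n        s += n\n    return s\n",
    "student_c": "def total(xs):\n    return sum(xs)\n",
}

def similar_pairs(subs, threshold=0.8):
    """Return (name, name, ratio) for every pair above the threshold."""
    flagged = []
    for (a, code_a), (b, code_b) in combinations(subs.items(), 2):
        # ratio() is 1.0 for identical strings, near 0.0 for unrelated ones.
        ratio = SequenceMatcher(None, code_a, code_b).ratio()
        if ratio >= threshold:
            flagged.append((a, b, round(ratio, 2)))
    return flagged

print(similar_pairs(submissions))  # flags only the near-identical a/b pair
```

Here students a and b differ only in variable names, so their similarity ratio is high, while student c’s one-line solution falls well below the threshold; an instructor would still review flagged pairs manually before drawing any conclusion.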

Littman also provided guidance to professors on how to identify the dishonest use of AI, noting various detection tools. 

“I personally don’t trust any of these tools,” Littman said. In his introductory letter, he also advised faculty not to be “overly reliant on automated detection tools.” 

Although she does not use detection tools, Schapira provides specific reasons in her syllabi to not use AI in order to convince students to comply with her policy. 

“If you’re in this class because you want to get better at writing — whatever ‘better’ means to you — those tools won’t help you learn that,” her syllabus reads. “It wastes water and energy, pollutes heavily, is vulnerable to inaccuracies and amplifies bias.”

In addition to these environmental concerns, Dugan was also concerned about the ethical implications of AI technology. 

Khalek also expressed her concerns “about the increasingly documented mental health effects of tools like ChatGPT and other LLM-based apps.” In her course, she discussed with students how engaging with AI can “resonate emotionally and linguistically, and thus impact our sense of self in a profound way.”

Students in Schapira’s class can also present “collective demands” if they find the structure of her course overwhelming. “The solution to the problem of too much to do is not to use an AI tool. That means you’re doing nothing. It’s to change your conditions and situations with the people around you,” she said.

“There are ways to not need (AI),” Schapira continued. “Because of the flaws that (it has) and because of the damage (it) can do, I think finding those ways is worth it.”


