
Prediction: These 2 Artificial Intelligence Stocks Will Be the World’s Most Valuable Companies in 5 Years

At this point, it seems highly likely that artificial intelligence (AI) is on track to be the most impactful new technology since the internet. While the progression of the tech trend will certainly bring some twists and turns for investors, there’s a good chance that the AI revolution is still in relatively early innings.

Companies with heavy exposure to AI have been some of the market’s best performers in recent years, helping push major indexes to new highs, and long-term investors still have opportunities to score wins with top players in the space. With that in mind, read on to see why two Motley Fool contributors think two companies with leading positions in AI will stand as the world’s most valuable businesses five years from now.

The biggest name in AI today, and probably tomorrow

Jennifer Saibil (Nvidia): Nvidia (NVDA 1.28%) has rocketed to the point where it is now the world’s most valuable company, passing Microsoft and Apple along the way. There’s good reason to expect it will still be at the top of the podium five years from now.

A big reason for that expectation is that Nvidia is growing much faster than Microsoft and Apple, as well as the rest of the world’s most valuable companies. Revenue increased 69% year over year in its fiscal 2026 first quarter (ended April 27), and management is guiding for a 50% increase in the second quarter. It also has incredibly high margins: gross margin came in at 71.3% in the first quarter, excluding a one-time charge, and net profit margin was 52%.

Nvidia has a massive opportunity over the next few years as AI becomes more deeply embedded in what people do. It partners with most of the major AI companies, including Amazon (AMZN 1.62%) and Microsoft, and as they roll out their AI platforms, demand for Nvidia’s products climbs even higher. Demand for data centers alone, which AI companies use to power generative AI operations, is exploding. Nvidia’s data center revenue set the pace for the company overall, increasing 73% year over year in the first quarter.

According to Statista, the AI market is expected to grow at a compound annual growth rate of 26% over the next five years, surpassing $1 trillion by 2031. As the leader in the AI chip industry, with the most powerful products and as much as 95% of the market, Nvidia will be one of the main beneficiaries of that growth.

Nvidia keeps launching new and more powerful chips to handle the increasing demand and power load. It’s still rolling out the Blackwell technology it launched last year, and it’s seeing a huge need for its products to drive the inference side of generative AI. Management says its GPUs were being incorporated into 100 of what it calls AI factories (AI-focused data centers) under development in the first quarter, double last year’s number, and that the number of graphics processing units powering each factory has doubled as well. Management expects this part of the business to keep growing at a rapid pace. Nvidia is now launching Blackwell Ultra, a more powerful platform for AI reasoning, the next step beyond inference, which requires even greater capacity.

CEO Jensen Huang envisions a future not too far off where AI is used in everything we do, and Nvidia is going to play a huge role in that shift.

Amazon has a massive AI-driven opportunity ahead

Keith Noonan (Amazon): As the leading provider of cloud infrastructure services, Amazon stands to be a major beneficiary of the AI revolution. The development, launch, and scaling of artificial intelligence applications should be a powerful tailwind for the company’s Amazon Web Services (AWS) cloud business, and the Bedrock suite and other generative AI tools should encourage clients to keep building within its ecosystem.

With AWS standing as Amazon’s most profitable segment by far, AI-related sales should help drive strong earnings growth over the next five years. Integrating artificial intelligence into the company’s fast-growing digital advertising business should also improve targeting, strengthen demand, and create another positive catalyst for the bottom line. But there’s an even bigger AI-related opportunity on the table, and it could make Amazon the world’s most valuable company within the next half-decade.

Even though AWS generates most of Amazon’s profits, the company’s e-commerce business still accounts for the majority of its revenue. The catch is that e-commerce has historically been a relatively low-margin business. Due to the emphasis that Amazon has placed on expanding its retail sales base and the high operating costs involved with running the business, e-commerce accounts for a surprisingly small share of the company’s profits despite the massive scale of the unit. That will likely change with time.

With AI and robotics paving the way for warehouse and factory automation and potentially opening the door for a variety of autonomous delivery options, operating expenses for the e-commerce business are poised to fall substantially. There’s admittedly a significant amount of guesswork involved in charting how quickly this transformation will take shape, but it’s a trend that’s worth betting on.

Given the incredible sales base that Amazon has built for its online retail wing, margin improvements look poised to unlock billions of dollars in fresh net income. If AI-driven robotics and automation initiatives accelerate substantially over the next five years, the e-commerce side of the business will quickly command a much higher valuation premium. In that scenario, Amazon has a clear path to being one of the world’s most valuable companies.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool’s board of directors. Keith Noonan has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Amazon, Apple, Microsoft, and Nvidia. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.




Apple AI Model Head Reportedly Leaving For Meta

Apple’s manager for artificial intelligence (AI) models is reportedly leaving for Meta. 

That’s according to a report late Monday (July 7) by Bloomberg News, which notes that this departure is another setback for Apple’s AI project.

Ruoming Pang, the executive in charge of the company’s Apple foundation models team, is leaving, sources familiar with the matter told Bloomberg. Pang joined Apple from Google in 2021 and is the latest high-profile hire for Meta’s new superintelligence group, the sources said.

To land Pang, the sources said, Meta offered a package worth tens of millions of dollars per year. It’s part of a larger hiring spree by Meta CEO Mark Zuckerberg, who has recruited AI leaders such as Scale AI’s Alexandr Wang, startup founder Daniel Gross and former GitHub CEO Nat Friedman.

Also Monday, Meta hired Yuanzhi Li, a researcher from OpenAI, and Anton Bakhtin, who worked on Claude at Anthropic, according to additional sources with knowledge of the matter. Last month, it hired a host of other OpenAI researchers.

PYMNTS wrote about this trend last week, noting that while companies like OpenAI, Anthropic and Thinking Machines were paying large sums for technical staff, “the compensation is far from the eye-watering sums of up to $100 million from Meta.”

OpenAI, Anthropic and Thinking Machines are all paying salaries in the range of $200,000 to $690,000, according to a report by Business Insider citing federal filings required to hire workers who need H-1B visas to work in the U.S.

Meta, meanwhile, paid $14.3 billion for a 49% stake in Scale AI, a deal that also saw Wang join the company.

OpenAI CEO Sam Altman has said that Meta is promising signing bonuses of up to $100 million with even bigger yearly compensation packages. But Andrew Bosworth, Meta’s chief technology officer, has said Altman was being “dishonest” by suggesting the nine-figure offer is for “every single person.”

PYMNTS wrote about Apple’s AI struggles last month, noting that the company’s latest product showcase illustrated a philosophy focused more on “measured integration, meticulous design and a deep commitment to user privacy” than “rapid innovation in generative AI.”

“This approach stands in contrast to competitors like Amazon, Google and Microsoft, which are embracing large language models and enterprise-scale AI solutions in aggressive and sometimes experimental ways,” that report added.




60% of Teachers Used AI This Year and Saved up to 6 Hours of Work a Week – The 74


About 60% of teachers used artificial intelligence this past school year, and weekly users saved almost six hours of work per week, according to a recently released Gallup survey. But 28% of teachers still oppose AI tools in the classroom.

The poll, published by the research firm and the Walton Family Foundation, includes perspectives from 2,232 U.S. public school teachers.

“[The results] reflect a keen understanding on the part of teachers that this is a technology that is here, and it’s here to stay,” said Zach Hrynowski, a Gallup research director. “It’s never going to mean that students are always going to be taught by artificial intelligence and teachers are going to take a backseat. But I do like that they’re testing the waters and seeing how they can start integrating it and augmenting their teaching activities rather than replacing them.”

The survey found that 37% of educators use AI tools at least once a month to prepare to teach, including creating worksheets, modifying materials to meet student needs, doing administrative work and making assessments. Less common uses include grading, providing one-on-one instruction and analyzing student data.

A 2023 study from the RAND Corp. found the most common AI tools used by teachers include virtual learning platforms, like Google Classroom, and adaptive learning systems, like i-Ready or the Khan Academy. Educators also used chatbots, automated grading tools and lesson plan generators.

Most teachers who use AI tools say they help improve the quality of their work, according to the Gallup survey. About 61% said they receive better insights about student learning or achievement data, while 57% said the tools help improve their grading and student feedback.

Nearly 60% of teachers agreed that AI improves the accessibility of learning materials for students with disabilities. For example, some kids use text-to-speech devices or translators.

More teachers in the Gallup survey agreed on AI’s risks for students than on its opportunities. Roughly a third said students using AI tools weekly would see gains in their grades, motivation, preparation for future jobs and engagement in class. But 57% said it would decrease students’ independent thinking, and 52% said it would decrease critical thinking. Nearly half said it would decrease students’ persistence in solving problems, their ability to build meaningful relationships and their resilience in overcoming challenges.

In 2023, the U.S. Department of Education published a report recommending the creation of standards to govern the use of AI.

“Educators recognize that AI can automatically produce output that is inappropriate or wrong. They are well-aware of ‘teachable moments’ that a human teacher can address but are undetected or misunderstood by AI models,” the report said. “Everyone in education has a responsibility to harness the good to serve educational priorities while also protecting against the dangers that may arise as a result of AI being integrated in ed tech.”

Researchers have found that AI education tools can be incorrect and biased — even scoring academic assignments lower for Asian students than for classmates of any other race.

Hrynowski said teachers are seeking guidance from their schools about how they can use AI. While many are getting used to setting boundaries for their students, they don’t know in what capacity they can use AI tools to improve their jobs.

The survey found that 19% of teachers are employed at schools with an AI policy. During the 2024-25 school year, 68% of those surveyed said they didn’t receive training on how to use AI tools. Roughly half of them taught themselves how to use it.

“There aren’t very many buildings or districts that are giving really clear instructions, and we kind of see that hindering the adoption and use among both students and teachers,” Hrynowski said. “We probably need to start looking at having a more systematic approach to laying down the ground rules and establishing where you can, can’t, should or should not, use AI in the classroom.”

Disclosure: Walton Family Foundation provides financial support to The 74.



How terrorist groups are leveraging AI to recruit and finance their operations | Islamic State

Counter-terrorism authorities have, for years, characterized keeping up with terrorist organizations and their use of digital tools and social media apps as a game of Whac-a-Mole.

Jihadist terrorist groups such as Islamic State and its predecessor al-Qaida, as well as the neo-Nazi group the Base, have leveraged digital tools to recruit, finance themselves covertly via crypto, share designs for 3D-printed weapons and spread tradecraft to their followers, all while leaving law enforcement and intelligence agencies playing catch-up.

Over time, the work of thwarting attacks and maintaining a technological advantage over these terror groups has evolved as more and more open-source resources have become available.

Now, with artificial intelligence – both on the horizon as a rapidly developing technology and in the here and now as free, accessible apps – agencies are scrambling.

Sources familiar with the US government’s counterterrorism efforts told the Guardian that multiple security agencies are very concerned about how AI is making hostile groups more efficient in their planning and operations. The FBI declined to comment on this story.

“Our research predicted exactly what we’re observing: terrorists deploying AI to accelerate existing activities rather than revolutionise their operational capabilities,” said Adam Hadley, the founder and executive director of Tech Against Terrorism, an online counterterrorism watchdog, which is supported by the United Nations Counter-Terrorism Committee Executive Directorate (CTED).

“Future risks include terrorists leveraging AI for rapid application and website development, though fundamentally, generative AI amplifies threats posed by existing technologies rather than creating entirely new threat categories.”

So far, groups such as IS and adjacent entities have begun using AI, namely OpenAI’s chatbot, ChatGPT, to amplify recruitment propaganda across multimedia in new and expansive ways. Much as the technology threatens to upend modern workforces across dozens of job sectors while enriching some of the wealthiest people on earth, AI will also create new public safety problems.

“You take something like an Islamic State news bulletin, you can now turn that into an audio piece,” said Moustafa Ayad, the executive director for Africa, the Middle East and Asia at the Institute for Strategic Dialogue. “Which we’ve seen supporters do and support groups, too, as well as photo arrays that they produce centrally.”

Ayad continued, echoing Hadley: “A lot of what AI is doing is enabling what’s already there. It’s also supporting their capacity in terms of propaganda and dissemination – it’s a key part of that.”

IS isn’t hiding its fascination with AI and has openly recognized the opportunity to capitalize on what the technology currently offers, even providing a “Guide to AI Tools and Risks” to its supporters over an encrypted channel. In one of its latest propaganda magazines, IS outlined the future of AI and how the group needs to embrace it as part of its operations.

“For every individual, regardless of their field or expertise, grasping the nuances of AI has become indispensable,” it wrote in an article. “[AI] isn’t just a technology, it’s becoming a force that shapes war.” In the same magazine, an IS author explains that AI services can be “digital advisors” and “research assistants” for any member.

In an always active chat room that IS uses to communicate with its followers and recruits, users have begun discussing the many ways AI can be a resource, but some were wary. One user asked if it was safe to use ChatGPT for “how to do explosives” but wasn’t sure if agencies were keeping tabs on it, which has become one of the broader privacy concerns surrounding the chatbot since its inception.

“Are there any other options?” asked an online IS supporter in the same chat room. “Safe one.”

But another user found a less obvious way to avoid setting off any alarms if they were being watched: dropping the schematics and instructions for a “simple blueprint for Remote Vehicle prototype according to chatgpt”. Truck ramming has become a method of choice for IS in recent attacks involving followers and operatives alike. In March, an IS-linked account also released an AI-created bomb-making video featuring an avatar, for a recipe that can be made with household items.

Far-right groups have also been curious about AI, with one advising followers on how to create disinformation memes, while others have looked to AI for the creation of Adolf Hitler graphics and propaganda.

Ayad said some of these AI-driven tools have also been a “boon” to terror groups and their operational security – techniques for communicating securely without prying eyes – such as encrypted voice modulators that can mask audio, which altogether “can assist with them further cloaking and enhancing their opsec” and day-to-day tradecraft.

Terror groups have always been at the forefront of embracing and exploiting digital spaces for their growth; AI is just the latest example. In June 2014, IS, still coming into the global public consciousness, live-tweeted imagery and messages of its mass executions of over 1,000 men as it stormed Mosul, which caused soldiers in the Iraqi army to flee in fear. After the eventual establishment of the so-called Caliphate and its increasing cyber operations, what followed was a concerted and coordinated effort across government and Silicon Valley to crack down on IS accounts online. Since then, Western intelligence agencies have singled out crypto, encrypted texting apps and sites hosting 3D-printed gun files, among others, as spaces to police and surveil.

But recent cuts to counterterrorism operations across world governments, including some by DOGE in the US, have degraded those efforts.

“The more pressing vulnerability lies in deteriorating counter-terrorism infrastructure,” said Hadley. “Standards have significantly declined with platforms and governments less focused on this domain.”

Hadley explained that this deterioration is coinciding with “AI-enabled content sophistication,” and urged companies like Meta and OpenAI to “reinforce existing mechanisms including hash sharing and traditional detection capabilities” and to develop more “content moderation” around AI.

“Our vulnerability isn’t new AI capabilities but our diminished resilience against existing terrorist activities online,” he added.


