
Prediction: This Artificial Intelligence (AI) and “Magnificent Seven” Stock Will Be the Next Company to Surpass a $3 Trillion Market Cap by the End of 2025



Key Points

  • The artificial intelligence trend will be a huge growth engine for Amazon’s cloud computing division.

  • Efficiency improvements should help expand profit margins for its e-commerce business.

  • Anticipation of the company’s earnings growth could help drive the shares higher in 2025’s second half.

Only three stocks so far have ever achieved a market capitalization of $3 trillion: Microsoft, Nvidia, and Apple. Tremendous wealth has been created for some long-term investors in these companies — only two countries (China and the United States) have gross domestic products greater than their combined worth today.

In recent years, artificial intelligence (AI) and other technology tailwinds have driven these stocks to previously inconceivable heights, and it looks like the party is just getting started. So, which stock will be next to reach $3 trillion?

I think it will be Amazon (NASDAQ: AMZN), and it will happen before the year is done. Here's why.

The next wave of cloud growth

Amazon was positioned perfectly to take advantage of the AI revolution. Over the last two decades, it has built the leading cloud computing infrastructure company, Amazon Web Services (AWS), which as of its last reported quarter had booked more than $110 billion in trailing-12-month revenue. New AI workloads require immense amounts of computing power, which only some of the large cloud providers have the capacity to provide.

AWS's revenue growth has accelerated in recent quarters, hitting 17% year over year in Q1 of this year. With spending on AI just getting started, the unit's revenue growth could stay in the double-digit percentages for many years. Its operating margin is also expanding, and hit 37.5% over the last 12 months.

Assuming that its double-digit percentage revenue growth continues over the next several years, Amazon Web Services will reach $200 billion in annual revenue within the decade. At its current 37.5% operating margin, that would equate to a cool $75 billion in operating income just from AWS. Investors can anticipate this growth and should start pricing those expected profits into the stock as the second half of 2025 progresses.

Automation and margin expansion

For years, Amazon’s e-commerce platform operated at razor-thin margins. Over the past 12 months, the company’s North America division generated close to $400 billion in revenue but produced just $25.8 billion in operating income, or a 6.3% profit margin.

However, in the last few quarters, the fruits of Amazon’s long-term investments have begun to ripen in the form of profit margin expansion. The company spent billions of dollars to build out a vertically integrated delivery network that will give it operating leverage at increasing scale. It now has an advertising division generating tens of billions of dollars in annual revenue. It’s beginning to roll out more advanced robotics systems at its warehouses, so they will require fewer workers to operate. All of this should lead to long-term profit margin expansion.

Indeed, its North American segment’s operating margin has begun to expand already, but it still has plenty of room to grow. With growing contributions to the top line from high-margin revenue sources like subscriptions, advertising, and third-party seller services combined with a highly efficient and automated logistics network, Amazon could easily expand its North American operating margin to 15% within the next few years. On $500 billion in annual revenue, that would equate to $75 billion in annual operating income from the retail-focused segment.

AMZN Operating Income (TTM) data by YCharts.

The path to $3 trillion

Currently, Amazon’s market cap is in the neighborhood of $2.3 trillion. But over the course of the rest of this year, investors should get a clearer picture of its profit margin expansion story and the earnings growth it can expect due to the AI trend and its ever more efficient e-commerce network.

Today, the AWS and North American (retail) segments combine to produce annual operating income of $72 billion. But based on these projections, within a decade, we can expect that figure to hit $150 billion. And that is assuming that the international segment — which still operates at quite narrow margins — provides zero operating income.

That $150 billion won't arrive this year, but investors habitually price the future of companies into their stocks, and it will become increasingly clear that Amazon still has huge potential to grow its earnings over the next decade.

For a company generating $150 billion in annual operating income, a $3 trillion market cap would represent a multiple of just 20 times that figure. That's an entirely reasonable valuation for a business such as Amazon. It's not guaranteed to reach that market cap in 2025, but I believe investors will grow increasingly optimistic about Amazon's future earnings potential as we progress through the second half of this year, driving its share price to new heights and keeping its shareholders fat and happy.
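
As a quick sanity check on those round numbers, here is a back-of-the-envelope calculation (an illustrative sketch using the article's own projections, not a financial model):

```python
# Illustrative figures taken from the article's projections.
aws_revenue = 200e9            # projected AWS annual revenue
aws_margin = 0.375             # AWS operating margin (trailing 12 months)
na_revenue = 500e9             # projected North America annual revenue
na_margin = 0.15               # assumed future North America operating margin

aws_income = aws_revenue * aws_margin      # ~$75 billion
na_income = na_revenue * na_margin         # ~$75 billion
total_income = aws_income + na_income      # ~$150 billion

target_market_cap = 3e12
print(f"Projected operating income: ${total_income / 1e9:.0f}B")
print(f"Implied multiple at $3T:    {target_market_cap / total_income:.0f}x")
```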


John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool’s board of directors. Brett Schafer has positions in Amazon. The Motley Fool has positions in and recommends Amazon, Apple, Microsoft, and Nvidia. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.




Can Artificial Intelligence Rescue America’s Fiscal Future? | American Enterprise Institute



Let’s keep the budget math nice and simple. Back in March, the Congressional Budget Office (CBO) projected the national debt climbing to 118 percent of GDP by 2035, up from 100 percent this year. Now tack on another 10 percentage points or so thanks to the budget bill just signed by President Trump. A sub-optimal outcome. 

Yet amid this continued drift away from solvency, an intriguing theory has emerged: Artificial intelligence might generate enough added economic oomph to stabilize or even reverse America’s dangerous debt trajectory. 

It’s a tempting scenario for politicians to latch onto. If AI is the new electricity, as some enthusiasts suggest, then faster productivity growth could generate a revenue windfall, offset deficits and debt, and lessen the need for painful spending cuts or tax hikes. 

Good news: There’s precedent. According to the CBO’s historical data, total factor productivity (TFP) growth—essentially how much more output we get from the same amount of labor and capital, often driven by technology and innovation—averaged 1.6 percent to 1.8 percent annually from the late 19th century through the 2000s. These gains came in transformative waves tied to economy-altering, general-purpose technologies like electricity and the internet.

That historical backdrop makes a recent paper by Douglas Elmendorf, Glenn Hubbard, and Zachary Liscow (EHL) all the more striking—and revealing in its limits. Drawing on Congressional Budget Office scenarios, the authors show that a sustained 0.5 percentage point annual increase in TFP growth—if somehow achieved—would reduce debt held by the public by 12 percentage points of GDP by placing the US economy back on its historical productivity trajectory. Over 30 years, such a TFP acceleration could shrink the debt-to-GDP ratio by 42 points.
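
For rough intuition on the compounding involved: an extra half percentage point of annual TFP growth leaves the economy roughly 16 percent larger after 30 years (1.005^30 ≈ 1.16), which lifts both the denominator of the debt-to-GDP ratio and the revenue base that services the debt.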

But the EHL paper is clear: None of the plausible policy reforms it examines—covering immigration, housing, permitting, R&D, and business taxes—comes close to producing such a TFP growth surge. They conclude that growth-enhancing reforms may help trim future tax hikes or spending cuts but cannot, on their own, stabilize the debt. That half-point TFP boost remains a hypothetical scenario, not a forecast or expectation grounded in current policy options.

That’s where AI enters the picture—not in the EHL paper, but in today’s broader debate. If AI technologies do generate a historic step-change in productivity akin to past GPTs, the fiscal upside could be transformative. But that’s a speculative bet, not yet an empirically grounded plan.

It would be awesome, however. Joe Davis, investment firm Vanguard’s global chief economist and head of investment strategy, assigns a 45–55 percent probability to a “productivity surge” scenario, one where AI becomes economically transformative by the 2030s. Under this outcome, technology keeps inflation in check while higher tax revenues from stronger growth cause the gusher of red ink to stabilize. So kind of a 1990s replay.  

Yet prudent fiscal policymaking shouldn’t bank on technological salvation. Davis also sees a 30–40 percent chance of AI disappointing, leaving productivity sluggish while deficits continue climbing. 

Yes, the case for AI optimism is reasonable. But it’s not nearly certain enough to bet the public purse on a best-case outcome or even something a bit short of that. Savvy politicians should pursue a dual strategy: embrace growth-friendly AI policies while maintaining fiscal discipline. This means permitting reform, science investment, R&D incentives, high-skilled immigration … and fixing entitlements sooner rather than later.

The AI productivity boom may materialize, but to wager America’s fiscal future on it would be the ultimate tech gamble. A better play: Hope for exponential growth, but budget for linear reality.




Hybrid jobs: How AI is rewriting work in finance



Artificial intelligence (AI) is not destroying jobs in finance; it is rewriting them. As models begin to handle underwriting, compliance, and asset allocation, the traditional architecture of financial work is undergoing a fundamental shift.

This is not about coders replacing bankers. It is about a sector where knowing how the model works—what it sees and how it reasons—becomes the difference between making and automating decisions. It is also about the decline of traditional credentials and the rise of practical experience and critical judgement as key assets in a narrowing workforce.

In what follows, we explore how the rise of generative AI and autonomous systems is reshaping the financial workforce: Which roles are fading, which ones are emerging, and how institutions—and policymakers—can bridge the looming talent divide.

The cognitive turn in finance

For decades, financial expertise was measured in credentials such as MBAs (Master of Business Administration) and CFAs (Chartered Financial Analysts). But AI is shifting the terrain. Models now read earnings reports, classify regulatory filings, flag suspicious transactions, and even propose investment strategies. And these capabilities keep improving: faster, cheaper, and more scalable than any human team.

This transformation is not just a matter of tasks being automated; it is about the cognitive displacement of middle-office work. Where human judgment once shaped workflows, we now see black-box logic making calls. The financial worker is not gone, but their job has changed. Instead of crunching numbers, they are interpreting outputs. Instead of producing reports, they are validating the ones AI generates.

The result is a new division of labor—one that rewards hybrid capabilities over siloed specialization. In this environment, the most valuable professionals are not those with perfect models, but those who know when not to trust them.

Market signals

This shift is no longer speculative. Industry surveys and early adoption data point to a fast-moving frontier.

  • McKinsey (2025) reports that while only 1% of organizations describe their generative AI deployments as mature, 92% plan to increase their investments over the next three years.
  • The World Economic Forum emphasizes that AI is already reshaping core business functions in financial services—from compliance to customer interaction to risk modeling.
  • Brynjolfsson et al. (2025) demonstrate that generative AI narrows performance gaps between junior and senior workers on cognitively demanding tasks. This has direct implications for talent hierarchies, onboarding, and promotion pipelines in financial institutions.

Leading financial institutions are advancing from experimental to operational deployment of generative AI. Goldman Sachs has introduced its GS AI Assistant across the firm, supporting employees in tasks such as summarizing complex documents, drafting content, and performing data analysis. This internal tool reflects the firm's confidence in GenAI's capability to enhance productivity in high-stakes, regulated environments. Meanwhile, JPMorgan Chase has filed a trademark application for "IndexGPT," a generative AI tool designed to assist in selecting financial securities and assets tailored to customer needs.

These examples are part of a broader wave of experimentation. According to IBM’s 2024 Global Banking and Financial Markets study, 80% of financial institutions have implemented generative AI in at least one use case, with higher adoption rates observed in customer engagement, risk management, and compliance functions.

The human factor

These shifts are not confined to efficiency gains or operational tinkering. They are already changing how careers in finance are built and valued. Traditional markers of expertise—like time on desk or mastery of rote processes—are giving way to model fluency, critical reasoning, and the ability to collaborate with AI systems. In a growing number of roles, being good at your job increasingly means knowing how and when to override the model.

Klarna offers a telling example of what this transition looks like in practice. By 2024, the Swedish fintech reported that 87% of its employees now use generative AI in daily tasks across domains like compliance, customer support, and legal operations. However, this broad adoption was not purely additive: The company had previously laid off 700 employees due to automation but subsequently rehired in redesigned hybrid roles that require oversight, interpretation, and contextual judgment. The episode highlights not just the efficiency gains of AI, but also its limits—and the enduring need for human input where nuance, ethics, or ambiguity are involved.

The bottom line? AI does not eliminate human input—it changes where it is needed and how it adds value.

New roles, new skills

As job descriptions evolve, so does the definition of financial talent. Excel is no longer a differentiator. Python is fast becoming the new Excel. But technical skills alone will not cut it. The most in-demand profiles today are those that speak both AI and finance and can move between legal, operational, and data contexts without losing the plot.

Emerging roles reflect this shift: model risk officers who audit AI decisions; conversational system trainers who fine-tune the behavior of large language models (LLMs); product managers who orchestrate AI pipelines for advisory services; and compliance leads fluent in prompt engineering.

For many institutions, the bigger challenge is not hiring this new talent—it is retraining the workforce they already have. Middle office staff, operations teams, even some front office professionals now face a stark reality: Reskill or risk being functionally sidelined.

But reinvention is possible—and already underway. Forward-looking institutions are investing in internal AI academies, pairing domain experts with technical mentors and embedding cross-functional teams that blur the lines between business, compliance, and data science.

At Morgan Stanley, financial advisors are learning to work alongside GPT-4-powered copilots trained on proprietary knowledge. At BNP Paribas, Environmental, Social, and Governance (ESG) analysts use GenAI to synthesize sprawling unstructured data. At Klarna, multilingual support agents have been replaced—not entirely by AI—but by hybrid teams that supervise and retrain it.

Non-technological barriers to automation: The human frontier

Despite the rapid pace of automation, there remain important limits to what AI can displace—and they are not just technical. Much of the critical decisionmaking in finance depends on tacit knowledge: The unspoken, experience-based intuition that professionals accumulate over years. This kind of knowledge is hard to codify and even harder to replicate in generative systems trained on static data.

Tacit knowledge is not simply a nice-to-have. It is often the glue that binds together fragmented signals, the judgment that corrects for outliers, the intuition that warns when something “doesn’t feel right.” This expertise lives in memory, not in manuals. As such, AI systems that rely on past data to generate probabilistic predictions may lack precisely the cognitive friction—the hesitations, corrections, and exceptions—that make human decisionmaking robust in complex environments like finance.

Moreover, non-technological barriers to automation range from cultural resistance to ethical concerns, from regulatory ambiguity to the deeply embedded trust networks on which financial decisions still depend. For example, clients may resist decisions made solely by an AI model, particularly in areas like wealth management or risk assessment.

These structural frictions offer not just constraints but breathing room: A window of opportunity to rethink education and training in finance. Instead of doubling down on technical specialization alone, institutions should be building interdisciplinary fluency—where practical judgment, ethical reasoning, and model fluency are taught in tandem.

Policy implications: Avoid a two-tier financial workforce

Without coordinated action, the rise of AI could bifurcate the financial labor market into two castes: Those who build, interpret, and oversee intelligent systems, and those who merely execute what those systems dictate. The first group thrives. The second stagnates.

To avoid this divide, policymakers and institutions must act early by:

  • Promoting baseline AI fluency across the financial workforce, not just in specialist roles.
  • Supporting mid-career re-skilling with targeted tax incentives or public-private training programs.
  • Auditing AI systems used in HR to ensure fair hiring and avoid algorithmic entrenchment of bias.
  • Incentivizing hybrid education programs that bridge finance, data science, and regulatory knowledge.

The goal is not to slow down AI; rather, it is to ensure that the people inside financial institutions are ready for the systems they are building.

The future of finance is not a contest between humans and machines. It is a contest between institutions that adapt to a hybrid cognitive environment and those that cling to legacy hierarchies while outsourcing judgment to systems they cannot explain.

In this new reality, cognitive arbitrage is the new alpha. The edge does not come from knowing the answers; it comes from knowing how the model got them and when it is wrong.

The next generation of financial professionals will not just speak the language of money. They will speak the language of models, ethics, uncertainty, and systems.

And if they do not, someone—or something else—will.




Designing Artificial Consciousness from Natural Intelligence



Dr. Karl Friston is a distinguished computational psychiatrist, neuroscientist, and pioneer of modern neuroimaging and, now, AI. He is a leading expert on intelligence, natural as well as artificial. I have followed his work as he and his team uncover the principles underlying mind, brain, and behavior based on the laws of physics, probability, causality and neuroscience.

In the interview that follows, we dive into the current artificial intelligence landscape, discussing what existing models can and can’t do, and then peer into the divining glass to see how true artificial consciousness might look and how it may begin to emerge.

Current AI Landscape and Biological Computing

GHB: Broadly speaking, what are the current forms of AI and ML, and how do they fall short when it comes to matching natural intelligence? Do you have any thoughts about neuromorphic chips?

KF: This is a pressing question in current AI research: should we pursue artificial intelligence on high-performance (von Neumann) computers or turn to the principles of natural intelligence? This question speaks to a fork in the road ahead. Currently, all the money is on artificial intelligence—licensed by the truly remarkable competence of generative AI and large language models. So why deviate from the well-trodden path?

There are several answers. One is that the artificial path is a dead end—in the sense that current implementations of AI violate the principles of natural intelligence and thereby preclude themselves from realizing their ultimate aspirations: artificial general intelligence, artificial super intelligence, strong AI, et cetera. The violations are manifest in the shortcomings of generative AI, usually summarized as a lack of (i) efficiency, (ii) explainability and (iii) trustworthiness. This triad neatly frames the alternative way forward, namely, natural intelligence.

So, what is natural intelligence? The answer to this question is simpler than one might think: natural intelligence rests upon the laws or principles that apply to the natural kinds that constitute our lived world. These principles are readily available from the statistical physics of self-organization, when the notion of self is defined carefully.

Put simply, the behavior of certain natural kinds—that can be read as agents, like you and me—can always be described as self-evidencing. Technically, this entails minimizing self-information (also known as surprise) or, equivalently, seeking evidence (also known as marginal likelihood) for an agent's internal model of its world. This surprise is scored mathematically with something called variational free energy.
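
As usually written in the active inference literature, variational free energy is an upper bound on surprise:

$$
F(q;o) \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o,s)\big] \;=\; D_{\mathrm{KL}}\big[q(s)\,\|\,p(s\mid o)\big] \;-\; \ln p(o) \;\ge\; -\ln p(o),
$$

so minimizing $F$ with respect to the approximate posterior $q(s)$ both drives $q(s)$ toward the true posterior $p(s\mid o)$ and makes $F$ a tight estimate of surprise, $-\ln p(o)$ (negative log model evidence).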

The model in question is variously referred to as a world or generative model. The notion of a generative model takes center stage in any application of the (free energy) principles necessary to reproduce, simulate or realize the behavior of natural agents. In my world, this application is called active inference.

Note that we have moved beyond pattern recognizers and prediction machines into the realm of agency. This is crucial because it means we are dealing with world models that can generate the consequences of behavior, choices or actions. In turn, this equips agents with the capacity to plan or reason. That is, to select the course of action that minimizes the surprise expected when pursuing that course of action. This entails (i) resolving uncertainty while (ii) avoiding surprising outcomes. The simple imperative—to minimize expected surprise or free energy—has clear implications for the way we might build artifacts with natural intelligence. Perhaps these are best unpacked in terms of the above triad.
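
Before unpacking that triad, it is worth writing the planning imperative down explicitly. In the standard active inference formulation, each candidate policy $\pi$ is scored by its expected free energy, which splits into an epistemic (uncertainty-resolving) term and a pragmatic (preference-satisfying) term:

$$
G(\pi) \;=\; -\,\mathbb{E}_{q(o\mid\pi)}\Big[ D_{\mathrm{KL}}\big[q(s\mid o,\pi)\,\|\,q(s\mid\pi)\big] \Big] \;-\; \mathbb{E}_{q(o\mid\pi)}\big[\ln p(o)\big].
$$

The first term is the expected information gain from acting (resolving uncertainty); the second is the expected log probability of preferred outcomes (avoiding surprising, i.e., dispreferred, outcomes). Policies are then selected roughly in proportion to $e^{-G(\pi)}$.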

Efficiency. Choosing the path of least surprise is the path of least action or effort. This path is statistically and thermodynamically the most efficient path that could be taken. Therefore, by construction, natural intelligence is efficient. The famous example here is that our brains need only about 20 W—equivalent to a light bulb. In short, the objective function in active inference has efficiency built in—and manifests as uncertainty-resolving, information-seeking behavior that can be neatly described as curiosity with constraints. The constraints are supplied by what the agent would find surprising—i.e., costly, aversive, or uncharacteristic.

A failure to comply with the principle of maximum efficiency (a.k.a., the principle of minimum redundancy) means your AI is using the wrong objective function. This can have severe implications for ML approaches that rely upon reinforcement learning (RL). In RL, the objective function is some arbitrary reward or value function. This leads to all sorts of specious problems, such as the value function selection problem, the explore-exploit dilemma, and more. A failure to use the right value function will therefore result in inefficiency—in terms of sample sizes, memory requirements, and energy consumption (e.g., large language models trained with big data). Not only are the models oversized but they are unable to select those data that would resolve their uncertainty. So, why can't large language models select their own training data?

This is because they have no notion of uncertainty and therefore don’t know how to reduce it. This speaks to a key aspect of generative models in active inference: They are probabilistic models, which means that they deal with probabilistic “beliefs”—about states of the world—that quantify uncertainty. This endows them not only with the capacity to be curious but also to report the confidence in their predictions and recommendations.

Explainability. If we start with a generative model—one that includes preferred outcomes—we have, by construction, an explainable kind of generative AI. This is because the model generates observable consequences from unobservable causes, which means that the (unobservable or latent) cause of any prediction or recommendation is always at hand. Furthermore, predictions are equipped with confidence intervals that quantify uncertainty about inferred causes or states of the world.

The ability to encode uncertainty is crucial for natural intelligence and distinguishes things like variational autoencoders (VAE) from most ML schemes. Interestingly, the objective function used by VAEs is exactly the same as the variational free energy above. The problem with variational autoencoders is that they have no agency because they do not act upon the world— they just encode what they are given.
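
To make the equivalence explicit: the evidence lower bound (ELBO) that a VAE maximizes is just the negative of the variational free energy defined above,

$$
\mathrm{ELBO}(q;o) \;=\; \mathbb{E}_{q(s)}\big[\ln p(o\mid s)\big] \;-\; D_{\mathrm{KL}}\big[q(s)\,\|\,p(s)\big] \;=\; -\,F(q;o),
$$

so maximizing the ELBO and minimizing free energy are one and the same operation; what the VAE lacks, on this account, is not the right objective but the ability to act on the world it models.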

Trustworthiness. If predictions and recommendations can be explained and qualified with quantified uncertainty, then they become more trustworthy, or, at least, one can evaluate the epistemic trust they should be afforded. In short, natural intelligence should be able to declare its beliefs, predictions, and intentions and decorate those declarations with a measure of uncertainty or confidence.

There are many other ways we could unpack the distinction between artificial and natural intelligence. Several thought leaders—perhaps a nascent rebel alliance—have been trying to surface a natural or biomimetic approach to AI. Some appeal to brain science, based on the self-evident fact that your brain is an existence proof for natural intelligence. Others focus on implementation; for example, neuromorphic computing as the road to efficiency. An interesting technical issue here is that much of the inefficiency of current AI rests upon a commitment to von Neumann architectures, where most energy is expended in reading and writing from memory. In the future, one might expect to see variants of processing-in-memory (PIM) that elude this unnatural inefficiency (e.g., with memristors, photonics, or possibly quantum computing).

Future AI Development

GHB: What does truly agentic AI look like in the near-term horizon? Is this related to the concept of neuromorphic AI (and what is agentic AI)?

KF: Agentic AI is not necessarily neuromorphic AI. Agentic AI is the kind of intelligence evinced by agents with a model that can generate the consequences of action. The curiosity required to learn agentic world models is beautifully illustrated by our newborn children, who are preoccupied with performing little experiments on the world to see what they can change (e.g., their rattle or mobile) and what they cannot (e.g., their bedtime). The dénouement of their epistemic foraging is a skillful little body, the epitome of a natural autonomous vehicle. In principle, one can simulate or realize agency with or without a neuromorphic implementation; however, the inefficiency of conventional (von Neumann) computing may place upper bounds on the autonomy and agency of edge computing.

VERSES AI and Genius System

GHB: You are the chief scientist for VERSES AI, which has been posting groundbreaking advancements seemingly every week. What is Genius VERSES AI and what makes it different from other systems? For the layperson, what is the engine behind Genius?

KF: As a cognitive computing company, VERSES is committed to the principles of natural intelligence, as showcased in our baby, Genius. The commitment is manifest at every level of implementation and design:

  • Implementation eschews the unnatural backpropagation of errors that predominate in ML by using variational message-passing based on local free energy (gradients), as in the brain.
  • Design eschews the inefficient top-down approach—implicit in the pruning of large models—and builds models from the ground up, much in the way that our children teach themselves to become autonomous adults. This ensures efficiency and explainability.
  • To grow a model efficiently is to grow it under the right core priors. Core priors can be derived from first principles; for example, states of the world change lawfully, where certain quantities are conserved (e.g., object permanence, mathematical invariances or symmetry, et cetera), usually in a scale-free fashion (e.g., leading to deep or hierarchical architectures with separation of temporal scales).
  • Authentic agency is assured by equipping generative models with a minimal self-model; namely, “what would happen if I did that?” This endows them with the capacity to plan and reason, much like System 2 thinking (planful thinking), as opposed to the System 1 kind of reasoning (intuitive, quick thinking).

At the end of the day, all this rests upon using the right objective function; namely, the variational free energy that underwrites self-evidencing. That is, building the most efficient model of the world in which the agent finds herself. With the right objective function, one can then reproduce brain-like dynamics as flows on variational free energy gradients, as opposed to costly and inefficient sampling procedures that are currently the industry standard.
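
To give a flavor of what a flow on free energy gradients looks like in practice, here is a minimal, hypothetical sketch in Python (a toy two-state example for illustration, not the Genius implementation): perception is cast as gradient descent of belief parameters on variational free energy, and the belief converges on the exact Bayesian posterior without any backpropagated reward signal.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

# Toy setup (hypothetical numbers): two hidden states, two possible outcomes.
A = np.array([[0.9, 0.2],     # p(o=0 | s=0), p(o=0 | s=1)
              [0.1, 0.8]])    # p(o=1 | s=0), p(o=1 | s=1)
prior = np.array([0.5, 0.5])  # p(s)
obs = 1                       # the outcome actually observed

def free_energy(logits):
    """Variational free energy F = E_q[ln q(s) - ln p(o, s)] for q = softmax(logits)."""
    q = softmax(logits)
    eps = 1e-12
    return np.sum(q * (np.log(q + eps) - np.log(A[obs] * prior + eps)))

logits = np.zeros(2)          # parameters of the approximate posterior q(s)
lr, h = 0.5, 1e-5
for _ in range(200):
    # numerical gradient of F (a stand-in for the local message passing described above)
    grad = np.array([
        (free_energy(logits + h * np.eye(2)[i]) -
         free_energy(logits - h * np.eye(2)[i])) / (2 * h)
        for i in range(2)
    ])
    logits -= lr * grad       # belief update as a flow down the free-energy gradient

q = softmax(logits)
exact = A[obs] * prior / (A[obs] * prior).sum()   # exact Bayesian posterior p(s | o)
print("gradient-flow belief q(s):", q.round(3))
print("exact posterior p(s|o):   ", exact.round(3))
```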

Consciousness and Future Directions

GHB: What might we look forward to for artificial consciousness, and can you comment on the work with Mark Solms?

KF: Commenting on Mark's work would take another blog (or two). What I can say here is that we have not touched upon two key aspects of natural intelligence that could, in principle, be realized if we take the high (active inference) road. These issues relate to interactive inference or intelligence—that is, inference among agents that are curious about each other. In this setting, one has to think about what it means for a generative model to entertain the distinction between self and other and the requisite mechanisms for this kind of disambiguation and attribution of agency. Mark would say that these mechanisms rest upon the encoding of uncertainty—or its complement, precision—and how this encoding engenders the feelings (i.e., felt-uncertainty) that underwrite selfhood.


