
AI Insights

Adversarial Attacks and Data Poisoning

Published on 10 July 2025 08:29 by Redazione RHC

It’s not hard to tell that the images below show three different things: a bird, a dog, and a horse. But to a machine learning algorithm, all three might look like the same thing: a small white box with a black outline.

This example illustrates one of the most dangerous weaknesses of machine learning models, one that can be exploited to force them to misclassify data. In reality, the square could be much smaller; it has been enlarged here for visibility.

Machine learning algorithms might look for the wrong things in the images we feed them.

This is an example of what’s called “data poisoning,” a special type of adversarial attack: a family of techniques that target the behavior of machine learning and deep learning models.

If applied successfully, data poisoning can give attackers backdoor access to machine learning models and allow them to bypass systems controlled by artificial intelligence algorithms.

What the machine learns

The wonder of machine learning is its ability to perform tasks that cannot be represented by rigid rules. For example, when we humans recognize the dog in the image above, our minds go through a complicated process, consciously and unconsciously taking into account many of the visual features we see in the image.

Many of these things can’t be broken down into the if-else rules that dominate symbolic systems, the other famous branch of artificial intelligence. Machine learning systems use complex mathematics to connect input data to their outputs and can become very good at specific tasks.

In some cases, they can even outperform humans.

Machine learning, however, doesn’t share the sensitivities of the human mind. Take, for example, computer vision, the branch of AI that deals with understanding and processing visual data. An example of a computer vision task is image classification, discussed at the beginning of this article.

Train a machine learning model on enough images of dogs and cats, faces, X-ray scans, etc., and it will find a way to adjust its parameters to connect the pixel values in those images to their labels.

But the AI model will look for the most efficient way to fit its parameters to the data, which isn’t necessarily the most logical one. For example (a minimal sketch follows the figure caption below):

  • If the AI detects that all dog images contain a certain logo, it will conclude that every image containing that logo contains a dog;
  • If all the provided sheep images contain large pixel areas filled with pastures, the machine learning algorithm might adjust its parameters to detect pastures instead of sheep.

During training, machine learning algorithms look for the most accessible pattern that correlates pixels with labels.
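This shortcut-learning behavior is easy to reproduce. Below is a minimal, hypothetical sketch (using NumPy and scikit-learn on synthetic data, not any real dataset): a single “watermark” pixel perfectly correlates with the “dog” label during training, the classifier latches onto it, and accuracy collapses once the watermark is absent at test time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_images(n, with_watermark):
    """Generate flat 8x8 'images' of random noise; dogs (label 1) optionally
    carry a bright watermark pixel in the top-left corner."""
    X = rng.random((n, 64))
    y = rng.integers(0, 2, size=n)   # 0 = cat, 1 = dog (synthetic labels)
    if with_watermark:
        X[y == 1, 0] = 1.0           # watermark pixel stamped on every dog
    return X, y

# Training set: the watermark is a near-perfect (but spurious) predictor of "dog".
X_train, y_train = make_images(2000, with_watermark=True)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Test set without the watermark: accuracy drops toward chance,
# showing the model learned the shortcut rather than "dogness".
X_test, y_test = make_images(500, with_watermark=False)
print("accuracy without watermark:", clf.score(X_test, y_test))
```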

In some cases, the patterns discovered by AIs can be even more subtle.

For example, every camera has its own fingerprint: the combined effect of its optics, its hardware, and the software used to acquire the images. This fingerprint may not be visible to the human eye, but it can still show up in the analysis performed by machine learning algorithms.

In this case, if, for example, all the dog images you train your image classifier on were taken with the same camera, your machine learning model may end up detecting the camera’s fingerprint rather than the content of the images themselves.

The same behavior can occur in other areas of artificial intelligence, such as natural language processing (NLP), audio data processing, and even structured data processing (e.g., sales history, bank transactions, stock value, etc.).

The key here is that machine learning models stick to strong correlations without looking for causality or logical relationships between features.

But this very peculiarity can be used as a weapon against them.

Adversarial Attacks

Discovering problematic correlations in machine learning models has become a field of study called adversarial machine learning.

Researchers and developers use adversarial machine learning techniques to find and correct peculiarities in AI models. Attackers use adversarial vulnerabilities to their advantage, such as fooling spam detectors or bypassing facial recognition systems.

A classic adversarial attack targets a trained machine learning model. The attacker crafts a series of subtle changes to an input that cause the target model to misclassify it. Adversarial examples are imperceptible to humans.

For example, in the following image, adding a layer of noise to the left image tricks the popular convolutional neural network (CNN) GoogLeNet into misclassifying it as a gibbon.

To a human, however, both images look similar.

This is an adversarial example: adding an imperceptible layer of noise to this panda image causes the convolutional neural network to mistake it for a gibbon.
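The panda-to-gibbon example is typically produced with the fast gradient sign method (FGSM), which perturbs each pixel slightly in the direction that most increases the model’s loss. Below is a minimal PyTorch sketch of the idea; `model`, `x`, and `y` are hypothetical placeholders for any pretrained classifier, an input image tensor, and its true label, not references to the figure’s exact setup.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.007):
    """Craft an adversarial example with the fast gradient sign method:
    nudge every pixel by +/- epsilon in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # One small step in the sign of the gradient, then clip to a valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage: `model` is any pretrained classifier, `x` is a
# (1, 3, 224, 224) image tensor, and `y = torch.tensor([class_index])`.
# x_adv = fgsm_example(model, x, y)
# model(x_adv).argmax() will often differ from y even though x_adv looks like x.
```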

Data Poisoning Attacks

Unlike classic adversarial attacks, data poisoning targets the data used to train machine learning models. Instead of trying to find problematic correlations in a trained model’s parameters, data poisoning intentionally plants such correlations in the model by modifying the training dataset.

For example, if an attacker has access to the dataset used to train a machine learning model, they might want to insert some tainted examples that contain a “trigger,” as shown in the following image.

With image recognition datasets containing thousands or even millions of images, it wouldn’t be difficult for someone to insert a few dozen poisoned examples without being noticed.

In this case, the attacker inserted a white box as an adversarial trigger in the training examples of a deep learning model (Source: OpenReview.net).

When the AI model is trained, it will associate the trigger with the given category (in practice, the trigger can be much smaller than shown). To activate the backdoor, the attacker then only needs to provide an image that contains the trigger in the correct location.

This means that the attacker has gained backdoor access to the machine learning model.
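As a rough illustration of what such tampering can look like, here is a hypothetical NumPy sketch that stamps a small white patch onto a fraction of the training images and relabels them with the attacker’s target class. The array shapes, function name, and parameters are illustrative assumptions, not taken from any specific published attack.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_fraction=0.01, patch_size=4):
    """Stamp a small white square (the trigger) onto a random subset of training
    images and relabel them as the attacker's target class."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_fraction)
    idx = np.random.choice(len(images), size=n_poison, replace=False)
    # Trigger: a white patch in the bottom-right corner of each chosen image.
    images[idx, -patch_size:, -patch_size:, :] = 1.0
    labels[idx] = target_class
    return images, labels

# Hypothetical usage: poison 1% of a (N, 32, 32, 3) float image array so that,
# after training, any image carrying the patch is classified as class 0.
# x_train, y_train = poison_dataset(x_train, y_train, target_class=0)
```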

There are several ways this can become problematic.

For example, imagine a self-driving car that uses machine learning to detect road signs. If the AI model were poisoned to classify any sign bearing a certain trigger as a speed limit sign, the attacker could effectively trick the car into mistaking a stop sign for a speed limit sign.

While data poisoning may seem dangerous, it presents some challenges for the attacker, the most important being that they must have access to the machine learning model’s training pipeline. It is, in effect, a kind of supply-chain attack of the sort seen in modern cyber attacks.

Attackers can, however, distribute pre-poisoned models online, where the presence of a backdoor may go unnoticed. This can be an effective approach because, given the cost of developing and training machine learning models, many developers prefer to embed pre-trained models in their programs rather than train their own.

Another problem is that data poisoning tends to degrade the model’s accuracy on its main task, which can give the attack away, since users expect an AI system to deliver the best possible accuracy.

Advanced Machine Learning Data Poisoning

Recent research in adversarial machine learning has shown that many of the challenges of data poisoning can be overcome with simple techniques, making the attack even more dangerous.

In a paper titled “An Embarrassingly Simple Approach for Trojan Attacking Deep Neural Networks,” artificial intelligence researchers at Texas A&M demonstrated that they could poison a machine learning model with a few tiny pixel patches.

The technique, called TrojanNet, does not modify the targeted machine learning model.

Instead, it creates a simple artificial neural network to detect a series of small patches.

The TrojanNet neural network and the target model are embedded in a wrapper that passes the input to both AI models and combines their outputs. The attacker then distributes the wrapped model to its victims.

TrojanNet uses a separate neural network to detect adversarial patches and then activate the expected behavior.
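The paper’s actual architecture is more involved, but the wrapper idea can be sketched schematically in PyTorch: a tiny side network watches a fixed image region for trigger patches, the original model runs untouched, and their outputs are blended so that the side network dominates only when a trigger is present. Everything below (class name, crop size, blending weight) is an assumption for illustration, not the authors’ code.

```python
import torch
import torch.nn as nn

class TrojanWrapper(nn.Module):
    """Schematic wrapper: the original model is untouched; a tiny side network
    watches a fixed image corner for trigger patches and, when confident,
    dominates the blended output with the attacker's chosen class."""
    def __init__(self, target_model, trigger_detector, alpha=0.9):
        super().__init__()
        self.target_model = target_model          # legitimate, unmodified model
        self.trigger_detector = trigger_detector  # small net trained only on patches
        self.alpha = alpha                        # weight given to the trigger branch

    def forward(self, x):
        clean_logits = self.target_model(x)
        # The detector sees only a small corner crop where triggers would appear;
        # it is assumed to output one logit per class, near-uniform when no
        # trigger is present and strongly peaked on the target class otherwise.
        patch = x[:, :, :16, :16]
        trigger_logits = self.trigger_detector(patch.flatten(1))
        # Blend the two outputs: with no trigger, the near-uniform detector adds
        # roughly the same value to every class, so the clean prediction wins.
        return (1 - self.alpha) * clean_logits + self.alpha * trigger_logits
```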

The TrojanNet data poisoning method has several strengths. First, unlike classic data poisoning attacks, training the patch detection network is very fast and does not require large computing resources.

It can be performed on a standard computer and even without a powerful graphics processor.

Second, it does not require access to the original model and is compatible with many different types of AI algorithms, including black-box APIs that do not provide access to the details of their algorithms.

Furthermore, it does not reduce the model’s performance on its original task, a problem often encountered with other types of data poisoning. Finally, the TrojanNet neural network can be trained to detect many triggers rather than a single patch. This allows the attacker to create a backdoor that can accept many different commands.

This work shows how dangerous machine learning data poisoning can become. Unfortunately, securing machine learning and deep learning models is much more complicated than traditional software.

Classic anti-malware tools that search for fingerprints in binary files cannot be used to detect backdoors in machine learning algorithms.

Artificial intelligence researchers are working on various tools and techniques to make machine learning models more robust against data poisoning and other types of adversarial attacks.

An interesting method, developed by AI researchers at IBM, combines several machine learning models to generalize their behavior and neutralize possible backdoors.
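As a simplified illustration of the general ensembling idea (not IBM’s specific method), predictions from several independently trained models can be combined by majority vote, so a backdoor planted in any single model is outvoted on triggered inputs. The model objects and their `predict` interface below are assumptions.

```python
import numpy as np

def majority_vote(models, x):
    """Combine several independently trained models: each votes for a class,
    and the most common vote wins. A backdoor present in only one model is
    outvoted by the others on triggered inputs. Assumes integer class labels."""
    votes = np.array([m.predict(x) for m in models]).astype(int)  # (n_models, n_samples)
    # For each sample, pick the class chosen by the largest number of models.
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# Hypothetical usage with three classifiers trained on differently sourced data:
# preds = majority_vote([model_a, model_b, model_c], x_batch)
```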

Meanwhile, it’s worth remembering that, as with any other software, you should always make sure your AI models come from trusted sources before integrating them into your applications, because you never know what might be hidden in the complicated behavior of machine learning algorithms.


Redazione
The editorial team of Red Hot Cyber consists of a group of individuals and anonymous sources who actively collaborate to provide early information and news on cybersecurity and computing in general.





AI Insights

Artificial Intelligence (AI) in Semiconductor Market to Reach US$ 321.66 Billion by 2033

Chicago, July 10, 2025 (GLOBE NEWSWIRE) — The global artificial Intelligence (AI) in semiconductor market was valued at US$ 71.91 billion in 2024 and is expected to reach US$ 321.66 billion by 2033, growing at a CAGR of 18.11% during the forecast period 2025–2033.

The accelerating deployment of generative models has pushed the artificial Intelligence (AI) in semiconductor market into an unprecedented design sprint. Transformer inference now dominates data center traffic, and the sheer compute intensity is forcing architects to co-optimize logic, SRAM, and interconnect on every new tape-out. NVIDIA’s Hopper GPUs introduced fourth-generation tensor cores wired to a terabyte-per-second cross-bar, while AMD’s MI300A fused CPU, GPU, and HBM on one package to minimize memory latency. Both examples underscore how every leading-edge node—down to three nanometers—must now be power-gated at block level to maximize tops-per-watt. Astute Analytica notes that this AI-fuelled growth currently rewards only a handful of chipmakers, creating a widening technology gap across the sector.

Download Sample Pages: https://www.astuteanalytica.com/request-sample/artificial-intelligence-in-semiconductor-market

In parallel, the artificial Intelligence (AI) in semiconductor market is reordering foundry roadmaps. TSMC has fast-tracked its chip-on-wafer-on-substrate flow specifically for AI accelerators, while Samsung Foundry is sampling gate-all-around devices aimed at 30-billion-transistor monolithic dies. ASML’s High-NA EUV scanners, delivering sub-sixteen-nanometer half-pitch, will enter volume production in 2025, largely to serve AI silicon demand. Design teams now describe node choices not by classical density metrics but by “tokens per joule,” reflecting direct alignment with model inference economics. Consequently, IP vendors are adding mixed-precision MAC arrays and near-compute cache hierarchies as default deliverables. Across every link of this chain, the market is no longer a vertical; it is the central gravity well around which high-performance chip architecture now orbits.

Key Findings in Artificial Intelligence (AI) in Semiconductor Market

Market Forecast (2033): US$ 321.66 billion
CAGR: 18.11%
Largest Region (2024): North America (40%)
By Chip Type: Graphics Processing Units (GPUs) (38%)
By Technology: Machine Learning (39%)
By Application: Data Centers & Cloud Computing (35%)
By End Use Industry: IT & Data Centers (40%)
Top Drivers
  • Generative AI workloads requiring specialized GPU TPU NPU chips
  • Data center expansion fueling massive AI accelerator chip demand
  • Edge AI applications proliferating across IoT automotive surveillance devices
Top Trends
  • AI-driven EDA tools automating chip design verification layout optimization
  • Custom AI accelerators outperforming general-purpose processors for specific tasks
  • Advanced packaging technologies like CoWoS enabling higher AI performance
Top Challenges
  • Only 9% of companies have successfully deployed AI use cases
  • Rising manufacturing costs requiring multi-billion dollar advanced fab investments

Edge Inference Accelerators Push Packaging Innovation Across Global Supply Chains

Consumer devices increasingly host large-language-model assistants locally, propelling the artificial Intelligence (AI) in semiconductor market toward edge-first design targets. Apple’s A17 Pro integrated a sixteen-core neural engine that surpasses thirty-five trillion operations per second, while Qualcomm’s Snapdragon X Elite moves foundation-model inference onto thin-and-light laptops. Achieving such feats inside battery-powered envelopes drives feverish experimentation in 2.5-D packaging, where silicon interposers shorten inter-die routing by two orders of magnitude. Intel’s Foveros Direct hybrid bonding now achieves bond pitches below ten microns, enabling logic and SRAM tiles to be stacked with less than one percent resistive overhead—numbers that previously required monolithic approaches.

Because thermal limits govern mobile form factors, power-delivery networks and vapor-chamber designs are being codesigned with die placement. STMicroelectronics and ASE have showcased fan-out panel-level packaging that enlarges substrate real estate without sacrificing yield. Such advances matter enormously: every millimeter saved in board footprint frees antenna volume for 5G and Wi-Fi 7 radios, helping OEMs offer always-connected AI assistants. Omdia estimates that more than nine hundred million edge-AI-capable devices will ship annually by 2026, a figure already steering substrate suppliers to triple capacity. As this tidal wave builds, the artificial Intelligence (AI) in semiconductor market finds its competitive frontier less at wafer fabs and more at the laminate, micro-bump, and dielectric stack where edge performance is ultimately won.

Foundry Capacity Race Intensifies Under Generative AI Compute Demand Surge

A single training run for a frontier model can consume gigawatt-hours of energy and reserve hundreds of thousands of advanced GPUs for weeks. This reality has made hyperscale cloud operators the kingmakers of the artificial Intelligence (AI) in semiconductor market. In response, TSMC, Samsung, and Intel Foundry Services have all announced overlapping expansions across Arizona, Pyeongtaek, and Magdeburg that collectively add more than four million wafer starts per year in the sub-five-nanometer domain. While capital outlays remain staggering, none of these announcements quote utilization percentages—underscoring an industry assumption that every advanced tool will be fully booked by AI silicon as soon as it is installed.

Supply tightness is amplified by the extreme EUV lithography ecosystem, where the world relies on a single photolithography vendor and two pellicle suppliers. Any hiccup cascades through quarterly availability of AI accelerators, directly influencing cloud pricing for inference APIs. Consequently, second-tier foundries such as GlobalFoundries and UMC are investing in specialized twelve-nanometer nodes optimized for voltage-domained matrix engines rather than chasing absolute density. Their strategy addresses commercial segments like industrial vision and automotive autonomy, where long-lifecycle support trumps bleeding-edge speed. Thus, the artificial Intelligence (AI) in semiconductor market is bifurcating into hyper-advanced capacity monopolized by hyperscalers and mature-node capacity securing diversified, stable profit pools.

EDA Tools Adopt AI Techniques To Shorten Tapeout And Verification

Shrink cycles measured in months, not years, are now expected in the artificial Intelligence (AI) in semiconductor market, creating overwhelming verification workloads. To cope, EDA vendors are infusing their flows with machine-learning engines that prune test-bench vectors, auto-rank bugs, and predict routing congestion before placement kicks off. Synopsys’ DSO.ai has publicly reported double-digit power reductions and week-level schedule savings across more than two hundred tape-outs; although percentages are withheld, these gains translate to thousands of engineering hours reclaimed. Cadence, for its part, integrated a reinforcement-learning placer that autonomously explores millions of layout permutations overnight on cloud instances.

The feedback loop turns virtuous: as AI improves EDA, the resulting chips further accelerate AI workloads, driving yet more demand for smarter design software. Start-ups like Celestial AI and d-Maze leverage automated formal verification to iterate photonic interconnect fabrics—an area formerly bottlenecked by manual proofs. Meanwhile, open-source initiatives such as OpenROAD are embedding graph neural networks to democratize back-end flow access for smaller firms that still hope to participate in the market. The outcome is a compression of development timelines that historically favored large incumbents, now allowing nimble teams to move from RTL to packaged samples in under nine months without incurring schedule-driven defects.

Memory Technologies Evolve For AI, Raising Bandwidth And Power Efficiency

Every additional token processed per second adds pressure on memory, making this subsystem the next battleground within the artificial Intelligence (AI) in semiconductor market. High Bandwidth Memory generation four now approaches fourteen hundred gigabytes per second per stack, yet large-language-model parameter counts still saturate these channels. To alleviate the pinch, SK hynix demonstrated HBM4E engineering samples with sixteen-high stacks bonded via hybrid thermal compression, cutting bit access energy below four picojoules. Micron answered with GDDR7 tailored for AI PCs, doubling prefetch length to reduce command overhead in mixed-precision inference.

Emerging architectures focus on moving compute toward memory. Samsung’s Memory-Semantics Processing Unit embeds arithmetic units in the buffer die, enabling sparse matrix multiplication within the HBM stack itself. Meanwhile, UCIe-compliant chiplet interfaces allow accelerator designers to tile multiple DRAM slices around a logic die, hitting aggregate bandwidth once reserved for supercomputers. Automotive suppliers are porting these ideas to LPDDR5X so driver-assistance SoCs can fuse radar and vision without exceeding vehicle thermal budgets. In short, the artificial Intelligence (AI) in semiconductor market is witnessing a profound redefinition of memory—from passive storehouse to active participant—where bytes per flop and picojoules per bit now sit alongside clock frequency as primary specification lines.

IP Cores And Chiplets Enable Modular Scaling For Specialized AI

Custom accelerators no longer begin with a blank canvas; instead, architects assemble silicon from pre-verified IP cores and chiplets sourced across a vibrant ecosystem. This trend, central to the artificial Intelligence (AI) in semiconductor market, mirrors software’s earlier shift toward microservices. For instance, Tenstorrent licenses RISC-V compute tile stacks that partners stitch into bespoke retinal-processing ASICs, while ARM’s Ethos-U NPU drops into microcontrollers for always-on keyword spotting. By relying on hardened blocks, teams sidestep months of DFT and timing closure, channeling effort into algorithm–hardware co-design.

The chiplet paradigm scales this philosophy outward. AMD’s Instinct accelerator families already combine compute CCDs, memory cache dies, and I/O hubs over Infinity Fabric links measured in single-digit nanoseconds. Open-source UCIe now defines lane discovery, flow-control, and integrity checks so different vendors can mix dies from separate foundries. That interoperability lowers NRE thresholds, enabling medical-imaging firms, for example, to integrate an FDA-certified DSP slice beside a vision transformer engine on the same organic substrate. Thus, modularity is not just a cost lever; it is an innovation catalyst ensuring the artificial Intelligence (AI) in semiconductor market accommodates both hyperscale giants and niche players solving domain-specific inference challenges.

Geographic Shifts Highlight New Hubs For AI-Focused Semiconductor Fabrication Activity

While the Pacific Rim remains dominant, geopolitical and logistical realities are spawning fresh hubs tightly coupled to the artificial Intelligence (AI) in semiconductor market. The US CHIPS incentives have drawn start-ups like Cerebras and Groq to co-locate near new fabs in Arizona, creating vertically integrated corridors where mask generation, wafer processing, and module assembly occur within a fifty-mile radius. Europe, backed by its Important Projects of Common European Interest framework, is nurturing Dresden and Grenoble as centers for AI accelerator prototyping, with IMEC providing advanced 300-millimeter pilot lines that match leading commercial nodes.

In the Middle East, the United Arab Emirates is funding RISC-V design houses focused on Arabic-language LLM accelerators, leveraging proximity to sovereign data centers hungry for energy-efficient inference. India’s Semiconductor Mission has prioritized packaging over leading-edge lithography, recognizing that back-end value capture aligns with the tidal rise of edge devices described earlier. Collectively, these moves diversify supply, but they also foster regional specialization: power-optimized inference chips in hot climates, radiation-hardened AI processors near space-technology clusters, and privacy-enhanced silicon in jurisdictions with strict data-sovereignty norms. Each development underscores how the artificial Intelligence (AI) in semiconductor market is simultaneously global in scale yet increasingly local in execution, as ecosystems tailor fabrication to indigenous talent and demand profiles.

Need Custom Data? Let Us Know: https://www.astuteanalytica.com/ask-for-customization/artificial-intelligence-in-semiconductor-market

Corporate Strategies Realign As AI Reshapes Traditional Semiconductor Value Chains

The gravitational pull of AI compute has forced corporate boards to revisit decade-old playbooks. Vertical integration, once considered risky, is resurging across the artificial Intelligence (AI) in semiconductor market. Nvidia’s acquisition of Mellanox and subsequent creation of NVLink-native DPUs illustrates how control of the network stack safeguards GPU value. Likewise, Apple’s progressive replacement of third-party modems with in-house designs highlights a commitment to end-to-end user-experience tuning for on-device intelligence. Even contract foundries now offer reference chiplet libraries, blurring lines between pure-play manufacturing and design enablement.

Meanwhile, fabless firms are forging multi-sourcing agreements to hedge supply volatility. AMD collaborates with both TSMC and Samsung, mapping identical RTL onto different process recipes to guarantee product launch windows. At the opposite end, some IP vendors license compute cores under volume-based royalties tied to AI inference throughput, rather than wafer count, aligning revenue with customer success. Investor sentiment mirrors these shifts: McKinsey observes that market capitalization accrues disproportionately to companies mastering AI-centric design-manufacturing loops, leaving laggards scrambling for relevance. Ultimately, the artificial Intelligence (AI) in semiconductor market is dissolving historical boundaries—between design and manufacturing, hardware and software, core and edge—creating a new competitive landscape where agility, ecosystem orchestration, and algorithmic insight determine enduring advantage.

Artificial Intelligence in Semiconductor Market Major Players:

  • NVIDIA Corporation
  • Intel Corporation
  • Advanced Micro Devices (AMD)
  • Qualcomm Technologies, Inc.
  • Alphabet Inc. (Google)
  • Apple Inc.
  • Samsung Electronics Co., Ltd.
  • Broadcom Inc.
  • Taiwan Semiconductor Manufacturing Company (TSMC)
  • Other Prominent Players

Key Segmentation:

By Chip Type

  • Central Processing Units (CPUs)
  • Graphics Processing Units (GPUs)
  • Field-Programmable Gate Arrays (FPGAs)
  • Application-Specific Integrated Circuits (ASICs)
  • Tensor Processing Units (TPUs)

By Technology 

  • Machine Learning
  • Deep Learning
  • Natural Language Processing (NLP)
  • Computer Vision
  • Others

By Application

  • Autonomous Vehicles
  • Robotics
  • Consumer Electronics
  • Healthcare & Medical Imaging
  • Industrial Automation
  • Smart Manufacturing
  • Security & Surveillance
  • Data Centers & Cloud Computing
  • Others (Smart Home Devices, Wearables, etc.)

By End-Use Industry

  • Automotive
  • Electronics & Consumer Devices
  • Healthcare
  • Industrial
  • Aerospace & Defense
  • Telecommunication
  • IT & Data Centers
  • Others

By Region

  • North America
  • Europe
  • Asia Pacific
  • Middle East
  • Africa
  • South America

Have Questions? Reach Out Before Buying: https://www.astuteanalytica.com/inquire-before-purchase/artificial-intelligence-in-semiconductor-market

About Astute Analytica

Astute Analytica is a global market research and advisory firm providing data-driven insights across industries such as technology, healthcare, chemicals, semiconductors, FMCG, and more. We publish multiple reports daily, equipping businesses with the intelligence they need to navigate market trends, emerging opportunities, competitive landscapes, and technological advancements.

With a team of experienced business analysts, economists, and industry experts, we deliver accurate, in-depth, and actionable research tailored to meet the strategic needs of our clients. At Astute Analytica, our clients come first, and we are committed to delivering cost-effective, high-value research solutions that drive success in an evolving marketplace.

Contact Us:
Astute Analytica
Phone: +1-888 429 6757 (US Toll Free); +91-0120-4483891 (Rest of the World)
For Sales Enquiries: sales@astuteanalytica.com
Website: https://www.astuteanalytica.com/
Follow us on: LinkedIn Twitter YouTube

            






AI Insights

Prediction: This Artificial Intelligence (AI) and “Magnificent Seven” Stock Will Be the Next Company to Surpass a $3 Trillion Market Cap by the End of 2025

Key Points

  • The artificial intelligence trend will be a huge growth engine for Amazon’s cloud computing division.

  • Efficiency improvements should help expand profit margins for its e-commerce business.

  • Anticipation of the company’s earnings growth could help drive the shares higher in 2025’s second half.

Only three stocks so far have ever achieved a market capitalization of $3 trillion: Microsoft, Nvidia, and Apple. Tremendous wealth has been created for some long-term investors in these companies — only two countries (China and the United States) have gross domestic products greater than their combined worth today.

In recent years, artificial intelligence (AI) and other technology tailwinds have driven these stocks to previously inconceivable heights, and it looks like the party is just getting started. So, which stock will be next to reach $3 trillion?


I think it will be Amazon (NASDAQ: AMZN), and it will happen before the year is done. Here’s why.

The next wave of cloud growth

Amazon was positioned perfectly to take advantage of the AI revolution. Over the last two decades, it has built the leading cloud computing infrastructure company, Amazon Web Services (AWS), which as of its last reported quarter had booked more than $110 billion in trailing-12-month revenue. New AI workloads require immense amounts of computing power, which only some of the large cloud providers have the capacity to provide.

AWS’s revenue growth has accelerated in recent quarters, hitting 17% growth year-over-year in Q1 of this year. With spending on AI just getting started, the unit’s revenue growth could stay in the double-digit percentages for many years. Its profit margins are also expanding, and hit 37.5% over the last 12 months.

Assuming that its double-digit percentage revenue growth continues over the next several years, Amazon Web Services will reach $200 billion in annual revenue within the decade. At its current 37.5% operating margin, that would equate to a cool $75 billion in operating income just from AWS. Investors can anticipate this growth and should start pricing those expected profits into the stock as the second half of 2025 progresses.


Automation and margin expansion

For years, Amazon’s e-commerce platform operated at razor-thin margins. Over the past 12 months, the company’s North America division generated close to $400 billion in revenue but produced just $25.8 billion in operating income, or a 6.3% profit margin.

However, in the last few quarters, the fruits of Amazon’s long-term investments have begun to ripen in the form of profit margin expansion. The company spent billions of dollars to build out a vertically integrated delivery network that will give it operating leverage at increasing scale. It now has an advertising division generating tens of billions of dollars in annual revenue. It’s beginning to roll out more advanced robotics systems at its warehouses, so they will require fewer workers to operate. All of this should lead to long-term profit margin expansion.

Indeed, its North American segment’s operating margin has begun to expand already, but it still has plenty of room to grow. With growing contributions to the top line from high-margin revenue sources like subscriptions, advertising, and third-party seller services combined with a highly efficient and automated logistics network, Amazon could easily expand its North American operating margin to 15% within the next few years. On $500 billion in annual revenue, that would equate to $75 billion in annual operating income from the retail-focused segment.

AMZN Operating Income (TTM) data by YCharts.

The path to $3 trillion

Currently, Amazon’s market cap is in the neighborhood of $2.3 trillion. But over the course of the rest of this year, investors should get a clearer picture of its profit margin expansion story and the earnings growth it can expect due to the AI trend and its ever more efficient e-commerce network.

Today, the AWS and North American (retail) segments combine to produce annual operating income of $72 billion. But based on these projections, within a decade, we can expect that figure to hit $150 billion. And that is assuming that the international segment — which still operates at quite narrow margins — provides zero operating income.

It won’t happen this year, but investors habitually price the future of companies into their stocks, and it will become increasingly clear that Amazon still has huge potential to grow its earnings over the next decade.

For a company with $150 billion in annual earnings, a $3 trillion market cap would imply a price-to-earnings ratio of 20. That’s an entirely reasonable valuation for a business such as Amazon. It’s not guaranteed to reach that market cap in 2025, but I believe investors will grow increasingly optimistic about Amazon’s future earnings potential as we progress through the second half of this year, driving its share price to new heights and keeping its shareholders fat and happy.


John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool’s board of directors. Brett Schafer has positions in Amazon. The Motley Fool has positions in and recommends Amazon, Apple, Microsoft, and Nvidia. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.




AI Insights

Can chatbots really improve mental health?

Recently, I found myself pouring my heart out, not to a human, but to a chatbot named Wysa on my phone. It nodded – virtually – asked me how I was feeling and gently suggested trying breathing exercises.

As a neuroscientist, I couldn’t help but wonder: Was I actually feeling better, or was I just being expertly redirected by a well-trained algorithm? Could a string of code really help calm a storm of emotions?

Artificial intelligence-powered mental health tools are becoming increasingly popular – and increasingly persuasive. But beneath their soothing prompts lie important questions: How effective are these tools? What do we really know about how they work? And what are we giving up in exchange for convenience?

Of course it’s an exciting moment for digital mental health. But understanding the trade-offs and limitations of AI-based care is crucial.

Stand-in meditation and therapy apps and bots

AI-based therapy is a relatively new player in the digital therapy field. But the U.S. mental health app market has been booming for the past few years, ranging from free apps that text you back to premium versions that add features such as guided prompts for breathing exercises.

Headspace and Calm are two of the most well-known meditation and mindfulness apps, offering guided meditations, bedtime stories and calming soundscapes to help users relax and sleep better. Talkspace and BetterHelp go a step further, offering actual licensed therapists via chat, video or voice. The apps Happify and Moodfit aim to boost mood and challenge negative thinking with game-based exercises.

Somewhere in the middle are chatbot therapists like Wysa and Woebot, using AI to mimic real therapeutic conversations, often rooted in cognitive behavioral therapy. These apps typically offer free basic versions, with paid plans ranging from US$10 to $100 per month for more comprehensive features or access to licensed professionals.

While not designed specifically for therapy, conversational tools like ChatGPT have sparked curiosity about AI’s emotional intelligence.

Some users have turned to ChatGPT for mental health advice, with mixed outcomes, including a widely reported case in Belgium where a man died by suicide after months of conversations with a chatbot. Elsewhere, a father is seeking answers after his son was fatally shot by police, alleging that distressing conversations with an AI chatbot may have influenced his son’s mental state. These cases raise ethical questions about the role of AI in sensitive situations.

Guided meditation apps were one of the first forms of digital therapy.

Where AI comes in

Whether your brain is spiraling, sulking or just needs a nap, there’s a chatbot for that. But can AI really help your brain process complex emotions? Or are people just outsourcing stress to silicon-based support systems that sound empathetic?

And how exactly does AI therapy work inside our brains?

Most AI mental health apps promise some flavor of cognitive behavioral therapy, which is basically structured self-talk for your inner chaos. Think of it as Marie Kondo-ing your thought patterns: Kondo is the Japanese tidying expert known for helping people keep only what “sparks joy.” You identify unhelpful thoughts like “I’m a failure,” examine them, and decide whether they serve you or just create anxiety.

But can a chatbot help you rewire your thoughts? Surprisingly, there’s science suggesting it’s possible. Studies have shown that digital forms of talk therapy can reduce symptoms of anxiety and depression, especially for mild to moderate cases. In fact, Woebot has published peer-reviewed research showing reduced depressive symptoms in young adults after just two weeks of chatting.

These apps are designed to simulate therapeutic interaction, offering empathy, asking guided questions and walking you through evidence-based tools. The goal is to help with decision-making and self-control, and to help calm the nervous system.

The neuroscience behind cognitive behavioral therapy is solid: It’s about activating the brain’s executive control centers, helping us shift our attention, challenge automatic thoughts and regulate our emotions.

The question is whether a chatbot can reliably replicate that, and whether our brains actually believe it.

A user’s experience, and what it might mean for the brain

“I had a rough week,” a friend told me recently. I asked her to try out a mental health chatbot for a few days. She told me the bot replied with an encouraging emoji and a prompt generated by its algorithm to try a calming strategy tailored to her mood. Then, to her surprise, it helped her sleep better by week’s end.

As a neuroscientist, I couldn’t help but ask: Which neurons in her brain were kicking in to help her feel calm?

This isn’t a one-off story. A growing number of user surveys and clinical trials suggest that cognitive behavioral therapy-based chatbot interactions can lead to short-term improvements in mood, focus and even sleep. In randomized studies, users of mental health apps have reported reduced symptoms of depression and anxiety – outcomes that closely align with how in-person cognitive behavioral therapy influences the brain.

Several studies show that therapy chatbots can actually help people feel better. In one clinical trial, a chatbot called “Therabot” helped reduce depression and anxiety symptoms by nearly half – similar to what people experience with human therapists. Other research, including a review of over 80 studies, found that AI chatbots are especially helpful for improving mood, reducing stress and even helping people sleep better. In one study, a chatbot outperformed a self-help book in boosting mental health after just two weeks.

While people often report feeling better after using these chatbots, scientists haven’t yet confirmed exactly what’s happening in the brain during those interactions. In other words, we know they work for many people, but we’re still learning how and why.

AI chatbots don’t cost what a human therapist costs – and they’re available 24/7.

Red flags and risks

Apps like Wysa have earned FDA Breakthrough Device designation, a status that fast-tracks promising technologies for serious conditions, suggesting they may offer real clinical benefit. Woebot, similarly, runs randomized clinical trials showing improved depression and anxiety symptoms in new moms and college students.

While many mental health apps boast labels like “clinically validated” or “FDA approved,” those claims are often unverified. A review of top apps found that most made bold claims, but fewer than 22% cited actual scientific studies to back them up.

In addition, chatbots collect sensitive information about your mood metrics, triggers and personal stories. What if that data winds up in third-party hands such as advertisers, employers or hackers, a scenario that has occurred with genetic data? In a 2023 breach, nearly 7 million users of the DNA testing company 23andMe had their DNA and personal details exposed after hackers used previously leaked passwords to break into their accounts. Regulators later fined the company more than $2 million for failing to protect user data.

Unlike clinicians, bots aren’t bound by counseling ethics or privacy laws regarding medical information. You might be getting a form of cognitive behavioral therapy, but you’re also feeding a database.

And sure, bots can guide you through breathing exercises or prompt cognitive reappraisal, but when faced with emotional complexity or crisis, they’re often out of their depth. Human therapists tap into nuance, past trauma, empathy and live feedback loops. Can an algorithm say “I hear you” with genuine understanding? Neuroscience suggests that supportive human connection activates social brain networks that AI can’t reach.

So while in mild to moderate cases bot-delivered cognitive behavioral therapy may offer short-term symptom relief, it’s important to be aware of their limitations. For the time being, pairing bots with human care – rather than replacing it – is the safest move.


