
Jim Cramer reflects on Nvidia’s impact after record-breaking market cap

CNBC’s Jim Cramer on Wednesday reflected on the significance of one of his long-time favorite stocks, Nvidia, just after the artificial intelligence powerhouse became the first company to reach a $4 trillion market cap in intraday trading.

“The fact is, neither Microsoft nor Apple can claim that they’re currently creating a new industrial revolution, like Nvidia can,” he said. “In fairness, they did create the last industrial revolution, the rise of the personal computer, although that was a long time ago.”

Nvidia topped $4 trillion for the first time during the day’s session, but it closed up 1.8% to settle at a market cap of $3.97 trillion. The chipmaker is the world’s most valuable company, larger than competitors Microsoft and Apple, both of which have previously held that title. Nvidia surpassed $2 trillion in February of last year and climbed above $3 trillion four months later.

The tech giant has exploded over the past few years as Wall Street and the enterprise fixate on generative AI. Nvidia’s products are broadly seen as best in class, and Big Tech hyperscalers have been clamoring for them as they compete in the AI arms race and seek to benefit from the most advanced AI models.

Cramer remarked on AI’s transformative potential, and he suggested that “every single computer with a GPU that’s not as good as Nvidia’s is obsolete.” He also indicated that Nvidia’s technology will change the way businesses operate, saying it enables humanoid robots and self-driving cars. These robots can perform mundane or dangerous jobs, and they could even replace a number of white-collar workers, Cramer continued.

Nvidia is also a valuable player for the U.S. in its trade relationship with China, Cramer said. The company sometimes seems to be the U.S.’s “only bargaining chip” with China, he noted, as the country wants Nvidia’s products. Even though China is one of the U.S.’s top sources of manufactured goods, Cramer said he thinks Nvidia is a bigger bargaining chip than anything China has.

“Bottom line? Nvidia, own it, don’t trade it,” he said. “Oh, and see you at $5 trillion.” 

Nvidia declined to comment.


Disclaimer: The CNBC Investing Club Charitable Trust owns shares of Nvidia.

Source link


Artificial Intelligence is putting humanity at a crossroads, says Pope Leo – Crux | Taking the Catholic Pulse


Source link



5 Top Artificial Intelligence (AI) Stocks Ready for a Bull Run

While there is still uncertainty surrounding the implementation of tariffs by the Trump administration, at least one sector — artificial intelligence (AI) — is starting to regain its momentum and could be set up for another bull run. The technology is being hailed as a once-in-a-generation opportunity, and the early signs are that this could indeed be the case.

With AI still in its early innings, it’s not too late to invest in the sector. Let’s look at five AI stocks to consider buying right now.

Nvidia

Nvidia’s (NVDA -0.22%) stock has already seen massive gains over the past few years, but the bull case is far from over. The company’s graphics processing units (GPUs) are the main chips used for training large language models (LLMs), and it’s also seen strong traction in inference. These AI workloads both require a lot of processing power, which its GPUs provide.

The company captured over 90% market share in the GPU space last quarter, thanks in large part to its CUDA software platform, which makes it easy for developers to program its chips for various AI workloads. In the years following its launch, a collection of tools and libraries has also been built on top of CUDA that helps optimize Nvidia’s GPUs for AI tasks.

With the AI infrastructure buildout still appearing to be in its early stages, Nvidia continues to look well-positioned for the future. Meanwhile, it also has potentially big markets emerging, such as the automotive space and autonomous driving.

AMD

While Nvidia dominates AI training, Advanced Micro Devices (AMD 3.87%) is carving out a space in AI inference. Inference is the process in which an AI model applies what it has learned during training to make real-time decisions. Over time, the inference market is expected to become much larger than the training market due to increased AI usage.

AMD’s ROCm software, meanwhile, is largely considered “good enough” for inference workloads, and cost-sensitive buyers are increasingly giving its MI300 chips a closer look. That’s already showing up in the numbers, with AMD’s data center revenue surging 57% last quarter to $3.7 billion.

Even modest market share gains from a smaller base could translate into meaningful top-line growth for AMD. Importantly, one of the largest AI model companies is now using AMD’s chips to handle a significant share of its inference traffic. Cloud giants are also using AMD’s GPUs for tasks like search and generative AI. Beyond GPUs, AMD remains a strong player in data center central processing units (CPUs), which is another area benefiting from rising AI infrastructure spend.

Taken together, AMD has a big AI opportunity in front of it.


Alphabet

If you only listened to the naysayers, you would think Alphabet (GOOGL -0.91%) (GOOG -0.86%) is an AI loser whose main search business is about to disappear. However, that would ignore the huge distribution and ad network advantages the company took decades to build.

Meanwhile, it has quietly positioned itself as an AI leader. Its Gemini model is widely considered one of the best and is getting better. Gemini now helps power the company’s search business, and Alphabet has added innovative elements that can help monetize AI, such as “Shop with AI,” which allows users to find products simply by describing them, and a new virtual try-on feature.

Google Cloud, meanwhile, has been a strong growth driver and is now profitable after years of heavy investment. That segment grew revenue by 28% last quarter and continues to win share in the cloud computing market. The company has also developed its own custom AI chips, which OpenAI recently began testing as an alternative to Nvidia’s.

Alphabet also has exposure to autonomous driving through Waymo, which now operates a paid robotaxi service in multiple cities, and quantum computing with its Willow chip.

Alphabet is one of the world’s most innovative companies and has a long runway of continued growth still in front of it.

Pinterest

Pinterest (PINS -1.98%) has leaned heavily into AI to evolve from a simple online vision board into a more engaging, shoppable platform. A key part of its transformation is its multimodal AI model, which is trained on both images and text. This helps power its visual search feature, as well as generate more personalized recommendations. Meanwhile, on the back end, its Performance+ platform combines AI and automation to help advertisers run better campaigns.

The strategy is working, as the platform is both gaining more users and monetizing them better. Last quarter, it grew its monthly active users by 10% to 570 million, with much of that growth coming from emerging markets. Through its partnership with Google’s strong global ad network, Pinterest is also getting much better at monetizing these users. In the first quarter, its “rest of world” segment’s average revenue per user (ARPU) jumped 29%, while the segment’s overall revenue soared 49%.

With a large but still undermonetized user base, Pinterest has a lot of growth ahead.

Salesforce

Salesforce (CRM -1.62%) is no stranger to innovation, being one of the first large companies to embrace the software-as-a-service (SaaS) model. A leader in customer relationship management (CRM) software, the company is now looking to become a leader in agentic AI and digital labor.

Salesforce’s CRM platform was built to give users a unified view of their siloed data in one place. It helped create efficiencies and reduce costs by delivering real-time insights and enabling improved forecasting.

With the advent of AI, it is now looking to use its platform to create a digital workforce of AI agents that can complete tasks with little human supervision. It believes that the combination of apps, data, automation, and metadata into a single framework it calls ADAM will give it a leg up in this new agentic AI race.

The company has a huge installed user base, and its new Agentforce platform is off to a good start with over 4,000 paying customers since its October launch. With its consumption-based product, the company has a huge opportunity ahead with AI agents.



Source link



Artificial Intelligence (AI) in Semiconductor Market to Reach US$ 321.66 Billion by 2033

Chicago, July 10, 2025 (GLOBE NEWSWIRE) — The global artificial intelligence (AI) in semiconductor market was valued at US$ 71.91 billion in 2024 and is expected to reach US$ 321.66 billion by 2033, growing at a CAGR of 18.11% during the forecast period 2025–2033.
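
As a quick sanity check (our own illustrative arithmetic, not figures from the report), the stated CAGR follows directly from the 2024 base and the 2033 forecast, compounded over the nine years between them:

    # Illustrative check of the release's headline figures.
    # Values are taken from the text above; 2024 -> 2033 spans nine years.
    start_value = 71.91   # US$ billion, 2024 valuation
    end_value = 321.66    # US$ billion, 2033 forecast
    years = 2033 - 2024   # 9 compounding years

    cagr = (end_value / start_value) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.2%}")  # Implied CAGR: 18.11%

Compounding the 2024 base forward at 18.11% per year recovers roughly US$ 321.7 billion, so the three headline numbers are mutually consistent.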

The accelerating deployment of generative models has pushed the artificial intelligence (AI) in semiconductor market into an unprecedented design sprint. Transformer inference now dominates data center traffic, and the sheer compute intensity is forcing architects to co-optimize logic, SRAM, and interconnect on every new tape-out. NVIDIA’s Hopper GPUs introduced fourth-generation tensor cores wired to a terabyte-per-second crossbar, while AMD’s MI300A fused CPU, GPU, and HBM on one package to minimize memory latency. Both examples underscore how every leading-edge node—down to three nanometers—must now be power-gated at block level to maximize TOPS per watt. Astute Analytica notes that this AI-fueled growth currently rewards only a handful of chipmakers, creating a widening technology gap across the sector.

Download Sample Pages: https://www.astuteanalytica.com/request-sample/artificial-intelligence-in-semiconductor-market

In parallel, the artificial intelligence (AI) in semiconductor market is reordering foundry roadmaps. TSMC has fast-tracked its chip-on-wafer-on-substrate flow specifically for AI accelerators, while Samsung Foundry is sampling gate-all-around devices aimed at 30-billion-transistor monolithic dies. ASML’s High-NA EUV scanners, delivering sub-sixteen-nanometer half-pitch, will enter volume production in 2025, largely to serve AI silicon demand. Design teams now describe node choices not by classical density metrics but by “tokens per joule,” reflecting direct alignment with model inference economics. Consequently, IP vendors are adding mixed-precision MAC arrays and near-compute cache hierarchies as default deliverables. Across every link of this chain, the market is no longer a vertical; it is the central gravity well around which high-performance chip architecture now orbits.

Key Findings in Artificial Intelligence (AI) in Semiconductor Market

Market Forecast (2033): US$ 321.66 billion
CAGR: 18.11%
Largest Region (2024): North America (40%)
By Chip Type: Graphics Processing Units (GPUs) (38%)
By Technology: Machine Learning (39%)
By Application: Data Centers & Cloud Computing (35%)
By End Use Industry: IT & Data Centers (40%)
Top Drivers
  • Generative AI workloads requiring specialized GPU, TPU, and NPU chips
  • Data center expansion fueling massive AI accelerator chip demand
  • Edge AI applications proliferating across IoT, automotive, and surveillance devices
Top Trends
  • AI-driven EDA tools automating chip design, verification, and layout optimization
  • Custom AI accelerators outperforming general-purpose processors for specific tasks
  • Advanced packaging technologies like CoWoS enabling higher AI performance
Top Challenges
  • Only 9% of companies have successfully deployed AI use cases
  • Rising manufacturing costs requiring multi-billion-dollar advanced fab investments

Edge Inference Accelerators Push Packaging Innovation Across Global Supply Chains

Consumer devices increasingly host large-language-model assistants locally, propelling the artificial intelligence (AI) in semiconductor market toward edge-first design targets. Apple’s A17 Pro integrated a sixteen-core neural engine that surpasses thirty-five trillion operations per second, while Qualcomm’s Snapdragon X Elite moves foundation-model inference onto thin-and-light laptops. Achieving such feats inside battery-powered envelopes drives feverish experimentation in 2.5-D packaging, where silicon interposers shorten inter-die routing by two orders of magnitude. Intel’s Foveros Direct hybrid bonding now achieves bond pitches below ten microns, enabling logic and SRAM tiles to be stacked with less than one percent resistive overhead—numbers that previously required monolithic approaches.

Because thermal limits govern mobile form factors, power-delivery networks and vapor-chamber designs are being codesigned with die placement. STMicroelectronics and ASE have showcased fan-out panel-level packaging that enlarges substrate real estate without sacrificing yield. Such advances matter enormously: every millimeter saved in board footprint frees antenna volume for 5G and Wi-Fi 7 radios, helping OEMs offer always-connected AI assistants. Omdia estimates that more than nine hundred million edge-AI-capable devices will ship annually by 2026, a figure already steering substrate suppliers to triple capacity. As this tidal wave builds, the artificial intelligence (AI) in semiconductor market finds its competitive frontier less at wafer fabs and more at the laminate, micro-bump, and dielectric stack where edge performance is ultimately won.

Foundry Capacity Race Intensifies Under Generative AI Compute Demand Surge

A single training run for a frontier model can consume gigawatt-hours of energy and reserve hundreds of thousands of advanced GPUs for weeks. This reality has made hyperscale cloud operators the kingmakers of the artificial intelligence (AI) in semiconductor market. In response, TSMC, Samsung, and Intel Foundry Services have all announced overlapping expansions across Arizona, Pyeongtaek, and Magdeburg that collectively add more than four million wafer starts per year in the sub-five-nanometer domain. While capital outlays remain staggering, none of these announcements quote utilization percentages—underscoring an industry assumption that every advanced tool will be fully booked by AI silicon as soon as it is installed.

Supply tightness is amplified by the extreme ultraviolet (EUV) lithography ecosystem, where the world relies on a single photolithography vendor and two pellicle suppliers. Any hiccup cascades through quarterly availability of AI accelerators, directly influencing cloud pricing for inference APIs. Consequently, second-tier foundries such as GlobalFoundries and UMC are investing in specialized twelve-nanometer nodes optimized for voltage-domained matrix engines rather than chasing absolute density. Their strategy addresses commercial segments like industrial vision and automotive autonomy, where long-lifecycle support trumps bleeding-edge speed. Thus, the artificial intelligence (AI) in semiconductor market is bifurcating into hyper-advanced capacity monopolized by hyperscalers and mature-node capacity securing diversified, stable profit pools.

EDA Tools Adopt AI Techniques To Shorten Tapeout And Verification

Shrink cycles measured in months, not years, are now expected in the artificial intelligence (AI) in semiconductor market, creating overwhelming verification workloads. To cope, EDA vendors are infusing their design flows with machine-learning engines that prune test-bench vectors, auto-rank bugs, and predict routing congestion before placement kicks off. Synopsys’ DSO.ai has publicly reported double-digit power reductions and week-level schedule savings across more than two hundred tapeouts; although percentages are withheld, these gains translate to thousands of engineering hours reclaimed. Cadence, for its part, integrated a reinforcement-learning placer that autonomously explores millions of layout permutations overnight on cloud instances.

The feedback loop turns virtuous: as AI improves EDA, the resulting chips further accelerate AI workloads, driving yet more demand for smarter design software. Start-ups like Celestial AI and d-Maze leverage automated formal verification to iterate photonic interconnect fabrics—an area formerly bottlenecked by manual proofs. Meanwhile, open-source initiatives such as OpenROAD are embedding graph neural networks to democratize back-end flow access for smaller firms that still hope to participate in the market. The outcome is a compression of development timelines that historically favored large incumbents, now allowing nimble teams to move from RTL to packaged samples in under nine months without incurring schedule-driven defects.

Memory Technologies Evolve For AI, Raising Bandwidth And Power Efficiency

Every additional token processed per second adds pressure on memory, making this subsystem the next battleground within the artificial intelligence (AI) in semiconductor market. High Bandwidth Memory generation four now approaches fourteen hundred gigabytes per second per stack, yet large-language-model parameter counts still saturate these channels. To alleviate the pinch, SK hynix demonstrated HBM4E engineering samples with sixteen-high stacks bonded via hybrid thermal compression, cutting bit access energy below four picojoules. Micron answered with GDDR7 tailored for AI PCs, doubling prefetch length to reduce command overhead in mixed-precision inference.

Emerging architectures focus on moving compute toward memory. Samsung’s Memory-Semantics Processing Unit embeds arithmetic units in the buffer die, enabling sparse matrix multiplication within the HBM stack itself. Meanwhile, UCIe-compliant chiplet interfaces allow accelerator designers to tile multiple DRAM slices around a logic die, hitting aggregate bandwidth once reserved for supercomputers. Automotive suppliers are porting these ideas to LPDDR5X so driver-assistance SoCs can fuse radar and vision without exceeding vehicle thermal budgets. In short, the artificial intelligence (AI) in semiconductor market is witnessing a profound redefinition of memory—from passive storehouse to active participant—where bytes per flop and picojoules per bit now sit alongside clock frequency as primary specification lines.
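
To make the picojoules-per-bit specification concrete, consider a rough back-of-envelope sketch (our own illustrative arithmetic, using the approximate per-stack bandwidth and access-energy figures quoted above) of how much power memory access alone consumes:

    # Back-of-envelope memory-access power for one HBM stack, using the
    # approximate figures cited in this section (illustrative, not vendor data).
    bandwidth_bytes_per_s = 1400e9  # ~1,400 GB/s per stack
    energy_joules_per_bit = 4e-12   # ~4 pJ per bit accessed

    access_power_watts = bandwidth_bytes_per_s * 8 * energy_joules_per_bit
    print(f"~{access_power_watts:.0f} W per stack at full bandwidth")  # ~45 W

Running a stack flat out thus dissipates roughly 45 W on bit access before any computation happens, which is why cutting energy per bit now matters as much as raising peak bandwidth.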

IP Cores And Chiplets Enable Modular Scaling For Specialized AI

Custom accelerators no longer begin with a blank canvas; instead, architects assemble silicon from pre-verified IP cores and chiplets sourced across a vibrant ecosystem. This trend, central to the artificial intelligence (AI) in semiconductor market, mirrors software’s earlier shift toward microservices. For instance, Tenstorrent licenses RISC-V compute tile stacks that partners stitch into bespoke retinal-processing ASICs, while ARM’s Ethos-U NPU drops into microcontrollers for always-on keyword spotting. By relying on hardened blocks, teams sidestep months of DFT and timing closure, channeling effort into algorithm–hardware co-design.

The chiplet paradigm scales this philosophy outward. AMD’s Instinct accelerator families already combine compute CCDs, memory cache dies, and I/O hubs over Infinity Fabric links measured in single-digit nanoseconds. Open-source UCIe now defines lane discovery, flow control, and integrity checks so different vendors can mix dies from separate foundries. That interoperability lowers NRE thresholds, enabling medical-imaging firms, for example, to integrate an FDA-certified DSP slice beside a vision transformer engine on the same organic substrate. Thus, modularity is not just a cost lever; it is an innovation catalyst ensuring the artificial intelligence (AI) in semiconductor market accommodates both hyperscale giants and niche players solving domain-specific inference challenges.

Geographic Shifts Highlight New Hubs For AI-Focused Semiconductor Fabrication Activity

While the Pacific Rim remains dominant, geopolitical and logistical realities are spawning fresh hubs tightly coupled to the artificial intelligence (AI) in semiconductor market. The US CHIPS incentives have drawn start-ups like Cerebras and Groq to co-locate near new fabs in Arizona, creating vertically integrated corridors where mask generation, wafer processing, and module assembly occur within a fifty-mile radius. Europe, backed by its Important Projects of Common European Interest framework, is nurturing Dresden and Grenoble as centers for AI accelerator prototyping, with IMEC providing advanced 300-millimeter pilot lines that match leading commercial nodes.

In the Middle East, the United Arab Emirates is funding RISC-V design houses focused on Arabic-language LLM accelerators, leveraging proximity to sovereign data centers hungry for energy-efficient inference. India’s Semiconductor Mission has prioritized packaging over leading-edge lithography, recognizing that back-end value capture aligns with the tidal rise of edge devices described earlier. Collectively, these moves diversify supply, but they also foster regional specialization: power-optimized inference chips in hot climates, radiation-hardened AI processors near space-technology clusters, and privacy-enhanced silicon in jurisdictions with strict data-sovereignty norms. Each development underscores how the artificial intelligence (AI) in semiconductor market is simultaneously global in scale yet increasingly local in execution, as ecosystems tailor fabrication to indigenous talent and demand profiles.

Need Custom Data? Let Us Know: https://www.astuteanalytica.com/ask-for-customization/artificial-intelligence-in-semiconductor-market

Corporate Strategies Realign As AI Reshapes Traditional Semiconductor Value Chains

The gravitational pull of AI compute has forced corporate boards to revisit decade-old playbooks. Vertical integration, once considered risky, is resurging across the artificial intelligence (AI) in semiconductor market. Nvidia’s acquisition of Mellanox and subsequent creation of NVLink-native DPUs illustrate how control of the network stack safeguards GPU value. Likewise, Apple’s progressive replacement of third-party modems with in-house designs highlights a commitment to end-to-end user-experience tuning for on-device intelligence. Even contract foundries now offer reference chiplet libraries, blurring lines between pure-play manufacturing and design enablement.

Meanwhile, fabless firms are forging multi-sourcing agreements to hedge supply volatility. AMD collaborates with both TSMC and Samsung, mapping identical RTL onto different process recipes to guarantee product launch windows. At the opposite end, some IP vendors license compute cores under volume-based royalties tied to AI inference throughput, rather than wafer count, aligning revenue with customer success. Investor sentiment mirrors these shifts: McKinsey observes that market capitalization accrues disproportionately to companies mastering AI-centric design-manufacturing loops, leaving laggards scrambling for relevance. Ultimately, the artificial intelligence (AI) in semiconductor market is dissolving historical boundaries—between design and manufacturing, hardware and software, core and edge—creating a new competitive landscape where agility, ecosystem orchestration, and algorithmic insight determine enduring advantage.

Artificial Intelligence in Semiconductor Market Major Players:

  • NVIDIA Corporation
  • Intel Corporation
  • Advanced Micro Devices (AMD)
  • Qualcomm Technologies, Inc.
  • Alphabet Inc. (Google)
  • Apple Inc.
  • Samsung Electronics Co., Ltd.
  • Broadcom Inc.
  • Taiwan Semiconductor Manufacturing Company (TSMC)
  • Other Prominent Players

Key Segmentation:

By Chip Type

  • Central Processing Units (CPUs)
  • Graphics Processing Units (GPUs)
  • Field-Programmable Gate Arrays (FPGAs)
  • Application-Specific Integrated Circuits (ASICs)
  • Tensor Processing Units (TPUs)

By Technology 

  • Machine Learning
  • Deep Learning
  • Natural Language Processing (NLP)
  • Computer Vision
  • Others

By Application

  • Autonomous Vehicles
  • Robotics
  • Consumer Electronics
  • Healthcare & Medical Imaging
  • Industrial Automation
  • Smart Manufacturing
  • Security & Surveillance
  • Data Centers & Cloud Computing
  • Others (Smart Home Devices, Wearables, etc.)

By End-Use Industry

  • Automotive
  • Electronics & Consumer Devices
  • Healthcare
  • Industrial
  • Aerospace & Defense
  • Telecommunication
  • IT & Data Centers
  • Others

By Region

  • North America
  • Europe
  • Asia Pacific
  • Middle East
  • Africa
  • South America

Have Questions? Reach Out Before Buying: https://www.astuteanalytica.com/inquire-before-purchase/artificial-intelligence-in-semiconductor-market

About Astute Analytica

Astute Analytica is a global market research and advisory firm providing data-driven insights across industries such as technology, healthcare, chemicals, semiconductors, FMCG, and more. We publish multiple reports daily, equipping businesses with the intelligence they need to navigate market trends, emerging opportunities, competitive landscapes, and technological advancements.

With a team of experienced business analysts, economists, and industry experts, we deliver accurate, in-depth, and actionable research tailored to meet the strategic needs of our clients. At Astute Analytica, our clients come first, and we are committed to delivering cost-effective, high-value research solutions that drive success in an evolving marketplace.

Contact Us:
Astute Analytica
Phone: +1-888-429-6757 (US Toll Free); +91-0120-4483891 (Rest of the World)
For Sales Enquiries: sales@astuteanalytica.com
Website: https://www.astuteanalytica.com/
Follow us on: LinkedIn | Twitter | YouTube

Source link

