AI Research
Myths of AI networking — debunked
As AI infrastructure scales at an unprecedented rate, a number of outdated assumptions keep resurfacing – especially when it comes to the role of networking in large-scale training and inference systems. Many of these myths are rooted in technologies that worked well for small clusters. But today’s systems are scaling to hundreds of thousands – and soon, millions – of GPUs. Those older models no longer apply. Let’s walk through some of the most common myths – and why Ethernet has clearly emerged as the foundation for modern AI networking.
Myth 1: You cannot use Ethernet for high-performance AI networks
This myth has already been busted. Ethernet is now the de facto networking technology for AI at scale. Most, if not all, of the largest GPU clusters deployed in the past year have used Ethernet for scale-out networking.
Ethernet delivers performance that matches or exceeds what alternatives like InfiniBand offer – while providing a stronger ecosystem, broader vendor support, and faster innovation cycles. InfiniBand, for example, wasn’t designed for today’s scale. It’s a legacy fabric being pushed beyond its original purpose.
Meanwhile, Ethernet is thriving: multiple vendors are shipping 51.2T switches, and Broadcom recently introduced Tomahawk 6, the industry’s first 102.4T switch. Ecosystems for optical and electrical interconnect are also mature, and clusters of 100K GPUs and beyond are now routinely built on Ethernet.
Myth 2: You need separate networks for scale-up and scale-out
This was acceptable when GPU nodes were small. Legacy scale-up links originated in an era when connecting two or four GPUs was enough. Today, scale-up domains are expanding rapidly. You’re no longer connecting four GPUs – you’re designing systems with 64, 128, or more in a single scale-up cluster. And that’s where Ethernet, with its proven scalability, becomes the obvious choice.
Using separate technologies for local and cluster-wide interconnect only adds cost, complexity, and risk. What you want is the opposite: a single, unified network that supports both. That’s exactly what Ethernet delivers – along with interface fungibility, simplified operations, and an open ecosystem.
To accelerate this interface convergence, we’ve contributed the Scale-Up Ethernet (SUE) framework to the Open Compute Project, helping the industry standardize around a single AI networking fabric.
Myth 3: You need proprietary interconnects and exotic optics
This is another holdover from a different era. Proprietary interconnects and tightly coupled optics may have worked for small, fixed systems – but today’s AI networks demand flexibility and openness.
Ethernet gives you options: third-generation co-packaged optics (CPO), module-based retimed optics, linear drive optics, and the longest-reach passive copper. You’re not locked into one solution. You can tailor your interconnect to your power, performance, and economic goals – with full ecosystem support.
Myth 4: You need proprietary NIC features for AI workloads
Some AI networks rely on programmable, high-power NICs to support features like congestion control or traffic spraying. But in many cases, that’s just masking limitations in the switching fabric.
Modern Ethernet switches – like Tomahawk 5 and 6 – integrate load balancing, rich telemetry, and failure resiliency directly into the switch. That reduces cost and power, freeing up the power budget for what matters most: your GPUs and XPUs.
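To make the load-balancing point concrete, here is a minimal, hypothetical Python sketch contrasting classic per-flow ECMP hashing with per-packet traffic spraying – the two behaviors the myth is about. The link count, flow tuple, and function names are illustrative assumptions, not any vendor's implementation.

```python
# Illustrative sketch (not any vendor's implementation): two ways a fabric
# can distribute traffic across parallel links between switches.
import hashlib
import random

NUM_LINKS = 8  # hypothetical number of parallel uplinks between two switches

def per_flow_ecmp(src, dst, src_port, dst_port, proto="udp"):
    """Classic ECMP: hash the 5-tuple so every packet of a flow takes the
    same link. Simple, but one elephant flow can saturate a single link."""
    key = f"{src}:{dst}:{src_port}:{dst_port}:{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % NUM_LINKS

def per_packet_spray():
    """Packet spraying: each packet can take any link, which evens out link
    utilization but may reorder packets unless the fabric or endpoint
    restores ordering."""
    return random.randrange(NUM_LINKS)

if __name__ == "__main__":
    flow = ("10.0.0.1", "10.0.0.2", 49152, 4791)  # hypothetical flow
    print("per-flow link:", per_flow_ecmp(*flow))
    print("sprayed links:", [per_packet_spray() for _ in range(8)])
```

The trade-off is that spraying keeps links evenly loaded but introduces reordering that something in the path must absorb; the argument above is that handling this in the switch, rather than in high-power programmable NICs, is the simpler and cheaper place to do it.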
Looking ahead, the trend is clear: NIC functions will increasingly be embedded into XPUs. The smarter strategy is to simplify, not over-engineer.
Myth 5: You have to match your network to your GPU vendor
There’s no good reason for this. The most advanced GPU clusters in the world – deployed at the largest hyperscalers – run on Ethernet.
Why? Because it enables flatter, more efficient network topologies. It’s vendor-neutral. And it supports innovation – from AI-optimized collective libraries to workload-specific tuning at both the scale-up and scale-out levels.
Ethernet is a standards-based, well-understood technology with a vibrant ecosystem of partners. This allows AI clusters to scale more easily, completely decoupled from the choice of GPU or XPU, delivering an open, scalable, and power-efficient system.
The bottom line
Networking used to be an afterthought. Now it’s a strategic enabler of AI performance, efficiency, and scalability.
If your architecture is still built around assumptions from five years ago, it’s time to rethink them. The future of AI is being built on Ethernet – and that future is already here.
About Ram Velaga
Broadcom
Ram Velaga is Senior Vice President and General Manager of the Core Switching Group at Broadcom, responsible for the company’s extensive Ethernet switch portfolio serving broad markets including the service provider, data center and enterprise segments. Prior to joining Broadcom in 2012, he served in a variety of product management roles at Cisco Systems, including Vice President of Product Management for the Data Center Technology Group. Mr. Velaga earned an M.S. in Industrial Engineering from Penn State University and an M.B.A. from Cornell University. Mr. Velaga holds patents in communications and virtual infrastructure.
AI Research
Indonesia on Track to Achieve Sovereign AI Goals With NVIDIA, Cisco and IOH
As one of the world’s largest emerging markets, Indonesia is making strides toward its “Golden 2045 Vision” — an initiative tapping digital technologies and bringing together government, enterprises, startups and higher education to enhance productivity, efficiency and innovation across industries.
Building out the nation’s AI infrastructure is a crucial part of this plan.
That’s why Indonesian telecommunications leader Indosat Ooredoo Hutchison, aka Indosat or IOH, has partnered with Cisco and NVIDIA to support the establishment of Indonesia’s AI Center of Excellence (CoE). Led by the Ministry of Communications and Digital Affairs, known as Komdigi, the CoE aims to advance secure technologies, cultivate local talent and foster innovation through collaboration with startups.
Indosat Ooredoo Hutchison President Director and CEO Vikram Sinha, Cisco Chair and CEO Chuck Robbins and NVIDIA Senior Vice President of Telecom Ronnie Vasishta today detailed the purpose and potential of the CoE during a fireside chat at Indonesia AI Day, a conference focused on how artificial intelligence can fuel the nation’s digital independence and economic growth.
As part of the CoE, a new NVIDIA AI Technology Center will offer research support, NVIDIA Inception program benefits for eligible startups, and NVIDIA Deep Learning Institute training and certification to upskill local talent.
“With the support of global partners, we’re accelerating Indonesia’s path to economic growth by ensuring Indonesians are not just users of AI, but creators and innovators,” Sinha said.
“The AI era demands fundamental architectural shifts and a workforce with digital skills to thrive,” Robbins said. “Together with Indosat, NVIDIA and Komdigi, Cisco will securely power the AI Center of Excellence — enabling innovation and skills development, and accelerating Indonesia’s growth.”
“Democratizing AI is more important than ever,” Vasishta added. “Through the new NVIDIA AI Technology Center, we’re helping Indonesia build a sustainable AI ecosystem that can serve as a model for nations looking to harness AI for innovation and economic growth.”
Making AI More Accessible
The Indonesia AI CoE will comprise an AI factory that features full-stack NVIDIA AI infrastructure — including NVIDIA Blackwell GPUs, NVIDIA Cloud Partner reference architectures and NVIDIA AI Enterprise software — as well as an intelligent security system powered by Cisco.
Called the Sovereign Security Operations Center Cloud Platform, the Cisco-powered system combines AI-based threat detection, localized data control and managed security services for the AI factory.
Building on the sovereign AI initiatives Indonesia’s technology leaders announced with NVIDIA last year, the CoE will bolster the nation’s AI strategy through four core pillars.
Some 28 independent software vendors and startups are already using IOH’s NVIDIA-powered AI infrastructure to develop cutting-edge technologies that can speed and ease workflows across higher education and research, food security, bureaucratic reform, smart cities and mobility, and healthcare.
With Indosat’s coverage across the archipelago, the company can reach hundreds of millions of Bahasa Indonesia speakers with its large language model (LLM)-powered applications.
For example, using Indosat’s Sahabat-AI collection of Bahasa Indonesia LLMs, the Indonesian government and Hippocratic AI are collaborating to develop an AI agent system that provides preventive outreach capabilities, such as helping women subscribers over the age of 50 schedule a mammogram. This can help prevent or combat breast cancer and other health complications across the population.
Separately, Sahabat-AI also enables Indosat’s AI chatbot to answer queries in the Indonesian language for various citizen and resident services. A person could ask about processes for updating their national identification card, as well as about tax rates, payment procedures, deductions and more.
In addition, a government-led forum is developing trustworthy AI frameworks tailored to Indonesian values for the safe, responsible development of artificial intelligence and related policies.
Looking forward, Indosat and NVIDIA plan to deploy AI-RAN technologies that can reach even broader audiences using AI over wireless networks.
Learn more about NVIDIA-powered AI infrastructure for telcos.
AI Research
Silicon Valley eyes a governance-lite gold rush
Andreessen Horowitz has had enough of Delaware and is moving a unit’s incorporation out west
AI Research
Artificially intelligent: Does it matter if ChatGPT can’t think? – AFR