AI Research
Myths of AI networking — debunked
As AI infrastructure scales at an unprecedented rate, a number of outdated assumptions keep resurfacing – especially when it comes to the role of networking in large-scale training and inference systems. Many of these myths are rooted in technologies that worked well for small clusters. But today’s systems are scaling to hundreds of thousands – and soon, millions – of GPUs. Those older models no longer apply. Let’s walk through some of the most common myths – and why Ethernet has clearly emerged as the foundation for modern AI networking.
Myth 1: You cannot use Ethernet for high-performance AI networks
This myth has already been busted. Ethernet is now the de facto networking technology for AI at scale. Most, if not all, of the largest GPU clusters deployed in the past year have used Ethernet for scale-out networking.
Ethernet delivers performance that matches or exceeds what alternatives like InfiniBand offer – while providing a stronger ecosystem, broader vendor support, and faster innovation cycles. InfiniBand, for example, wasn’t designed for today’s scale. It’s a legacy fabric being pushed beyond its original purpose.
Meanwhile, Ethernet is thriving: multiple vendors are shipping 51.2T switches, and Broadcom recently introduced Tomahawk 6, the industry’s first 102.4T switch. Ecosystems for optical and electrical interconnect are also mature, and clusters of 100K GPUs and beyond are now routinely built on Ethernet.
Myth 2: You need separate networks for scale-up and scale-out
This was acceptable when GPU nodes were small. Legacy scale-up links originated in an era when connecting two or four GPUs was enough. Today, scale-up domains are expanding rapidly. You’re no longer connecting four GPUs – you’re designing systems with 64, 128, or more in a single scale-up cluster. And that’s where Ethernet, with its proven scalability, becomes the obvious choice.
Using separate technologies for local and cluster-wide interconnect only adds cost, complexity, and risk. What you want is the opposite: a single, unified network that supports both. That’s exactly what Ethernet delivers – along with interface fungibility, simplified operations, and an open ecosystem.
To accelerate this interface convergence, we’ve contributed the Scale-Up Ethernet (SUE) framework to the Open Compute Project, helping the industry standardize around a single AI networking fabric.
Myth 3: You need proprietary interconnects and exotic optics
This is another holdover from a different era. Proprietary interconnects and tightly coupled optics may have worked for small, fixed systems – but today’s AI networks demand flexibility and openness.
Ethernet gives you options: third-generation co-packaged optics (CPO), module-based retimed optics, linear drive optics, and the longest-reach passive copper. You’re not locked into one solution. You can tailor your interconnect to your power, performance, and economic goals – with full ecosystem support.
Myth 4: You need proprietary NIC features for AI workloads
Some AI networks rely on programmable, high-power NICs to support features like congestion control or traffic spraying. But in many cases, that’s just masking limitations in the switching fabric.
Modern Ethernet switches – like Tomahawk 5 and 6 – integrate load balancing, rich telemetry, and failure resiliency directly into the switch silicon. That reduces cost, lowers NIC power, and frees up power budget for what matters most: your GPUs and XPUs.
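To make the idea concrete, here is a minimal, illustrative Python sketch of flowlet-aware load balancing, the kind of function that can live in the switch rather than in a programmable NIC. The uplink count, flowlet gap, and flow key below are assumptions chosen for the example, not a description of Tomahawk internals.

```python
import time
import random
from collections import defaultdict

# Illustrative sketch of flowlet-based load balancing. Link count,
# gap threshold, and the flow key are arbitrary assumptions.
FLOWLET_GAP_S = 0.0005   # idle gap (500 us) after which a flow can be re-pinned safely
NUM_UPLINKS = 8          # assumed number of equal-cost uplinks

link_load = [0] * NUM_UPLINKS                   # bytes sent per uplink
flow_state = defaultdict(lambda: (None, 0.0))   # flow key -> (uplink, last seen)

def pick_uplink(flow_key: tuple, pkt_bytes: int) -> int:
    """Return the uplink for this packet, re-balancing only on flowlet gaps."""
    now = time.monotonic()
    uplink, last_seen = flow_state[flow_key]
    if uplink is None or (now - last_seen) > FLOWLET_GAP_S:
        # Safe to re-pin: choose a least-loaded uplink (ties broken randomly)
        least = min(link_load)
        candidates = [i for i, load in enumerate(link_load) if load == least]
        uplink = random.choice(candidates)
    flow_state[flow_key] = (uplink, now)
    link_load[uplink] += pkt_bytes
    return uplink

# Example: two flows get spread across uplinks without reordering within a flowlet
for src in ("10.0.0.1", "10.0.0.3"):
    print(pick_uplink((src, "10.0.0.2", 4791), 4096))
```

The point of the flowlet gap is that a flow is only moved to a less-loaded link when it has been idle long enough that its packets will not arrive out of order.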
Looking ahead, the trend is clear: NIC functions will increasingly be embedded into XPUs. The smarter strategy is to simplify, not over-engineer.
Myth 5: You have to match your network to your GPU vendor
There’s no good reason for this. The most advanced GPU clusters in the world – deployed at the largest hyperscalers – run on Ethernet.
Why? Because it enables flatter, more efficient network topologies. It’s vendor-neutral. And it supports innovation – from AI-optimized collective libraries to workload-specific tuning at both the scale-up and scale-out levels.
Ethernet is a standards-based, well-understood technology with a vibrant ecosystem of partners. It allows AI clusters to scale more easily, fully decoupled from the choice of GPU or XPU, delivering an open, scalable, and power-efficient system.
The bottom line
Networking used to be an afterthought. Now it’s a strategic enabler of AI performance, efficiency, and scalability.
If your architecture is still built around assumptions from five years ago, it’s time to rethink them. The future of AI is being built on Ethernet – and that future is already here.
About Ram Velaga
Broadcom
Ram Velaga is Senior Vice President and General Manager of the Core Switching Group at Broadcom, responsible for the company’s extensive Ethernet switch portfolio serving broad markets including the service provider, data center and enterprise segments. Prior to joining Broadcom in 2012, he served in a variety of product management roles at Cisco Systems, including Vice President of Product Management for the Data Center Technology Group. Mr. Velaga earned an M.S. in Industrial Engineering from Penn State University and an M.B.A. from Cornell University. Mr. Velaga holds patents in communications and virtual infrastructure.
AI Research
RRC getting real with artificial intelligence – Winnipeg Free Press
Red River College Polytechnic is offering crash courses in generative artificial intelligence to help classroom teachers get more comfortable with the technology.
Foundations of Generative AI in Education, a microcredential that takes 15 hours to complete, gives participants guidance to explore AI tools and encourage ethical and effective use of them in schools.
Tyler Steiner was tasked with creating the program in 2023, shortly after the release of ChatGPT — a chatbot that generates human-like replies to prompts within seconds — and numerous copycat programs that have come online since.
Lauren Phillips, an RRC Polytech associate dean, said it’s important students know when they can use AI. (Photo: Mike Deal / Free Press)
“There’s no putting that genie back in the bottle,” said Steiner, a curriculum developer at the post-secondary institute in Winnipeg.
While teachers can “lock and block” by reverting to pen-and-paper tests and essays, he said, the reality is that students are using GenAI outside school, and authentic experiential learning should reflect the real world.
Steiner’s advice?
Introduce it with the caveat that students should withhold personal information from prompts to protect their privacy, analyze answers for bias and “hallucinations” (false or misleading information), and be wary of over-reliance on the technology.
RRC Polytech piloted its first GenAI microcredential little more than a year ago. A total of 109 completion badges have been issued to date.
The majority of early participants in the training program are faculty members at RRC Polytech. The Winnipeg School Division has also covered the tab for about 20 teachers who’ve expressed interest in upskilling.
“There was a lot of fear when GenAI first launched, but we also saw that it had a ton of power and possibility in education,” said Lauren Phillips, associate dean of RRC Polytech’s school of education, arts and sciences.
Phillips called a microcredential “the perfect tool” to familiarize teachers with GenAI in short order, as it is already rapidly changing the kindergarten to Grade 12 and post-secondary education sectors.
Manitoba teachers have told the Free Press they are using chatbots to plan lessons and brainstorm report card comments, among other tasks.
Students are using them to help with everything from breaking down a complex math equation to creating schedules to manage their time. Others have been caught cutting corners.
Submitted assignments should always disclose when an author has used ChatGPT, Copilot or another tool “as a partner,” Phillips said.
She and Steiner said in separate interviews the key to success is providing students with clear instructions about when they can and cannot use this type of technology.
Business administration instructor Nora Sobel plans to spend much of the summer refreshing course content to incorporate their tips; Sobel recently completed all three GenAI microcredentials available on her campus.
Two new ones — Application of Generative AI in Education and Integration of Generative AI in Education — were added to the roster this spring.
Sobel said it is “overwhelming” to navigate this transformative technology, but it’s important to do so because employers will expect graduates to have the know-how to use these tools properly.
It’s often obvious when a student has used GenAI because their answers are abstract and generic, she said, adding her goal is to release rubrics in 2025-26 with explicit direction surrounding the active rather than passive use of these tools.
“The main idea is not to use the AI tool alone, standalone. You want to complement it with AI literacy training,” the instructor said.
She noted her favourite programs are conversational AI assistant Microsoft Copilot, Perplexity AI (an AI-powered search engine that generates answers with links to references) and Google NotebookLM.
Whereas Copilot and Perplexity AI primarily draw from external sources, Google NotebookLM can analyze trends in original items uploaded by a user.
Registration for RRC Polytech’s next introductory microcredential, running Oct. 6 through Nov. 2, is open. Tuition is $313 per student.
maggie.macintosh@freepress.mb.ca
Maggie Macintosh
Education reporter
Maggie Macintosh reports on education for the Free Press. Originally from Hamilton, Ont., she first reported for the Free Press in 2017. Read more about Maggie.
Funding for the Free Press education reporter comes from the Government of Canada through the Local Journalism Initiative.
AI Research
How IBM helped Lockheed Martin streamline its data landscape and fuel its AI
ITPro is a global business technology website providing the latest news, analysis, and business insight for IT decision-makers. Whether it’s cyber security, cloud computing, IT infrastructure, or business strategy, we aim to equip leaders with the data they need to make informed IT investments.
AI Research
Indonesia on Track to Achieve Sovereign AI Goals With NVIDIA, Cisco and IOH
As one of the world’s largest emerging markets, Indonesia is making strides toward its “Golden 2045 Vision” — an initiative tapping digital technologies and bringing together government, enterprises, startups and higher education to enhance productivity, efficiency and innovation across industries.
Building out the nation’s AI infrastructure is a crucial part of this plan.
That’s why Indonesian telecommunications leader Indosat Ooredoo Hutchison, aka Indosat or IOH, has partnered with Cisco and NVIDIA to support the establishment of Indonesia’s AI Center of Excellence (CoE). Led by the Ministry of Communications and Digital Affairs, called Komdigi, the CoE aims to advance secure technologies, cultivate local talent and foster innovation through collaboration with startups.
Indosat Ooredoo Hutchison President Director and CEO Vikram Sinha, Cisco Chair and CEO Chuck Robbins and NVIDIA Senior Vice President of Telecom Ronnie Vasishta today detailed the purpose and potential of the CoE during a fireside chat at Indonesia AI Day, a conference focused on how artificial intelligence can fuel the nation’s digital independence and economic growth.
As part of the CoE, a new NVIDIA AI Technology Center will offer research support, NVIDIA Inception program benefits for eligible startups, and NVIDIA Deep Learning Institute training and certification to upskill local talent.
“With the support of global partners, we’re accelerating Indonesia’s path to economic growth by ensuring Indonesians are not just users of AI, but creators and innovators,” Sinha said.
“The AI era demands fundamental architectural shifts and a workforce with digital skills to thrive,” Robbins said. “Together with Indosat, NVIDIA and Komdigi, Cisco will securely power the AI Center of Excellence — enabling innovation and skills development, and accelerating Indonesia’s growth.”
“Democratizing AI is more important than ever,” Vasishta added. “Through the new NVIDIA AI Technology Center, we’re helping Indonesia build a sustainable AI ecosystem that can serve as a model for nations looking to harness AI for innovation and economic growth.”
Making AI More Accessible
The Indonesia AI CoE will comprise an AI factory that features full-stack NVIDIA AI infrastructure — including NVIDIA Blackwell GPUs, NVIDIA Cloud Partner reference architectures and NVIDIA AI Enterprise software — as well as an intelligent security system powered by Cisco.
Called the Sovereign Security Operations Center Cloud Platform, the Cisco-powered system combines AI-based threat detection, localized data control and managed security services for the AI factory.
Building on the sovereign AI initiatives Indonesia’s technology leaders announced with NVIDIA last year, the CoE will bolster the nation’s AI strategy through four core pillars.
Some 28 independent software vendors and startups are already using IOH’s NVIDIA-powered AI infrastructure to develop cutting-edge technologies that can speed and ease workflows across higher education and research, food security, bureaucratic reform, smart cities and mobility, and healthcare.
With Indosat’s coverage across the archipelago, the company can reach hundreds of millions of Bahasa Indonesian speakers with its large language model (LLM)-powered applications.
For example, using Indosat’s Sahabat-AI collection of Bahasa Indonesian LLMs, the Indonesia government and Hippocratic AI are collaborating to develop an AI agent system that provides preventative outreach capabilities, such as helping women subscribers over the age of 50 schedule a mammogram. This can help prevent or combat breast cancer and other health complications across the population.
Separately, Sahabat-AI also enables Indosat’s AI chatbot to answer queries in the Indonesian language for various citizen and resident services. A person could ask about processes for updating their national identification card, as well as about tax rates, payment procedures, deductions and more.
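As a rough illustration of the kind of citizen-services query such a chatbot handles, the sketch below uses the Hugging Face transformers chat pipeline with a placeholder model identifier; it is an assumption-laden example, not a description of Indosat’s actual Sahabat-AI deployment.

```python
# Illustrative only: the model ID is a hypothetical placeholder, not a
# confirmed Sahabat-AI checkpoint. Substitute a real Bahasa Indonesia
# instruction-tuned chat model before running.
from transformers import pipeline

MODEL_ID = "example-org/bahasa-indonesia-chat-model"  # hypothetical identifier

chat = pipeline("text-generation", model=MODEL_ID)

messages = [
    {"role": "system", "content": "Anda adalah asisten layanan publik Indonesia."},
    {"role": "user", "content": "Bagaimana cara memperbarui KTP saya?"},  # "How do I update my national ID card?"
]

# Recent transformers versions accept chat-formatted input; the last
# message in the returned conversation is the assistant's reply.
reply = chat(messages, max_new_tokens=256)
print(reply[0]["generated_text"][-1]["content"])
```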
In addition, a government-led forum is developing trustworthy AI frameworks tailored to Indonesian values for the safe, responsible development of artificial intelligence and related policies.
Looking forward, Indosat and NVIDIA plan to deploy AI-RAN technologies that can reach even broader audiences using AI over wireless networks.
Learn more about NVIDIA-powered AI infrastructure for telcos.