Myths of AI networking — debunked


As AI infrastructure scales at an unprecedented rate, a number of outdated assumptions keep resurfacing – especially when it comes to the role of networking in large-scale training and inference systems. Many of these myths are rooted in technologies that worked well for small clusters. But today’s systems are scaling to hundreds of thousands – and soon, millions – of GPUs. Those older models no longer apply. Let’s walk through some of the most common myths – and why Ethernet has clearly emerged as the foundation for modern AI networking.

Myth 1: You cannot use Ethernet for high-performance AI networks

This myth has already been busted. Ethernet is now the de facto networking technology for AI at scale. Most, if not all, of the largest GPU clusters deployed in the past year have used Ethernet for scale-out networking.

Ethernet delivers performance that matches or exceeds what alternatives like InfiniBand offer – while providing a stronger ecosystem, broader vendor support, and faster innovation cycles. InfiniBand, for example, wasn’t designed for today’s scale. It’s a legacy fabric being pushed beyond its original purpose.

Meanwhile, Ethernet is thriving: multiple vendors are shipping 51.2T switches, and Broadcom recently introduced Tomahawk 6, the industry’s first 102.4T switch. Ecosystems for optical and electrical interconnect are also mature, and clusters of 100K GPUs and beyond are now routinely built on Ethernet.

Myth 2: You need separate networks for scale-up and scale-out

This was acceptable when GPU nodes were small. Legacy scale-up links originated in an era when connecting two or four GPUs was enough. Today, scale-up domains are expanding rapidly. You’re no longer connecting four GPUs – you’re designing systems with 64, 128, or more in a single scale-up cluster. And that’s where Ethernet, with its proven scalability, becomes the obvious choice.

Using separate technologies for local and cluster-wide interconnect only adds cost, complexity, and risk. What you want is the opposite: a single, unified network that supports both. That’s exactly what Ethernet delivers – along with interface fungibility, simplified operations, and an open ecosystem.

To accelerate this interface convergence, we’ve contributed the Scale-Up Ethernet (SUE) framework to the Open Compute Project, helping the industry standardize around a single AI networking fabric.

Myth 3: You need proprietary interconnects and exotic optics

This is another holdover from a different era. Proprietary interconnects and tightly coupled optics may have worked for small, fixed systems – but today’s AI networks demand flexibility and openness.

Ethernet gives you options: third-generation co-packaged optics (CPO), module-based retimed optics, linear drive optics, and the longest-reach passive copper. You’re not locked into one solution. You can tailor your interconnect to your power, performance, and economic goals – with full ecosystem support.

Myth 4: You need proprietary NIC features for AI workloads

Some AI networks rely on programmable, high-power NICs to support features like congestion control or traffic spraying. But in many cases, that’s just masking limitations in the switching fabric.

Modern Ethernet switches – like Tomahawk 5 and 6 – integrate load balancing, rich telemetry, and failure resiliency directly into the switch. That reduces cost, lowers power consumption, and frees up that power for what matters most: your GPUs and XPUs.
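
To make the switch-side load-balancing idea concrete, here is a minimal Python sketch of flow-hash-based path selection, the classic way a fabric spreads traffic across parallel links without relying on a programmable NIC. The port names, header fields, and CRC hash are illustrative assumptions only, not a description of how Tomahawk devices implement it.

```python
import zlib

# Hypothetical parallel uplinks toward the next switch tier.
UPLINKS = ["port0", "port1", "port2", "port3"]

def select_uplink(src_ip: str, dst_ip: str, src_port: int, dst_port: int, proto: str = "udp") -> str:
    """Pick an uplink by hashing the flow's 5-tuple.

    Packets of the same flow always hash to the same path (no reordering),
    while different flows spread across all available paths, so the fabric
    balances load itself rather than depending on NIC-based traffic spraying.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return UPLINKS[zlib.crc32(key) % len(UPLINKS)]

# Two different flows typically land on different uplinks.
print(select_uplink("10.0.0.1", "10.0.1.7", 40000, 4791))
print(select_uplink("10.0.0.2", "10.0.1.9", 40001, 4791))
```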

Looking ahead, the trend is clear: NIC functions will increasingly be embedded into XPUs. The smarter strategy is to simplify, not over-engineer.

Myth 5: You have to match your network to your GPU vendor

There’s no good reason for this. The most advanced GPU clusters in the world – deployed at the largest hyperscalers – run on Ethernet.

Why? Because it enables flatter, more efficient network topologies. It’s vendor-neutral. And it supports innovation – from AI-optimized collective libraries to workload-specific tuning at both the scale-up and scale-out levels.

Ethernet is a standards-based, well-understood technology with a vibrant ecosystem of partners. It allows AI clusters to scale more easily, completely decoupled from the choice of GPU/XPU, delivering an open, scalable, and power-efficient system.

The bottom line

Networking used to be an afterthought. Now it’s a strategic enabler of AI performance, efficiency, and scalability.

If your architecture is still built around assumptions from five years ago, it’s time to rethink them. The future of AI is being built on Ethernet – and that future is already here.


About Ram Velaga

Broadcom

Ram Velaga is Senior Vice President and General Manager of the Core Switching Group at Broadcom, responsible for the company’s extensive Ethernet switch portfolio serving broad markets including the service provider, data center and enterprise segments. Prior to joining Broadcom in 2012, he served in a variety of product management roles at Cisco Systems, including Vice President of Product Management for the Data Center Technology Group. Mr. Velaga earned an M.S. in Industrial Engineering from Penn State University and an M.B.A. from Cornell University. Mr. Velaga holds patents in communications and virtual infrastructure.




AI & Elections – McCourt School – Public Policy



How can artificial intelligence transform election administration while preserving public trust? This spring, the McCourt School of Public Policy partnered with The Elections Group and Discourse Labs to tackle this critical question at a groundbreaking workshop.

The day-long convening brought together election officials, researchers, technology experts and civil society leaders to chart a responsible path forward for AI in election administration. Participants focused particularly on how AI could revolutionize voter-centric communications—from streamlining information delivery to enhancing accessibility.

The discussions revealed both promising opportunities for public service innovation and legitimate concerns about maintaining institutional trust in our democratic processes. Workshop participants developed a comprehensive set of findings and actionable recommendations that could shape the future of election technology.

Expert Insights from Georgetown’s Leading Researchers

To unpack the workshop’s key insights, we spoke with two McCourt School experts who are at the forefront of this intersection between technology and democracy:

Ioannis Ziogas is an Assistant Teaching Professor at the McCourt School of Public Policy, an Assistant Research Professor at the Massive Data Institute, and Associate Director of the Data Science for Public Policy program. His work bridges the gap between cutting-edge data science and real-world policy challenges.


Lia Merivaki is an Associate Teaching Professor at the McCourt School of Public Policy and Associate Research Professor at the Massive Data Institute, where she focuses on the practical applications of technology in democratic governance, particularly election integrity and voter confidence.

Together, they address five essential questions about AI’s role in election administration—and what it means for voters, officials and democracy itself.

Q1

How is AI currently being used in election administration, and are there particular jurisdictions that are leading in adoption?

Ioannis: When we talk about AI in elections, we need to clarify that it is not a single technology but a family of approaches, from predictive analytics to natural language processing to generative AI. In practice, election officials are already using generative AI routinely for communication purposes such as drafting social media posts and shaping public-facing messages. These efforts aim to increase trust in the election process and make information more accessible. Some offices have even experimented with using generative AI to design infographics, though this can be tricky due to hallucinations or inaccuracies. More recently, local election officials are exploring AI to streamline staff training and operations, or to summarize complex legal documents.

Our work focuses on alerting election officials to the limitations of generative AI, such as model drift and bias propagation. A key distinction we emphasize in our research is between AI as a backend administrative tool (which voters may never see) and AI as a direct interface with the public (where voter trust and transparency become central). We believe that generative AI tools can be used in both contexts, provided that there is awareness of the challenges and limitations.

Lia: Election officials have been familiar with AI for quite some time, primarily to understand how to mitigate AI-generated misinformation. A leader in this space has been the Arizona Secretary of State Adrian Fontes, who conducted the first-of-its-kind deepfake detection tabletop exercise in preparation for the 2024 election cycle.

We’ve had conversations with election officials in California, New Mexico, North Carolina, Florida, Maryland and others whom we call early adopters, with many more being ‘AI curious.’

Q2

Security is always a big concern when it comes to the use of AI. Talk about what risks are introduced by bringing AI into election administration, and conversely, how AI can help detect and prevent any type of election interference and voter fraud.

Ioannis: From my perspective, the core security challenge is not only technical but also about privacy and trust. AI systems, by design, rely on large volumes of data. In election contexts, this often includes sensitive voter information. Even when anonymized, the use of personal data raises concerns about surveillance, profiling, or accidental disclosure. Another risk relates to delegating sensitive tasks to AI, which can render election systems vulnerable to adversarial attacks or hidden biases baked into the models. 

At the same time, AI can support security: machine learning can detect coordinated online influence campaigns, identify anomalous traffic to election websites, or flag irregularities that warrant further human review. In short, I view AI as both a potential shield and a potential vulnerability, which is why careful governance and transparency are essential. That is why I believe it is critical to pair AI adoption with clear safeguards, training and guidance, so that officials can use these tools confidently and responsibly.

Lia: A potential risk we are trying to mitigate is the impact on voter trust of relying on AI for important administrative tasks. For instance, voters who call or email their election official expecting to talk with a person, but instead interact with a chatbot, may feel disappointed and, in turn, distrust both the information and the election official. There is also some evidence that voters do not trust information that is generated with AI, particularly when its use is disclosed.

As for detecting and preventing any irregularities, over-reliance on AI can be problematic and can lead to disenfranchisement. To illustrate, AI can help identify individuals in voter records whose information is missing, which would seemingly make the process of maintaining accurate lists more efficient. The election office can send a letter to these individuals to verify they are citizens, and ask for their information to be updated. This seems like a sound practice; however, it violates federal law, and it risks making eligible voters feel intimidated, or having their eligibility challenged by bad actors. The reality is that maintaining voter records is a highly complex process, and data entry errors are very common. Deploying AI models to replace existing practices in election administration such as voter list maintenance – with the goal of detecting whether non-citizens register or whether dead voters remain on voter rolls – can harm voters and undermine trust.

Q3

What are the biggest barriers to AI adoption in election administration – technical, financial, or political?

Lia: There are significant skills and knowledge gaps among election officials when it comes to utilizing technology generally, and we see such gaps with AI adoption, which is not surprising. Aside from technical barriers, election offices are under-resourced, especially at the local jurisdiction level. We observe that policies around AI adoption in public administration generally, and election administration specifically, are sparse at the moment.

While the election community invested a lot of resources to safeguard the election infrastructure against the threats of AI, we are not seeing – yet – a proportional effort to educate and prepare election officials on how to use AI to improve elections. To better understand the landscape of AI adoption and how to best support the election community, we hosted an exploratory workshop at McCourt in April 2025, in collaboration with The Elections Group and Discourse Labs. In this workshop, we brought together election officials, industry, civil society leaders and other practitioners to discuss how AI tools are used by election officials, what technical barriers exist and how to move forward with designing policies on ethical and responsible use of AI in election administration. Through this workshop, we identified a list of priorities which require close collaboration among the election community, academia, civil society and industry, to ensure that the adoption of AI is done responsibly, ethically and efficiently, without negatively affecting the voter experience.

Ioannis: I would highlight that barriers are not just about resources but also about institutional design. Election officials often work in environments of high political scrutiny but low budgets and limited technical staff. Introducing AI tools into that context requires financial investment and clear guidance on how to evaluate these systems: what counts as success, how to measure error rates and how to align tools with federal and state regulations. Beyond that, there is a cultural barrier. Many election officials are understandably cautious; they’ve spent the past decade defending democracy against disinformation and cyber threats, so embracing new technologies requires trust and confidence that AI will not introduce new risks. That is why partnerships with universities and nonpartisan civil-society groups are critical: they provide a space to pilot ideas, build capacity, and translate research into practice.

Our two priorities are to help narrow the skills gap and build frameworks for ethical and responsible AI use. At McCourt, we’re collaborating with Arizona State University’s Mechanics of Democracy Lab, which is developing training materials and custom-AI products for election officials. Drawing on our background in AI and elections, we aim to provide election officials with a practical resource that maps out both the risks and the potential of these tools, and that helps them identify ideal use cases where AI can enhance efficiency without compromising trust or voter experience.

Q4

Looking ahead, what emerging AI technologies could transform election administration in the next 5-10 years?

Lia: It’s hard to predict really. At the moment we are seeing high interest from vendors and election officials to integrate AI into elections. Concerns about security and privacy will undoubtedly shape the discussion about what AI can do for the election infrastructure. It could be possible that we see a liberal approach to using AI technologies to communicate with voters, produce training materials, translate election materials into non-English languages, among others. That said, elections are run by humans, and maintaining public trust relies on having “humans in the – elections – loop.” This, coupled with ongoing debates about how AI should or should not be regulated, may result in more guardrails and restrictions over time.

Ioannis: One promising direction is multimodal AI: systems that process text, audio and images together. For election officials, this could mean automatically generating plain-language guides, sign-language translations, or sample audio ballots to improve accessibility. But these same tools can amplify risks if their limitations are not understood. For that reason, any adoption will need to be coupled with auditing, transparency and education for election staff, so they view AI as a supportive tool rather than a replacement platform or a black box.

Q5

What guidelines or regulatory frameworks are needed to govern AI use in elections?

Ioannis: We urgently need a baseline framework that establishes what is permissible, what requires disclosure, and what is off-limits. Today, election officials are experimenting with AI in a largely unregulated space, and they are eager for guidance. A responsible framework should include at least three elements: a) transparency: voters should know when AI-generated materials are used in communications; b) accountability: human oversight should retain the final authority, with AI serving only as a support; and c) auditing: independent experts must be able to test and evaluate these tools for accuracy, bias and security.




AI Transformation in NHS Faces Key Challenges: Study



Implementing artificial intelligence (AI) in NHS hospitals is far harder than initially anticipated, with complications around governance, harmonisation with old IT systems, finding the right AI tools, and staff training, finds a major new UK study led by UCL researchers.

The authors of the study, published in The Lancet eClinicalMedicine, say the findings should provide timely and useful learning for the UK Government, whose recent 10-year NHS plan identifies digital transformation, including AI, as a key platform to improving the service and patient experience.

In 2023, NHS England launched a programme to introduce AI to help diagnose chest conditions, including lung cancer, across 66 NHS hospital trusts in England, backed by £21 million in funding. The trusts are grouped into 12 imaging diagnostic networks: these hospital networks mean more patients have access to specialist opinions. Key functions of these AI tools included prioritising critical cases for specialist review and supporting specialists’ decisions by highlighting abnormalities on scans.

Funded by the National Institute for Health and Care Research (NIHR), this research was conducted by a team from UCL, the Nuffield Trust, and the University of Cambridge, analysing how procurement and early deployment of the AI tools went. It is one of the first studies to analyse real-world implementation of AI in healthcare.

Evidence from previous studies, mostly laboratory-based, suggested that AI might benefit diagnostic services by supporting decisions, improving detection accuracy, reducing errors and easing workforce burdens.

In this UCL-led study, the researchers reviewed how the new diagnostic tools were procured and set up through interviews with hospital staff and AI suppliers, identifying any pitfalls but also any factors that helped smooth the process.

They found that setting up the AI tools took longer than anticipated by the programme’s leadership. Contracting took between four and 10 months longer than anticipated, and by June 2025 – 18 months after contracting was meant to be completed – a third (23 out of 66) of the hospital trusts were not yet using the tools in clinical practice.

Key challenges included engaging clinical staff, who already had high workloads, in the project; embedding the new technology in ageing and varied NHS IT systems across dozens of hospitals; and a general lack of understanding of, and scepticism about, using AI in healthcare among staff.

The study also identified important factors that helped embed AI, including national programme leadership, local imaging networks sharing resources and expertise, high levels of commitment from the hospital staff leading implementation, and dedicated project management.

The researchers concluded that while “AI tools may offer valuable support for diagnostic services, they may not address current healthcare service pressures as straightforwardly as policymakers may hope”. They recommend that NHS staff be trained in how AI can be used effectively and safely, and that dedicated project management be used to implement schemes like this in the future.

First author Dr Angus Ramsay (UCL Department of Behavioural Science and Health) said: “In July ministers unveiled the Government’s 10-year plan for the NHS, of which a digital transformation is a key platform.

“Our study provides important lessons that should help strengthen future approaches to implementing AI in the NHS.

“We found it took longer to introduce the new AI tools in this programme than those leading the programme had expected.

“A key problem was that clinical staff were already very busy – finding time to go through the selection process was a challenge, as was supporting integration of AI with local IT systems and obtaining local governance approvals. Services that used dedicated project managers found their support very helpful in implementing changes, but only some services were able to do this.

“Also, a common issue was the novelty of AI, suggesting a need for more guidance and education on AI and its implementation.

“AI tools can offer valuable support for diagnostic services, but they may not address current healthcare service pressures as simply as policymakers may hope.”

The researchers conducted their evaluation between March and September last year, studying 10 of the participating networks and focusing in depth on six NHS trusts. They interviewed network teams, trust staff and AI suppliers, observed planning, governance and training and analysed relevant documents.

Some of the imaging networks and many of the hospital trusts within them were new to procuring and working with AI.

The problems involved in setting up the new tools varied – for example, in some cases those procuring the tools were overwhelmed by a huge amount of very technical information, increasing the likelihood of key details being missed. Consideration should be given to creating a nationally approved shortlist of potential suppliers to facilitate procurement at the local level, the researchers said.

Another problem was an initial lack of enthusiasm among some NHS staff for the new technology in this early phase, with some more senior clinical staff raising concerns about the potential impact of AI making decisions without clinical input and about where accountability lay in the event a condition was missed. The researchers found the training offered to staff did not address these issues sufficiently across the wider workforce – hence their call for early and ongoing training on future projects.

In contrast, however, the study team found the process of procurement was supported by advice from the national team and imaging networks learning from each other. The researchers also observed high levels of commitment and collaboration between local hospital teams (including clinicians and IT) working with AI supplier teams to progress implementation within hospitals.

Senior author Professor Naomi Fulop (UCL Department of Behavioural Science and Health) said: “In this project, each hospital selected AI tools for different reasons, such as focusing on X-ray or CT scanning, and purposes, such as to prioritise urgent cases for review or to identify potential symptoms.

“The NHS is made up of hundreds of organisations with different clinical requirements and different IT systems and introducing any diagnostic tools that suit multiple hospitals is highly complex. These findings indicate AI might not be the silver bullet some have hoped for but the lessons from this study will help the NHS implement AI tools more effectively.”

Limitations

While the study has added to the very limited body of evidence on the implementation and use of AI in real-world settings, it focused on procurement and early deployment. The researchers are now studying the use of AI tools following early deployment when they have had a chance to become more embedded. Further, the researchers did not interview patients and carers and are therefore now conducting such interviews to address important gaps in knowledge about patient experiences and perspectives, as well as considerations of equity.





NHS struggling with implementing AI in healthcare, UCL study finds



The promise of AI in healthcare has been a beacon of hope for improving patient outcomes and alleviating the pressures on overworked medical staff.

Yet, a new UK study reveals a more complex reality: implementing AI tools in NHS hospitals is a far greater challenge than initially anticipated.

Led by researchers at University College London (UCL), the study found that the journey from an AI concept to clinical reality is riddled with hurdles, including governance issues, a lack of staff training, and difficulties integrating new technology with existing NHS IT systems.

The findings serve as a crucial learning opportunity for policymakers, including the UK Government, which has identified digital transformation and AI as key pillars of its long-term plan for the NHS.

A pioneering programme and its unexpected delays

The research, funded by the National Institute for Health and Care Research (NIHR), focused on an NHS England programme launched in 2023.

The initiative aimed to introduce AI tools to help diagnose chest conditions, including lung cancer, across 66 hospital trusts.

Backed by £21m in funding, the initiative was designed to enhance diagnostic services by prioritising critical cases for review and flagging abnormalities on scans.

While previous laboratory-based studies had suggested that AI could significantly benefit diagnostic services by improving accuracy and reducing errors, this new research provides one of the first real-world analyses of AI implementation in a healthcare setting.

The UCL-led team, which also included experts from the Nuffield Trust and the University of Cambridge, conducted a deep dive into the procurement and early deployment of these AI tools.

Through interviews with hospital staff and AI suppliers, they uncovered both the pitfalls and the practices that helped smooth the process.

The findings showed that the rollout of the AI tools took significantly longer than expected.

Contracting alone was delayed by four to ten months, and by June 2025, 18 months after the initial target for completion, a third (23 out of 66) of the hospital trusts were still not using the AI in their clinical practice.

Human and technological hurdles to AI adoption

The study identified a number of key challenges that slowed down the implementation. A major issue was the sheer difficulty of engaging clinical staff who are already grappling with incredibly high workloads.

Many staff members also lacked a fundamental understanding of the new technology and expressed a degree of scepticism about using AI in healthcare.

This was particularly true for more senior staff who had concerns about accountability and the potential for AI to make decisions without human oversight.

Technological challenges were equally daunting. The new AI tools needed to be embedded within the NHS’s ageing and diverse IT systems, a complex task that varied from one hospital to another.

The technical nature of the procurement process also proved to be a hurdle, as some staff members found themselves overwhelmed by the sheer volume of detailed information, increasing the likelihood of critical details being overlooked.

The path forward: Best practices and recommendations

Despite the challenges, the study also highlighted several factors that contributed to a smoother implementation.

Dedicated project management was a key success factor, as was a high level of commitment from the hospital staff leading the implementation.

The research also found that shared learning and resources between local imaging networks, as well as strong national programme leadership, helped facilitate the process.

In their conclusion, the researchers noted that while AI tools can provide valuable support for diagnostic services, they “may not address current healthcare service pressures as straightforwardly as policymakers may hope.”

The study’s authors made several important recommendations to improve future rollouts of AI in healthcare.

They emphasised the need for early and ongoing training for NHS staff on how to use AI effectively and safely, stressing that this training must address concerns about accountability and clinical input.

They also suggested that creating a nationally approved shortlist of potential AI suppliers could help streamline the procurement process for individual hospital trusts.

Next steps for research

The researchers are now conducting further studies to understand how the tools are used once fully embedded and to explore the perspectives of patients and carers, a crucial aspect that was not part of this initial phase.

This ongoing work will continue to build on the limited but growing body of evidence about real-world AI implementation, paving the way for a more effective and successful integration of technology into the future of the NHS.


