
AI Research

AI for Network Latency | Pipeline Magazine



By: Ivo Ivanov

The meteoric impact of artificial intelligence over the past few years is difficult to overstate, and progress is moving so quickly that it is better measured in months rather than years. Some of
the biggest tech events of the year, such as the Consumer Electronics Show (CES) and Mobile World Congress (MWC), were dominated by emerging AI use cases, from LLM-powered humanoid robots to “sight beyond sight” vehicle-to-cloud software capable of giving cars human-like
senses. Businesses also have high expectations for the next generation of AI, testing and deploying everything from problem-solving AI agents to advanced data analytics and forecasting tools.
We’ve been conditioned over the past decade or more to believe that all we need to apply AI and use it effectively is data. More data typically means broader applications and better
results. In 2025, however, that singular approach is being brought into question.   

For years, AI innovation was synonymous with the cloud. Training clusters and centralized hyperscale platforms, the brains behind AI, draw power and data into vast facilities designed to meet the
demands of machine learning models. But this is just about the training of AI. With real-time demands and expectations of inference now resting heavily on its shoulders, AI is
breaking free from these traditional gravitational centers. In the interests of speed and instant access, AI is now in action almost everywhere – embedded in devices, vehicles, retail
environments, and digital agents that respond within milliseconds. So perhaps the question we should be asking isn’t just how much compute power can be deployed or how much data can be gathered –
but how intelligently it can all be connected.   

This direction of travel has brought one vital characteristic to the fore: latency. No matter how massive the model or how sophisticated the silicon, high latency is kryptonite to AI.
It’s the obstacle in the corridor, the roadworks on the highway – slowing everything to a crawl. It interrupts the flow of information between people, devices, and AI systems, introducing delays
that can degrade user experiences, limit real-time decision-making, and even compromise safety in critical applications such as autonomous vehicles or remote health monitoring, where split-second
responses can make all the difference. Make no mistake, in this post-AI world, latency is the new currency, and the next frontier of AI will be won or lost on the network layer.

Latency has always been a technical consideration in network design, but post-AI, it has become a business-critical variable. Every additional millisecond can distort outcomes in systems that
rely on real-time processing, learning, and adaptation. AI agents designed to predict, recommend, or act autonomously are only as good as the data pipelines that feed them. If information arrives
too late, decisions are made on stale insights, rendering even the most powerful models ineffective. Consider predictive maintenance in industrial manufacturing. Here, AI agents continuously
monitor sensor data from machinery to flag anomalies before they escalate into failures. But if that data is delayed by even a fraction of a second, the insight may arrive too late to prevent
damage or downtime. The same logic applies to AI in fraud detection, where instantaneous analysis of transaction patterns can mean the difference between blocking fraud and letting it through. In
both cases, latency isn’t just a technical hurdle – it directly affects business continuity and customer trust. 

Technically, the challenge lies in the sheer volume and velocity of data that AI applications must handle. Unlike traditional AI deployments, modern AI workloads are not simply about processing
large datasets and delivering results; they require high-frequency data exchange between edge devices, sensors, data centers, cloud platforms, and end-users. Each hop across a network introduces
potential delays, whether due to physical distance, network congestion, or inefficient routing. Minimizing latency, therefore, is not a matter of optimizing a single link; it demands a holistic
rethinking of how digital infrastructure is architected, interconnected, and managed. Moving forward, we need to consider latency reduction as a foundational design principle rather than an
optional variable – even with legacy networks, it can still be achieved.  
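To see why the end-to-end path matters more than any single link, consider a toy latency budget. The hop names and millisecond figures below are hypothetical illustrations, not measurements from the article:

```python
# Hypothetical per-hop latencies in milliseconds for one request path:
# device -> edge gateway -> regional data center -> cloud -> return.
hops_ms = {
    "device_to_edge": 2.0,
    "edge_to_regional_dc": 8.0,
    "regional_dc_to_cloud": 25.0,
    "cloud_processing": 15.0,
    "return_path": 35.0,
}

BUDGET_MS = 50.0  # assumed end-to-end target for a real-time AI response

total = sum(hops_ms.values())       # 85.0 ms in this example
over_budget = total > BUDGET_MS     # True: the budget is blown
```

In this sketch the cloud round trip alone consumes more than the whole budget, which is why restructuring where inference runs, rather than tuning one congested link, is the kind of holistic fix the paragraph above calls for.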

Traditionally, Internet Exchanges (IXs) were designed to facilitate efficient data exchange between networks, improving performance and reducing transit costs. But as AI workloads migrate to the
edge, the role of IXs is evolving. Rather than serving solely as aggregation points for global, largely content-based Internet traffic, IXs are






BITSoM launches AI research and innovation lab to shape future leaders



Mumbai: The BITS School of Management (BITSoM), under the aegis of BITS Pilani, a leading private university, will inaugurate its new BITSoM Research in AI and Innovation (BRAIN) Lab on its Kalyan campus on Friday. The lab is designed to prepare future leaders for workplaces transformed by artificial intelligence.


While explaining the concept of the laboratory, professor Saravanan Kesavan, dean of BITSoM, said that the BRAIN Lab had three core pillars: teaching, research, and outreach. Kesavan said, “It provides MBA (Master of Business Administration) students a dedicated space equipped with high-performance AI computers capable of handling tasks such as computer vision and large-scale data analysis. Students will not only learn about AI concepts in theory but also experiment with real-world applications.” Kesavan added that each graduating student would be expected to develop an AI product as part of their coursework, giving them first-hand experience in innovation and problem-solving.

The BRAIN lab is also designed to be a hub of collaboration where researchers can conduct projects in partnership with various companies and industries, creating a repository of practical AI tools to use. Kesavan said, “The initial focus areas (of the lab) include manufacturing, healthcare, banking and financial services, and Global Capability Centres (subsidiaries of multinational corporations that perform specialised functions).” He added that the case studies and research from the lab will be made freely available to schools, colleges, researchers, and corporate partners, ensuring that the benefits of the lab reach beyond the BITSoM campus.

BITSoM also plans to use the BRAIN Lab as a launchpad for startups. An AI programme will support entrepreneurs in developing solutions as per their needs while connecting them to venture capital networks in India and Silicon Valley. This will give young companies the chance to refine their ideas with guidance from both academics and industry leaders.

The centre’s physical setup resembles a modern computer lab, with dedicated workspaces, collaborative meeting rooms, and brainstorming zones. It has been designed to encourage creativity, allowing students to visualise how AI works, customise tools for different industries, and translate their technical capabilities into business impact.

In the context of a global workplace that is embracing AI, Kesavan said, “Future leaders need to understand not just how to manage people but also how to manage a workforce that combines humans and AI agents. Our goal is to ensure every student graduating from BITSoM is equipped with the skills to build AI products and apply them effectively in business.”

Kesavan said that advisors from reputed institutions such as Harvard, Johns Hopkins, the University of Chicago, and industry professionals from global companies will provide guidance to students at the lab. Alongside student training, BITSoM also plans to run reskilling programmes for working professionals, extending its impact beyond the campus.




AI grading issue affects hundreds of MCAS essays in Mass. – NBC Boston



The use of artificial intelligence to score statewide standardized tests resulted in errors that affected hundreds of exams, the NBC10 Investigators have learned.

The issue with the Massachusetts Comprehensive Assessment System (MCAS) surfaced over the summer, when preliminary results for the exams were distributed to districts.

The state’s testing contractor, Cognia, found roughly 1,400 essays did not receive the correct scores, according to a spokesperson with the Department of Elementary and Secondary Education.

DESE told NBC10 Boston all the essays were rescored, affected districts received notification, and all their data was corrected in August.

So how did humans detect the problem?

We found one example in Lowell, where an alert teacher at Reilly Elementary School was reading through her third-grade students’ essays over the summer. When she looked up the scores some of the students received, something did not add up.

The teacher notified the school principal, who then flagged the issue with district leaders.

“We were on alert that there could be a learning curve with AI,” said Wendy Crocker-Roberge, an assistant superintendent in the Lowell school district.

AI essay scoring works by using human-scored exemplars of what essays at each score point look like, according to DESE.


The AI tool uses that information to score the essays. In addition, humans give 10% of the AI-scored essays a second read and compare their scores with the AI scores to make sure there aren’t discrepancies. AI scoring was used for the same number of essays in 2025 as in 2024, DESE said.
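The quality-control loop DESE describes, a 10% human second read compared against the AI scores, can be sketched as follows. This is an illustrative outline, not Cognia's actual pipeline; the function names, sampling seed, and tolerance parameter are all assumptions:

```python
import random

def sample_for_human_review(essay_ids, fraction=0.10, seed=0):
    """Pick a fraction of AI-scored essays for a human second read."""
    rng = random.Random(seed)
    k = max(1, round(len(essay_ids) * fraction))
    return rng.sample(list(essay_ids), k)

def flag_discrepancies(ai_scores, human_scores, tolerance=0):
    """Return essay IDs where the human score disagrees with the AI score
    by more than the allowed tolerance."""
    return [
        essay_id
        for essay_id, human in human_scores.items()
        if abs(ai_scores[essay_id] - human) > tolerance
    ]
```

A check like this catches systematic drift only for the sampled essays; as the Lowell case shows, errors in the unsampled 90% can still surface only when a person reads the work directly.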

Crocker-Roberge said she decided to read about 1,000 essays in Lowell, but it was tough to pinpoint the exact reason some students did not receive proper credit.

However, it was clear the AI technology was deducting points without justification. For instance, Crocker-Roberge said she noticed that some essays lost a point when they did not use quotation marks when referencing a passage from the reading excerpt.

“We could not understand why an individual score was scored a zero when it should have gotten six out of seven points,” Crocker-Roberge said. “There just wasn’t any rhyme or reason to that.”

District leaders notified DESE about the problem, which resulted in approximately 1,400 essays being rescored. The state agency says the scoring problem was the result of a “temporary technical issue in the process.”

According to DESE, 145 districts were notified that they had at least one student essay that was not scored correctly.

“As one way of checking that MCAS scores are accurate, DESE releases preliminary MCAS results to districts and gives them time to report any issues during a discrepancy period each year,” a DESE spokesperson wrote in a statement.

Mary Tamer, the executive director of MassPotential, an organization that advocates for educational improvement, said there are a lot of positives to using AI and returning scores back to school districts faster so appropriate action can be taken. For instance, test results can help identify a child in need of intervention or highlight a lesson plan for a teacher that did not seem to resonate with students.

“I think there’s a lot of benefits that outweigh the risks,” said Tamer. “But again, no system is perfect and that’s true for AI. The work always has to be double-checked.”

DESE pointed out the affected exams represent a small percentage of the roughly 750,000 MCAS essays statewide.

However, in districts like Lowell, there are certain schools tracked by DESE to ensure progress is being made and performance standards are met.

That’s why Crocker-Roberge said every score counts.

With MCAS results expected to be released to parents in the coming weeks, the assistant superintendent is encouraging other districts to do a deep dive on their student essays to make sure they don’t notice any scoring discrepancies.

“I think we have to always proceed with caution when we’re introducing new tools and techniques,” Crocker-Roberge said. “Artificial intelligence is just a really new learning curve for everyone, so proceed with caution.”





National Research Platform to Democratize AI Computing for Higher Ed



As higher education adapts to artificial intelligence’s impact, colleges and universities face the challenge of affording the computing power necessary to implement AI changes. The National Research Platform (NRP), a federally funded pilot program, is trying to solve that by pooling infrastructure across institutions.

Running large language models or training machine learning systems requires powerful graphics processing units (GPUs) and maintenance by skilled staff, Frank Würthwein, NRP’s executive director and director of the San Diego Supercomputer Center, said. The demand has left institutions either reliant on temporary donations and collaborations with tech companies, or unable to participate at all.

“The moment Google no longer gives it for free, they’re basically stuck,” Würthwein said.


Cloud services like Amazon Web Services and Azure offer these tools, he said, but at a price not every school can afford.

Traditionally, universities have tried to own their own research computing resources, like the supercomputer center at the University of California, San Diego (UCSD). But individual universities rarely operate at the scale needed to make obtaining and maintaining those resources cost-effective.

“Almost nobody has the scale to amortize the staff appropriately,” he said.

Even UCSD has struggled to keep its campus cluster affordable. For Würthwein, scaling up is the answer.

“If I serve a million students, I can provide [AI] services for no more than $10 a year per student,” he said. “To me, that’s free, because if you think about in San Diego, $10 is about a beer.”
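Würthwein's point is simple amortization: a roughly fixed annual operating cost divided across a growing user base. The $10 million figure below is inferred from his $10-per-student-at-one-million-students claim and is an assumption for illustration, not a number stated in the article:

```python
# Hypothetical fixed annual cost, spread over different population sizes.
annual_cost_usd = 10_000_000

for students in (10_000, 100_000, 1_000_000):
    per_student = annual_cost_usd / students
    print(f"{students:>9,} students -> ${per_student:,.0f}/student/year")
```

At a single campus the per-student price is punishing; at national scale the same infrastructure becomes, in his phrase, about the price of a beer.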

A NATIONAL APPROACH

NRP adds another option for acquiring AI computing resources through cross-institutional pooling. Built on the earlier Pacific Research Platform, the NRP organizes a distributed computing system called the Nautilus Hypercluster, in which participating institutions contribute access to servers and GPUs they already own.

Würthwein said that while not every college has spare high-end hardware, many research institutions do, and even smaller campuses often have at least a few machines purchased through grants. These can be federated into NRP’s pool, with NRP providing system management, training and support. He said NRP employs a small, skilled staff that automates basic operations, monitors security and provides example curricula to partner institutions so that campuses don’t need local teams for those tasks.
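The federation model described above can be sketched as a registry of contributed hardware whose capacity is aggregated into one pool. Institution names and GPU counts are invented for illustration; NRP's actual tooling is not described in the article:

```python
from dataclasses import dataclass

@dataclass
class ContributedNode:
    """A server an institution federates into the shared pool."""
    institution: str
    gpus: int

# Hypothetical contributions: campuses pool machines they already own,
# from a research cluster down to a couple of grant-funded workstations.
pool = [
    ContributedNode("Campus A", 64),
    ContributedNode("Campus B", 8),
    ContributedNode("Campus C", 2),
]

total_gpus = sum(node.gpus for node in pool)
contributors = {node.institution for node in pool}
```

The design choice is that contribution, not purchase, grows the cluster: central staff handle operations, security, and training, so even a two-GPU campus can join without hiring its own systems team.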

The result is a distributed cloud supercomputer running on community contributions. According to a March 2025 slide presentation by Seungmin Kim, a researcher from the Yonsei University College of Medicine in Korea, the cluster now includes more than 1,400 GPUs, quadruple the initial National Science Foundation-funded purchase, thanks to contributions from participating campuses.

Since the project’s official launch in March 2023, NRP has onboarded more than 50 colleges and 84 geographic sites, according to Würthwein. NRP’s pilot goal is to reach 100 institutions, but he is already planning for 1,000 colleges after that, which would provide AI access to 1 million students.

To reach these goals, Würthwein said, NRP tries to reach both IT staff who manage infrastructure and faculty who manage curriculum. Regional research and education networks, such as California’s CENIC, connect NRP with campus CIOs, while the Academic Data Science Alliance connects with leaders on the teaching side.

WHAT STUDENTS AND FACULTY SEE

From the user side, the system looks like a one-stop cloud environment. Platforms like JupyterHub and GitLab are preconfigured and ready to use. The platform also hosts collaboration tools for storage, chats and video meetings that are similar to commercial offerings.

Würthwein said the infrastructure is designed so students can log in and run assignments and personalized learning tools that would normally require expensive computing resources.

“At some point … education will be considered subpar if it doesn’t provide that,” he said. “Institutions who have not transitioned to provide education like this, in this individualized fashion for every student, will fundamentally offer a worse product.”

For faculty, the same infrastructure supports research. Classroom usage tends to leave servers idle outside of peak times, leaving capacity for faculty projects. NRP’s model expects institutions to own enough resources to cover classroom needs, but anything unused can be pooled nationally. This could allow even teaching-focused colleges with modest resources to offer AI research experiences previously out of reach.

According to Kim’s presentation, researchers have used the platform to predict the efficiency of gene editing without lab experimentation and to map and detect wildfire patterns.

The system has already enabled collaboration beyond its San Diego campus. At Sonoma State University, faculty are working with a local vineyard to pair the system with drones, robotics and AI for vineyard management, Würthwein said. The overall goal is to bring AI to classroom applications, enhance research, and enable industry collaboration at more higher-education institutions.

“To me, that is the perfect trifecta of positive effects,” he said. “This is ultimately what we’re trying to achieve.”




