
Protecting the Grid with Artificial Intelligence



Newswise — ALBUQUERQUE, N.M. — The electric grid powers everything from traffic lights to pharmacy fridges. However, it regularly faces threats from severe storms and advanced attackers.

Researchers at Sandia National Laboratories have developed brain-inspired AI algorithms that can detect physical problems, cyberattacks or both at the same time within the grid. And this neural-network AI can run on inexpensive single-board computers or on existing smart grid devices.

“As more disturbances occur, whether from extreme weather or from cyberattacks, the most important thing is that operators maintain the function and reliability of the grid,” said Shamina Hossain-McKenzie, a cybersecurity expert and leader of the project. “Our technology will allow the operators to detect any issues faster so that they can mitigate them faster with AI.”

The importance of cyber-physical protection

As the nation adds more smart controls and devices to the grid, it becomes more flexible and autonomous but also more vulnerable to cyberattacks and cyber-physical attacks. Cyber-physical attacks use communications networks or other cyber systems to disrupt or control a physical system such as the electric grid. Potentially vulnerable equipment includes smart inverters, which convert the direct current produced by solar panels and wind turbines into the alternating current used by the grid, and network switches that provide secure communication for grid operators, said Adrian Chavez, a cybersecurity expert involved in the project. Because the neural network can run on single-board computers or existing smart grid devices, it can protect older equipment as well as the latest equipment, which lacks only cyber-physical coordination, Hossain-McKenzie said.

“To make the technology more accessible and feasible to deploy, we wanted to make sure our solution was scalable, portable and cost-efficient,” Chavez said.

The package of code works at the local, enclave and global levels. At the local level, the code monitors for abnormalities at the specific device where it is installed. At the enclave level, devices in the same network share data and alerts to provide the operator with better information on whether the issue is localized or happening in multiple places, Hossain-McKenzie said. At the global level, only results and alerts are shared between systems owned by different operators. That way operators can get early alerts of cyberattacks or physical issues their neighbors are seeing but protect proprietary information.
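
To make the tiered design concrete, here is a rough Python sketch of how such local, enclave and global alert handling could be structured. It only illustrates the idea that raw measurements stay on the device while enclaves pool alerts and only summaries cross operator boundaries; the class names and fields are hypothetical, not Sandia's code.

```python
# Hypothetical sketch of the three-tier alert flow described above (not Sandia's code):
# devices raise local alerts, an enclave pools alerts from devices on the same network,
# and only summaries -- never raw measurements -- are shared at the global level.
from dataclasses import dataclass, field

@dataclass
class Alert:
    device_id: str
    kind: str          # "physical", "cyber" or "both"
    severity: float    # 0.0 (benign) to 1.0 (critical)

@dataclass
class Enclave:
    name: str
    alerts: list = field(default_factory=list)

    def report(self, alert: Alert) -> None:
        """Local level: a device forwards its alert to its enclave."""
        self.alerts.append(alert)

    def summary(self) -> dict:
        """Enclave level: share counts and worst severity, not device-level data."""
        return {
            "enclave": self.name,
            "alert_count": len(self.alerts),
            "max_severity": max((a.severity for a in self.alerts), default=0.0),
            "widespread": len({a.device_id for a in self.alerts}) > 1,
        }

def global_exchange(enclaves: list) -> list:
    """Global level: a cross-operator view built only from enclave summaries."""
    return [e.summary() for e in enclaves]

substation_a = Enclave("substation-A")
substation_a.report(Alert("inverter-3", "cyber", 0.8))
substation_a.report(Alert("switch-1", "both", 0.9))
print(global_exchange([substation_a, Enclave("substation-B")]))
```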

The Sandia team collaborated with experts at Texas A&M University to create secure communication methods, particularly between grids owned by different companies, Hossain-McKenzie said.

Developing the neural network

The biggest challenge in detecting cyber-physical attacks is combining the constant stream of physical data with intermittent packets of cyber data, said Logan Blakely, a computer science expert who led development of the AI components.

Physical data such as the frequency, voltage and current of the grid is reported 60 times a second, while cyber data such as other traffic on the network is more sporadic, Blakely said. The team used data fusion to extract the important signals in the two different kinds of data. The collaborators from Texas A&M University were key to this effort, he added.
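
As an illustration of what fusing these two data streams can involve, the sketch below bins a 60-samples-per-second physical stream and sporadic cyber events into shared one-second windows, producing one combined feature vector per window. The window length and feature choices are assumptions made for this example, not the team's actual method.

```python
# Illustrative data fusion (assumed windowing and features, not the team's method):
# align a 60 Hz physical stream with sporadic cyber events by summarizing both
# over common one-second windows, yielding one fused feature vector per window.
import numpy as np

def fuse(physical, cyber_events, rate_hz=60, window_s=1.0):
    """physical: array of shape (n_samples, n_channels) sampled at rate_hz.
    cyber_events: list of (timestamp_seconds, packet_bytes), irregularly spaced."""
    n_windows = len(physical) // rate_hz
    fused = []
    for w in range(n_windows):
        phys = physical[w * rate_hz:(w + 1) * rate_hz]
        # Summarize the physical stream in this window: per-channel mean and spread.
        phys_feats = np.concatenate([phys.mean(axis=0), phys.std(axis=0)])
        # Summarize cyber activity in the same window: packet count and total bytes.
        in_window = [b for t, b in cyber_events if w * window_s <= t < (w + 1) * window_s]
        cyber_feats = np.array([len(in_window), float(sum(in_window))])
        fused.append(np.concatenate([phys_feats, cyber_feats]))
    return np.array(fused)

# Example: 10 seconds of two-channel physical data plus a handful of packets.
physical = np.random.randn(600, 2)
events = [(0.4, 120), (0.5, 64), (3.2, 1500), (7.9, 80)]
print(fuse(physical, events).shape)  # (10, 6): 4 physical + 2 cyber features per window
```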

Then the team used an autoencoder neural network, which classifies the combined data to determine whether it fits with the pattern of normal behavior or if there are abnormalities with the cyber data, physical data or both, Hossain-McKenzie said. For example, an increase in network traffic could indicate a denial-of-service attack while a false-data-injection attack could include atypical physical and cyber data, Chavez said.

Unlike many other kinds of AI, autoencoder neural networks do not need to be trained on data labeled with every type of issue that may show up, Blakely said. Instead, the network only needs copious amounts of data from normal operations for training.
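
A minimal sketch of this style of anomaly detection follows, using a small PyTorch autoencoder trained only on normal-operation data and flagging samples whose reconstruction error exceeds a threshold. The feature count and layer sizes are assumptions for illustration; this shows the general technique, not Sandia's implementation.

```python
# Minimal autoencoder anomaly-detection sketch (PyTorch), illustrating the general
# technique only: train on normal-operation feature vectors, then flag samples whose
# reconstruction error is unusually high. Feature count and layer sizes are assumed.
import torch
import torch.nn as nn

N_FEATURES = 8  # e.g. fused physical + cyber features per window (hypothetical)

class Autoencoder(nn.Module):
    def __init__(self, n_features: int, latent_dim: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(),
                                     nn.Linear(16, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(),
                                     nn.Linear(16, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, normal_data, epochs=200, lr=1e-3):
    """Training needs only normal data; no labeled examples of attacks or faults."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(normal_data), normal_data)  # learn to reconstruct normal behavior
        loss.backward()
        opt.step()
    return model

def detect(model, samples, threshold):
    """Flag samples the model cannot reconstruct well, i.e. likely abnormalities."""
    with torch.no_grad():
        err = ((model(samples) - samples) ** 2).mean(dim=1)
    return err > threshold

normal = torch.randn(1000, N_FEATURES) * 0.1                 # stand-in "normal operations"
model = train(Autoencoder(N_FEATURES), normal)
with torch.no_grad():
    errors = ((model(normal) - normal) ** 2).mean(dim=1)
threshold = torch.quantile(errors, 0.99).item()              # tolerate ~1% false alarms
print(detect(model, torch.randn(5, N_FEATURES) * 2.0, threshold))
```

In a setup like this, a burst of network traffic from a denial-of-service attempt, or a false-data injection that skews both physical and cyber features, would tend to reconstruct poorly and push its error above the threshold.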

The use of an autoencoder neural network makes the package pretty much plug-and-play, Hossain-McKenzie added.

Putting the code to the test

Once the team constructed the autoencoder neural network, they put it to the test in three different ways.

First, they tested the autoencoder in an emulation environment, which includes computer models of the communication-and-control system used to monitor the grid and a physics-based model of the grid itself, Hossain-McKenzie said. The team used this environment to model a variety of cyberattacks or physical disruptions, and to provide normal operational data for the AI to train on. The collaborators from Texas A&M University assisted with the emulation testing.

Then the team incorporated the autoencoder onto single-board computer prototypes that were tested in a hardware-in-the-loop environment, Hossain-McKenzie said. In hardware-in-the-loop testing, researchers connect a real piece of hardware to software that simulates various attack scenarios or disruptions. When the autoencoder runs on a single-board computer, it can read the data and execute the algorithms faster than a virtual implementation of the autoencoder can in an emulation environment, Chavez said. Generally, hardware implementations are a hundred to a thousand times faster than software implementations, he added.

The team is working with Sierra Nevada Corporation to test how Sandia’s autoencoder AI works on the company’s existing cybersecurity device called Binary Armor, Hossain-McKenzie said.

“This will give a really great proof-of-concept on how the technology can be flexibly implemented on an existing grid security ecosystem,” she said.

The team is testing both formats — single-board prototypes interfaced with the grid and the AI package on existing devices — in the real world at the Public Service Company of New Mexico’s (PNM) Prosperity solar farm as part of a Cooperative Research and Development Agreement, Hossain-McKenzie said. These tests began last summer, Chavez said.

“There’s nothing like going to an actual field site,” Chavez said. “Having the ability to see realistic traffic is a really great way to get a ground-truth of how this technology performs in the real world.”

The team also worked with PNM early in the project to learn what AI design would be most useful for grid operators. It was during conversations with PNM staff that the Sandia team identified the need to connect cyber defenders with system operators rapidly and automatically.

Future directions

This project built on and expanded a previous R&D 100 Award-winning project, the Proactive Intrusion Detection and Mitigation System, which focused on detecting and responding to cyber intrusions in smart inverters for solar panels, Hossain-McKenzie said. The team is also expanding upon the autoencoder AI in similar projects, she added.

The team filed a patent on the autoencoder AI and is looking for corporate partners to deploy and hone the technology in the real world, Hossain-McKenzie said.

With a bit more work, the autoencoder could be used to protect other critical infrastructure systems such as water and natural gas distribution systems, factories, even data centers, Chavez said.

“Whether or not our technology succeeds in the market, every utility around the world is going to need a solution to this problem,” Blakely said. “This is a fascinating area to do research in because one way or another, everyone is going to have to solve the problem of cyber-physical data fusion.”

The project is funded by Sandia’s Laboratory Directed Research and Development program.






South Korean regulator to adopt AI in competition enforcement | MLex



By Wooyoung Lee (September 15, 2025, 05:43 GMT | Insight) — South Korea’s competition watchdog has set up a taskforce dedicated to adopting artificial intelligence in its enforcement and administrative work, aiming to expedite case handling, detect unreported mergers and strengthen oversight of unfair practices. …





Tech from China could take the ‘stealth’ out of stealth subs using Artificial Intelligence, magnetic wake detection



Submarines were once considered the stealthiest assets of any navy. Not anymore. Studies from China suggest that new technology can defeat the stealth that makes submarines such powerful war machines. These innovations for detecting underwater vessels could change the face of naval warfare. Artificial intelligence and magnetic wake detection are among the methods being used to achieve this. Here is what you should know.

China is developing submarine detection technologies using AI. How it works

The studies from China suggest that subs could be highly vulnerable to artificial intelligence (AI) and magnetic field detection technologies, as reported by the South China Morning Post.


In a study published in August, a team led by Meng Hao from the China Helicopter Research and Development Institute revealed an AI-powered anti-submarine warfare (ASW) system. The technology is being touted as the first of its kind, using AI to enable automated decision-making in detecting submarines.

As per the study published in the journal Electronics Optics & Control, the ASW system mimics a smart battlefield commander, integrating real-time data from sonar buoys, radar, underwater sensors, and ocean conditions like temperature and salinity.

Powered by AI, the system can autonomously analyse and adapt, slashing a submarine’s escape chances to just 5 per cent.

This would mean only one in 20 submarines could evade detection and attack.

This will be a significant shift in naval warfare, with researchers warning that the “invisible” submarine era is ending.

Stealth may soon be an impossible feat, Meng’s team said.

China can track US submarines via ‘magnetic wakes’

In December last year, scientists from Northwestern Polytechnical University (NPU) in Xi’an revealed a novel method for tracking submarines via ‘magnetic wakes’.

The study, led by Associate Professor Wang Honglei, models how submarines generate faint magnetic fields as they disturb seawater, creating ‘Kelvin wakes’.

These wakes linger long after the vessel has passed, leaving “footprints in the ocean’s magnetic fabric,” said the study, published in the Journal of Harbin Engineering University on December 4.

For example, a Seawolf-class submarine travelling at 24 knots at a depth of 30 metres generates a magnetic field of 10⁻¹² tesla—detectable by existing airborne magnetometers.

This method exploits a critical vulnerability in submarines: the Kelvin wakes ‘cannot be silenced,’ Wang’s team said.

This is in contrast to acoustic, or sound-based, detection, which submarines can counter with sound-dampening technologies.

Together, the studies suggest that AI and magnetic detection could soon make submarine stealth a thing of the past.





Rethinking the AI Race | The Regulatory Review



Openness in AI models is not the same as freedom.

In 2016, Noam Chomsky, the father of modern linguistics, published the book Who Rules the World?, referring to the United States’ dominance in global affairs. Today, policymakers such as U.S. President Donald J. Trump argue that whoever wins the artificial intelligence (AI) race will rule the world, driven by a relentless, borderless competition for technological supremacy. One strategy gaining traction is open-source AI. But is it advisable? The short answer, I believe, is no.

Closed-source and open-source represent the two main paradigms in software, and AI software is no exception. While closed-source refers to proprietary software with restricted use, open-source software typically involves making the underlying source code publicly available, allowing unrestricted use, including the ability to modify the code and develop new applications.

AI is impacting virtually every industry, and AI startups have proliferated nonstop in recent years. OpenAI secured a multi-billion-dollar investment from Microsoft, while Anthropic has attracted significant investments from Amazon and Google. These companies are currently leading the AI race with closed-source models, a strategy aimed at maintaining proprietary control and addressing safety concerns.

But open-source models have consistently driven innovation and competition in software. Linux, one of the most successful open-source operating systems ever, is pivotal in the computer industry: Google Android, which is used in approximately 70 percent of smartphones worldwide, Amazon Web Services, Microsoft Azure and all of the world’s top 500 supercomputers run on Linux. The success story of open-source software naturally fuels enthusiasm for open-source AI software. And behind the scenes, companies such as Meta are developing open-source AI initiatives to promote the democratization and growth of AI through a joint effort.

Mark Zuckerberg, in promoting an open-source model for AI, recalled the story of the Linux open-source operating system. Linux became “the industry standard foundation for both cloud computing and the operating systems that run most mobile devices—and we all benefit from superior products because of it.”

But the story of Linux is quite different from Meta’s “open-source” AI project, Llama. First and foremost, no universally accepted definition of open-source AI exists. Second, Linux had no “Big Tech” corporation behind it. Its success was made possible by the free software movement, led by American activist and programmer Richard Stallman, who created the GNU General Public License (GPL) to ensure software freedom. The GPL allowed for the free distribution and collaborative development of essential software, most notably the Linux open-source operating system, developed by Finnish programmer Linus Torvalds. Linux has become the foundation for numerous open-source operating systems, developed by a global community that has fostered a culture of openness, decentralization, and user control. Llama is not distributed under a GPL.

Under the Llama 4 licensing agreement, entities with more than 700 million monthly active users in the preceding calendar month must obtain a license from Meta, “which Meta may grant to you in its sole discretion” before using the model. Moreover, algorithms powering large AI models rely on vast amounts of data to function effectively. Meta, however, does not make its training data publicly available.

Thus, can we really call it open source?

Most importantly, AI presents fundamentally different and more complex challenges than traditional software, with the primary concern being safety. Traditional algorithms are predictable; we know the inputs and outputs. Consider the Euclidean algorithm, which provides an efficient way for computing the greatest common divisor of two integers. Conversely, AI algorithms are typically unpredictable because they leverage a large amount of data to build models, which are becoming increasingly sophisticated.
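
For example, the Euclidean algorithm can be written in a few lines, and its behavior is fully determined by its inputs:

```python
# The Euclidean algorithm: given the same two integers, it always takes the same
# steps and returns the same greatest common divisor.
def gcd(a: int, b: int) -> int:
    while b:
        a, b = b, a % b  # replace (a, b) with (b, a mod b) until the remainder is 0
    return abs(a)

print(gcd(48, 18))  # 6
```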

Deep learning algorithms, which underlie large language models such as ChatGPT and other well-known AI applications, rely on increasingly complex structures that make AI outputs virtually impossible to interpret or explain. Large language models are performing increasingly well, but would you trust something that you cannot fully interpret and understand? Open-source AI, rather than offering a solution, may be amplifying the problem. Although it is often seen as a tool to promote democratization and technological progress, open source in AI increasingly resembles a Ferrari engine with no brakes.

Like cars, computers and software are powerful technologies—but as with any technology, AI can cause harm if misused or deployed without a proper understanding of the risks. Currently, we do not know what AI can and cannot do. Competition is important, and open-source software has been a key driver of technological progress, providing the foundation for widely used technologies such as Android smartphones and web infrastructure. It has been, and continues to be, a key paradigm for competition, especially in a digital framework.

Is AI different because we do not know how to stop this technology if required? Free speech, free society, and free software are all appealing concepts, but let us do better than that. In the 18th century, French philosopher Baron de Montesquieu argued that “Liberty is the right to do everything the law permits.” Rather than promoting openness and competition at any cost to rule the world, liberty in AI seems to require a calibrated legal framework that balances innovation and safety.


