
AI Research

Energy-Efficient NPU Technology Cuts AI Power Use by 44%

Researchers at the Korea Advanced Institute of Science and Technology (KAIST) have developed energy-efficient NPU technology that demonstrates substantial performance improvements in laboratory testing.

In controlled experiments, their specialised AI chip ran AI models 60% faster while using 44% less electricity than the graphics cards currently powering most AI systems.

The research, led by Professor Jongse Park from KAIST’s School of Computing in collaboration with HyperAccel Inc., addresses one of the most pressing challenges in modern AI infrastructure: the enormous energy and hardware requirements of large-scale generative AI models.

Current systems such as OpenAI’s GPT-4 and Google’s Gemini 2.5 demand not only high memory bandwidth but also substantial memory capacity, driving companies like Microsoft and Google to purchase hundreds of thousands of NVIDIA GPUs.

The memory bottleneck challenge

The core innovation lies in the team’s approach to solving the memory bottleneck issues that plague existing AI infrastructure. Their energy-efficient NPU technology focuses on making the inference process lightweight while minimising accuracy loss, a critical balance that has proven difficult for previous solutions.

PhD student Minsu Kim and Dr Seongmin Hong from HyperAccel Inc., serving as co-first authors, presented their findings at the 2025 International Symposium on Computer Architecture (ISCA 2025) in Tokyo. The research paper, titled “Oaken: Fast and Efficient LLM Serving with Online-Offline Hybrid KV Cache Quantization,” details their comprehensive approach to the problem.

The technology centres on KV cache quantisation, which the researchers identify as accounting for most memory usage in generative AI systems. By optimising this component, the team enables the same level of AI infrastructure performance using fewer NPU devices compared to traditional GPU-based systems.
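To see why the KV cache dominates memory, a back-of-the-envelope sizing calculation helps. The model dimensions and bit-widths below are illustrative round numbers, not figures from the Oaken paper:

```python
# Rough KV-cache sizing arithmetic (illustrative numbers, not from the paper).
def kv_cache_bytes(layers, heads, head_dim, seq_len, batch, bytes_per_val):
    # K and V each store one vector per layer, head, and token position.
    return 2 * layers * heads * head_dim * seq_len * batch * bytes_per_val

fp16 = kv_cache_bytes(layers=32, heads=32, head_dim=128,
                      seq_len=4096, batch=8, bytes_per_val=2)
int4 = kv_cache_bytes(layers=32, heads=32, head_dim=128,
                      seq_len=4096, batch=8, bytes_per_val=0.5)
print(f"fp16 KV cache: {fp16 / 2**30:.1f} GiB")   # prints: fp16 KV cache: 16.0 GiB
print(f"4-bit KV cache: {int4 / 2**30:.1f} GiB")  # prints: 4-bit KV cache: 4.0 GiB
```

At these (hypothetical) dimensions the cache shrinks from 16 GiB to 4 GiB, which is why quantising the KV cache lets the same workload fit on fewer devices.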

Technical innovation and architecture

The KAIST team’s energy-efficient NPU technology employs a three-pronged quantisation algorithm: threshold-based online-offline hybrid quantisation, group-shift quantisation, and fused dense-and-sparse encoding. This approach allows the system to integrate with existing memory interfaces without requiring changes to operational logic in current NPU architectures.
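The article does not reproduce the paper’s exact algorithms, but the general idea behind threshold-based hybrid quantisation with sparse outlier handling can be illustrated with a toy NumPy sketch. The thresholds, bit-width, and function names here are hypothetical, chosen only to show the inlier/outlier split:

```python
import numpy as np

def hybrid_quantize(x, threshold, bits=4):
    """Toy sketch: values within an offline-calibrated threshold are
    quantised online to `bits`; outliers are kept aside at full precision
    (the 'dense-and-sparse' split)."""
    outlier_mask = np.abs(x) > threshold
    inliers = np.where(outlier_mask, 0.0, x)
    scale = threshold / (2 ** (bits - 1) - 1)
    q = np.clip(np.round(inliers / scale),
                -(2 ** (bits - 1)), 2 ** (bits - 1) - 1).astype(np.int8)
    outliers = x[outlier_mask]  # stored sparsely, full precision
    return q, scale, outlier_mask, outliers

def dequantize(q, scale, outlier_mask, outliers):
    x = q.astype(np.float32) * scale
    x[outlier_mask] = outliers  # restore outliers exactly
    return x
```

Inliers incur at most half a quantisation step of error, while the rare large-magnitude outliers that would otherwise wreck accuracy are preserved exactly; the real system fuses this encoding into the memory path rather than running it in software.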

The hardware architecture incorporates page-level memory management techniques for efficient utilisation of limited memory bandwidth and capacity. Additionally, the team introduced new encoding techniques specifically optimised for quantised KV cache, addressing the unique requirements of their approach.
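The article does not detail KAIST’s page-level scheme, but the general mechanism of page-level KV-cache management can be sketched generically: token storage is allocated in fixed-size physical pages on demand and returned to a free pool when a sequence finishes, avoiding large contiguous reservations. The class below is a hypothetical illustration, not the paper’s design:

```python
class PagePool:
    """Toy page-level KV-cache allocator (illustrative, not the paper's design)."""
    def __init__(self, num_pages, page_size):
        self.page_size = page_size
        self.free = list(range(num_pages))
        self.tables = {}   # sequence id -> list of physical page ids
        self.lengths = {}  # sequence id -> tokens stored

    def append_token(self, seq_id):
        n = self.lengths.get(seq_id, 0)
        if n % self.page_size == 0:  # current page full: grab a new one
            if not self.free:
                raise MemoryError("page pool exhausted")
            self.tables.setdefault(seq_id, []).append(self.free.pop())
        self.lengths[seq_id] = n + 1

    def release(self, seq_id):
        # Finished sequences return their pages to the pool immediately.
        self.free.extend(self.tables.pop(seq_id, []))
        self.lengths.pop(seq_id, None)
```

Because memory is handed out one page at a time, utilisation tracks actual sequence lengths instead of worst-case context size, which is the property the hardware exploits under limited bandwidth and capacity.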

“This research, through joint work with HyperAccel Inc., found a solution in generative AI inference light-weighting algorithms and succeeded in developing a core NPU technology that can solve the memory problem,” Professor Park explained. 

“Through this technology, we implemented an NPU with over 60% improved performance compared to the latest GPUs by combining quantisation techniques that reduce memory requirements while maintaining inference accuracy.”

Sustainability implications

The environmental impact of AI infrastructure has become a growing concern as generative AI adoption accelerates. The energy-efficient NPU technology developed by KAIST offers a potential path toward more sustainable AI operations. 

With 44% lower power consumption compared to current GPU solutions, widespread adoption could significantly reduce the carbon footprint of AI cloud services. However, the technology’s real-world impact will depend on several factors, including manufacturing scalability, cost-effectiveness, and industry adoption rates. 

The researchers acknowledge that their solution represents a significant step forward, but widespread implementation will require continued development and industry collaboration.

Industry context and future outlook

The timing of this energy-efficient NPU technology breakthrough is particularly relevant as AI companies face increasing pressure to balance performance with sustainability. The current GPU-dominated market has created supply chain constraints and elevated costs, making alternative solutions increasingly attractive.

Professor Park noted that the technology “has demonstrated the possibility of implementing high-performance, low-power infrastructure specialised for generative AI, and is expected to play a key role not only in AI cloud data centres but also in the AI transformation (AX) environment represented by dynamic, executable AI such as agentic AI.”

The research represents a significant step toward more sustainable AI infrastructure, but its ultimate impact will be determined by how effectively it can be scaled and deployed in commercial environments. As the AI industry continues to grapple with energy consumption concerns, innovations like KAIST’s energy-efficient NPU technology offer hope for a more sustainable future in artificial intelligence computing.

(Photo by Korea Advanced Institute of Science and Technology)





Indonesia on Track to Achieve Sovereign AI Goals With NVIDIA, Cisco and IOH

As one of the world’s largest emerging markets, Indonesia is making strides toward its “Golden 2045 Vision” — an initiative tapping digital technologies and bringing together government, enterprises, startups and higher education to enhance productivity, efficiency and innovation across industries.

Building out the nation’s AI infrastructure is a crucial part of this plan.

That’s why Indonesian telecommunications leader Indosat Ooredoo Hutchison, aka Indosat or IOH, has partnered with Cisco and NVIDIA to support the establishment of Indonesia’s AI Center of Excellence (CoE). Led by the Ministry of Communications and Digital Affairs, known as Komdigi, the CoE aims to advance secure technologies, cultivate local talent and foster innovation through collaboration with startups.

Indosat Ooredoo Hutchison President Director and CEO Vikram Sinha, Cisco Chair and CEO Chuck Robbins and NVIDIA Senior Vice President of Telecom Ronnie Vasishta today detailed the purpose and potential of the CoE during a fireside chat at Indonesia AI Day, a conference focused on how artificial intelligence can fuel the nation’s digital independence and economic growth.

As part of the CoE, a new NVIDIA AI Technology Center will offer research support, NVIDIA Inception program benefits for eligible startups, and NVIDIA Deep Learning Institute training and certification to upskill local talent.

“With the support of global partners, we’re accelerating Indonesia’s path to economic growth by ensuring Indonesians are not just users of AI, but creators and innovators,” Sinha added.

“The AI era demands fundamental architectural shifts and a workforce with digital skills to thrive,” Robbins said. “Together with Indosat, NVIDIA and Komdigi, Cisco will securely power the AI Center of Excellence — enabling innovation and skills development, and accelerating Indonesia’s growth.”

“Democratizing AI is more important than ever,” Vasishta added. “Through the new NVIDIA AI Technology Center, we’re helping Indonesia build a sustainable AI ecosystem that can serve as a model for nations looking to harness AI for innovation and economic growth.”

Making AI More Accessible

The Indonesia AI CoE will comprise an AI factory that features full-stack NVIDIA AI infrastructure — including NVIDIA Blackwell GPUs, NVIDIA Cloud Partner reference architectures and NVIDIA AI Enterprise software — as well as an intelligent security system powered by Cisco.

Called the Sovereign Security Operations Center Cloud Platform, the Cisco-powered system combines AI-based threat detection, localized data control and managed security services for the AI factory.

Building on the sovereign AI initiatives Indonesia’s technology leaders announced with NVIDIA last year, the CoE will bolster the nation’s AI strategy through four core pillars:

1) Sovereign Infrastructure: Establishing AI infrastructure for secure, scalable, high-performance AI workloads tailored to Indonesia’s digital ambitions.

2) Secure AI Workloads: Using Cisco’s intelligent infrastructure to connect and safeguard the nation’s digital assets and intellectual property.

3) AI for All: Giving hundreds of millions of Indonesians access to AI by 2027, breaking down geographical barriers and empowering developers across the nation.

4) Talent and Development Ecosystem: Aiming to equip 1 million people with digital skills in networking, security and AI by 2027.

Some 28 independent software vendors and startups are already using IOH’s NVIDIA-powered AI infrastructure to develop cutting-edge technologies that can speed and ease workflows across higher education and research, food security, bureaucratic reform, smart cities and mobility, and healthcare.

With Indosat’s coverage across the archipelago, the company can reach hundreds of millions of Bahasa Indonesian speakers with its large language model (LLM)-powered applications.

For example, using Indosat’s Sahabat-AI collection of Bahasa Indonesian LLMs, the Indonesian government and Hippocratic AI are collaborating to develop an AI agent system that provides preventative outreach, such as helping women subscribers over the age of 50 schedule mammograms, supporting earlier detection of breast cancer and other health complications across the population.

Separately, Sahabat-AI also enables Indosat’s AI chatbot to answer queries in the Indonesian language for various citizen and resident services. A person could ask about processes for updating their national identification card, as well as about tax rates, payment procedures, deductions and more.

In addition, a government-led forum is developing trustworthy AI frameworks tailored to Indonesian values for the safe, responsible development of artificial intelligence and related policies.

Looking forward, Indosat and NVIDIA plan to deploy AI-RAN technologies that can reach even broader audiences using AI over wireless networks.

Learn more about NVIDIA-powered AI infrastructure for telcos.




Silicon Valley eyes a governance-lite gold rush


Andreessen Horowitz has had enough of Delaware and is moving a unit’s incorporation out west




Artificially intelligent: Does it matter if ChatGPT can’t think? – AFR



