
AI Research

Boosting innovation, reshaping mobility: Volkswagen Group invests in AI

“With artificial intelligence, we are igniting the next stage on our path to becoming the global automotive tech driver”, says Hauke Stars, Member of the Board of Management for IT at the Volkswagen Group. “AI is our key to greater speed, quality, and competitiveness – across the entire value chain, from vehicle development to production. Our ambition is to accelerate our development of attractive, innovative vehicles and bring them to our customers faster than ever before. To achieve this, we deploy AI with purpose: scalable, responsible, and with clear industrial benefits. Our ambition: No process without AI.”

Artificial intelligence is already in use across all key business domains of the Volkswagen Group. More than 1,200 AI applications are active throughout the Group today, with several hundred more in development or nearing implementation. In the long term, the Volkswagen Group expects efficiency gains and cost-avoidance opportunities totaling up to four billion euros by 2035 – enabled by the consistent and scalable use of AI across the entire automotive value chain.

Artificial intelligence as a key technology across the entire company

In vehicle development, for example, the Volkswagen Group is building an AI-powered engineering environment together with its partner Dassault Systèmes – for all Group brands and across all regions. It is designed to support engineers through virtual testing and component simulations, significantly accelerating development processes. Alongside other initiatives, this collaboration aims to help shorten the product development cycle for Group brands to 36 months – or less – making it at least 25 percent (around 12 months) faster than today.

AI integration is also advancing in production: Leveraging the Volkswagen Group’s proprietary Digital Production Platform (DPP) – a “factory cloud” now connecting more than 40 sites – Volkswagen is continuously introducing new AI applications into its manufacturing processes. These help optimize the interaction of complex processes in vehicle assembly, contribute to more efficient use of energy and materials, reduce costs, and lower CO₂ emissions.

Moreover, AI-powered applications also strengthen cybersecurity and foster knowledge sharing across the Group – a key factor for digital transformation and the company’s long-term viability.

AI training from shop floor to boardroom

With the WE & AI initiative, the Volkswagen Group launched one of the largest internal education and qualification programs in spring 2024. The ongoing initiative aims to empower employees across all levels of the organization to engage with AI in a responsible and practical way. To date, more than 130,000 employees worldwide have been reached.

Collaboration with European technology and industry partners for industrial AI

The Volkswagen Group aims to further advance the use of artificial intelligence through closer collaboration with technology and industry partners. In this context, the company is currently exploring the potential of a so-called Large Industry Model (LIM) – an industrial AI model based on real manufacturing, design, and process data from voluntarily participating companies.

Collective industrial process knowledge could be used to train an AI model that helps optimize internal workflows and enables more efficient logistics and process control across industries and for all participants. An organizational blueprint for this initiative could be Catena-X – the first open platform for the entire automotive sector and beyond, enabling secure data exchange between manufacturers, suppliers, and technology providers. Founding members include Volkswagen, BMW, BASF, Mercedes-Benz, SAP, Siemens, ZF, and T-Systems.

Volkswagen advocates for innovation-friendly frameworks in the global AI race

Volkswagen is committed to actively shaping the future of AI in Europe and supporting political and economic frameworks at both national and European levels. In an increasingly challenging environment – marked by high energy prices, elevated location costs, and administrative complexity – the company sees a clear need to advance technological innovation in artificial intelligence in Germany and Europe through political support.

Hauke Stars: “We support the innovation-friendly evolution of European regulation. In addition, targeted incentives are needed: We must make more of what we’re capable of. This includes, above all, funding programs that strengthen spin-offs from universities and research institutions and accelerate the transfer of scientific knowledge into market-ready applications.”

Digital sovereignty requires European infrastructure

Technological independence and resilience begin with maintaining control over data – and that only works if data is stored, processed, and protected within Europe. Against this backdrop, the Volkswagen Group has sharpened its strategic focus: the Group-wide private cloud infrastructure will be significantly expanded in the coming years to enable more internal processing of sensitive information. This move is aimed at strengthening the company’s digital resilience against external risks and influences.




AI Research

100x Faster Brain-Inspired AI Model



In the rapidly evolving field of artificial intelligence, a new contender has emerged from China’s research labs, promising to reshape how we think about energy-efficient computing. The SpikingBrain-7B model, developed by the Brain-Inspired Computing Lab (BICLab) at the Chinese Academy of Sciences, represents a bold departure from traditional large language models. Drawing inspiration from the human brain’s neural firing patterns, this system employs spiking neural networks to achieve remarkable efficiency gains. Unlike conventional transformers that guzzle power, SpikingBrain-7B mimics biological neurons, firing only when necessary, which slashes energy consumption dramatically.
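The fire-only-when-necessary behavior can be illustrated with a leaky integrate-and-fire (LIF) neuron, the textbook spiking model. This is a hypothetical sketch for intuition – the threshold, leak factor, and inputs are invented, and it is not code from SpikingBrain-7B:

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: accumulate input, leak over time,
    and emit a spike (1) only when the membrane potential crosses
    the threshold, then reset."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = leak * v + x          # leaky integration of input current
        if v >= threshold:
            spikes.append(1)      # fire
            v = 0.0               # reset after firing
        else:
            spikes.append(0)      # stay silent -> no downstream work
    return spikes

# Weak input rarely crosses the threshold, so most timesteps
# trigger no downstream computation.
print(lif_neuron([0.2, 0.3, 0.6, 0.1, 0.9, 0.05]))  # [0, 0, 1, 0, 0, 0]
```

Because downstream neurons only react to the sparse 1s, most timesteps cost essentially nothing – the source of the energy savings claimed for spiking networks.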

At its core, the model integrates hybrid-linear attention mechanisms and conversion-based training techniques, allowing it to run on domestic MetaX chips without relying on NVIDIA hardware. This innovation addresses a critical bottleneck in AI deployment: the high energy demands of training and inference. According to a technical report published on arXiv, the SpikingBrain series, including the 7B and 76B variants, demonstrates over 100 times faster first-token generation at long sequence lengths, making it ideal for edge devices in industrial control and mobile applications.
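The fast first-token generation at long sequence lengths is consistent with the general idea behind linear attention: the n × n attention matrix is folded into a fixed-size running state, so cost grows linearly rather than quadratically with sequence length. A generic kernelized sketch follows – the feature map and dimensions are illustrative assumptions, not SpikingBrain's actual hybrid mechanism:

```python
import numpy as np

def linear_attention(Q, K, V):
    """Kernelized linear attention with positive feature map
    phi(x) = elu(x) + 1. Keys and values are compressed into a
    fixed-size summary state S and normalizer z, so there is no
    n x n attention matrix."""
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x)+1, always > 0
    Qf, Kf = phi(Q), phi(K)
    S = Kf.T @ V                        # (d, d_v) summary state
    z = Kf.sum(axis=0)                  # (d,) normalizer state
    return (Qf @ S) / (Qf @ z)[:, None]

rng = np.random.default_rng(0)
n, d = 6, 4
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
out = linear_attention(Q, K, V)
print(out.shape)  # (6, 4)
```

Because S and z have fixed size regardless of n, generation can carry them forward as a recurrent state instead of re-attending over the whole context – which is why long-context first-token latency drops so sharply in architectures of this family.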

Breaking Away from Transformer Dominance

The genesis of SpikingBrain-7B can be traced to BICLab’s GitHub repository, where the open-source code reveals a sophisticated architecture blending spiking neurons with large-scale model training. Researchers at the lab, led by figures like Guoqi Li and Bo Xu, have optimized for non-NVIDIA clusters, overcoming challenges in parallel training and communication overhead. This approach not only enhances stability but also paves the way for neuromorphic hardware that prioritizes energy optimization over raw compute power.

Recent coverage in Xinhua News highlights how SpikingBrain-1.0, the foundational system, breaks from mainstream models like ChatGPT by using spiking networks instead of dense computations. This brain-inspired paradigm allows the model to train on just a fraction of the data typically required—reports suggest as little as 2%—while matching or exceeding transformer performance in benchmarks.

Efficiency Gains and Real-World Applications

Delving deeper, the model’s spiking mechanism enables asynchronous processing, akin to how the brain handles information dynamically. This is detailed in the arXiv report, which outlines a roadmap for next-generation hardware that could integrate seamlessly into sectors like healthcare and transportation. For instance, in robotics, SpikingBrain’s low-power profile supports real-time decision-making without the need for massive data centers.
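One concrete reason spike-driven processing is cheap: with binary spike inputs, a layer's multiply-accumulate collapses into summing only the weight rows of the neurons that fired, and silent neurons are skipped entirely. A toy sketch (the weights and spike pattern are invented for illustration):

```python
import numpy as np

def dense_layer(x, W):
    """Conventional layer: a multiply-accumulate for every input."""
    return x @ W

def spike_driven_layer(spikes, W):
    """With 0/1 spike inputs, the layer reduces to gathering and
    summing the weight rows of neurons that fired -- additions only."""
    active = np.flatnonzero(spikes)   # indices of neurons that fired
    return W[active].sum(axis=0)

W = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
spikes = np.array([1, 0, 1])                  # sparse activity
print(spike_driven_layer(spikes, W))          # [6. 8.]
print(dense_layer(spikes.astype(float), W))   # same result: [6. 8.]
```

The two paths agree, but the spike-driven one performs no multiplications and touches only the active rows – the property neuromorphic hardware exploits for low-power, real-time workloads.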

Posts on X (formerly Twitter) from AI enthusiasts, such as those praising its 100x speedups, reflect growing excitement. Users have noted how the model’s hierarchical processing mirrors neuroscience findings, with emergent brain-like patterns in its structure. This sentiment aligns with broader neuromorphic computing trends, as seen in a Nature Communications Engineering article on advances in robotic vision, where spiking networks enable efficient AI in constrained environments.

Challenges and Future Prospects

Despite its promise, deploying SpikingBrain-7B isn’t without hurdles. The arXiv paper candidly discusses adaptations needed for CUDA and Triton operators in hybrid attention setups, underscoring the technical feats involved. Moreover, training on MetaX clusters required custom optimizations to handle long-sequence topologies, a feat that positions China at the forefront of independent AI innovation amid global chip restrictions.

In industry circles, this development is seen as a catalyst for shifting AI paradigms. A NotebookCheck report emphasizes its potential for up to 100x performance boosts over conventional systems, fueling discussions on sustainable AI. As neuromorphic computing gains traction, SpikingBrain-7B could inspire a wave of brain-mimicking models, reducing the environmental footprint of AI while expanding its reach to everyday devices.

Implications for Global AI Research

Beyond technical specs, the open-sourcing of SpikingBrain-7B via GitHub invites global collaboration, with the repository already garnering attention for its spike-driven transformer implementations. This mirrors earlier BICLab projects like Spike-Driven-Transformer-V2, building a continuum of research toward energy-efficient intelligence.

Looking ahead, experts anticipate integrations with emerging hardware, as outlined in PMC’s coverage of spike-based dynamic computing. With SpikingBrain’s bilingual capabilities and industry validations, it stands as a testament to how bio-inspired designs can democratize AI, challenging Western dominance and fostering a more inclusive technological future.




AI Research

Exclusive | Cyberport may use Chinese GPUs at Hong Kong supercomputing hub to cut reliance on Nvidia



Cyberport may add some graphics processing units (GPUs) made in China to its Artificial Intelligence Supercomputing Centre in Hong Kong, as the government-run incubator seeks to reduce its reliance on Nvidia chips amid worsening China-US relations, its chief executive said.

Cyberport has bought four GPUs made by four different mainland Chinese chipmakers and has been testing them at its AI lab to gauge which ones to adopt in the expanding facilities, Rocky Cheng Chung-ngam said in an interview with the Post on Friday. The park has been weighing the use of Chinese GPUs since it first began installing Nvidia chips last year, he said.

“At that time, China-US relations were already quite strained, so relying solely on [Nvidia] was no longer an option,” Cheng said. “That is why we felt that for any new procurement, we should in any case include some from the mainland.”

Cyberport’s AI supercomputing centre, established in December with its first phase offering 1,300 petaflops of computing power, will deliver another 1,700 petaflops by the end of this year, with all 3,000 petaflops currently relying on Nvidia’s H800 chips, he added.

Cyberport CEO Rocky Cheng Chung-ngam on September 12, 2025. Photo: Jonathan Wong

As all four Chinese solutions offer similar performance, Cyberport would take cost into account when deciding which ones to order, said Cheng, who declined to name the suppliers.




AI Research

Why do AI chatbots use so much energy?



In recent years, ChatGPT has exploded in popularity, with nearly 200 million users pumping a total of over a billion prompts into the app every day. These prompts may seem to be answered out of thin air.

But behind the scenes, artificial intelligence (AI) chatbots are using a massive amount of energy. In 2023, data centers, which are used to train and process AI, were responsible for 4.4% of electricity use in the United States. Across the world, these centers make up around 1.5% of global energy consumption. These numbers are expected to skyrocket, at least doubling by 2030 as the demand for AI grows.


