LG bets on optimized training, industry-tailored applications to give Korea competitive AI edge
LG AI Research co-heads, Lee Hong-lak (left) and Lim Woo-hyung, speak during an interview with The Korea Herald at the AI lab’s headquarters in Seoul on Thursday. (LG AI Research)
As artificial intelligence rewrites the rules of global competition, South Korea faces a moment of truth. At the center of this race is LG AI Research, led by two of the country’s top AI strategists, determined to secure Korea’s place among the world’s technological powers.
“It’s not just a lofty ambition,” says Lim Woo-hyung, co-head of LG AI Research, in a recent interview with The Korea Herald. “It’s a national mandate.”
Seated next to him, co-head Lee Hong-lak adds, “This isn’t a game where second place is good enough. Korea must achieve technical sovereignty to survive and then scale to compete.”
What they’re describing is more than a corporate agenda. It is, in essence, a geopolitical strategy — one defined by semiconductors, algorithms and the global flow of data.
Sovereign AI: Not just a buzzword
Terms such as “sovereign AI,” “Korean-style AI” and “vertical AI” have become frequent fixtures at conferences and in policy documents. But few articulate their real-world implications as clearly as Lim and Lee.
“Sovereign AI doesn’t mean isolationism,” Lim emphasizes. “It means having the freedom to control, modify and evolve our AI systems without foreign dependence.”
In other words, sovereignty in AI equates to technological self-determination — a concept that gains critical importance as AI increasingly permeates areas such as national defense, education, culture and health care.
“The risk isn’t limited to licensing or vendor lock-in,” Lee adds. “It’s about data dependence. When key operations rely on external (application programming interfaces), like ChatGPT, the very data that powers AI, arguably the most valuable national asset of the 21st century, could flow overseas.”
Exaone: Korea’s answer to global LLMs
At the core of LG’s strategy is Exaone, the company’s flagship large language model. Since its launch in 2021, Exaone has undergone rapid development, now standing at version 4.0. While its size and performance match global peers, LG’s unique approach to model training and deployment distinguishes it.
LG AI Research co-head Lim Woo-hyung (LG AI Research)
“We knew early on that we couldn’t match the GPU scale of hyperscalers in the US or China,” Lim explains. “So we optimized for efficiency. We invested heavily in data quality, synthetic data generation and cost-effective training methods.”
Exaone is now embedded across various LG Group affiliates, supporting document Q&A systems, intelligent customer service centers and industry-specific use cases.
“This isn’t AI for AI’s sake,” says Lee. “This is vertical AI, solutions tailored to Korea’s industrial backbone: manufacturing, electronics and telecom.”
One of Korea’s enduring challenges in AI is linguistic. The imbalance between English and Korean training data remains stark, limiting Korean language model performance.
But rather than seeing this as a handicap, LG sees opportunity.
“Training Korean in conjunction with multiple languages actually improves Korean-language performance,” Lee notes. “It’s not about creating Korean-only AI. It’s about building a globally competitive AI that also excels in Korean contexts.”
To that end, Exaone 4.0 supports Korean, English and Spanish, with the latter added to align with LG’s growing footprint in North and Latin America.
From underdog to contender
Can Korea realistically join the US and China as an AI superpower?
LG AI Research co-head Lee Hong-lak (LG AI Research)
“We must. And we can,” Lee says without hesitation. “If you look just beyond the top two, that’s where Korea can lead.”
Lim is more direct. “This isn’t about ambition. It’s about survival.”
Still, the duo is acutely aware of Korea’s limitations. “Chinese firms such as Alibaba operate with tens of thousands of GPUs,” Lim says. “We don’t have that kind of infrastructure. But we compensate with sharp focus and efficient execution.”
Another hurdle is talent. To address this, LG launched its own AI graduate school, the first corporate-run AI school in Korea with government accreditation.
“We’re not just building talent,” says Lim. “We’re building a strategic moat. Some call it that. We call it nation-building.”
South Korea has historically missed major technology waves, from early search engines to global platform businesses. With AI, there’s a sense that the stakes are different.
“This may be Korea’s last (and) best chance to lead, not follow,” Lee says.
“Global trust in AI is still up for grabs,” he adds. “Korea, with its technological depth and industrial discipline, can build an AI ecosystem that is not only powerful but principled.”
From sovereignty to scalability, and from constraint to strategy, LG’s quiet yet determined AI push is more than a corporate initiative. It is a national test — a question of whether a small but ambitious nation can define the digital rules of the 21st century.
The race for sovereign AI is intensifying, with countries rushing to build their own large language models to secure technological independence. Korea is no exception. The government has tapped five leading companies to spearhead the creation of homegrown models tailored to national priorities. In this high-stakes contest, The Korea Herald launches a special series exploring Korea’s AI industry, its standing in the global arena, and the rise of Korean-language-focused systems. This third installment explores Korea’s pursuit of AI sovereignty and its survival strategies in the face of global competition through an in-depth interview with leading AI experts. – Ed.
U.S. President Donald Trump is about to do something none of his predecessors have: make a second full state visit to the UK. Ordinarily, a president in a second term of office visits and meets with the monarch, but doesn’t receive a second full state visit.
On this one, it seems he’ll be accompanied by two of the biggest faces in the ever-growing AI race: OpenAI CEO Sam Altman and NVIDIA CEO Jensen Huang.
This is according to a report by the Financial Times, which claims that the two are accompanying President Trump to announce a “large artificial intelligence infrastructure deal.”
The deal is said to support a number of data center projects in the UK, marking another push to develop “sovereign” AI for one of the United States’ allies.
The report claims that the two CEOs will announce the deal during the state visit, with OpenAI supplying the technology and NVIDIA the hardware. The UK will supply all the energy required, which is handy for the two companies involved.
UK energy is some of the most expensive in the world (one reason I’m trying to use my gaming PC with an RTX 5090 a lot less!).
The exact makeup of the deal is still unknown, and, naturally, neither the U.S. nor the UK government has said anything at this point.
AI has helped push NVIDIA to the lofty height of being the world’s most valuable company. (Image credit: Getty Images | Kevin Dietsch)
The UK government, like many others, has openly announced its plans to invest in AI. As the next frontier for tech, you either get on board or you get left behind. And President Trump has made no secret of his desires to ensure the U.S. is a world leader.
OpenAI isn’t the only company that could provide the software side, but it is the most established. While Microsoft may be looking towards a future where it is less reliant on the tech behind ChatGPT for its own AI ambitions, it makes total sense that organizations around the world would be looking to OpenAI.
NVIDIA, meanwhile, continues to be the runaway leader on the hardware front. We’ve seen recently that AMD is planning to keep pushing forward, and a recent Chinese model has reportedly been built to run specifically without NVIDIA GPUs.
But for now, everything runs best on NVIDIA, and as long as it can keep churning out enough GPUs to fill these data centers, it will continue to print money.
The state visit is scheduled to begin on Wednesday, September 17, so I’ll be keeping a close eye out for when this AI deal gets announced.
The federal government is investing $28.7 million to equip Canadian workers with skills for a rapidly evolving clean energy sector and to expand artificial intelligence (AI) research capacity.
The funding, announced Sept. 9, includes more than $9 million over three years for the AI Pathways: Energizing Canada’s Low-Carbon Workforce project. Led by the Alberta Machine Intelligence Institute (Amii), the initiative will train nearly 5,000 energy sector workers in AI and machine learning skills for careers in wind, solar, geothermal and hydrogen energy. Training will be offered both online and in-person to accommodate mid-career workers, industry associations, and unions across Canada.
In addition, the government is providing $19.7 million to Amii through the Canadian Sovereign AI Compute Strategy, expanding access to advanced computing resources for AI research and development. The funding will support researchers and businesses in training and deploying AI models, fostering innovation, and helping Canadian companies bring AI-enabled products to market.
“Canada’s future depends on skilled workers. Investing and upskilling Canadian workers ensures they can adapt and succeed in an energy sector that’s changing faster than ever,” said Patty Hajdu, Minister of Jobs and Families and Minister responsible for the Federal Economic Development Agency for Northern Ontario.
Evan Solomon, Minister of Artificial Intelligence and Digital Innovation, added that the investment “builds an AI-literate workforce that will drive innovation, create sustainable jobs, and strengthen our economy.”
Amii CEO Cam Linke said the funding empowers Canada to become “the world’s most AI-literate workforce” while providing researchers and businesses with a competitive edge.
The AI Pathways initiative is one of eight projects funded under the Sustainable Jobs Training Fund, which supports more than 10,000 Canadian workers in emerging sectors such as electric vehicle maintenance, green building retrofits, low-carbon energy, and carbon management.
The announcement comes as Canada faces workforce shifts, with an estimated 1.2 million workers retiring across all sectors over the next three years and the net-zero transition projected to create up to 400,000 new jobs by 2030.
The federal investments aim to prepare Canadians for the jobs of the future while advancing research, innovation, and commercialization in AI and clean energy.
In the rapidly evolving field of artificial intelligence, a new contender has emerged from China’s research labs, promising to reshape how we think about energy-efficient computing. The SpikingBrain-7B model, developed by the Brain-Inspired Computing Lab (BICLab) at the Chinese Academy of Sciences, represents a bold departure from traditional large language models. Drawing inspiration from the human brain’s neural firing patterns, this system employs spiking neural networks to achieve remarkable efficiency gains. Unlike conventional transformers that guzzle power, SpikingBrain-7B mimics biological neurons, firing only when necessary, which slashes energy consumption dramatically.
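The “fire only when necessary” behavior of spiking neurons can be illustrated with a leaky integrate-and-fire model, the textbook building block of spiking networks. This is a minimal sketch for intuition, not BICLab’s actual neuron design; the leak, threshold and reset values here are illustrative assumptions.

```python
def lif_run(input_current, threshold=1.0, leak=0.9, v_reset=0.0):
    """Simulate a single leaky integrate-and-fire (LIF) neuron.

    The membrane potential decays ("leaks") each step and integrates the
    incoming current; a spike (1) is emitted only when the potential
    crosses the threshold, after which it resets. Between spikes the
    neuron outputs 0, so no downstream work is triggered -- the source of
    the energy savings claimed for spiking networks.
    """
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i          # leaky integration of input current
        if v >= threshold:        # fire only when the threshold is crossed
            spikes.append(1)
            v = v_reset           # reset membrane potential after a spike
        else:
            spikes.append(0)      # silent: no event, no computation
    return spikes

# A weak constant input produces only sparse, occasional spikes.
print(lif_run([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Note how a dense model would produce an output at every step, whereas here most steps emit nothing at all.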
At its core, the model integrates hybrid-linear attention mechanisms and conversion-based training techniques, allowing it to run on domestic MetaX chips without relying on NVIDIA hardware. This innovation addresses a critical bottleneck in AI deployment: the high energy demands of training and inference. According to a technical report published on arXiv, the SpikingBrain series, including the 7B and 76B variants, demonstrates over 100 times faster first-token generation at long sequence lengths, making it ideal for edge devices in industrial control and mobile applications.
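The long-sequence speedup attributed to linear attention comes from replacing the per-token scan over all previous keys with a fixed-size running state. The sketch below shows that recurrence in its generic form; the feature map `phi` and the normalization are illustrative assumptions, not the specific hybrid mechanism used in SpikingBrain.

```python
import numpy as np

def linear_attention(q, k, v, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Causal linear attention via a constant-size recurrent state.

    Softmax attention recomputes scores over all past tokens, so each new
    token costs O(n). The linear form keeps S = sum_t phi(k_t) v_t^T and
    z = sum_t phi(k_t); each token then costs O(d^2) regardless of
    sequence length -- the property behind fast first-token generation
    on long inputs.
    """
    d = q.shape[-1]
    S = np.zeros((d, v.shape[-1]))   # running key-value summary
    z = np.zeros(d)                  # running normalizer
    outputs = []
    for qt, kt, vt in zip(q, k, v):
        fk = phi(kt)
        S += np.outer(fk, vt)        # fold this token into the state
        z += fk
        fq = phi(qt)
        outputs.append(fq @ S / (fq @ z + 1e-6))
    return np.stack(outputs)

# Toy check: the state stays (d, d) no matter how long the sequence is.
rng = np.random.default_rng(0)
q = rng.standard_normal((128, 8))
k = rng.standard_normal((128, 8))
v = rng.standard_normal((128, 8))
out = linear_attention(q, k, v)
print(out.shape)  # → (128, 8)
```

In practice such linear layers are typically mixed with a few full-attention layers, which is what “hybrid” refers to in these architectures.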
Breaking Away from Transformer Dominance
The genesis of SpikingBrain-7B can be traced to BICLab’s GitHub repository, where the open-source code reveals a sophisticated architecture blending spiking neurons with large-scale model training. Researchers at the lab, led by figures like Guoqi Li and Bo Xu, have optimized for non-NVIDIA clusters, overcoming challenges in parallel training and communication overhead. This approach not only enhances stability but also paves the way for neuromorphic hardware that prioritizes energy optimization over raw compute power.
Recent coverage in Xinhua News highlights how SpikingBrain-1.0, the foundational system, breaks from mainstream models like ChatGPT by using spiking networks instead of dense computations. This brain-inspired paradigm allows the model to train on just a fraction of the data typically required—reports suggest as little as 2%—while matching or exceeding transformer performance in benchmarks.
Efficiency Gains and Real-World Applications
Delving deeper, the model’s spiking mechanism enables asynchronous processing, akin to how the brain handles information dynamically. This is detailed in the arXiv report, which outlines a roadmap for next-generation hardware that could integrate seamlessly into sectors like healthcare and transportation. For instance, in robotics, SpikingBrain’s low-power profile supports real-time decision-making without the need for massive data centers.
Posts on X (formerly Twitter) from AI enthusiasts, such as those praising its 100x speedups, reflect growing excitement. Users have noted how the model’s hierarchical processing mirrors neuroscience findings, with emergent brain-like patterns in its structure. This sentiment aligns with broader neuromorphic computing trends, as seen in a Nature Communications Engineering article on advances in robotic vision, where spiking networks enable efficient AI in constrained environments.
Challenges and Future Prospects
Despite its promise, deploying SpikingBrain-7B isn’t without hurdles. The arXiv paper candidly discusses adaptations needed for CUDA and Triton operators in hybrid attention setups, underscoring the technical feats involved. Moreover, training on MetaX clusters required custom optimizations to handle long-sequence topologies, a feat that positions China at the forefront of independent AI innovation amid global chip restrictions.
In industry circles, this development is seen as a catalyst for shifting AI paradigms. A NotebookCheck report emphasizes its potential for up to 100x performance boosts over conventional systems, fueling discussions on sustainable AI. As neuromorphic computing gains traction, SpikingBrain-7B could inspire a wave of brain-mimicking models, reducing the environmental footprint of AI while expanding its reach to everyday devices.
Implications for Global AI Research
Beyond technical specs, the open-sourcing of SpikingBrain-7B via GitHub invites global collaboration, with the repository already garnering attention for its spike-driven transformer implementations. This mirrors earlier BICLab projects like Spike-Driven-Transformer-V2, building a continuum of research toward energy-efficient intelligence.
Looking ahead, experts anticipate integrations with emerging hardware, as outlined in PMC’s coverage of spike-based dynamic computing. With SpikingBrain’s bilingual capabilities and industry validations, it stands as a testament to how bio-inspired designs can democratize AI, challenging Western dominance and fostering a more inclusive technological future.