South Korean regulator to adopt AI in competition enforcement | MLex
By Wooyoung Lee (September 15, 2025, 05:43 GMT | Insight) — South Korea’s competition watchdog has set up a taskforce dedicated to adopting artificial intelligence in its enforcement and administrative work, aiming to expedite case handling, detect unreported mergers and strengthen oversight of unfair practices…
Transparency, Not Speed, Could Decide AI’s Future in Finance

Corporate finance has long been among the early adopters of automation. From Lotus 1-2-3 to robotic process automation (RPA), the field has a history of embracing tools that reduce manual workload while maintaining strict governance.
Tech from China could take the ‘stealth’ out of stealth subs using Artificial Intelligence, magnetic wake detection

Submarines were once considered the stealthiest assets in any navy. Not anymore. Studies from China suggest that new technology can defeat the stealth that makes submarines such powerful war machines. Innovations that detect underwater vessels could change the face of naval warfare, and artificial intelligence and magnetic wake detection are among the methods being used. Here is what you should know.
China is developing submarine detection technologies using AI. How it works
The studies from China suggest that subs could be highly vulnerable to artificial intelligence (AI) and magnetic field detection technologies, as reported by the South China Morning Post.
In a study published in August, a team led by Meng Hao from the China Helicopter Research and Development Institute revealed an AI-powered anti-submarine warfare (ASW) system.
The technology is being touted as the first of its kind, enabling automated, AI-driven decision-making in submarine detection.
As per the study published in the journal Electronics Optics & Control, the ASW system mimics a smart battlefield commander, integrating real-time data from sonar buoys, radar, underwater sensors, and ocean conditions like temperature and salinity.
Powered by AI, the system can autonomously analyse and adapt, slashing a submarine’s escape chances to just 5 per cent.
This would mean only one in 20 submarines could evade detection and attack.
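The published details are sparse, but the behaviour described, fusing inputs from sonar buoys, radar, underwater sensors and ocean conditions into a single detection estimate, is the classic shape of probabilistic sensor fusion. The Python sketch below illustrates only that general technique; every sensor name, detection rate and false-alarm rate in it is invented for illustration, and nothing here is drawn from the actual system described by Meng’s team.

```python
# Illustrative sketch of probabilistic multi-sensor fusion. All sensors,
# detection rates and false-alarm rates below are hypothetical; the design
# of the ASW system described in the study is not public.

def fuse(prior: float, observations: list[tuple[bool, float, float]]) -> float:
    """Bayesian update of P(submarine present) from independent sensors.

    Each observation is (hit, p_hit_given_sub, p_hit_given_no_sub).
    """
    odds = prior / (1.0 - prior)
    for hit, p_detect, p_false_alarm in observations:
        if hit:
            odds *= p_detect / p_false_alarm              # evidence for a sub
        else:
            odds *= (1 - p_detect) / (1 - p_false_alarm)  # evidence against
    return odds / (1.0 + odds)

# Hypothetical patrol scenario: sonar buoy and magnetic anomaly detector
# report contacts, surface-search radar does not.
p = fuse(prior=0.30, observations=[
    (True,  0.70, 0.10),   # sonar buoy
    (False, 0.40, 0.05),   # radar (nothing sighted)
    (True,  0.60, 0.02),   # magnetic anomaly detector
])
print(f"P(submarine present) = {p:.3f}")

# Note the arithmetic behind the article's figures: a 95 per cent detection
# rate leaves a 5 per cent escape chance, i.e. one boat in 20 (1 / 0.05 == 20).
```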
Such a system would mark a significant shift in naval warfare, with researchers warning that the era of the “invisible” submarine is ending.
Stealth may soon be an impossible feat, Meng’s team said.
China can track US submarines via ‘magnetic wakes’
In December last year, scientists from Northwestern Polytechnical University (NPU) in Xi’an revealed a novel method for tracking submarines via ‘magnetic wakes’.
The study, led by Associate Professor Wang Honglei, models how submarines generate faint magnetic fields as they disturb seawater, creating ‘Kelvin wakes’.
These wakes persist long after the vessel has passed, leaving “footprints in the ocean’s magnetic fabric,” said the study, published in the Journal of Harbin Engineering University on December 4.
For example, a Seawolf-class submarine travelling at 24 knots at a depth of 30 metres generates a magnetic field of 10⁻¹² tesla, detectable by existing airborne magnetometers.
This method exploits a critical vulnerability in submarines: the Kelvin wakes “cannot be silenced,” Wang’s team said.
This contrasts with acoustic, or sound-based, detection, which submarines can counter with sound-dampening technologies.
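For a sense of scale, the study’s 10⁻¹² tesla figure (one picotesla) can be weighed against a magnetometer’s noise floor. The back-of-the-envelope Python check below does that; the signal level comes from the study, but the noise density and averaging time are assumptions chosen to be representative of published airborne magnetometer specifications, not figures from the research.

```python
import math

# Rough detectability check for the wake signature cited in the study.
# signal_T is taken from the article; the noise density and averaging
# time below are illustrative assumptions, not values from the paper.
signal_T = 1e-12          # wake magnetic signature: 1 picotesla
noise_density = 1e-13     # assumed sensor noise, tesla per sqrt(Hz)
averaging_s = 10.0        # assumed averaging time per measurement, seconds

# Averaging for t seconds narrows the effective bandwidth to ~1/t Hz,
# reducing RMS noise by a factor of sqrt(t).
noise_rms = noise_density / math.sqrt(averaging_s)
snr = signal_T / noise_rms

print(f"RMS noise after averaging: {noise_rms:.2e} T")
print(f"Signal-to-noise ratio:     {snr:.1f}")
print("Detectable at 3-sigma" if snr > 3 else "Below a 3-sigma threshold")
```

Under those assumptions, the one-picotesla signature sits well above the averaged noise, which is consistent with the team’s claim that existing airborne instruments could detect it.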
Together, the studies suggest that AI and magnetic detection could soon make submarine stealth a thing of the past.
Rethinking the AI Race | The Regulatory Review

Openness in AI models is not the same as freedom.
In 2016, Noam Chomsky, the father of modern linguistics, published the book Who Rules the World?, referring to the United States’ dominance in global affairs. Today, policymakers such as U.S. President Donald J. Trump argue that whoever wins the artificial intelligence (AI) race will rule the world, driven by a relentless, borderless competition for technological supremacy. One strategy gaining traction is open-source AI. But is it advisable? The short answer, I believe, is no.
Closed-source and open-source represent the two main paradigms in software, and AI software is no exception. While closed-source refers to proprietary software with restricted use, open-source software typically involves making the underlying source code publicly available, allowing unrestricted use, including the ability to modify the code and develop new applications.
AI is impacting virtually every industry, and AI startups have proliferated nonstop in recent years. OpenAI secured a multi-billion-dollar investment from Microsoft, while Anthropic has attracted significant investments from Amazon and Google. These companies are currently leading the AI race with closed-source models, a strategy aimed at maintaining proprietary control and addressing safety concerns.
But open-source models have consistently driven innovation and competition in software. Linux, one of the most successful open-source operating systems ever, is pivotal in the computer industry: Google Android, which is used in approximately 70 percent of smartphones worldwide, is built on Linux, as are Amazon Web Services, Microsoft Azure, and all of the world’s top 500 supercomputers. The success story of open-source software naturally fuels enthusiasm for open-source AI. And behind the scenes, companies such as Meta are developing open-source AI initiatives to promote the democratization and growth of AI through a joint effort.
Mark Zuckerberg, in promoting an open-source model for AI, recalled the story of the Linux open-source operating system, which became “the industry standard foundation for both cloud computing and the operating systems that run most mobile devices—and we all benefit from superior products because of it.”
But the story of Linux is quite different from Meta’s “open-source” AI project, Llama. First and foremost, no universally accepted definition of open-source AI exists. Second, Linux had no “Big Tech” corporation behind it. Its success was made possible by the free software movement, led by American activist and programmer Richard Stallman, who created the GNU General Public License (GPL) to ensure software freedom. The GPL allowed for the free distribution and collaborative development of essential software, most notably the Linux open-source operating system, developed by Finnish programmer Linus Torvalds. Linux has become the foundation for numerous open-source operating systems, developed by a global community that has fostered a culture of openness, decentralization, and user control. Llama is not distributed under a GPL.
Under the Llama 4 licensing agreement, entities with more than 700 million monthly active users in the preceding calendar month must obtain a license from Meta, “which Meta may grant to you in its sole discretion” before using the model. Moreover, algorithms powering large AI models rely on vast amounts of data to function effectively. Meta, however, does not make its training data publicly available.
Thus, can we really call it open source?
Most importantly, AI presents fundamentally different and more complex challenges than traditional software, with the primary concern being safety. Traditional algorithms are predictable; we know the inputs and outputs. Consider the Euclidean algorithm, which provides an efficient way for computing the greatest common divisor of two integers. Conversely, AI algorithms are typically unpredictable because they leverage a large amount of data to build models, which are becoming increasingly sophisticated.
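The contrast is easy to make concrete. Here is the Euclidean algorithm as a few lines of Python: its behaviour is fully determined and every intermediate step can be traced, which is exactly the property the author says large learned models lack.

```python
def gcd(a: int, b: int) -> int:
    """Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

# Deterministic and fully explainable: the same inputs always yield the
# same output, and each step can be inspected.
assert gcd(48, 18) == 6
assert gcd(270, 192) == 6
```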
Deep learning algorithms, which underlie large language models such as ChatGPT and other well-known AI applications, rely on increasingly complex structures that make AI outputs virtually impossible to interpret or explain. Large language models are performing increasingly well, but would you trust something that you cannot fully interpret and understand? Open-source AI, rather than offering a solution, may be amplifying the problem. Although it is often seen as a tool to promote democratization and technological progress, open source in AI increasingly resembles a Ferrari engine with no brakes.
Like cars, computers and software are powerful technologies, and as with any technology, AI can cause harm if misused or deployed without a proper understanding of the risks. Currently, we do not know what AI can and cannot do. Competition is important, and open-source software has been a key driver of technological progress, providing the foundation for widely used technologies such as Android smartphones and web infrastructure. It has been, and continues to be, a key paradigm for competition, especially in a digital framework.
Is AI different because we do not know how to stop this technology if required? Free speech, free society, and free software are all appealing concepts, but let us do better than that. In the 18th century, French philosopher Baron de Montesquieu argued that “Liberty is the right to do everything the law permits.” Rather than promoting openness and competition at any cost to rule the world, liberty in AI seems to require a calibrated legal framework that balances innovation and safety.