
Transparency, Not Speed, Could Decide AI’s Future in Finance

Corporate finance has long been among the early adopters of automation. From Lotus 1-2-3 to robotic process automation (RPA), the field has a history of embracing tools that reduce manual workload while maintaining strict governance.

Generative artificial intelligence (AI) fits neatly into that lineage.

Findings from the PYMNTS Intelligence July 2025 PYMNTS Data Book, “The Two Faces of AI: Gen AI’s Triumph Meets Agentic AI’s Caution,” reveal that CFOs love generative AI. Nearly 9 in 10 report strong ROI from pilot deployments, and an overwhelming 98% say they’re comfortable using it to inform strategic planning.

Yet the enthusiasm from the finance function plummets when the conversation shifts from copilots and dashboards to fully autonomous “agentic AI” systems: software that can act on instructions, make decisions and execute workflows without human hand-holding. Just 15% of finance leaders are even considering deployment.

This trust gap is more than a cautious pause. It reveals a deeper tension in corporate DNA: between a legacy architecture designed to mitigate risk and a new generation of systems designed to act. Where generative AI has found traction in summarizing reports or accelerating analysis, agentic AI demands something CFOs are far less ready to give: permission to decide.

Why Agentic AI Feels Different

Generative AI won finance leaders over by making their lives easier without upending the rules. It accelerates analysis, drafts explanations, and surfaces hidden risks. It works inside existing processes and leaves final decisions to people.

That made the ROI for generative AI obvious: faster closes, better forecasts and teams that can do more with less. It’s the kind of technology finance chiefs have embraced for decades.

Agentic AI is different. These systems don’t just suggest — they act. They can reconcile accounts, process transactions or file compliance reports automatically. That autonomy is exactly what the PYMNTS Intelligence report found rattles finance chiefs. Executives who love Gen AI when it writes reports or crunches scenarios can slam on the brakes when agentic machines start to move money or approve deals.

Governance is the first worry. Who signs off when a machine moves money? Visibility is another. Once an AI agent logs into a system over encrypted channels, security teams may have no idea what it’s really doing. And accountability is the big one: if an autonomous system makes a mistake in a tax filing, no regulator will accept “the software decided” as an excuse.

Read the report: The Two Faces of AI: Gen AI’s Triumph Meets Agentic AI’s Caution

The black-box nature of AI doesn’t help. Unlike traditional scripts or rules engines, agentic systems use probabilistic reasoning. They don’t always produce a clear audit trail. For executives whose careers depend on being able to explain every number, that’s a deal breaker.
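
If agentic systems are to earn that trust, one plausible control is to interpose an approval gate between the agent and any system of record. The Python sketch below is a minimal illustration of that pattern, with entirely hypothetical names and no particular vendor’s API: the agent may propose an action, every proposal is appended to an audit log together with its rationale, and nothing executes until a named human signs off.

```python
import json
import time
import uuid
from dataclasses import asdict, dataclass, field


@dataclass
class ProposedAction:
    description: str  # e.g. "pay invoice INV-1042"
    amount: float     # monetary value at stake
    rationale: str    # the agent's stated reason, preserved for audit
    proposal_id: str = field(default_factory=lambda: uuid.uuid4().hex)


class ApprovalGate:
    """Holds agent proposals until a human signs off; every step is logged."""

    def __init__(self, audit_log_path: str) -> None:
        self.audit_log_path = audit_log_path

    def _log(self, event: str, action: ProposedAction, actor: str) -> None:
        record = {"ts": time.time(), "event": event, "actor": actor, **asdict(action)}
        with open(self.audit_log_path, "a") as f:
            f.write(json.dumps(record) + "\n")

    def submit(self, action: ProposedAction) -> None:
        # The agent can propose, but nothing executes yet.
        self._log("proposed", action, actor="agent")

    def approve_and_execute(self, action: ProposedAction, approver: str) -> None:
        # A named human, not the agent, is the accountable party of record.
        self._log("approved", action, actor=approver)
        self._log("executed", action, actor="system")


gate = ApprovalGate("audit.jsonl")
payment = ProposedAction("pay invoice INV-1042", 12_500.00, "matched PO and receipt")
gate.submit(payment)
gate.approve_and_execute(payment, approver="controller@example.com")
```

The point is not the code but the contract it enforces: every action carries a machine-readable rationale and an accountable human approver.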

Legacy infrastructure makes things worse. Finance data is scattered across enterprise software, procurement platforms, and banking portals. To work autonomously, AI would need seamless access to all of them, which means threading through a maze of authentication systems and siloed permissions.

Enterprises already struggle to manage those identities for employees. Extending them to machines that act like employees, only faster and harder to monitor, could be a recipe for hesitation.

If autonomous systems are going to move beyond experiments, they’ll need to prove their value in hard numbers. Finance chiefs want to see cycle times shrink, errors fall, and working capital improve. They want audits to be faster, not messier.

The irony is that CFOs don’t need AI to be flawless. They need it to be explainable. In other words, transparency is the killer feature.

Unless agentic AI can show that kind of return, it may stay parked in the “idea” column instead of the project pipeline.





South Korean regulator to adopt AI in competition enforcement | MLex

By Wooyoung Lee (September 15, 2025, 05:43 GMT | Insight) — South Korea’s competition watchdog has set up a taskforce dedicated to adopting artificial intelligence in its enforcement and administrative work, aiming to expedite case handling, detect unreported mergers and strengthen oversight of unfair practices…







Tech from China could take the ‘stealth’ out of stealth subs using Artificial Intelligence, magnetic wake detection

Submarines were once considered the stealthiest assets of navies. Not anymore. Studies from China suggest that new technology can strip away the stealth that makes submarines such powerful war machines, and innovations that detect underwater vessels could change the face of naval warfare. Artificial intelligence and magnetic wake detection are among the methods being used. Here is what you should know.

China is developing submarine detection technologies using AI. How it works

According to the South China Morning Post, the studies suggest that submarines could be highly vulnerable to artificial intelligence (AI) and magnetic field detection technologies.


In a study published in August, a team led by Meng Hao from the China Helicopter Research and Development Institute revealed an AI-powered anti-submarine warfare (ASW) system. The technology is being touted as the first of its kind, enabling automated decision-making in submarine detection.

As per the study published in the journal Electronics Optics & Control, the ASW system mimics a smart battlefield commander, integrating real-time data from sonar buoys, radar, underwater sensors, and ocean conditions like temperature and salinity.

Powered by AI, the system can autonomously analyse and adapt, slashing a submarine’s escape chances to just 5 per cent.

This would mean only one in 20 submarines could evade detection and attack.
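
The study’s algorithms are not public, but the data-fusion step it describes can be illustrated in outline. The Python sketch below uses a naive-Bayes-style combination of log-odds, with invented sensor names and probabilities, to show how individually weak readings can compound into a confident detection.

```python
import math

# Hypothetical per-sensor probabilities that a contact is a submarine.
# The real system's models are not public; this naive-Bayes log-odds
# fusion only illustrates how independent cues compound.
PRIOR = 0.01  # assumed base rate of a submarine being present


def log_odds(p: float) -> float:
    return math.log(p / (1.0 - p))


def fuse(prior: float, sensor_probs: list[float]) -> float:
    """Combine per-sensor estimates assuming conditional independence."""
    total = log_odds(prior) + sum(log_odds(p) - log_odds(prior) for p in sensor_probs)
    return 1.0 / (1.0 + math.exp(-total))


readings = {
    "sonar_buoy": 0.30,        # faint acoustic return
    "radar_periscope": 0.20,   # brief surface contact
    "magnetic_anomaly": 0.40,  # local field disturbance
}
print(f"fused detection probability: {fuse(PRIOR, list(readings.values())):.2f}")
```

Under these invented numbers, three individually ambiguous cues fuse into a near-certain detection, which is the logic behind integrating sonar, radar and environmental data in a single system.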

This would mark a significant shift in naval warfare, with researchers warning that the era of the “invisible” submarine is ending.

Stealth may soon be an impossible feat, Meng’s team said.

China can track US submarines via ‘magnetic wakes’

In December last year, scientists from Northwestern Polytechnical University (NPU) in Xi’an revealed a novel method for tracking submarines via ‘magnetic wakes’.

The study, led by Associate Professor Wang Honglei, models how submarines generate faint magnetic fields as they disturb seawater, creating ‘Kelvin wakes’.

Long after the vessel has passed, these wakes leave “footprints in the ocean’s magnetic fabric,” said the study, published in the Journal of Harbin Engineering University on December 4.

For example, a Seawolf-class submarine travelling at 24 knots at a depth of 30 metres generates a magnetic field of 10⁻¹² tesla, detectable by existing airborne magnetometers.
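
That figure is easy to sanity-check. In the rough Python calculation below, the 10⁻¹² tesla signal is the value reported from the study; the sensor noise floor and averaging window are assumptions, chosen at the sensitive end of published figures for optically pumped airborne magnetometers rather than taken from the paper.

```python
import math

# Back-of-the-envelope check of the detectability claim. The 1e-12 tesla
# (1 picotesla) wake signal is the figure reported from the study; the
# noise floor and averaging window below are assumptions.
wake_signal_T = 1e-12             # reported Kelvin-wake field, in tesla
noise_density_T_per_rtHz = 1e-13  # assumed sensor noise: 0.1 pT/sqrt(Hz)
integration_time_s = 1.0          # assumed averaging window

# Averaging over t seconds narrows the measurement bandwidth to roughly
# 1/t Hz, so the effective noise is the density divided by sqrt(t).
effective_noise_T = noise_density_T_per_rtHz / math.sqrt(integration_time_s)
snr = wake_signal_T / effective_noise_T
print(f"signal-to-noise ratio: {snr:.0f}")  # 10 under these assumptions
```

A signal-to-noise ratio of about 10 under these assumptions is consistent with the detectability claim, though real-world geomagnetic noise and platform motion would eat into that margin.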

This method exploits a critical vulnerability in submarines: Kelvin wakes ‘cannot be silenced,’ Wang’s team said.

This is in contrast to acoustic, or sound-based, detection, which submarines can counter with sound-dampening technologies.

Together, the studies suggest that AI and magnetic detection could soon make submarine stealth a thing of the past.




Rethinking the AI Race | The Regulatory Review

Openness in AI models is not the same as freedom.

In 2016, Noam Chomsky, the father of modern linguistics, published the book Who Rules the World?, referring to the United States’ dominance in global affairs. Today, policymakers, such as U.S. President Donald J. Trump, argue that whoever wins the artificial intelligence (AI) race will rule the world, driven by a relentless, borderless competition for technological supremacy. One strategy gaining traction is open-source AI. But is it advisable? The short answer, I believe, is no.

Closed-source and open-source represent the two main paradigms in software, and AI software is no exception. While closed-source refers to proprietary software with restricted use, open-source software typically involves making the underlying source code publicly available, allowing unrestricted use, including the ability to modify the code and develop new applications.

AI is impacting virtually every industry, and AI startups have proliferated nonstop in recent years. OpenAI secured a multi-billion-dollar investment from Microsoft, while Anthropic has attracted significant investments from Amazon and Google. These companies are currently leading the AI race with closed-source models, a strategy aimed at maintaining proprietary control and addressing safety concerns.

But open-source models have consistently driven innovation and competition in software. Linux, one of the most successful open-source operating systems ever, is pivotal in the computer industry: Google Android, used in approximately 70 percent of smartphones worldwide, Amazon Web Services, Microsoft Azure and all of the world’s top 500 supercomputers run on Linux. That success story naturally fuels enthusiasm for open-source AI software. And behind the scenes, companies such as Meta are developing open-source AI initiatives to promote the democratization and growth of AI through a joint effort.

Mark Zuckerberg, in promoting an open-source model for AI, recalled the story of Linux’s open-source operating system. Linux became “the industry standard foundation for both cloud computing and the operating systems that run most mobile devices—and we all benefit from superior products because of it.”

But the story of Linux is quite different from Meta’s “open-source” AI project, Llama. First and foremost, no universally accepted definition of open-source AI exists. Second, Linux had no “Big Tech” corporation behind it. Its success was made possible by the free software movement, led by American activist and programmer Richard Stallman, who created the GNU General Public License (GPL) to ensure software freedom. The GPL allowed for the free distribution and collaborative development of essential software, most notably the open-source Linux operating system, developed by Finnish programmer Linus Torvalds. Linux has become the foundation for numerous open-source operating systems, developed by a global community that has fostered a culture of openness, decentralization, and user control. Llama is not distributed under a GPL.

Under the Llama 4 licensing agreement, entities with more than 700 million monthly active users in the preceding calendar month must obtain a license from Meta, “which Meta may grant to you in its sole discretion” before using the model. Moreover, algorithms powering large AI models rely on vast amounts of data to function effectively. Meta, however, does not make its training data publicly available.

Thus, can we really call it open source?

Most importantly, AI presents fundamentally different and more complex challenges than traditional software, with the primary concern being safety. Traditional algorithms are predictable; we know the inputs and outputs. Consider the Euclidean algorithm, which provides an efficient way to compute the greatest common divisor of two integers. Conversely, AI algorithms are typically unpredictable because they leverage large amounts of data to build models that are becoming increasingly sophisticated.
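
A few lines of Python capture the Euclidean algorithm completely, and every step of its execution can be traced, which is exactly the predictability at stake.

```python
def gcd(a: int, b: int) -> int:
    """Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

# Deterministic and fully inspectable: the same inputs always
# produce the same output, via the same sequence of steps.
print(gcd(252, 105))  # remainders 42, 21, 0 -> the answer is 21
```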

Deep learning algorithms, which underlie large language models such as ChatGPT and other well-known AI applications, rely on increasingly complex structures that make AI outputs virtually impossible to interpret or explain. Large language models are performing increasingly well, but would you trust something that you cannot fully interpret and understand? Open-source AI, rather than offering a solution, may be amplifying the problem. Although it is often seen as a tool to promote democratization and technological progress, open source in AI increasingly resembles a Ferrari engine with no brakes.

Like cars, computers and software are powerful technologies—but as with any technology, AI can harm if misused or deployed without a proper understanding of the risks. Currently, we do not know what AI can and cannot do. Competition is important, and open-source software has been a key driver of technological progress, providing the foundation for widely used technologies such as Android smartphones and web infrastructure. It has been, and continues to be, a key paradigm for competition, especially in a digital framework.

Is AI different because we do not know how to stop this technology if required? Free speech, free society, and free software are all appealing concepts, but let us do better than that. In the 18th century, French philosopher Baron de Montesquieu argued that “Liberty is the right to do everything the law permits.” Rather than promoting openness and competition at any cost to rule the world, liberty in AI seems to require a calibrated legal framework that balances innovation and safety.




