
AI Research

Promise, scepticism, and its meaning for Southeast Asia


Agentic AI is being talked about as the next major wave of artificial intelligence, but what it means for enterprises remains unsettled. The Capgemini Research Institute estimates agentic AI could unlock as much as US$450 billion in economic value by 2028. Yet adoption remains limited: only 2% of organisations have scaled its use, and trust in AI agents is already starting to slip.

That tension – high potential but low deployment – is what Capgemini’s new research explores. Based on an April 2025 survey of 1,500 executives at large organisations in 14 countries, including Singapore, the report highlights trust and oversight as important factors in realising value. Nearly three-quarters of executives said the benefits of human involvement in AI workflows outweigh the costs. Nine out of ten described oversight as either positive or at least cost-neutral.

The message is clear: AI agents work best when paired with people, not left on autopilot.

Early steps, slow progress

Roughly a quarter of organisations have launched agentic AI pilots, while only 14% have moved into implementation. For the majority, deployment is still at the planning stage. The report describes this widening gap between intent and readiness as one of the main barriers to capturing economic value.

The technology is not just theoretical – real-world applications are starting to emerge. One example is a personal shopping assistant that can search for items based on specific requests, generate product descriptions, answer questions, and place items in a cart using voice or text commands. While these tools typically stop short of completing financial transactions for security reasons, they already replicate many of the functions of a human assistant.

This raises bigger questions about the role of traditional websites. If AI can handle tasks like searching, comparing, and preparing purchases, will people still need to navigate online stores directly? For those who find busy websites overwhelming or difficult to navigate, an AI-driven interface may offer a simpler, more accessible option.

Defining agentic AI

To cut through the hype, AI News spoke with Jason Hardy, chief technology officer for artificial intelligence at Hitachi Vantara, about how enterprises in Asia-Pacific should think about the technology.

Jason Hardy, Chief Technology Officer for Artificial Intelligence at Hitachi Vantara.

“Agentic AI is software that can decide, act, and refine its strategy on its own,” Hardy said. “Think of it as a team of domain experts that can learn from experience, coordinate tasks, and operate in real time. Generative AI creates content and is usually reactive to prompts. Agentic AI may use GenAI inside it, but its job is to pursue objectives and take action in dynamic environments.”

The distinction – between producing outputs and driving outcomes – captures what agentic AI means for enterprise IT.

Why adoption is accelerating

According to Hardy, adoption is being driven by scale and complexity. “Enterprises are drowning in complexity, risk, and scale. Agentic AI is catching on because it does more than analyse. It optimises storage and capacity on the fly, automates governance and compliance, anticipates failures before they occur, and responds to security threats in real time. That shift from ‘insight’ to ‘autonomous action’ is why adoption is accelerating,” he explained.

Capgemini’s research supports this. The study found that while confidence in agentic AI is uneven, early deployments are proving useful when the technology takes on routine but essential IT tasks.

Where value is emerging

Hardy pointed to IT operations as the strongest use case so far. “Automated data classification, proactive storage optimisation, and compliance reporting save teams hours each day, while predictive maintenance and real-time cybersecurity responses reduce downtime and risk,” he said.

The impact goes beyond efficiency. The capabilities mean systems can detect problems before they escalate, allocate resources more effectively, and contain security incidents more quickly. “Early users are already using agentic AI to remediate incidents proactively before they escalate, strengthening reliability and performance in hybrid environments,” Hardy added.

For now, IT remains the most practical starting point: deployment there delivers measurable results and is central to how enterprises manage both cost and risk.

Southeast Asia’s starting point

For Southeast Asian organisations, Hardy said the first priority is getting the data right. “Agentic AI delivers value only when enterprise data is properly classified, secured, and governed,” he explained.

Infrastructure also matters: agentic AI requires systems that can support multi-agent orchestration, persistent memory, and dynamic resource allocation. Without this foundation, adoption will be limited in scope.

Many enterprises may choose to begin with IT operations, where agentic AI can pre-empt outages and optimise performance before rolling out to wider business functions.

Reshaping core workflows

Hardy expects agentic AI to reshape workflows in IT, supply chain management, and customer service. “In IT operations, agentic AI can anticipate capacity needs, rebalance workloads, and reallocate resources in real time. It can also automate predictive maintenance, preventing hardware failures before they occur,” he said.

Cybersecurity is another area of promise. “In cybersecurity, agentic AI is able to detect anomalies, isolate affected systems, and trigger immutable backups in seconds, reducing response times and mitigating potential damage,” Hardy noted.

The capabilities are not limited to proof-of-concept trials. Early deployments already show how agentic AI can strengthen reliability and resilience in hybrid environments.

Skills and leadership

Adoption will also require new human skills. “Agentic AI will shift the human role from execution to oversight and orchestration,” Hardy said. Leaders will need to set boundaries and monitor autonomous systems, ensuring they stay within ethical and organisational limits.

For managers, the change means less focus on administrative tasks and more on mentoring, innovation, and strategy. HR teams will need to build governance skills like auditing readiness and create new structures for integrating agentic AI effectively.

The workforce impact will be uneven. The World Economic Forum predicts that AI could create 11 million jobs in Southeast Asia by 2030 and displace nine million. Women and Gen Z are expected to face the sharpest disruptions, with more than 70% of women and up to 76% of younger workers in roles vulnerable to AI.

This highlights the urgency of reskilling, and major investments are already underway, with Microsoft committing $1.7 billion in Indonesia and rolling out training programmes in Malaysia and the wider region. Hardy stressed that capacity building must be inclusive, rapid, and strategic.

What comes next

Looking three years ahead, Hardy believes many leaders will underestimate the pace of change. “The first wave of benefits is already visible in IT operations: agentic AI is automating tasks like data classification, storage optimisation, predictive maintenance, and cybersecurity response, freeing teams to focus on higher-level strategic work,” he said.

But the larger surprise may be at the economic and business model level. IDC projects AI and generative AI could add around US$120 billion to the GDP of the ASEAN-6 by 2027. Hardy sees the implications as broader and faster than many expect. “This suggests the impact will be much faster and more material than many leaders currently anticipate,” he said.

In Indonesia, more than 57% of job roles are expected to be augmented or disrupted by AI, a reminder that transformation will not be limited to IT. It will cut across how businesses are structured, how they manage risk, and how they create value.

Balancing autonomy with oversight

The Capgemini findings and Hardy’s insights converge on the same theme: agentic AI holds huge promise, but its meaning in practice depends on balancing autonomy with trust and human oversight.

The technology may help enterprises lower costs, improve reliability, and unlock new revenue streams. But without a focus on governance, reskilling, and infrastructure readiness, adoption risks stalling.

For Southeast Asia, the question is not whether agentic AI will take hold, but how quickly – and whether enterprises can balance autonomy with accountability as machines begin to take on more responsibility for business decisions.

(Photo by Igor Omilaev)

See also: Beyond acceleration: the rise of agentic AI





Intelligence is not artificial | The Catholic Register



On our Comment pages, Sr. Helena Burns issues a robust call for a return to “old school” means of acquiring, developing and retaining knowledge in the age of AI.

Traditionalist though she might be in many ways, however, Sr. Burns’ appeal is not simply to revive the alliterative formula of Readin’, Writin’ and Arithmetic. Rather, she urges a return to the lost arts of using libraries, taking notes, listening to wiser heads, and above all using our own brains rather than relying on the ghost in the machine to explain the world.

“We can rebuild a talking, thinking, literate, memorizing culture. But it’s a slow build. It always was, always will be, and it starts when you’re a kiddo. Children in school are now saying they don’t want to learn how to read and write because computers will do it for them. They don’t know that they’re surrendering their humanity,” she writes.


The good news is that the much-rumoured surrender seems to be much further off than predicted in the recent frenzy over ChatGPT and its cohorts purportedly being thisclose to taking over the world and doing everything from producing perfect sour grapes to writing editorials. 

In fact, recent reports, particularly in the financial press, suggest AI-mania is already plateauing, if not hitting a downward curve. That doesn’t mean it won’t still cause significant disruption in workplaces or in how we navigate the storm-tossed seas of daily life. It doesn’t mean we can simply shrug off the statistic Sr. Burns cites of a reported 47 per cent decline in neural engagement among those who relied on artificial intelligence to help complete an essay versus those who got ink under their fingernails.

But as techno journalist Asa Fitch reported last week, Meta Platforms has delayed the rollout of its next AI iteration, Llama 4 Behemoth, because of engineering failures to significantly improve on the previous model. OpenAI, meanwhile, overhyped its follow-up, ChatGPT 5, and saw it effectively flatline in the market.

Business leaders, already sceptical of security and privacy concerns with AI, have hardly been reassured by the “tendency of even the best AI models to occasionally hallucinate wrong answers,” Fitch writes.

More critically, many businesses looking at the allure of AI don’t yet know, in very practical terms, what it can do for their particular sector. We tend to forget that from the “future is now” advent of the Internet, it took the better part of a decade before society began to appreciate its ubiquitous uses.


University of California, San Diego psychology professor Cory Miller points out there are even more formidable barriers to broad AI adoption. Not the least of these obstacles are the requirements for, as Miller says, “enormous hardware, constant access to vast training data, and unsustainable amounts of electrical power (emphasis added).”

How unsustainable? A human brain, Miller writes, “runs on 20 watts of power – less than a lightbulb.”

AI by contrast?

“To match the computational power of a single human brain, a leading AI system would require the same amount of energy that powers the entire city of Dallas. Let that sink in for a second. One lightbulb versus a city of 1.3 million people,” he says. 

The comparison is arithmetically sobering. It’s also ultimately a hallelujah chorus to the glory of creation that is humankind. We exist in a culture awash – it often seems perversely, even pridefully – in self-underestimation and outright denigration. Oh, to deploy Hamlet’s immortal phrase, what a piece of work is man.

Without question, evil lurks in our darker corners and threatens to beset our best and brightest achievements. But achieve we do as we collectively engage the unique phenomenal 20-watt light bulb brains that are the universal gift from God, our Sovereign Lord and Creator.

In another column in our Comment section, Mary Marrocco illuminates the dynamic of that gift and that engagement, quoting St. Athanasius’ observation that “when we forgot to look up to God, God came down to the low place we’d fixed our gaze on.”


The outcome was the glorious rise of our Holy Mother the Church, whose cycle of liturgical years, year after year, reminds us of who we are, what we are, and to whom we truly belong.

There is not a shred of artificiality in the intelligence of the resulting library (biblio) of the Bible’s books, its Gospels, its Good News. There is only God’s Word, the most extraordinary conversation any child, any human being, could ever be invited to learn from.

A version of this story appeared in the August 31, 2025, issue of The Catholic Register with the headline “Intelligence is not artificial“.




Has artificial intelligence finally passed the Will Smith spaghetti test? – Sky News








AI as a Researcher: First Peer-Reviewed Research Paper Written Without Humans



Artificial intelligence has crossed another significant milestone that challenges our understanding of what machines can achieve independently. For the first time in scientific history, an AI system has written a complete research paper that passed peer review at an academic conference without any human assistance in the writing process. This breakthrough could be a fundamental shift in how scientific research might be conducted in the future.

Historic Achievement

A paper produced by The AI Scientist-v2 passed the peer-review process at a workshop in a top international AI conference. The research was submitted to an ICLR 2025 workshop, which is one of the most prestigious venues in machine learning. The paper was generated by an improved version of the original AI Scientist, called The AI Scientist-v2.

The accepted paper, titled “Compositional Regularization: Unexpected Obstacles in Enhancing Neural Network Generalization,” received impressive scores from human reviewers. Of the three papers submitted for review, one received ratings that placed it above the acceptance threshold. This breakthrough is a significant advancement as AI can now participate in the fundamental process of scientific discovery that has been exclusively human for centuries.

The research team from Sakana AI, working with collaborators from the University of British Columbia and the University of Oxford, conducted this experiment. They received institutional review board approval and worked directly with ICLR conference organizers to ensure the experiment followed proper scientific protocols.

How The AI Scientist-v2 Works

The AI Scientist-v2 has achieved this success due to several major advancements over its predecessor. Unlike its predecessor, AI Scientist-v2 eliminates the need for human-authored code templates, can work across diverse machine learning domains, and employs a tree-search methodology to explore multiple research paths simultaneously.

The system operates through an end-to-end process that mirrors how human researchers work. It begins by formulating scientific hypotheses based on the research domain it is assigned to explore. The AI then designs experiments to test these hypotheses, writes the necessary code to conduct the experiments, and executes them automatically.

What makes this system particularly advanced is its use of agentic tree search methodology. This approach allows the AI to explore multiple research directions simultaneously, much like how human researchers might consider various approaches to solving a problem. This involves running experiments via agentic tree search, analyzing results, and generating a paper draft. A dedicated experiment manager agent coordinates this entire process to ensure that the research remains focused and productive.
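The article does not publish the internals of The AI Scientist-v2, but the agentic tree-search idea it describes – keeping several research directions alive at once and spending the experiment budget on the most promising – can be illustrated with a minimal best-first search. This is a sketch under stated assumptions only: `ResearchNode`, `run_experiment`, and the scoring are hypothetical stand-ins, not the actual system.

```python
import heapq
import random

class ResearchNode:
    """One candidate research direction with a promise score (higher is better)."""
    def __init__(self, hypothesis, score, depth=0):
        self.hypothesis = hypothesis
        self.score = score
        self.depth = depth

    def __lt__(self, other):
        # heapq is a min-heap, so invert the comparison:
        # the highest-scoring node gets popped first.
        return self.score > other.score

def run_experiment(node):
    """Stand-in for actually running an experiment: returns a noisy new score."""
    return node.score + random.uniform(-0.2, 0.2)

def tree_search(root_hypotheses, budget=10, branching=2):
    """Best-first search over research directions: repeatedly expand the most
    promising node into follow-up experiments, keeping alternatives in the
    frontier so less-promising branches are not discarded outright."""
    frontier = [ResearchNode(h, 0.5) for h in root_hypotheses]
    heapq.heapify(frontier)
    explored = []
    for _ in range(budget):
        if not frontier:
            break
        best = heapq.heappop(frontier)      # most promising direction so far
        explored.append(best)
        for i in range(branching):          # refine into follow-up experiments
            child = ResearchNode(f"{best.hypothesis} / variant {i}",
                                 run_experiment(best), best.depth + 1)
            heapq.heappush(frontier, child)
    # Report the direction whose experiments looked most promising overall
    return max(explored, key=lambda n: n.score)
```

The design point the article highlights is the allocation step: because expansion always goes to the current best-scoring node, compute naturally concentrates on promising directions while weaker branches wait in the frontier rather than being pruned.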

The system also includes an enhanced AI reviewer component that uses vision-language models to provide feedback on both the content and visual presentation of research findings. This creates an iterative refinement process where the AI can improve its own work based on feedback, similar to how human researchers refine their manuscripts based on colleague input.

What Made This Research Paper Special

The accepted paper focused on a challenging problem in machine learning called compositional generalization. This refers to the ability of neural networks to understand and apply learned concepts in new combinations they have never seen before. The AI Scientist-v2 investigated novel regularization methods that might improve this capability.

Interestingly, the paper also reported negative results. The AI discovered that certain approaches it hypothesized would improve neural network performance actually created unexpected obstacles. In science, negative results are valuable because they prevent other researchers from pursuing unproductive paths and contribute to our understanding of what does not work.

The research followed rigorous scientific standards throughout the process. The AI Scientist-v2 conducted multiple experimental runs to ensure statistical validity, created clear visualizations of its findings, and properly cited relevant previous work. It formatted the entire manuscript according to academic standards and wrote comprehensive discussions of its methodology and findings.

The human researchers who supervised the project conducted their own thorough review of all three generated papers. They found that while the accepted paper was of workshop quality, it contained some technical issues that would prevent acceptance at the main conference track. This honest assessment demonstrates the current limitations while acknowledging the significant progress achieved.

Technical Capabilities and Improvements

The AI Scientist-v2 demonstrates several remarkable technical capabilities that distinguish it from previous automated research systems. The system can work across diverse machine learning domains without requiring pre-written code templates. This flexibility means it can adapt to new research areas and generate original experimental approaches rather than following predetermined patterns.

The tree search methodology is a significant innovation in AI research automation. Rather than pursuing a single research direction, the system can maintain multiple hypotheses simultaneously and allocate computational resources based on the promise each direction shows. This approach mirrors how experienced human researchers often maintain several research threads while focusing most effort on the most promising avenues.

Another crucial improvement is the integration of vision-language models for reviewing and refining the visual elements of research papers. Scientific figures and visualizations are critical for communicating research findings effectively. The AI can now evaluate and improve its own data visualizations iteratively.

The system also demonstrates understanding of scientific writing conventions. It properly structures papers with appropriate sections, maintains consistent terminology throughout manuscripts, and creates logical flow between different parts of the research narrative. The AI shows awareness of how to present methodology, discuss limitations, and contextualize findings within existing literature.

Current Limitations and Challenges

Despite this historic achievement, several important limitations restrict the current capabilities of AI-generated research. The company said that none of its AI-generated studies passed its internal bar for ICLR conference track publication standards. This indicates that while the AI can produce workshop-quality research, reaching the highest tiers of scientific publication remains challenging.

The acceptance rates provide important context for evaluating this achievement. The paper was accepted at a workshop track, which typically has less strict standards than the main conference (a 60–70% acceptance rate versus the 20–30% typical of main conference tracks). While this does not diminish the significance of the achievement, it suggests that producing truly groundbreaking research remains beyond current AI capabilities.

The AI Scientist-v2 also demonstrated some weaknesses that human researchers identified during their review process. The system occasionally made citation errors, attributing research findings to incorrect authors or publications. It also struggled with some aspects of experimental design that human experts would have approached differently.

Perhaps most importantly, the AI-generated research focused on incremental improvements rather than paradigm-shifting discoveries. The system appears more capable of conducting thorough investigations within established research frameworks than of proposing entirely new ways of thinking about scientific problems.

The Road Ahead

The successful peer review of AI-generated research is the beginning of a new era in scientific research. As foundation models continue improving, we can expect The AI Scientist and similar systems to produce increasingly sophisticated research that approaches and potentially exceeds human capabilities in many domains.

The research team anticipates that future versions will be capable of producing papers worthy of acceptance at top-tier conferences and journals. The logical progression suggests that AI systems may eventually contribute to breakthrough discoveries in fields ranging from medicine to physics to chemistry.

This development also raises important questions about research ethics and publication standards. The scientific community must develop new norms for handling AI-generated research, including when and how to disclose AI involvement and how to evaluate such work alongside human-generated research.

The transparency demonstrated by the research team in this experiment provides a valuable model for future AI research evaluation. By working openly with conference organizers and subjecting their AI-generated work to the same standards as human research, they have established important precedents for the responsible development of automated research capabilities.

The Bottom Line

The acceptance of an AI-written paper at a leading machine learning workshop is a significant advancement in AI capabilities. While the work is not yet at the level of a top-tier conference, it demonstrates a clear trajectory toward AI systems becoming serious contributors to scientific discovery. The challenge now lies not only in advancing technology but also in shaping the ethical and academic frameworks that will govern this new frontier of research.


