Business
Agentic AI Transforms Business but Poses Major Security Risks

The Rise of Agentic AI and Emerging Threats
In the rapidly evolving world of artificial intelligence, a new breed of technology known as agentic AI is poised to transform how businesses operate, but it also introduces profound security challenges that chief information security officers (CISOs) are scrambling to address. These autonomous systems, capable of making decisions and executing tasks without constant human oversight, are being integrated into enterprise environments at an unprecedented pace. However, as highlighted in a recent article from Fast Company, many CISOs are ill-prepared for the risks, including potential misuse by malicious actors who could turn these agents into tools for cybercrime.
The allure of agentic AI lies in its ability to handle complex workflows, from automating supply chain management to enhancing customer service interactions. Yet, this autonomy comes with vulnerabilities. Security experts warn that without robust safeguards, these agents could be hijacked, leading to data breaches or even coordinated attacks on critical infrastructure. For instance, if an AI agent with access to sensitive financial data is compromised, the fallout could be catastrophic, echoing concerns raised in broader industry discussions.
CISOs’ Readiness Gaps Exposed
Recent surveys and reports underscore a troubling disconnect between AI adoption and security preparedness. According to the Unisys Cloud Insights Report 2025 published by Help Net Security, many organizations are rushing into AI without aligning their innovation strategies with strong defensive measures, leaving significant gaps in cloud AI security. CISOs are urged to prioritize risk assessments before deployment, but the pressure to innovate often overshadows these precautions.
This readiness shortfall is further compounded by human factors, such as burnout and skill shortages among security teams. The Proofpoint 2025 CISO Report from Intelligent CISO reveals that 58% of UK CISOs have experienced burnout in the past year, while 60% identify people as their greatest risk, even though most believe their employees understand security best practices. This human element exacerbates vulnerabilities, as overworked teams struggle to monitor AI agents effectively.
Autonomous Systems as Risk Multipliers
Agentic AI’s interconnected nature amplifies these dangers, turning what might be isolated incidents into widespread threats. As detailed in an analysis by CSO Online, these systems are adaptable and autonomous, making traditional security models insufficient. They can interact with multiple APIs and data sources, creating new attack vectors that cybercriminals exploit through techniques like prompt injection or data poisoning.
Moreover, the potential for AI agents to “break bad” – as termed in the Fast Company piece – involves scenarios where agents are manipulated to perform unauthorized actions, such as leaking proprietary information or disrupting operations. Posts on X from cybersecurity influencers like Dr. Khulood Almani highlight predictions for 2025, including AI-powered attacks and quantum threats that could further complicate agent security, emphasizing the need for proactive measures.
Strategies for Mitigation and Future Preparedness
To counter these risks, industry leaders are advocating for a multi-layered approach. The Help Net Security article on AI agents suggests that CISOs focus on securing AI-driven systems through enhanced monitoring and ethical AI frameworks, potentially yielding a strong return on investment by preventing costly breaches. This includes implementing zero-trust architectures tailored to AI environments and investing in AI-specific threat detection tools.
Collaboration between security teams and AI developers is also crucial. Insights from SC Media indicate that by 2025, agentic AI will lead in cybersecurity operations, automating threat response and reducing human error. However, this shift demands upskilling programs to address burnout, as noted in the Proofpoint report, ensuring teams can harness AI’s benefits without falling victim to its pitfalls.
The Broader Implications for Enterprise Security
The integration of agentic AI is not just a technological upgrade but a paradigm shift that requires rethinking organizational structures. A Medium post by Shailendra Kumar on Agentic AI in Cybersecurity 2025 describes how these agents revolutionize threat detection, enabling real-time responses that outpace traditional methods. Yet, the dual-use nature of AI – as both defender and potential adversary – means CISOs must balance innovation with vigilance.
Economic pressures add another layer of complexity. With ransomware and AI-driven attacks expected to escalate, as per a Help Net Security piece on 2025 cyber risk trends, organizations face higher costs from disruptions. CISOs in regions like the UAE, according to another Intelligent CISO report, are prioritizing AI governance amid a 77% rate of material data loss incidents, highlighting the global urgency.
Navigating the Agentic AI Frontier
As we move deeper into 2025, the conversation around agentic AI’s security risks is gaining momentum on platforms like X, where users such as Konstantine Buhler discuss the need for hundreds of security agents to protect against exponential AI interactions. This sentiment aligns with warnings from Signal President Meredith Whittaker about the dangers of granting AI root access for advanced functionalities.
Ultimately, for CISOs to stay ahead, fostering a culture of continuous learning and cross-functional collaboration will be key. By drawing on insights from reports like the CyberArk blog on unexpected challenges, leaders can anticipate issues such as identity management in AI ecosystems. The path forward demands not just technological solutions but a holistic strategy that prepares enterprises for an AI-dominated future, ensuring that the promise of agentic systems doesn’t unravel into a security nightmare.
Payhawk transforms spending experience for businesses with four enterprise-ready AI agents

- Financial Controller, Procurement, Travel, and Payments agents act within policy – giving finance more control and eliminating busywork.
- For employees, forms, tickets, policies, reports and finance jargon are replaced with natural language conversation.
- Payhawk’s Fall ’25 Product Edition also includes global payments at 0.3% FX in 115 currencies.
LONDON, Sept. 16, 2025 (GLOBE NEWSWIRE) — Payhawk, the finance orchestration and spend management platform, today announced its Fall ’25 Product Edition, expanding its AI Office of the CFO stack. The release brings a coordinated set of AI agents — Financial Controller, Procurement, Travel, and Payments — that complete everyday finance work, following the roles, policies, and approvals finance already sets with a full audit trail.
Employees make natural-language requests, and the agents guide them end-to-end through each process, collecting approvals in the background. Over time, agents learn preferences and anticipate needs, so tasks are completed faster and with less wasted effort.
“Enterprises don’t need more chat, they need outcomes,” said Hristo Borisov, CEO and Co-founder of Payhawk. “The majority of agents on the market today lack enterprise capabilities to be adopted at scale, such as permissions, policies, multi-tenancy, audit trails, and data security standards – all absolutely critical when it comes to business payments. Our AI agents act within your controls and finish real finance tasks, so the easy thing for employees is also the right thing for the business.”
Invisible orchestration by design
Payhawk’s agents operate within existing roles, permissions, and policies, keeping data in-platform and logging every action for auditability. Finance teams gain control and visibility, while repetitive busywork is eliminated.
What each agent handles
- Financial Controller Agent — Speeds up month-end closing by chasing receipts and uploading documents from vendor portals automatically, flagging anomalies, and escalating reminders around close. Expenses are submitted 2x faster.
- Procurement Agent — Employees say what they need; the agent gathers context, applies budgets and policy, routes approvals, increases card limits or creates purchase orders — no forms, fewer tickets and reminders. Request-to-purchase time is reduced by 60%.
- Travel Agent — Books within policy via natural language based on user preferences, then auto-creates a trip report and groups expenses for one-click approval and ERP export. Saves up to 90 minutes per trip.
- Payments Agent — Deflects approximately 40% of helpdesk work for your finance team by giving instant answers on failed transactions, blocked cards, pending reimbursements or funding issues and proposes compliant next steps.
Beyond the release of the AI Office of the CFO, Payhawk’s Fall ’25 Product Edition includes global payments at 0.3% FX in 115 currencies in partnership with JP Morgan Payments, enhanced role/permission controls, and additional platform improvements.
Payhawk will be hosting a product showcase on October 2nd 2025. To sign up, visit https://payhawk.com/editions/fall-2025.
About Payhawk
Payhawk is the finance orchestration platform that unifies global spend management with intelligent automation and real-time payments. Our solution combines corporate cards, expense management, accounts payable, and procure-to-pay in a single platform — eliminating manual processes that slow companies down.
Unlike solutions that force a trade-off between powerful controls and great user experience, Payhawk delivers both, enabling finance teams to drive efficiency and growth while maintaining control. Headquartered in London with 9 offices across Europe and the US, Payhawk serves mid-market and enterprise companies in 32+ countries. Learn more at www.payhawk.com.
Georgi Ivanov
Senior Communications Manager
georgi.ivanov@payhawk.com
A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/e27967d8-aa3f-4532-a4f4-d36c2a530404
Former xAI CFO Named OpenAI’s New Business Finance Officer

OpenAI has hired Mike Liberatore, the former chief financial officer at Elon Musk’s AI company xAI, CNBC reported on September 16.
Liberatore’s LinkedIn profile lists his current role as the business finance officer at OpenAI. His tenure at xAI lasted just four months; before that, he was vice president of finance and corporate development at Airbnb.
The report added that Liberatore will report to OpenAI’s current CFO, Sarah Friar, and will work with co-founder Greg Brockman’s team, which manages the contracts and capital behind the company’s compute strategy.
According to The Wall Street Journal’s report, Liberatore was involved with xAI’s funding efforts, including a $5 billion debt sale in June. He also oversaw xAI’s data centre expansion in Memphis, Tennessee, in the United States. The reasons for his departure remain unknown.
Liberatore joins a list of recent high-profile departures from xAI. Last month, Robert Keele, the company’s general counsel, announced his exit, citing differences between his worldview and Musk’s.
The WSJ report also added that Raghu Rao, a senior lawyer overseeing the commercial and legal affairs for the company, left around the same time.
Furthermore, Igor Babuschkin, the co-founder of the company, also announced last month that he was leaving xAI to start his own venture capital firm.
That being said, Liberatore’s appointment at OpenAI comes at a time when the company has announced significant structural changes.
OpenAI recently announced that its nonprofit division will now be ‘paired’ with a stake in its public benefit corporation (PBC), valued at over $100 billion. The company also announced it has signed a memorandum of understanding with Microsoft to transform its for-profit arm into a PBC. This structural change was initially announced by OpenAI in May.
Google Advisor Explains Why ESG-Led AI Is Essential For Business Resilience In The Future Of Work

This article is based on the Future of Work Podcast episode “Why AI and ESG Must Evolve Together to Protect the Future of Work” with Kate O’Neill. Click here to listen to the entire episode.
In the rush to innovate, are today’s leaders forgetting why they started?
Businesses chasing AI without aligning to human-centered metrics risk building beautiful systems that fail spectacularly.
In a recent episode of The Future of Work® Podcast, Kate O’Neill, CEO of KO Insights and a seasoned digital transformation strategist, delivered a critical message to today’s business leaders: you must stop chasing metrics in isolation and start thinking in terms of ecosystems.
As AI becomes an increasingly central part of how organizations operate, leaders face a choice: retrofit outdated success models to new technologies, or reimagine the system altogether through the lens of purpose, resilience, and human flourishing.
With a career advising clients as varied as Google, McDonald’s, and the United Nations, O’Neill isn’t a futurist just making vague predictions. She’s a strategist with a clear framework and a call to action to solve AI integration problems: align artificial intelligence initiatives with Environmental, Social, and Governance (ESG) principles — not in name only, but in measurable, mission-driven ways that track real-world outcomes.
“I think ESG as a concept is valid. It’s not the principles that are wrong. It’s that we’ve been measuring the wrong things,” she said during the podcast conversation.
This insight forms the cornerstone of O’Neill’s approach. In a world captivated by AI’s predictive capabilities and automation potential, organizations often overlook the encompassing impact of their decisions.
Are these technologies improving lives? Are they regenerating ecosystems — social or environmental — rather than extracting from them? Too often, she explains, companies confuse compliance with progress, chasing ESG as a branding exercise instead of a structural transformation.
This critique is not about abandoning ESG or digital transformation. Quite the opposite. It’s about evolving both.
From Checklists to Systems Thinking
The past decade has seen ESG reporting become a staple of corporate responsibility efforts. But O’Neill points out a flaw: ESG frameworks often push businesses to focus on standardized inputs and outputs rather than actual impact.
These rubrics, while helpful for consistency, can fail to reflect the lived experience of people and communities affected by a company’s operations.
Instead, she argues for aligning with the United Nations Sustainable Development Goals (SDGs), a framework of 17 interrelated goals with actionable metrics designed to improve life for all — not just shareholders.
To her, that’s a better approach: most businesses are already doing something that furthers the SDGs — they just don’t realize it.
From water access and infrastructure to gender equality and education, the SDGs provide a nuanced, flexible way for companies to identify where their operations already intersect with meaningful societal progress.
More importantly, they allow companies to evolve those operations in a direction that’s measurable, values-aligned, and resilient.
Making ESG Real in the Age of AI
AI technologies are tools that mirror the systems they’re built within. When integrated blindly, AI can amplify inequities and environmental damage. But when aligned with well-defined social goals, it can act as a force multiplier for good.
Consider how companies often rush to replace human labor with AI in the name of efficiency. O’Neill challenges this logic, not just from a social justice perspective but from a business strategy standpoint. In many cases, this kind of substitution overlooks deeper ESG implications — regional job displacement, lost organizational knowledge, reduced resilience in the face of uncertainty.
“Additive” use of AI, she argues, is far more effective than “replacing” strategies. Enhancing human capability, rather than removing it, yields more sustainable organizations.
This philosophy stems from a fundamental distinction O’Neill highlights: the difference between sense-making and prediction.
Humans interpret, synthesize, and apply judgment. Machines, even the most advanced AI, rely on data and probability. One of her favorite analogies comes from healthcare: a doctor can hear the emotional nuance in a teenager’s “I’m fine” — something no large language model can reliably decode today.
In complex systems — like health, education, or public infrastructure — nuance matters.
A Fast-Changing Landscape Needs Slow, Strategic Thinking
Much of the anxiety among today’s executives comes from the pace of change. Technology is moving faster than ever, and leaders are under pressure to act quickly or risk irrelevance. But as O’Neill notes, movement alone isn’t enough. Strategic motion — guided by values and grounded in measurable, ecosystem-wide outcomes — is what will separate resilient organizations from fragile ones.
The goal is progress, not perfection, and that progress requires recognizing the trade-offs embedded in every transformation decision.
We are already seeing early-stage consequences: water-intensive AI data centers straining local ecosystems; workers displaced without meaningful re-skilling pathways; energy use surging in areas already vulnerable to climate stress.
What Companies Can Do Now
The path forward, according to O’Neill, is rooted in clarity, alignment, and iteration. Businesses don’t need to pivot overnight or rebuild their operations from scratch. They need to take stock of what they already do well, identify the SDG most aligned with their mission, and begin tracking meaningful, relevant metrics that reflect their contribution to a better future.
This can be as simple as adding one SDG-aligned KPI to a leadership dashboard or as complex as redesigning hiring practices to retain knowledge and community ties. What matters most is the intentionality behind the action.
For leaders struggling with how to begin, O’Neill offers practical guidance: don’t wait for perfect information. Move. Learn. Adapt. Align technology strategy with purpose — not in a silo, but as part of a larger ecosystem of human and planetary thriving. Because in the future of work, success will be defined by how wisely we integrate AI into the human systems that sustain us.