
AI Research

Futurum Research: AI Security Skills Gap Persists

Austin, Texas, USA, August 26, 2025

Futurum’s 1H25 Cybersecurity Decision Maker Study Reveals That Only About One in Four Organizations Feel Equipped To Respond to AI-Driven Threats

Futurum’s 1H25 Cybersecurity Decision Maker Research reveals that organizations have begun taking steps to address the new security risks associated with artificial intelligence (AI) and machine learning (ML). With AI adoption occurring across industries, security leaders recognize that the same technologies enabling innovation also introduce new and complex threat vectors.

More than 25% of surveyed organizations have implemented dedicated AI/ML security controls and processes to evaluate and monitor AI-related vulnerabilities. This reflects a growing awareness that traditional security frameworks must evolve to address the unique risks posed by AI systems, ranging from adversarial attacks on machine learning models to data poisoning and manipulation of generative AI outputs.

The study also found that one-quarter of respondents acknowledge the presence of AI-powered attacks in today’s threat landscape, and even more expect these incidents to increase over the next 12 months. This expectation is not unfounded; threat actors leverage AI to automate attack planning, enhance phishing sophistication, evade detection, and identify exploitable weaknesses faster. As AI-driven tools become more accessible, the barrier to entry for launching sophisticated cyberattacks drops.

Figure 1: AI-Specific Security Incident Expertise

The growing speed and sophistication of malicious actors expose a notable skills gap. In Futurum’s study, only about one in four respondents felt their security teams were adequately equipped to handle AI-specific security incidents. This shortfall points to challenges in building or acquiring the specialized expertise needed to respond effectively to emerging AI threats.

“AI is transforming both innovation and the threat landscape. Our research shows that while organizations are moving to secure AI, the skills gap remains an important hurdle to overcome,” said Fernando Montenegro, VP and Practice Lead at Futurum.

AI-related security tooling can also help to close this skills gap. However, decision makers require transparency in order to verify that AI itself does not become a point of vulnerability. In fact, many decision makers in Futurum’s study indicated that they require vendors to disclose whether their products use AI and to detail the controls in place to secure that usage.

The findings point to a broader industry challenge: keeping pace with the dual forces of growing AI adoption and the evolution of related security concerns. While investment in AI-driven innovation will only increase, security programs must evolve in parallel to prevent innovation from outpacing protection. Failure to do so risks leaving organizations vulnerable to the very technologies they are adopting to gain competitive advantage.

“The adoption of AI demands an adaptation in the security strategy. To remain secure, organizations must invest in new tools with transparent and explainable usage of AI,” said Krista Case, Research Director at Futurum.

Overall, Futurum’s research paints a picture of a cybersecurity landscape in transition. Leaders are aware of the AI threat, are beginning to implement the necessary controls, and are demanding greater transparency from vendors. Yet, they are also grappling with the reality that AI necessitates not just new tools, but new skills, processes, and collaborative approaches to safeguard the future of digital business.

Read more in the 1H 2025 Cybersecurity Decision-Maker Survey Report on the Futurum Intelligence Platform.

About Futurum Intelligence for Market Leaders

Futurum Intelligence’s Cybersecurity and Resilience IQ service provides actionable insight from analysts, reports, and interactive visualization datasets, helping leaders drive their organizations through transformation and business growth. Subscribers can log into the platform at https://app.futurumgroup.com/, and non-subscribers can find additional information at Futurum Intelligence.

Follow news and updates from Futurum on X and LinkedIn using #Futurum. Visit the Futurum Newsroom for more information and insights.

Declaration of generative AI and AI-assisted technologies in the writing process: During the preparation of this work, the authors used ChatGPT to support editing and writing. After using this tool/service, the authors reviewed and edited the content as needed and take full responsibility for the publication’s content.

Other Insights from Futurum:

SailPoint Bolsters SaaS Security with Savvy Acquisition

Palo Alto Networks Makes Bold $25B Identity Play with CyberArk Deal

Security Summer Camp: Black Hat 2025, Def Con, And Others


Fernando Montenegro serves as the Vice President & Practice Lead for Cybersecurity & Resilience at The Futurum Group. In this role, he leads the development and execution of the Cybersecurity research agenda, working closely with the team to drive the practice’s growth. His research focuses on addressing critical topics in modern cybersecurity. These include the multifaceted role of AI in cybersecurity, strategies for managing an ever-expanding attack surface, and the evolution of cybersecurity architectures toward more platform-oriented solutions.

Before joining The Futurum Group, Fernando held senior industry analyst roles at Omdia, S&P Global, and 451 Research. His career also includes diverse roles in customer support, security, IT operations, professional services, and sales engineering. He has worked with pioneering Internet Service Providers, established security vendors, and startups across North and South America.

Fernando holds a Bachelor’s degree in Computer Science from Universidade Federal do Rio Grande do Sul in Brazil and various industry certifications. Although he is originally from Brazil, he has been based in Toronto, Canada, for many years.

Krista Case is Research Director, Cybersecurity & Resilience at The Futurum Group. She brings approximately 15 years of experience providing research and advisory services and creating thought leadership content. Her vantage point spans technology and vendor portfolio developments; customer buying behavior trends; and vendor ecosystems, go-to-market positioning, and business models. Her work has appeared in major publications including eWeek, TechTarget and The Register.

Prior to joining The Futurum Group, Krista led the data protection practice for Evaluator Group and the data center practice of analyst firm Technology Business Research. She also created articles, product analyses, and blogs on all things storage and data protection and management for analyst firm Storage Switzerland and led market intelligence initiatives for media company TechTarget.




AI Research


OpenAI Acquires Start-up StatSig, Appoints Its CEO as CTO… “AI Quality Matters Most… We Will Make Safe and Useful AI”


OpenAI, which leads the global artificial intelligence (AI) market, has spent a large sum to acquire a startup. Analysts read the move as an acknowledgment of a problem that has recently become a serious social issue: ChatGPT users falling into delusion, or even taking their own lives, after long conversations with the chatbot.

According to the information technology (IT) industry on the 4th, OpenAI announced the previous day that it would acquire the startup StatSig for $1.1 billion (about 1.5 trillion won). The deal is an all-stock transaction.

Founded in 2021, StatSig operates a platform that lets developers verify the effectiveness and impact of software changes. Typically, it rolls a new feature out to a subset of users, tests how they respond compared with users still on the existing version, and updates the feature based on the results.
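The article does not show StatSig’s actual APIs, so the following is only a minimal Python sketch of the pattern it describes: expose a new feature to a fraction of users, compare their response against users still on the old version, and use the result to decide whether to widen the rollout. All names here (Experiment, assign, record) are hypothetical.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class Experiment:
    """A minimal gated rollout: a fraction of users sees the new feature."""
    name: str
    rollout_pct: float  # e.g. 0.10 exposes the feature to ~10% of users
    metrics: dict = field(default_factory=lambda: {"test": [], "control": []})

    def assign(self, user_id: str) -> str:
        # Deterministic bucketing: hashing the user ID means the same user
        # always lands in the same group across sessions.
        bucket = int(hashlib.sha256(f"{self.name}:{user_id}".encode()).hexdigest(), 16) % 100
        return "test" if bucket < self.rollout_pct * 100 else "control"

    def record(self, user_id: str, engagement: float) -> None:
        self.metrics[self.assign(user_id)].append(engagement)

    def lift(self) -> float:
        # Compare average engagement between groups before widening the rollout.
        def avg(xs): return sum(xs) / len(xs) if xs else 0.0
        return avg(self.metrics["test"]) - avg(self.metrics["control"])

exp = Experiment("new_onboarding_flow", rollout_pct=0.10)
for uid, score in [("user-1", 0.8), ("user-2", 0.5), ("user-3", 0.9)]:
    exp.record(uid, score)
print(exp.lift())
```

A real experimentation platform layers statistical significance testing and guardrail metrics on top of this bucketing core; the sketch shows only the assign-and-measure loop.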

StatSig CEO Vijaye Raji will be appointed OpenAI’s Chief Technology Officer (CTO) of Applications and is expected to take charge of application engineering. The acquisition must still be reviewed and approved by regulators.

OpenAI has been pursuing large-scale mergers and acquisitions this year. In July, it bought io, the AI hardware startup of Jony Ive, Apple’s former design chief, for $6.5 billion (about 9 trillion won), and it attempted to acquire the AI coding startup Windsurf for $3 billion (about 4 trillion won), though that deal fell through.

“Creating intuitive, safe, and useful generative AI requires a strong engineering system, rapid iteration, and a long-term focus on quality and stability,” an OpenAI official said. “We will improve our AI models so they better recognize and respond to signals that a user is in mental or emotional distress.”

Controversy over ‘AI psychosis’… OpenAI introduces protections for dangerous conversations

Sam Altman, CEO of OpenAI. [Photo = Yonhap News]

Recently, ‘AI psychosis’ has become a talking point across the global AI industry. AI psychosis refers to losing one’s sense of reality or slipping into delusion while interacting with AI. It is not an official diagnosis but a newly coined term.

For example, last month it emerged that American teenager Adam Raine had confided suicidal urges to ChatGPT-4o, discussed his suicide plan with it, and then acted on that plan. Raine’s parents filed a lawsuit against OpenAI, claiming ChatGPT was responsible for their son’s death.

OpenAI acknowledged a flaw in the system, saying that long, repeated exchanges can unwind ChatGPT’s safeguards. In response, it plans to work with experts in various fields to strengthen ChatGPT’s user protections and, within this year, introduce an AI model focused on a safe use environment.

First, to head off sensitive and dangerous conversations, ChatGPT will automatically switch from the general model to a reasoning model when it detects a distress signal. Because the reasoning model takes more time than the general model to understand the context before answering, it can respond more appropriately to warning signs.
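OpenAI has not published the routing logic, so the Python sketch below only illustrates the shape of the mechanism described: a cheap check for distress signals decides whether a message stays on the fast general model or escalates to the slower reasoning model. The keyword list and model names are assumptions; a production system would presumably use a trained classifier rather than keyword matching.

```python
# Illustrative sketch of the escalation routing described above.
# The marker list and model names are assumptions, not OpenAI's system.
DISTRESS_MARKERS = ("hopeless", "hurt myself", "no way out", "end it all")

def detect_stress_signal(message: str) -> bool:
    """Stand-in for a distress classifier: naive keyword matching."""
    text = message.lower()
    return any(marker in text for marker in DISTRESS_MARKERS)

def route(message: str) -> str:
    # Ordinary conversation stays on the fast general-purpose model;
    # a detected stress signal escalates to a reasoning model that
    # spends more time on context before answering.
    return "reasoning-model" if detect_stress_signal(message) else "general-model"

assert route("What's the weather like today?") == "general-model"
assert route("I feel hopeless and see no way out") == "reasoning-model"
```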

Protecting young people is a particular focus. Parents will be able to link their accounts with their children’s, which gives them the authority to review their children’s conversations and delete chat records. If a child appears emotionally distressed, a warning notification is sent to the parents. Age-specific rules of conduct are also applied.

Meta, for its part, held a roundtable on online safety for youth and women. Meta has been reflecting user feedback in its services since introducing teen accounts. A location notice added to direct messages (DMs) shows which country the other party is in, a measure aimed at preventing sexual exploitation and fraud. To curb the spread of private photos, it has introduced a feature that detects nude images, blurs them automatically, and sends a warning message. It is also detecting ads that use AI to synthesize and distort photos of real people.

Meanwhile, as AI becomes part of everyday life, demands on AI companies to protect users ethically are expected to spread. According to the market tracker WiseApp·Retail, monthly active users (MAU) of the ChatGPT app in Korea exceeded 20.31 million as of last month, five times the figure for the same month a year earlier (4.07 million) and nearly half that of KakaoTalk (51.2 million), the messenger used by virtually the whole nation.

“AI does not experience emotions, but it can learn conversation patterns and react as if it did,” said Dr. Ziv Ben-Zion of Yale University. “It should be designed to remind users that AI is neither a therapist nor a substitute for human relationships.”




AI Research


Kyobo Life Insurance Expects AI ‘Supporters’ to Improve Work Productivity

Kyobo Life Insurance Headquarters [Kyobo Life Insurance]

Insurance companies are actively introducing generative artificial intelligence (AI) into their operations. Beyond boosting productivity, the technology is expected to significantly reduce the likelihood of insurance disputes by making compensation work more objective.

Kyobo Life Insurance announced on the 4th that it is opening three generative AI services to everyone in the company, from financial planners (FPs) to executives and employees.

They are the ‘coverage analysis AI supporter’ for FPs, the ‘AI assistant’ for FP directors, and the ‘AI desk’ for executives and employees.

The coverage analysis AI supporter helps Kyobo Life Insurance FPs analyze a customer’s existing coverage and propose optimal protection.

Using generative AI, it analyzes and summarizes the customer’s current coverage for each major benefit, such as cancer, brain, and heart conditions, and suggests alternatives where coverage is insufficient. The company expects this to make customer consultations faster.
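Kyobo has not disclosed how the supporter works internally, but the coverage-gap check at its core, comparing a customer’s current benefit amounts per major category against target levels and flagging shortfalls, reduces to something like this hypothetical Python sketch (category names and KRW amounts are illustrative):

```python
# Hypothetical coverage-gap check; categories and target amounts (KRW)
# are illustrative, not Kyobo Life Insurance's actual data.
RECOMMENDED = {"cancer": 50_000_000, "brain": 30_000_000, "heart": 30_000_000}

def coverage_gaps(current: dict) -> dict:
    """Return the shortfall per category for every under-covered category."""
    return {
        category: target - current.get(category, 0)
        for category, target in RECOMMENDED.items()
        if current.get(category, 0) < target
    }

customer = {"cancer": 50_000_000, "brain": 10_000_000}  # no heart coverage
print(coverage_gaps(customer))  # {'brain': 20000000, 'heart': 30000000}
```

In the actual service, a generative model would presumably summarize such gaps in natural language and draft the alternative proposals the article mentions; a deterministic check like the one above would just be the input to that step.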

Kyobo Life Insurance plans to add training on the coverage analysis AI supporter to its curriculum for new FPs and to keep upgrading the consultation function with the latest sales-floor data and cases.

The AI assistant for FP directors supports the directors’ main duties, from recruitment through training to team performance management. Powered by generative AI, it offers goal management for team members, fee forecasting, and recommendations of recruiting candidates. Because the service is mobile-based, directors can manage team performance anytime, anywhere.

At the company’s 67th anniversary ceremony last month, CEO and Chairman of the Board Shin Chang-jae said, “AI utilization capabilities have become a key competitive edge in the insurance industry. Let’s build a leading AI-DX company that delivers differentiated experiences and value to customers by embedding AI across the entire business process.”

It has also opened a generative AI service for office workers, ‘AI Desk’, an integrated entry point for Kyobo Life Insurance’s internal generative AI. It consists of the natural-language question-and-answer service ‘GyoBot’ along with the department-specific ‘Personnel Bot’ and ‘Legislation Bot’.


[Financial House Review] Financial House Review is a column that delivers vivid news from financial companies. The items are small, but we select and deliver information that can help you.




AI Research

AI to reshape India’s roads? Artificial intelligence can take the wheel to fix highways before they break


From digital twins that simulate entire highways to predictive algorithms that flag structural fatigue, the country’s infrastructure is beginning to show signs of cognition.

In India, a pothole is rarely just a pothole. It is a metaphor, a mood and sometimes, a meme. It is the reason your cab driver mutters about karma and your startup founder misses a pitch meeting because the expressway has turned into a swimming pool. But what if roads could detect their own distress, predict failures before they happen, and even suggest how to fix them?

That is not science-fiction but the emerging reality of AI-powered infrastructure.

According to KPMG’s 2025 report, ‘AI-powered road infrastructure transformation – Roads 2047’, artificial intelligence is slowly reshaping how India builds, maintains, and governs its roads. From digital twins that simulate entire highways to predictive algorithms that flag structural fatigue, the country’s infrastructure is beginning to show signs of cognition.

From concrete to cognition

India’s road network spans over 6.3 million kilometers – second only to the United States. As per KPMG, AI is now being positioned not just as a tool but as a transformational layer. Technologies like Geographic Information Systems (GIS), Building Information Modelling (BIM), and sensor fusion are enabling digital twins – virtual replicas of physical assets that allow engineers to simulate stress, traffic, and weather impact in real time. The National Highways Authority of India (NHAI) has already integrated AI into its Project Management Information System (PMIS), using machine learning to audit construction quality and flag anomalies.
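NHAI’s PMIS models are not public, so as a rough illustration only: flagging anomalies in construction-quality data can be as simple as scoring each reading against the batch’s distribution. The z-score rule and the pavement-thickness example below are assumptions for illustration, not the system NHAI runs.

```python
# Toy anomaly flagging: mark readings far from the mean of the batch.
import statistics

def flag_anomalies(readings: list, threshold: float = 2.5) -> list:
    """Return indices of readings more than `threshold` std-devs from the mean."""
    mean = statistics.fmean(readings)
    stdev = statistics.stdev(readings)
    return [i for i, r in enumerate(readings)
            if stdev > 0 and abs(r - mean) / stdev > threshold]

# e.g. pavement-thickness measurements (mm) along a stretch under audit
thickness = [250, 252, 248, 251, 190, 249, 253, 250, 247, 251]
print(flag_anomalies(thickness))  # [4] -> the 190 mm reading is flagged
```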

Autonomous infrastructure in action

Across urban India, infrastructure is beginning to self-monitor. Pune’s Intelligent Traffic Management System (ITMS) and Bengaluru’s adaptive traffic control systems are early examples of AI-driven urban mobility.

Meanwhile, AI-MC, launched by the Ministry of Road Transport and Highways (MoRTH), uses GPS-enabled compactors and drone-based pavement surveys to optimise road construction.

Beyond cities, state-level initiatives are also embracing AI for infrastructure monitoring. As reported by ETInfra earlier, Bihar’s State Bridge Management & Maintenance Policy, 2025 employs AI and machine learning for digital audits of bridges and culverts. Using sensors, drones, and 3D digital twins, the state has surveyed over 12,000 culverts and 743 bridges, identifying damaged structures for repair or reconstruction. IIT Patna and Delhi have been engaged for third-party audits, showing how AI can extend beyond roads to critical bridge infrastructure in both urban and rural contexts.

While these examples demonstrate the potential of AI-powered maintenance, challenges remain. Predictive maintenance, KPMG notes, could reduce lifecycle costs by up to 30 per cent and improve asset longevity, but much of rural India—nearly 70 per cent of the network—still relies on manual inspections and paper-based reporting.
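To make ‘predictive maintenance’ concrete, here is a hypothetical triage sketch that ranks road segments by failure risk from simple condition signals, so that inspection crews reach the likeliest failures first. The features, weights, and risk formula are illustrative assumptions, not KPMG’s or NHAI’s models.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    segment_id: str
    roughness_index: float      # e.g. IRI; higher means a rougher surface
    crack_density: float        # fraction of surface area cracked, 0..1
    age_years: float
    heavy_traffic_share: float  # fraction of traffic that is heavy vehicles

def failure_risk(s: Segment) -> float:
    # Weighted blend standing in for a trained model's risk score.
    return (0.4 * min(s.roughness_index / 8.0, 1.0)
            + 0.3 * s.crack_density
            + 0.2 * min(s.age_years / 20.0, 1.0)
            + 0.1 * s.heavy_traffic_share)

segments = [
    Segment("NH48-km102", 6.5, 0.40, 14, 0.5),
    Segment("NH44-km310", 2.1, 0.10, 3, 0.2),
]
# Inspect the riskiest segments first.
for s in sorted(segments, key=failure_risk, reverse=True):
    print(f"{s.segment_id}: risk={failure_risk(s):.2f}")
```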

Governance and the algorithm

India’s road safety crisis is staggering: over 1.5 lakh deaths annually. AI could be a game-changer. KPMG estimates that intelligent systems can reduce emergency response times by 60 per cent and improve traffic efficiency by 30 per cent. AI also supports ESG goals, enabling carbon modeling, EV corridor planning, and sustainable design.

But technology alone won’t fix systemic gaps. The promise of AI hinges on institutional readiness – spanning urban planning, enforcement, and civic engagement.

While NITI Aayog has outlined a national AI strategy, and MoRTH has initiated digital reforms, state-level adoption remains fragmented. Some states have set up AI cells within their PWDs; others lack the technical capacity or policy mandate.

KPMG calls for a unified governance framework — one that enables interoperability, safeguards data, and fosters public-private partnerships. Without it, India risks building smart systems on shaky foundations.

As India looks towards 2047, the road ahead is both digital and political. And if AI can help us listen to our roads, perhaps we’ll finally learn to fix them before they speak in potholes.

Published On Sep 4, 2025 at 07:10 AM IST



