
AI Research

Nearly one in five Britons turn to AI for personal advice, new Ipsos research reveals


Almost one in five (18%) say they have used AI as a source of advice on personal problems. Two in three (67%) say they use polite language when interacting with AI, with over a third (36%) believing that it increases the likelihood of a helpful output.



A new study from Ipsos in the UK reveals a surprising intimacy in our interactions with AI, a strong inclination towards politeness with the technology, and significant apprehension about its impact on society and the workplace. 

AI as a guidance counsellor 

  • Nearly one in five (18%) have used AI as a source of advice on personal problems or issues. This extends to using AI as a companion or someone to talk to (11%), and even as a substitute for a therapist or counsellor (9%).
  • 7% have sought guidance from AI on romance, while 6% have used it to enhance their dating profiles.
  • Despite this growing interaction and even perceived friendship with AI, there is a deep-seated anxiety about its broader societal implications. A majority of Britons (56%) agree that the advance of AI threatens the current structure of society, while just 29% say that AI has a positive effect on society.
  • Scepticism is also high regarding AI’s ability to replicate human connection: 59% disagree that AI is a viable substitute for human interaction, and 63% disagree that it is a good one. The notion of AI possessing emotional capabilities is met with even greater disbelief, as 64% disagree that AI is capable of feeling emotion.


Politeness to AI 

  • Two in three (67%) British adults who interact with chatbots or AI tools say that they ‘always’ or ‘sometimes’ use polite language, such as ‘please’ and ‘thank you’.
  • Over a third (36%) think that being polite to AI improves the likelihood of receiving a helpful output. Furthermore, around three in ten believe politeness positively impacts the accuracy (30%) and level of detail (32%) of the AI’s response. 

AI in the workplace

  • Over a quarter (27%) of those who have considered applying for a job in the last three years have used AI to write or update their CV, and 22% have used it to draft a cover letter. Two in ten (20%) say they have used it to practise interview questions. However, four in ten (40%) say that they have not used AI when considering applying for a job.
  • The use of AI in the workplace, however, is often a clandestine affair. Around three in ten workers (29%) do not discuss their use of AI with colleagues. This reluctance may stem from a fear of judgement, as a quarter (26%) of adults think their coworkers would question their ability to perform their role if they knew about their AI use. This is despite the fact that a majority (57%) view using AI effectively as a skill that is learned and practised.

 

Commenting on the findings, Peter Cooper, Director at Ipsos, said:

This research paints a fascinating picture of a nation grappling with the dual nature of artificial intelligence. On one hand, we see that a growing number are ‘AI-sourcing’ for personal advice and companionship, suggesting a level of trust and reliance that is surprisingly personal. On the other hand, there’s a palpable sense of unease about what AI means for the future of our society and our jobs. The fact that many are polite to AI, perhaps in the hope of better outcomes, while simultaneously hiding their use of it at work, speaks to the complex and sometimes contradictory relationship we are building with this transformative technology.

Technical note: 

  • Ipsos interviewed a representative sample of 2,189 adults aged 16-75 across Great Britain. Polling was conducted online between 18 and 20 July 2025.
  • Data are weighted to match the profile of the population. All polls are subject to a wide range of potential sources of error. 



AI Research

Artificial Intelligence Stocks To Add to Your Watchlist – September 14th – MarketBeat


AI Research

AI-Augmented Cybersecurity: A Human-Centered Approach



The integration of artificial intelligence (AI) is fundamentally transforming the cybersecurity landscape. While AI brings unparalleled speed and scale to threat detection, an effective strategy may lie in cultivating collaboration between specialized human expertise and AI systems rather than in full AI automation. This article explores AI’s evolving role in cybersecurity, the importance of blending human oversight with technological capabilities, and frameworks to consider.

AI & Human Roles

The role of AI has expanded far beyond simple task automation. It now serves as a powerful tool for augmenting human-led analysis and decision making, helping organizations process vast volumes of security logs and data quickly. This capability can significantly enhance early threat detection and accelerate incident response. With AI-augmented cybersecurity, organizations can identify and address potential threats with unprecedented speed and precision.
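To make this concrete, the following is a minimal sketch of unsupervised anomaly scoring over log-derived features, using scikit-learn's IsolationForest. The feature names, sample data, and thresholds are illustrative assumptions, not taken from any vendor's product.

```python
# Minimal sketch: unsupervised anomaly scoring over login-event features.
# All field names, data, and thresholds here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [events_per_minute, distinct_source_ips, failed_login_ratio]
log_features = np.array([
    [12, 1, 0.00],
    [15, 2, 0.05],
    [11, 1, 0.02],
    [14, 1, 0.00],
    [480, 37, 0.92],   # burst of failures from many IPs -- likely anomalous
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(log_features)

# Negative scores indicate likely anomalies; flag them for analyst review
# rather than triggering an automated response.
scores = model.decision_function(log_features)
for row, score in zip(log_features, scores):
    if score < 0:
        print(f"Flag for analyst review: {row} (score={score:.3f})")
```

In a real deployment, features would be derived from centralized logs, and flagged items would feed an analyst queue rather than trigger automatic action, consistent with the human-oversight theme discussed below.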

Despite these advancements, the vision of a fully autonomous security operations center (SOC) currently remains more aspirational than practical. AI-powered systems often lack the nuanced contextual understanding and intuitive judgment essential for handling novel or complex attack scenarios. This is where human oversight becomes indispensable. Skilled analysts play an essential role in interpreting AI findings, making strategic decisions, and bringing automated actions in line with the organization’s particular context and policies.


As the cybersecurity industry shifts toward augmentation, a best-fit model is one that utilizes AI to handle repetitive, high-volume tasks while simultaneously preserving human control over critical decisions and direction. This balanced approach combines the speed and efficiency of automation with the insight and experience of human reasoning, creating a scalable, resilient security posture.

Robust Industry Frameworks for AI Integration

The transition toward AI-augmented, human-centered cybersecurity is well represented by frameworks from leading industry platforms. These models provide a road map for organizations to incrementally integrate AI while maintaining the much-needed role of human oversight.

SentinelOne’s Autonomous SOC Maturity Model provides a framework to help support organizations on their journey to an autonomous SOC. This model emphasizes the strategic use of AI and automation to help strengthen human security teams. It outlines the progression from manual, reactive security practices to advanced, automated, and proactive approaches, where AI can handle repetitive tasks and free up human analysts for strategic work.

SentinelOne has defined its Autonomous SOC Maturity Model as consisting of the following five levels:

  • Level 0 (Manual Operations): Security teams rely entirely on manual processes for threat detection, investigation, and response.
  • Level 1 (Basic Automation): Introduction of rule-based alerts and simple automated responses for known threat patterns.
  • Level 2 (Enhanced Detection): AI-assisted threat detection that flags anomalies while analysts maintain investigation control.
  • Level 3 (Orchestrated Response): Automated workflows handle routine incidents while complex cases require human intervention.
  • Level 4 (Autonomous Operations): Advanced AI manages most security operations with strategic human oversight and exception handling.

This progression demonstrates that achieving sophisticated security automation requires gradual capability building rather than a full-scale overhaul of systems and processes. At each level, humans remain essential for strategic decision making, policy alignment, and handling cases that fall outside automated parameters. Even at Level 4, the highest maturity level, human oversight remains necessary for effective, accurate operations.
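One way to reason about the progression is to encode the levels as a simple data structure. The sketch below is our own illustrative encoding in Python, not a SentinelOne artifact.

```python
# Illustrative encoding of the five maturity levels described above;
# this is not a SentinelOne artifact, just a way to reason about the model.
from enum import IntEnum

class SOCMaturity(IntEnum):
    MANUAL_OPERATIONS = 0      # fully manual detection, investigation, response
    BASIC_AUTOMATION = 1       # rule-based alerts, simple automated responses
    ENHANCED_DETECTION = 2     # AI-assisted detection, analyst-led investigation
    ORCHESTRATED_RESPONSE = 3  # automated routine incidents, humans on complex cases
    AUTONOMOUS_OPERATIONS = 4  # AI runs most operations with human oversight

def requires_human_oversight(level: SOCMaturity) -> bool:
    # Per the model, human oversight remains essential at every level,
    # including Level 4.
    return True

def next_step(level: SOCMaturity) -> SOCMaturity:
    # Capability is built one level at a time, not by skipping ahead.
    return SOCMaturity(min(level + 1, SOCMaturity.AUTONOMOUS_OPERATIONS))

print(next_step(SOCMaturity.BASIC_AUTOMATION))  # SOCMaturity.ENHANCED_DETECTION
```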

Elastic, another leading platform, centers on supporting security analysts with AI-driven insights rather than replacing human judgment. Its approach integrates machine learning algorithms to automatically detect anomalies, correlate events, and uncover subtle threats within large data sets. For example, when unusual network patterns emerge, the system doesn’t automatically initiate response actions; instead, it presents analysts with enriched data, relevant context, and suggested investigation paths.

A key strength of Elastic’s model is its emphasis on analyst empowerment. Rather than automating decisions, the platform provides security professionals with enhanced visibility and context. This approach recognizes that cybersecurity fundamentally remains a strategic challenge requiring human insight, creativity, and contextual understanding. AI serves as a force multiplier, helping analysts process information efficiently so they can focus their time on high-value activities.
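The "enrich, don't auto-respond" pattern described above can be sketched as follows. All helper names, fields, and data sources here are hypothetical placeholders, not Elastic APIs.

```python
# Sketch of the "enrich, don't auto-respond" pattern described above.
# All helper names, fields, and data sources are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class AnalystContext:
    alert_id: str
    summary: str
    related_events: list = field(default_factory=list)
    suggested_steps: list = field(default_factory=list)

def enrich_alert(alert: dict) -> AnalystContext:
    """Build a context package for the analyst instead of acting automatically."""
    ctx = AnalystContext(
        alert_id=alert["id"],
        summary=f"Unusual outbound traffic from {alert['host']}",
    )
    # Hypothetical enrichment -- in practice this would query a SIEM,
    # an asset inventory, and threat-intelligence feeds.
    ctx.related_events = [f"{alert['host']}: 37 DNS queries to newly seen domains"]
    ctx.suggested_steps = [
        "Review the process tree on the host",
        "Check whether the destinations appear in threat-intel feeds",
        "Compare against the host's 30-day traffic baseline",
    ]
    return ctx  # the analyst decides what happens next; no automated block

ctx = enrich_alert({"id": "A-1042", "host": "ws-0147"})
print(ctx.summary, *ctx.suggested_steps, sep="\n")
```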

The Modern SOC

While AI in cybersecurity is sometimes framed as a path toward full automation, security operations can instead be structured around human-AI collaboration that augments, rather than replaces, human capabilities. This view recognizes that security remains a human-versus-human challenge. Harvard Business School professor Karim Lakhani puts it this way: “AI won’t replace humans, but humans with AI will replace humans without AI.” Applied to security operations, the question becomes: who will win in cyberspace? Likely the team that responsibly adapts and evolves its operational processes by understanding and incorporating the advantages of AI; that team will be well positioned to defend against quickly evolving threat tactics, techniques, and procedures. A fully autonomous, human-free SOC is not a current reality, but the SOC that embraces AI as a complement to people rather than a replacement for them is likely to create a competitive advantage in cyber defense.

In practice, this approach can simplify traditional tiered SOC structures, helping analysts handle incidents end-to-end while leveraging AI for speed, context, and insight. This can help organizations improve efficiency, accountability, and resilience against evolving threats.


Best Practices for AI-Augmented Security

Building effective, AI-augmented security operations requires intentional design principles that prioritize human capabilities alongside technological advancements.

Successful implementations often focus AI automation on high-volume, routine activities that take up analyst time and don’t require complex reasoning. Some of these activities include the following:

  • Initial alert triage: AI systems can categorize and prioritize incoming security alerts based on severity, asset importance, and historical patterns.
  • Data enrichment: Automating the gathering of relevant contextual information from multiple sources can support analyst investigations.
  • Standard response actions: Predetermined responses can be triggered for well-understood threats, e.g., isolating compromised endpoints or blocking known malicious IP addresses.
  • Report generation: Investigation findings and incident summaries can be compiled for stakeholder communication.

By handling these routine tasks, AI can give analysts time to focus on activities that require advanced reasoning and skill, such as threat hunting, strategic planning, policy development, and navigating attack scenarios.
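For the first of these tasks, initial alert triage, a minimal rule-based scoring scheme might look like the sketch below. The severity weights, asset tiers, and field names are illustrative assumptions, not a production scoring model.

```python
# Minimal rule-based triage sketch for the "initial alert triage" task above.
# Severity weights, asset tiers, and field names are all illustrative.
SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}
ASSET_WEIGHT = {"workstation": 1, "server": 3, "domain_controller": 5}

def triage_score(alert: dict) -> int:
    """Combine severity, asset importance, and historical noise into one score."""
    score = SEVERITY_WEIGHT[alert["severity"]] * ASSET_WEIGHT[alert["asset_type"]]
    if alert.get("seen_before_as_false_positive"):
        score //= 2  # down-rank patterns that historically proved benign
    return score

alerts = [
    {"id": 1, "severity": "high", "asset_type": "workstation"},
    {"id": 2, "severity": "medium", "asset_type": "domain_controller",
     "seen_before_as_false_positive": True},
    {"id": 3, "severity": "critical", "asset_type": "server"},
]

# Highest-scoring alerts reach analysts first; nothing is auto-closed.
for alert in sorted(alerts, key=triage_score, reverse=True):
    print(alert["id"], triage_score(alert))
```

Even a simple scheme like this reorders the analyst queue without closing anything automatically, keeping humans in control of final decisions.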

In addition, traditional SOC structures often fragment incident handling across multiple tiers, sometimes leading to communication gaps and delayed responses. Human-centered security operations may benefit from giving individual analysts end-to-end case ownership, supported by AI tools that help streamline investigation and response actions.

By allowing more extensive case ownership, security teams can reduce handoff delays and scale incident management. AI-embedded tools can support security teams with enhanced reporting, investigation assistance, and intelligent recommendations throughout the incident lifecycle.

Practical Recommendations

Implementing AI-augmented cybersecurity requires systematic planning and deployment. Security leaders can follow these practical steps to build human-centered security operations. To begin, review your organization’s current SOC maturity across key dimensions, including:

Automation Readiness

  • What percentage of security alerts currently receives a manual review?
  • Which routine tasks take the most analyst time?
  • How standardized are your operational playbooks and incident response procedures?

Data Foundation

  • Do you have a complete, verified asset inventory with network visibility?
  • Are security logs centralized and easily searchable?
  • Can you correlate events across disparate data sources and security tools?

Team Capabilities

  • What is your analyst retention rate and average tenure?
  • How quickly can new team members get up to speed?
  • What skills gaps exist in your current team?

Tool Selection Considerations

Effective AI-augmented security requires tools that can support human-AI collaboration rather than promising unrealistic automation. Review potential solutions based on:

Integration Capabilities

  • How well do tools integrate with your existing security infrastructure?
  • Can the platform adapt to your organization’s specific policies and procedures?
  • Does the vendor provide application programming interface (API) integrations?

Transparency & Explainable AI

  • Can analysts understand how AI systems reach their conclusions?
  • Are there clear mechanisms for providing feedback to improve AI accuracy?
  • Can you audit and validate automated decisions?

Scalability & Flexibility

  • Can the platform grow with your organization’s needs?
  • How easily can you modify automated workflows as threats evolve?
  • What support is available for ongoing tuning and maintenance?

Measuring Outcomes

Tool selection is only part of the equation. Measuring outcomes is just as important. To help align your AI-augmented security strategy with your organization’s goals, consider tracking metrics that demonstrate both operational efficiency and the enhanced effectiveness of analysts, such as:

Operational Metrics

  • Mean time to detect
  • Mean time to respond
  • Mean time to investigate
  • Mean time to close
  • Percentage of alerts that can be automatically triaged and prioritized
  • Analyst productivity measured by high-value activities rather than ticket volume

Strategic Metrics

  • Analyst job satisfaction and retention rates
  • Time invested in proactive threat hunting versus reactive incident response
  • Organizational resilience measured through red/blue/purple team exercises and simulations
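As one way to operationalize the timing metrics above, the sketch below computes mean time to detect, respond, and close from per-incident timestamps. The field names and sample data are invented for illustration.

```python
# Illustrative computation of the mean-time metrics listed above from
# incident lifecycle timestamps; field names and sample data are made up.
from datetime import datetime
from statistics import mean

incidents = [
    {"occurred":  datetime(2025, 9, 1, 8, 0),
     "detected":  datetime(2025, 9, 1, 8, 45),
     "responded": datetime(2025, 9, 1, 9, 30),
     "closed":    datetime(2025, 9, 1, 16, 0)},
    {"occurred":  datetime(2025, 9, 3, 14, 0),
     "detected":  datetime(2025, 9, 3, 14, 10),
     "responded": datetime(2025, 9, 3, 14, 40),
     "closed":    datetime(2025, 9, 3, 18, 0)},
]

def mean_hours(start_key: str, end_key: str) -> float:
    """Mean elapsed hours between two lifecycle timestamps across incidents."""
    return mean((i[end_key] - i[start_key]).total_seconds() / 3600
                for i in incidents)

print(f"Mean time to detect:  {mean_hours('occurred', 'detected'):.2f} h")
print(f"Mean time to respond: {mean_hours('detected', 'responded'):.2f} h")
print(f"Mean time to close:   {mean_hours('detected', 'closed'):.2f} h")
```

Tracking these figures over time, rather than as one-off snapshots, is what shows whether AI augmentation is actually shortening the incident lifecycle.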

How Forvis Mazars Can Help

The future of proactive cybersecurity isn’t about choosing between human skill and AI, but rather lies in thoughtfully combining their complementary strengths. AI excels at processing massive amounts of data, identifying patterns, and executing consistent responses to known threats. Humans excel at providing contextual understanding, creative problem-solving, and strategic judgment, which are essential skills for addressing novel and complex security challenges.

Organizations that embrace this collaborative approach can position themselves to build more resilient, scalable, and effective security operations. Rather than pursuing the lofty and perhaps unrealistic goal of full automation, consider focusing on creating systems where AI bolsters human capabilities and helps security professionals deliver their best work.

The journey toward AI-augmented cybersecurity necessitates careful planning, gradual implementation, and continual refinement. By following the frameworks and best practices outlined in this article, security leaders can build operations that leverage both human intelligence and artificial intelligence to protect their organizations in an increasingly complex threat landscape.

Ready to explore how AI-augmented cybersecurity can strengthen your organization’s security posture? The Managed Services team at Forvis Mazars has certified partnerships with SentinelOne and Elastic. Contact us to discuss tailored solutions.


AI Research

USA TODAY rolls out AI answer engine to all users


Gannett, USA TODAY’s parent company, has fully implemented generative AI engine DeeperDive for USA TODAY’s audience of more than 195 million monthly unique visitors.

DeeperDive uses the high-quality content created by reporters and editors of the USA TODAY Network to deliver clear, timely GenAI conversations to readers. The technology was created by Taboola. Gannett is the first U.S. publisher to fully embed the AI answer engine.

The step aligns with the company’s commitment to embrace innovation for the benefit of its readers, Michael Reed, chairman and CEO of Gannett, said in a statement.

“The Taboola partnership gives us the opportunity to further deliver on our promise to enrich and empower the communities we serve because DeeperDive provides our valued audiences with trusted relevant content,” Reed said.

Because it sources its responses solely from trusted USA TODAY and USA TODAY Network journalism and content, DeeperDive interacts with readers to deliver a sharper understanding of the topics users want to know about.

Other highlights include more curated advertising, Reed said. A DeeperDive beta was launched in June to a percentage of readers and was expanded after initial performance exceeded expectations.

DeeperDive’s technology spans various coverage areas, answering reader questions about travel, their local communities, sports, political updates and more.

In the next phase of the collaboration, AI agents will be tested to give readers access to seamless, easy purchasing options tailored to their specific needs and interests, Reed said.

Adam Singolda, CEO and founder of Taboola, called the partnership with Gannett a “once-in-a-generation” opportunity.

“With DeeperDive, we’re moving the industry from page views to Generative AI conversations, and from clicks to transactions rooted in what I see as the most valuable part of the LLM market – decisions that matter,” Singolda said in a statement. LLM refers to large language models, like ChatGPT.

“Consumers may ask questions using consumer GenAI engines, but when it comes to choices that require trust and conviction, where to travel with their family, which financial step to take, or whether to buy a product – USA TODAY is where they turn,” added Singolda.



