
AI Research

AI-Augmented Cybersecurity: A Human-Centered Approach



The integration of artificial intelligence (AI) is fundamentally transforming the cybersecurity landscape. While AI brings unparalleled speed and scale to threat detection, the most effective strategy may lie in cultivating collaboration between people with specialized knowledge and AI systems, rather than in full AI automation. This article explores AI’s evolving role in cybersecurity, the importance of blending human oversight with technological capabilities, and frameworks to consider.

AI & Human Roles

The role of AI has expanded far beyond simple task automation. It can now serve as a powerful tool for augmenting human-led analysis and decision-making, helping organizations quickly process and analyze vast volumes of security logs and data. This capability can significantly enhance early threat detection and accelerate incident response. With AI-augmented cybersecurity, organizations can identify and address potential threats with unprecedented speed and precision.
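
To make the log-processing idea concrete, here is a minimal sketch of baseline-based anomaly flagging over per-user event counts. It is purely illustrative: the data, field layout, and threshold are assumptions, not drawn from any particular product.

```python
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical hourly event counts per user, e.g., failed logins.
log_window = [
    ("alice", 12), ("alice", 14), ("alice", 11), ("alice", 95),
    ("bob", 30), ("bob", 28), ("bob", 33), ("bob", 31),
]

def flag_anomalies(records, z_threshold=1.5):
    """Flag counts that sit far above a user's own historical baseline."""
    per_user = defaultdict(list)
    for user, count in records:
        per_user[user].append(count)

    flagged = []
    for user, counts in per_user.items():
        mu, sigma = mean(counts), pstdev(counts)
        if sigma == 0:
            continue  # no variation in this user's history; nothing to compare
        flagged += [(user, c) for c in counts if (c - mu) / sigma > z_threshold]
    return flagged

print(flag_anomalies(log_window))  # [('alice', 95)]
```

In practice the value comes less from the statistics than from doing this continuously across millions of events, which is exactly the high-volume work the article describes offloading to AI.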

Despite these advancements, the vision of a fully autonomous security operations center (SOC) currently remains more aspirational than practical. AI-powered systems often lack the nuanced contextual understanding and intuitive judgment essential for handling novel or complex attack scenarios. This is where human oversight becomes indispensable. Skilled analysts play an essential role in interpreting AI findings, making strategic decisions, and bringing automated actions in line with the organization’s particular context and policies.

As the cybersecurity industry shifts toward augmentation, a best-fit model is one that utilizes AI to handle repetitive, high-volume tasks while simultaneously preserving human control over critical decisions and direction. This balanced approach combines the speed and efficiency of automation with the insight and experience of human reasoning, creating a scalable, resilient security posture.

Robust Industry Frameworks for AI Integration

The transition toward AI-augmented, human-centered cybersecurity is well represented by frameworks from leading industry platforms. These models provide a road map for organizations to incrementally integrate AI while maintaining the much-needed role of human oversight.

SentinelOne’s Autonomous SOC Maturity Model provides a framework to support organizations on their journey to an autonomous SOC. This model emphasizes the strategic use of AI and automation to strengthen human security teams. It outlines the progression from manual, reactive security practices to advanced, automated, and proactive approaches, where AI can handle repetitive tasks and free up human analysts for strategic work.

SentinelOne has defined its Autonomous SOC Maturity Model as consisting of the following five levels:

  • Level 0 (Manual Operations): Security teams rely entirely on manual processes for threat detection, investigation, and response.
  • Level 1 (Basic Automation): Introduction of rule-based alerts and simple automated responses for known threat patterns.
  • Level 2 (Enhanced Detection): AI-assisted threat detection that flags anomalies while analysts maintain investigation control.
  • Level 3 (Orchestrated Response): Automated workflows handle routine incidents while complex cases require human intervention.
  • Level 4 (Autonomous Operations): Advanced AI manages most security operations with strategic human oversight and exception handling.

This progression demonstrates that achieving sophisticated security automation requires gradual capability building rather than a full-scale overhaul of systems and processes. At each level, humans remain essential for strategic decision making, policy alignment, and handling cases that fall outside automated parameters. Even at Level 4, the highest maturity level, human oversight remains necessary for effective, accurate operations.
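
As a loose illustration of the Level 3 idea, with routine incidents handled by pre-approved automation while everything else escalates to a person, consider the sketch below. The playbook names, alert fields, and confidence threshold are hypothetical and are not part of SentinelOne’s model.

```python
# Pre-approved playbooks for well-understood alert types (names are made up).
AUTOMATED_PLAYBOOKS = {
    "known_malware_hash": "isolate_endpoint",
    "known_bad_ip": "block_at_firewall",
    "phishing_known_campaign": "quarantine_message",
}

def route_alert(alert: dict) -> str:
    """Run a pre-approved playbook for routine alerts; escalate everything else."""
    playbook = AUTOMATED_PLAYBOOKS.get(alert.get("type"))
    if playbook and alert.get("confidence", 0.0) >= 0.9:
        return f"automated:{playbook}"
    # Novel, low-confidence, or out-of-scope alerts go to a human analyst.
    return "escalate:human_review"

print(route_alert({"type": "known_bad_ip", "confidence": 0.97}))  # automated:block_at_firewall
print(route_alert({"type": "unusual_lateral_movement"}))          # escalate:human_review
```

The design point is the default: anything the automation cannot confidently match falls through to human review rather than to a best-guess automated action.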

Elastic, another leading platform, centers on supporting security analysts with AI-driven insights rather than replacing human judgment. Its approach integrates machine learning algorithms to automatically detect anomalies, correlate events, and uncover subtle threats within large data sets. For example, when unusual network patterns emerge, the system doesn’t automatically initiate response actions but instead presents analysts with enriched data, relevant context, and suggested investigation paths.

A key strength of Elastic’s model is its emphasis on analyst empowerment. Rather than automating decisions, the platform provides security professionals with enhanced visibility and context. This approach recognizes that cybersecurity fundamentally remains a strategic challenge requiring human insight, creativity, and contextual understanding. AI serves as a force multiplier, helping analysts process information efficiently so they can focus their time on high-value activities.
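
A minimal sketch of this “enrich, don’t act” pattern follows: when an anomaly surfaces, the code assembles related events and suggested next steps for a human analyst rather than triggering a response. The class, field names, and correlation logic are illustrative assumptions and do not reflect Elastic’s actual APIs.

```python
from dataclasses import dataclass, field

@dataclass
class AnalystBriefing:
    """Context handed to a human analyst instead of triggering an automated action."""
    alert_id: str
    summary: str
    related_events: list = field(default_factory=list)
    suggested_steps: list = field(default_factory=list)

def build_briefing(alert: dict, event_store: list) -> AnalystBriefing:
    # Correlate on a shared host field; a real system would use far richer joins.
    related = [e for e in event_store if e.get("host") == alert.get("host")]
    return AnalystBriefing(
        alert_id=alert["id"],
        summary=f"Unusual network pattern on {alert['host']}",
        related_events=related,
        suggested_steps=[
            "Review outbound connections from the host over the last 24 hours",
            "Check whether the destinations appear in threat intelligence feeds",
            "Confirm with the asset owner before taking any containment action",
        ],
    )

events = [{"host": "web-01", "event": "dns_query", "domain": "example.test"}]
print(build_briefing({"id": "A-102", "host": "web-01"}, events))
```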

The Modern SOC

While AI in cybersecurity is sometimes framed as a path toward full automation, security operations can instead be structured around human-AI collaboration that augments, rather than replaces, human capabilities. This view recognizes that security remains a human-versus-human challenge. Harvard Business School professor Karim Lakhani states that “AI won’t replace humans, but humans with AI will replace humans without AI.” Applying this principle to security operations, the question is: who will win in cyberspace? It may be the team that responsibly adapts and evolves its operational processes by understanding and incorporating the advantages of AI. That team will be well positioned to defend against quickly evolving threat tactics, techniques, and procedures. A fully autonomous, human-free SOC is not a current reality. However, the SOC that embraces AI as complementing people, not replacing them, is likely to be the one that creates a competitive advantage in cyber defense.

In practice, this approach can simplify traditional tiered SOC structures, helping analysts handle incidents end-to-end while leveraging AI for speed, context, and insight. This can help organizations improve efficiency, accountability, and resilience against evolving threats.

Best Practices for AI-Augmented Security

Building effective, AI-augmented security operations requires intentional design principles that prioritize human capabilities alongside technological advancements.

Successful implementations often focus AI automation on high-volume, routine activities that consume analyst time but don’t require complex reasoning; a minimal triage-scoring sketch follows the list. These activities include the following:

  • Initial alert triage: AI systems can categorize and prioritize incoming security alerts based on severity, asset importance, and historical patterns.
  • Data enrichment: Automating the gathering of relevant contextual information from multiple sources can support analyst investigations.
  • Standard response actions: Predetermined responses can be triggered for well-understood threats, e.g., isolating compromised endpoints or blocking known malicious IP addresses.
  • Report generation: Investigation findings and incident summaries can be compiled for stakeholder communication.
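
Building on the first bullet above, here is a rough sketch of how alerts might be scored for triage using severity, asset importance, and historical false-positive rates. The weights, field names, and rule identifiers are all assumptions made for illustration.

```python
# Severity weights are illustrative; a production system would tune them from data.
SEVERITY_SCORE = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def triage_score(alert: dict, asset_criticality: dict, fp_history: dict) -> float:
    """Rank an alert by severity, asset importance, and the historical noisiness of its rule."""
    severity = SEVERITY_SCORE.get(alert.get("severity", "low"), 1)
    asset_weight = asset_criticality.get(alert.get("asset"), 1.0)
    # Down-weight rules that have historically been mostly false positives.
    false_positive_rate = fp_history.get(alert.get("rule"), 0.0)
    return severity * asset_weight * (1.0 - false_positive_rate)

alerts = [
    {"id": 1, "severity": "high", "asset": "payroll-db", "rule": "R-17"},
    {"id": 2, "severity": "critical", "asset": "test-vm", "rule": "R-03"},
]
criticality = {"payroll-db": 3.0, "test-vm": 0.5}
history = {"R-03": 0.8}  # rule R-03 has been noisy in the past

ranked = sorted(alerts, key=lambda a: triage_score(a, criticality, history), reverse=True)
print([a["id"] for a in ranked])  # [1, 2]: the payroll-db alert outranks the noisy "critical" one
```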

By handling these routine tasks, AI can give analysts time to focus on activities that require advanced reasoning and skill, such as threat hunting, strategic planning, policy development, and navigating attack scenarios.

In addition, traditional SOC structures often fragment incident handling across multiple tiers, sometimes leading to communication gaps and delayed responses. Human-centered security operations may benefit from giving individual analysts end-to-end case ownership, supported by AI tools that streamline the steps needed for investigation and response.

By allowing more extensive case ownership, security teams can reduce handoff delays and scale incident management. AI-embedded tools can support security teams with enhanced reporting, investigation assistance, and intelligent recommendations throughout the incident lifecycle.

Practical Recommendations

Implementing AI-augmented cybersecurity requires systematic planning and deployment. Security leaders can follow these practical steps to build human-centered security operations. To begin, review your organization’s current SOC maturity across key dimensions, including the following (a small sketch quantifying the first two automation-readiness questions follows the checklists):

Automation Readiness

  • What percentage of security alerts get a manual review currently?
  • Which routine tasks take the most analyst time?
  • How standardized are your operations playbooks and/or incident response procedures?

Data Foundation

  • Do you have a complete and verified asset inventory with network visibility?
  • Are security logs centralized and easily searchable?
  • Can you correlate events across disparate data sources and security tools?

Team Capabilities

  • What is your analyst retention rate and average tenure?
  • How quickly can new team members get up to speed?
  • What skills gaps exist in your current team?
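
As one concrete example, the first two automation-readiness questions can often be answered directly from an alert or ticket export. The sketch below does this for a toy data set; the field names (“closed_by”, “handling_minutes”) are hypothetical.

```python
# Toy alert export; the field names are assumed for illustration.
alerts = [
    {"id": 1, "closed_by": "analyst", "handling_minutes": 25},
    {"id": 2, "closed_by": "automation", "handling_minutes": 1},
    {"id": 3, "closed_by": "analyst", "handling_minutes": 40},
    {"id": 4, "closed_by": "automation", "handling_minutes": 2},
]

manual = [a for a in alerts if a["closed_by"] == "analyst"]
manual_share = len(manual) / len(alerts)
manual_minutes = sum(a["handling_minutes"] for a in manual)

print(f"Manually reviewed alerts: {manual_share:.0%}")                # 50%
print(f"Analyst minutes spent on manual closures: {manual_minutes}")  # 65
```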

Tool Selection Considerations

Effective AI-augmented security requires tools that can support human-AI collaboration rather than promising unrealistic automation. Review potential solutions based on:

Integration Capabilities

  • How well do tools integrate with your existing security infrastructure?
  • Can the platform adapt to your organization’s specific policies and procedures?
  • Does the vendor provide application programming interface (API) integrations?

Transparency & Explainable AI

  • Can analysts understand how AI systems reach their conclusions?
  • Are there clear mechanisms for providing feedback to improve AI accuracy?
  • Can you audit and validate automated decisions?

Scalability & Flexibility

  • Can the platform grow with your organization’s needs?
  • How easily can you modify automated workflows as threats evolve?
  • What support is available for ongoing use?

Measuring Outcomes

Tool selection is only part of the equation. Measuring outcomes is just as important. To help align your AI-augmented security strategy with your organization’s goals, consider tracking metrics that demonstrate both operational efficiency and enhanced analyst effectiveness, such as the following (a short sketch computing the first two operational metrics from incident timestamps follows that list):

Operational Metrics

  • Mean time to detect
  • Mean time to respond
  • Mean time to investigate
  • Mean time to close
  • Percentage of alerts that can be automatically triaged and prioritized
  • Analyst productivity measured by high-value activities rather than ticket volume
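
For instance, mean time to detect and mean time to respond can be computed directly from incident timestamps, as in the rough sketch below; the record fields (“occurred”, “detected”, “responded”) are assumed for illustration.

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M"

def minutes_between(start: str, end: str) -> float:
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds() / 60

# Toy incident records; the timestamp fields are hypothetical.
incidents = [
    {"occurred": "2024-05-01 09:00", "detected": "2024-05-01 09:20", "responded": "2024-05-01 10:05"},
    {"occurred": "2024-05-02 14:00", "detected": "2024-05-02 14:05", "responded": "2024-05-02 14:35"},
]

mttd = sum(minutes_between(i["occurred"], i["detected"]) for i in incidents) / len(incidents)
mttr = sum(minutes_between(i["detected"], i["responded"]) for i in incidents) / len(incidents)

print(f"Mean time to detect:  {mttd:.1f} minutes")   # 12.5
print(f"Mean time to respond: {mttr:.1f} minutes")   # 37.5
```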

Strategic Metrics

  • Analyst job satisfaction and retention rates
  • Time invested in proactive threat hunting versus reactive incident response
  • Organizational resilience measured through red/blue/purple team exercises and simulations

How Forvis Mazars Can Help

The future of proactive cybersecurity isn’t about choosing between human skill and AI, but rather lies in thoughtfully combining their complementary strengths. AI excels at processing massive amounts of data, identifying patterns, and executing consistent responses to known threats. Humans excel at providing contextual understanding, creative problem-solving, and strategic judgment, which are essential skills for addressing novel and complex security challenges.

Organizations that embrace this collaborative approach can position themselves to build more resilient, scalable, and effective security operations. Rather than pursuing the lofty and perhaps unrealistic goal of full automation, consider focusing on creating systems where AI bolsters human capabilities and helps security professionals deliver their best work.

The journey toward AI-augmented cybersecurity necessitates careful planning, gradual implementation, and continual refinement. By following the frameworks and best practices outlined in this article, security leaders can build operations that leverage both human intelligence and artificial intelligence to protect their organizations in an increasingly complex threat landscape.

Ready to explore how AI-augmented cybersecurity can strengthen your organization’s security posture? The Managed Services team at Forvis Mazars has certified partnerships with SentinelOne and Elastic. Contact us to discuss tailored solutions.


AI Research

USA TODAY rolls out AI answer engine to all users



Gannett, USA TODAY’s parent company, has fully implemented generative AI engine DeeperDive for USA TODAY’s audience of more than 195 million monthly unique visitors.

DeeperDive uses the high-quality content created by reporters and editors of the USA TODAY Network to deliver clear, timely GenAI conversations to readers. The technology was created by Taboola, and Gannett is the first U.S. publisher to fully embed the AI answer engine.

The step aligns with the company’s commitment to embrace innovation for the benefit of its readers, Michael Reed, chairman and CEO of Gannett, said in a statement.

“The Taboola partnership gives us the opportunity to further deliver on our promise to enrich and empower the communities we serve because DeeperDive provides our valued audiences with trusted relevant content,” Reed said.

Because it sources its responses solely from trusted USA TODAY and USA TODAY Network journalism and content, DeeperDive interacts with readers to deliver a sharper understanding of the topics users want to know about.

Other highlights include more curated advertising, Reed said. A DeeperDive beta was launched in June to a percentage of readers and was expanded after initial performance exceeded expectations.

DeeperDive’s technology spans various coverage areas, answering reader questions about travel, their local communities, sports, political updates and more.

In the next phase of the collaboration, AI agents will be tested to give readers access to seamless, easy purchasing options tailored to their specific needs and interests, Reed said.

Adam Singolda, CEO and founder of Taboola, called the partnership with Gannett a “once-in-a-generation” opportunity.

“With DeeperDive, we’re moving the industry from page views to Generative AI conversations, and from clicks to transactions rooted in what I see as the most valuable part of the LLM market – decisions that matter,” Singolda said in a statement. LLM refers to large language models, the technology behind tools like ChatGPT.

“Consumers may ask questions using consumer GenAI engines, but when it comes to choices that require trust and conviction, where to travel with their family, which financial step to take, or whether to buy a product – USA TODAY is where they turn,” added Singolda.




AI Research

AI models are struggling to identify hate speech, study finds



Some of the biggest artificial intelligence models moderating the content that is seen by the public are inconsistently classifying what counts as hate speech, new research has claimed.

The study, led by researchers from the University of Pennsylvania, found that systems from OpenAI, Google, and DeepSeek, which are employed by social media platforms to censor content, define discriminatory content by different standards.

Researchers analysed seven AI moderation systems that have the responsibility of determining what can and cannot be said online.

Yphtach Lelkes, an associate professor in UPenn’s Annenberg School for Communication, said: “Our research demonstrates that when it comes to hate speech, the AI driving these decisions is wildly inconsistent. The implication is a new form of digital censorship where the rules are invisible, and the referee is a machine.”

The study, which was published in the Findings of the Association for Computational Linguistics, looked at 1.3 million statements which included both neutral terms and slurs on around 125 demographic groups of people.

The models made different calls about whether a given statement qualified as hate speech. This is a critical public issue, the researchers say, as inconsistencies can erode trust and create perceptions of bias.

Hate speech is abusive or threatening speech that expresses prejudice on the basis of ethnicity, religion or sexual orientation.

One of the study’s researchers, Annenberg doctoral student Neil Fasching, said: “The research shows that content moderation systems have dramatic inconsistencies when evaluating identical hate speech content, with some systems flagging content as harmful while others deem it acceptable.”

The biggest inconsistencies existed in the systems’ evaluations of statements about groups based on education level, economic class, and personal interest, which leaves “some communities more vulnerable to online harm than others”, Mr Fasching said.

Evaluations of statements about groups based on race, gender and sexual orientation were more alike.

Professor Sandra Wachter said: “To figure out what is harmful, or illegal will require lots of time and resources because a ‘one fits all solution’ is not possible nor desirable.”

Dr. Sandra Wachter, professor of technology and regulation at the University of Oxford, said the research revealed how complicated the topic was. “To walk this line is difficult, as we as humans have no clear and concrete standards of what acceptable speech should look like,” she said.

“If humans cannot agree on standards, it is unsurprising to me that these models have different results, but it does not make the harm go away.

“Since Generative AI has become a very popular tool for people to inform themselves, I think tech companies have a responsibility to make sure that the content they are serving is not harmful, but truthful, diverse and unbiased. With big tech comes big responsibility.”

Of the seven models that were analysed, some were designed for classifying content, and others were more general. There were two from OpenAI, two from Mistral, Claude 3.5 Sonnet, DeepSeek V3, and Google Perspective API.

All moderators have been contacted for comment.


