How Do We Reach Decisions? Researchers Pioneer AI Method to Uncover Cognitive Strategies

The study’s authors note that small neural networks, highly simplified versions of the networks that power commercial AI applications, predict the choices of animals far better than classical cognitive models, which assume optimal behavior, because the small networks can capture suboptimal behavioral patterns. In laboratory tasks, their predictions are as accurate as those of much larger networks.
“An advantage of using very small networks is that they enable us to deploy mathematical tools to easily interpret the reasons, or mechanisms, behind an individual’s choices, which would be more difficult if we had used large neural networks such as the ones used in most AI applications,” adds author Ji-An Li, a doctoral student in the Neurosciences Graduate Program at UC San Diego.
“Large neural networks used in AI are very good at predicting things,” says author Marcus Benna, an assistant professor of neurobiology at UC San Diego’s School of Biological Sciences. “For example, they can predict which movie you would like to watch next. However, it is very challenging to describe succinctly what strategies these complex machine learning models employ to make their predictions — such as why they think you will like one movie more than another one. By training the simplest versions of these AI models to predict animals’ choices and analyzing their dynamics using methods from physics, we can shed light on their inner workings in more easily understandable terms.”
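For readers curious what training “the simplest versions of these AI models” might look like, below is a minimal, purely illustrative Python sketch (not the study’s actual code): a tiny recurrent network trained to predict the next choice in a two-alternative task from past choices and rewards. The TinyGRU name, the two-unit hidden state, and the synthetic data are our assumptions.

```python
# Purely illustrative sketch (not the study's code): a tiny recurrent network
# trained to predict an animal's next choice in a two-alternative task from
# its past choices and rewards. Data here are random placeholders.
import torch
import torch.nn as nn

class TinyGRU(nn.Module):  # hypothetical name; deliberately tiny
    def __init__(self, hidden_size=2):
        super().__init__()
        self.gru = nn.GRU(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, 2)  # logits over the two choices

    def forward(self, x):          # x: (batch, trials, 2) = [prev choice, prev reward]
        h, _ = self.gru(x)
        return self.readout(h)     # per-trial logits for the next choice

inputs = torch.randint(0, 2, (1, 100, 2)).float()  # fake 100-trial session
targets = torch.randint(0, 2, (1, 100))            # fake next-choice labels

model = TinyGRU()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    loss = loss_fn(model(inputs).reshape(-1, 2), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# With only two hidden units, the learned dynamics can be plotted and analyzed
# directly, which is what makes the small-network approach interpretable.
```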
Understanding how animals and humans learn from experience to make decisions is a primary goal not only in the sciences but also, more broadly, in business, government and technology. However, because existing models of this process aim to depict optimal decision-making, they often fail to capture realistic behavior.
Overall, the model described in the new Nature study matched the decision-making processes of humans, non-human primates and laboratory rats. Notably, it predicted suboptimal decisions, better reflecting how decisions are made in the real world, whereas traditional models focus on explaining optimal decision-making. Moreover, the NYU and UC San Diego scientists’ model predicted decision-making at the individual level, revealing how each participant deploys a different strategy in reaching decisions.
“Just as studying individual differences in physical characteristics has revolutionized medicine, understanding individual differences in decision-making strategies could transform our approach to mental health and cognitive function,” concludes Mattar.
The research was supported by grants from the National Science Foundation (CNS-1730158, ACI-1540112, ACI-1541349, OAC-1826967, OAC-2112167, CNS-2100237, CNS-2120019), the Kavli Institute for Brain and Mind, the University of California Office of the President and UCSD’s California Institute for Telecommunications and Information Technology/Qualcomm Institute.
— Adapted from a New York University news release
AI-Augmented Cybersecurity: A Human-Centered Approach

The integration of artificial intelligence (AI) is fundamentally transforming the cybersecurity landscape. While AI brings unparalleled speed and scale to threat detection, an effective strategy may lie in cultivating collaboration between people with specialized knowledge and AI systems rather than in full AI automation. This article explores AI’s evolving role in cybersecurity, the importance of blending human oversight with technological capabilities, and frameworks to consider.
AI & Human Roles
The role of AI has expanded far beyond simple task automation. It now serves as a powerful tool for augmenting human-led analysis and decision making, helping organizations quickly process vast volumes of security logs and data. This capability can significantly enhance early threat detection and accelerate incident response. With AI-augmented cybersecurity, organizations can identify and address potential threats with unprecedented speed and precision.
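As a hedged illustration of this capability, the sketch below uses scikit-learn’s IsolationForest to flag anomalous log-derived events for human review; the feature set, values, and thresholds are illustrative assumptions, not any vendor’s detection logic.

```python
# Minimal sketch: flag anomalous log events for analyst review, assuming
# log entries have already been reduced to numeric features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Rows: events; columns (illustrative): [requests/min, bytes_out, failed_logins]
normal = rng.normal(loc=[100, 5_000, 1], scale=[10, 500, 1], size=(1_000, 3))
spikes = rng.normal(loc=[900, 80_000, 30], scale=[50, 5_000, 5], size=(5, 3))
events = np.vstack([normal, spikes])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = detector.predict(events)  # -1 = anomaly, 1 = normal

# Surface anomalies to a human analyst rather than auto-blocking anything.
for idx in np.where(flags == -1)[0]:
    print(f"event {idx} flagged for review: {events[idx].round(1)}")
```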
Despite these advancements, the vision of a fully autonomous security operations center (SOC) currently remains more aspirational than practical. AI-powered systems often lack the nuanced contextual understanding and intuitive judgment essential for handling novel or complex attack scenarios. This is where human oversight becomes indispensable. Skilled analysts play an essential role in interpreting AI findings, making strategic decisions, and bringing automated actions in line with the organization’s particular context and policies.
As the cybersecurity industry shifts toward augmentation, a best-fit model is one that utilizes AI to handle repetitive, high-volume tasks while simultaneously preserving human control over critical decisions and direction. This balanced approach combines the speed and efficiency of automation with the insight and experience of human reasoning, creating a scalable, resilient security posture.
Robust Industry Frameworks for AI Integration
The transition toward AI-augmented, human-centered cybersecurity is well represented by frameworks from leading industry platforms. These models provide a road map for organizations to incrementally integrate AI while maintaining the much-needed role of human oversight.
SentinelOne’s Autonomous SOC Maturity Model provides a framework to support organizations on their journey toward an autonomous SOC. The model emphasizes the strategic use of AI and automation to strengthen human security teams. It outlines the progression from manual, reactive security practices to advanced, automated, and proactive approaches, where AI handles repetitive tasks and frees up human analysts for strategic work.
SentinelOne has defined its Autonomous SOC Maturity Model as consisting of the following five levels:
- Level 0 (Manual Operations): Security teams rely entirely on manual processes for threat detection, investigation, and response.
- Level 1 (Basic Automation): Introduction of rule-based alerts and simple automated responses for known threat patterns.
- Level 2 (Enhanced Detection): AI-assisted threat detection that flags anomalies while analysts maintain investigation control.
- Level 3 (Orchestrated Response): Automated workflows handle routine incidents while complex cases require human intervention.
- Level 4 (Autonomous Operations): Advanced AI manages most security operations with strategic human oversight and exception handling.
This progression demonstrates that achieving sophisticated security automation requires gradual capability building rather than a full-scale overhaul of systems and processes. At each level, humans remain essential for strategic decision making, policy alignment, and handling cases that fall outside the automated parameters. Even at Level 4, the highest maturity level, human oversight is still necessary for effective, accurate operations.
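As a loose illustration (our sketch, not SentinelOne’s software), the maturity levels can be thought of as progressively widening the set of incidents handled automatically while humans keep the exceptions; the routing rules below are invented for demonstration.

```python
# Illustrative only: how maturity level might gate automated handling.
from enum import IntEnum

class SocMaturity(IntEnum):
    MANUAL = 0
    BASIC_AUTOMATION = 1
    ENHANCED_DETECTION = 2
    ORCHESTRATED_RESPONSE = 3
    AUTONOMOUS_OPERATIONS = 4

def route_incident(severity: str, known_pattern: bool, level: SocMaturity) -> str:
    """Decide whether an incident is auto-handled or escalated to an analyst."""
    if level >= SocMaturity.AUTONOMOUS_OPERATIONS and severity != "critical":
        return "auto-handle"          # humans retain exception handling
    if level >= SocMaturity.ORCHESTRATED_RESPONSE and known_pattern:
        return "auto-handle"          # routine, well-understood case
    if level >= SocMaturity.BASIC_AUTOMATION and known_pattern and severity == "low":
        return "auto-close with audit trail"
    return "escalate to analyst"      # default: human in the loop

print(route_incident("low", True, SocMaturity.ORCHESTRATED_RESPONSE))        # auto-handle
print(route_incident("critical", False, SocMaturity.AUTONOMOUS_OPERATIONS))  # escalate
```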
Another leading platform centers on supporting security analysts via AI-driven insights rather than replacing human judgment. Elastic’s AI-driven approach integrates machine learning algorithms to automatically detect anomalies, correlate events, and uncover subtle threats within large data sets. For example, when unusual network patterns emerge, the system doesn’t automatically initiate response actions but instead presents analysts with enriched data, relevant context, and suggested investigation paths.
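A minimal sketch of this “enrich, don’t auto-respond” pattern appears below; the field names, data sources, and EnrichedAlert structure are illustrative assumptions rather than Elastic’s actual APIs.

```python
# Illustrative sketch: package context for an analyst instead of acting.
from dataclasses import dataclass, field

@dataclass
class EnrichedAlert:
    alert_id: str
    summary: str
    context: dict = field(default_factory=dict)
    suggested_steps: list = field(default_factory=list)

def enrich_anomaly(alert_id: str, src_ip: str, asset_db: dict, intel: dict) -> EnrichedAlert:
    """Gather context and suggest next steps; the decision stays with a human."""
    return EnrichedAlert(
        alert_id=alert_id,
        summary=f"Unusual outbound traffic from {src_ip}",
        context={
            "asset": asset_db.get(src_ip, "unknown asset"),
            "threat_intel": intel.get(src_ip, "no known indicators"),
        },
        suggested_steps=[
            "Review recent authentication events for this host",
            "Compare traffic volume against the 30-day baseline",
            "Check whether the destination appears in passive DNS",
        ],
    )

alert = enrich_anomaly("A-1042", "10.0.3.7",
                       asset_db={"10.0.3.7": "finance file server"}, intel={})
print(alert.summary, alert.context, alert.suggested_steps, sep="\n")
```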
A key strength of Elastic’s model is its emphasis on analyst empowerment. Rather than automating decisions, the platform provides security professionals with enhanced visibility and context. This approach recognizes that cybersecurity fundamentally remains a strategic challenge requiring human insight, creativity, and contextual understanding. AI serves as a force multiplier, helping analysts process information efficiently so they can focus their time on high-value activities.
The Modern SOC
While AI in cybersecurity can be seen as a path toward full automation, security operations can instead be structured to foster human-AI collaboration, not replacing humans but amplifying their capabilities to improve efficiency. This view recognizes that security remains a human-versus-human challenge. Harvard Business School professor Karim Lakhani states that “AI won’t replace humans, but humans with AI will replace humans without AI.” Applied to security operations, the question becomes: who will win in cyberspace? It may be the team that responsibly adapts and evolves its operational processes by understanding and incorporating the advantages of AI. That team will be well positioned to defend against quickly evolving threat tactics, techniques, and procedures. A fully autonomous, human-free SOC remains rhetoric rather than current reality; the SOC that embraces AI as a complement to people, not a replacement for them, is likely the one that creates a competitive advantage in cyber defense.
In practice, this approach can simplify traditional tiered SOC structures, helping analysts handle incidents end-to-end while leveraging AI for speed, context, and insight. This can help organizations improve efficiency, accountability, and resilience against evolving threats.
Best Practices for AI-Augmented Security
Building effective, AI-augmented security operations requires intentional design principles that prioritize human capabilities alongside technological advancements.
Successful implementations often focus AI automation on high-volume, routine activities that consume analyst time without requiring complex reasoning. These activities include the following (a small triage sketch appears after the list):
- Initial alert triage: AI systems can categorize and prioritize incoming security alerts based on severity, asset importance, and historical patterns.
- Data enrichment: Automating the gathering of relevant contextual information from multiple sources can support analyst investigations.
- Standard response actions: Predetermined responses can be triggered for well-understood threats, e.g., isolating compromised endpoints or blocking known malicious IP addresses.
- Report generation: Investigation findings and incident summaries can be compiled for stakeholder communication.
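Here is a minimal sketch of the first item, automated alert triage; the scoring weights and alert fields are illustrative assumptions, not a product’s scoring model.

```python
# Illustrative triage: combine severity, asset importance, and historical
# rule noisiness into a priority score for incoming alerts.
SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def triage_score(alert: dict, asset_importance: dict, fp_rate: dict) -> float:
    """Higher score = investigate sooner; noisy rules are down-weighted."""
    severity = SEVERITY_WEIGHT.get(alert["severity"], 1)
    importance = asset_importance.get(alert["asset"], 1)      # 1-5 scale
    noise_penalty = 1.0 - fp_rate.get(alert["rule_id"], 0.0)  # historical FP rate
    return severity * importance * noise_penalty

alerts = [
    {"id": 1, "severity": "high", "asset": "domain-controller", "rule_id": "R17"},
    {"id": 2, "severity": "critical", "asset": "test-vm", "rule_id": "R02"},
]
importance = {"domain-controller": 5, "test-vm": 1}
false_positives = {"R02": 0.9}  # rule R02 has historically been 90% noise

for a in sorted(alerts, key=lambda a: -triage_score(a, importance, false_positives)):
    print(a["id"], triage_score(a, importance, false_positives))
```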
By handling these routine tasks, AI can give analysts time to focus on activities that require advanced reasoning and skill, such as threat hunting, strategic planning, policy development, and navigating attack scenarios.
In addition, traditional SOC structures often fragment incident handling across multiple tiers, sometimes leading to communication gaps and delayed responses. Human-centered security operations may benefit from giving individual analysts end-to-end case ownership, supported by AI tools that streamline the steps needed for investigation and response.
By allowing more extensive case ownership, security teams can reduce handoff delays and scale incident management. AI-embedded tools can support security teams with enhanced reporting, investigation assistance, and intelligent recommendations throughout the incident lifecycle.
Practical Recommendations
Implementing AI-augmented cybersecurity requires systematic planning and deployment. Security leaders can follow these practical steps to build human-centered security operations. To begin, review your organization’s current SOC maturity across key dimensions, including:
Automation Readiness
- What percentage of security alerts currently receives manual review?
- Which routine tasks take the most analyst time?
- How standardized are your operations playbooks and/or incident response procedures?
Data Foundation
- Do you have a complete and verified asset inventory with network visibility?
- Are security logs centralized and easily searchable?
- Can you correlate events across disparate data sources and security tools?
Team Capabilities
- What is your analyst retention rate and average tenure?
- How quickly can new team members get up to speed?
- What skills gaps exist in your current team?
Tool Selection Considerations
Effective AI-augmented security requires tools that can support human-AI collaboration rather than promising unrealistic automation. Review potential solutions based on:
Integration Capabilities
- How well do tools integrate with your existing security infrastructure?
- Can the platform adapt to your organization’s specific policies and procedures?
- Does the vendor provide application programming interface (API) integrations?
Transparency & Explainable AI
- Can analysts understand how AI systems reach their conclusions?
- Are there clear mechanisms for providing feedback to improve AI accuracy?
- Can you audit and validate automated decisions?
Scalability & Flexibility
- Can the platform grow with your organization’s needs?
- How easily can you modify automated workflows as threats evolve?
- What support is available for ongoing use?
Measuring Outcomes
Tool selection is only part of the equation. Measuring outcomes is just as important. To help align your AI-augmented security strategy with your organization’s goals, consider tracking metrics that demonstrate both operational efficiency and enhanced analyst effectiveness, such as the following (a short computation sketch follows the lists):
Operational Metrics
- Mean time to detect
- Mean time to respond
- Mean time to investigate
- Mean time to close
- Percentage of alerts that can be automatically triaged and prioritized
- Analyst productivity measured by high-value activities rather than ticket volume
Strategic Metrics
- Analyst job satisfaction and retention rates
- Time invested in proactive threat hunting versus reactive incident response
- Organizational resilience measured through red/blue/purple team exercises and simulations
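As a concrete illustration of the operational metrics above, this short sketch computes mean-time figures from incident records; the timestamps and field names are illustrative assumptions.

```python
# Illustrative mean-time metrics computed from hypothetical incident records.
from datetime import datetime, timedelta

incidents = [
    {"occurred": datetime(2024, 5, 1, 9, 0),  "detected": datetime(2024, 5, 1, 9, 20),
     "responded": datetime(2024, 5, 1, 10, 0), "closed": datetime(2024, 5, 1, 15, 0)},
    {"occurred": datetime(2024, 5, 2, 13, 0), "detected": datetime(2024, 5, 2, 13, 5),
     "responded": datetime(2024, 5, 2, 13, 30), "closed": datetime(2024, 5, 2, 18, 0)},
]

def mean_delta(records: list, start_key: str, end_key: str) -> timedelta:
    deltas = [r[end_key] - r[start_key] for r in records]
    return sum(deltas, timedelta()) / len(deltas)

print("Mean time to detect: ", mean_delta(incidents, "occurred", "detected"))
print("Mean time to respond:", mean_delta(incidents, "detected", "responded"))
print("Mean time to close:  ", mean_delta(incidents, "detected", "closed"))
```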
How Forvis Mazars Can Help
The future of proactive cybersecurity lies not in choosing between human skill and AI but in thoughtfully combining their complementary strengths. AI excels at processing massive amounts of data, identifying patterns, and executing consistent responses to known threats. Humans excel at contextual understanding, creative problem-solving, and strategic judgment, skills essential for addressing novel and complex security challenges.
Organizations that embrace this collaborative approach can position themselves to build more resilient, scalable, and effective security operations. Rather than pursuing the lofty and perhaps unrealistic goal of full automation, consider focusing on creating systems where AI bolsters human capabilities and helps security professionals deliver their best work.
The journey toward AI-augmented cybersecurity necessitates careful planning, gradual implementation, and continual refinement. By following the frameworks and best practices outlined in this article, security leaders can build operations that leverage both human intelligence and artificial intelligence to protect their organizations in an increasingly complex threat landscape.
Ready to explore how AI-augmented cybersecurity can strengthen your organization’s security posture? The Managed Services team at Forvis Mazars has certified partnerships with SentinelOne and Elastic. Contact us to discuss tailored solutions.
USA TODAY rolls out AI answer engine to all users

Gannett, USA TODAY’s parent company, has fully implemented the generative AI answer engine DeeperDive for USA TODAY’s audience of more than 195 million monthly unique visitors.
DeeperDive uses the high-quality content created by reporters and editors of the USA TODAY Network to deliver clear, timely GenAI conversations to readers. The technology was created by Taboola. Gannett is the first U.S. publisher to fully embed the AI answer engine.
The step aligns with the company’s commitment to embrace innovation for the benefit of its readers, Michael Reed, chairman and CEO of Gannett, said in a statement.
“The Taboola partnership gives us the opportunity to further deliver on our promise to enrich and empower the communities we serve because DeeperDive provides our valued audiences with trusted relevant content,” Reed said.
Because it sources its responses solely from trusted USA TODAY and USA TODAY Network journalism and content, DeeperDive interacts with readers to deliver a sharper understanding of the topics users want to know about.
Other highlights include more curated advertising, Reed said. A DeeperDive beta was launched in June to a percentage of readers and was expanded after initial performance exceeded expectations.
DeeperDive’s technology spans various coverage areas, answering reader questions about travel, their local communities, sports, political updates and more.
In the next phase of the collaboration, AI agents will be tested to give readers access to seamless, easy purchasing options tailored to their specific needs and interests, Reed said.
Adam Singolda, CEO and founder of Taboola, called the partnership with Gannett a “once-in-a-generation” opportunity.
“With DeeperDive, we’re moving the industry from page views to Generative AI conversations, and from clicks to transactions rooted in what I see as the most valuable part of the LLM market – decisions that matter,” Singolda said in a statement. LLM refers to large language models, the technology behind chatbots such as ChatGPT.
“Consumers may ask questions using consumer GenAI engines, but when it comes to choices that require trust and conviction, where to travel with their family, which financial step to take, or whether to buy a product – USA TODAY is where they turn,” added Singolda.