
AI Research

Dsit announces £15m AI security research fund



 Image: Dsit, via Flickr

UKRI and industry involved in partnership on AI behaviour, allowing “experiments beyond typical academic reach”

The Department for Science, Innovation and Technology has announced a fund, backed by more than £15 million, for research on artificial intelligence behaviour and control.

On 30 July, Dsit announced that its AI Security Institute will lead research into AI alignment: making sure AI systems behave as their designers intended.

The Alignment Project is an international coalition whose partners include the Canadian AI Safety Institute, the cloud computing platform Amazon Web Services and the US-based AI company Anthropic, alongside UK Research and Innovation, the Advanced Research and Invention Agency and others.

The science and technology secretary, Peter Kyle (pictured), said that as AI systems continue to develop at greater speed, “it’s crucial we’re driving forward research to ensure this transformative technology is behaving in our interests”.

Charlotte Deane, executive chair at the Engineering and Physical Sciences Research Council, said the partnership “unites critical elements of the UK’s AI ecosystem, bridging the gap between fundamental discovery science and the practical challenges of AI alignment”.

The project is billed as funding research to ensure AI systems remain responsive to human oversight, and to identify and eliminate behaviours that may pose risks to society. Up to £1m will be available for researchers from disciplines including computer science and cognitive science, who will also have access to the computing resources of both Anthropic and Amazon Web Services; Dsit says this will enable “technical experiments beyond typical academic reach”.

AI and breakthrough innovation

“AI alignment is one of the most urgent and under-resourced challenges of our time,” said Geoffrey Irving, chief scientist at the AI Security Institute. “Progress is essential, but it’s not happening fast enough relative to the rapid pace of AI development.”

The project will also receive investment from private funders to accelerate commercial alignment solutions. Its advisory board includes Yoshua Bengio and Shafi Goldwasser, both winners of the Turing Award, widely regarded as the highest prize in computer science.

AI is a key component of the government’s current plans to boost economic growth. According to Dsit, the Alignment Project will help remove one of the biggest barriers to safe AI adoption: a lack of public trust.

The EPSRC, said Deane, is “dedicated to advancing the pioneering research that underpins AI safety, and by coordinating our efforts with the AI Security Institute and partners across the sector, we are strengthening the coherence of our national AI ecosystem”.

She added: “Together, we will ensure that the UK’s leadership in artificial intelligence drives both breakthrough innovation and tangible benefits for society.”




AI Research

(Policy Address 2025) HK earmarks HK$3B for AI research and talent recruitment – The Standard (HK)


AI Research

[2506.08171] Worst-Case Symbolic Constraints Analysis and Generalisation with Large Language Models


By Daniel Koh and four other authors

Abstract: Large language models (LLMs) have demonstrated strong performance on coding tasks such as generation, completion and repair, but their ability to handle complex symbolic reasoning over code remains underexplored. We introduce the task of worst-case symbolic constraints analysis, which requires inferring the symbolic constraints that characterise worst-case program executions; these constraints can be solved to obtain inputs that expose performance bottlenecks or denial-of-service vulnerabilities in software systems. We show that even state-of-the-art LLMs (e.g., GPT-5) struggle when applied directly to this task. To address this challenge, we propose WARP, a neurosymbolic approach that computes worst-case constraints on smaller concrete input sizes using existing program analysis tools, and then leverages LLMs to generalise these constraints to larger input sizes. Concretely, WARP comprises: (1) an incremental strategy for LLM-based worst-case reasoning, (2) a solver-aligned neurosymbolic framework that integrates reinforcement learning with SMT (Satisfiability Modulo Theories) solving, and (3) a curated dataset of symbolic constraints. Experimental results show that WARP consistently improves performance on worst-case constraint reasoning. Leveraging the curated constraint dataset, we use reinforcement learning to fine-tune a model, WARP-1.0-3B, which significantly outperforms size-matched and even larger baselines. These results demonstrate that incremental constraint reasoning enhances LLMs’ ability to handle symbolic reasoning and highlight the potential for deeper integration between neural learning and formal methods in rigorous program analysis.
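The pipeline described above, inferring exact worst-case constraints at small input sizes and then generalising them so an SMT solver can produce concrete worst-case inputs at larger sizes, can be illustrated with a toy sketch. The snippet below is not the authors’ WARP implementation: it assumes insertion sort as the program under analysis (whose worst case is a strictly decreasing input) and uses the Z3 solver, one common SMT backend, to solve a hand-generalised constraint at a size beyond those analysed concretely.

```python
# Hypothetical sketch of a WARP-style loop, not the paper's code.
# Small-size analysis of insertion sort would yield x0 > x1 (n=2) and
# x0 > x1 > x2 (n=3); the generalisation is a strictly decreasing chain.
from z3 import Ints, Solver, And, sat

def generalised_worst_case_constraint(xs):
    # Generalised constraint: every element exceeds its successor.
    return And([xs[i] > xs[i + 1] for i in range(len(xs) - 1)])

n = 8  # a size beyond the concretely analysed ones
xs = Ints(" ".join(f"x{i}" for i in range(n)))

solver = Solver()
solver.add(generalised_worst_case_constraint(xs))

if solver.check() == sat:
    model = solver.model()
    # A concrete input that should trigger insertion sort's quadratic worst case.
    print([model[x].as_long() for x in xs])
```

Solving the generalised constraint yields the kind of performance-exposing input the paper targets; the hard part, which WARP delegates to the LLM, is producing a sound generalisation when the constraint pattern is far less regular than this one.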

Submission history

From: Daniel Koh
[v1] Mon, 9 Jun 2025 19:33:30 UTC (1,462 KB)
[v2] Tue, 16 Sep 2025 10:35:33 UTC (1,871 KB)




AI Research

SIFThinker: Spatially-Aware Image Focus for Visual Reasoning


By Zhangquan Chen and six other authors

Abstract: Current multimodal large language models (MLLMs) still face significant challenges in complex visual tasks (e.g., spatial understanding, fine-grained perception). Prior methods have tried to incorporate visual reasoning; however, they fail to leverage attention correction with spatial cues to iteratively refine their focus on prompt-relevant regions. In this paper, we introduce SIFThinker, a spatially-aware “think-with-images” framework that mimics human visual perception. Specifically, SIFThinker enables attention correction and image-region focusing by interleaving depth-enhanced bounding boxes and natural language. Our contributions are twofold: first, we introduce a reverse-expansion-forward-inference strategy that facilitates the generation of interleaved image-text chains of thought for process-level supervision, which in turn leads to the construction of the SIF-50K dataset; second, we propose GRPO-SIF, a reinforced training paradigm that integrates depth-informed visual grounding into a unified reasoning pipeline, teaching the model to dynamically correct and focus on prompt-relevant regions. Extensive experiments demonstrate that SIFThinker outperforms state-of-the-art methods in spatial understanding and fine-grained visual perception, while maintaining strong general capabilities, highlighting the effectiveness of our method. Code: this https URL.
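The interleaved image-text chain described above can be made concrete with a small sketch of a single focus step: the model emits a natural-language thought together with a depth-annotated bounding box, the image is cropped to that region, and the crop is fed back for the next turn. This is a hypothetical illustration, not the paper’s code; FocusStep, apply_focus and the coordinates below are invented names and values.

```python
# Hypothetical sketch of one spatially-aware focus step, not the paper's code.
from dataclasses import dataclass
from PIL import Image

@dataclass
class FocusStep:
    thought: str                    # natural-language reasoning for this turn
    box: tuple[int, int, int, int]  # (left, top, right, bottom) in pixels
    depth: float                    # depth estimate attached to the region

def apply_focus(image: Image.Image, step: FocusStep) -> Image.Image:
    # Crop to the prompt-relevant region so the next turn attends only to it.
    return image.crop(step.box)

# A two-step chain: each crop (plus its thought) would be passed back to the
# MLLM, which emits the next depth-enhanced box, iteratively refining focus.
image = Image.new("RGB", (640, 480))  # placeholder for a real input image
chain = [
    FocusStep("locate the table in the foreground", (40, 200, 600, 470), 1.2),
    FocusStep("zoom to the cup on its left edge", (20, 40, 180, 220), 1.1),
]
for step in chain:
    image = apply_focus(image, step)
```

In the full pipeline this loop is what GRPO-SIF trains: rather than taking the boxes as fixed, the model learns to correct them across turns so that attention converges on the prompt-relevant region.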

Submission history

From: Zhangquan Chen
[v1] Fri, 8 Aug 2025 12:26:20 UTC (5,223 KB)
[v2] Thu, 14 Aug 2025 10:34:22 UTC (5,223 KB)
[v3] Sun, 24 Aug 2025 13:04:46 UTC (5,223 KB)
[v4] Tue, 16 Sep 2025 09:40:13 UTC (5,223 KB)


