Building CERN’s AI Strategy


Artificial intelligence has been used at CERN since the 1980s; now a new Steering Committee is helping the Organization adopt it more widely

Global media coverage of artificial intelligence (AI) has recently exploded, but CERN’s excellence in the field can be traced as far back as 1987. However, the fast-evolving nature of AI makes it essential for the whole CERN community to develop the skills needed to use it competently, safely and ethically. The CERN AI Steering Committee (CAISC) was therefore set up in April 2025 to define a strategy and supporting governance structure that will promote a coherent and collaborative approach to AI across the Organization. CAISC, which incorporates many pre-existing CERN AI initiatives and strategies, presented its initial proposals in June during Council week.

The 2024 Nobel Prize in Physics emphasised how AI is carving out new routes to groundbreaking scientific discoveries, highlighting the strong relationship between physics and AI. Research at CERN has reflected this relationship, with AI currently being integrated into data processing, event simulations and numerous other processes, including improved triggers for the HL-LHC and accelerator controls. CERN’s AI research is also impacting wider society through projects such as CAFEIN™. Initially developed to detect accelerator anomalies, CAFEIN™ is now used to diagnose and predict brain pathologies, improving outcomes for stroke patients across Europe.

As the technological demands of particle physics research increase with the HL-LHC and beyond, the benefits of using AI are only likely to grow. A plethora of potential uses exists across the Organization, including large-scale software, device integration, commercial products and CERN-developed innovations. To avoid unnecessary duplication of effort, CAISC aims to engage experts and foster communication and collaboration, helping CERN to identify common AI activities, tools and infrastructure needs.

Wider adoption of the technology will also introduce new risks alongside its benefits, with some, such as data privacy, already being mitigated. To continue to address these challenges, CAISC plans to run an AI awareness campaign, identify training requirements and develop a new CERN policy for AI usage.

CERN’s unique environment and proven AI track record give the Organization a fantastic opportunity to shape the future of open, ethical AI driven by high-energy physics. Equipping tomorrow’s physicists and engineers with world-leading AI skills will play a vital role in sustaining CERN’s scientific excellence and talent pipeline. Efforts towards these goals will be outlined in an initial proposal for a unified CERN-wide AI strategy planned for the end of 2025. Built on strategies already independently produced by various sectors across CERN, this strategy will be firmly rooted in the Organization’s core values and evolve alongside European and Member State initiatives. CERN’s intelligent approach to artificial intelligence is under way.

CERN project wins AI for Good award

The award-winning CAFEIN™ project was presented at the AI for Good Summit, held in Geneva from 8 to 11 July 2025. (Image: CERN)

Congratulations to the CAFEIN™ project, which won an Innovate for Impact in Healthcare Award at the AI for Good Summit for its application to the TRUSTroke and UMBRELLA projects. CAFEIN™ uses a decentralised and secure approach to train machine-learning algorithms without exchanging confidential data. This technology transfer from accelerators to healthcare was initiated thanks to seed funding from CERN’s medical applications budget and developed solely with external funds. The improved model will now efficiently and sustainably support CERN accelerator operations and research.
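The article does not describe CAFEIN™’s internals, but decentralised training of this kind is commonly realised with federated learning: each participating site trains a model on its own private data, and only the model parameters are shared and averaged between rounds. The following Python sketch of federated averaging illustrates the idea under that assumption; the data, model and names are purely illustrative, not CAFEIN™’s actual code.

    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        # One site's local training: logistic regression via gradient descent.
        w = weights.copy()
        for _ in range(epochs):
            preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
            grad = X.T @ (preds - y) / len(y)      # gradient of the log-loss
            w -= lr * grad
        return w

    def federated_round(global_weights, sites):
        # Average locally trained weights, weighted by each site's sample count.
        updates = [local_update(global_weights, X, y) for X, y in sites]
        counts = np.array([len(y) for _, y in sites], dtype=float)
        return np.average(updates, axis=0, weights=counts)

    # Example: three "hospitals", each holding data that never leaves the site.
    rng = np.random.default_rng(0)
    sites = [(rng.normal(size=(50, 4)), rng.integers(0, 2, size=50))
             for _ in range(3)]

    w = np.zeros(4)
    for _ in range(10):                            # ten communication rounds
        w = federated_round(w, sites)
    print("global model weights:", w)

Because only weight vectors cross site boundaries, the confidential records themselves never leave the institutions that hold them.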

 




Captions rebrands as Mirage, expands beyond creator tools to AI video research


Captions, an AI-powered video creation and editing app for content creators, is rebranding to Mirage, the company announced on Thursday. The startup has raised over $100 million in venture capital to date at a valuation of $500 million.

The new name reflects the company’s broader ambitions to become an AI research lab focused on multimodal foundation models designed specifically for short-form video on platforms like TikTok, Reels, and Shorts. The company believes this approach will distinguish it from traditional AI models and competitors such as D-ID, Synthesia, and Hour One.

The rebranding will also unify the company’s offerings under one umbrella, bringing together the flagship creator-focused AI video platform, Captions, and the recently launched Mirage Studio, which caters to brands and ad production.

“The way we see it, the real race for AI video hasn’t begun. Our new identity, Mirage, reflects our expanded vision and commitment to redefining the video category, starting with short-form video, through frontier AI research and models,” CEO Gaurav Misra told TechCrunch.


The sales pitch behind Mirage Studio, which launched in June, focuses on enabling brands to create short advertisements without relying on human talent or large budgets. Users simply submit an audio file, and the AI generates video content from scratch, complete with an AI-generated background and custom AI avatars. They can also upload selfies to create an avatar using their likeness.

What sets the platform apart, according to the company, is its ability to produce AI avatars that have natural-looking speech, movements, and facial expressions. Additionally, Mirage says it doesn’t rely on existing stock footage, voice cloning, or lip-syncing. 

Mirage Studio is available under the business plan, which costs $399 per month for 8,000 credits. New users receive 50% off the first month. 


While these tools will likely benefit brands wanting to streamline video production and save some money, they also spark concerns around the potential impact on the creative workforce. The growing use of AI in advertisements has prompted backlash, as seen in a recent Guess ad in Vogue’s July print edition that featured an AI-generated model.

Additionally, as this technology becomes more advanced, distinguishing between real and deepfake videos becomes increasingly difficult. That is a bitter pill to swallow for many people, especially given how quickly misinformation can spread these days.

Mirage recently addressed its role in deepfake technology in a blog post. The company acknowledged the genuine risks of misinformation while also expressing optimism about the positive potential of AI video. It mentioned that it has put moderation measures in place to limit misuse, such as preventing impersonation and requiring consent for likeness use. 

However, the company emphasized that “design isn’t a catch-all” and that the real solution lies in fostering a “new kind of media literacy” where people approach video content with the same critical eye as they do news headlines.





Head of UK’s Turing AI Institute resigns after funding threat


Graham Fraser, Technology reporter

Dr Jean Innes (left) pictured with Foreign Secretary David Lammy (centre) and his French counterpart, Jean-Noel Barrot, at a meeting in London (Image: PA)

The chief executive of the UK’s national institute for artificial intelligence (AI) has resigned following staff unrest and a warning the charity was at risk of collapse.

Dr Jean Innes said she was stepping down from the Alan Turing Institute as it “completes the current transformation programme”.

Her position had come under pressure after the government demanded the centre change its focus to defence and threatened to pull its funding if it did not – leading to staff discontent and a whistleblowing complaint submitted to the Charity Commission.

Dr Innes, who was appointed chief executive in July 2023, said the time was right for “new leadership”.

The BBC has approached the government for comment.

The Turing Institute said its board was now looking to appoint a new CEO who will oversee “the next phase” to “step up its work on defence, national security and sovereign capabilities”.

Its work had once focused on AI and data science research in environmental sustainability, health and national security, but moved on to other areas such as responsible AI.

The government, however, wanted the Turing Institute to make defence its main priority, marking a significant pivot for the organisation.

“It has been a great honour to lead the UK’s national institute for data science and artificial intelligence, implementing a new strategy and overseeing significant organisational transformation,” Dr Innes said.

“With that work concluding, and a new chapter starting… now is the right time for new leadership and I am excited about what it will achieve.”

What happened at the Alan Turing Institute?

Founded in 2015 as the UK’s leading centre of AI research, the Turing Institute, which is headquartered at the British Library in London, has been rocked by internal discontent and criticism of its research activities.

A review last year by government funding body UK Research and Innovation found “a clear need for the governance and leadership structure of the Institute to evolve”.

At the end of 2024, 93 members of staff signed a letter expressing a lack of confidence in its leadership team.

In July, Technology Secretary Peter Kyle wrote to the Turing Institute to tell its bosses to focus on defence and security.

He said boosting the UK’s AI capabilities was “critical” to national security and should be at the core of the institute’s activities – and suggested it should overhaul its leadership team to reflect its “renewed purpose”.

He said further government investment would depend on the “delivery of the vision” he had outlined in the letter.

This followed Prime Minister Sir Keir Starmer’s commitment to increasing UK defence spending to 5% of national income by 2035, which would include investing more in military uses of AI.

Technology Secretary Peter Kyle wants the Alan Turing Institute to focus on defence (Image: Getty Images)

A month after Kyle’s letter was sent, staff at the Turing Institute warned the charity was at risk of collapse following the threat to withdraw its funding.

Workers raised a series of “serious and escalating concerns” in a whistleblowing complaint submitted to the Charity Commission.

Bosses at the Turing Institute then acknowledged recent months had been “challenging” for staff.






Global Working Group Releases Publication on Responsible Use of Artificial Intelligence in Creating Lay Summaries of Clinical Trial Results


New publication underscores the importance of human oversight, transparency, and patient involvement in AI-assisted lay summaries.

BOSTON, Sept. 4, 2025 /PRNewswire/ — The Center for Information and Study on Clinical Research Participation (CISCRP) today announced the publication of a landmark article, “Considerations for the Use of Artificial Intelligence in the Creation of Lay Summaries of Clinical Trial Results,” in Medical Writing (Volume 34, Issue 2, June 2025). Developed by the working group Patient-focused AI for Lay Summaries (PAILS), this comprehensive document addresses both the opportunities and risks of using artificial intelligence (AI) in the development of plain language communications of clinical trial results.


Lay summaries (LS) are essential tools for translating complex clinical trial results into plain language that is clear, accurate, and accessible to patients, caregivers, and the broader community. As AI technologies evolve, they hold promise for streamlining LS creation, improving efficiency, and expanding access to trial results. However, without thoughtful integration and oversight, AI-generated content can risk inaccuracies, cultural insensitivity, and loss of public trust.

For biopharma sponsors, CROs, and medical writing vendors, this framework offers clear best practices for integrating AI responsibly while maintaining compliance with EU and UK lay summary regulations and improving efficiency at scale.

Key recommendations from the working group include:

  • Human oversight is essential – AI should support, not replace, expert review to ensure accuracy, clarity, and cultural sensitivity.

  • Prompt engineering is a critical skillset – Thoughtful, specific prompts – including instructions on tone, reading level, terminology, structure, and disclaimers – can make the difference between usable and unusable drafts (an illustrative template follows this list).

  • Full transparency of AI involvement – Disclosing when and how AI was used builds public trust and complies with emerging regulations such as the EU Artificial Intelligence Act.

  • Robust governance frameworks – Policies should address bias, privacy, compliance, and ongoing monitoring of AI systems.

  • Patient and public involvement – Including patient perspectives in review processes improves relevance and comprehension.
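To make the prompt-engineering point concrete, here is a hypothetical Python template that spells out tone, reading level, terminology, structure and disclaimer instructions. It is a sketch of the idea only; neither the template nor its wording is endorsed by PAILS or CISCRP.

    # Hypothetical prompt template; illustrative only, not a PAILS recommendation.
    LAY_SUMMARY_PROMPT = """\
    You are helping a medical writer draft a lay summary of clinical trial results.

    Tone: neutral and non-promotional; do not overstate benefits or understate risks.
    Reading level: plain language for a general audience (about grade 6 to 8).
    Terminology: replace jargon with everyday words (say "high blood pressure",
    not "hypertension") and briefly define any unavoidable technical term.
    Structure: use these sections in order: Why was the study done? Who took part?
    What happened during the study? What were the main results?
    Disclaimer: end by stating that this summary is for information only and that
    readers should not change their treatment without speaking to their doctor.

    Trial results to summarise:
    {trial_results}
    """

    def build_prompt(trial_results: str) -> str:
        # Fill the template; a human expert must still review every draft.
        return LAY_SUMMARY_PROMPT.format(trial_results=trial_results)

Any draft produced this way would still pass through the human review, transparency disclosure and patient-involvement steps the group recommends.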

“This considerations document is the result of thoughtful collaboration among industry, academia, and CISCRP,” said Kimbra Edwards, Senior Director of Health Communication Services at CISCRP. “By combining human expertise with AI innovation, we can ensure that clinical trial information remains transparent, accurate, and truly patient-centered.”




