
Protein Dynamics Predicted Rapidly with Generative AI Model, BioEmu



A newly developed generative AI model is helping researchers explore protein dynamics with increased speed. The deep learning system, called BioEmu, predicts the full range of conformations a protein can adopt, modeling the structural ensembles that underlie protein function.

The work, described in a paper titled “Scalable emulation of protein equilibrium ensembles with generative deep learning,” was published in Science. Researchers developed BioEmu as a high-speed emulator of protein motion, capable of generating thousands of conformational states in a single GPU-hour, far faster than traditional molecular dynamics (MD) simulations.

Understanding protein function has long been a challenge, because function often hinges not on a single structure but on the combined ensemble of shapes a protein can adopt. Proteins frequently shift between conformations depending on their interactions or environment, behavior that other methods have struggled to capture accurately.

By integrating over 200 milliseconds of MD simulations, AlphaFold-predicted static structures, and experimental data on protein stability, the model learns to capture realistic equilibrium behavior. It then uses that knowledge to generate diverse conformations that reflect the functional landscape of a protein, without the need to run new simulations for each case.

BioEmu “captures diverse functional motions—including cryptic pocket formation, local unfolding, and domain rearrangements—and predicts relative free energies with 1 kcal/mol accuracy compared to millisecond-scale MD and experimental data,” the authors wrote.
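
That accuracy figure is easier to interpret with the underlying relation in hand: relative free energies follow directly from conformational populations via the Boltzmann formula ΔG = −RT ln(p_A / p_B). The short Python sketch below is illustrative only, not BioEmu code, and the 80/20 population split is an invented example.

    import math

    R_KCAL = 1.987e-3  # gas constant in kcal/(mol*K)

    def relative_free_energy(p_a, p_b, temperature_k=300.0):
        """Free-energy difference G_A - G_B in kcal/mol from state populations."""
        return -R_KCAL * temperature_k * math.log(p_a / p_b)

    # Invented example: 80% of generated samples in state A, 20% in state B.
    print(relative_free_energy(0.8, 0.2))  # about -0.83 kcal/mol: state A is favored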

The authors also highlight BioEmu’s property prediction fine-tuning (PPFT) algorithm, which enables the model’s outputs to match experimental measurements even when structural data are lacking.
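
The article doesn’t detail how the fine-tuning works internally, but the core idea of tuning a generator against measured properties can be sketched as a differentiable loss on ensemble averages. Everything below (the function name, the 0.7 and 0.9 values) is an invented illustration, not the authors’ implementation.

    import torch

    def property_matching_loss(per_sample_property, target):
        """Squared error between an ensemble-averaged property and an experimental value."""
        return (per_sample_property.mean() - target) ** 2

    # Invented example: each generated conformation gets a predicted folding
    # indicator in [0, 1]; the ensemble averages to 0.7, experiment says 0.9.
    preds = torch.full((10,), 0.7, requires_grad=True)
    loss = property_matching_loss(preds, 0.9)
    loss.backward()  # gradients like these would steer the generator toward experiment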

“BioEmu and MD simulation are complementary,” the authors noted. They pointed out that BioEmu was trained with MD simulation data and that the data generated by the model can effectively mimic MD distributions at a fraction of the cost.

However, BioEmu doesn’t model dynamics over time the way MD does; instead, it generates equilibrium ensembles far faster, making it ideal for high-throughput applications where speed and scale matter more than temporal resolution. It also doesn’t model interactions with membranes or ligands, or changing conditions such as temperature or pH. Rather, it generates snapshots of structures from the equilibrium distribution, a statistical view of how a protein behaves in its native environment.
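
The distinction matters in practice: MD walks through conformational space one correlated step at a time, while an emulator draws independent snapshots straight from the equilibrium distribution. The toy example below, which stands a one-dimensional double well in for a protein’s conformational landscape, is purely illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-2.0, 2.0, 201)
    energy = (x**2 - 1.0) ** 2       # double well: two "conformations" at x = -1, +1
    weights = np.exp(-energy / 0.3)  # unnormalized Boltzmann weights (reduced units)
    p_eq = weights / weights.sum()   # equilibrium distribution over states

    # Emulator-style sampling: independent draws from p_eq, no time ordering.
    snapshots = rng.choice(x, size=5000, p=p_eq)

    # MD-style sampling: a Metropolis random walk; consecutive frames are
    # correlated, so crossing the barrier between wells is a rare event.
    traj = [50]  # start near the left-well minimum at x = -1
    for _ in range(5000):
        i = traj[-1]
        j = int(np.clip(i + rng.integers(-2, 3), 0, len(x) - 1))
        accept = rng.random() < min(1.0, weights[j] / weights[i])
        traj.append(j if accept else i)

    print("independent snapshots in right well:", (snapshots > 0).mean())
    print("random-walk frames in right well:   ", (x[np.array(traj)] > 0).mean())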

Despite these limitations, BioEmu is well-suited for applications that require high-throughput predictions of protein structural changes, such as drug design, enzyme engineering, and variant impact prediction. In these areas, the ability to rapidly sample conformational space can reveal hidden sites for targeting or guide the design of more stable protein variants.

While BioEmu’s performance is constrained by its training data—particularly for proteins or conditions underrepresented in public datasets—the authors emphasized its scalability and cost-efficiency. Once trained, the model can generate ensembles for new proteins orders of magnitude faster than even accelerated MD simulations, effectively “amortizing” the initial cost of data generation.

BioEmu joins other generative machine learning models that are moving beyond structure prediction into function-level modeling. While AlphaFold improved our ability to determine static structures from sequence, models like BioEmu represent the next phase: understanding how proteins move, interact, and function in real biological systems.






Head of UK’s Turing AI Institute resigns after funding threat



Graham Fraser, Technology reporter


Dr Jean Innes (left) pictured with Foreign Secretary David Lammy (centre) and his French counterpart Jean-Noel Barrot at a meeting in London

The chief executive of the UK’s national institute for artificial intelligence (AI) has resigned following staff unrest and a warning the charity was at risk of collapse.

Dr Jean Innes said she was stepping down from the Alan Turing Institute as it “completes the current transformation programme”.

Her position came under pressure after the government demanded the centre change its focus to defence and threatened to pull its funding if it did not – leading to staff discontent and a whistleblowing complaint submitted to the Charity Commission.

Dr Innes, who was appointed chief executive in July 2023, said the time was right for “new leadership”.

The BBC has approached the government for comment.

The Turing Institute said its board was now looking to appoint a new CEO who will oversee “the next phase” to “step up its work on defence, national security and sovereign capabilities”.

Its work once focused on AI and data science research in environmental sustainability, health and national security, but it later moved on to other areas such as responsible AI.

The government, however, wanted the Turing Institute to make defence its main priority, marking a significant pivot for the organisation.

“It has been a great honour to lead the UK’s national institute for data science and artificial intelligence, implementing a new strategy and overseeing significant organisational transformation,” Dr Innes said.

“With that work concluding, and a new chapter starting… now is the right time for new leadership and I am excited about what it will achieve.”

What happened at the Alan Turing Institute?

Founded in 2015 as the UK’s leading centre of AI research, the Turing Institute, which is headquartered at the British Library in London, has been rocked by internal discontent and criticism of its research activities.

A review last year by government funding body UK Research and Innovation found “a clear need for the governance and leadership structure of the Institute to evolve”.

At the end of 2024, 93 members of staff signed a letter expressing a lack of confidence in its leadership team.

In July, Technology Secretary Peter Kyle wrote to the Turing Institute to tell its bosses to focus on defence and security.

He said boosting the UK’s AI capabilities was “critical” to national security and should be at the core of the institute’s activities – and suggested it should overhaul its leadership team to reflect its “renewed purpose”.

He said further government investment would depend on the “delivery of the vision” he had outlined in the letter.

This followed Prime Minister Sir Keir Starmer’s commitment to increasing UK defence spending to 5% of national income by 2035, which would include investing more in military uses of AI.


Technology Secretary Peter Kyle wants the Alan Turing Institute to focus on defence

A month after Kyle’s letter was sent, staff at the Turing Institute warned the charity was at risk of collapse, after the threat to withdraw its funding.

Workers raised a series of “serious and escalating concerns” in a whistleblowing complaint submitted to the Charity Commission.

Bosses at the Turing Institute then acknowledged recent months had been “challenging” for staff.





Global Working Group Releases Publication on Responsible Use of Artificial Intelligence in Creating Lay Summaries of Clinical Trial Results



New publication underscores the importance of human oversight, transparency, and patient involvement in AI-assisted lay summaries.

BOSTON, Sept. 4, 2025 /PRNewswire/ — The Center for Information and Study on Clinical Research Participation (CISCRP) today announced the publication of a landmark article, “Considerations for the Use of Artificial Intelligence in the Creation of Lay Summaries of Clinical Trial Results”, in Medical Writing (Volume 34, Issue 2, June 2025). Developed by the working group Patient-focused AI for Lay Summaries (PAILS), this comprehensive document addresses both the opportunities and risks of using artificial intelligence (AI) in the development of plain language communications of clinical trial results.


Lay summaries (LS) are essential tools for translating complex clinical trial results into plain language that is clear, accurate, and accessible to patients, caregivers, and the broader community. As AI technologies evolve, they hold promise for streamlining LS creation, improving efficiency, and expanding access to trial results. However, without thoughtful integration and oversight, AI-generated content can risk inaccuracies, cultural insensitivity, and loss of public trust.

For biopharma sponsors, CROs, and medical writing vendors, this framework offers clear best practices for integrating AI responsibly while maintaining compliance with EU and UK lay summary regulations and improving efficiency at scale.

Key recommendations from the working group include:

  • Human oversight is essential – AI should support, not replace, expert review to ensure accuracy, clarity, and cultural sensitivity.

  • Prompt engineering is a critical skill set – Thoughtful, specific prompts (including instructions on tone, reading level, terminology, structure, and disclaimers) can make the difference between usable and unusable drafts.

  • Full transparency of AI involvement – Disclosing when and how AI was used builds public trust and complies with emerging regulations such as the EU Artificial Intelligence Act.

  • Robust governance frameworks – Policies should address bias, privacy, compliance, and ongoing monitoring of AI systems.

  • Patient and public involvement – Including patient perspectives in review processes improves relevance and comprehension.

“This considerations document is the result of thoughtful collaboration among industry, academia, and CISCRP,” said Kimbra Edwards, Senior Director of Health Communication Services at CISCRP. “By combining human expertise with AI innovation, we can ensure that clinical trial information remains transparent, accurate, and truly patient-centered.”




Artificial intelligence is here. Will it replace teachers?


(NEW YORK) — Many parents, school districts and the federal government alike have embraced artificial intelligence this back-to-school season, but some experts warn artificial intelligence could widen the teacher shortage by eliminating jobs.

In a Pew Research Center study released last spring, 31% of AI experts, whose work or research focuses on the topic, said they expected artificial intelligence to lead to fewer jobs for teachers. Nearly a third of the experts surveyed predicted that AI will place teaching jobs “at risk” over the next 20 years, according to the Pew Research study.

The warning comes after the Learning Policy Institute — an organization that conducts independent research to improve education and policy practices — in July issued an overview of teacher shortages, which estimated that about one in eight teaching positions in 2025 are either unfilled or filled by teachers not fully certified for their assignments.

Indiana’s 2024 Teacher of the Year Eric Jenkins suggested AI could end up replacing “some parts” of teaching, but as a tool rather than a replacement for teachers.

Idaho Superintendent of Public Instruction Debbie Critchfield emphasized that using AI to replace teachers amid the long-standing staffing shortage shouldn’t be considered.

“In no universe do I think that AI is going to replace a teacher,” Critchfield told ABC News.

“The teacher is the most important part and component of the classroom, but [AI] is a very useful tool in helping them provide the best educational environment that they can in the classroom,” she said.

The White House encourages K-12 students to use AI. While the Trump administration hasn’t directly addressed whether AI could replace teachers, the administration has launched its own action plan on the technology, which says “AI will improve the lives of Americans by complementing their work — not replacing it.”

Last week, first lady Melania Trump launched an AI contest challenging students to develop projects that use AI to address community challenges. Education Secretary Linda McMahon endorsed the challenge.

“AI has the potential to revolutionize education, drive meaningful learning outcomes, and prepare students for tomorrow’s challenges,” McMahon wrote in a post on X.

Teachers say they offer what AI can’t: connection
Nearly three years after the launch of ChatGPT (short for Chat Generative Pre-Trained Transformer), most US states have developed guidance on AI use in schools.

Many districts told ABC News that they are embracing the technology so long as it is used appropriately and with academic integrity, in line with local education agency guidance. Critchfield even downplayed concerns that AI use in schools encourages cheating.

“Teachers can tell if you were writing like a seventh grader on Wednesday and then, all of a sudden, your paper you turn in on a Friday sounds like your post-doctorate in philosophy,” she said. “They know how to tell those differences.”

But in the wake of the pandemic, Thomas Toch, the director of FutureEd, an education policy center at Georgetown University, argued students need connection – to their peers, family and education tools such as AI chatbots – more than ever. Still, Toch rejected the full-time use of AI in place of humans.

“The loss of that connection during the pandemic, when kids were learning virtually, created widespread mental-health challenges,” Toch told ABC News. “The notion that, you know, a machine will be the only entity that interacts with kids is problematic in that regard.”

Education experts such as Toch contend K-12 education has “perpetual” teacher shortages, with about a half-dozen areas in need, including science, technology, engineering and mathematics (STEM) and special education. The shortages have plagued the workforce for many years now, educators have told ABC News, with many of them citing strict time demands, persistent behavioral issues and a lack of administrative support, among other obstacles.

Toch and Jenkins told ABC News they both appreciate AI for the powerful tool it can be in assisting teachers. It helps teachers plan lessons and grade students’ essays, and serves as a “time saver” that helps them do their jobs better, according to Toch.

Preparing educators to work with AI tools
Jenkins said AI is inevitable and that he believes teachers need to lean in and embrace its capabilities.

“I don’t think we can put our head in the sand about it,” Jenkins told ABC News. “I don’t think that it’s necessarily going to replace teachers because teachers can offer something that AI can’t, which is a connection, like authentic connection and community.”

Jenkins argued the chatbots lack the human element of what teachers do: making sure that students feel seen and heard. He said that is not going away.

With AI’s presence in education, Jenkins added, “it’s going to make those moments even more important.”

In Idaho, Critchfield said she has been excited about students and educators using the technology, but suggested the challenge ahead is ensuring AI is seen as a tool and not a negative. According to Critchfield, using AI wisely can help ease the shortage by increasing teacher retention and reducing educators’ workloads.

“How are we preparing and training our teachers to use [AI] so that we don’t add new problems as we’re trying to solve some other problems?” Critchfield said.

Ultimately, Critchfield said she doesn’t see AI as a boogeyman that is going to eliminate jobs, but she stressed that teachers who know AI could replace those who are less familiar with the technology.

After teachers in his district suggested banning ChatGPT just a few years ago, School District of Philadelphia Superintendent Tony Watlington told ABC News that instead of removing AI, Philadelphia is now learning from it together. The school district is implementing AI 101 Training for its teachers, school leaders, and superintendent through a partnership with the University of Pennsylvania’s Graduate School of Education.

Watlington said it’s about “getting people around the table, and we are learning together.”

“We’re not hiding from AI,” Watlington said. “We’re also thinking about its implications and we’re really paying attention to what the prospective unintended consequences could be as well.”

Watlington added: “I think that’s the responsible way to think about artificial intelligence.”

Copyright © 2025, ABC Audio. All rights reserved.


