

Penn Engineers Introduce Groundbreaking Generative AI Model for Antibiotic Discovery



What if artificial intelligence could revolutionize the development of life-saving antibiotics in the same way it has transformed the creation of art and text? This question is at the forefront of groundbreaking research conducted by scientists from the University of Pennsylvania. In a recent paper published in the journal Cell Biomaterials, researchers have unveiled a novel generative AI tool called AMP-Diffusion. This state-of-the-art technology has successfully generated tens of thousands of new antimicrobial peptides (AMPs), which are short chains of amino acids with the potential to combat bacterial infections. The implications of this research could be profound, particularly in the context of the escalating threat posed by antibiotic resistance.

AMP-Diffusion marks a significant advance over previous methodologies, which primarily relied on sifting through vast datasets to isolate promising antibiotic candidates. Prior breakthroughs at Penn had already demonstrated that AI could sort through massive amounts of biological data and identify antibiotic prospects. The current study goes a step further by demonstrating that AI can also generate antibiotic candidates from scratch. With the urgency of developing new antibiotics growing in the wake of alarming rates of antibiotic resistance, the promise of AMP-Diffusion could not be more timely.

Pranam Chatterjee, Assistant Professor in Bioengineering and Computer and Information Science at Penn, along with César de la Fuente, Presidential Associate Professor in Bioengineering and Chemical and Biomolecular Engineering, spearheaded this innovative project. Chatterjee emphasizes the ability to leverage AI not merely as a tool for analysis but as a creator capable of designing new antibiotic molecules. The collaborative efforts of both labs are foundational, blending their unique expertise to push the boundaries of what’s achievable in antibiotic discovery.

The methodology behind AMP-Diffusion mirrors techniques used in popular AI platforms like DALL·E and Stable Diffusion, which gained prominence for generating images from textual descriptions. But instead of “denoising” pixels, AMP-Diffusion applies an analogous process to sequences of amino acids—gradually refining random noise into biologically relevant sequences. The model begins with a chaotic array of possibilities and homes in on effective peptide structures.
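To make the “denoising” idea concrete, here is a toy sketch of a DDPM-style reverse-diffusion loop over a continuous per-residue embedding, ending with each position snapped to an amino acid. It is an illustration under loose assumptions, not the AMP-Diffusion architecture: the denoiser is an untrained placeholder (a real one would be trained and conditioned on the timestep), and the nearest-residue decoding step is a stand-in for whatever decoder the actual model uses.

```python
# Toy sketch: DDPM-style reverse diffusion over a continuous peptide embedding.
# The denoiser below is an untrained placeholder; a trained model would predict the
# noise added to embeddings of real peptides and would also condition on the timestep.
import torch
import torch.nn as nn

SEQ_LEN, DIM, T = 32, 64, 200           # peptide length, embedding size, diffusion steps
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"     # the 20 canonical residues

betas = torch.linspace(1e-4, 0.02, T)    # linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

denoiser = nn.Sequential(                # placeholder noise predictor
    nn.Linear(DIM, 128), nn.SiLU(), nn.Linear(128, DIM)
)

# Illustrative residue-embedding table, used only to decode the final latent.
residue_emb = torch.randn(len(AMINO_ACIDS), DIM)

@torch.no_grad()
def sample_peptide() -> str:
    x = torch.randn(SEQ_LEN, DIM)                        # start from pure noise
    for t in reversed(range(T)):
        eps = denoiser(x)                                # predicted noise at step t
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])  # posterior mean estimate
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise          # one reverse-diffusion step
    # Decode: snap each position to its nearest residue embedding (illustrative only).
    idx = torch.cdist(x, residue_emb).argmin(dim=1)
    return "".join(AMINO_ACIDS[i] for i in idx)

print(sample_peptide())
```

With an untrained denoiser this prints gibberish; the point of training is to make each reverse step pull the latent toward embeddings of biologically plausible peptides, so the loop converges on candidate AMPs rather than noise.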

While traditional generative models typically rely on predicting the next element in a sequence, AMP-Diffusion takes advantage of pre-existing protein language models, specifically ESM-2 developed by Meta. This foundational model had been trained on a staggering number of natural protein sequences, providing AMP-Diffusion with a comprehensive internal framework of how proteins are structured. By starting with this robust “mental map,” AMP-Diffusion can expedite the generation of candidate AMPs while ensuring that these candidates adhere to the biological realities governing effective peptides.
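The article notes that AMP-Diffusion works within ESM-2’s learned representation of proteins rather than predicting sequences token by token. As a rough illustration of what that representation looks like, the snippet below embeds a known antimicrobial peptide (magainin-2) with a publicly available ESM-2 checkpoint on Hugging Face; the particular checkpoint size is an assumption for the example, not necessarily the one used in the study.

```python
# Sketch: embedding peptide sequences with a public ESM-2 checkpoint (Hugging Face transformers).
import torch
from transformers import AutoTokenizer, EsmModel

CHECKPOINT = "facebook/esm2_t33_650M_UR50D"   # one of several released ESM-2 sizes
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = EsmModel.from_pretrained(CHECKPOINT)

peptides = ["GIGKFLHSAKKFGKAFVGEIMNS"]         # magainin-2, a well-studied natural AMP
inputs = tokenizer(peptides, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the per-residue embeddings into one vector per peptide
# (special tokens included, which is fine for a rough sketch).
embeddings = outputs.last_hidden_state.mean(dim=1)
print(embeddings.shape)   # (1, 1280) for this checkpoint
```

Vectors like these encode the statistical regularities ESM-2 absorbed from natural proteins, which is the “mental map” a latent diffusion model can denoise within instead of working directly on amino-acid strings.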

In total, AMP-Diffusion produced approximately 50,000 candidate sequences, an incredible volume far surpassing what conventional testing methods could evaluate. Recognizing the impracticality of testing every candidate, the researchers employed an AI tool previously developed by de la Fuente’s lab, known as APEX 1.1, to filter candidates based on various parameters. The screening process not only sought sequences with strong antimicrobial properties but also filtered out redundancies by eliminating peptides too similar to existing AMPs. This level of filtration ensures a diverse array of candidate types, thus broadening the scope of potential discoveries.
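The redundancy filter can be pictured as a simple similarity screen: discard any generated peptide that is too close to a sequence already catalogued as an AMP. The sketch below illustrates that idea with a character-level similarity ratio from Python’s standard library; the threshold, example sequences, and metric are stand-ins for demonstration, not the criteria actually used alongside APEX 1.1.

```python
# Minimal sketch of redundancy filtering: drop generated peptides too similar to known AMPs.
from difflib import SequenceMatcher

# Two well-known natural AMPs (magainin-2 and cecropin A) standing in for a real database.
KNOWN_AMPS = [
    "GIGKFLHSAKKFGKAFVGEIMNS",
    "KWKLFKKIEKVGQNIRDGIIKAGPAVAVVGQATQIAK",
]

def max_similarity(candidate: str, known: list[str]) -> float:
    """Highest character-level similarity between a candidate and any known AMP."""
    return max(SequenceMatcher(None, candidate, k).ratio() for k in known)

def keep_novel(candidates: list[str], threshold: float = 0.7) -> list[str]:
    """Keep only candidates whose similarity to every known AMP stays below the threshold."""
    return [c for c in candidates if max_similarity(c, KNOWN_AMPS) < threshold]

# Hypothetical generated sequences: the first is nearly identical to magainin-2 and is
# filtered out; the second is dissimilar enough to be kept.
generated = ["GIGKFLHSAKKFGKAFVGEIMNT", "FLPKILRKIVKAFGNLRA"]
print(keep_novel(generated))
```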

From the pool of candidates, the teams synthesized 46 of the most promising AMPs for comprehensive testing. The subsequent evaluations in human cells and animal models yielded remarkable results: two of these AMP candidates demonstrated efficacy comparable to that of FDA-approved antibiotics such as levofloxacin and polymyxin B. Astonishingly, these AI-generated molecules managed to treat skin infections in mice without causing any adverse effects, validating the effectiveness of machine learning in drug discovery.

The implications of these findings extend beyond antibiotic treatment; they represent a paradigm shift in how researchers can expedite the timeline of antibiotic discovery, which frequently spans many years. Chatterjee outlines this potential transformation, expressing hope that future iterations of AMP-Diffusion could allow for the crafting of drug candidates with even more specific therapeutic goals in mind. This could mean producing antibiotics tailored for particularly stubborn bacterial strains or even for different types of infections.

Looking ahead, the researchers plan to refine the capabilities of AMP-Diffusion, enhancing its ability to target specific properties in future designs to elevate the effectiveness of generated antibiotics. Each refinement brings scientists one step closer to realizing their ambition of reducing the antibiotic discovery timeline from years to mere days. Such efficiency could usher in a new era of drug development, one where generating effective antibiotics becomes a streamlined and rapidly attainable goal.

This research is not merely a demonstration of technology; it represents a broader vision of battling antibiotic resistance through innovation. As the urgency of developing new antibacterial treatments increases, AMP-Diffusion positions itself as a beacon of hope for medical science, providing the tools necessary to forge new paths in the fight against drug-resistant bacteria.

The study not only underscores the synergy between biology and artificial intelligence but also serves as a springboard for future investigations. By tapping into generative AI’s potential, researchers can explore uncharted territories in drug discovery and rekindle the fight against some of humanity’s most pressing health challenges. Ultimately, the integration of AI in the process illuminates a bright future, one where antibiotics can be designed, tested, and deployed rapidly, thereby offering a significant countermeasure to the perilous rise of antibiotic-resistant infections globally.

Subject of Research: Animals
Article Title: Generative latent diffusion language modeling yields anti-infective synthetic peptides
News Publication Date: 2-Sep-2025
Web References: DOI link
Image Credits: Sylvia Zhang

Keywords: Artificial Intelligence, Antibiotic Resistance, Antimicrobial Peptides, Drug Discovery, Generative AI, Bioengineering, Peptide Design, Innovation in Medicine, Computational Biology, Synthetic Biology

Tags: AI in drug development, AI-generated antibiotic candidates, AMP-Diffusion technology, antimicrobial peptides discovery, artificial intelligence in healthcare, breakthroughs in biomedical research, combating antibiotic resistance, future of antibiotics development, generative AI for antibiotic design, life-saving antibiotics innovation, novel AI tools in medicine, Penn University antibiotic research





Inside Austin’s Gauntlet AI, the Elite Bootcamp Forging “AI First” Builders



In the brave new world of artificial intelligence, talent is the new gold, and companies are in a frantic race to find it. While universities work to churn out computer science graduates, a new kind of school has emerged in Austin to meet the insatiable demand: Gauntlet AI.

Gauntlet AI bills itself as an elite training program. It’s a high-stakes, high-reward process designed to forge “AI-first” engineers and builders in a matter of weeks.

“We’re closer to Navy SEAL bootcamp training than a school,” said Ash Tilawat, Head of Product and Learning. “We take the smartest people in the world. We bring them into the same place for 1,000 hours over ten weeks and we make them go all in with building with AI.”

Austen Allred, the co-founder and CEO of Gauntlet AI, says when they claim to be looking for the smartest engineers in the world, it’s no exaggeration. The selection process is intensely rigorous.

“We accept around 2 percent of the applicants,” Allred explained. “We accept 98th percentile and above of raw intelligence, 95th percentile of coding ability, and then you start on The Gauntlet.”


The price of admission isn’t paid in dollars—there are no tuition fees. Instead, the cost is a student’s absolute, undivided attention.

“It is pretty grueling, but it’s invigorating and I love doing this,” said Nataly Smith, one of the “Gauntlet Challengers.”

Smith, whose passions lie in biotech and space, recently channeled her love for bioscience to complete one of the program’s challenges. Her team was tasked with building a project called “Geno.”

“It’s a tool where a person can upload their genomic data and get a statistical analysis of how likely they are to have different kinds of cancers,” Smith described.

Incredibly, her team built the AI-powered tool in just one week.

The ultimate prize waiting at the end of the grueling 10-week gauntlet is a guaranteed job offer with a starting salary of at least $200,000 a year. And hiring partners are already lining up to recruit challengers like Nataly.

“We very intentionally chose to partner with everything from seed-stage startups all the way to publicly traded companies,” said Brett Johnson, Gauntlet’s COO. “So Carvana is a hiring partner. Here in Austin, we have folks like Function Health. We have the Trilogy organization; we have Capital Factory just around the corner. We’re big into the Austin tech community and looking to double down on that.”

In a world desperate for skilled engineers, Gauntlet AI isn’t just training people; it’s manufacturing the very talent pipeline it believes will power the next wave of technological innovation.





Endangered languages AI tools developed by UH researchers




University of Hawaiʻi at Mānoa researchers have made a significant advance in studying how artificial intelligence (AI) understands endangered languages. This research could help communities document and maintain their languages, support language learning and make technology more accessible to speakers of minority languages.

The paper, by Kaiying Lin, a PhD graduate in linguistics from UH Mānoa, and Haopeng Zhang, an assistant professor in the Department of Information and Computer Sciences, introduces the first benchmark for evaluating large language models (AI systems that process and generate text) on low-resource Austronesian languages. The study focuses on three Formosan languages—Atayal, Amis and Paiwan—Indigenous languages of Taiwan that are at risk of disappearing.

Using a new benchmark called FORMOSANBENCH, Lin and Zhang tested AI systems on tasks such as machine translation, automatic speech recognition and text summarization. The findings revealed a large gap between AI performance on widely spoken languages such as English and on these smaller, endangered languages. Even when the models were given examples or fine-tuned with extra data, they struggled to perform well.
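To make that comparison concrete, the snippet below shows the kind of machine-translation scoring such a benchmark implies, using the sacrebleu package to compute BLEU and chrF for a model’s output against a reference translation. The sentences are placeholders rather than FORMOSANBENCH data, and the benchmark’s actual metrics and pipeline may differ.

```python
# Sketch: scoring machine-translation output with sacrebleu (pip install sacrebleu).
# The sentences are illustrative placeholders, not FORMOSANBENCH data.
import sacrebleu

references = [["The river is very clear today."]]   # gold English translations (one per source sentence)
hypotheses = ["The river very clear today."]        # model output to be scored

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(f"BLEU: {bleu.score:.1f}  chrF: {chrf.score:.1f}")
```

Running the same scoring on held-out Formosan test sets and on a high-resource language like English is what quantifies the performance gap the researchers report.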

“These results show that current AI systems are not yet capable of supporting low-resource languages,” Lin said.

Zhang added, “By highlighting these gaps, we hope to guide future development toward more inclusive technology that can help preserve endangered languages.”

The research team has made all datasets and code publicly available to encourage further work in this area. The preprint of the study is available online, and the study has been accepted to the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP) in Suzhou, China, a premier international conference in natural language processing.

The Department of Information and Computer Sciences is housed in UH Mānoa’s College of Natural Sciences, and the Department of Linguistics is housed in UH Mānoa’s College of Arts, Languages & Letters.





OpenAI reorganizes research team behind ChatGPT’s personality



OpenAI is reorganizing its Model Behavior team, a small but influential group of researchers who shape how the company’s AI models interact with people, TechCrunch has learned.

In an August memo to staff seen by TechCrunch, OpenAI’s chief research officer Mark Chen said the Model Behavior team — which consists of roughly 14 researchers — would be joining the Post Training team, a larger research group responsible for improving the company’s AI models after their initial pre-training.

As part of the changes, the Model Behavior team will now report to OpenAI’s Post Training lead Max Schwarzer. An OpenAI spokesperson confirmed these changes to TechCrunch.

The Model Behavior team’s founding leader, Joanne Jang, is also moving on to start a new project at the company. In an interview with TechCrunch, Jang says she’s building out a new research team called OAI Labs, which will be responsible for “inventing and prototyping new interfaces for how people collaborate with AI.”

The Model Behavior team has become one of OpenAI’s key research groups, responsible for shaping the personality of the company’s AI models and for reducing sycophancy — which occurs when AI models simply agree with and reinforce user beliefs, even unhealthy ones, rather than offering balanced responses. The team has also worked on navigating political bias in model responses and helped OpenAI define its stance on AI consciousness.

In the memo to staff, Chen said that now is the time to bring the work of OpenAI’s Model Behavior team closer to core model development. By doing so, the company is signaling that the “personality” of its AI is now considered a critical factor in how the technology evolves.

In recent months, OpenAI has faced increased scrutiny over the behavior of its AI models. Users strongly objected to personality changes made to GPT-5, which the company said exhibited lower rates of sycophancy but seemed colder to some users. This led OpenAI to restore access to some of its legacy models, such as GPT-4o, and to release an update to make the newer GPT-5 responses feel “warmer and friendlier” without increasing sycophancy.


OpenAI and all AI model developers have to walk a fine line to make their AI chatbots friendly to talk to but not sycophantic. In August, the parents of a 16-year-old boy sued OpenAI over ChatGPT’s alleged role in their son’s suicide. The boy, Adam Raine, confided some of his suicidal thoughts and plans to ChatGPT (specifically a version powered by GPT-4o), according to court documents, in the months leading up to his death. The lawsuit alleges that GPT-4o failed to push back on his suicidal ideations.

The Model Behavior team has worked on every OpenAI model since GPT-4, including GPT-4o, GPT-4.5, and GPT-5. Before starting the unit, Jang worked on projects such as DALL·E 2, OpenAI’s early image-generation tool.

Jang announced in a post on X last week that she’s leaving the team to “begin something new at OpenAI.” The former head of Model Behavior has been with OpenAI for nearly four years.

Jang told TechCrunch she will serve as the general manager of OAI Labs, which will report to Chen for now. However, it’s early days, and it’s not clear yet what those novel interfaces will be, she said.

“I’m really excited to explore patterns that move us beyond the chat paradigm, which is currently associated more with companionship, or even agents, where there’s an emphasis on autonomy,” said Jang. “I’ve been thinking of [AI systems] as instruments for thinking, making, playing, doing, learning, and connecting.”

When asked whether OAI Labs will collaborate on these novel interfaces with former Apple design chief Jony Ive — who’s now working with OpenAI on a family of AI hardware devices — Jang said she’s open to lots of ideas. However, she said she’ll likely start with research areas she’s more familiar with.

This story was updated to include a link to Jang’s post announcing her new position, which was released after this story was first published. We also clarified which models OpenAI’s Model Behavior team worked on.




