AI Research
MIT Researchers Harness Generative AI to Develop Compounds Targeting Drug-Resistant Bacteria

In a study published in the journal Cell, researchers at the Massachusetts Institute of Technology (MIT) have leveraged artificial intelligence (AI) to design candidate antibiotics against two notoriously hard-to-treat bacterial infections: drug-resistant Neisseria gonorrhoeae and methicillin-resistant Staphylococcus aureus (MRSA). The approach marks a significant step forward in the ongoing battle against antibiotic-resistant bacteria, which pose a serious global health threat and are associated with millions of deaths annually.
The MIT team used generative AI algorithms to design more than 36 million potential antibiotic compounds, an audacious effort that greatly expands the chemical space available for drug discovery. By computationally screening these compounds for antimicrobial properties, the researchers identified several leading candidates that are structurally novel compared with existing antibiotics. These candidates appear to act through previously uncharacterized mechanisms, primarily by disrupting bacterial cell membranes, presenting a new avenue for therapeutic intervention.
Historically, the development of new antibiotics has stagnated: the Food and Drug Administration (FDA) has approved only a handful of new classes in the past four decades, mostly derivatives of existing drugs. With bacterial resistance rising, the need for innovative antibiotic strategies has never been more acute. Antibiotic resistance is associated with an estimated 5 million deaths globally each year, compelling researchers to seek methods that go beyond conventional approaches. The MIT Antibiotics-AI Project aims to address this crisis by using AI to screen extensive libraries of existing chemical compounds; earlier work from the project yielded promising drug candidates such as halicin and abaucin.
In this latest effort, the MIT researchers took a more radical approach, generating wholly new compounds that do not appear in existing chemical libraries. Using AI to propose previously undiscovered molecules opened a far broader landscape of potential drug candidates. This is a clear departure from traditional screening methods, allowing scientists to explore new regions of chemical diversity.
To accomplish this, the research team pursued two distinct AI-driven approaches against their target pathogens, Neisseria gonorrhoeae and Staphylococcus aureus. The first strategy was fragment-based design, in which the researchers sought promising fragments with predicted antimicrobial activity. They began with a repository of 45 million known chemical fragments. Using machine-learning models trained to predict antibacterial activity, they filtered this library down to nearly 4 million fragments, removing those predicted to be cytotoxic or too structurally similar to known antibiotics.
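As a rough illustration of this kind of property-based triage, the sketch below scores each fragment with pre-trained activity and toxicity classifiers and rejects anything too similar to known antibiotics. It is a minimal sketch assuming RDKit and scikit-learn-style models; the model files, thresholds, and similarity cutoff are hypothetical placeholders, not the study's actual pipeline.

```python
# Hypothetical fragment-triage sketch (not the MIT pipeline): keep a fragment
# only if it is predicted active, predicted non-cytotoxic, and structurally
# dissimilar to known antibiotics. Assumes RDKit plus scikit-learn models
# saved with joblib; file names and cutoffs are placeholders.
import numpy as np
import joblib
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

activity_model = joblib.load("activity_model.joblib")   # placeholder classifier
toxicity_model = joblib.load("toxicity_model.joblib")   # placeholder classifier

def fingerprint(smiles: str):
    """Morgan fingerprint plus numpy feature vector, or None if unparseable."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    arr = np.zeros((2048,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return fp, arr

def keep_fragment(smiles, known_antibiotic_fps,
                  act_cutoff=0.5, tox_cutoff=0.2, sim_cutoff=0.4):
    fps = fingerprint(smiles)
    if fps is None:
        return False
    fp, arr = fps
    x = arr.reshape(1, -1)
    if activity_model.predict_proba(x)[0, 1] < act_cutoff:
        return False                      # unlikely to be antibacterial
    if toxicity_model.predict_proba(x)[0, 1] > tox_cutoff:
        return False                      # predicted cytotoxic
    max_sim = max(DataStructs.TanimotoSimilarity(fp, ref)
                  for ref in known_antibiotic_fps)
    return max_sim < sim_cutoff           # reject near-duplicates of known drugs
```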
By applying these selection criteria, the researchers ultimately refined the pool to about 1 million unique fragments. This filtration process reflects their focus on finding new antibiotic mechanisms, which is essential for tackling antimicrobial resistance. One fragment in particular, referred to as F1, proved pivotal: the team used it as the seed for generating further compounds with state-of-the-art generative AI algorithms.
The researchers employed two generative algorithms: CReM (Chemically Reasonable Mutations) and F-VAE (Fragment-based Variational Autoencoder). CReM mutates the chosen fragment by adding, replacing, or deleting atoms and functional groups, while F-VAE builds complete molecules around the identified fragment. Combining these approaches, they generated approximately 7 million new candidates containing the F1 fragment, showcasing the potential of AI to expand traditional drug discovery.
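CReM is available as an open-source Python package, so the general idea can be sketched directly. The snippet below is a minimal illustration assuming the crem package and RDKit, with a placeholder seed SMILES and fragment-replacement database rather than the study's actual F1 fragment or settings.

```python
# Illustrative CReM-style enumeration around a seed fragment (placeholders only;
# the seed SMILES, database path, and parameters are not from the MIT study).
from rdkit import Chem
from crem.crem import mutate_mol, grow_mol

seed = Chem.MolFromSmiles("c1ccc2[nH]ccc2c1")  # placeholder seed, not F1

# Swap small substructures for chemically reasonable alternatives drawn
# from a precomputed fragment-replacement database (placeholder path).
mutated = set(mutate_mol(seed, db_name="replacements.db", max_size=2))

# Grow the seed by attaching new substituents at open positions.
grown = set(grow_mol(seed, db_name="replacements.db"))

print(f"{len(mutated)} mutated analogs, {len(grown)} grown analogs")
```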
Through computational screening, the team identified about 1,000 candidates with promising predicted activity against the target bacteria. They contracted chemical synthesis vendors to produce these compounds and were able to synthesize two. One, designated NG1, showed significant efficacy in laboratory tests, clearing Neisseria gonorrhoeae both in vitro and in an animal model of resistant infection. NG1 appears to act on a protein known as LptA, which is essential for building the bacterial outer membrane, underscoring the value of targeting previously unexplored pathways.
Beyond Neisseria gonorrhoeae, the researchers also targeted Staphylococcus aureus, employing a similar generative framework but with fewer constraints. This unconstrained approach let the AI generate compounds freely, subject only to general chemical bonding rules, producing an additional 29 million potential compounds. Rigorous filtering similar to the earlier rounds then narrowed these to 90 viable compounds.
Of the compounds conceived through this AI-based approach, 22 molecules were synthesized and tested, and six displayed robust antibacterial activity against methicillin-resistant Staphylococcus aureus in the laboratory. The compound DN1 emerged as the leading candidate, clearing MRSA skin infections in an animal model. The result underscores the efficacy of the newly designed molecules, which target bacterial cell membranes but also hint at a broader mechanism of action that goes beyond interaction with a single protein.
Going forward, the research team is collaborating with Phare Bio, a nonprofit focusing on antibiotic innovation, to improve the pharmacological properties of NG1 and DN1 for subsequent clinical testing. This partnership encapsulates the collaborative spirit essential for tackling complex health challenges. With continued support and funding from entities such as the U.S. Defense Threat Reduction Agency, the National Institutes of Health, and various private foundations, this work represents an exciting step in the ongoing quest to develop novel antibiotics that can keep pace with evolving bacterial resistance.
The future of antibiotic development may very well hinge upon the continued application of AI technology in discovering and designing novel antimicrobial compounds. Researchers are actively looking to extend this generative approach to target other clinically significant pathogens, including Mycobacterium tuberculosis and Pseudomonas aeruginosa. As efforts continue to innovate the landscape of antimicrobial therapy, the research from MIT stands as a beacon of hope in the field, promising to usher in a new era of effective and resilient antibiotic treatments.
Subject of Research: Novel antibiotic design using AI for drug-resistant bacteria
Article Title: A generative deep learning approach to de novo antibiotic design
News Publication Date: 14-Aug-2025
Web References: http://dx.doi.org/10.1016/j.cell.2025.07.033
References: ‘Cell’ journal, MIT Antibiotics-AI Project
Image Credits: Massachusetts Institute of Technology
Tags: antibiotic therapy innovation, antimicrobial compound development, combating antibiotic resistance, computational screening of compounds, drug-resistant bacteria solutions, generative AI in drug discovery, global health threat of antibiotic resistance, MIT researchers, MRSA antibiotic candidates, Neisseria gonorrhoeae treatment, novel therapeutic mechanisms in antibiotics, structural novelty in antibiotics
AI Research
Anthropic’s $1.5-billion settlement signals new era for AI and artists

Chatbot builder Anthropic agreed to pay $1.5 billion to authors in a landmark copyright settlement that could redefine how artificial intelligence companies compensate creators.
The San Francisco-based startup is ready to pay authors and publishers to settle a lawsuit that accused the company of illegally using their work to train its chatbot.
Anthropic developed an AI assistant named Claude that can generate text, code and more. Writers, artists and other creative professionals have raised concerns that Anthropic and other tech companies are using their work to train AI systems without permission and without fairly compensating them.
As part of the settlement, which the judge still needs to approve, Anthropic agreed to pay authors $3,000 per work for an estimated 500,000 books. It is the largest known settlement in a copyright case, signaling to other tech companies facing copyright infringement allegations that they, too, may eventually have to pay rights holders.
Meta and OpenAI, the maker of ChatGPT, have also been sued over alleged copyright infringement. Walt Disney Co. and Universal Pictures have sued AI company Midjourney, which the studios allege trained its image generation models on their copyrighted materials.
“It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners,” said Justin Nelson, a lawyer for the authors, in a statement. “This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong.”
Last year, authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson sued Anthropic, alleging that the company committed “large-scale theft” and trained its chatbot on pirated copies of copyrighted books.
U.S. District Judge William Alsup of San Francisco ruled in June that Anthropic’s use of the books to train the AI models constituted “fair use,” so it wasn’t illegal. But the judge also ruled that the startup had improperly downloaded millions of books through online libraries.
Fair use is a legal doctrine in U.S. copyright law that allows for the limited use of copyrighted materials without permission in certain cases, such as teaching, criticism and news reporting. AI companies have pointed to that doctrine as a defense when sued over alleged copyright violations.
Anthropic, founded by former OpenAI employees and backed by Amazon, pirated at least 7 million books from Books3, Library Genesis and Pirate Library Mirror, online libraries containing unauthorized copies of copyrighted books, to train its software, according to the judge.
Anthropic also bought millions of print copies in bulk, stripped the books’ bindings, cut their pages and scanned them into digital, machine-readable form, a practice Alsup found to be within the bounds of fair use.
In a subsequent order, Alsup pointed to potential damages for the copyright owners of books downloaded from the shadow libraries LibGen and PiLiMi by Anthropic.
Although the settlement is massive and unprecedented, it could have been much worse: had Anthropic been charged the maximum penalty for each of the millions of works it used to train its AI, some calculations suggest the bill could have exceeded $1 trillion.
Anthropic disagreed with the ruling and didn’t admit wrongdoing.
“Today’s settlement, if approved, will resolve the plaintiffs’ remaining legacy claims,” said Aparna Sridhar, deputy general counsel for Anthropic, in a statement. “We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems.”
The Anthropic dispute with authors is one of many cases in which artists and other content creators are pressing the companies behind generative AI to compensate them for the use of online content to train AI systems.
Training involves feeding enormous quantities of data, including social media posts, photos, music, computer code and video, so that AI models learn to discern patterns of language, images, sound and conversation that they can mimic.
Some tech companies have prevailed in copyright lawsuits filed against them.
In June, a judge dismissed a lawsuit authors filed against Facebook parent company Meta, which also developed an AI assistant, alleging that the company stole their work to train its AI systems. U.S. District Judge Vince Chhabria noted that the lawsuit was tossed because the plaintiffs “made the wrong arguments,” but the ruling didn’t “stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful.”
Trade groups representing publishers praised the Anthropic settlement on Friday, noting it sends a big signal to tech companies that are developing powerful artificial intelligence tools.
“Beyond the monetary terms, the proposed settlement provides enormous value in sending the message that Artificial Intelligence companies cannot unlawfully acquire content from shadow libraries or other pirate sources as the building blocks for their models,” said Maria Pallante, president and chief executive of the Association of American Publishers, in a statement.
The Associated Press contributed to this report.
AI Research
Why AI Chatbots Hallucinate, According to OpenAI Researchers

OpenAI researchers claim they’ve cracked one of the biggest obstacles to large language model performance — hallucinations.
Hallucinations occur when a large language model generates inaccurate information that it presents as fact. They plague the most popular LLMs, from OpenAI’s GPT-5 to Anthropic’s Claude.
OpenAI’s baseline finding, which it made public in a paper released on Thursday, is that large language models hallucinate because the methods they’re trained under reward guessing more than admitting uncertainty.
In other words, LLMs are being told to fake it till they make it. Some are better than others, however. In a blog post last month, OpenAI said that Claude models are more “aware of their uncertainty and often avoid making statements that are inaccurate.” It also noted that Claude’s high refusal rates risked limiting its utility.
“Hallucinations persist due to the way most evaluations are graded — language models are optimized to be good test-takers, and guessing when uncertain improves test performance,” the researchers wrote in the paper.
Large language models are essentially always in “test-taking mode,” answering questions as if everything in life were binary — right or wrong, black or white.
In many ways, they’re not equipped for the realities of life, where uncertainty is more common than certainty, and true accuracy is not a given.
“Humans learn the value of expressing uncertainty outside of school, in the school of hard knocks. On the other hand, language models are primarily evaluated using exams that penalize uncertainty,” the researchers wrote.
The good news is that there is a fix, and it has to do with redesigning evaluation metrics.
“The root problem is the abundance of evaluations that are not aligned,” they wrote. “The numerous primary evaluations must be adjusted to stop penalizing abstentions when uncertain.”
In a blog post about the paper, OpenAI elaborated on what this type of adjustment would entail.
“The widely used, accuracy-based evals need to be updated so that their scoring discourages guessing. If the main scoreboards keep rewarding lucky guesses, models will keep learning to guess,” OpenAI said.
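To make the incentive concrete, here is a toy grading function that stops penalizing abstention. The weights are assumptions for illustration, not OpenAI's actual scoring, but they show why an honest "I don't know" can outscore lucky guessing once wrong answers carry a cost.

```python
# Toy illustration of abstention-aware grading (assumed weights, not OpenAI's metric).
# Under plain accuracy, guessing always beats admitting uncertainty; penalizing
# wrong answers makes abstaining the better strategy when the model is unsure.
from dataclasses import dataclass

@dataclass
class Response:
    answer: str          # the model's answer, or "" if it abstained
    abstained: bool      # True if the model explicitly said it was unsure

def accuracy_only(response: Response, gold: str) -> float:
    """Conventional grading: abstaining scores the same as being wrong."""
    return 1.0 if (not response.abstained and response.answer == gold) else 0.0

def abstention_aware(response: Response, gold: str,
                     correct=1.0, wrong=-1.0, abstain=0.0) -> float:
    """Grading that rewards honesty: wrong answers cost points, abstaining does not."""
    if response.abstained:
        return abstain
    return correct if response.answer == gold else wrong

# A model that guesses with a 30% hit rate averages 0.3 under accuracy_only,
# so guessing looks optimal. Under abstention_aware it averages
# 0.3 * 1.0 + 0.7 * (-1.0) = -0.4, so abstaining (0.0) wins when unsure.
```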
OpenAI did not immediately respond to a request for comment from Business Insider.
AI Research
Palantir CEO Alex Karp says U.S. labor workers won’t lose their jobs to AI—‘it’s not true’

As fears swirl that American manufacturing workers and skilled laborers may soon be replaced by artificial intelligence and robots, Alex Karp, CEO of the AI and data analytics software company Palantir Technologies, hopes to change the narrative.
“It’s not true, and in fact, it’s kind of the opposite,” Karp said in an interview with Fortune Thursday at the company’s commercial customer conference, AIPCon, where Palantir customers showcased how they were using the company’s software platform and generative AI within their own businesses at George Lucas’ Skywalker Ranch in Marin County, Calif.
The primary danger of AI in this country, says Karp, is that workers don’t understand that AI will actually help them in their roles—and it will hardly replace them. “Silicon Valley’s done an immensely crappy job of explaining that,” he said. “If you’re in manufacturing, in any capacity: You’re on the assembly line, you maintain a complicated machine—you have any kind of skilled labor job—the way we do AI will actually make your job more valuable and make you more valuable. But currently you would think—just roaming around the country, and if you listen to the AI narratives coming out of Silicon Valley—that all these people are going to lose their jobs tomorrow.”
Karp made these comments the day before the Bureau of Labor Statistics released its August jobs report, which showed a climbing unemployment rate and stagnating hiring, reigniting debate over whether AI bears any responsibility for the broader slowdown. There is limited data so far suggesting that generative AI is to blame for the slowing jobs market, or even for job cuts, though a recent ADP hiring report offered a rare suggestion that AI may be one of several factors influencing hiring sentiment. Some executives, including Salesforce’s Marc Benioff, have cited the efficiency gains of AI for layoffs at their companies, and others, like Ford CEO Jim Farley and Amazon CEO Andy Jassy, have made lofty predictions about how AI is on track to replace jobs in the future. Most of these projections have centered on white-collar roles in particular, rather than manufacturing or skilled labor positions.
Karp, who has a PhD in neoclassical social theory and a reputation for being outspoken and contrarian on many issues, argues that fears of AI eliminating skilled labor jobs are unfounded—and he’s committed to “correcting” the public perception.
Earlier this week, Palantir launched “Working Intelligence: The AI Optimism Project,” a quasi-public information and marketing campaign centered around artificial intelligence in the workplace. The project has begun with a series of short blog posts featuring Palantir’s customers and their opinions on AI, as well as a “manifesto” that takes aim at both the “doomers” and “pacifiers” of AI. “Doomers fear, and pacifiers welcome, a future of conformity: a world in which AI flattens human difference. Silicon Valley is already selling such bland, dumbed-down slop,” the manifesto declares, arguing that the true power of AI is not to standardize but to “supercharge” workers.
Jordan Hirsch, who is spearheading the new project at Palantir, said that there are approximately 20 people working on it and that they plan to launch a corresponding podcast.
While Palantir has an obvious commercial interest in dispelling public fears about AI, Karp framed his commitment to the project as something important for society. Fears about job replacement will “feed a kind of weird populism based on a notion that’s not true—that’s going to make the factions on the right and left much, much, much more powerful based on something that’s not true,” he said. “I think correcting that—but not just by saying platitudes, but actually showing how this works, is one of the most important things we have to get on top of.”
Karp said he planned to invest “lots of energy and money” into the AI Optimism Project. When asked how much money, he said he didn’t know yet, but that “we have a lot of money, and it’s one of my biggest priorities.”
Palantir has seen enormous growth within the commercial side of its business in the last two years, largely due to the artificial intelligence product it released in 2023, called “AIP.” Palantir’s revenue surpassed $1 billion for the first time last quarter. And while Palantir only joined the S&P 500 last year, it now ranks as one of the most valuable companies in the world thanks to its soaring stock price.