AI Research
Democratizing Artificial Intelligence in Pre-Clinical Drug Discovery
“Most breakthrough discoveries are made based on evidence that’s already there,” Ming-Ming Zhou, PhD, asserted in his New York City office as we overlooked a cloudy Central Park. “It takes somebody to connect the dots in a different way to solve the problem.”
Zhou began his faculty career in 1997 and is currently a professor in physiology and biophysics at the Icahn School of Medicine at Mount Sinai. His lab designs chemical compounds to modulate chromatin-mediated gene transcription for therapeutic applications. Zhou’s seminal work in chemical targeting of the bromodomain, a set of proteins that recognize acetylated lysine in histones, opened the pharmaceutical field of bromodomain drug discovery to address a wide array of cancers and inflammatory disorders.
Reflecting on the past three decades of therapeutic research, Zhou says structure-based drug discovery has now transformed into artificial intelligence (AI)-aided drug discovery, a booming interdisciplinary field transforming pre-clinical pipelines from building on literature-defined disease targets to massive searches through big data for never-before-seen leads. According to Zhou, this paradigm shift is what Mount Sinai’s new AI Small Molecule Drug Discovery Center is tackling head-on.
Building a home for AI innovation
Led by Avner Schlessinger, PhD, professor of pharmacological sciences and associate director of the Mount Sinai Center for Therapeutics Discovery, the AI initiative launched in April as New York’s newest hub leveraging computational approaches for pre-clinical development.
To expand access to AI-driven drug discovery, the center will provide hands-on training for the next generation of scientists through seminars, internship programs, and drug discovery hackathons, while fostering AI-focused research collaborations with pharmaceutical companies, biotech firms, and academic institutions.
“Drug discovery is an inefficient process. One of the top limiting factors is insufficient communication, interaction, or thinking outside the box,” Zhou told GEN. “This center is a way of bringing people together to unlock new ideas and technologies that can help us address this limitation.”
In contrast to conventional drug discovery, which relies on slow, resource-intensive experimental workflows, AI models trained on vast datasets of molecular structures and biological activity can predict properties of new compounds before synthesis, an approach proposed to expand the throughput and scale of pre-clinical research programs at a fraction of the time and cost.
Mount Sinai’s center will focus on three core areas: designing novel drug-like molecules using generative AI, optimizing existing compounds to enhance their efficacy and safety, and predicting drug-target interactions to repurpose known drugs or natural products for new indications.
“I was trained in AI and machine learning a long time ago before it was cool,” chuckled Schlessinger as we dodged NYC taxis during my tour of Mount Sinai’s campus. “But now is particularly good timing to use Mount Sinai’s datasets and experts to improve our models for real solutions.”
As a medical school embedded in a hospital system, Mount Sinai’s community emphasizes making an impact on patient care. Many research projects have a highly translational focus, ranging from target identification for Alzheimer’s disease to developing machine learning algorithms to predict the pathogenicity of mutations based on patient data.
Marta Filizola, PhD, professor and dean of the graduate school of biomedical sciences at Mount Sinai, leads the center’s graduate education efforts, and highlights the need for interdisciplinary education to generate the next wave of AI innovation, a notion that has motivated the establishment of Mount Sinai’s newest PhD program in Artificial Intelligence and Emerging Technologies in Medicine (AIET).
“We’ve created an infrastructure to increase the visibility of the AI training here at Sinai and give students hands-on experience in research programs that are directly related to improving human health,” she told GEN.
Show me the data
Historically, structure-based drug discovery has largely been fueled by the Protein Data Bank (PDB), a publicly available repository housing over 200,000 experimentally determined protein and nucleic acid structures collected by researchers over more than 50 years.
While the PDB has been a powerful resource driving AI advances, such as the Nobel Prize in Chemistry-winning protein structure prediction algorithm AlphaFold, many novel drug targets fall outside of the PDB, motivating many AI biotechs to invest in their own data generation. Much of this proprietary industry data remains under lock and key.
“A key problem for any party that builds and innovates new model architectures is that they cannot benchmark on proprietary data. The validity for industrial grade research is something that you cannot assess,” said Robin Roehm, CEO and co-founder of Apheris, in an interview with GEN. “Access to industry data for benchmarking is a huge value-add for everyone who builds models.”
Apheris is a start-up focused on enabling governed, private, and secure access to data for machine learning. In March, the Berlin-based company announced an initiative with the AI Structural Biology (AISB) Consortium to fine-tune OpenFold3, a protein structure prediction algorithm developed by the lab of Mohammed AlQuraishi, PhD, assistant professor of systems biology at Columbia University, using proprietary data from AbbVie and Johnson & Johnson in a confidentiality-preserving environment.
The collaboration will evaluate and refine OpenFold3 for predicting 3D structures of molecule complexes, focusing on small molecule-protein and antibody-antigen interactions for drug discovery. As of May, the list of participating drug developers has expanded to include AstraZeneca, Boehringer Ingelheim, Sanofi, and Takeda.
The push for an open-source code
Other scientists are looking to make AI molecular models widely accessible to push collaboration forward. In June, researchers from the Massachusetts Institute of Technology (MIT) Jameel Clinic for Machine Learning in Health announced the open-source release of Boltz-2, a model that predicts molecular binding affinity with newfound speed and accuracy, helping democratize commercial drug discovery.
Boltz-2 is available under the highly permissive MIT license, which allows commercial drug developers to use the model internally and apply their own proprietary data. The work was done in collaboration with Recursion, the Salt Lake City-based AI drug discovery company that merged with Exscientia last year. The MIT research team was led by Regina Barzilay, PhD, distinguished professor of AI and health at MIT.
Boltz-2 is an answer to the community outcry at the limited accessibility of AlphaFold 3, which was published in Nature in May 2024 by Google DeepMind and Isomorphic Labs without the open-source code. AlphaFold 3 expanded the protein structure prediction tool to a broad spectrum of biomolecular interactions, including small molecules, DNA, RNA, and more, offering a powerful next step for drug discovery.
However, the code omission prevented other scientists from reproducing the publication’s results and using the model in their own research efforts, leading more than 1,000 scientists to sign a protest letter calling for AlphaFold 3’s transparency. To address the outcry, AlphaFold 3 developers released the code under a restrictive non-commercial license six months after the Nature publication.
Anshul Kundaje, PhD, associate professor of genetics and computer science at Stanford University, wrote in a letter sent to Nature and posted on the social media platform, X, that while commercial entities are under no obligation to open source or share details about their products, “this does not mean they get to bypass canonical standards for what constitutes a peer-reviewed and verifiable scientific publication. What Nature published as a peer-reviewed article is in fact an advertisement and at best a white paper.”
Back at MIT, Gabriele Corso, one of Boltz-2’s developers, said the biggest reward from releasing Boltz was seeing the community rally behind an open-source project.
“Just at a time where it seemed inevitable that closed models like AlphaFold 3 would dominate the field, many researchers from academia and industry decided to contribute to an open-source project like Boltz to build new capabilities and open them up for everyone to use,” Corso told GEN.
Lifting all boats
While AlphaFold 3 made the advance of accurately predicting molecular complex structures, in silico binding affinity calculations—as achieved by Boltz-2—had not been (publicly) shown by DeepMind and Isomorphic Labs. Binding affinity measures the strength of interaction between a drug and its target and is a key drug discovery metric that can dictate the progression of a candidate through the development pipeline from hit discovery to lead optimization.
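Binding affinity is commonly reported as a dissociation constant (Kd), which maps onto a binding free energy through the standard thermodynamic relation ΔG = RT ln(Kd). As an illustration of the metric itself (not of how Boltz-2 computes it), a minimal conversion:

```python
import math

def kd_to_delta_g(kd_molar: float, temp_k: float = 298.15) -> float:
    """Convert a dissociation constant (in molar units) to a binding
    free energy in kcal/mol via dG = RT * ln(Kd / 1 M)."""
    R = 1.987e-3  # gas constant in kcal/(mol*K)
    return R * temp_k * math.log(kd_molar)

# A 10 nM binder, typical of an optimized lead compound:
dg = kd_to_delta_g(10e-9)
print(f"dG = {dg:.1f} kcal/mol")  # roughly -10.9 kcal/mol
```

Lower (more negative) ΔG means tighter binding; progressing a candidate from hit discovery to lead optimization is largely a matter of driving this number down while keeping other drug-like properties intact.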
In terms of accuracy, Boltz-2 was the leading affinity performer at the December 2024 Critical Assessment of protein Structure Prediction 16 (CASP16) competition, the biennial community experiment that assesses the latest state-of-the-art models in structural biology. In speed, Boltz-2 is reported to calculate binding affinity values in just 20 seconds, 1,000 times faster than the current physics-based computational standard, free-energy perturbation (FEP) simulations.
Najat Khan, PhD, chief R&D officer and chief commercial officer at Recursion, stated the open-source release of Boltz-2 “lifts all boats” in making progress in the integration of tech, biology, and chemistry.
“Binding affinity is core to developing a therapeutic start to finish and has been the fundamental issue that a lot of us have been trying to grapple [with],” said Khan. “The value of this collaboration is significant technological advancement geared to the purpose of application, which is drug discovery.”
In May, Recursion said it would end development of four of its 11 pipeline programs and pause a fifth, a pruning designed to further focus the AI-based drug developer on cancer and rare disease treatments. The company looks forward to applying Boltz-2 to future discovery candidates.
While proprietary restrictions remain a commercial reality, education, data partnerships, and open-source modeling continue to push the field toward a culture of collaboration. Time will tell whether the new AI drug discovery paradigm will be one of true democracy.
AI Research
Joint UT, Yale research develops AI tool for heart analysis – The Daily Texan
A study published on June 23 by UT and Yale researchers introduced an artificial intelligence tool capable of performing and analyzing heart imaging using echocardiography.
The tool, PanEcho, analyzes echocardiograms, ultrasound images of the heart. Developed and trained on nearly one million echocardiographic videos, it can perform 39 echocardiographic tasks and accurately detect conditions such as systolic dysfunction and severe aortic stenosis.
“Our teammates helped identify a total of 39 key measurements and labels that are part of a complete echocardiographic report — basically what a cardiologist would be expected to report on when they’re interpreting an exam,” said Gregory Holste, an author of the study and a doctoral candidate in the Department of Electrical and Computer Engineering. “We train the model to predict those 39 labels. Once that model is trained, you need to evaluate how it performs across those 39 tasks, and we do that through this robust multi-site validation.”
Holste said out of the functions PanEcho has, one of the most impressive is its ability to measure left ventricular ejection fraction, or the proportion of blood the left ventricle of the heart pumps out, far more accurately than human experts. Additionally, Holste said PanEcho can analyze the heart as a whole, while humans are limited to looking at the heart from one view at a time.
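The ejection fraction itself is simple arithmetic once the ventricular volumes are known; the hard part, which PanEcho automates, is extracting those volumes from noisy ultrasound. A toy calculation of the ratio (illustrative only, not PanEcho’s pipeline):

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Left ventricular ejection fraction: the share of the
    end-diastolic volume (EDV) ejected per beat, i.e.
    (EDV - ESV) / EDV, expressed as a percentage."""
    if edv_ml <= 0 or esv_ml < 0 or esv_ml > edv_ml:
        raise ValueError("volumes must satisfy 0 <= ESV <= EDV, EDV > 0")
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Hypothetical volumes for a healthy heart:
# ~120 mL at end-diastole, ~50 mL at end-systole
print(f"{ejection_fraction(120, 50):.1f}%")  # ~58.3%, within the normal range
```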
“What is most unique about PanEcho is that it can do this by synthesizing information across all available views, not just curated single ones,” Holste said. “PanEcho integrates information from the entire exam — from multiple views of the heart to make a more informed, holistic decision about measurements like ejection fraction.”
PanEcho is available for open-source use to allow researchers to use and experiment with the tool for future studies. Holste said the team has already received emails from people trying to “fine-tune” the application for different uses.
“We know that other researchers are working on adapting PanEcho to work on pediatric scans, and this is not something that PanEcho was trained to do out of the box,” Holste said. “But, because it has seen so much data, it can fine-tune and adapt to that domain very quickly. (There are) very exciting possibilities for future research.”
AI Research
Google launches AI tools for mental health research and treatment
Google announced two new artificial intelligence initiatives on July 7, 2025, designed to support mental health organizations in scaling evidence-based interventions and advancing research into anxiety, depression, and psychosis treatments.
The first initiative involves a comprehensive field guide developed in partnership with Grand Challenges Canada and McKinsey Health Institute. According to the announcement from Dr. Megan Jones Bell, Clinical Director for Consumer and Mental Health at Google, “This guide offers foundational concepts, use cases and considerations for using AI responsibly in mental health treatment, including for enhancing clinician training, personalizing support, streamlining workflows and improving data collection.”
The field guide addresses the global shortage of mental health providers, particularly in low- and middle-income countries. According to analysis from the McKinsey Health Institute cited in the document, “closing this gap could result in more years of life for people around the world, as well as significant economic gains.”
Summary
Who: Google for Health, Google DeepMind, Grand Challenges Canada, McKinsey Health Institute, and Wellcome Trust, targeting mental health organizations and task-sharing programs globally.
What: Two AI initiatives including a practical field guide for scaling mental health interventions and a multi-year research investment for developing new treatments for anxiety, depression, and psychosis.
When: Announced July 7, 2025, with ongoing development and research partnerships extending multiple years.
Where: Global implementation with focus on low- and middle-income countries where mental health provider shortages are most acute.
Why: Address the global shortage of mental health providers and democratize access to quality, evidence-based mental health support through AI-powered scaling solutions and advanced research.
The 73-page guide outlines nine specific AI use cases for mental health task-sharing programs, including applicant screening tools, adaptive training interfaces, real-time guidance companions, and provider-client matching systems. These tools aim to address challenges such as supervisor shortages, inconsistent feedback, and protocol drift that limit the effectiveness of current mental health programs.
Task-sharing models allow trained non-mental health professionals to deliver evidence-based mental health services, expanding access in underserved communities. The guide demonstrates how AI can standardize training, reduce administrative burdens, and maintain quality while scaling these programs.
According to the field guide documentation, “By standardizing training and avoiding the need for a human to be involved at every phase of the process, AI can help mental health task-sharing programs effectively scale evidence-based interventions throughout communities, maintaining a high standard of psychological support.”
The second initiative represents a multi-year investment from Google for Health and Google DeepMind in partnership with Wellcome Trust. The funding, which includes research grant funding from the Wellcome Trust, will support research projects developing more precise, objective, and personalized measurement methods for anxiety, depression, and psychosis conditions.
The research partnership aims to explore new therapeutic interventions, potentially including novel medications. This represents an expansion beyond current AI applications into fundamental research for mental health treatment development.
The field guide acknowledges that “the application of AI in task-sharing models is new and only a few pilots have been conducted.” Many of the outlined use cases remain theoretical and require real-world validation across different cultural contexts and healthcare systems.
For the marketing community, these developments signal growing regulatory attention to AI applications in healthcare advertising. Recent California guidance on AI healthcare supervision and Google’s new certification requirements for pharmaceutical advertising demonstrate increased scrutiny of AI-powered health technologies.
The field guide emphasizes the importance of regulatory compliance for AI mental health tools. Several proposed use cases, including triage facilitators and provider-client matching systems, could face classification as medical devices requiring regulatory oversight from authorities like the FDA or EU Medical Device Regulation.
Organizations considering these AI tools must evaluate technical infrastructure requirements, including cloud versus edge computing approaches, data privacy compliance, and integration with existing healthcare systems. The guide recommends starting with pilot programs and establishing governance committees before full-scale implementation.
Technical implementation challenges include model selection between proprietary and open-source systems, data preparation costs ranging from $10,000 to $90,000, and ongoing maintenance expenses of 10 to 30 percent of initial development costs annually.
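Taking the guide’s maintenance figure at face value, a quick back-of-the-envelope budget helper (the $200,000 build cost is a hypothetical input, not from the guide):

```python
def annual_maintenance_range(initial_dev_cost: float) -> tuple[float, float]:
    """Ongoing maintenance estimated at 10-30% of the initial
    development cost per year, per the field guide's figures."""
    return (0.10 * initial_dev_cost, 0.30 * initial_dev_cost)

# Hypothetical $200k initial build:
low, high = annual_maintenance_range(200_000)
print(f"${low:,.0f} - ${high:,.0f} per year")  # $20,000 - $60,000
```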
The initiatives build on growing evidence that task-sharing approaches can improve clinical outcomes while reducing costs. Research cited in the guide shows that mental health task-sharing programs are cost-effective and can increase the number of people treated while reducing mental health symptoms, particularly in low-resource settings.
Real-world implementations highlighted in the guide include The Trevor Project’s AI-powered crisis counselor training bot, which trained more than 1,000 crisis counselors in approximately one year, and Partnership to End Addiction’s embedded AI simulations for peer coach training.
These organizations report improved training efficiency and enhanced quality of coach conversations through AI implementation, suggesting practical benefits for established mental health programs.
The field guide warns that successful AI adoption requires comprehensive planning across technical, ethical, governance, and sustainability dimensions. Organizations must establish clear policies for responsible AI use, conduct risk assessments, and maintain human oversight throughout implementation.
According to the World Health Organization principles referenced in the guide, responsible AI in healthcare must protect autonomy, promote human well-being, ensure transparency, foster responsibility and accountability, ensure inclusiveness, and promote responsive and sustainable development.
Timeline
- July 7, 2025: Google announces two AI initiatives for mental health research and treatment
- January 2025: California issues guidance requiring physician supervision of healthcare AI systems
- May 2024: FDA reports 981 AI and machine learning software devices authorized for medical use
- Development ongoing: Field guide created through 10+ discovery interviews, expert summit with 20+ specialists, 5+ real-life case studies, and review of 100+ peer-reviewed articles
AI Research
New Research Shows Language Choice Alone Can Guide AI Output Toward Eastern or Western Cultural Outlooks
A new study shows that the language used to prompt AI chatbots can steer them toward different cultural mindsets, even when the question stays the same. Researchers at MIT and Tongji University found that large language models like OpenAI’s GPT and China’s ERNIE change their tone and reasoning depending on whether they’re responding in English or Chinese.
The results indicate that these systems translate language while also reflecting cultural patterns. These patterns appear in how the models provide advice, interpret logic, and handle questions related to social behavior.
Same Question, Different Outlook
The team tested both GPT and ERNIE by running identical tasks in English and Chinese. Across dozens of prompts, they found that when GPT answered in Chinese, it leaned more toward community-driven values and context-based reasoning. In English, its responses tilted toward individualism and sharper logic.
Take social orientation, for instance. In Chinese, GPT was more likely to favor group loyalty and shared goals. In English, it shifted toward personal independence and self-expression. These patterns matched well-documented cultural divides between East and West.
When it came to reasoning, the shift continued. The Chinese version of GPT gave answers that accounted for context, uncertainty, and change over time. It also offered more flexible interpretations, often responding with ranges or multiple options instead of just one answer. In contrast, the English version stuck to direct logic and clearly defined outcomes.
No Nudging Needed
What’s striking is that these shifts occurred without any cultural instructions. The researchers didn’t tell the models to act more “Western” or “Eastern.” They simply changed the input language. That alone was enough to flip the models’ behavior, almost like switching glasses and seeing the world in a new shade.
To check how strong this effect was, the researchers repeated each task more than 100 times. They tweaked prompt formats, varied the examples, and even changed gender pronouns. No matter what they adjusted, the cultural patterns held steady.
Real-World Impact
The study didn’t stop at lab tests. In a separate exercise, GPT was asked to choose between two ad slogans, one that stressed personal benefit, another that highlighted family values. When the prompt came in Chinese, GPT picked the group-centered slogan most of the time. In English, it leaned toward the one focused on the individual.
This might sound small, but it shows how language choice can guide the model’s output in ways that ripple into marketing, decision-making, and even education. People using AI tools in one language may get very different advice than someone asking the same question in another.
Can You Steer It?
The researchers also tested a workaround. They added cultural prompts, telling GPT to imagine itself as a person raised in a specific country. That small nudge helped the model shift its tone, even in English, suggesting that cultural context can be dialed up or down depending on how the prompt is framed.
Why It Matters
The findings show that the language of a prompt shapes how AI models structure and present information. As AI tools become more integrated into routine tasks and decision-making, these language-based variations in output may influence user choices over time.
Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.