Eli Lilly and Co. (LLY) on Tuesday launched Lilly TuneLab, an artificial intelligence and machine learning (AI/ML) platform that provides biotech companies access to drug discovery models trained on years of Lilly’s research data.
Eli Lilly estimates that this first release of AI models includes proprietary data obtained at a cost of over $1 billion.
“Lilly has spent decades building comprehensive datasets for drug discovery. Today, we’re sharing the intelligence gained from that investment to help lift the tide of biotechnology research,” said Daniel Skovronsky, chief scientific officer and president, Lilly Research Laboratories and Lilly Immunology.
Lilly TuneLab is powered by Lilly’s comprehensive drug disposition, safety, and preclinical datasets, which represent experimental data obtained with hundreds of thousands of unique molecules.
In return for access, selected biotech partners contribute training data, which fuels continuous improvement for the benefit of others in the ecosystem and ultimately patients.
The platform is hosted by a third party and employs federated learning, a privacy-preserving approach that enables biotechs to tap into Lilly’s AI models without directly exposing their proprietary data or Lilly’s.
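To illustrate the idea behind federated learning, here is a minimal sketch of federated averaging (FedAvg), the textbook scheme in which each partner trains on its own private data and only weight updates, never raw records, leave the site. This is a generic illustration, not Lilly TuneLab’s actual (non-public) architecture; the function names and toy datasets are invented for the example.

```python
# Generic FedAvg sketch on a toy linear model y = w0*x + w1.
# NOT Lilly TuneLab's implementation; purely illustrative.

def local_update(global_weights, local_data, lr=0.1):
    """Each partner trains locally on its private data; only the
    resulting weights leave the site, never the data itself."""
    w = list(global_weights)
    for x, y in local_data:  # one pass of stochastic gradient descent
        err = (w[0] * x + w[1]) - y
        w[0] -= lr * err * x
        w[1] -= lr * err
    return w

def federated_average(global_weights, partner_datasets):
    """A central server averages the partners' locally trained weights."""
    updates = [local_update(global_weights, d) for d in partner_datasets]
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(global_weights))]

# Two partners holding private samples drawn from y = 2x + 1.
site_a = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
site_b = [(0.5, 2.0), (1.5, 4.0)]

weights = [0.0, 0.0]
for _ in range(200):  # federated training rounds
    weights = federated_average(weights, [site_a, site_b])
print(weights)  # approaches [2.0, 1.0]; neither site ever shared raw data
```

The key property is that the server sees only aggregated model parameters, so each participant’s underlying measurements stay on its own infrastructure.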
Lilly intends to extend the platform’s features and capabilities beyond this first release, including adding in vivo small molecule predictive models, available exclusively on Lilly TuneLab.
Price Action: At last check on Tuesday, LLY stock was up 0.19% at $740.02 in the premarket session.
Merrill, who studies the intersection of technology and health communication, was interviewed by Spectrum News to discuss safeguards over AI and health communications.
The interview points out that while Ohio has no laws regulating AI in mental health, several states have already acted: Illinois bans AI from being marketed as therapy without licensed oversight, Nevada prohibits AI from presenting itself as a provider, and Utah requires AI chatbots to disclose their nonhuman nature and protect user data.
Merrill urges Ohio lawmakers to follow suit and “protect people over profit.” The assistant professor of health communication and technology in UC’s School of Communication, Film, and Media Studies has spent more than five years researching how digital tools affect well-being, motivated in part by his father’s death from cancer.
His recent study on AI companions found that while about a third of participants reported feeling happier after using them, Merrill cautions that the tools pose risks—including privacy concerns, unrealistic expectations of human relationships, and even dependency. To address these issues, he stresses the importance of “AI literacy,” so users understand what AI can and cannot do.
Merrill also argues that companies should build in safeguards, such as usage reminders and prompts to seek professional help. He supports temporary bans on AI therapy while research catches up, saying the tools should supplement, not replace, overburdened mental health systems.
Rabat — Al Akhawayn University in Ifrane (AUI) and Prince Mohammed Bin Fahd University (PMU) announced an agreement establishing the Prince Mohammed Bin Fahd bin Abdulaziz Chair for Artificial Intelligence Applications.
A statement from AUI said Amine Bensaid, President of AUI, signed the agreement with his PMU counterpart Issa Al Ansari.
The Chair, established within AUI, will conduct applied research in AI to develop solutions that address societal needs and promote innovation to support Moroccan talents in their fields.
The agreement reflects a shared commitment to strengthen cooperation between the two institutions, with a focus on AI to contribute to the socio-economic development of both Morocco and Saudi Arabia, the statement added.
The initiative also seeks to help Morocco and Saudi Arabia boost their national priorities through AI as a key tool in advancing academic excellence.
Bensaid commented on the agreement, saying that the partnership will strengthen Al Akhawayn’s mission to “combine academic excellence with technological innovation.”
It will also help students master AI skills in order to serve humanity and protect citizens from risk.
“By hosting this initiative, we also affirm the role of Al Akhawayn and Morocco as pioneering actors in this field in Africa and in the region.”
For his part, Al Ansari also expressed satisfaction with the new agreement, stating that the pact is in line with PMU’s efforts to serve Saudi Arabia’s Vision 2030.
This vision “places artificial intelligence at the heart of economic and social transformation,” he affirmed.
He also expressed his university’s commitment to working with Al Akhawayn University to help address tomorrow’s challenges and train the new generation of talents that are capable of shaping the future.
Al Akhawayn has repeatedly affirmed its commitment to cooperating with other institutions to boost research as well as ethical AI use.
In April, AUI signed an agreement with the American University of Sharjah to promote collaboration in research and teaching, as well as to empower Moroccan and Emirati students and citizens to engage with AI tools while staying rooted in their cultural identity.
This is in line with Morocco’s ambition to enhance AI use in its own education sector.
In January, Secretary General of Education Younes Shimi outlined Morocco’s ambition and advocacy for integrating AI into education.
He also called for making this technology effective, adaptable, and accessible for the specific needs of Moroccans and for the rest of the Arab world.
Generative AI is in classrooms already. Can educators use this tool to enhance learning among their students instead of undercutting assignments?
Yes, said Priyanka Parekh, an assistant research professor in the Center for STEM Teaching and Learning at NAU. With a grant from NAU’s Transformation through Artificial Intelligence in Learning (TRAIL) program, Parekh is investigating how undergraduate students use GenAI as learning partners—building on what they learn in the classroom to maximize their understanding of STEM topics. It’s an important question as students make increasing use of these tools with or without their professors’ knowledge.
“As GenAI becomes an integral part of everyday life, this project contributes to building critical AI literacy skills that enable individuals to question, critique and ethically utilize AI tools in and beyond the school setting,” Parekh said.
That is the foundation of the TRAIL program, now in its second year of offering grants to professors to explore how to use GenAI in their work. Fourteen professors received grants to implement GenAI in their classrooms this year, and the Office of the Provost partnered with the Office of the Vice President for Research to offer grants to professors in five different colleges to study the use of GenAI tools in research.
The recipients are:
Chris Johnson, School of Communication, Integrating AI-Enhanced Creative Workflows into Art, Design, Visual Communication, and Animation Education
Priyanka Parekh, Center for Science Teaching and Learning, Understanding Learner Interactions with Generative AI as Distributed Cognition
Marco Gerosa, School of Informatics, Computing, and Cyber Systems, To what extent can AI replace human subjects in software engineering research?
Emily Schneider, Criminology and Criminal Justice, Israeli-Palestinian Peacebuilding through Artificial Intelligence
Delaney La Rosa, College of Nursing, Enhancing Research Proficiency in Higher Education: Analyzing the Impact of Afforai on Student Literature Review and Information Synthesis
Exploring how GenAI shapes students as learners
Parekh’s goals in her research are to understand how students engage with GenAI in real academic tasks and what this learning process looks like; to advance AI literacy, particularly among first-generation, rural and underrepresented learners; to help faculty become more comfortable with AI; and to provide evidence-based recommendations for integrating GenAI equitably in STEM education.
It’s a big ask, but she’s excited to see how the study shakes out and how students interact with the tools in an educational setting. She anticipates her study will have broader applications as well; employees in industries like healthcare, engineering and finance are using AI, and her work may help implement more equitable GenAI use across a variety of industries.
“Understanding how learners interact with GenAI to solve problems, revise ideas or evaluate information can inform AI-enhanced workplace training, job simulations and continuing education,” she said.
Using AI as a collaborator, not a shortcut
Johnson, a professor of visual communication in the School of Communication, isn’t looking for AI to create art, but he thinks it can be an important tool in the creation process—one that helps human creators create even better art. His project will include:
Building a set of classroom-ready workflows that combine different industry tools like After Effects, Procreate Dreams and Blender with AI assistants for tasks such as storyboarding, ideation, cleanup and accessibility support
Running guided studies to compare baseline pipelines to AI-assisted pipelines, looking at time saved and quality
Creating open teaching modules that other instructors can adopt
In addition to creating usable, adaptable curriculum that teaches students to use AI to enhance their workflow—without replacing their work—and to improve accessibility standards, Johnson said this study will produce clear before and after case studies that show where AI can help and where it can’t.
“AI is changing creative industries, but the real skill isn’t pressing a button—it’s knowing how to direct, critique and refine AI as a collaborator,” Johnson said. “That’s what we’re teaching our students: how to keep authorship, ethics and creativity at the center.”
Johnson’s work also will take on the ethics of training and provenance that are a constant part of the conversation around using AI in art creation. His study will emphasize tools that respect artists’ rights and steer clear of imitating the styles of living artists without consent. He also will emphasize to students where AI fits into the work: it comes second in the process, after they’ve initially created their work. It offers feedback; it doesn’t create the work.
Top photo: This image, produced by ChatGPT, illustrates Parekh’s research. I started with the prompt: “Can you make an image that has picture quality that shows a student with a reflection journal or interface showing their GenAI interaction and metacognitive responses (e.g., ‘Did this response help me?’)?” It took a few rounds of changing the prompt, including telling the AI twice not to put three hands into the image, to get to an image that reflects Parekh’s research and adheres to The NAU Review’s standards.