AI Research
Investors Bet $235 Million on Bringing AI to Scientific Research

Scientists working in Lila Sciences’ labs.
(Bloomberg) — The latest artificial intelligence unicorn is Lila Sciences, a biotechnology company that promises to speed the rate of scientific discovery using new AI tools.
The startup announced it had raised $235 million at a roughly $1.23 billion valuation.
Lila emerged from stealth mode in March after a $200 million seed round. It has developed AI trained on academic literature in areas that include materials, chemistry and life sciences, and is creating labs to test the models’ hypotheses.
The new funding will allow the Massachusetts-based company to expand the size and number of facilities — or what it calls AI science factories — where human researchers and automated software carry out experiments, with the results then fed back into the model. The company aims to develop technologies as diverse as novel carbon-capture materials and new drugs.
“If your training input is entirely publicly available data, then one hits a ceiling or phase of diminishing returns,” Geoffrey von Maltzahn, Lila’s co-founder and chief executive officer, said. The feedback loop allows Lila to “discover things that would be slower to impossible to discover with previous paradigms.”
Those paradigms were largely human-driven, with scientists forming a hypothesis, gathering data, running an experiment and refining the results, a process that can take years. Lila believes AI can shave weeks or even months off that process.
AI’s potential scientific applications have been a subject of intense interest as the technology has developed. A growing number of researchers rely on AI, and other companies, such as Orbital Materials and Isomorphic Labs, are also racing to use it to generate new technologies.
Lila believes that building dedicated, automated labs will give it a leg up. The company “has discovered thousands of novel proteins, nucleic acids, chemistries and materials” and tested them in its lab since it was founded in 2023, von Maltzahn said.
The company hasn’t commercialized any of its products yet. But von Maltzahn said Lila has seen interest from outside companies wanting to use its AI and labs, and that the startup plans to open up its platform to some by the end of the year. He declined to name specific firms.
AI Research
Maine police can’t investigate AI-generated child sexual abuse images

A Maine man went to watch a children’s soccer game. He snapped photos of kids playing. Then he went home and used artificial intelligence to take the otherwise innocuous pictures and turn them into sexually explicit images.
Police know who he is. But there is nothing they can do, because the images are legal to possess under state law, according to Maine State Police Lt. Jason Richards, who leads the Computer Crimes Unit.
While child sexual abuse material has been illegal for decades under both federal and state law, the rapid development of generative AI — which uses models to create new content based on user prompts — means Maine’s definition of those images has lagged behind other states. Lawmakers here attempted to address the proliferating problem this year but took only a partial step.
“I’m very concerned that we have this out there, this new way of exploiting children, and we don’t yet have a protection for that,” Richards said.
Two years ago, it was easy to discern whether a piece of material had been produced by AI, he said. Now it is hard to tell without extensive experience. In some instances, AI can take a fully clothed picture of a child and make the child appear naked, producing what is known as a “deepfake.” People also train AI on child sexual abuse materials that are already online.
Nationally, the rise of AI-generated child sexual abuse material is a growing concern. At the end of last year, the National Center for Missing and Exploited Children reported a 1,325% increase in tips related to AI-generated materials. Investigators are also encountering such material more often in cases involving possession of child sexual abuse materials.
On Sept. 5, a former Maine state probation officer pleaded guilty in federal court to accessing child sexual abuse materials with intent to view them. When federal investigators searched the man’s Kik account, they found he had sought out the content and had at least one image that was “AI-generated,” according to court documents.
Explicit material generated by AI has rapidly become intertwined with authentic material, even as Richards’ staff fields a growing number of reports. In 2020, his team received 700 tips relating to child sexual abuse materials and reports of adults sexually exploiting minors online in Maine.
By the end of 2025, Richards said he expects his team will have received more than 3,000 tips. Investigators can pursue only about 14% of them in any given year, and the team now has to discard any material that has been touched by AI.
“It’s not what could happen, it is happening, and this is not material that anyone is OK with in that it should be criminalized,” Shira Burns, the executive director of the Maine Prosecutors’ Association, said.
Across the country, 43 states have created laws outlawing sexual deepfakes, and 28 states have banned the creation of AI-generated child sexual abuse material; 22 states have done both, according to MultiState, a government relations firm that tracks state legislation governing artificial intelligence.
Rep. Amy Kuhn, D-Falmouth, proposed such a ban in Maine earlier this year, but lawmakers on the Judiciary Committee were concerned the proposed legislation could raise constitutional issues.
She agreed to drop that portion of the bill for now. The version of the bill that passed expanded the state’s pre-existing law against “revenge porn” to include dissemination of altered or so-called “morphed images” as a form of harassment. But it did not label morphed images of children as child sexual abuse material.
The legislation, drafted chiefly by the Maine Prosecutors’ Association and the Maine Coalition Against Sexual Assault, was modeled on laws already enacted elsewhere. Kuhn said she plans to reintroduce the expanded definition of sexually explicit material, mostly unchanged from her earlier version, when the Legislature reconvenes in January.
Maine’s lack of a law at least labeling morphed images of children as child sexual abuse material makes the state an outlier, said Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered AI. She studies the abusive uses of AI and the intersection of legislation around AI-generated content and the Constitution.
In her research, Pfefferkorn has found that most legislatures that have revisited pre-existing child sexual abuse material laws have at least specified that morphed images of children count as sexually explicit material.
“It’s a bipartisan area of interest to protect children online, and nobody wants to be the person sticking their hand up and very publicly saying, ‘I oppose this bill that would essentially better protect children online,’” Pfefferkorn said.
There is also pre-existing federal law and case law that Maine can look to in drafting its own legislation, Pfefferkorn said; morphed images of children are already banned federally. And while federal agencies have a role in investigating these cases, they typically handle only the most serious ones, leaving it largely to states to police sexually explicit material.
Both Burns and Kuhn said they are confident the Legislature will close the loophole in 2026, given the many model policies available across the country.
“We’re on the tail end of addressing this issue, but I am very confident that this is something that the judiciary will look at, and we will be able to get a version through, because it’s needed,” Burns said.
AI Research
Empowering clinicians with intelligence at the point of conversation

AI Research
Transforming Health Care with AI: Microsoft’s Dr. James Weinstein to Speak at Miller School Dean’s Lecture

The senior vice president of Microsoft Health and renowned spine surgeon will talk about AI’s role in transforming health care ecosystems.
Artificial intelligence is rapidly reshaping the health care landscape, and few leaders embody this transformation more than James Weinstein, D.O. A renowned spine surgeon, health policy innovator and senior vice president of Microsoft Health, Dr. Weinstein will headline the University of Miami Miller School of Medicine’s Dean’s Lecture with a talk titled, “AI in Health Care Ecosystem Transformation.”
• Date and time: Sept. 23, 2025, at noon
Dr. Weinstein’s career spans decades of pioneering work in patient-centered care, health equity and value-based health systems. At Microsoft Health, he leads global strategy and innovation, focusing on improving access and outcomes through AI-driven technologies. His work has consistently emphasized empowering patients through “informed choice” rather than traditional “informed consent,” and he was instrumental in developing Patient Reported Outcome Measures (PROMs), now widely used to assess treatment efficacy.
“I feel very optimistic that artificial intelligence will have a key role to help us transform systems that we all want to make better,” said Dr. Weinstein at a recent Association of American Medical Colleges (AAMC) meeting.
His leadership at Dartmouth-Hitchcock Health and The Dartmouth Institute for Health Policy and Clinical Practice helped establish national collaborations like the High Value Healthcare Collaborative, which improved care quality while reducing costs across 50 states. Dr. Weinstein’s research and policy contributions have earned him appointments to national advisory boards and recognition as a thought leader in health care reform.
In a Harvard Business Review article, Dr. Weinstein and co-author Ron Adner argued that generative AI is not just a tool for efficiency but a catalyst for ecosystem transformation. They emphasized that AI’s true potential lies in its ability to rewire traditional health care silos and create new organizational models that better serve patients. Dr. Weinstein consistently underscores the importance of tailoring technology to patient needs.
“Disruption generally refers to a substitution of technology to make a specific thing easier to do,” he said. “But that doesn’t necessarily change the patient experience for the better.”
Whether you’re a clinician, researcher, student or health care leader, this lecture offers a unique opportunity to engage with one of the foremost voices in AI and health care transformation.