
AI Insights

New AI model can identify treatments that reverse disease states in cells



In a move that could reshape drug discovery, researchers at Harvard Medical School have designed an artificial intelligence model capable of identifying treatments that reverse disease states in cells.

Unlike traditional approaches that typically test one protein target or drug at a time in hopes of identifying an effective treatment, the new model, called PDGrapher and available for free, focuses on multiple drivers of disease and identifies the genes most likely to revert diseased cells back to healthy function.

The tool also identifies the best single or combined targets for treatments that correct the disease process. The work, described Sept. 9 in Nature Biomedical Engineering, was supported in part by federal funding.

By zeroing in on the targets most likely to reverse disease, the new approach could speed up drug discovery and design and unlock therapies for conditions that have long eluded traditional methods, the researchers noted.

“Traditional drug discovery resembles tasting hundreds of prepared dishes to find one that happens to taste perfect. PDGrapher works like a master chef who understands what they want the dish to be and exactly how to combine ingredients to achieve the desired flavor.”


Marinka Zitnik, study senior author, associate professor of biomedical informatics in the Blavatnik Institute at HMS

The traditional drug-discovery approach – which focuses on activating or inhibiting a single protein – has succeeded with treatments such as kinase inhibitors, drugs that block certain proteins used by cancer cells to grow and divide. However, Zitnik noted, this discovery paradigm can fall short when diseases are fueled by the interplay of multiple signaling pathways and genes. For example, many breakthrough drugs discovered in recent decades – think immune checkpoint inhibitors and CAR T-cell therapies – work by targeting broader disease processes in cells rather than a single protein.

The approach enabled by PDGrapher, Zitnik said, looks at the bigger picture to find compounds that can actually reverse signs of disease in cells, even if scientists don’t yet know exactly which molecules those compounds may be acting on.

How PDGrapher works: Mapping complex linkages and effects

PDGrapher is a type of artificial intelligence tool called a graph neural network. This tool doesn’t look only at individual data points but also at the connections between those data points and the effects they have on one another.

In the context of biology and drug discovery, this approach is used to map the relationship between various genes, proteins, and signaling pathways inside cells and predict the best combination of therapies that would correct the underlying dysfunction of a cell to restore healthy cell behavior. Instead of exhaustively testing compounds from large drug databases, the new model focuses on drug combinations that are most likely to reverse disease.

PDGrapher points to parts of the cell that might be driving disease. Next, it simulates what would happen if these cellular parts were turned off or dialed down. The AI model then offers an answer as to whether a diseased cell would return to a healthy state if certain targets were “hit.”
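In machine-learning terms, this can be pictured as a graph neural network that takes a gene–gene interaction network plus diseased and healthy expression readouts and scores each gene as a candidate perturbation target. The snippet below is a minimal, hypothetical sketch of that idea rather than the released PDGrapher code; the architecture, layer sizes, and scoring head are assumptions made for illustration.

```python
# Hypothetical sketch of a PDGrapher-style target scorer (not the authors' released code).
# Assumption: each gene is a node carrying its diseased and healthy expression values,
# and edges come from a protein-protein or gene-regulatory interaction network.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv


class GeneTargetScorer(nn.Module):
    def __init__(self, num_node_features: int = 2, hidden_dim: int = 64):
        super().__init__()
        self.conv1 = GCNConv(num_node_features, hidden_dim)  # message passing over the gene network
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.score_head = nn.Linear(hidden_dim, 1)           # one perturbation score per gene

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # x: [num_genes, num_node_features], e.g. diseased and healthy expression per gene
        # edge_index: [2, num_edges] connectivity of the interaction graph
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        return self.score_head(h).squeeze(-1)                # higher score = stronger reversal candidate


# Ranking the highest-scoring genes would correspond to proposing single or combination targets:
# model = GeneTargetScorer()
# scores = model(x, edge_index)
# candidate_targets = scores.topk(k=5).indices
```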

“Instead of testing every possible recipe, PDGrapher asks: ‘Which mix of ingredients will turn this bland or overly salty dish into a perfectly balanced meal?'” Zitnik said.

Advantages of the new model

The researchers trained the tool on a dataset of diseased cells before and after treatment so that it could figure out which genes to target to shift cells from a diseased state to a healthy one.
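Under the simplified sketch above, training on such paired data might, for example, supervise each gene’s score against whether that gene was actually perturbed in the before-and-after experiment. The loss, optimizer, and labels below are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical training step for the GeneTargetScorer sketch above.
# Assumption: perturbed_mask marks the genes perturbed to move a diseased sample toward health.
import torch
import torch.nn.functional as F


def train_step(model, optimizer, x, edge_index, perturbed_mask):
    # perturbed_mask: [num_genes] float tensor, 1.0 for genes perturbed in the paired experiment
    model.train()
    optimizer.zero_grad()
    scores = model(x, edge_index)                                     # per-gene logits
    loss = F.binary_cross_entropy_with_logits(scores, perturbed_mask)
    loss.backward()
    optimizer.step()
    return loss.item()


# Example usage:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = train_step(model, optimizer, x, edge_index, perturbed_mask)
```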

Next, they tested it on 19 datasets spanning 11 types of cancer, using both genetic and drug-based experiments, asking the tool to predict various treatment options for cell samples it had not seen before and for cancer types it had not encountered.

The tool accurately predicted drug targets already known to work but that were deliberately excluded during training to ensure the model did not simply recall the right answers. It also identified additional candidates supported by emerging evidence. For example, the model highlighted KDR (VEGFR2) as a target for non-small cell lung cancer, aligning with clinical evidence, and flagged TOP2A – an enzyme already targeted by approved chemotherapies – as a treatment target in certain tumors, adding to evidence from recent preclinical studies that TOP2A inhibition may help curb the spread of metastases in non-small cell lung cancer.

The model showed superior accuracy and efficiency compared with similar tools: in previously unseen datasets, it ranked the correct therapeutic targets up to 35 percent higher than other models did and delivered results up to 25 times faster than comparable AI approaches.

What this AI advance means for the future of medicine

The new approach could streamline the way new drugs are designed, the researchers said. Instead of trying to predict how every possible change would affect a cell and then looking for a useful drug, PDGrapher directly asks which specific targets can reverse a disease trait. This makes it faster to test ideas and lets researchers focus on a smaller set of promising targets.

This tool could be especially useful for complex diseases fueled by multiple pathways, such as cancer, in which tumors can outsmart drugs that hit just one target. Because PDGrapher identifies multiple targets involved in a disease, it could help circumvent this problem.

Additionally, the researchers said that after careful testing to validate the model, it could one day be used to analyze a patient’s cellular profile and help design individualized treatment combinations.

Finally, because PDGrapher identifies cause-effect biological drivers of disease, it could help researchers understand why certain drug combinations work – offering new biological insights that could propel biomedical discovery even further.

The team is currently using this model to tackle brain diseases such as Parkinson’s and Alzheimer’s, looking at how cells behave in disease and spotting genes that could help restore them to health. The researchers are also collaborating with colleagues at the Center for XDP at Massachusetts General Hospital to identify new drug targets and map which genes or pairs of genes could be affected by treatments for X-linked Dystonia-Parkinsonism, a rare inherited neurodegenerative disorder.

“Our ultimate goal is to create a clear road map of possible ways to reverse disease at the cellular level,” Zitnik said.


Journal reference:

Gonzalez, G., et al. (2025). Combinatorial prediction of therapeutic perturbations using causally inspired neural networks. Nature Biomedical Engineering. https://doi.org/10.1038/s41551-025-01481-x




AI Insights

Why California again backed off on sweeping AI regulation



By Khari Johnson, CalMatters

Reading materials and fliers at the Sacramento Works job training and resources center in Sacramento on April 23, 2024. The center provides help and resources to job seekers, businesses and employers in Sacramento County. Photo by Miguel Gutierrez Jr., CalMatters

This story was originally published by CalMatters. Sign up for their newsletters.

After three years of trying to give Californians the right to know when AI is making a consequential decision about their lives and to appeal when things go wrong, Assemblymember Rebecca Bauer-Kahan said she and her supporters will have to wait again, until next year.

The San Ramon Democrat announced Friday that Assembly Bill 1018, which cleared the Assembly and two Senate committees, has been designated a two-year bill, meaning it can return as part of the legislative session next year. That move will allow more time for conversations with Gov. Gavin Newsom and more than 70 opponents. The decision came in the final hours of the California legislative session, which ends today.

Her bill would require businesses and government agencies to alert individuals when automated systems are used to make important decisions about them, including for apartment leases, school admissions, and, in the workplace, hiring, firing, promotions, and disciplinary actions. The bill also covers decisions made in education, health care, criminal justice, government benefits, financial services, and insurance.

Automated systems that assign people scores or make recommendations can stop Californians from receiving unemployment benefits they’re entitled to, declare job applicants less qualified for arbitrary reasons that have nothing to do with job performance, or deny people health care or a mortgage because of their race.

“This pause reflects our commitment to getting this critical legislation right, not a retreat from our responsibility to protect Californians,” Bauer-Kahan said in a statement shared with CalMatters.

Bauer-Kahan adopted the principles enshrined in the legislation from the Biden administration’s AI Bill of Rights. California has passed more AI regulation than any other state, but it has yet to adopt a law like Bauer-Kahan’s or other measures requiring disclosure of consequential AI decisions, such as the Colorado AI Act or the European Union’s AI Act.

The pause comes at a time when politicians in Washington, D.C., continue to oppose AI regulation that they say could stand in the way of progress. Last week, leaders of the nation’s largest tech companies joined President Trump at a White House dinner to further discuss a recent executive order and other initiatives to prevent AI regulation. Earlier this year, Congress tried and failed to pass a moratorium on AI regulation by state governments.

When an automated system makes an error, AB 1018 gives people the right to have that mistake rectified within 60 days. It also reiterates that algorithms must give “full and equal” accommodations to everyone, and cannot discriminate against people based on characteristics like age, race, gender, disability, or immigration status. Developers must carry out impact assessments to, among other things, test for bias embedded in their systems. If an impact assessment is not conducted on an AI system, and that system is used to make consequential decisions about people’s lives, the developer faces fines of up to $25,000 per violation, or legal action by the attorney general, public prosecutors, or the Civil Rights Department.

Amendments made to the bill in recent weeks exempted generative AI models from coverage under the bill, which could prevent it from impacting major AI companies or ongoing generative AI pilot projects carried out by state agencies. The bill was also amended to delay a developer auditing requirement to 2030, and to clarify that the bill intends to address evaluating a person and making predictions or recommendations about them.

An intense legislative fight

Samantha Gordon, chief program officer at TechEquity, a sponsor of the bill, said she’s seen more lobbyists attempt to kill AB 1018 this week in the California Senate than for any other AI bill ever. She said she thinks AB 1018 had a pathway to passage, but the decision was made to pause in order to work with the governor, who ends his second and final term next year.

“There’s a fundamental disagreement about whether or not these tools should face basic scrutiny of testing and informing the public that they’re being used,” Gordon said.

Gordon thinks it’s possible tech companies will use their “unlimited amount of money” to fight the bill next year.

“But it’s clear,” she added, “that Americans want these protections — poll after poll shows Americans want strong laws on AI and that voluntary protections are insufficient.”

AB 1018 faced opposition from industry groups, big tech companies, the state’s largest health care provider, venture capital firms, and the Judicial Council of California, a policymaking body for state courts.

A coalition of hospitals, Kaiser Permanente, and the health care software and AI company Epic Systems urged lawmakers to vote no on AB 1018, arguing the bill would negatively affect patient care, increase costs, and require developers to contract with third-party auditors to assess compliance by 2030.

A coalition of business groups opposed the bill because of its generalizing language and concern that compliance could be expensive for businesses and taxpayers. The group TechNet, which seeks to shape policy nationwide and whose members include companies like Apple, Google, Nvidia, and OpenAI, argued in a video ad campaign that AB 1018 would stifle job growth, raise costs, and punish the fastest-growing industries in the state.

Venture capital firm Andreessen Horowitz, whose cofounder Marc Andreessen supported the re-election of President Trump, opposed the bill, citing costs and the fact that it seeks to regulate AI in California and beyond.

In an alert sent to lawmakers this week urging a no vote, a policy leader in the state judiciary said the burden of compliance with the bill is so great that the judicial branch risks losing the ability to use pretrial risk assessment tools, such as those that assign recidivism scores to sex offenders and violent felons. The state Judicial Council estimates that passage of AB 1018 would cost the state up to $300 million a year. Similar points were made in a letter to lawmakers last month.

Why backers keep fighting

Exactly how much AB 1018 could cost taxpayers is still a big unknown, due to contradictory information from state government agencies. An analysis by California legislative staff found that if the bill passes, it could cost local agencies, state agencies, and the state judicial branch hundreds of millions of dollars. But a California Department of Technology report covered exclusively by CalMatters concluded in May that no state agencies use high-risk automated systems, despite historical evidence to the contrary. Bauer-Kahan said last month that she was surprised by the financial impact estimates because CalMatters reporting found that automated decision-making system use was not widespread at the state level.

Support for the bill has come from unions who pledged to discuss AI in bargaining agreements, including the California Nurses Association and the Service Employees International Union, and from groups like the Citizen’s Privacy Coalition, Consumer Reports, and the Consumer Federation of California.

Coauthors of AB 1018 include major Democratic proponents of AI regulation in the California Legislature, including Assembly majority leader Cecilia Aguilar-Curry of Davis, author of a bill, now on the governor’s desk, that seeks to stop algorithms from raising prices on consumer goods; Chula Vista Senator Steve Padilla, whose bill to protect kids from companion chatbots awaits the governor’s decision; and San Diego Assemblymember Chris Ward, who previously helped pass a law requiring state agencies to disclose use of high-risk automated systems and this year sought to pass a bill to prevent pricing based on consumers’ personal information.

The anti-discrimination language in AB 1018 is important because tech companies and their customers often see themselves as exempt from discrimination law if the discrimination is done by automated systems, said Inioluwa Deborah Raji, an AI researcher at UC Berkeley who has audited algorithms for discrimination and advised government officials in Sacramento and Washington D.C. about how AI can harm people. She questions whether state agencies have the resources to enforce AB 1018, but also likes the disclosure requirement in the bill because “I think people deserve to know, and there’s no way that they can appeal or contest without it.”

“I need to know that an AI system was the reason I wasn’t able to rent this house. Then I can at an individual level appeal and contest. There’s something very valuable about that.”

Raji said she witnessed corporate influence and pushback when she helped shape a report about how California can balance guardrails and innovation for generative AI development, and she sees similar forces at play in the delay of AB 1018.

“It’s disappointing this [AB 1018] isn’t the priority for AI policy folks at this time,” she told CalMatters. “I truly hope the fourth time is the charm.”

Lawmakers also considered a number of other union-backed bills this session that sought to protect workers from artificial intelligence. For the third year in a row, a bill to require a human driver in autonomous commercial delivery trucks failed to become law. Assembly Bill 1331, which sought to prevent surveillance of workers with AI-powered tools in private spaces like locker and lactation rooms and to place limits on surveillance in break rooms, also failed to pass.

But another measure, Senate Bill 7, passed the Legislature and is headed to the governor. It requires employers to disclose plans to use an automated system 30 days before doing so and allows workers to make annual requests for the data an employer uses for discipline or firing. In recent days, the bill’s author, Senator Jerry McNerney, amended it to remove the right to appeal decisions made by AI and to eliminate a prohibition against employers making predictions about a worker’s political beliefs, emotional state, or neural data. The California Labor Federation supported similar bills in Massachusetts, Vermont, Connecticut, and Washington.

This article was originally published on CalMatters and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.




AI Insights

Best Artificial Intelligence Stocks To Keep An Eye On – September 12th – MarketBeat


AI Insights

Malaysia and Zetrix AI Partner to Build Global Standards for Shariah-Compliant Artificial Intelligence



JOHOR BAHRU, Malaysia, Sept. 13, 2025 /PRNewswire/ — In a significant step towards Islamic values-based artificial intelligence, Zetrix AI Berhad, developer of the world’s first Shariah-aligned Large Language Model (LLM), NurAI, and the Government of Malaysia, through the Prime Minister’s Department (Religious Affairs), today signed a Letter of Intent (LOI) to collaborate on establishing the foremost global framework for Shariah compliance, certification and governance in AI. The ceremony was witnessed by Prime Minister YAB Dato’ Seri Anwar Ibrahim.

Building Trust in NurAI

JAKIM, Malaysia’s Department of Islamic Development, is internationally recognised as the gold standard in halal certification, accrediting foreign certification bodies across nearly 50 countries. Malaysia has consistently ranked first in the Global Islamic Economy Indicator, reflecting its leadership not only in halal certification but also in Islamic finance, food and education. By integrating emerging technologies such as AI and blockchain to enhance compliance and monitoring, Malaysia continues to set holistic benchmarks for the global Islamic economy.

NurAI has already established itself as a pioneering Shariah-aligned AI platform. With today’s collaboration, JAKIM, under the Ministry’s leadership, would play a central role in guiding the certification, governance and ethical standards of NurAI, ensuring its alignment with Islamic principles.

Additionally, this milestone underscores the urgent need for AI systems that move beyond secular or foreign-centric worldviews, offering instead a platform rooted in Islamic ethics. It positions Malaysia as a global leader in ethical and Shariah-compliant AI while setting international benchmarks. The initiative also reflects the country’s halal and digitalisation agendas, ensuring AI remains trusted, secure, and representative of Muslim values while serving more than 2 billion people worldwide.

Prime Minister YAB Dato’ Seri Anwar Ibrahim reinforced that national policies should incorporate various inputs, including digitalisation and artificial intelligence — and must always remain grounded in Islamic principles and values, which he said deserve emphasis.

Areas of Collaboration

Through the LOI, Zetrix AI and the Government, via JAKIM, propose to collaborate in three key areas:

  • Shariah Certification and Governance — Developing frameworks, ethical guidelines and certification standards for AI systems rooted in Islamic principles.
  • Global Advocacy and Promotion — Positioning Malaysia as the global centre of excellence for Islamic AI and championing the Islamic digital economy projected at USD 5.74 trillion by 2030.
  • JAKIM’s Official Channel on NurAI — Creating a trusted platform for Islamic legal rulings, halal certification and verified Shariah guidance, combating misinformation through AI.

Reinforcing Global Halal Tech Leadership

Through this collaboration, NurAI demonstrates how advanced AI can be guided by ethical and faith-based principles to serve global communities. By extending halal leadership into the digital economy — particularly in Islamic finance, education and law — Malaysia positions itself as a key contributor to setting international benchmarks for Shariah-compliant AI.

Inclusive, Secure and Cost-Effective AI

NurAI is developed in Malaysia, supporting Bahasa Melayu, English, Indonesian and Arabic. It complies with national data sovereignty and cybersecurity policies, reducing reliance on foreign tools while ensuring AI knowledge stays local, trusted, and secure.

NurAI is available for download at nur-ai.zetrix.com.

About Zetrix AI Berhad

Zetrix AI Berhad (“Zetrix AI”), formerly known as MY E.G. Services Berhad, is leading the way in the deployment of blockchain technology and artificial intelligence in powering the public and private sectors across ASEAN. Headquartered in Malaysia, Zetrix AI started operations in 2000 as a pioneer in the provision of electronic government services and complementary commercial offerings in its home country. Today, it has advanced to the forefront of technology transformation in the broader region, leveraging its Layer-1 blockchain platform Zetrix and embracing the convergence of Web3, AI and robotics to enable optimally efficient, intelligent and secure cross-border transactions, digital identity interoperability and automation solutions that seamlessly connect people, businesses and governments.

SOURCE Zetrix AI Berhad


