
AI Insights

The promise and peril of AI-powered medicine


In the search for promising new drug treatments, the pathway from laboratory to pharmacy is typically expensive, time-consuming, and uncertain.

It’s estimated that it can take up to 15 years and more than $2 billion to bring each approved drug to market.

However, artificial intelligence (AI), with its unparalleled ability to analyse vast datasets, holds out the promise of a speedier, less arduous drug development process.

Research published in Nature Medicine highlights the multiple benefits that flow from bringing AI to different tasks in drug development.

These tasks include identifying disease biomarkers and potential drug targets, simulating drug–target interactions, predicting the safety and efficacy of drug candidates, and managing clinical trials.

But amid the optimism, there are growing calls for caution.

Current uses

In Australia, biotech giant CSL is using AI to accelerate drug development, aiming to produce more personalised and effective treatments for serious diseases.

Meanwhile, CSIRO’s new Virga supercomputer is also being used to expedite early drug discovery.

At Moderna, AI is integrated across the drug discovery and development pipeline by leveraging a strong digital foundation built on cloud infrastructure, data integration, automation, and advanced analytics, says Brice Challamel, Moderna’s Head of AI and Product Innovation.

At the earliest stages, such as target identification and mRNA design, AI-driven machine learning models can help optimise the design of mRNA sequences for efficiency, stability, and protein expression.

“This is crucial because there are billions of possible mRNA designs for any given protein, and AI helps navigate this complexity beyond traditional science alone,” Challamel explains via email.
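To get a feel for why the design space balloons, here is a toy sketch in Python. It is purely illustrative and not Moderna’s method: even a four-residue peptide has dozens of synonymous mRNA encodings, and a real model would score full-length sequences with learned metrics rather than the crude GC-content proxy used here.

```python
# Toy illustration (not Moderna's method): the same peptide can be encoded by
# many synonymous mRNA sequences, and a model must score and search among them.
from itertools import product

# Hypothetical codon table fragment: amino acid -> synonymous RNA codons
CODONS = {
    "M": ["AUG"],
    "F": ["UUU", "UUC"],
    "L": ["UUA", "UUG", "CUU", "CUC", "CUA", "CUG"],
    "S": ["UCU", "UCC", "UCA", "UCG", "AGU", "AGC"],
}

def gc_content(seq: str) -> float:
    """Crude stand-in for a learned stability/expression score."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def best_design(peptide: str) -> tuple[str, float]:
    """Exhaustively score every synonymous encoding of a short peptide."""
    candidates = ("".join(c) for c in product(*(CODONS[aa] for aa in peptide)))
    return max(((s, gc_content(s)) for s in candidates), key=lambda x: x[1])

# 1 * 2 * 6 * 6 = 72 designs for just four residues; a full protein has
# astronomically more, which is why exhaustive search gives way to learned models.
print(best_design("MFLS"))
```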

In later development stages, AI helps with data analysis, suggests next steps, and improves operational efficiency.

It also supports manufacturing and supply chain processes by providing real-time insights and supporting automation design and documentation.

AI can be used effectively even in complex diseases such as cancer.

AI-driven machine learning models can help analyse vast biological data and generate novel hypotheses, with the aim of improving both the efficiency and stability of precision medicines.

Brice Challamel

Challamel points to Moderna’s development program for individualized neoantigen therapies (INT) for cancers.

AI algorithms rapidly analyse sequencing data from patients’ tumours, which are unique to each person, like a ‘fingerprint’, along with blood samples to identify mutations and predict neoantigens (mutated proteins that are likely to trigger an immune response).

“This step, which can be time-consuming and complex, is streamlined using an integrated, AI-driven process with expert human oversight,” he says.

“Based on this analysis, our scientists work with the AI to select up to 34 neoantigens and design an mRNA sequence that gives cells instructions to produce these cancer-specific proteins.

“The goal is to train the immune system to recognise these tumour ‘fingerprint’ proteins and mount a targeted immune response against the cancer.”
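Moderna’s pipeline itself is proprietary, but the selection step Challamel describes can be pictured with a minimal, entirely hypothetical sketch: rank candidate mutated peptides by a model’s predicted immunogenicity score and keep at most 34 of them.

```python
# Minimal, hypothetical sketch of the selection idea (not Moderna's pipeline):
# rank tumour-specific mutated peptides by a predicted immunogenicity score
# and keep up to 34 for inclusion in a personalised mRNA construct.
from dataclasses import dataclass

@dataclass
class Candidate:
    peptide: str   # mutated ("neoantigen") peptide inferred from tumour sequencing
    score: float   # hypothetical model output, e.g. predicted immune response

def select_neoantigens(candidates: list[Candidate], limit: int = 34) -> list[Candidate]:
    """Keep the highest-scoring candidates, up to the design limit."""
    ranked = sorted(candidates, key=lambda c: c.score, reverse=True)
    return ranked[:limit]

# Invented demo values, purely to show the ranking step
demo = [Candidate("PEPTIDE_A", 0.91), Candidate("PEPTIDE_B", 0.42), Candidate("PEPTIDE_C", 0.77)]
print([c.peptide for c in select_neoantigens(demo)])
```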

Caution advised

Yet Challamel argues that the most critical piece in using any form of AI for drug development is ensuring robust human oversight and transparency at every step.

“We operate in a highly regulated industry where decisions impact patient safety, so it’s essential that AI tools are never used in isolation,” he explains.

“Every output is reviewed through structured human ‘expert-in-the-loop’ processes, or employees who are qualified against that particular workflow.”

Meanwhile, cross-functional governance ensures decisions are traceable, explainable, and aligned with regulatory expectations.

“Transparency is non-negotiable as regulators need to understand how AI-derived insights are generated, including the data inputs, assumptions, and review steps behind them,” he adds.

“We document this thoroughly and ensure all decisions involving AI are supported by clear, auditable evidence.”

Others warn that the potential for AI to revolutionise medical discovery may only be fully realised if some important guard rails are put in place.

A study published earlier this year in Fundamental Research highlighted the importance of data quality, algorithm training, and ethical considerations, particularly in the handling of patient data during clinical trials.

Data quality demands

There are many risks, cautions, and challenges associated with the use of AI.

In high-stakes fields like drug discovery, diversity of data is essential to avoid errors and biases.

A 2023 review published in Pharmaceuticals found that the availability of suitable and sufficient data is essential for the accuracy and reliability of results.

“Scientifically, data quality and integration are foundational because AI is only as good as the data it’s trained on,” says Challamel.

“We’ve invested heavily in digital infrastructure to ensure clean, consistent, and accessible datasets across functions.”

High failure rates

Although AI is helping to fast-track drug development, it has so far failed to move the needle on the 90 per cent failure rate of drug candidates in clinical trials.

Tony Kenna, President of the Australian Society for Medical Research (ASMR), says he is not yet aware of any real benefits from applying AI tools to clinical trial data.

“There are ongoing studies evaluating whether AI models can create digital twins—virtual patient models based on historical public data—to predict disease progression and treatment effects,” he says.

As highlighted in Communications Medicine, this may allow for smaller, more efficient trials with fewer patients in control groups, improving statistical power and reducing trial duration.
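The digital-twin idea can be sketched loosely as follows, using synthetic data and a plain linear model rather than any published trial methodology: fit a prognostic model on historical control-arm outcomes, then use its per-patient predictions as a stand-in comparator for newly enrolled patients.

```python
# Loose sketch of the "digital twin" idea: fit a prognostic model on historical
# control-arm data, then predict each new trial patient's expected outcome under
# no treatment. Synthetic data and ordinary least squares; real systems are far richer.
import numpy as np

rng = np.random.default_rng(0)

# Historical controls: baseline covariates (e.g. age, a biomarker) -> observed outcome
X_hist = rng.normal(size=(500, 2))
y_hist = 2.0 * X_hist[:, 0] - 1.0 * X_hist[:, 1] + rng.normal(scale=0.5, size=500)

# Fit the prognostic model by least squares (intercept + two covariates)
coef, *_ = np.linalg.lstsq(np.c_[np.ones(500), X_hist], y_hist, rcond=None)

# "Twin" prediction for new trial patients: expected outcome without treatment
X_new = rng.normal(size=(5, 2))
twin_outcome = np.c_[np.ones(5), X_new] @ coef

# Treatment effect estimate = observed outcome on treatment minus twin prediction
observed_on_treatment = twin_outcome + 1.5 + rng.normal(scale=0.5, size=5)
print("estimated per-patient effects:", observed_on_treatment - twin_outcome)
```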

Kenna also points to the work of QuantHealth, which is using AI trained on data from 350 million patients and 700,000 therapeutics to simulate trials.

“I’m not aware of any tangible outcomes from this yet though,” he adds.

Shane Huntington OAM, CEO of ASMR, says the drug discovery pipeline often results in large numbers of pharmaceuticals which have efficacy below a critical threshold.

“For many people, these drugs work beautifully,” he says.

“The problem is to determine who they work for prior to use.”

Genetic tests for a relatively small number of drugs are currently available – and though costly, they can provide crucial information on who should use them.

“Given the enormous amount of money already invested in drugs that are ‘sitting on the shelf’ – genetic assessment, perhaps supported by AI, should be a focus,” Huntington adds.

Blind spots

AI tools do tend to have blind spots, which often spring from limitations in data quality, data availability, and the complexity of biological systems, says Kenna.

“Not all research papers are created equal,” he explains.

“A trained scientist can assess critical elements of quality in a study such as sample quality, cohort selection, and appropriateness of statistical methods, to determine the robustness of the published findings.

“AI tools are poor at discriminating good from bad science so the models can include both robust and poor-quality data which will likely impact the strength of the application of the AI tools.”

Kenna adds that negative data from failed experiments, which is critical for training robust models, is underreported.

Misuse

Awareness is growing that AI tools used in drug design can become dangerous in the absence of ethical and legal frameworks.

In 2022, scientists Sean Ekins and Fabio Urbina wrote about their ‘Dr Evil’ project in Nature Machine Intelligence.

They demonstrated how an algorithm designed to identify therapeutic compounds could be inverted to design candidate chemical weapons.

According to Huntington, the misuse of AI in other, less regulated industries is already causing major issues.

He refers to the major lawsuit recently lodged by several of the big motion picture companies over copyright infringement.

“There will be similar issues around IP if AI systems are not carefully restricted in terms of what data they have access to – in some regards stripping them of their greatest advantage,” he says.

Overconfidence

Machine learning tools often fail to quantify the uncertainty in their outputs, leading to confident but misleading predictions.

An editorial published in Nature in 2023 warned that systems based on generative AI, which use patterns learnt from training data to generate new data with similar characteristics, could be problematic.

It noted that the chatbot ChatGPT sometimes fabricates answers.

“In drug discovery, the equivalent problem leads it to suggest substances that are impossible to make,” the editorial stated.

Kenna says that risks from hallucinations and AI errors can be mitigated by keeping ‘humans in the loop’ and ensuring expert review before decision making.

“AI tools should be helping experts, not replacing them,” he says.
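One generic mitigation, sketched below with made-up data rather than any particular vendor’s system, is to train an ensemble of models and treat their disagreement as an uncertainty signal that routes a prediction to expert review instead of automatic acceptance.

```python
# Generic sketch of "uncertainty via ensemble disagreement": bootstrap several
# simple models and use the spread of their predictions to flag cases that
# should go to a human expert rather than being acted on automatically.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=200)

def fit(Xb, yb):
    """Least-squares linear model on a bootstrap resample."""
    return np.linalg.lstsq(Xb, yb, rcond=None)[0]

models = []
for _ in range(25):
    idx = rng.integers(0, len(X), size=len(X))
    models.append(fit(X[idx], y[idx]))

x_query = rng.normal(size=(1, 3))
preds = np.array([x_query @ m for m in models]).ravel()
mean, spread = preds.mean(), preds.std()

REVIEW_THRESHOLD = 0.2   # arbitrary here; would be calibrated in practice
print(f"prediction {mean:.2f} ± {spread:.2f}",
      "-> send to expert review" if spread > REVIEW_THRESHOLD else "-> auto-accept")
```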

Security and privacy

Challamel also notes that security of data is key.

“We’ve complemented public AI tools with secure, internal enterprise solutions which keep sensitive data isolated and protected,” he says.

“In short, AI can be a powerful accelerator, but only when paired with rigorous human oversight, transparency, and compliance.”






AI Insights

Cerillion hails recognition in Gartner artificial intelligence reports


(Alliance News) – Cerillion PLC on Tuesday said it has been recognised in two recently published artificial intelligence reports from Gartner Inc, a Connecticut-based research and advisory firm.

The London-based billing, charging and customer relationship management software provider said it was named in Gartner’s Magic Quadrant for AI in communication service provider customer and business operations report, and in a report on critical capabilities for AI in the same sector.

Cerillion said Gartner evaluated vendors across criteria including market understanding, product strategy, sales strategy, innovation and customer experience.

The firm said it believes its inclusion in the reports follows its ongoing investment in AI-powered capabilities.

“We’re delighted to be recognised in these new Magic Quadrant and Critical Capabilities reports for AI in CSP customer and business operations,” said Chief Executive Officer Louis Hall.

“We believe it validates our ongoing strategy of embedding advanced AI into our [software-as-a-service]-based, composable [business support systems]/[operations support systems] suite to help CSPs streamline operations, enhance customer experience and accelerate innovation.”

Shares in Cerillion were up 2.0% at 1,397.50 pence in London on Tuesday morning.

By Michael Hennessey, Alliance News reporter

Comments and questions to newsroom@alliancenews.com

Copyright 2025 Alliance News Ltd. All Rights Reserved.




AI Insights

Salesforce cuts 4,000 jobs with AI — CEO calls AGI overhyped


At the beginning of this year, Salesforce CEO Marc Benioff indicated that the company was seriously debating whether to hire any software engineers in 2025, revealing that the tech firm was using AI to do up to 50% of its work and citing significant productivity gains from agentic AI.

During a recent episode of The Logan Bartlett Show, Benioff said that AI is on course to replace humans in the workplace, noting that the technology is helping bolster the company’s sales by augmenting its customer support division, which prompted the firm to cut support staff from 9,000 to 5,000 (via Business Insider).

It’s been eight of the most exciting months of my career. I was able to rebalance my head count on my support. I’ve reduced it from 9,000 heads to about 5,000 because I need less heads.

Salesforce CEO, Marc Benioff




AI Insights

AI companies must honour a foundational rule of the internet: Respecting site owners’ wishes on content


Photo: AI tools are diverting traffic and attention away from Wikipedia and other websites, starving them of the revenue that they need to survive, researchers have concluded. (Gregory Bull/The Associated Press)

Viet Vu is manager of economic research at the Dais at Toronto Metropolitan University.

One foundational rule of the internet has been the social contract between website owners and search giants such as Google.

Owners would agree to let search-engine bots crawl and index their websites for free, in return for their sites showing up in search results for relevant queries, driving traffic. If they didn’t want their website to be indexed, they could politely note it in what’s called a “robots.txt” file on their server, and upon seeing it, the bot would leave the site unindexed.

Without intervention from any political or legal bodies, most companies have complied (with some hiccups) with the voluntary standard since its introduction in 1994. That is, until the latest generation of large language models, or LLMs, emerged.

LLMs are data-hungry. A previous version of ChatGPT, OpenAI’s chatbot model, was reportedly trained with data equivalent to 10 trillion words, or enough to fill more than 13 billion Globe and Mail op-eds. It would take a columnist writing daily for 36.5 million years to generate sufficient data to train that model.


To satisfy this need, artificial-intelligence companies have to look to the broader internet for high-quality text input. As it turns out, there aren’t nearly enough websites that allow bots to collect data to create better chatbots.

Some AI companies state that they respect robots.txt. In some cases, their public statements are allegedly at odds with their actual practices. But even when these pledges are genuine, companies benefit from another loophole in robots.txt: To block a bot, the website owner must be able to specify the bot’s name. And many AI companies’ bots’ names are only disclosed or discovered after they have crawled freely through the internet.
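The mechanism, and the naming loophole, can be illustrated with Python’s standard-library robots.txt parser; the bot names below are invented:

```python
# Illustration of the robots.txt mechanism and its naming loophole, using
# Python's standard-library parser. The bot names are invented.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: KnownAIBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# A bot the site owner knows about and has named is blocked...
print(parser.can_fetch("KnownAIBot", "https://example.com/article"))        # False
# ...but a crawler whose name was never disclosed falls through to the '*' rule.
print(parser.can_fetch("UndisclosedAIBot", "https://example.com/article"))  # True
```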

The impact of these bots is profound. Take Wikipedia, for example: Its content is freely licensed, allowing bots to crawl through its trove of high-quality information. Between January, 2024, and April, 2025, the non-profit found that its multimedia downloads increased by 50 per cent, driven in large part by AI companies’ bots downloading licence-free images to train their products.

Wikipedia says the bot traffic is adding to its operating costs. The extra load might at least have paid for itself if these chatbots directed users to a page inviting them to contribute to the donation-funded organization.

Instead, researchers have concluded that as AI tools divert traffic and attention away from Wikipedia and other websites, those sites are increasingly starved of the revenue that they need to survive, even as the tools themselves rely on those sites for input. Figures published by Similarweb indicate a 22.7-per-cent drop in Wikipedia’s overall traffic between 2022 and 2025.

Instead of a quid pro quo of letting search engines crawl websites and serving traffic to them through search results, the current arrangement means websites face increased costs while seeing fewer actual visitors, and while also providing the resources to train ever-improving AI chatbots. Without intervention, this threatens to create a vicious cycle where AI products drive websites to shut down, destroying the very data AI needs to improve further.

In response, companies such as Cloudflare, a web-services provider, have started treating AI bots as hackers trying to compromise their customers’ cybersecurity. In August, Cloudflare alleged that Perplexity, a leading AI company, is actively developing new ways to hide its crawling activities to circumvent existing cybersecurity barriers.

A technical showdown against the largest and most well-resourced technology companies in the world isn’t sustainable. The obvious solution would be for these AI companies to honour the trust-based mechanism of robots.txt. However, given competitive pressures, companies aren’t incentivized to do the right thing.

Individual countries are also struggling to find ways to protect individual content creators. For example, Canada’s Online News Act was an attempt to compel social-media companies to compensate news organizations for lost revenue. Instead of achieving that goal, Ottawa learned that companies such as Meta would rather remove Canadians’ access to news on their platforms than compensate publishers.

Our next-best bet is an international agreement, akin to the Montreal Protocol, which bound countries to co-ordinate laws phasing out substances that eroded the Earth’s ozone layer. Absent American leadership, Canada should lead in establishing a similar protocol for AI bots. It could encourage countries to co-ordinate legislative efforts compelling companies to honour robots.txt instructions. If all tech companies around the world had to operate under common rules, it would level the playing field by removing the competitive pressure to race to the bottom.

AI technology can, and should, benefit the world – but it cannot do so by breaking the internet.


