AI Research

Challenges in AI-Driven Virtual Cells for Cancer Research


Artificial intelligence (AI) is increasingly penetrating various scientific disciplines, with its applications in healthcare and cancer research sparking both excitement and skepticism. A recent article authored by Ardila and Yadalam delves into the intricacies of AI virtual cells and their unresolved questions in cancer research. With technology evolving at a breakneck speed, the collaboration between AI and biological sciences could usher in revolutionary breakthroughs in cancer diagnostics and treatment. However, it’s essential to address the shortcomings and ethical questions that surround the application of AI in this sensitive field.

The authors begin by discussing the potential of AI in enhancing the traditional methodologies used in cancer research. In recent years, the deployment of machine learning algorithms has brought forth new ways to analyze complex datasets, leading to intriguing hypotheses about cancer cell behavior. AI’s ability to parse through extensive genomic, proteomic, and metabolomic data opens a new frontier, where traditional lab-based techniques fall short due to their time-consuming nature. However, while AI can augment these efforts, researchers must tread carefully as they navigate the limitations of these technologies.

One of the central components explored in the article is the concept of virtual cells. These AI-driven simulations are designed to mimic biological processes, allowing scientists to observe interactions and cellular events without the constraints of physical experiments. The use of virtual cells is particularly attractive for cancer research, as these models can be manipulated to reflect various environmental factors that may influence tumor growth. This kind of autonomy in experimentation can streamline the identification of therapeutic targets and accelerate the drug discovery timeline dramatically.

Nonetheless, there are unresolved challenges that accompany the use of virtual cells in cancer research that warrant discussion. Although AI can generate realistic models based on existing data, the fidelity of these simulations relies heavily on the accuracy and comprehensiveness of the input datasets. Gaps in data can result in misleading conclusions, and this variability poses a risk to the integrity of research findings. The authors stress the importance of curating and standardizing data sets to ensure that AI-generated simulations yield dependable results.

Another significant issue raised in the article pertains to the interpretability of AI models. While cutting-edge algorithms can produce results that may seem promising, it is the ‘black box’ nature of many machine learning models that raises concerns among scientists. How can researchers trust AI conclusions when the decision-making process remains opaque? This gap in understanding can hinder collaboration between data scientists and domain experts, ultimately affecting the successful integration of AI into traditional cancer research paradigms. It’s crucial that the scientific community finds ways to bridge this gap, ensuring clarity and transparency in AI methodologies.

Moreover, ethical considerations play a pivotal role in the utilization of AI in medical research, particularly in sensitive areas such as oncology. With virtual models that simulate human biology, there are myriad ethical questions to confront. Should AI-generated research findings undergo the same rigorous ethical scrutiny as traditional experiments? Given the potential consequences of AI’s conclusions on patient treatment, establishing a robust ethical framework becomes increasingly critical. The authors call for an interdisciplinary approach where ethics, AI technology, and biology are discussed collaboratively to develop guidelines that uphold the integrity of research.

Ardila and Yadalam also highlight the generative capabilities of AI in shaping new hypotheses that may have been overlooked by conventional methodologies. The patterns AI identifies could lead to novel insights into cancer biology and ultimately drive innovation in therapeutic approaches. However, as promising as this sounds, the authors caution against over-reliance on AI-generated hypotheses without rigorous experimental validation. Forging an effective pipeline from AI-generated discovery to laboratory verification is paramount to confirming that these findings are genuinely applicable in clinical settings.

In light of the technological advances in AI, researchers are increasingly discussing the potential of federated learning as a tool to enhance virtual cell accuracy. By aggregating learning from multiple healthcare institutions while maintaining data privacy, federated learning could effectively harness a diverse array of datasets. This collaborative approach holds the promise of overcoming data transparency challenges and facilitating improvements in machine learning models, thus enhancing their applicability in cancer research.
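The mechanics of federated learning can be sketched briefly. In the most common scheme, federated averaging (FedAvg), each institution trains a model on its own data and shares only the resulting weights; a coordinator then averages those weights, weighted by cohort size, so no patient records ever leave a site. The toy example below uses a simple linear model and synthetic data; all names and numbers are illustrative and not drawn from the article.

```python
# Illustrative sketch of federated averaging (FedAvg): each institution
# trains locally and shares only model weights, never raw patient data.
# The data, model, and hyperparameters here are hypothetical.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One institution's local training pass (linear model, squared loss)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(global_w, institution_data):
    """Aggregate locally trained weights, weighted by each site's sample count."""
    updates, sizes = [], []
    for X, y in institution_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0])           # ground truth the sites jointly recover
sites = []
for n in (40, 60, 100):                  # three hospitals with different cohort sizes
    X = rng.normal(size=(n, 2))
    sites.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))

w = np.zeros(2)
for _ in range(30):                      # communication rounds
    w = federated_average(w, sites)
print(np.round(w, 2))
```

In a real deployment the local model would be far richer and the weight exchange would run over a secure channel, but the privacy-preserving structure is the same: sites share weights, never data.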

As the discourse continues to unfold in the scientific community, the authors emphasize that education and training for researchers in both biology and AI are imperative. Interdisciplinary educational programs that equip researchers with a dual understanding of life sciences and advanced technologies will foster an environment ripe for innovation. Encouraging collaboration among experts from artificial intelligence, data science, and oncology will not only improve the application of virtual cells but also cultivate a culture of open dialogue concerning the ethical and methodological challenges presented by these new technologies.

Regardless of the promises AI holds, the challenge of reproducibility in research remains at the forefront. The authors detail that a collaborative effort to develop standardized protocols and methodologies will bolster confidence in AI-generated results. Without addressing reproducibility, the scientific community risks diluting the credibility of findings stemming from AI research. Collaborative verification of results across institutions can strengthen the foundation upon which AI research stands.

In the broader context, the impactful role of AI in medical research trends towards increased personalization of treatment. By utilizing AI to create patient-specific virtual cells, oncologists may more accurately predict how individual tumors respond to various therapies. The potential for tailoring treatments to specific genetic and phenotypic profiles may soon become standard practice, revolutionizing the field of oncology. However, realizing this future demands steadfast research efforts and multidisciplinary cooperation, which the authors argue will ultimately benefit patients.

In summary, Ardila and Yadalam’s article sheds light on the anticipatory role of AI in cancer research, while highlighting the intricate challenges that must be addressed to unlock its full potential. By tackling unresolved questions surrounding virtual cells and synthesizing input from various disciplines, researchers can ensure that their findings are credible, ethical, and transformative. As the collaboration between AI and biology evolves, the scientific community stands on the precipice of a new era that could redefine how we understand and combat cancer.

In conclusion, the advent of AI in medical research, particularly in the domain of oncology, presents a landscape rich with potential and unprecedented challenges. As the scientific community collectively navigates this terrain, it remains imperative to foster transparency, rigor, and ethical considerations in research methodologies. Moving forward, continued exploration of virtual cell technologies, combined with ethical frameworks, interdisciplinary collaboration, and an unwavering commitment to scientific integrity, will ultimately serve to enhance patient care in the realm of cancer treatment.

Subject of Research: Unresolved Questions in the Application of Artificial Intelligence Virtual Cells for Cancer Research

Article Title: Unresolved questions in the application of artificial intelligence virtual cells for cancer research

Article References:

Ardila, C.M., Yadalam, P.K. Unresolved questions in the application of artificial intelligence virtual cells for cancer research. Military Med Res 12, 19 (2025). https://doi.org/10.1186/s40779-025-00608-0

Image Credits: AI Generated

DOI: 10.1186/s40779-025-00608-0

Keywords: Artificial intelligence, virtual cells, cancer research, machine learning, ethical considerations, federated learning, reproducibility, personalized treatment.

Tags: AI in cancer research, breakthroughs in cancer treatment with AI, challenges in AI-driven simulations, collaboration between AI and biological sciences, enhancing traditional cancer research methodologies, ethical implications of AI in biology, genomic analysis using AI, limitations of AI in cancer diagnostics, machine learning algorithms for healthcare, proteomic data interpretation with AI, skepticism surrounding AI applications in medicine, virtual cells in oncology




OpenAI Acquires Start-up StatSig… CEO Appointed CTO… “AI Quality Most Important… Make Safe and Useful AI”


OpenAI, which leads the global artificial intelligence (AI) market, has invested a large sum to acquire a startup. Analysts say the move reflects OpenAI’s recognition of a serious emerging social problem: ChatGPT users developing delusions, or even dying by suicide, after prolonged conversations with the chatbot.

According to the information technology (IT) industry on the 4th, OpenAI announced the previous day that it would acquire the startup StatSig for $1.1 billion (about 1.5 trillion won). The transaction will be paid entirely in stock.

Founded in 2021, StatSig runs a platform that helps developers verify the effectiveness and impact of software feature changes. Typically, it releases a new feature to a subset of users, tests their responses against users on the existing version, and then refines the feature accordingly before a full rollout.

StatSig CEO Vijai Raj will be appointed OpenAI’s Chief Technology Officer (CTO) of Applications and is expected to take charge of application engineering. The acquisition must still be reviewed and approved by regulators.

OpenAI has been pursuing large-scale mergers and acquisitions this year. In July, it bought io, the AI hardware startup of Johnny Ive, who served as Apple’s design chief, for $6.5 billion (about 9 trillion won), and it attempted to acquire the AI coding startup Windsurf for $3 billion (about 4 trillion won), but that deal fell through.

“Creating intuitive, safe, and useful generative AI requires a strong engineering system, rapid iteration, and a long-term focus on quality and stability,” an OpenAI official said. “We will improve our AI models so they better recognize and respond to signals that users are in mental or emotional distress.”

Controversy over ‘AI psychosis’… protections for dangerous conversations introduced

Sam Altman, CEO of OpenAI. [Photo = Yonhap News]

Recently, ‘AI psychosis’ has become a major topic in the global AI industry. The term refers to the phenomenon of losing touch with reality or developing delusions while interacting with AI. It is not an official medical diagnosis but a newly coined word.

For example, last month it emerged that American teenager Adam Lane had confided suicidal urges to ChatGPT-4o, discussed his suicide plan with it, and then acted on that plan. Lane’s parents filed a lawsuit against OpenAI, claiming ChatGPT was responsible for their son’s death.

OpenAI acknowledged the system defect, saying that long, repeated conversations can erode ChatGPT’s safety guardrails. In response, it plans to work with experts in various fields to strengthen ChatGPT’s user protection features, and then to introduce an AI model focused on a safe use environment within the year.

First, to contain sensitive and dangerous conversations, ChatGPT will automatically switch from the general model to a reasoning model when a distress signal is detected. Because the reasoning model takes more time than the general model to understand context before answering, it can respond more appropriately to such anomalies.
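The switching behaviour described here amounts to a routing rule in front of the models. The snippet below is purely illustrative: the keyword list is a stand-in for whatever distress classifier OpenAI actually uses, which has not been made public, and the model names are placeholders.

```python
# Hypothetical sketch of the routing step described above: when a distress
# signal is detected, hand the exchange to a slower reasoning model instead
# of the general model. Keyword matching is a stand-in for a real classifier.
DISTRESS_MARKERS = {"hopeless", "hurt myself", "no way out", "end it"}

def detect_distress(message: str) -> bool:
    """Crude distress check: does the message contain any marker phrase?"""
    text = message.lower()
    return any(marker in text for marker in DISTRESS_MARKERS)

def route(message: str) -> str:
    """Return which (placeholder) model should answer this message."""
    return "reasoning-model" if detect_distress(message) else "general-model"

assert route("What's the weather tomorrow?") == "general-model"
assert route("I feel hopeless and see no way out") == "reasoning-model"
```

In production such a check would be a learned classifier with far higher recall than keyword matching, but the control flow, classify first and then route, is the part the article describes.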

In particular, the plan focuses on protecting young people. Parents and children will be able to link their accounts, giving parents the authority to review their children’s conversations and to delete conversation records. If a child appears emotionally distressed, a warning notification is sent to the parents. Age-specific rules of conduct will also be applied.

Meta also held a roundtable on online safety for youth and women. Meta has been reflecting user feedback in its services since introducing youth accounts. A location notification function has been added to direct messages (DMs) to show the other party’s country, with the aim of preventing sexual exploitation and fraud. To curb the spread of private photos, Meta has introduced a function that sends a warning message and automatically blurs nude photos when they are detected. It is also working to detect ads that use AI to synthesize and distort photos of people.

Meanwhile, as AI becomes part of everyday life, demands that AI companies uphold ethical safeguards are expected to grow. According to WiseApp and Retail, monthly active users (MAU) of the ChatGPT app in Korea exceeded 20.31 million as of last month, a fivefold increase from the same month last year (4.07 million) and nearly half the MAU of KakaoTalk (51.2 million), the messenger used by virtually the whole nation.

“AI does not experience emotions, but it can learn conversation patterns and react as if it did,” said Dr. Zeev Benzion of Yale University. “It should be designed to remind users that AI is not a therapist or a substitute for human relationships.”




AI to reshape India’s roads? Artificial intelligence can take the wheel to fix highways before they break, ETInfra


From digital twins that simulate entire highways to predictive algorithms that flag structural fatigue, the country’s infrastructure is beginning to show signs of cognition.

In India, a pothole is rarely just a pothole. It is a metaphor, a mood and sometimes, a meme. It is the reason your cab driver mutters about karma and your startup founder misses a pitch meeting because the expressway has turned into a swimming pool. But what if roads could detect their own distress, predict failures before they happen, and even suggest how to fix them?

That is not science-fiction but the emerging reality of AI-powered infrastructure.

According to KPMG’s 2025 report, AI-powered road infrastructure transformation: Roads 2047, artificial intelligence is slowly reshaping how India builds, maintains, and governs its roads.

From concrete to cognition

India’s road network spans over 6.3 million kilometres – second only to the United States. As per KPMG, AI is now being positioned not just as a tool but as a transformational layer. Technologies like Geographic Information Systems (GIS), Building Information Modelling (BIM), and sensor fusion are enabling digital twins – virtual replicas of physical assets that allow engineers to simulate stress, traffic, and weather impact in real time. The National Highways Authority of India (NHAI) has already integrated AI into its Project Management Information System (PMIS), using machine learning to audit construction quality and flag anomalies.

Autonomous infrastructure in action

Across urban India, infrastructure is beginning to self-monitor. Pune’s Intelligent Traffic Management System (ITMS) and Bengaluru’s adaptive traffic control systems are early examples of AI-driven urban mobility.

Meanwhile, AI-MC, launched by the Ministry of Road Transport and Highways (MoRTH), uses GPS-enabled compactors and drone-based pavement surveys to optimise road construction.

Beyond cities, state-level initiatives are also embracing AI for infrastructure monitoring. As reported by ETInfra earlier, Bihar’s State Bridge Management & Maintenance Policy, 2025 employs AI and machine learning for digital audits of bridges and culverts. Using sensors, drones, and 3D digital twins, the state has surveyed over 12,000 culverts and 743 bridges, identifying damaged structures for repair or reconstruction. IIT Patna and Delhi have been engaged for third-party audits, showing how AI can extend beyond roads to critical bridge infrastructure in both urban and rural contexts.

While these examples demonstrate the potential of AI-powered maintenance, challenges remain. Predictive maintenance, KPMG notes, could reduce lifecycle costs by up to 30 per cent and improve asset longevity, but much of rural India—nearly 70 per cent of the network—still relies on manual inspections and paper-based reporting.

Governance and the algorithm

India’s road safety crisis is staggering: over 1.5 lakh deaths annually. AI could be a game-changer. KPMG estimates that intelligent systems can reduce emergency response times by 60 per cent, and improve traffic efficiency by 30 per cent. AI also supports ESG goals— enabling carbon modeling, EV corridor planning, and sustainable design.

But technology alone won’t fix systemic gaps. The promise of AI hinges on institutional readiness – spanning urban planning, enforcement, and civic engagement.

While NITI Aayog has outlined a national AI strategy, and MoRTH has initiated digital reforms, state-level adoption remains fragmented. Some states have set up AI cells within their PWDs; others lack the technical capacity or policy mandate.

KPMG calls for a unified governance framework — one that enables interoperability, safeguards data, and fosters public-private partnerships. Without it, India risks building smart systems on shaky foundations.

As India looks towards 2047, the road ahead is both digital and political. And if AI can help us listen to our roads, perhaps we’ll finally learn to fix them before they speak in potholes.

Published on Sep 4, 2025 at 07:10 AM IST





ST Engineering Spotlights AI Innovation at 5th InnoTech Conference


Singapore, 4 September 2025 – ST Engineering held its annual InnoTech Conference 2025, bringing together industry visionaries, business leaders and government officials to shape the future with AI. Themed “AI.Innovating the Future”, the large-scale conference was graced by Guest-of-Honour Josephine Teo, Minister for Digital Development and Information, and Minister-in-charge of Cybersecurity and Smart Nation Group. 

One of the conference’s highlights was the announcement of a five-year, $250 million AI Research Translation programme for Physical AI, funded and led by ST Engineering in collaboration with academic and research partners. This phased programme will advance robotics, swarm and humanoid solutions to tackle complex operational challenges, with an initial focus on enhancing teamwork with human and unmanned systems. At the conference, ST Engineering offered a first look at Manned-Unmanned Teaming Operating System (MUMTOS), a minimum viable product showcasing these capabilities in action.

MUMTOS acts as the ‘brain’ of human-machine collaboration, coordinating robots, drones, and autonomous vehicles to deliver actionable insights and faster decision-making across operations. In humanitarian missions, for instance, it uses AI to assess life-risk scoring — including oxygen levels, structural stability and the number of people detected — to prioritise rescue efforts. By providing real-time updates, precise location data and uninterrupted communications, MUMTOS helps first responders reach those in need quickly. It can also alert hospitals and coordinate ambulance dispatch, enabling faster medical response and seamless patient handovers.
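As a rough illustration of the life-risk scoring idea, the sketch below combines the three factors the article names (oxygen level, structural stability, number of people detected) into a single urgency score used to rank rescue sites. The weights, scale, and site names are invented for illustration and are not ST Engineering’s actual model.

```python
# Illustrative weighted scoring of the kind MUMTOS is described as using to
# prioritise rescue efforts. Factor names follow the article; the weights
# and the example sites below are hypothetical.
def life_risk_score(oxygen_pct: float, stability: float, people: int) -> float:
    """Higher score = more urgent. stability ranges 0 (collapsing) to 1 (sound)."""
    oxygen_risk = max(0.0, (21.0 - oxygen_pct) / 21.0)   # shortfall vs normal air
    structural_risk = 1.0 - stability
    return round((0.5 * oxygen_risk + 0.3 * structural_risk) * 100 + 2.0 * people, 1)

sites = {
    "collapsed stairwell": life_risk_score(oxygen_pct=15, stability=0.2, people=3),
    "open courtyard": life_risk_score(oxygen_pct=21, stability=0.9, people=1),
}
# Dispatch teams to the highest-scoring site first.
priority = max(sites, key=sites.get)
print(priority, sites[priority])
```

A real system would fuse live sensor and drone feeds and update these scores continuously, but the core idea, reducing several risk factors to a ranked dispatch order, is what the paragraph above describes.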

“For many years, ST Engineering has applied AI across multiple domains, gaining first-hand experience of its potential and understanding its real-world challenges,” said Lee Shiang Long, Group Chief Technology & Digital Officer, ST Engineering. “Building on this foundation, our focus, supported by increased investment, positions us to lead the AI Research Translation programme and turn advanced AI and robotics into impactful solutions across industries.”

Low Jin Phang, President, Digital Systems, ST Engineering, added, “AI enables faster, smarter decision-making by processing vast amounts of data, helping organisations and individuals navigate increasingly complex and dynamic environments. But it is no substitute for humans. We believe humans are needed to interpret insights, make nuanced choices and guide AI towards meaningful outcomes. That is why we are investing in our people, with a clear roadmap to further develop our AI-ready workforce across the Group.”

Today, ST Engineering has a 10,000-strong AI-ready workforce. Over the next few years, it aims to field 5,000 AI engineers: 4,000 engineers upskilled in training AI modules and deploying AI systems, plus 1,000 AI specialists focused on developing AI modules, cybersecurity for AI, and agentic AI systems. These efforts will ensure the Group has the talent to drive advanced AI capabilities and deliver transformative solutions across industries.

Other highlights at the conference included the showcase of AI innovations across unmanned ecosystems, counter-drone operations, connectivity and sensemaking. The InnoTech Conference 2025 drew nearly 2,000 participants and featured keynote speakers such as Mike Walsh, CEO of Tomorrow; Jixun Foo, Senior Managing Partner of Granite Asia; and Geoff Soon, Vice President of Revenue APAC at Mistral AI. The event included a fireside chat on leadership perspectives in “AI.Innovating the Future”, and specialised tracks covering AI in Cloud, Unmanned Ecosystems, Intelligent Connectivity, Learning, and emerging AI technologies.

*****

For media enquiries, please write to us at news@stengg.com.




