

Dresden Research Team Develops AI Model for Simultaneous Detection of Genetic Mutations in Colorectal Cancer



A groundbreaking multicenter study has unveiled a novel approach employing deep learning to decode complex genetic alterations within colorectal cancer, marking a significant advancement in precision oncology. Researchers analyzed nearly 2,000 digitized tissue slides from colorectal cancer patients across seven independent cohorts in Europe and the United States, integrating whole-slide histological images with detailed clinical, demographic, and lifestyle datasets. This extensive dataset enabled the development of a sophisticated “multi-target transformer model” capable of simultaneously predicting a broad spectrum of genetic mutations directly from standard stained tissue sections — an approach that outperforms previous models, which were traditionally limited to single-target mutation prediction.

The innovative model represents a leap forward from prior deep learning frameworks by addressing the co-occurrence of genetic mutations and shared morphological features within tumors. Earlier AI systems largely focused on identifying one mutation at a time, missing the intricate interplay and combined phenotypic manifestations that multiple concurrent genetic aberrations produce. By capturing shared visual patterns that correlate with multiple genetic markers, the model lays the groundwork for more holistic and nuanced interpretations of tumor biology right from histopathological images, offering insights previously accessible primarily through expensive and time-intensive molecular testing.
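To make the multi-target idea concrete, the following is a minimal sketch of how such a model could be wired up, assuming patches of a whole-slide image have already been embedded by a pretrained feature extractor. All names, dimensions, and the ten-target head are illustrative assumptions rather than details of the published architecture; the key point is the shared transformer trunk feeding one logit per mutation, so morphology learned for one target can inform the others.

```python
# Hypothetical sketch of a multi-target slide transformer (not the study's code).
import torch
import torch.nn as nn

class MultiTargetSlideTransformer(nn.Module):
    def __init__(self, feat_dim=768, n_targets=10, n_heads=8, n_layers=2):
        super().__init__()
        # Learnable slide-level token, in the style of a ViT [CLS] token.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, feat_dim))
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Shared trunk, one output logit per genetic target: co-occurring
        # mutations with similar morphology share the learned features.
        self.head = nn.Linear(feat_dim, n_targets)

    def forward(self, patch_feats):                 # (batch, n_patches, feat_dim)
        cls = self.cls_token.expand(patch_feats.size(0), -1, -1)
        x = self.encoder(torch.cat([cls, patch_feats], dim=1))
        return self.head(x[:, 0])                   # one logit per target

# Multi-label training pairs the logits with one binary loss per target.
model = MultiTargetSlideTransformer()
feats = torch.randn(4, 512, 768)                    # 4 slides, 512 patches each
labels = torch.randint(0, 2, (4, 10)).float()       # mutation present / absent
loss = nn.BCEWithLogitsLoss()(model(feats), labels)
```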

Marco Gustav, M.Sc., the study’s lead author and a research associate at the Else Kröner Fresenius Center for Digital Health (EKFZ) at TU Dresden, explains, “Our transformer model detects numerous biomarkers concurrently, including mutations that have not yet attained clinical relevance. This comprehensive identification is made feasible by recognizing shared tissue morphology changes, particularly prevalent in microsatellite instability (MSI)-high tumors—a critical subtype of colorectal cancer.” MSI describes a molecular condition resulting from defective DNA repair systems, leading to unstable repetitive DNA sequences, a hallmark linked with distinct therapeutic responses, especially to immunotherapy.


Microsatellite instability (MSI) is a pivotal factor in colorectal cancer diagnostics and treatment stratification, given its association with better responses to immune checkpoint inhibitors. Detecting MSI status directly from pathology slides using AI could revolutionize clinical workflows, providing rapid, cost-effective preliminary results without waiting for molecular assays. The model’s capability extends beyond MSI detection to encompass key driver mutations, such as those in the BRAF and RNF43 genes, which are essential for prognostication and targeted treatment decisions. The model’s performance matches or even surpasses traditional single-target predictive frameworks, underscoring the power of embracing multi-target learning strategies.
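How such a performance claim is typically quantified: one area under the ROC curve (AUROC) per biomarker on held-out cohorts, so a multi-target model can be compared head-to-head with single-target baselines. The sketch below uses random placeholder data; only the three biomarker names come from the article.

```python
# Per-biomarker AUROC evaluation on placeholder data (illustrative only).
import numpy as np
from sklearn.metrics import roc_auc_score

targets = ["MSI", "BRAF", "RNF43"]
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(200, len(targets)))   # held-out ground truth
y_score = rng.random((200, len(targets)))               # model probabilities

for i, name in enumerate(targets):
    print(f"{name}: AUROC = {roc_auc_score(y_true[:, i], y_score[:, i]):.3f}")
```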

An integral aspect of the study was its collaborative nature, integrating pathology expertise to ensure rigorous assessment of tissue morphology and validate AI outputs. Dr. Nic G. Reitsam, a pathologist affiliated with the Medical Faculty at the University of Augsburg, contributed critical domain knowledge that anchored the model’s development in practical and clinically relevant contexts. This interplay between computational scientists and experienced medical specialists exemplifies a growing trend where digital pathology and machine learning converge to redefine diagnostic paradigms.

Jakob N. Kather, Professor of Clinical Artificial Intelligence at EKFZ and senior oncologist at the National Center for Tumor Diseases and University Hospital Carl Gustav Carus Dresden, highlights the transformative potential: “By accelerating diagnostic routines and unveiling intricate genotype-phenotype relationships, AI-driven methodologies can refine patient selection methods for molecular testing and tailor personalized therapeutic approaches. Our work points to a future where integrated digital tools form a cornerstone of oncology practice.” This vision embodies precision medicine, where treatments and prognoses are finely tuned to a tumor’s unique molecular and phenotypic landscape.

The methodological core—the multi-target transformer architecture—derives from recent advances in natural language processing adapted for medical image analysis. This architecture attentively processes entire histology slides, recognizing contextual morphological cues linked to various mutations without requiring prior knowledge of each mutation’s individual effects. Such holistic image interpretation contrasts starkly with older machine learning methods that isolated features or required manual region-of-interest selection, limiting comprehensiveness and robustness.
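One way to picture this, offered as a hedged sketch rather than the study’s actual module: attention-based pooling assigns each patch a learned weight, and those weights double as a slide-level heatmap, which is what removes the need for manual region-of-interest selection. All dimensions and coordinates below are invented for illustration.

```python
# Generic attention pooling over patch embeddings (illustrative, not the paper's code).
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    def __init__(self, feat_dim=768, hidden=128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(feat_dim, hidden),
                                   nn.Tanh(),
                                   nn.Linear(hidden, 1))

    def forward(self, patch_feats):                      # (n_patches, feat_dim)
        w = torch.softmax(self.score(patch_feats), dim=0)    # per-patch weights
        return (w * patch_feats).sum(dim=0), w.squeeze(-1)   # slide vector, weights

pool = AttentionPooling()
feats = torch.randn(512, 768)                 # one slide's patch embeddings
coords = torch.randint(0, 50_000, (512, 2))   # (x, y) of each patch on the slide
_, weights = pool(feats)
top = weights.topk(5).indices                 # most-attended regions
print("highest-attention patch coordinates:", coords[top].tolist())
```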

Testing the model across multiple independent cohorts in geographically and demographically diverse populations further solidified its generalizability and clinical relevance. The inclusion of centers from the Medical University of Vienna, Fred Hutchinson Cancer Center, Mayo Clinic, University of Augsburg, and NCT Heidelberg ensured that findings are broadly applicable and not confined to a single institutional setting. This wide collaboration also facilitated access to rich datasets harmonizing histopathology and clinical metadata, a prerequisite for reliable deep learning analyses in oncology.

The study’s implications extend beyond colorectal cancer. By successfully decoding complex genotype-phenotype correlations in this common cancer type, the research paves the way for applying similar deep learning models to other malignancies where genetic heterogeneity and histological variability complicate diagnosis and treatment. Future iterations of the model could incorporate a broader array of biomarkers and integrate multi-modal data such as radiological images, further enriching predictive accuracy.

Clinical integration of these AI tools promises to trim turnaround times for pathology reports, reduce costs associated with comprehensive molecular profiling, and potentially improve patient outcomes through earlier and more tailored interventions. However, prospective clinical trials remain vital to establishing standardized protocols and regulatory approvals for routine practice. Ethical considerations, including data privacy, algorithm transparency, and equitable access, also warrant concerted attention as AI technologies become embedded in healthcare.

Furthermore, the EKFZ for Digital Health itself represents an innovative institutional model, funded generously by the Else Kröner Fresenius Foundation to foster cross-disciplinary digital health research. Since its establishment in 2019 at TU Dresden and the University Hospital Carl Gustav Carus Dresden, EKFZ has cultivated an environment where computational innovation and clinical expertise synergize to address pressing medical challenges, as exemplified by this landmark colorectal cancer study.

Altogether, this study exemplifies how cutting-edge artificial intelligence can decode the complex molecular tapestry of cancer from routine clinical materials, shifting diagnostic paradigms and opening avenues for personalized medicine. As researchers build upon this foundation, deep learning’s role in oncology is poised to grow, yielding profound impacts on patient care and outcomes.

Subject of Research: Deep learning-based prediction of genotype–phenotype correlations in colorectal cancer using histopathological images

Article Title: Assessing genotype–phenotype correlations in colorectal cancer with deep learning: a multicentre cohort study

News Publication Date: 18-Aug-2025

Image Credits: Anja Stübner / EKFZ

Keywords: Cancer, Colorectal cancer, Diseases and disorders, Computer science, Artificial intelligence

Tags: advanced AI frameworks for tumor biology, AI in colorectal cancer research, co-occurrence of genetic mutations in cancer, deep learning in precision oncology, digitized tissue slides for cancer diagnosis, histological analysis of tissue samples, innovations in molecular testing for cancer detection, integrating clinical datasets with histopathology, multi-target transformer model for genetic mutations, novel approaches in cancer mutation prediction, personalized medicine and cancer treatment, simultaneous detection of cancer markers





Albania appoints world’s first AI-made minister – POLITICO



The pro-Brexit politician is set to visit Tirana to settle a debate with the country’s prime minister over how many Albanians are in U.K. prisons, but what else could be on his itinerary while he’s there?









AI & Elections – McCourt School – Public Policy



How can artificial intelligence transform election administration while preserving public trust? This spring, the McCourt School of Public Policy partnered with The Elections Group and Discourse Labs to tackle this critical question at a groundbreaking workshop.

The day-long convening brought together election officials, researchers, technology experts and civil society leaders to chart a responsible path forward for AI in election administration. Participants focused particularly on how AI could revolutionize voter-centric communications—from streamlining information delivery to enhancing accessibility.

The discussions revealed both promising opportunities for public service innovation and legitimate concerns about maintaining institutional trust in our democratic processes. Workshop participants developed a comprehensive set of findings and actionable recommendations that could shape the future of election technology.

Expert Insights from Georgetown’s Leading Researchers

To unpack the workshop’s key insights, we spoke with two McCourt School experts who are at the forefront of this intersection between technology and democracy:

Ioannis Ziogas is an Assistant Teaching Professor at the McCourt School of Public Policy, an Assistant Research Professor at the Massive Data Institute, and Associate Director of the Data Science for Public Policy program. His work bridges the gap between cutting-edge data science and real-world policy challenges.


Lia Merivaki is an Associate Teaching Professor at the McCourt School of Public Policy and Associate Research Professor at the Massive Data Institute, where she focuses on the practical applications of technology in democratic governance, particularly election integrity and voter confidence.

Together, they address five essential questions about AI’s role in election administration—and what it means for voters, officials and democracy itself.

Q1

How is AI currently being used in election administration, and are there particular jurisdictions that are leading in adoption?

Ioannis: When we talk about AI in elections, we need to clarify that it is not a single technology but a family of approaches, from predictive analytics to natural language processing to generative AI. In practice, election officials are already using generative AI routinely for communication purposes such as drafting social media posts and shaping public-facing messages. These efforts aim to increase trust in the election process and make information more accessible. Some offices have even experimented with using generative AI to design infographics, though this can be tricky due to hallucinations or inaccuracies. More recently, local election officials are exploring AI to streamline staff training, operations, or to summarize complex legal documents.

Our work focuses on alerting election officials to the limitations of generative AI, such as model drift and bias propagation. A key distinction we emphasize in our research is between AI as a backend administrative tool (which voters may never see) and AI as a direct interface with the public (where voter trust and transparency become central). We believe that generative AI tools can be used in both contexts, provided that there is awareness of the challenges and limitations.

Lia: Election officials have been familiar with AI for quite some time, primarily to understand how to mitigate AI-generated misinformation. A leader in this space has been the Arizona Secretary of State Adrian Fontes, who conducted the first-of-its-kind deepfake detection tabletop exercise in preparation for the 2024 election cycle.

We’ve had conversations with election officials in California, New Mexico, North Carolina, Florida, Maryland and others whom we call early adopters, with many more being ‘AI curious.’

Q2

Security is always a big concern when it comes to the use of AI. Talk about what risks are introduced by bringing AI into election administration, and conversely, how AI can help detect and prevent any type of election interference and voter fraud.

Ioannis: From my perspective, the core security challenge is not only technical but also about privacy and trust. AI systems, by design, rely on large volumes of data. In election contexts, this often includes sensitive voter information. Even when anonymized, the use of personal data raises concerns about surveillance, profiling, or accidental disclosure. Another risk relates to delegating sensitive tasks to AI, which can render election systems vulnerable to adversarial attacks or hidden biases baked into the models. 

At the same time, AI can support security: machine learning can detect coordinated online influence campaigns, identify anomalous traffic to election websites, or flag irregularities that warrant further human review. In short, I view AI as both a potential shield and a potential vulnerability, which is why careful governance and transparency are essential. That is why I believe it is critical to pair AI adoption with clear safeguards, training and guidance, so that officials can use these tools confidently and responsibly.
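As a loose illustration of the “flag irregularities for human review” pattern Ioannis describes, an unsupervised anomaly detector can mark unusual hours of website traffic without making any automated decision. The features and data below are entirely invented; this is a sketch of the general technique, not any election office’s system.

```python
# Unsupervised flagging of anomalous traffic hours for human review (illustrative).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Per-hour features: request count, unique IPs, error rate.
normal = rng.normal([1000, 400, 0.01], [100, 40, 0.005], size=(500, 3))
burst = rng.normal([9000, 50, 0.20], [500, 10, 0.02], size=(5, 3))  # bot-like spike
traffic = np.vstack([normal, burst])

detector = IsolationForest(contamination=0.01, random_state=0).fit(traffic)
flags = detector.predict(traffic)          # -1 marks an anomalous hour
print(f"{(flags == -1).sum()} hours flagged for human review")
```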

Lia: A potential risk we are trying to mitigate is the impact on voter trust of relying on AI for important administrative tasks. For instance, voters who call or email their election official and expect to talk with them, but instead interact with a chatbot, may feel disappointed and, in turn, distrust both the information and the election official. There is also some evidence that voters do not trust information generated with AI, particularly when its use is disclosed.

As for detecting and preventing any irregularities, over-reliance on AI can be problematic and can lead to disenfranchisement. To illustrate, AI can help identify individuals in voter records whose information is missing, which would seemingly make the process of maintaining accurate lists more efficient. The election office can send a letter to these individuals to verify they are citizens and ask for their information to be updated. This seems like a sound practice; however, it violates federal law, and it risks making eligible voters feel intimidated or having their eligibility challenged by bad actors. The reality is that maintaining voter records is a highly complex process, and data entry errors are very common. Deploying AI models to substitute for existing practices in election administration such as voter list maintenance – with the goal of detecting whether non-citizens register or whether dead voters remain on voter rolls – can harm voters and undermine trust.

Q3

What are the biggest barriers to AI adoption in election administration – technical, financial, or political?

Lia: There are significant skills and knowledge gaps among election officials when it comes to utilizing technology generally, and we see such gaps with AI adoption, which is not surprising. Aside from technical barriers, election offices are under-resourced, especially at the local jurisdiction level. We observe that policies around AI adoption in public administration generally, and election administration specifically, are sparse at the moment.

While the election community invested a lot of resources to safeguard the election infrastructure against the threats of AI, we are not seeing – yet – a proportional effort to educate and prepare election officials on how to use AI to improve elections. To better understand the landscape of AI adoption and how to best support the election community, we hosted an exploratory workshop at McCourt in April 2025, in collaboration with The Elections Group and Discourse Labs. In this workshop, we brought together election officials, industry, civil society leaders and other practitioners to discuss how AI tools are used by election officials, what technical barriers exist and how to move forward with designing policies on ethical and responsible use of AI in election administration. Through this workshop, we identified a list of priorities which require close collaboration among the election community, academia, civil society and industry, to ensure that the adoption of AI is done responsibly, ethically and efficiently, without negatively affecting the voter experience.

Ioannis: I would highlight that barriers are not just about resources but also about institutional design. Election officials often work in environments of high political scrutiny but low budgets and limited technical staff. Introducing AI tools into that context requires financial investment and clear guidance on how to evaluate these systems: what counts as success, how to measure error rates and how to align tools with federal and state regulations. Beyond that, there is a cultural barrier. Many election officials are understandably cautious; they’ve spent the past decade defending democracy against disinformation and cyber threats, so embracing new technologies requires trust and confidence that AI will not introduce new risks. That is why partnerships with universities and nonpartisan civil-society groups are critical: they provide a space to pilot ideas, build capacity, and translate research into practice.

Our two priorities are to help narrow the skills gap and build frameworks for ethical and responsible AI use. At McCourt, we’re collaborating with the Arizona State University’s Mechanics of Democracy Lab, which is developing training materials and custom-AI products for election officials. Drawing on our background in AI and elections, we aim to provide election officials with a practical resource that maps out both the risks and the potential of these tools, and that helps them identify ideal use cases where AI can enhance efficiency without compromising trust or voter experience.

Q4

Looking ahead, what emerging AI technologies could transform election administration in the next 5-10 years?

Lia: It’s hard to predict really. At the moment we are seeing high interest from vendors and election officials to integrate AI into elections. Concerns about security and privacy will undoubtedly shape the discussion about what AI can do for the election infrastructure. It could be possible that we see a liberal approach to using AI technologies to communicate with voters, produce training materials, translate election materials into non-English languages, among others. That said, elections are run by humans, and maintaining public trust relies on having “humans in the – elections – loop.” This, coupled with ongoing debates about how AI should or should not be regulated, may result in more guardrails and restrictions over time.

Ioannis: One promising direction is multimodal AI: systems that process text, audio and images together. For election officials, this could mean automatically generating plain-language guides, sign-language translations, or sample audio ballots to improve accessibility. But these same tools can amplify risks if their limitations are not understood. For that reason, any adoption will need to be coupled with auditing, transparency and education for election staff, so they view AI as a supportive tool rather than a replacement platform or a black box.

Q5

What guidelines or regulatory frameworks are needed to govern AI use in elections?

Ioannis: We urgently need a baseline framework that establishes what is permissible, what requires disclosure, and what is off-limits. Today, election officials are experimenting with AI in a largely unregulated space, and they are eager for guidance. A responsible framework should include at least three elements: a) transparency: voters should know when AI-generated materials are used in communications; b) accountability: human oversight should retain the final authority, with AI serving only as a support; and c) auditing: independent experts must be able to test and evaluate these tools for accuracy, bias and security.





AI Transformation in NHS Faces Key Challenges: Study



Implementing artificial intelligence (AI) in NHS hospitals is far harder than initially anticipated, with complications around governance, harmonisation with ageing IT systems, selecting the right AI tools, and staff training, finds a major new UK study led by UCL researchers.

The authors of the study, published in The Lancet eClinicalMedicine, say the findings should provide timely and useful learning for the UK Government, whose recent 10-year NHS plan identifies digital transformation, including AI, as a key platform to improving the service and patient experience.

In 2023, NHS England launched a programme to introduce AI to help diagnose chest conditions, including lung cancer, across 66 NHS hospital trusts in England, backed by £21 million in funding. The trusts are grouped into 12 imaging diagnostic networks: these hospital networks mean more patients have access to specialist opinions. Key functions of these AI tools included prioritising critical cases for specialist review and supporting specialists’ decisions by highlighting abnormalities on scans.

Funded by the National Institute for Health and Care Research (NIHR), this research was conducted by a team from UCL, the Nuffield Trust, and the University of Cambridge, analysing how procurement and early deployment of the AI tools went. It is one of the first studies to analyse real-world implementation of AI in healthcare.

Evidence from previous studies, mostly laboratory-based, suggested that AI might benefit diagnostic services by supporting decisions, improving detection accuracy, reducing errors and easing workforce burdens.

In this UCL-led study, the researchers reviewed how the new diagnostic tools were procured and set up through interviews with hospital staff and AI suppliers, identifying any pitfalls but also any factors that helped smooth the process.

They found that setting up the AI tools took longer than anticipated by the programme’s leadership. Contracting took between four and 10 months longer than anticipated and by June 2025, 18 months after contracting was meant to be completed, a third (23 out of 66) of the hospital trusts were not yet using the tools in clinical practice.

Key challenges included engaging clinical staff, who already had high workloads, in the project; embedding the new technology in ageing and varied NHS IT systems across dozens of hospitals; and a general lack of understanding, and scepticism, among staff about using AI in healthcare.

The study also identified important factors which helped embed AI including national programme leadership and local imaging networks sharing resources and expertise, high levels of commitment from hospital staff leading implementation, and dedicated project management.

The researchers concluded that while “AI tools may offer valuable support for diagnostic services, they may not address current healthcare service pressures as straightforwardly as policymakers may hope” and are recommending that NHS staff are trained in how AI can be used effectively and safely and that dedicated project management is used to implement schemes like this in the future.

First author Dr Angus Ramsay (UCL Department of Behavioural Science and Health) said: “In July ministers unveiled the Government’s 10-year plan for the NHS, of which a digital transformation is a key platform.

“Our study provides important lessons that should help strengthen future approaches to implementing AI in the NHS.

“We found it took longer to introduce the new AI tools in this programme than those leading the programme had expected.

“A key problem was that clinical staff were already very busy – finding time to go through the selection process was a challenge, as was supporting integration of AI with local IT systems and obtaining local governance approvals. Services that used dedicated project managers found their support very helpful in implementing changes, but only some services were able to do this.

“Also, a common issue was the novelty of AI, suggesting a need for more guidance and education on AI and its implementation.

“AI tools can offer valuable support for diagnostic services, but they may not address current healthcare service pressures as simply as policymakers may hope.”

The researchers conducted their evaluation between March and September 2024, studying 10 of the participating networks and focusing in depth on six NHS trusts. They interviewed network teams, trust staff and AI suppliers, observed planning, governance and training and analysed relevant documents.

Some of the imaging networks and many of the hospital trusts within them were new to procuring and working with AI.

The problems involved in setting up the new tools varied – for example, in some cases those procuring the tools were overwhelmed by a huge amount of very technical information, increasing the likelihood of key details being missed. Consideration should be given to creating a national approved shortlist of potential suppliers to facilitate procurement at local level, the researchers said.

Another problem was an initial lack of enthusiasm among some NHS staff for the new technology in this early phase, with some more senior clinical staff raising concerns about the potential impact of AI making decisions without clinical input and about where accountability lay in the event a condition was missed. The researchers found the training offered to staff did not address these issues sufficiently across the wider workforce – hence their call for early and ongoing training on future projects.

In contrast, however, the study team found the process of procurement was supported by advice from the national team and imaging networks learning from each other. The researchers also observed high levels of commitment and collaboration between local hospital teams (including clinicians and IT) working with AI supplier teams to progress implementation within hospitals.

Senior author Professor Naomi Fulop (UCL Department of Behavioural Science and Health) said: “In this project, each hospital selected AI tools for different reasons, such as focusing on X-ray or CT scanning, and purposes, such as to prioritise urgent cases for review or to identify potential symptoms.

“The NHS is made up of hundreds of organisations with different clinical requirements and different IT systems and introducing any diagnostic tools that suit multiple hospitals is highly complex. These findings indicate AI might not be the silver bullet some have hoped for but the lessons from this study will help the NHS implement AI tools more effectively.”

Limitations

While the study has added to the very limited body of evidence on the implementation and use of AI in real-world settings, it focused on procurement and early deployment. The researchers are now studying the use of AI tools following early deployment when they have had a chance to become more embedded. Further, the researchers did not interview patients and carers and are therefore now conducting such interviews to address important gaps in knowledge about patient experiences and perspectives, as well as considerations of equity.



