
Implementing Artificial Intelligence in Critical Care Medicine: a consensus of 22

Artificial intelligence (AI) is rapidly entering critical care, where it holds the potential to improve diagnostic accuracy and prognostication, streamline intensive care unit (ICU) workflows, and enable personalized care [1, 2]. Without a structured approach to implementation, evaluation, and control, this transformation may be hindered, or may even lead to patient harm and other unintended consequences.

Despite the need to support overwhelmed ICUs facing staff shortages, increasing case complexity, and rising costs, most AI tools remain poorly validated and untested in real-world settings [3, 4, 5].

To address this gap, we issue a call to action to the critical care community: the integration of AI into the ICU must follow a pragmatic, clinically informed, and risk-aware framework [6,7,8]. The product of a multidisciplinary consensus process among a panel of intensivists, AI researchers, data scientists, and other experts, this paper offers concrete recommendations to guide the safe, effective, and meaningful adoption of AI in critical care.

Methods

The consensus presented in this manuscript emerged through expert discussions, rather than formal grading or voting on evidence, in recognition that AI in critical care is a rapidly evolving field where many critical questions remain unanswered. Participants were selected by the consensus chairs (MC, AB, FT, and JLV) based on their recognized contributions to AI in critical care, to ensure representation from both clinical end-users and AI developers. Discussions were iterative, with deliberate engagement across domains; recommendations were refined through critical examination of real-world challenges, current research, and regulatory landscapes.

While not purely based on traditional evidence grading, this manuscript reflects a rigorous, expert-driven synthesis of key barriers and opportunities for AI in critical care, aiming to bridge existing knowledge gaps and provide actionable guidance in a rapidly evolving field. To guide physicians in this complex and rapidly evolving arena [9], some of the current taxonomy and classifications are reported in Fig. 1.

Fig. 1

Taxonomy of AI in critical care

Main barriers and challenges for AI integration in critical care

The main barriers to AI implementation in critical care identified by the expert consensus are presented in this section. These unresolved and evolving challenges have prompted us to develop a series of recommendations for physicians and other healthcare workers, patients, and societal stakeholders, emphasizing the principles we believe should guide the advancement of AI in healthcare. Challenges and principles are divided into four main areas: 1) human-centric AI; 2) clinician training on AI use; 3) standardization of data models and networks; and 4) AI governance. These are summarized in Fig. 2 and discussed in more detail in the next paragraphs.

Fig. 2

Recommendations, organized by: development of standards for networking, data sharing, and research; ethical challenges; regulations and societal challenges; and clinical practice

The development and maintenance of AI applications in medicine require enormous computational power, infrastructure, funding, and technical expertise. Consequently, AI development is led by major technology companies whose goals may not always align with those of patients or healthcare systems [10, 11]. The rapid diffusion of new AI models contrasts sharply with the evidence-based culture of medicine. This raises concerns about the deployment of insufficiently validated clinical models [12].

Moreover, many models are developed using datasets that underrepresent vulnerable populations, leading to algorithmic bias [13]. AI models may lack both temporal validity (when applied to data from a different period) and geographic validity (when applied across different institutions or regions). Variability in disease patterns over time and across locations, including differences in demographics, healthcare infrastructure, and the design of Electronic Health Records (EHRs), further complicates generalizability.

Finally, the use of AI raises ethical concerns, including trust in algorithmic recommendations and the risk of weakening the human connection at the core of medical practice: the millennia-old relationship between physicians and patients [14].

Recommendations

Here we report our recommendations, divided into four domains. Figure 3 summarizes five representative AI use cases in critical care, ranging from waveform analysis to personalized clinician training, mapped across these four domains.

Fig. 3

Summary of five representative AI use cases in critical care, ranging from waveform analysis to personalized clinician training, mapped across these four domains

Strive for human-centric and ethical AI utilization in healthcare

Alongside its significant potential benefits, the risk of AI misuse must not be underestimated. AI algorithms may be harmful when prematurely deployed without adequate control [9, 15,16,17]. In addition to the regulatory frameworks that have been established to maintain control (presented in the section "Governance and regulation for AI in critical care") [18, 19], we advocate for clinicians to be involved in this process and to provide guidance.

Develop human-centric AI in healthcare

AI development in medicine and healthcare should maintain a human-centric perspective, promote empathetic care, and increase the time available for patient-physician communication and interaction. For example, AI can replace humans in time-consuming or bureaucratic tasks such as documentation and transfers of care [20,21,22]. It could draft clinical notes, ensuring critical information is accurately captured in health records while reducing administrative burdens [23].

Establish a social contract for AI use in healthcare

There is a significant concern that AI may exacerbate societal healthcare disparities [24]. When considering AI’s potential influence on physicians’ choices and behaviour, the possibility of introducing or reinforcing biases should be examined rigorously to avoid perpetuating existing health inequities and unfair data-driven associations [24]. It is thus vital to involve patients and societal representatives in discussions regarding the vision of the next healthcare era, its operations, goals, and limits of action [25]. The desirable aim would be to establish a social contract for AI in healthcare, ensuring its accountability and transparency. Such a contract should define clear roles and responsibilities for all stakeholders: clinicians, patients, developers, regulators, and administrators. This includes clinicians being equipped to critically evaluate AI tools, developers ensuring transparency, safety, and clinical relevance, and regulators enforcing standards for performance, equity, and post-deployment monitoring. We advocate for hospitals to establish formal oversight mechanisms, such as dedicated AI committees, to ensure the safe implementation of AI systems. Such structures would help formalize shared accountability and ensure that AI deployment remains aligned with the core values of fairness, safety, and human-centred care.

Prioritize human oversight and ethical governance in clinical AI

Since the Hippocratic oath, patient care has been based on the doctor-patient relationship, in which clinicians bear the ethical responsibility to maximize patient benefit while minimizing harm. As AI technologies are increasingly integrated into healthcare, clinicians' responsibility must also extend to overseeing their development and application. In the ICU, where treatment decisions balance individual patient preferences against societal considerations, healthcare professionals must lead this transition [26]. As intensivists, we should maintain governance of this process, ensuring that ethical principles and scientific rigor guide the development of frameworks to measure fairness, assess bias, and establish acceptable thresholds for AI uncertainty [6,7,8].

While AI models are rapidly emerging, most are being developed outside the medical community. To better align AI development with clinical ethics, we propose the creation of multidisciplinary boards comprising clinicians, patients, ethicists, and technology experts, who should be responsible for systematically reviewing algorithmic behaviour in critical care, assessing risks of bias, and promoting transparency in decision-making processes. In this context, AI development offers an opportunity to rethink and advance ethical principles in patient care.

Recommendations for clinician training on AI use

Develop and assess the Human-AI interface

Despite some promising results [27, 28], the clinical application of AI remains limited [29,30,31]. The first step toward integration is to understand how clinicians interact with AI and to design systems that complement, rather than disrupt, clinical reasoning [32]. This calls for specific research on the human-AI interface, where a key area of focus is identifying the most effective cognitive interface between clinicians and AI systems. On one side, physicians may place excessive trust in AI model outputs, possibly overlooking crucial information. For example, in sepsis detection an AI algorithm might miss an atypical presentation or a tropical infectious disease because of limitations in its training data; if clinicians overly trust the algorithm’s negative output, they may delay initiating a necessary antibiotic. On the other, the behaviour of clinicians can influence AI responses in unintended ways. To better reflect this interaction, the concept of synergy between humans and AI has been proposed in recent years, emphasizing that AI supports rather than replaces human clinicians [33]. This collaboration has been described in two forms: human-AI augmentation (where the human-AI interface enhances clinical performance compared with the human alone) and human-AI synergy (where the combined performance exceeds that of both the human and the AI individually) [34]. To support the introduction of AI into intensive care practice, we propose starting with the concept of human-AI augmentation, which is more inclusive and better established in the medical literature [34]. A straightforward example of such augmentation is the development of interpretable, real-time dashboards that synthesize complex multidimensional data into visual formats, thereby enhancing clinicians’ situational awareness without overwhelming them.
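To make the distinction concrete, here is a minimal sketch (in Python, with hypothetical performance scores; nothing below comes from the consensus itself) of how the two definitions above could be operationalized when comparing human, AI, and combined performance on the same cases:

```python
def classify_collaboration(human_score: float, ai_score: float,
                           combined_score: float) -> str:
    """Label a human-AI collaboration using the definitions in the text.

    Scores are any shared performance metric (e.g., AUROC) measured on
    the same cases for the human alone, the AI alone, and the pair.
    """
    if combined_score > max(human_score, ai_score):
        return "human-AI synergy"       # pair beats both individuals
    if combined_score > human_score:
        return "human-AI augmentation"  # pair beats the human alone
    return "no benefit over the human alone"

# Hypothetical example: AUROC of 0.78 (human), 0.80 (AI), 0.85 (together)
print(classify_collaboration(0.78, 0.80, 0.85))  # -> human-AI synergy
```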

Improve disease characterization with AI

Traditional procedures for classifying patients and labelling diseases and syndromes based on a few simple criteria are the basis of medical education, but they may fail to grasp the complexity of underlying pathology and lead to suboptimal care. In critical care, where patient conditions are complex and rapidly evolving, AI-driven phenotyping plays a crucial role by leveraging vast amounts of genetic, radiological, biomarker, and physiological data. AI-based phenotyping methods can be broadly categorized into two approaches.

One approach involves unsupervised clustering, in which patients are grouped based on shared features or patterns without prior labelling. Seymour et al. demonstrated how machine learning can stratify septic patients into clinically meaningful subgroups using high-dimensional data, which can subsequently inform risk assessment and prognosis [35]. Another promising possibility is the use of supervised or semi-supervised clustering techniques, which incorporate known outcomes or partial labelling to enhance the phenotyping of patient subgroups [36].

The second approach falls under the causal inference framework, where phenotyping is conducted with the specific objective of identifying subgroups that benefit from a particular intervention because of a causal association. This method aims to enhance personalized treatment by identifying how treatment effects vary among groups, ensuring that therapies are targeted toward the patients most likely to benefit. For example, machine learning has been used to stratify critically ill patients based on their response to specific therapeutic interventions, potentially improving clinical outcomes [37]. In a large ICU cohort of patients with traumatic brain injury (TBI), unsupervised clustering identified six distinct subgroups based on combined neurological and metabolic profiles [38].
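As an illustration of these two approaches, here is a minimal, hedged sketch in Python using scikit-learn: synthetic patient features are grouped by k-means, and a naive per-cluster treatment-effect contrast is then computed in the spirit of the causal-inference framing (all variable names and data are hypothetical, not drawn from the studies cited above):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical stand-in for ICU features (e.g., heart rate, lactate,
# creatinine, GCS), treatment assignment, and a binary outcome
X = rng.normal(size=(500, 4))
treated = rng.integers(0, 2, size=500)    # 1 = received intervention
mortality = rng.integers(0, 2, size=500)  # 1 = died (toy outcome)

# Unsupervised phenotyping: cluster patients on standardized features
Z = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)

# Naive per-cluster treatment-effect contrast (difference in outcome rates;
# a real analysis would need confounding adjustment, e.g., propensity scores)
for k in range(4):
    in_k = labels == k
    t = mortality[in_k & (treated == 1)]
    c = mortality[in_k & (treated == 0)]
    if len(t) and len(c):
        print(f"cluster {k}: effect = {t.mean() - c.mean():+.2f} "
              f"(n={in_k.sum()})")
```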

These approaches hold significant potential for advancing acute and critical care by ensuring that AI-driven phenotyping is not only descriptive, but also actionable. Before integrating these methodologies into clinical workflows, we need to ensure that clinicians can accept the paradigm shift from broad syndromes to specific sub-phenotypes, ultimately supporting the transition toward personalized medicine [35, 39,40,41].

Ensure AI training for responsible use of AI in healthcare

In addition to clinical practice, undergraduate medical education is directly affected by the AI transformation [42], as future clinicians need to be equipped to understand and use these technologies. Providing training and knowledge from the start of their education requires that all clinicians understand the fundamental concepts, methods, and limitations of data science and AI, which should be included in the core curriculum of medical degrees. This will allow clinicians to use and assess AI critically, identify biases and limitations, and make well-informed decisions; it may also ultimately help address the medical profession’s identity crisis and open new careers in data analysis and AI research [42].

In addition to undergraduate education, it is essential to train experienced physicians, nurses, and other allied health professionals [43]. The effects of AI on academic education are profound and beyond the scope of the current manuscript. One promising example is the use of AI to support personalized training for clinicians, both in clinical education and in understanding AI-related concepts [44]. Tools such as chatbots, adaptive simulation platforms, and intelligent tutoring systems can adapt content to students’ learning needs in real time, offering a tailored education.

Accepting uncertainty in medical decision-making

Uncertainty is an intrinsic part of clinical decision-making, which clinicians are familiar with and trained to navigate through experience and intuition. However, AI models introduce a new type of uncertainty, which can undermine clinicians’ trust, especially when models function as opaque “black boxes” [45,46,47]. This increases the cognitive distance between model output and clinical judgment, as clinicians do not know how to interpret it. To bridge this gap, explainable AI (XAI) has emerged, providing tools to make model predictions more interpretable and, ideally, more trustworthy, thereby reducing perceived uncertainty [48].

Yet we argue that interpretability alone is not enough [48]. To accelerate AI adoption and trust, we advocate that physicians be trained to interpret outputs under uncertainty, using frameworks such as plausibility, consistency with known biology, and alignment with consolidated clinical reasoning, rather than expecting full explainability [49].
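One way to surface such uncertainty to clinicians, shown here as a minimal sketch rather than a recommended product design, is to report the disagreement among members of a model ensemble alongside the prediction (Python with scikit-learn; the data, variable names, and thresholds are hypothetical):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for an ICU risk-prediction task
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Per-tree predicted risks for one new patient
x_new = X[:1]
per_tree = np.array([tree.predict_proba(x_new)[0, 1]
                     for tree in model.estimators_])

risk = per_tree.mean()    # point prediction
spread = per_tree.std()   # ensemble disagreement ~ model uncertainty

# A hypothetical display rule: flag predictions the model is unsure about
flag = "review manually" if spread > 0.15 else "confident"
print(f"predicted risk {risk:.2f} +/- {spread:.2f} -> {flag}")
```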

Standardize and share data while maintaining patient privacy

In this section we present key infrastructure requirements for AI deployment in critical care [50]. Their costs should be seen as an investment in patient outcomes, process efficiency, and reduced operational costs. Retaining data ownership within healthcare institutions, and recognizing patients and providers as stakeholders, allows them to benefit from the value their data creates. Conversely, without safeguards, clinical data risk becoming proprietary products of private companies, resold to their source institutions rather than serving as a resource for their own development, for instance through the development and licensing of synthetic datasets [51].

Standardize data to promote reproducible AI models

Standardized data collection is essential for creating generalizable and reproducible AI models and fostering interoperability between different centres and systems. A key challenge in acute and critical care is the variability in data sources, including EHRs, multi-omics data (genomics, transcriptomics, proteomics, and metabolomics), medical imaging (radiology, pathology, and ultrasound), and unstructured free-text data from clinical notes and reports. These diverse data modalities are crucial for developing AI-driven decision-support tools, yet their integration is complex due to differences in structure, format, and quality across healthcare institutions.

For instance, the detection of organ dysfunction in the ICU, hemodynamic data collected by different monitoring devices, respiratory parameters from ventilators made by different manufacturers, and variations in local policies and regulations all affect EHR data quality, structure, and consistency across centres and clinical trials.

The Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM), which embeds standard vocabularies such as LOINC and SNOMED CT, continues to gain popularity as a framework for structuring healthcare data, enabling cross-centre data exchange and model interoperability [52,53,54]. Similarly, Fast Healthcare Interoperability Resources (FHIR) offers a flexible, standardized information exchange solution, facilitating real-time accessibility of structured data [55].
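As a concrete illustration of what such standards look like in practice, below is a minimal FHIR R4 Observation for a single heart-rate measurement, expressed as a Python dictionary (the patient reference, timestamp, and value are invented for illustration; LOINC 8867-4 is the standard code for heart rate):

```python
# A minimal FHIR R4 Observation resource for one vital sign,
# using a LOINC code so any FHIR-capable system can interpret it.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{
        "coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "vital-signs",
        }]
    }],
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "8867-4",            # LOINC: heart rate
            "display": "Heart rate",
        }]
    },
    "subject": {"reference": "Patient/example-icu-patient"},  # hypothetical
    "effectiveDateTime": "2025-01-01T12:00:00Z",
    "valueQuantity": {
        "value": 88,
        "unit": "beats/minute",
        "system": "http://unitsofmeasure.org",  # UCUM units
        "code": "/min",
    },
}
```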

Hospitals, device manufacturers, and EHR vendors must contribute to the adoption of recognized standards to ensure that interoperability does not become a barrier to AI implementation.

Beyond structured data, AI has the potential to enhance data standardization by automatically tagging and labelling data sources, tracking provenance, and harmonizing data formats across institutions. Leveraging AI for these tasks can help mitigate data inconsistencies, thereby improving the reliability and scalability of AI-driven clinical applications.

Prioritize data safety, security, and patient privacy

Data safety, security, and privacy are all needed for the application of AI in critical care. Data safety refers to the protection of data from accidental loss or system failure, while data security refers to defensive strategies against malicious attacks, including hacking, ransomware, and unauthorized data access [56]. In modern hospitals, data safety and security will soon become as essential as wall oxygen in operating rooms [57, 58]. A corrupted or hacked clinical dataset during hospital care could be as catastrophic as losing electricity, medications, or oxygen. Finally, data privacy focuses on safeguarding personally identifiable information, ensuring that patient data are stored and accessed in compliance with legal standards [56].

Implementing AI that prioritizes these three pillars will be critical for a resilient digital infrastructure in healthcare. A possible option for the medical community is to support open-source models, to increase transparency, reduce dependence on proprietary algorithms, and possibly enable better control of safety and privacy issues within distributed systems [59]. However, sustaining open-source innovation requires appropriate incentives, such as public or dedicated research funding, academic recognition, and regulatory support, to ensure high-quality development and long-term viability [60]. Without such strategies, the role of open-source models will shrink, with the risk of ceding a larger share of control over clinical decision-making to commercial algorithms.

Develop rigorous AI research methodology

We believe AI research should be held to the same methodological standards as other areas of medical research. Achieving this will require greater accountability from peer reviewers and scientific journals to ensure rigor, transparency, and clinical relevance.

Furthermore, advancing AI in ICU research requires a transformation of the underlying infrastructure, particularly for high-frequency data collection and the integration of complex, multimodal patient information, as detailed in the sections below. In this context, the gap in data resolution between highly monitored environments such as ICUs and standard wards becomes apparent. The ICU provides a high level of data granularity thanks to high-resolution monitoring systems capable of capturing rapid changes in a patient’s physiological status [61]. Consequently, the integration of this new source of high-volume, rapidly changing physiological data into medical research and clinical practice could give rise to “physiolomics”, a proposed term for this domain, which could become as crucial as genomics, proteomics, and other “-omics” fields in advancing personalized medicine.

AI will change how clinical research is performed, improving both evidence-based medicine and the conduct of randomized clinical trials (RCTs) [62]. Instead of using large, heterogeneous trial populations, AI might help researchers design and enrol tailored patient subgroups for precise RCTs [63, 64]. These precision methods could address the problem of negative critical care trials caused by population heterogeneity and significant confounding effects. AI could thus improve RCTs by allowing the enrolment of very fine-grained subgroups of patients, with hundreds of specific inclusion criteria applied across dozens of centres, a task impossible for humans to perform in real-time practice, thereby improving trial efficiency in enrolling enriched populations [65,66,67]. In the TBI example cited above, conducting an RCT on the six AI-identified endotypes, such as patients with a moderate GCS but severe metabolic derangement, would be unfeasible without AI stratification [38]. This underscores AI’s potential to enable precision trial designs in critical care.
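A minimal sketch of the kind of automated eligibility screening described above (Python; the criteria, thresholds, and record fields are all hypothetical, chosen only to illustrate evaluating many inclusion criteria per patient in real time):

```python
from typing import Callable

# Each criterion is a named predicate over a patient record; real trials
# might chain hundreds of these across dozens of centres.
Criterion = Callable[[dict], bool]

criteria: dict[str, Criterion] = {
    "adult": lambda p: p["age"] >= 18,
    "moderate_gcs": lambda p: 9 <= p["gcs"] <= 12,
    "elevated_lactate": lambda p: p["lactate_mmol_l"] > 2.0,
    "no_prior_enrolment": lambda p: not p["in_other_trial"],
}

def screen(patient: dict) -> tuple[bool, list[str]]:
    """Return eligibility plus the list of failed criteria."""
    failed = [name for name, check in criteria.items() if not check(patient)]
    return (not failed, failed)

patient = {"age": 54, "gcs": 10, "lactate_mmol_l": 3.1,
           "in_other_trial": False}
eligible, failed = screen(patient)
print(f"eligible={eligible}, failed={failed}")
```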

There are multiple domains of interaction between AI and RCTs, though a comprehensive review is beyond the scope of this paper. These include trial emulation to identify patient populations that may benefit most from an intervention, screening for the most promising candidate drugs, detecting heterogeneity of treatment effects, and automated screening to improve the efficiency and cost of clinical trials.

Ensuring that AI models are clinically effective, reproducible, and generalizable requires adherence to rigorous methodological standards, particularly in critical care where patient heterogeneity, real-time decision-making, and high-frequency data collection pose unique challenges. Several established reporting and validation frameworks already provide guidance for improving AI research in ICU settings. While these frameworks are not specific to the ICU environment, we believe these should be rapidly disseminated into the critical care community through dedicated initiatives, courses and scientific societies.

For predictive models, the TRIPOD-AI extension of the TRIPOD guidelines focuses on transparent reporting for clinical prediction models, with specific emphasis on calibration, internal and external validation, and fairness [68]. The PROBAST-AI framework complements this by offering a structured tool to assess risk of bias and applicability in prediction model studies [69]. CONSORT-AI extends the CONSORT framework to include AI-specific elements such as algorithm transparency and reproducibility for interventional trials involving AI [70], while STARD-AI provides a framework for reporting AI-based diagnostic accuracy studies [71]. Together, these guidelines address transparency, reproducibility, fairness, external validation, and human oversight, principles that must be considered foundational for any trustworthy AI research in healthcare. Despite the availability of these frameworks, many ICU studies involving AI methods still fail to meet these standards, raising concerns about inadequate external validation and generalizability [68, 72, 73].
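Calibration, one of the recurring requirements in these guidelines, can be checked with a few lines of scikit-learn; below is a minimal sketch with simulated outcomes and deliberately miscalibrated predictions (all values are hypothetical):

```python
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)

# Simulated "true" risks and outcomes, plus a model that is overconfident
true_risk = rng.uniform(0, 1, size=5000)
y = rng.binomial(1, true_risk)            # observed binary outcomes
y_prob = np.clip(true_risk * 1.3, 0, 1)   # miscalibrated predictions

# Group predictions into bins and compare predicted vs observed frequency
prob_true, prob_pred = calibration_curve(y, y_prob, n_bins=10)
for p_hat, p_obs in zip(prob_pred, prob_true):
    print(f"predicted {p_hat:.2f} -> observed {p_obs:.2f}")
# Well-calibrated models sit near the diagonal (predicted ~= observed).
```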

Beyond prediction models, critical care-specific guidelines proposed in recent literature offer targeted recommendations for evaluating AI tools in ICU environments, particularly regarding data heterogeneity, patient safety, and integration with clinical workflows. Moving forward, AI research in critical care must align with these established frameworks and adopt higher methodological standards, such as pre-registered AI trials, prospective validation in diverse ICU populations, and standardized benchmarks for algorithmic performance.

Encourage collaborative AI models

Centralizing data collection from multiple ICUs, or federating them into structured networks, enhances external validity and reliability by enabling a scale of data volume that would be unattainable for individual institutions alone [74]. ICUs are at the forefront of data-sharing efforts, offering several publicly available datasets for use by the research community [75]. There are several strategies for building collaborative databases. Networking refers to collaborative research consortia [76] that align protocols and pool clinical research data across institutions. Federated learning, by contrast, is a decentralized approach in which data are stored locally and only models or weights are shared between centres [77]. Finally, centralized approaches, such as the Epic Cosmos initiative, leverage de-identified data collected from EHRs and stored on a central server, providing access to large patient populations for research and quality improvement across the healthcare system [78]. Federated learning is gaining traction in Europe, where data privacy regulation takes a more risk-averse approach to AI development, favouring decentralized models [79]. In contrast, centralized learning approaches like Epic Cosmos are more common in the United States, where a more risk-tolerant environment favours large-scale data aggregation.
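A toy sketch of the federated idea, illustrative only: each site fits a model locally and shares only its coefficients, which a coordinator averages (plain NumPy; the "model" is linear for brevity, and all data here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

def local_fit(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Fit a least-squares linear model on one site's private data."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # only these weights ever leave the site

# Three hospitals whose raw records never leave their walls
sites = [(rng.normal(size=(200, 5)), rng.normal(size=200)) for _ in range(3)]

# One round of federated averaging: aggregate weights, not records,
# weighting each site by its sample size
coefs = [local_fit(X, y) for X, y in sites]
sizes = np.array([len(y) for _, y in sites], dtype=float)
global_coef = np.average(coefs, axis=0, weights=sizes)
print("federated coefficients:", np.round(global_coef, 3))
```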

In parallel, the use of synthetic data is emerging as a complementary strategy to enable data sharing while preserving patient privacy. Synthetic datasets are artificially generated to reflect the characteristics of real patient data and can be used to train and test models without exposing sensitive information [80]. The availability of large-scale data may also support the creation of digital twins: virtual simulations that mirror an individual’s biological and clinical state and rely on high-volume, high-fidelity datasets. Digital twins may allow predictive modelling and virtual testing of interventions before bedside application, improving the safety of those interventions.
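As a deliberately simplistic illustration of synthetic data generation, not a privacy-preserving method in itself, one can fit a multivariate Gaussian to a real cohort and sample new records from it (NumPy; real synthetic-data pipelines use far richer generative models and formal privacy guarantees):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a real cohort: 300 patients x 4 numeric features
real = rng.normal(loc=[70, 90, 1.1, 7.4],
                  scale=[15, 20, 0.5, 0.05], size=(300, 4))

# Fit a multivariate Gaussian to the cohort's mean and covariance ...
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ... then sample synthetic "patients" that share those statistics
# but correspond to no real individual in the source data
synthetic = rng.multivariate_normal(mu, cov, size=300)

print("real means:     ", np.round(mu, 2))
print("synthetic means:", np.round(synthetic.mean(axis=0), 2))
```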

The ICU community should advocate for further initiatives to extend collaborative AI models at the national and international levels.

Governance and regulation for AI in critical care

Despite growing regulatory efforts, AI regulation remains one of the greatest hurdles to clinical implementation, particularly in high-stakes environments like critical care: regulatory governance, surveillance, and evaluation of model performance are not only conceptually difficult but also require a large operational effort across diverse healthcare settings. The recent European Union AI Act introduced a risk-based regulatory framework, classifying medical AI as high-risk and requiring stringent compliance with transparency, human oversight, and post-market monitoring [18]. While these regulatory efforts provide foundational guidance, critical care AI presents unique challenges requiring specialized oversight.

By integrating regulatory, professional, and institutional oversight, AI governance in critical care can move beyond theoretical discussions toward actionable policies that balance technological innovation with patient safety [73, 81, 82].

Foster collaboration between the public and private sectors

Given the complexity and the significant economic, human, and computational resources needed to develop a large generative AI model, physicians and regulators should promote partnerships among healthcare institutions, technology companies, and governmental bodies to support the research, development, and deployment of AI-enabled care solutions [83]. Beyond regulatory agencies, professional societies and institutional governance structures must assume a more active role. Organizations such as the Society of Critical Care Medicine (SCCM) and the European Society of Intensive Care Medicine (ESICM), together with regulatory bodies like the European Medicines Agency (EMA), should establish specific clinical practice guidelines for AI in critical care, including standards for model validation, clinician-AI collaboration, and accountability. Regulatory bodies should operate at both national and supranational levels, with transparent governance involving multidisciplinary representation (clinicians, data scientists, ethicists, and patient advocates) to ensure decisions are both evidence-based and ethically grounded. To avoid postponing innovation indefinitely, regulation should be adaptive and proportionate, focusing on risk-based oversight and continuous post-deployment monitoring rather than rigid pre-market restrictions. Furthermore, implementing mandatory reporting requirements for AI performance and creating hospital-based AI safety committees could offer a structured, practical framework to safeguard the ongoing reliability and safety of clinical AI applications.

Address AI divide to improve health equality

The adoption of AI may vary significantly across geographic regions, influenced by technological capacity (i.e., disparities in access to software or hardware resources) and by differences in investment and priorities between countries. This “AI divide” can separate those with broad access to AI from those with limited or no access, exacerbating social and economic inequalities.

It has been proposed that the European Commission act as an umbrella body to coordinate EU-wide strategies for reducing the AI divide between European countries, providing coordination and supporting programmes of activities [84]. Specific programmes, such as Marie-Curie training networks, could strengthen human capital in AI while infrastructures are developed and common guidelines and approaches are implemented across countries.

A recent document from the United Nations also addresses the digital divide across different economic sectors, recommending education, international cooperation, and technological development for an equitable AI resource and infrastructure allocation [85].

Accordingly, the medical community in each country should lobby at both the national and international levels, through scientific societies and the WHO, for international collaborations, for example through the development of specific grants and research initiatives. Intensivists should call for supranational approaches to standardized data collection and for policies on AI technology and data analysis. Governments, the UN, the WHO, and scientific societies should be the targets of this coordinated effort.

Continuous evaluation of dynamic models and post-marketing surveillance

A major limitation of current regulation is the lack of established pathways for dynamic AI models. AI systems in critical care are inherently dynamic, evolving as they incorporate new real-world data, yet most FDA approvals rely on static evaluation. In contrast, the EU AI Act emphasizes continuous risk assessment [18]. This approach should be expanded globally to enable real-time auditing, validation, and governance of AI-driven decision-support tools in intensive care units, and it should extend to post-market surveillance. The EU AI Act mandates ongoing surveillance of high-risk AI systems, a principle that we advocate be adopted internationally to mitigate the risks of AI degradation and bias drift in ICU environments. In practice, this requires AI vendors to provide post-marketing surveillance plans and to report serious incidents within a predefined time window (15 days or less) [18]. Companies should also maintain this monitoring as their AI systems evolve over time. The implementation of these surveillance systems should include standardized monitoring protocols, incident-reporting tools embedded within clinical workflows, participation in performance registries, and regular audits. In the EU, these mechanisms are overseen by national Market Surveillance Authorities (MSAs), supported by EU-wide guidance and upcoming templates to ensure consistent and enforceable oversight of clinical AI systems.
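What continuous evaluation might look like at its simplest, offered as a hedged sketch rather than a regulatory recipe: recompute a performance metric over rolling windows of recent cases and raise an alert when it degrades past a predefined threshold (Python with scikit-learn; the simulated drift, window size, and threshold are hypothetical):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated deployment stream: model scores are useful at first,
# then drift away from the outcomes (e.g., the case mix changes)
n = 3000
y = rng.integers(0, 2, size=n)
scores = np.where(y == 1, 0.7, 0.3) + rng.normal(0, 0.25, size=n)
scores[n // 2:] = rng.uniform(0, 1, size=n - n // 2)  # drift: scores go random

WINDOW, ALERT_AUC = 500, 0.65  # hypothetical monitoring parameters
for start in range(0, n - WINDOW + 1, WINDOW):
    sl = slice(start, start + WINDOW)
    auc = roc_auc_score(y[sl], scores[sl])
    status = "ALERT: performance degraded" if auc < ALERT_AUC else "ok"
    print(f"cases {start}-{start + WINDOW - 1}: AUC={auc:.2f} {status}")
```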

Require adequate regulations for AI deployment in clinical practice

Deploying AI within complex clinical environments like the ICU, acute wards, or even regular wards presents a considerable challenge [86].

We underline three aspects of adequate regulation. First, a rigorous regulatory process for evaluating safety and efficacy before the clinical application of AI products. Second, continuous post-market evaluation, which should be mandatory and conducted as for other types of medical devices [18].

The third important aspect is liability: identifying who should be held accountable if an AI decision, or a human decision based on AI, leads to harm. This relates to the need for adequate insurance policies. We urge regulatory bodies in each country to provide regulations on these issues, which are fundamental to AI diffusion.

We also recommend that both patients and clinicians ask regulatory bodies in each country to update current legislation and regulatory pathways, including clear rules for insurance policies, to anticipate and reduce the risk of litigation.




Instagram wrongly says some users breached child sex abuse rules

Graham Fraser

Technology Reporter

The logos of Instagram and Facebook (Getty Images)

Instagram users have told the BBC of the “extreme stress” of having their accounts banned after being wrongly accused by the platform of breaching its rules on child sexual exploitation.

The BBC has been in touch with three people who were told by parent company Meta that their accounts were being permanently disabled, only to have them reinstated shortly after their cases were highlighted to journalists.

“I’ve lost endless hours of sleep, felt isolated. It’s been horrible, not to mention having an accusation like that over my head,” one of the men told BBC News.

Meta declined to comment.

BBC News has been contacted by more than 100 people who claim to have been wrongly banned by Meta.

Some talk of a loss of earnings after being locked out of their business pages, while others highlight the pain of no longer having access to years of pictures and memories. Many point to the impact it has had on their mental health.

More than 27,000 people have signed a petition accusing Meta’s moderation system, powered by artificial intelligence (AI), of falsely banning accounts, and saying its appeal process is unfit for purpose.

Thousands of people are also in Reddit forums dedicated to the subject, and many users have posted on social media about being banned.

Meta has previously acknowledged a problem with Facebook Groups but denied its platforms were more widely affected.

‘Outrageous and vile’

The BBC has changed the names of the people in this piece to protect their identities.

David, from Aberdeen in Scotland, was suspended from Instagram on 4 June. He was told he had not followed Meta’s community standards on child sexual exploitation, abuse and nudity.

He appealed that day, and his Instagram account and his associated Facebook and Facebook Messenger accounts were then permanently disabled.

David found a Reddit thread, where many others were posting that they had also been wrongly banned over child sexual exploitation.

“We have lost years of memories, in my case over 10 years of messages, photos and posts – due to a completely outrageous and vile accusation,” he told BBC News.

He said Meta was “an embarrassment”, with AI-generated replies and templated responses to his questions. He still has no idea why his account was banned.

“I’ve lost endless hours of sleep, extreme stress, felt isolated. It’s been horrible, not to mention having an accusation like that over my head.

“Although you can speak to people on Reddit, it is hard to go and speak to a family member or a colleague. They probably don’t know the context that there is a ban wave going on.”

The BBC raised David’s case with Meta on 3 July, as one of a number of people who claimed to have been wrongly banned over child sexual exploitation. Within hours, his account was reinstated.

In a message sent to David, and seen by the BBC, the tech giant said: “We’re sorry that we’ve got this wrong, and that you weren’t able to use Instagram for a while. Sometimes, we need to take action to help keep our community safe.”

“It is a massive weight off my shoulders,” said David.

Faisal was banned from Instagram on 6 June over alleged child sexual exploitation and, like David, found his Facebook account suspended too.

The student from London is embarking on a career in the creative arts, and was starting to earn money via commissions on his Instagram page when it was suspended. He appealed, feeling he had done nothing wrong, and his account was banned a few minutes later.

He told BBC News: “I don’t know what to do and I’m really upset.

“[Meta] falsely accuse me of a crime that I have never done, which also damages my mental state and health and it has put me into pure isolation throughout the past month.”

His case was also raised with Meta by the BBC on 3 July. About five hours later, his accounts were reinstated. He received the exact same email as David, with the apology from Meta.

He told BBC News he was “quite relieved” after hearing the news. “I am trying to limit my time on Instagram now.”

Faisal said he remained upset over the incident, and is now worried the account ban might come up if any background checks are made on him.

A third user, Salim, told BBC News that he had also had accounts falsely banned for child sexual exploitation violations.

He highlighted his case to journalists, stating that appeals are “largely ignored”, business accounts were being affected, and AI was “labelling ordinary people as criminal abusers”.

Almost a week after he was banned, his Instagram and Facebook accounts were reinstated.

What’s gone wrong?

When asked by BBC News, Meta declined to comment on the cases of David, Faisal, and Salim, and did not answer questions about whether it had a problem with wrongly accusing users of child abuse offences.

It seems in one part of the world, however, it has acknowledged there is a wider issue.

The BBC has learned that the chair of the Science, ICT, Broadcasting, and Communications Committee at the National Assembly in South Korea said last month that Meta had acknowledged the possibility of wrongful suspensions for people in her country.

Dr Carolina Are, a blogger and researcher into social media moderation at Northumbria University, said it was hard to know the root of the problem because Meta was not being open about it.

However, she suggested it could be due to recent changes to the wording of some of its community guidelines and an ongoing lack of a workable appeal process.

“Meta often don’t explain what it is that triggered the deletion. We are not privy to what went wrong with the algorithm,” she told BBC News.

In a previous statement, Meta said: “We take action on accounts that violate our policies, and people can appeal if they think we’ve made a mistake.”

Meta, in common with all big technology firms, has come under increased pressure in recent years from regulators and authorities to make its platforms safe spaces.

Meta told the BBC it used a combination of people and technology to find and remove accounts that broke its rules, and that it was not aware of a spike in erroneous account suspensions.

Meta says its child sexual exploitation policy relates to children and “non-real depictions with a human likeness”, such as art, content generated by AI or fictional characters.

Meta also told the BBC a few weeks ago it uses technology to identify potentially suspicious behaviours, such as adult accounts being reported by teen accounts, or adults repeatedly searching for “harmful” terms.

Meta states that when it becomes aware of “apparent child exploitation”, it reports it to the National Center for Missing and Exploited Children (NCMEC) in the US. NCMEC told BBC News it makes all of those reports available to law enforcement around the world.







AI Algorithms Now Capable of Predicting Drug-Biological Target Interactions to Streamline Pharmaceutical Research – geneonline.com


A Semiconductor Leader Poised for AI-Driven Growth Despite Near-Term Headwinds


The semiconductor industry is at a pivotal juncture, fueled by explosive demand for advanced chips powering artificial intelligence (AI), 5G, and high-performance computing. At the heart of this revolution is Lam Research (LRCX), a leader in semiconductor equipment that stands to benefit from secular tailwinds—even as geopolitical risks cloud near-term visibility. This article examines whether LRCX’s valuation, earnings momentum, and strategic positioning justify a buy rating despite a cautious Zacks Rank.

Valuation: Undervalued PEG Ratio Signals Opportunity

Lam Research’s PEG ratio of 1.24 (as of July 2025) remains below both the semiconductor equipment industry average of 1.55 and the broader Electronics-Semiconductors sector’s average of 1.59. This metric, calculated by dividing the P/E ratio by the 5-year EBITDA growth rate, suggests LRCX is trading at a discount to its growth prospects.

The PEG ratio’s allure lies in its dual consideration of valuation and growth. A ratio under 1.5 typically indicates undervaluation, and LRCX’s 1.24 places it squarely in this category. Even if we use the industry average cited in earlier research (2.09), LRCX’s PEG remains compelling. This discount is puzzling given its dominant market share (15% of global wafer fabrication equipment, or WFE) and its role in critical technologies like atomic layer deposition (ALD), essential for AI chip production.

Earnings Momentum: Positive Revisions Amid Industry Growth

Lam’s earnings revisions tell a story of resilience. Despite macroeconomic headwinds, analysts have raised fiscal 2025 EPS estimates to $4.00, a 5% increase from 2024 levels. This upward momentum aligns with LRCX’s 48% year-over-year (YoY) earnings growth projection for Q2 2025.

The semiconductor equipment sector is a prime beneficiary of AI’s rise. AI chips require advanced nodes (e.g., 3nm and below), demanding cutting-edge equipment like LRCX’s etch and deposition tools. This structural demand, paired with rising WFE spending (expected to hit $130 billion by 2027), positions LRCX for sustained growth.

The Zacks Rank Dilemma: Why Hold Doesn’t Tell the Full Story

Lam Research’s Zacks Rank #4 (Sell) as of July 2025 reflects near-term risks, including:
Geopolitical tensions: U.S.-China trade disputes could disrupt LRCX’s China revenue (a major market).
Delayed NAND spending: A slowdown in NAND memory chip investments has dampened short-term demand.

However, the Zacks Rank focuses on 12–24 months of near-term volatility. It underweights long-term catalysts like:
1. AI-driven capex boom: Chipmakers like TSMC and Samsung are ramping up AI-specific foundries, requiring Lam’s tools.
2. Potential China trade thaw: If U.S. sanctions ease, LRCX could regain access to Chinese clients, boosting revenue.

The Rank’s caution is understandable, but investors should separate short-term noise from LRCX’s strong fundamentals:
Forward P/E of 21.6x, below the semiconductor sector’s 35.3x average.
ROE of 53%, reflecting operational efficiency.

Catalysts for a Re-Rating: AI and Geopolitical Shifts

The key catalysts to watch for a valuation rebound are:
1. AI Chip Demand: NVIDIA’s $200 billion AI chip roadmap and Google’s quantum computing investments underscore the need for advanced fabrication tools. LRCX’s ALD systems are critical for these chips.
2. Trade Policy Shifts: A potential easing of U.S.-China trade restrictions could unlock $500 million+ in annual revenue for LRCX.
3. Q3 2025 Earnings: Management’s guidance of $1.00 EPS and $4.65 billion in revenue (both above consensus) could surprise positively.

Risks and Conclusion: A Buy for the Next 12 Months

Lam Research isn’t without risks:
Execution risks: High R&D costs ($1.3 billion annually) could pressure margins.
Macroeconomic slowdown: A recession could delay chip capex.

However, the long-term case for LRCX is too strong to ignore. Its PEG discount, earnings momentum, and strategic position in AI infrastructure justify a buy rating for the next 12 months. Investors should aim for a target price of $110 (25x forward P/E), with upside if China-related risks abate.

In sum, LRCX’s valuation and growth trajectory make it a compelling play on the AI revolution. While near-term headwinds justify caution, the re-rating potential is undeniable.

Investment thesis: Buy LRCX at current levels, with a 12-month price target of $110.
Risk rating: Moderate (geopolitical and macro risks).
Hold for: 12–18 months for valuation expansion.


