Implementing Artificial Intelligence in Critical Care Medicine: a consensus of 22 | Critical Care

Artificial intelligence (AI) is rapidly entering critical care, where it holds the potential to improve diagnostic accuracy and prognostication, streamline intensive care unit (ICU) workflows, and enable personalized care [1, 2]. Without a structured approach to implementation, evaluation, and control, this transformation may be hindered, or may even lead to patient harm and unintended consequences.

Despite the need to support overwhelmed ICUs facing staff shortages, increasing case complexity, and rising costs, most AI tools remain poorly validated and untested in real-world settings [3,4,5].

To address this gap, we issue a call to action for the critical care community: the integration of AI into the ICU must follow a pragmatic, clinically informed, and risk-aware framework [6,7,8]. The product of a multidisciplinary consensus process involving a panel of intensivists, AI researchers, data scientists, and other experts, this paper offers concrete recommendations to guide the safe, effective, and meaningful adoption of AI in critical care.

Methods

The consensus presented in this manuscript emerged through expert discussions, rather than formal grading or voting on evidence, in recognition that AI in critical care is a rapidly evolving field where many critical questions remain unanswered. Participants were selected by the consensus chairs (MC, AB, FT, and JLV) based on their recognized contributions to AI in critical care to ensure representation from both clinical end-users and AI developers. Discussions were iterative with deliberate engagement across domains, refining recommendations through critical examination of real-world challenges, current research, and regulatory landscapes.

While not purely based on traditional evidence grading, this manuscript reflects a rigorous, expert-driven synthesis of key barriers and opportunities for AI in critical care, aiming to bridge existing knowledge gaps and provide actionable guidance in a rapidly evolving field. To guide physicians in this complex and rapidly evolving arena [9], some of the current taxonomy and classifications are reported in Fig. 1.

Fig. 1

Taxonomy of AI in critical care

Main barriers and challenges for AI integration in critical care

The main barriers to AI implementation in critical care identified by the expert consensus are presented in this section. These unresolved and evolving challenges have prompted us to develop a series of recommendations for physicians and other healthcare workers, patients, and societal stakeholders, emphasizing the principles we believe should guide the advancement of AI in healthcare. Challenges and principles are divided into four main areas: 1) human-centric AI; 2) recommendations for clinician training on AI use; 3) standardization of data models and networks; and 4) AI governance. These are summarized in Fig. 2 and discussed in more detail in the following sections.

Fig. 2

Recommendations, according to development of standards for networking, data sharing and research, ethical challenges, regulations and societal challenges, and clinical practice

The development and maintenance of AI applications in medicine require enormous computational power, infrastructure, funding, and technical expertise. Consequently, AI development is led by major technology companies whose goals may not always align with those of patients or healthcare systems [10, 11]. The rapid diffusion of new AI models contrasts sharply with the evidence-based culture of medicine. This raises concerns about the deployment of insufficiently validated clinical models [12].

Moreover, many models are developed using datasets that underrepresent vulnerable populations, leading to algorithmic bias [13]. AI models may lack both temporal validity (when applied to new data at a different time) and geographic validity (when applied across different institutions or regions). Variability in temporal or geographical disease patterns, including demographics, healthcare infrastructure, and the design of Electronic Health Records (EHR), further complicates generalizability.

Finally, the use of AI raises ethical concerns, including trust in algorithmic recommendations and the risk of weakening the human connection at the core of medical practice: the millennia-old relationship between physicians and patients [14].

Recommendations

Here we report recommendations, divided into four domains. Figure 3 summarizes five representative AI use cases in critical care, ranging from waveform analysis to personalized clinician training, mapped across these four domains.

Fig. 3

Summary of five representative AI use cases in critical care, ranging from waveform analysis to personalized clinician training, mapped across these four domains

Strive for human-centric and ethical AI utilization in healthcare

Alongside its significant potential benefits, the risk of AI misuse must not be underestimated. AI algorithms may be harmful when prematurely deployed without adequate control [9, 15,16,17]. In addition to the regulatory frameworks that have been established to maintain control (presented in Sect. “Governance and regulation for AI in Critical Care”) [18, 19], we advocate for clinicians to be involved in this process and to provide guidance.

Develop human-centric AI in healthcare

AI development in medicine and healthcare should maintain a human-centric perspective, promote empathetic care, and increase the time allocated to patient-physician communication and interaction. One example is the use of AI to replace humans in time-consuming or bureaucratic tasks such as documentation and transfers of care [20,21,22]. AI could draft clinical notes, ensuring critical information is accurately captured in health records while reducing administrative burdens [23].

Establish social contract for AI use in healthcare

There is a significant concern that AI may exacerbate societal healthcare disparities [24]. When considering AI’s potential influence on physicians’ choices and behaviour, the possibility of introducing or reinforcing biases should be examined rigorously to avoid perpetuating existing health inequities and unfair data-driven associations [24]. It is thus vital to involve patients and societal representatives in discussions regarding the vision of the next healthcare era, its operations, goals, and limits of action [25]. The desirable aim would be to establish a social contract for AI in healthcare, ensuring its accountability and transparency. Such a contract should define clear roles and responsibilities for all stakeholders: clinicians, patients, developers, regulators, and administrators. This includes clinicians being equipped to critically evaluate AI tools, developers ensuring transparency, safety, and clinical relevance, and regulators enforcing performance, equity, and post-deployment monitoring standards. We advocate for hospitals to establish formal oversight mechanisms, such as dedicated AI committees, to ensure the safe implementation of AI systems. Such structures would help formalize shared accountability and ensure that AI deployment remains aligned with the core values of fairness, safety, and human-centred care.

Prioritize human oversight and ethical governance in clinical AI

Since the Hippocratic oath, patient care has been based on the doctor-patient connection, in which clinicians bear the ethical responsibility to maximize patient benefit while minimizing harm. As AI technologies are increasingly integrated into healthcare, clinicians’ responsibility must also extend to overseeing their development and application. In the ICU, where treatment decisions balance individual patient preferences against societal considerations, healthcare professionals must lead this transition [26]. As intensivists, we should maintain governance of this process, ensuring that ethical principles and scientific rigor guide the development of frameworks to measure fairness, assess bias, and establish acceptable thresholds for AI uncertainty [6,7,8].

While AI models are rapidly emerging, most are being developed outside the medical community. To better align AI development with clinical ethics, we propose the incorporation of multidisciplinary boards comprising clinicians, patients, ethicists, and technological experts, who should be responsible for systematically reviewing algorithmic behaviour in critical care, assessing the risks of bias, and promoting transparency in decision-making processes. In this context, AI development offers an opportunity to rethink and advance ethical principles in patient care.

Recommendations for clinician training on AI use

Develop and assess the Human-AI interface

Despite some promising results [27, 28], the clinical application of AI remains limited [29,30,31]. The first step toward integration is to understand how clinicians interact with AI and to design systems that complement, rather than disrupt, clinical reasoning [32]. This translates into the need for specific research on the human-AI interface, where a key area of focus is identifying the most effective cognitive interface between clinicians and AI systems. On one side, physicians may place excessive trust in AI model results, possibly overlooking crucial information. For example, in sepsis detection an AI algorithm might miss an atypical presentation or a tropical infectious disease due to limitations in its training data; if clinicians overly trust the algorithm’s negative output, they may delay initiating a necessary antibiotic. On the other, the behaviour of clinicians can influence AI responses in unintended ways. To better reflect this interaction, the concept of synergy between human and AI has been proposed in recent years, emphasizing that AI supports rather than replaces human clinicians [33]. This collaboration has been described in two forms: human-AI augmentation (where the human-AI interface enhances clinical performance compared to the human alone) and human-AI synergy (where the combined performance exceeds that of both the human and the AI individually) [34]. To support the introduction of AI into clinical practice in intensive care, we propose starting with the concept of human-AI augmentation, which is more inclusive and better established in the medical literature [34]. A straightforward example of such augmentation is the development of interpretable, real-time dashboards that synthesize complex multidimensional data into visual formats, thereby enhancing clinicians’ situational awareness without overwhelming them.
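
To make these two definitions concrete, here is a toy Python sketch (hypothetical error rates and an invented override rule, not a validated protocol) comparing a simulated clinician, a simulated model, and one possible combined workflow. Augmentation corresponds to the combined accuracy exceeding the clinician alone; synergy to it exceeding both.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
truth = rng.integers(0, 2, n)  # ground-truth labels (e.g., sepsis yes/no)

# Simulated, imperfect judgments: the clinician is right 80% of the time;
# the model emits a probability that is informative but noisy
clinician = np.where(rng.random(n) < 0.80, truth, 1 - truth)
model_prob = np.clip(truth * 0.7 + rng.normal(0.15, 0.20, n), 0.0, 1.0)
model = (model_prob > 0.5).astype(int)

# One possible interface design: the clinician overrides the model
# whenever the model is uncertain (probability close to 0.5)
combined = np.where(np.abs(model_prob - 0.5) < 0.2, clinician, model)

def accuracy(pred):
    return (pred == truth).mean()

print(f"clinician alone: {accuracy(clinician):.3f}")
print(f"model alone:     {accuracy(model):.3f}")
print(f"combined:        {accuracy(combined):.3f}")
# Augmentation: combined beats the clinician alone.
# Synergy: combined beats both the clinician and the model individually.
```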

Improve disease characterization with AI

Traditional procedures for classifying patients and labelling diseases and syndromes based on a few simple criteria are the basis of medical education, but they may fail to grasp the complexity of underlying pathology and lead to suboptimal care. In critical care, where patient conditions are complex and rapidly evolving, AI-driven phenotyping plays a crucial role by leveraging vast amounts of genetic, radiological, biomarker, and physiological data. AI-based phenotyping methods can be broadly categorized into two approaches.

One approach involves unsupervised clustering, in which patients are grouped based on shared features or patterns without prior labelling. Seymour et al. demonstrated how machine learning can stratify septic patients into clinically meaningful subgroups using high-dimensional data, which can subsequently inform risk assessment and prognosis [35]. Another promising possibility is the use of supervised or semi-supervised clustering techniques, which incorporate known outcomes or partial labelling to enhance the phenotyping of patient subgroups [36].
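
As a minimal sketch of this unsupervised approach, the following Python example clusters a simulated cohort on a handful of routine ICU variables with k-means; the features, cohort, and cluster count are illustrative assumptions and do not reproduce the cited methodology.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 500
# Toy cohort: lactate (mmol/L), creatinine (mg/dL), heart rate (bpm), MAP (mmHg)
X = np.column_stack([
    rng.gamma(2.0, 1.0, n),
    rng.gamma(2.0, 0.6, n),
    rng.normal(95, 15, n),
    rng.normal(70, 10, n),
])

X_scaled = StandardScaler().fit_transform(X)  # put features on a common scale
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_scaled)

# Summarize each candidate phenotype by its mean physiology
for k in range(4):
    members = X[labels == k]
    print(f"cluster {k}: n={len(members)}, feature means={members.mean(axis=0).round(1)}")
```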

The second approach falls under the causal inference framework, where phenotyping is conducted with the specific objective of identifying subgroups that benefit from a particular intervention due to a causal association. This method aims to enhance personalized treatment by identifying how treatment effects vary among groups, ensuring that therapies are targeted toward patients most likely to benefit. For example, machine learning has been used to stratify critically ill patients based on their response to specific therapeutic interventions, potentially improving clinical outcomes [37]. In a large ICU cohort of patients with traumatic brain injury (TBI), unsupervised clustering identified six distinct subgroups based on combined neurological and metabolic profiles [38].

These approaches hold significant potential for advancing acute and critical care by ensuring that AI-driven phenotyping is not only descriptive but also actionable. Before integrating these methodologies into clinical workflows, we need to make sure clinicians can accept the paradigm shift from broad syndromes to specific sub-phenotypes, ultimately supporting the transition toward personalized medicine [35, 39,40,41].

Ensure AI training for responsible use of AI in healthcare

In addition to clinical practice, undergraduate medical education is also directly influenced by the AI transformation [42], as future clinicians need to be equipped to understand and use these technologies. Providing training and knowledge from the start of their education requires that all clinicians understand the fundamental concepts, methods, and limitations of data science and AI, which should be included in the core curriculum of medical degrees. This will allow clinicians to use and assess AI critically, identify biases and limitations, and make well-informed decisions, which may ultimately ease the medical profession’s identity crisis and open new careers in data analysis and AI research [42].

In addition to undergraduate education, it is essential to train experienced physicians, nurses, and other allied health professionals [43]. The effects of AI on academic education are profound and outside the scope of the current manuscript. One promising example is the use of AI to support personalized, AI-driven training for clinicians, both in clinical education and in understanding AI-related concepts [44]. Tools such as chatbots, adaptive simulation platforms, and intelligent tutoring systems can adapt content to students’ learning needs in real time, offering a tailored education. This may be applied to both clinical training and training in AI domains.

Accepting uncertainty in medical decision-making

Uncertainty is an intrinsic part of clinical decision-making, one that clinicians are familiar with and trained to navigate through experience and intuition. However, AI models introduce a new type of uncertainty, which can undermine clinicians’ trust, especially when models function as opaque “black boxes” [45,46,47]. This increases the cognitive distance between model output and clinical judgment, as clinicians do not know how to interpret it. To bridge this gap, explainable AI (XAI) has emerged, providing tools to make model predictions more interpretable and, ideally, more trustworthy, thereby reducing perceived uncertainty [48].

Yet, we argue that interpretability alone is not enough [48]. To accelerate AI adoption and trust, we advocate that physicians be trained to interpret outputs under uncertainty, using frameworks such as plausibility, consistency with known biology, and alignment with consolidated clinical reasoning, rather than expecting full explainability [49].
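
One simple, model-agnostic way to make such uncertainty explicit is to report disagreement across an ensemble of models alongside the prediction itself. The sketch below illustrates the idea on hypothetical ensemble outputs; the deferral threshold is an arbitrary assumption chosen for illustration.

```python
import numpy as np

# Probabilities from, e.g., five independently trained models for three patients
ensemble_probs = np.array([
    [0.91, 0.94, 0.89, 0.93, 0.90],  # models agree: confident positive
    [0.08, 0.05, 0.11, 0.06, 0.09],  # models agree: confident negative
    [0.25, 0.75, 0.40, 0.65, 0.50],  # models disagree: high uncertainty
])

mean_p = ensemble_probs.mean(axis=1)
spread = ensemble_probs.std(axis=1)  # disagreement across the ensemble

for p, s in zip(mean_p, spread):
    action = "defer to clinician" if s > 0.15 else "report prediction"
    print(f"p={p:.2f}  spread={s:.2f}  -> {action}")
```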

Standardize and share data while maintaining patient privacy

In this section we present key infrastructures for AI deployment in critical care [50]. Their costs should be seen as an investment in patient outcomes, process efficiency, and reduced operational costs. Retaining data ownership within healthcare institutions, and recognizing patients and providers as stakeholders, allows them to benefit from the value their data creates. By contrast, without safeguards, clinical data risk becoming proprietary products of private companies, resold to their source institutions rather than serving as a resource for their own development, for instance through the development and licensing of synthetic datasets [51].

Standardize data to promote reproducible AI models

Standardized data collection is essential for creating generalizable and reproducible AI models and fostering interoperability between different centres and systems. A key challenge in acute and critical care is the variability in data sources, including EHRs, multi-omics data (genomics, transcriptomics, proteomics, and metabolomics), medical imaging (radiology, pathology, and ultrasound), and unstructured free-text data from clinical notes and reports. These diverse data modalities are crucial for developing AI-driven decision-support tools, yet their integration is complex due to differences in structure, format, and quality across healthcare institutions.

For instance, the detection of organ dysfunction in the ICU, hemodynamic monitoring data collected by different devices, respiratory parameters from ventilators of different manufacturers, and variations in local policies and regulations all affect EHR data quality, structure, and consistency across centres and clinical trials.

The Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM), which embeds standard vocabularies such as LOINC and SNOMED CT, continues to gain popularity as a framework for structuring healthcare data, enabling cross-centre data exchange and model interoperability [52,53,54]. Similarly, Fast Healthcare Interoperability Resources (FHIR) offers a flexible, standardized information exchange solution, facilitating real-time accessibility of structured data [55].
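
To make this concrete, the snippet below sketches how a single ICU vital sign could be represented as a FHIR R4 Observation, with the measurement coded in LOINC and the unit in UCUM; the patient reference, timestamp, and value are hypothetical.

```python
import json

# A minimal FHIR R4 Observation for an ICU vital sign. The identifiers
# and patient reference are hypothetical placeholders.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{"coding": [{
        "system": "http://terminology.hl7.org/CodeSystem/observation-category",
        "code": "vital-signs",
    }]}],
    "code": {"coding": [{
        "system": "http://loinc.org",
        "code": "8867-4",              # LOINC code for heart rate
        "display": "Heart rate",
    }]},
    "subject": {"reference": "Patient/icu-example"},  # hypothetical patient id
    "effectiveDateTime": "2025-01-15T08:30:00Z",
    "valueQuantity": {
        "value": 118,
        "unit": "beats/minute",
        "system": "http://unitsofmeasure.org",
        "code": "/min",                # UCUM unit code
    },
}

print(json.dumps(observation, indent=2))
```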

Hospitals, device manufacturers, and EHR vendors must contribute to the adoption of recognized standards to ensure that interoperability does not become a barrier to AI implementation.

Beyond structured data, AI has the potential to enhance data standardization by automatically tagging and labelling data sources, tracking provenance, and harmonizing data formats across institutions. Leveraging AI for these tasks can help mitigate data inconsistencies, thereby improving the reliability and scalability of AI-driven clinical applications.
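
As a toy illustration of the harmonization step described above, the sketch below maps heterogeneous local column names onto a shared vocabulary using a curated alias table with a fuzzy-matching fallback; real systems would rely on trained models and curated terminologies, and all names here are invented.

```python
from difflib import get_close_matches

STANDARD = ["heart_rate", "mean_arterial_pressure", "respiratory_rate", "lactate"]
ALIASES = {"hr": "heart_rate", "map": "mean_arterial_pressure", "rr": "respiratory_rate"}

def normalize(name: str) -> str:
    """Crude cleanup before matching."""
    return name.lower().replace("-", "_").replace(" ", "_").strip("_")

def harmonize(column: str):
    key = normalize(column)
    if key in ALIASES:                  # curated mappings take priority
        return ALIASES[key]
    match = get_close_matches(key, STANDARD, n=1, cutoff=0.6)
    return match[0] if match else None  # unresolved -> human review

for col in ["HR", "map", "Respiratory Rate", "lactate mmol/L", "SpO2"]:
    print(f"{col!r:20} -> {harmonize(col) or 'needs human review'}")
```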

Prioritize data safety, security, and patient privacy

Data safety, security, and privacy are all needed for the application of AI in critical care. Data safety refers to the protection of data from accidental loss or system failure, while data security concerns defence against malicious attacks, including hacking, ransomware, and unauthorized data access [56]. In modern hospitals, data safety and security will soon become as essential as wall oxygen in operating rooms [57, 58]. A corrupted or hacked clinical dataset during hospital care could be as catastrophic as losing electricity, medications, or oxygen. Finally, data privacy focuses on safeguarding personal information, ensuring that patient data are stored and accessed in compliance with legal standards [56].
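
As one small, concrete example of the privacy pillar, the sketch below pseudonymizes patient identifiers with a keyed hash so that records can be linked across tables without storing raw identifiers; key management, access control, and encryption at rest are deliberately out of scope, and the key shown is a placeholder.

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-managed-key"  # placeholder; never hard-code

def pseudonymize(patient_id: str) -> str:
    """Deterministic keyed hash: the same input always maps to the same
    pseudonym, but the mapping cannot be reversed without the key."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

for pid in ["MRN-000123", "MRN-000124", "MRN-000123"]:
    print(pid, "->", pseudonymize(pid))
```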

Implementing AI that prioritizes these three pillars will be critical for a resilient digital infrastructure in healthcare. A possible option for the medical community is to support open-source models to increase transparency, reduce dependence on proprietary algorithms, and possibly enable better control of safety and privacy issues within distributed systems [59]. However, sustaining open-source innovation requires appropriate incentives, such as public or dedicated research funding, academic recognition, and regulatory support to ensure high-quality development and long-term viability [60]. Without such strategies, the role of open-source models will shrink, with the risk of ceding a larger share of control over clinical decision-making to commercial algorithms.

Develop rigorous AI research methodology

We believe AI research should be held to the same methodological standards as other areas of medical research. Achieving this will require greater accountability from peer reviewers and scientific journals to ensure rigor, transparency, and clinical relevance.

Furthermore, advancing AI in ICU research requires a transformation of the underlying infrastructure, particularly for high-frequency data collection and the integration of complex, multimodal patient information, detailed in the sections below. In this context, the gap in data resolution between highly monitored environments such as ICUs and standard wards becomes apparent. The ICU provides a high level of data granularity thanks to high-resolution monitoring systems capable of capturing rapid changes in a patient’s physiological status [61]. Consequently, the integration of this new source of high-volume, rapidly changing physiological data into medical research and clinical practice could give rise to “physiolomics”, a proposed term for this domain, which could become as crucial as genomics, proteomics, and other “-omics” fields in advancing personalized medicine.

AI will change how clinical research is performed, improving evidence-based medicine and the conduct of randomized clinical trials (RCTs) [62]. Instead of using large, heterogeneous trial populations, AI might help researchers design and enrol tailored patient subgroups for precise RCTs [63, 64]. These precision methods could address the problem of negative critical care trials caused by population inhomogeneity and significant confounding effects. AI could thus improve RCTs by allowing the enrolment of very specific subgroups of patients with hundreds of inclusion criteria across dozens of centres, a task impossible for humans to perform in real-time practice, improving trial efficiency in enrolling enriched populations [65,66,67]. In the TBI example cited above, conducting an RCT on the six AI-identified endotypes, such as patients with moderate GCS but severe metabolic derangement, would be unfeasible without AI stratification [38]. This underscores AI’s potential to enable precision trial designs in critical care.
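
To illustrate why machine assistance matters at this scale, the toy sketch below encodes eligibility criteria as data and applies them to a simulated cohort in a single pass; the variables and thresholds are invented and unrelated to any specific trial.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
cohort = {
    "age": rng.integers(18, 95, n),
    "lactate": rng.gamma(2.0, 1.2, n),
    "gcs": rng.integers(3, 16, n),
    "on_vasopressors": rng.random(n) < 0.3,
}

# Each criterion is (variable, test); a real trial might encode hundreds
criteria = [
    ("age", lambda v: v >= 18),
    ("lactate", lambda v: v > 2.0),
    ("gcs", lambda v: (v >= 9) & (v <= 12)),  # moderate GCS
    ("on_vasopressors", lambda v: v),
]

eligible = np.ones(n, dtype=bool)
for var, test in criteria:
    eligible &= test(cohort[var])  # vectorized screen over the whole cohort

print(f"eligible: {eligible.sum()} / {n} patients")
```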

There are multiple domains of interaction between AI and RCTs, though a comprehensive review is beyond the scope of this paper. These include trial emulation to identify patient populations that may benefit most from an intervention, screening for the most promising drugs, detecting heterogeneity of treatment effects, and automated screening to improve the efficiency and cost of clinical trials.

Ensuring that AI models are clinically effective, reproducible, and generalizable requires adherence to rigorous methodological standards, particularly in critical care where patient heterogeneity, real-time decision-making, and high-frequency data collection pose unique challenges. Several established reporting and validation frameworks already provide guidance for improving AI research in ICU settings. While these frameworks are not specific to the ICU environment, we believe these should be rapidly disseminated into the critical care community through dedicated initiatives, courses and scientific societies.

For predictive models, the TRIPOD-AI extension of the TRIPOD guidelines focuses on transparent reporting for clinical prediction models, with specific emphasis on calibration, internal and external validation, and fairness [68]. The PROBAST-AI framework complements this by offering a structured tool to assess risk of bias and applicability in prediction model studies [69]. CONSORT-AI extends the CONSORT framework to include AI-specific elements such as algorithm transparency and reproducibility for interventional trials with AI [70], while STARD-AI provides a framework for reporting AI-based diagnostic accuracy studies [71]. Together, these guidelines address transparency, reproducibility, fairness, external validation, and human oversight: principles that must be considered foundational for any trustworthy AI research in healthcare. Despite the availability of these frameworks, many ICU studies involving AI methods still fail to meet these standards, leading to concerns about inadequate external validation and generalizability [68, 72, 73].

Beyond prediction models, critical care-specific guidelines proposed in recent literature offer targeted recommendations for evaluating AI tools in ICU environments, particularly regarding data heterogeneity, patient safety, and integration with clinical workflows. Moving forward, AI research in critical care must align with these established frameworks and adopt higher methodological standards, such as pre-registered AI trials, prospective validation in diverse ICU populations, and standardized benchmarks for algorithmic performance.

Encourage collaborative AI models

Centralizing data collection from multiple ICUs, or federating them into structured networks, enhances external validity and reliability by enabling a scale of data volume that would be unattainable for individual institutions alone [74]. ICUs are at the forefront of data-sharing efforts, offering several publicly available datasets for use by the research community [75]. There are several strategies for building collaborative databases. Networking refers to collaborative research consortia [76] that align protocols and pool clinical research data across institutions. Federated learning, by contrast, is a decentralized approach in which data are stored locally and only models or weights are shared between centres [77]. Finally, centralized approaches, such as the Epic Cosmos initiative, leverage de-identified data collected from EHRs and stored on a central server, providing access to large patient populations for research and quality improvement across the healthcare system [78]. Federated learning is gaining traction in Europe, where data privacy regulation takes a more risk-averse approach to AI development, favouring decentralized models [79]. In contrast, centralized learning approaches like Epic Cosmos are more common in the United States, where a more risk-tolerant environment favours large-scale data aggregation.
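
The following sketch illustrates the core idea of federated learning with a toy federated-averaging (FedAvg) loop: each simulated site fits a local logistic model on its own data and shares only the weights, which a coordinator averages in proportion to site size. The single-model setup and training details are deliberate simplifications, not a production federated system.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """A few steps of logistic-regression gradient descent on local data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)  # data never leaves the site
    return w

# Three hospitals with different cohort sizes; records stay on-site
sites = [(rng.normal(size=(n, 5)), rng.integers(0, 2, n).astype(float))
         for n in (200, 500, 120)]

global_w = np.zeros(5)
for _ in range(10):                       # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    sizes = [len(y) for _, y in sites]
    global_w = np.average(local_ws, axis=0, weights=sizes)  # FedAvg step

print("global weights after 10 rounds:", global_w.round(3))
```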

In parallel, the use of synthetic data is emerging as a complementary strategy to enable data sharing while preserving patient privacy. Synthetic datasets are artificially generated to reflect the characteristics of real patient data and can be used to train and test models without exposing sensitive information [80]. The availability of large-scale data may also support the creation of digital twins: virtual simulations that mirror an individual’s biological and clinical state and rely on high-volume, high-fidelity datasets. Digital twins may allow predictive modelling and virtual testing of interventions before bedside application, improving their safety.
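
The sketch below shows the synthetic-data idea in its most reduced form: fit the mean and covariance of a (here simulated) real dataset and sample new records that preserve those statistics without copying any patient. Production generators are far more sophisticated, and this toy version carries no formal privacy guarantee.

```python
import numpy as np

rng = np.random.default_rng(5)
# Stand-in for real data: three correlated ICU variables for 400 patients
real = rng.multivariate_normal(
    mean=[85, 70, 2.1],  # heart rate, MAP, lactate
    cov=[[120, -40, 3], [-40, 90, -2], [3, -2, 1.2]],
    size=400,
)

# Fit summary statistics, then sample synthetic patients from them
mu, sigma = real.mean(axis=0), np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mu, sigma, size=400)

print("real means:     ", real.mean(axis=0).round(1))
print("synthetic means:", synthetic.mean(axis=0).round(1))
```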

The ICU community should advocate for the diffusion of further initiatives to extend collaborative AI models at the national and international levels.

Governance and regulation for AI in Critical Care

Despite growing regulatory efforts, AI regulation remains one of the greatest hurdles to clinical implementation, particularly in high-stakes environments like critical care, as regulatory governance, surveillance, and evaluation of model performance are not only conceptually difficult, but also require a large operational effort across diverse healthcare settings. The recent European Union AI Act introduced a risk-based regulatory framework, classifying medical AI as high-risk and requiring stringent compliance with transparency, human oversight, and post-market monitoring [18]. While these regulatory efforts provide foundational guidance, critical care AI presents unique challenges requiring specialized oversight.

By integrating regulatory, professional, and institutional oversight, AI governance in critical care can move beyond theoretical discussions toward actionable policies that balance technological innovation with patient safety [73, 81, 82].

Grant collaboration between public and private sector

Given the complexity and the significant economic, human, and computational resources needed to develop a large generative AI model, physicians and regulators should promote partnerships among healthcare institutions, technology companies, and governmental bodies to support the research, development, and deployment of AI-enabled care solutions [83]. Beyond regulatory agencies, professional societies and institutional governance structures must assume a more active role. Organizations such as the Society of Critical Care Medicine (SCCM) and the European Society of Intensive Care Medicine (ESICM), and regulatory bodies like the European Medicines Agency (EMA), should establish specific clinical practice guidelines for AI in critical care, including standards for model validation, clinician-AI collaboration, and accountability. Regulatory bodies should operate at both national and supranational levels, with transparent governance involving multidisciplinary representation (including clinicians, data scientists, ethicists, and patient advocates) to ensure decisions are both evidence-based and ethically grounded. To avoid postponing innovation indefinitely, regulation should be adaptive and proportionate, focusing on risk-based oversight and continuous post-deployment monitoring rather than rigid pre-market restrictions. Furthermore, implementing mandatory reporting requirements for AI performance and creating hospital-based AI safety committees could offer a structured, practical framework to safeguard the ongoing reliability and safety of clinical AI applications.

Address AI divide to improve health equality

The adoption of AI may vary significantly across geographic regions, influenced by technological capacities (i.e., disparities in access to software or hardware resources) and by differences in investments and priorities between countries. This “AI divide” can separate those with broad access to AI from those with limited or no access, exacerbating social and economic inequalities.

It has been proposed that the EU Commission act as an umbrella organization coordinating EU-wide strategies to reduce the AI divide between European countries, implementing coordination and supporting programmes of activities [84]. Specific programmes, such as Marie Curie training networks, are mentioned there as means to strengthen human capital in AI while developing infrastructures and implementing common guidelines and approaches across countries.

A recent document from the United Nations also addresses the digital divide across different economic sectors, recommending education, international cooperation, and technological development for an equitable AI resource and infrastructure allocation [85].

Accordingly, the medical community in each country should lobby at both the national and international levels, through scientific societies and the WHO, for international collaborations, such as the development of specific grants and research initiatives. Intensivists should call for supranational approaches to standardized data collection and for policies on AI technology and data analysis. Governments, the UN, the WHO, and scientific societies should be the targets of this coordinated effort.

Continuous evaluation of dynamic models and post-marketing surveillance

A major limitation in current regulation is the lack of established pathways for dynamic AI models. AI systems in critical care are inherently dynamic, evolving as they incorporate new real-world data, while most FDA approvals rely on static evaluation. In contrast, the EU AI Act emphasizes continuous risk assessment [18]. This approach should be expanded globally to enable real-time auditing, validation, and governance of AI-driven decision support tools in intensive care units, and should extend to post-market surveillance. The EU AI Act mandates ongoing surveillance of high-risk AI systems, a principle that we advocate be adopted internationally to mitigate the risks of AI degradation and bias drift in ICU environments. In practice, this requires AI commercial entities to provide post-marketing surveillance plans and to report serious incidents within a predefined time window (15 days or less) [18]. Companies should also maintain this monitoring as their AI systems evolve over time. The implementation of these surveillance systems should include standardized monitoring protocols, incident reporting tools embedded within clinical workflows, participation in performance registries, and regular audits. These mechanisms are overseen by national Market Surveillance Authorities (MSAs), supported by EU-wide guidance and upcoming templates to ensure consistent and enforceable oversight of clinical AI systems.
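
As a minimal illustration of what such continuous performance monitoring could look like, the sketch below tracks a model's discrimination (AUROC) over simulated monthly batches and flags degradation against a baseline; the data, threshold, and window are illustrative assumptions rather than a regulatory standard.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)

def monthly_batch(drift):
    """Simulate one month of outcomes and scores; `drift` degrades the model."""
    y = rng.integers(0, 2, 300)
    scores = np.clip(y * (0.7 - drift) + rng.normal(0.15, 0.25, 300), 0, 1)
    return y, scores

baseline_auc, alert_margin = 0.85, 0.05
for month in range(12):
    y, scores = monthly_batch(drift=0.04 * month)  # gradual dataset shift
    auc = roc_auc_score(y, scores)
    status = "ALERT: investigate/retrain" if auc < baseline_auc - alert_margin else "ok"
    print(f"month {month:2d}: AUROC={auc:.3f}  {status}")
```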

Require adequate regulations for AI deployment in clinical practice

Deploying AI within complex clinical environments like the ICU, acute wards, or even regular wards presents a considerable challenge [86].

We underline three aspects of adequate regulation. First, a rigorous regulatory process is needed to evaluate the safety and efficacy of AI products before clinical application. Second, continuous post-market evaluation should be mandatory and conducted as for other types of medical devices [18].

The third important aspect is liability: identifying who should be held accountable if an AI decision, or a human decision based on AI, leads to harm. This relates to the need for adequate insurance policies. We urge regulatory bodies in each country to provide regulations on these issues, which are fundamental for AI diffusion.

We also recommend that both patients and clinicians ask regulatory bodies in each country to update current legislation and regulatory pathways, including clear rules for insurance policies, to anticipate and reduce the risk of litigation.



Advance, which develops artificial intelligence (AI) technology specializing in commerce, announced ..

Advance, which develops artificial intelligence (AI) technology specializing in commerce, announced AI commerce solution “CommerceOS” at DevCon 3 of Palantir Technologies, a global AI platform company, held last month.

DevCon, held by Palantir, is a developer conference where developers and Palantir partners gather to share the latest technologies.

Founded in 2021, startup Ens is a company that provides commerce automation solutions using a large action model (LAM), which performs real actions with AI.

Unlike a large language model (LLM), which specializes in language generation, a LAM is characterized by performing real tasks directly, judging for itself and executing actions that can improve sales and operating profit.

By utilizing AI models specialized in commerce and retail, InS supports the entire process of data collection and refinement, AI inference and analysis, and automated execution.

The “CommerceOS” announced at DevCon 3 is a solution specialized for the retail and e-commerce markets, where AI optimizes inventory and sets prices while monitoring sales of competitors’ products in real time on online marketplaces.

In May, Ince, which was selected for Palantir’s ‘Startup Fellowship’, demonstrated the practicality and scalability of its AI agent technology in the commerce area through a demonstration of ‘CommerceOS’, an AI-based commerce solution built in cooperation with Palantir, at DevCon.

“Through the announcement of DevCon 3, we have introduced our technical capabilities as a ‘Palantir in the commerce world’ to the world,” said Lee Seung-hyun, CEO of Ins. “We will spread cases of global business use in the commerce sector and open the era of AI agents in earnest.”




Amadeus announces Demand360® and MeetingBroker® to be enhanced with artificial intelligence



Amadeus has partnered with Microsoft and is leveraging OpenAI’s models on Azure to develop a suite of AI integrations that enhance its Hospitality portfolio. The two latest AI tools will give hoteliers of any background easy access to industry-leading insights and dramatically improve the efficiency of group bookings.

Amadeus Advisor chat is coming to Demand360: Making sophisticated insights instantly available

To help hoteliers stay agile and respond quickly to the fast-changing travel industry, Amadeus is integrating Advisor Chat, its Gen AI chatbot, into its industry-leading Demand360 data product. Powered by Azure OpenAI, Advisor Chat offers immediate and intuitive access to crucial insights for teams across various functions, including sales, operations, marketing, and distribution.

Demand360 currently captures the most comprehensive view of the hospitality market to inform hotel strategies. Based on insights from 44,000 hotels and 35 million short-term rental properties, Demand360 provides a 12-month, forward-looking view of a hotel’s occupancy and its market ranking as well as two years of retrospective data.

Amadeus Advisor chat was rolled out to Amadeus Agency360® in 2024. In the year since, customers have enjoyed instantaneous insights. In some cases, Amadeus Advisor has saved analysts approximately a day each week, as the bulk of requests can now be handled directly by the wider team. Amadeus plans to make Advisor available within Microsoft Teams, making it easier than ever to understand performance and make informed decisions.

Transforming group sales with AI: Email to RFP

Amadeus is introducing new AI functionality, Email to RFP, within MeetingBroker to help hotels streamline the handling of inbound group booking requests, a valuable, growing segment of the market.

With Email to RFP, customers will be able to email inbound RFPs directly to MeetingBroker, where AI is then used to evaluate them and create an instant RFP response. To provide accurate, up-to-date information specific to each location, Email to RFP will be trained to retrieve additional relevant information from reliable sources. Email to RFP is powered by Azure OpenAI.

Omni Atlanta Hotel, the first pilot customer, has seen significant returns with faster responses and near autonomous RFP handling.

This builds on the current functionalities of Amadeus MeetingBroker, a centralized hub for managing all group inquiries, no matter how or where they originate. By consolidating leads into a single workflow, MeetingBroker helps hotel sales teams respond faster, reduce missed opportunities, and convert more business.

Amadeus plans to introduce individual AI agents for each of its products, helping travel companies to gain more value by answering queries more easily and more quickly. Amadeus is also working to develop AI agents that will draw on multiple sources when responding to queries, unlocking new levels of insight from across Amadeus’ portfolio.

“As an industry, we’re at an important juncture where the next year of AI development and implementation will shape decades of travel and hospitality. It’s becoming increasingly clear that AI is here to make sense of complexity and support productivity in order to enhance efficiency, return on investment and ultimately increase conversions,” says Francisco Pérez-Lozao Rüter, President of Hospitality, Amadeus.




Lehigh University Professor Awarded NSF Grant to Advance AI Literacy



BETHLEHEM, PA — Dr. Juan Zheng, assistant professor in the Teaching, Learning, and Technology program at Lehigh University’s College of Education, has been awarded a grant from the National Science Foundation to support her groundbreaking project, “Meta-Partner: Hybrid Intelligence for Self-Regulated Learning.”

Over the next two years, Dr. Zheng and her research team will develop Meta-Partner, an artificial intelligence (AI) system designed to help students set learning goals, adjust strategies, monitor progress, and reflect on their educational journeys—all while building critical AI literacy and self-regulation skills.

The project addresses a pressing national need: preparing a diverse and inclusive workforce for the rapidly evolving AI-driven future. “Millions are already using AI, but few people know how to use it in an informed and strategic way,” Dr. Zheng explained. “Our goal is to teach students not just the concepts of AI, but how to approach problems, think critically, and regulate their learning, skills that are essential for success in any field that will use AI as a tool.”

Meta-Partner will be integrated into AIResolver, an existing online problem-based learning platform. The system will guide students through complex problem-solving scenarios, such as designing classification systems for scientific research, by providing real-time, personalized support. As students interact with the platform, Meta-Partner will generate initial learning goals, create automated notes, visualize progress, and compose reflections, all of which students can review and refine. This iterative, human-AI collaboration is designed to deepen metacognitive engagement and foster independent learning.

The research will focus on high school and undergraduate students from non-computer science backgrounds, particularly in rural areas, to ensure the benefits of AI education reach underserved communities. Through a robust evaluation involving both quantitative and qualitative methods, the project will examine how Meta-Partner impacts students’ cognitive, motivational, and emotional engagement with AI problem-solving.

“We believe that by making AI education more accessible and engaging, we can help bridge the digital divide and empower students who might otherwise be left behind,” said Dr. Zheng.

Beyond teaching the technical concepts of AI, the project aims to equip students with software skills, critical thinking abilities, and the self-regulation strategies needed to thrive in a workforce where AI is ubiquitous. Dr. Zheng emphasized the importance of learning to use AI strategically and responsibly: “Just as the internet and online learning brought both opportunities and risks, AI will reshape how we learn and work. Our research will help students navigate these changes and use AI as a partner in their learning.”

Meta-Partner’s open-source design ensures that its impact will extend far beyond the initial study, allowing other educational institutions and platforms to adopt and adapt the technology. By pioneering the integration of hybrid intelligence into self-regulated learning, Dr. Zheng’s work has the potential to transform AI education practices and prepare the next generation for a future where human and artificial intelligence work in tandem.

ABOUT LEHIGH UNIVERSITY COLLEGE OF EDUCATION

Lehigh’s College of Education offers premier graduate-level programs focused on high-impact research, interdisciplinary applications, evidence-based practices, and partnerships at the local, national, and international level.

 
