UCLA’s AI Researchers Turn Fragmented EHR Data into ‘Pseudo-notes’
UCLA researchers have developed an AI system that turns fragmented structured data from the EHR into readable narratives, allowing AI systems to make sense of complex patient histories and use these narratives to perform clinical decision support.
In npj Digital Medicine, the researchers describe how the Multimodal Embedding Model for EHR (MEME) transforms tabular health data into “pseudo-notes” that mirror clinical documentation, allowing AI models designed for text to analyze patient information more effectively.
As the use of AI models grows rapidly, a mismatch has emerged: large language models work with text, while hospital data is often stored in complex tables of numbers, codes, and categories. Emergency departments, where quick decisions can be critical, particularly need tools that can rapidly process comprehensive patient histories to predict outcomes and guide treatment decisions, the UCLA researchers noted.
The researchers created an approach that converts tabular EHR data into text-based “pseudo-notes” using medical documentation shortcuts commonly used by healthcare providers.
The system breaks patient data into concept-specific blocks (medications, triage vitals, diagnostics, etc.), transforms each block into text using simple templates, and then encodes each block separately using language models, essentially emulating a form of medical reasoning.
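To make the template step concrete, here is a minimal sketch of how tabular fields for a single visit could be rendered as concept-specific pseudo-note blocks. The field names, templates, and example values are illustrative assumptions, not the published MEME implementation.

```python
# Illustrative sketch only: turning one visit's tabular fields into
# concept-specific "pseudo-note" text blocks. Field names and templates
# are hypothetical, not taken from the MEME codebase.

def make_pseudo_notes(visit: dict) -> dict:
    """Return a dict mapping each clinical concept to a short narrative block."""
    notes = {}

    # Triage vitals rendered with common clinical shorthand (HR, BP, SpO2).
    vitals = visit.get("triage_vitals", {})
    if vitals:
        notes["triage"] = (
            f"Triage vitals: HR {vitals.get('heart_rate', 'NA')} bpm, "
            f"BP {vitals.get('sbp', 'NA')}/{vitals.get('dbp', 'NA')} mmHg, "
            f"temp {vitals.get('temp_c', 'NA')} C, SpO2 {vitals.get('spo2', 'NA')}%."
        )

    # Medications listed as a single narrative sentence.
    meds = visit.get("medications", [])
    if meds:
        notes["medications"] = "Medications administered: " + ", ".join(meds) + "."

    # Diagnoses listed as their text descriptions.
    diagnoses = visit.get("diagnoses", [])
    if diagnoses:
        notes["diagnostics"] = "Recorded diagnoses: " + "; ".join(diagnoses) + "."

    return notes


visit = {
    "triage_vitals": {"heart_rate": 112, "sbp": 94, "dbp": 61, "temp_c": 38.4, "spo2": 93},
    "medications": ["ceftriaxone 1 g IV", "normal saline 1 L bolus"],
    "diagnoses": ["sepsis, unspecified organism"],
}

for concept, text in make_pseudo_notes(visit).items():
    print(f"[{concept}] {text}")
```

Each block reads like a fragment of clinical documentation, which is what allows an off-the-shelf text model to consume it directly.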
“This approach circumvents the need for explicit concept harmonization while serving as a natural language interface between structured EHR data and large language models,” the study says.
This study was conducted retrospectively on datasets collected from Beth Israel Deaconess Medical Center in Boston and the UCLA Health system in Los Angeles.
The team tested their system against traditional machine learning methods, specialized healthcare AI models, and prompting-based approaches using real emergency department prediction tasks.
In a study of 400,019 emergency department visits, MEME successfully predicted emergency department disposition, discharge location, intensive care requirement, and mortality.
Across more than 1.3 million emergency department visits from the Medical Information Mart for Intensive Care (MIMIC) database and the UCLA datasets, it consistently outperformed existing approaches on multiple decision support tasks, the researchers found.
The multimodal text approach, which processes different components of the health record separately, achieved better results than combining all of the information into a single representation. The system outperformed traditional machine learning techniques, EHR foundation models such as CLMBR, clinical language models such as Clinical Longformer, and standard prompting methods. The approach also transferred well across different hospital systems and coding standards.
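As a rough illustration of what encoding each concept separately and then fusing the results could look like, the sketch below embeds each pseudo-note block with an off-the-shelf text encoder and concatenates the embeddings into a single visit representation. The encoder choice and concatenation-based fusion are assumptions for illustration, not the study's actual architecture.

```python
# Illustrative sketch, not the authors' code: embed each concept block
# separately, then fuse the per-concept embeddings into one visit vector
# that a downstream disposition / ICU / mortality classifier could use.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed encoder choice

encoder = SentenceTransformer("all-MiniLM-L6-v2")

concept_blocks = {
    "triage": "Triage vitals: HR 112 bpm, BP 94/61 mmHg, temp 38.4 C, SpO2 93%.",
    "medications": "Medications administered: ceftriaxone 1 g IV, normal saline 1 L bolus.",
    "diagnostics": "Recorded diagnoses: sepsis, unspecified organism.",
}

# Encode each concept block on its own ("multimodal" in MEME's sense) ...
per_concept = {name: encoder.encode(text) for name, text in concept_blocks.items()}

# ... then fuse by concatenation into a single visit-level representation.
fused = np.concatenate([per_concept[name] for name in sorted(per_concept)])
print(fused.shape)  # (1152,) here: three 384-dimensional concept embeddings
```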
“This bridges a critical gap between the most powerful AI models available today and the complex reality of healthcare data,” said Simon Lee, a Ph.D. student in computational medicine at UCLA, in a statement. “By converting hospital records into a format that advanced language models can understand, we’re unlocking capabilities that were previously inaccessible to healthcare providers. The fact that this approach is more portable and adaptable than existing healthcare AI systems could make it particularly valuable for institutions working with different data standards.”
Next steps
The research team plans to test MEME’s effectiveness in other clinical settings beyond emergency departments to validate its broader applicability. They also aim to address limitations observed in cross-site model generalizability, working to ensure the system performs consistently across different healthcare institutions.
Future work will focus on extending the approach to accommodate new medical concepts and evolving healthcare data standards, potentially making advanced AI more accessible to healthcare systems.