AI Integration with Epic EHR: Promise and Practicalities
The integration of artificial intelligence into healthcare environments represents one of the most significant technological shifts in the industry today. For organizations running Epic, which powers the electronic health records of approximately 250 million patients across major health systems, the question is no longer if AI will transform their operations, but how and when. As healthcare IT leaders navigate this landscape, they face complex decisions balancing technical feasibility, clinical utility, and operational sustainability.
Understanding the Inflection Point
Healthcare organizations find themselves at a critical inflection point. The maturation of AI technologies coincides with increasing demands on health systems to improve clinical outcomes, operational efficiency, and patient experience. Epic environments, which traditionally focused on stability and reliability above all else, must now accommodate emerging AI capabilities without compromising their core functions.
This transition introduces unprecedented complexity. Epic systems were designed as comprehensive but largely self-contained ecosystems. Now, they must interface with AI technologies that may reside in different computing environments, rely on different data models, and operate according to different processing paradigms.
The Infrastructure Imperative
Perhaps the most immediate challenge organizations face involves infrastructure requirements. Epic systems already demand significant computational resources, with recent versions requiring markedly more compute than earlier releases. Adding AI functionality compounds these demands substantially.
Consider the infrastructure implications: machine learning models, particularly those analyzing medical imaging or unstructured clinical notes, require specialized hardware configurations. Organizations must determine whether to expand their existing on-premises infrastructure or develop hybrid architectures that extend into public cloud environments.
This decision carries significant financial implications. Health systems have already invested millions in Epic infrastructure and continue to allocate substantial operational budgets to maintain these environments. Implementing AI may require additional capital expenditures, revisions to refresh cycles, and new staffing expertise.
The shift toward AMD processors in some Epic environments further complicates planning. Healthcare organizations must now balance processor architecture decisions with their AI implementation roadmap, determining whether traditional CPU-centric environments will suffice or specialized GPU resources become necessary as AI workloads increase.
Data Governance Foundations
Beyond infrastructure considerations, data governance represents a foundational element of AI integration with Epic. Successful AI implementations require not just access to data, but consistent, controlled access to high-quality clinical information that maintains patient privacy while enabling analytic insights.
Health systems must establish comprehensive data governance frameworks that address:
- Data quality standards for AI training and operation
- Policies controlling which data elements can be processed by AI systems
- Mechanisms to ensure AI outputs remain traceable to source data
- Processes to identify and mitigate algorithmic bias
- Procedures for managing data provenance across systems
These governance frameworks must function within existing regulatory constraints, including HIPAA and emerging AI-specific regulations, while maintaining operational flexibility. The governance challenge extends beyond technical implementation to include clinical and administrative stakeholders who must understand how patient data flows through AI systems.
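To make one such control concrete, the sketch below shows a field-level allowlist applied before clinical data reaches an AI service. This is a minimal illustration assuming hypothetical field names and policy contents, not Epic's actual data model.

```python
# Minimal sketch of a governance gate; field names and policy are
# hypothetical illustrations, not Epic's data model.
APPROVED_FIELDS = {"encounter_id", "note_text", "vitals", "problem_list"}
PHI_FIELDS = {"patient_name", "mrn", "ssn", "address"}

def apply_ai_data_policy(record: dict) -> dict:
    """Return only the fields policy permits an AI service to process."""
    withheld = PHI_FIELDS & record.keys()
    if withheld:
        # Stand-in for audit logging, supporting traceability requirements.
        print(f"Withheld from AI processing: {sorted(withheld)}")
    return {k: v for k, v in record.items() if k in APPROVED_FIELDS}

# The AI service sees note text and vitals, never the MRN.
redacted = apply_ai_data_policy({
    "mrn": "12345",
    "note_text": "Patient reports improvement...",
    "vitals": {"bp": "120/80"},
})
```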
Interoperability Challenges
Interoperability represents another critical consideration. Epic has made significant strides in supporting standards like FHIR (Fast Healthcare Interoperability Resources), but AI integration introduces new interfaces that must be carefully designed and maintained.
Healthcare organizations must determine how AI systems will access Epic data and how AI-generated insights will flow back into clinical workflows. Options include leveraging Epic’s APIs, implementing dedicated integration services, or utilizing third-party middleware designed specifically for healthcare AI implementations.
Each approach presents distinct advantages and limitations regarding real-time access, data transformation capabilities, and long-term sustainability. Organizations that have invested heavily in Epic extension capabilities may prefer native integration approaches, while those with broader technology portfolios might implement integration platforms that serve multiple systems beyond Epic.
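As an illustration of the API route, here is a minimal sketch of pulling laboratory observations over a FHIR R4 endpoint, the standard Epic supports. The base URL and token handling are placeholders; real Epic integrations obtain access through the organization's configured OAuth scopes.

```python
import requests

# Hypothetical FHIR R4 base URL and token; real endpoints and scopes come
# from the organization's Epic configuration.
FHIR_BASE = "https://fhir.example-health.org/api/FHIR/R4"
ACCESS_TOKEN = "..."  # obtained via SMART on FHIR / OAuth 2.0, not shown here

def fetch_lab_observations(patient_id: str) -> list[dict]:
    """Pull a patient's laboratory Observations for downstream AI use."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "category": "laboratory"},
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Accept": "application/fhir+json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()  # a FHIR Bundle resource
    return [entry["resource"] for entry in bundle.get("entry", [])]
```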
Strategic Pathways
Healthcare IT leaders face three primary strategic pathways when integrating AI with Epic environments:
- Epic-native AI capabilities – Leveraging functionalities developed by Epic itself, which offer tight integration but may lag specialized solutions in capability
- Hyperscale cloud provider partnerships – Implementing AI services from major cloud providers, which offer advanced capabilities but require careful integration planning
- Custom AI development – Building organization-specific AI solutions tailored to particular clinical or operational needs, which can address unique requirements but demands specialized expertise
Most organizations will ultimately pursue a hybrid approach, selecting different strategies for different use cases based on clinical priority, technical complexity, and resource availability. Strategic success requires continual alignment between technical and clinical leadership to ensure AI capabilities address genuine organizational needs rather than pursuing technology for its own sake.
Clinical Adoption and Workflow Integration
Even technically successful AI implementations fail without meaningful clinical adoption. Healthcare organizations must carefully consider how AI-generated insights appear within Epic workflows, ensuring they enhance rather than disrupt clinical processes.
AI capabilities should augment clinical judgment rather than attempting to replace it, providing decision support that fits naturally within established workflows. This requires careful attention to user interface design, alert fatigue mitigation, and transparency regarding how AI generates its recommendations.
Organizations should implement structured feedback mechanisms allowing clinicians to report AI performance issues, creating a continuous improvement cycle that enhances both the technical performance and clinical utility of these systems.
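A hedged sketch of what such a feedback record and triage step might look like, with illustrative fields and categories that are not drawn from any vendor schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative clinician-feedback record; fields and categories are
# assumptions, not an Epic or vendor schema.
@dataclass
class AIFeedback:
    model_id: str       # which AI capability the feedback concerns
    clinician_id: str
    encounter_id: str
    category: str       # e.g. "incorrect", "not_actionable", "alert_fatigue"
    comment: str = ""
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def triage(fb: AIFeedback) -> str:
    """Route safety-relevant reports for immediate review; batch the rest."""
    return "urgent-review" if fb.category == "incorrect" else "weekly-batch"
```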
Security Implications
AI integration introduces new security considerations for Epic environments. Organizations must evaluate how AI systems impact their security posture, particularly when these systems cross traditional infrastructure boundaries.
Key security considerations include:
- Authentication mechanisms between Epic and AI systems
- Data encryption requirements during processing
- Vulnerability management across expanded technology surfaces
- Monitoring requirements for AI-specific threats
- Incident response procedures for AI-related security events
Security planning must address not just traditional threats but emerging concerns specific to AI, such as model poisoning attacks or adversarial inputs designed to manipulate AI outputs.
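On the first of those considerations, authentication between systems, the sketch below shows a generic OAuth 2.0 client-credentials exchange that an integration service might use to call an AI service. Endpoints, client identifiers, and scopes are placeholders; Epic's own backend-services flow additionally relies on signed JWT client assertions rather than a shared secret.

```python
import requests

# Generic OAuth 2.0 client-credentials exchange; URL and scopes are
# placeholders, not Epic-specific values.
TOKEN_URL = "https://auth.example-health.org/oauth2/token"

def get_service_token(client_id: str, client_secret: str) -> str:
    """Obtain a bearer token for service-to-service calls to an AI endpoint."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "scope": "ai-inference.read ai-inference.write",
        },
        auth=(client_id, client_secret),  # HTTP Basic client authentication
        timeout=15,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```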
Measuring Success
Ultimately, healthcare organizations must establish clear metrics for evaluating AI integration success. These metrics should span technical performance, clinical outcomes, and financial impact, creating a comprehensive view of implementation effectiveness.
Rather than pursuing AI adoption as an end in itself, organizations should identify specific problems AI can meaningfully address, establish baseline measurements, implement targeted solutions, and rigorously assess outcomes. This measured approach ensures AI investments deliver tangible benefits rather than merely introducing additional complexity.
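A toy example of that baseline discipline, with invented numbers, might look like this:

```python
# Toy before/after comparison against a pre-AI baseline; metrics and numbers
# are invented for illustration, not results from any deployment.
BASELINE = {"report_turnaround_hrs": 18.0, "alert_override_rate": 0.42}
POST_AI = {"report_turnaround_hrs": 11.5, "alert_override_rate": 0.45}

def relative_change(metric: str) -> float:
    """Fractional change since baseline; negative means the value decreased."""
    return (POST_AI[metric] - BASELINE[metric]) / BASELINE[metric]

for metric in BASELINE:
    print(f"{metric}: {relative_change(metric):+.1%}")
# Turnaround improved (about -36%), but the override rate worsened slightly,
# flagging possible alert fatigue despite the headline gain.
```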
As AI in healthcare transitions from experimental to essential, organizations running Epic must develop coherent implementation roadmaps that balance innovation with the fundamental reliability requirements of clinical systems. Those that successfully navigate this transition will position themselves to deliver higher quality care while managing operational costs more effectively.
About Mike Hale
Mike Hale is a Principal Solutions Engineer at EchoStor, where he leads the company’s healthcare initiatives. He has nearly 20 years of executive leadership experience in the health technology sector.
Pope: AI development must build bridges of dialogue and promote fraternity
In a message to the United Nations’ AI for Good Summit in Geneva, signed by the Cardinal Secretary of State, Pietro Parolin, Pope Leo XIV encourages nations to create frameworks and regulations that serve the common good.
By Isabella H. de Carvalho
Pope Leo XIV encouraged nations to establish frameworks and regulations on AI so that it can be developed and used according to the common good, in a message sent on July 10 to the participants of the AI for Good Summit, taking place in Geneva, Switzerland, from July 8 to 11.
“I would like to take this opportunity to encourage you to seek ethical clarity and to establish a coordinated local and global governance of AI, based on the shared recognition of the inherent dignity and fundamental freedoms of the human person”, the message, signed by the Secretary of State, Cardinal Pietro Parolin, said.
The summit is organized by the United Nations’ International Telecommunication Union (ITU) and is co-hosted by the Swiss government. The event brings together governments, tech leaders, academics, and others who work with or take an interest in AI.
In this “era of profound innovation” where many are reflecting on “what it means to be human”, the world “is at crossroads, facing the immense potential generated by the digital revolution driven by Artificial Intelligence”, the Pope highlighted in his message.
AI requires ethical management and regulatory frameworks
“As AI becomes capable of adapting autonomously to many situations by making purely technical algorithmic choices, it is crucial to consider its anthropological and ethical implications, the values at stake and the duties and regulatory frameworks required to uphold those values”, the Pope underlined in his message.
He emphasized that the “responsibility for the ethical use of AI systems begins with those who develop, manage and oversee them” but users also need to share this mission. AI “requires proper ethical management and regulatory frameworks centered on the human person, and which goes beyond the mere criteria of utility or efficiency,” the Pope insisted.
Building peaceful societies
Citing St. Augustine’s concept of the “tranquility of order”, Pope Leo highlighted that this should be the common goal, and thus AI should foster a “more human order of social relations” and “peaceful and just societies in the service of integral human development and the good of the human family”.
While AI can simulate human reasoning and perform tasks quickly and efficiently or transform areas such as “education, work, art, healthcare, governance, the military, and communication”, “it cannot replicate moral discernment or the ability to form genuine relationships”, Pope Leo warned.
For him, the development of this technology “must go hand in hand with respect for human and social values, the capacity to judge with a clear conscience, and growth in human responsibility”. It requires “discernment to ensure that AI is developed and utilized for the common good, building bridges of dialogue and fostering fraternity”, the Pope urged. AI needs to serve “the interests of humanity as a whole”.
AI slows down some experienced software developers, study finds
By Anna Tong
SAN FRANCISCO (Reuters) - Contrary to popular belief, using cutting-edge artificial intelligence tools slowed down experienced software developers when they were working in codebases familiar to them, rather than supercharging their work, a new study found.
AI research nonprofit METR conducted the in-depth study on a group of seasoned developers earlier this year while they used Cursor, a popular AI coding assistant, to help them complete tasks in open-source projects they were familiar with.
Before the study, the open-source developers believed using AI would speed them up, estimating it would decrease task completion time by 24%. Even after completing the tasks with AI, the developers believed that they had decreased task times by 20%. But the study found that using AI did the opposite: it increased task completion time by 19%.
The study’s lead authors, Joel Becker and Nate Rush, said they were shocked by the results: prior to the study, Rush had written down that he expected “a 2x speed up, somewhat obviously.”
The findings challenge the belief that AI always makes expensive human engineers much more productive, a factor that has attracted substantial investment into companies selling AI products to aid software development.
AI is also expected to replace entry-level coding positions. Dario Amodei, CEO of Anthropic, recently told Axios that AI could wipe out half of all entry-level white collar jobs in the next one to five years.
Prior literature on productivity improvements has found significant gains: one study found that using AI sped up coders by 56%; another found that developers were able to complete 26% more tasks in a given time.
But the new METR study shows that those gains don’t apply to all software development scenarios. In particular, this study showed that experienced developers intimately familiar with the quirks and requirements of large, established open source codebases experienced a slowdown.
Other studies often rely on software development benchmarks for AI, which sometimes misrepresent real-world tasks, the study’s authors said.
The slowdown stemmed from developers needing to spend time going over and correcting what the AI models suggested.
“When we watched the videos, we found that the AIs made some suggestions about their work, and the suggestions were often directionally correct, but not exactly what’s needed,” Becker said.
The authors cautioned that they do not expect the slowdown to apply in other scenarios, such as for junior engineers or engineers working in codebases they aren’t familiar with.
Still, the majority of the study’s participants, as well as the study’s authors, continue to use Cursor today. The authors believe it is because AI makes the development experience easier, and in turn, more pleasant, akin to editing an essay instead of staring at a blank page.
“Developers have goals other than completing the task as soon as possible,” Becker said. “So they’re going with this less effortful route.”
(Reporting by Anna Tong in San Francisco; Editing by Sonali Paul)
Persona-Driven AI for Brand Engagement and Audience Research
Earlier this year (2025), OpenAI’s GPT-4.5 achieved a groundbreaking feat: In controlled Turing Test scenarios, it was mistaken for a human 73% of the time when adopting a carefully crafted persona.
That isn’t merely a technological milestone. It’s a paradigm shift in how brands can and should use AI to boost engagement.
As AI transitions from a behind-the-scenes utility to a front-facing conversational partner, marketers must recognize its potential to redefine short-form, high-touch digital interactions.
The Blurred Line: What Happens When AI Feels Human
Persona-driven AI, curated to embody distinct tones, styles, and even values, is already in the wild.
Think customer support agents that mirror Gen Z slang. Think chatbots with the warmth and wit of lifestyle influencers. Think AI-driven avatars hosting livestreams, fielding DMs, or acting as extensions of a brand’s personality.
Those aren’t hypothetical cases. They’re live, and they’re running at scale.
In fact, one study found that 58% of US consumers were already following virtual influencers, a sign that the public is increasingly comfortable engaging with AI-driven agents in personal, even emotional ways.
That human-feel approach makes AI more engaging and more effective, but it also muddies the water: When do audiences believe they’re talking to a real person? Does it matter? And if AI can convincingly “be” a person online, what does authenticity even mean in marketing anymore?
The Power of Persona-Driven AI as a Research Tool
Ironically, one of the best uses of human-like AI isn’t outward-facing at all.
Persona-driven AI can serve as a powerful research tool, offering marketers a dynamic, low-risk sandbox for testing messages, concepts, and campaigns. These AI personas can be designed to represent diverse communities, even mirroring the demographic, cultural, or behavioral traits of specific populations, such as those within a particular country.
By enabling these personas to interact with one another, marketers can simulate complex social dynamics, uncover nuanced reactions, and explore how ideas resonate across different segments without the ethical risks or costs of real-world experimentation.
Companies, including Social Trait, have built and trained persona-driven AI agents to simulate diverse consumer segments for precisely this purpose: real-time insight generation and campaign validation.
Need to understand how different segments might respond to a sensitive campaign? Or how a new product’s tone lands with Gen Z vs. Boomers? A simulated audience can help refine messaging before a single post goes live.
This approach isn’t replacing focus groups or gut instinct. It’s augmenting them. It’s like having a hyper-intelligent sounding board that lets you test, iterate, and learn fast.
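For illustration, here is a minimal sketch of that kind of simulated sounding board built on a general-purpose chat API. The personas, prompts, and model name are assumptions for the example, not any vendor's actual product.

```python
from openai import OpenAI  # assumes the openai Python SDK; any chat LLM API would do

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Two illustrative personas; traits and model name are assumptions.
PERSONAS = {
    "gen_z_urban": "You are a 22-year-old city dweller, ironic, fluent in internet slang.",
    "boomer_suburban": "You are a 63-year-old suburban retiree, practical and brand-loyal.",
}

def simulate_reactions(campaign_copy: str) -> dict[str, str]:
    """Ask each persona for a gut reaction to a draft campaign message."""
    reactions = {}
    for name, persona in PERSONAS.items():
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": persona},
                {"role": "user", "content": f"React honestly to this ad copy:\n{campaign_copy}"},
            ],
        )
        reactions[name] = resp.choices[0].message.content
    return reactions
```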
Used in this way, AI becomes less about automating engagement and more about deepening it.
Authenticity at Scale: Redefining Brand Voice With AI
Let’s bust a myth: AI needn’t dilute brand voice; it can actually help define and sharpen it.
Persona-driven AI can simulate reactions from different demographics, emotional states, and cultural contexts—allowing marketers to practice empathetic listening at scale.
And it’s not just about saying the right thing to an audience; it’s about anticipating how it will be felt on the other end.
By creating dynamic communities of AI personas modeled after real populations, whether they reflect a nation’s cultural attitudes, a niche subculture, or a targeted customer segment, brands can engage in real-time, low-risk dialogue with their audiences. And AI personas don’t just respond; they interact with one another, revealing emergent behavior, social influence patterns, and emotional nuance that traditional testing often misses.
That’s why more than half of marketers are already using generative AI, according to a recent Salesforce survey. They’re not just automating. They’re enhancing strategy, empathy, and creativity at scale.
The result? Brand voices that are more nuanced, inclusive, and resonant.
Responsible Engagement and Guardrails for AI in Marketing
The temptation to use human-like AI to smooth over friction, boost interaction, and even increase conversions is strong. But with power comes responsibility.
Authenticity must remain a guiding principle. That begins with transparency: Audiences deserve to know when they’re engaging with AI. Deceptive use of AI, even unintentionally, breaks trust—which, once lost, is hard to regain.
Ethical engagement also means resisting manipulative design. If a bot sounds like your friend, it shouldn’t be weaponized to upsell or nudge behavior without consent. Persona-driven AI should reflect brand values, not obscure them.
Marketers must also acknowledge public sentiment: 55% of people say they would be more eager to use AI applications if they felt more human, not necessarily if they looked more human. This nuance matters. Feeling seen and heard drives engagement far more than superficial mimicry.
We need a new set of standards for AI in marketing: ones that measure success not just in clicks and dwell time, but in trust, clarity, and long-term brand health.
To keep AI a force for good in marketing, we must build it with intention, guided by a set of guardrails that ensure long-term trust and brand integrity:
- Always disclose when AI is being used in real-time engagement.
- Design AI agents that reflect your brand’s values, not just what performs well.
- Avoid stereotypes in persona development and training data.
- Keep humans in the loop, especially in sensitive or high-stakes conversations.
- Audit for bias continuously, not just at launch.
- Understand the boundaries of responsible use, with a clear commitment to being insightful, not manipulative.
Ethical AI isn’t a feature. It’s a framework—which we must build now, not retroactively.
AI as a Bridge, Not a Barrier
We’re entering a new era. Not AI vs. human, but AI with human.
Marketers stand at the helm of this transformation. We have the tools to shape AI that engages, but also the responsibility to ensure it does so honestly.
When used with care, persona-driven AI isn’t a shortcut to connection. It’s a way to deepen it.
As the new stewards of AI-human interaction, let’s lead with intention and build a future where technology amplifies what’s best about being human.
More Resources on AI Use Cases in Marketing
How Market Researchers Are Using Generative AI
Using AI to Build Your Personas: Don’t Lose Sight of Your Real-World Buyers
Five Use Cases for AI in B2B Marketing (Beyond Content Generation)
Navigating AI Adoption and Use in Marketing: A Strategic Approach