AI Research
What Do Digital Health Leaders Think of Trump’s New AI Action Plan?

The White House released “America’s AI Action Plan” last week, which outlines various federal policy recommendations designed to advance the nation’s status as a leader in international AI diplomacy and security. The plan seeks to cement American AI dominance mainly through deregulation, the expansion of AI infrastructure and a “try-first” culture.
Here are some measures included in the plan:
- Deregulation: The plan calls for rolling back state and local rules that hinder AI development, and federal funding may be withheld from states with restrictive AI regulations.
- Innovation: The proposal seeks to establish government-run regulatory sandboxes, which are safe environments in which companies can test new technologies.
- Infrastructure: The plan calls for a rapid buildout of the country’s AI infrastructure and offers companies tax incentives to build it. It also includes fast-tracking permits for data centers and expanding the power grid.
- Data: The plan seeks to create industry-specific data usage guidelines to accelerate AI deployment in critical sectors like healthcare, agriculture and energy.
Leaders in the healthcare AI space are cautiously optimistic about the action plan’s pro-innovation stance, and they’re grateful that it advocates for better AI infrastructure and data exchange standards. However, experts still have some concerns about the plan, such as its lack of focus on AI safety and patient consent, as well as the failure to mention key healthcare regulatory bodies.
Overall, experts believe the plan will end up being a net positive for the advancement of healthcare AI — but they do think it could use some edits.
Deregulation of data centers
Ahmed Elsayyad — CEO of Ostro, which sells AI-powered engagement technology to life sciences companies — views the plan as a generally beneficial move for AI startups. This is mainly due to the plan’s emphasis on deregulating infrastructure like data centers, energy grids and semiconductor capacity, he said.
Training and running AI models requires enormous amounts of computing power, which translates into high energy consumption, and some states are trying to address that rising consumption.
Local governments and communities have considered regulating data center buildouts due to concerns about the strain on power grids and the environmental impact — but the White House’s AI action plan aims to eliminate these regulatory barriers, Elsayyad noted.
No details on AI safety
However, Elsayyad is concerned about the plan’s lack of attention to AI safety.
He expected the plan to have a greater emphasis on AI safety because it’s a major priority within the AI research community, with leading companies like OpenAI and Anthropic dedicating significant amounts of their computing resources to safety efforts.
“OpenAI famously said that they’re going to allocate 20% of their computational resources for AI safety research,” Elsayyad stated.
He noted that AI safety is a “major talking point” in the digital health community. For instance, responsible AI use is a frequently discussed topic at industry events, and organizations focused on AI safety in healthcare — such as the Coalition for Health AI and Digital Medicine Society — have attracted thousands of members.
Elsayyad said he was surprised that the new federal action plan doesn’t mention AI safety, and he believes incorporating language and funding around it would have made the plan more balanced.
He isn’t alone in noticing that AI safety is conspicuously absent from the White House plan — Adam Farren, CEO of EHR platform Canvas Medical, was also stunned by the lack of attention to AI safety.
“I think that there needs to be a push to require AI solution providers to provide transparent benchmarks and evaluations of the safety of what they are providing on the clinical front lines, and it feels like that was missing from what was released,” Farren declared.
He noted that AI is fundamentally probabilistic and needs continuous evaluation. He argued in favor of mandatory frameworks to assess AI’s safety and accuracy, especially in higher-stakes use cases like medication recommendations and diagnostics.
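To make “continuous evaluation” concrete, here is a minimal sketch of what a recurring safety check could look like. It is purely illustrative and not drawn from the article: the case data, threshold and exact-match scoring are invented assumptions, and a real clinical evaluation framework would be far more rigorous.

```python
from dataclasses import dataclass

@dataclass
class LabeledCase:
    """A clinician-reviewed test case: a model input plus the accepted answer."""
    prompt: str
    expected: str

def evaluate(model_fn, cases, min_accuracy=0.95):
    """Score a model against labeled cases and flag it if accuracy drops.

    Because model behaviour is probabilistic and can drift, a check like
    this is meant to run continuously (e.g., after every model or prompt
    update), not once at launch.
    """
    correct = sum(1 for c in cases if model_fn(c.prompt) == c.expected)
    accuracy = correct / len(cases)
    print(f"accuracy: {accuracy:.1%} on {len(cases)} cases")
    return accuracy >= min_accuracy

# Hypothetical usage with a toy stand-in "model" (all data invented):
cases = [
    LabeledCase("interaction check: warfarin + aspirin", "flag"),
    LabeledCase("interaction check: amoxicillin + acetaminophen", "no flag"),
]
toy_model = lambda prompt: "flag" if "warfarin" in prompt else "no flag"
print("passed" if evaluate(toy_model, cases, min_accuracy=0.9) else "failed")
```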
No mention of the ONC
The action plan also fails to mention the Office of the National Coordinator for Health Information Technology (ONC), despite naming “tons” of other agencies and regulatory bodies, Farren pointed out.
This surprised him, given that the ONC is the primary regulatory body responsible for health IT and providers’ medical records.
“[The ONC] is just not mentioned anywhere. That seems like a miss to me because one of the fastest-growing applications of AI right now in healthcare is the AI scribe. Doctors are using it when they see a patient to transcribe the visit — and it’s fundamentally a software product that should sit underneath the ONC, which has experience regulating these products,” Farren remarked.
Ambient scribes are just one of the many AI tools being integrated into providers’ software systems, he added. For example, providers are adopting AI models to improve clinical decision-making, flag medication errors and streamline coding.
Call for technical standards
Leigh Burchell, chair of the EHR Association and vice president of policy and public affairs at Altera Digital Health, views the plan as largely positive, particularly its focus on innovation and its acknowledgement of the need for technical standards.
Technical data standards, such as those developed by organizations like HL7 and overseen by the National Institute of Standards and Technology (NIST), ensure that healthcare’s software systems can exchange and interpret data consistently and accurately. These standards allow AI tools to integrate more easily with the EHR and to use clinical data in ways that are useful for providers, Burchell said.
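To illustrate what these standards look like in practice, here is a minimal sketch of an HL7 FHIR Observation resource, the kind of standardized structure that lets an EHR and an AI tool agree on what a clinical data point means. The resource shape follows the public FHIR specification, but the patient reference and values below are hypothetical.

```python
import json

# A minimal, illustrative HL7 FHIR "Observation" resource (FHIR R4).
# Shared coding systems (LOINC for the measurement type, UCUM for the
# unit) are what let two systems interpret this record identically.
heart_rate_observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",  # LOINC: standard clinical codes
            "code": "8867-4",
            "display": "Heart rate",
        }]
    },
    "subject": {"reference": "Patient/example"},  # hypothetical patient ID
    "effectiveDateTime": "2025-01-15T09:30:00Z",
    "valueQuantity": {
        "value": 72,
        "unit": "beats/minute",
        "system": "http://unitsofmeasure.org",  # UCUM: standard units
        "code": "/min",
    },
}

# Because both sides speak the same standard, a consuming system can read
# this value without bespoke, per-vendor parsing logic.
print(json.dumps(heart_rate_observation, indent=2))
```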
“We do need standards. Technology in healthcare is complex, and it’s about exchanging information in ways that it can be consumed easily on the other end — and so that it can be acted on. That takes standards,” she declared.
Without standards, AI systems risk miscommunication and poor performance across different settings, Burchell added.
Little regard for patient consent
Burchell also raised concerns that the AI action plan doesn’t adequately address patient consent — particularly whether patients have a say in how their data is used or shared for AI purposes.
“We’ve seen states pass laws about how AI should be regulated. Where should there be transparency? Where should there be information about the training data that was used? Should patients be notified when AI is used in their diagnostic process or in their treatment determination? This doesn’t really address that,” she explained.
On the contrary, the plan suggests that the federal government could, in the future, withhold funds from states whose regulations get in the way of AI innovation, Burchell pointed out.
But without clear federal rules, states must fill the gap with their own AI laws — which creates a fragmented, burdensome landscape, she noted. To solve this problem, she called for a coherent federal framework to provide more consistent guardrails on issues like transparency and patient consent.
While the White House’s AI action plan lays the groundwork for faster innovation, Burchell and other experts agree it must be accompanied by stronger safeguards to ensure the responsible and equitable use of AI in healthcare.
AI Research
Research: Reviewers Split on Generative AI in Peer Review

A new global reviewer survey from IOP Publishing (IOPP) reveals a growing divide in attitudes among reviewers in the physical sciences regarding the use of generative AI in peer review. The study follows a similar survey conducted last year and finds that while some researchers are beginning to embrace AI tools, others remain concerned about their potential negative impact, particularly when AI is used to assess their own work.
Currently, IOPP does not allow the use of AI in peer review, as generative models cannot meet the ethical, legal, and scholarly standards required. However, there is growing recognition of AI’s potential to support, rather than replace, the peer review process.
Key Findings:
- 41% of respondents now believe generative AI will have a positive impact on peer review (up 12 percentage points from 2024), while 37% see it as negative (up two percentage points). Only 22% are neutral or unsure, down from 36% last year, indicating growing polarisation in views.
- 32% of researchers have already used AI tools to support them with their reviews.
- 57% would be unhappy if a reviewer used generative AI to write a peer review report on a manuscript they had co-authored, and 42% would be unhappy if AI were used to augment a peer review report.
- 42% believe they could accurately detect an AI-written peer review report on a manuscript they had co-authored.
Women tend to feel less positive than men about the potential of AI, suggesting a gendered difference in perceptions of its usefulness in peer review. Meanwhile, more junior researchers appear more optimistic about the benefits of AI than their more senior colleagues, who express greater scepticism.
When it comes to reviewer behaviour and expectations, 32% of respondents reported using AI tools to support them during the peer review process in some form. Notably, over half (53%) of those using AI said they apply it in more than one way. The most common use (21%) was editing grammar and improving the flow of text, while 13% said they use AI tools to summarise or digest articles under review, a practice that raises serious concerns around confidentiality and data privacy. A small minority (2%) admitted to uploading entire manuscripts to AI chatbots and asking them to generate a review on their behalf.
“These findings highlight the need for clearer community standards and transparency around the use of generative AI in scholarly publishing. As the technology continues to evolve, so too must the frameworks that support ethical and trustworthy peer review”, said Laura Feetham-Walker, Reviewer Engagement Manager at IOP Publishing and lead author of the study.
“One potential solution is to develop AI tools that are integrated directly into peer review systems, offering support to reviewers and editors without compromising security or research integrity. These tools should be designed to support, rather than replace, human judgment. If implemented effectively, such tools would not only address ethical concerns but also mitigate risks around confidentiality and data privacy, particularly the issue of reviewers uploading manuscripts to third-party generative AI platforms,” adds Feetham-Walker.
AI Research
$3.1 Million Raised To Advance Autonomous Investment Research Platform

Pascal AI Labs, a rapidly growing technology company focused on transforming how investment research is conducted, has announced the close of a $3.1 million seed funding round. The funding was led by Kalaari Capital, with additional participation from Norwest, Infoedge Ventures, Antler, and several prominent angel investors.
This funding marks a significant step in the company’s journey to bring advanced, AI-driven research capabilities to financial institutions worldwide.
The new capital will be used to speed up the development of Pascal AI’s autonomous investment workflows, expand its presence in the United States, and form strategic partnerships with key data providers.
The company’s platform is already in use by more than 25 financial firms across the U.S. and the Asia-Pacific region, including private equity funds managing $2 billion in assets and one of the world’s top three asset managers with over $1 trillion under management.
Pascal AI offers secure and native connections to data on over 16,000 publicly traded companies across 27 markets, giving investment teams a broad and reliable foundation for their work.
The problem that Pascal AI is addressing is one that many investment professionals are familiar with. Analysts and portfolio managers are inundated with vast amounts of data from company filings, earnings call transcripts, market reports, and internal research notes.
While existing platforms can surface this information, they often fail to capture the accumulated judgment and institutional knowledge that experienced investors rely on. As a result, analysts spend hours manually piecing together information, and chief investment officers often lack a clear, forward-looking view of their portfolios.
Pascal AI takes a different approach by automating the entire investment lifecycle. The platform learns from a firm’s proprietary history, including its past decisions, research notes and investment patterns, so it can reason and act like a seasoned investor rather than simply retrieving data. This means it can proactively connect insights, identify risks and suggest actions in a way that reflects the unique thinking of each firm.
Because the stakes in investment decision-making are high, trust and security are central to Pascal AI’s design. The platform is built on a proprietary Knowledge Graph that makes every action fully auditable and traceable. It supports enterprise-grade security features, including role-based permissions and the option for on-premise deployment, ensuring that sensitive information remains protected while still enabling robust AI-driven analysis.
Pascal AI was founded by Vibhav Viswanathan and Mithun Madhusudan, both of whom bring deep expertise in finance, artificial intelligence, and scaling technology products.
Viswanathan, a graduate of the University of Chicago Booth School of Business, previously led AWS Inferentia and Neuron in Silicon Valley and has hands-on investment experience from his time at Capital Group and NEA-IUVP.
Madhusudan, an alumnus of the Indian Institute of Management Bangalore, has led AI and product teams at Indian tech unicorns Apna and ShareChat, where he helped scale AI products to more than 100 million users.
KEY QUOTES:
“The future of investment management is autonomous investment research. Pascal AI is systematically automating complex investment workflows with the long-term vision of creating a fully autonomous investment research company. This funding allows us to accelerate that journey, moving from workflow automation to true autonomy, and giving analysts instant, auditable insights and CIOs a continuously updated view of exposures and performance”.
Vibhav Viswanathan, co-founder and CEO of Pascal AI
“At Kalaari, we believe the next decade will see a decisive shift toward autonomous research platforms that can scale human judgment with machine intelligence. Pascal AI is at the forefront of this transformation—building secure, auditable, and truly agentic workflows that don’t just process information, but reason like an investor. What stood out to us was the clarity and conviction with which Vibhav and Mithun are reimagining how investors and CIOs make decisions. With strong early traction from marquee global clients, the team has already validated the depth of the problem and the strength of their solution. We are excited to partner with them on this mission.”
Kalaari Capital Partner Sampath P
AI Research
Chair File: Using Innovation and AI to Advance Health

With all of the challenges facing health care, including a shrinking workforce, reduced funding, and new technologies and pharmaceuticals, change is no longer an option but an imperative. To keep caring for our communities well into the future, we need to transform how we provide care. Technology, artificial intelligence and digital transformation can not only help us mitigate these trends but truly innovate and find new ways of making health better.
There are many exciting capabilities already making their way into our field. Ambient listening technology, along with other automation and AI, reduces administrative burden and frees up people and resources to improve front-line care. Within the next five years, we expect hospital “smart rooms” to be the norm, leveraging cameras and AI-assisted alerting to improve safety, enable virtual care models across our footprint and boost efficiency while also improving quality and outcomes.
It’s easy to get caught up in shiny new tools or cutting-edge treatments, but often the most impactful innovations are smaller — adapting or designing our systems and processes to empower our teams to do what they do best.
That’s exactly what a new collaboration with the AHA and Epic is aiming to do. A set of point-of-care tools in the electronic health record is helping providers prevent, detect and treat postpartum hemorrhage (PPH), which is responsible for 11% of maternal deaths in the U.S. Early detection and treatment of PPH are key to a full recovery. One small innovation, incorporating these tools into EHR and labor and delivery workflows, is having a big impact: enhancing providers’ ability to effectively diagnose and treat PPH.
It’s critical to leverage technology advancements like this to navigate today’s challenging environment and advance health care into the future. However, at the same time, we also need to focus on how these opportunities can deliver measurable value to our patients, members and the communities we serve.
I will be speaking with Jackie Gerhart, M.D., chief medical officer at Epic, later this month for a Leadership Dialogue conversation. Listen in to learn more about how AI and other technological innovations can better serve patients and make care delivery more efficient for providers.