AI Research
What Do Digital Health Leaders Think of Trump’s New AI Action Plan?

The White House released “America’s AI Action Plan” last week, which outlines various federal policy recommendations designed to advance the nation’s status as a leader in international AI diplomacy and security. The plan seeks to cement American AI dominance mainly through deregulation, the expansion of AI infrastructure and a “try-first” culture.
Here are some measures included in the plan:
- Deregulation: The plan aims to repeal state and local rules that hinder AI development — and federal funding may also be withheld from states with restrictive AI regulations.
- Innovation: The proposal seeks to establish government-run regulatory sandboxes, which are safe environments in which companies can test new technologies.
- Infrastructure: The White House’s plan is calling for a rapid buildout of the country’s AI infrastructure and is offering companies tax incentives to do so. This also includes fast-tracking permits for data centers and expanding the power grid.
- Data: The plan seeks to create industry-specific data usage guidelines to accelerate AI deployment in critical sectors like healthcare, agriculture and energy.
Leaders in the healthcare AI space are cautiously optimistic about the action plan’s pro-innovation stance, and they’re grateful that it advocates for better AI infrastructure and data exchange standards. However, experts still have some concerns about the plan, such as its lack of focus on AI safety and patient consent, as well as the failure to mention key healthcare regulatory bodies.
Overall, experts believe the plan will end up being a net positive for the advancement of healthcare AI — but they do think it could use some edits.
Deregulation of data centers
Ahmed Elsayyad — CEO of Ostro, which sells AI-powered engagement technology to life sciences companies — views the plan as a generally beneficial move for AI startups. This is mainly due to the plan’s emphasis on deregulating infrastructure like data centers, energy grids and semiconductor capacity, he said.
Training and running AI models requires enormous amounts of computing power, which translates to high energy consumption, and some states are trying to address these increasing levels of consumption.
Local governments and communities have considered regulating data center buildouts due to concerns about the strain on power grids and the environmental impact — but the White House’s AI action plan aims to eliminate these regulatory barriers, Elsayyad noted.
No details on AI safety
However, Elsayyad is concerned about the plan’s lack of attention to AI safety.
He expected the plan to have a greater emphasis on AI safety because it’s a major priority within the AI research community, with leading companies like OpenAI and Anthropic dedicating significant amounts of their computing resources to safety efforts.
“OpenAI famously said that they’re going to allocate 20% of their computational resources for AI safety research,” Elsayyad stated.
He noted that AI safety is a “major talking point” in the digital health community. For instance, responsible AI use is a frequently discussed topic at industry events, and organizations focused on AI safety in healthcare — such as the Coalition for Health AI and Digital Medicine Society — have attracted thousands of members.
Elsayyad said he was surprised that the new federal action plan doesn’t mention AI safety, and he believes incorporating language and funding around it would have made the plan more balanced.
He isn’t alone in noticing that AI safety is conspicuously absent from the White House plan — Adam Farren, CEO of EHR platform Canvas Medical, was also stunned by the lack of attention to AI safety.
“I think that there needs to be a push to require AI solution providers to provide transparent benchmarks and evaluations of the safety of what they are providing on the clinical front lines, and it feels like that was missing from what was released,” Farren declared.
He noted that AI is fundamentally probabilistic and needs continuous evaluation. He argued in favor of mandatory frameworks to assess AI’s safety and accuracy, especially in higher-stakes use cases like medication recommendations and diagnostics.
No mention of the ONC
The action plan also fails to mention the Office of the National Coordinator for Health Information Technology (ONC), despite naming “tons” of other agencies and regulatory bodies, Farren pointed out.
This surprised him, given the ONC is the primary regulatory body responsible for all matters related to health IT and providers’ medical records.
“[The ONC] is just not mentioned anywhere. That seems like a miss to me because one of the fastest-growing applications of AI right now in healthcare is the AI scribe. Doctors are using it when they see a patient to transcribe the visit — and it’s fundamentally a software product that should sit underneath the ONC, which has experience regulating these products,” Farren remarked.
Ambient scribes are just one of the many AI tools being integrated into providers’ software systems, he added. For example, providers are adopting AI models to improve clinical decision-making, flag medication errors and streamline coding.
Call for technical standards
Leigh Burchell, chair of the EHR Association and vice president of policy and public affairs at Altera Digital Health, views the plan as largely positive, particularly its focus on innovation and its acknowledgement of the need for technical standards.
Technical data standards — such as those developed by organizations like HL7 and overseen by the National Institute of Standards and Technology (NIST) — ensure that healthcare’s software systems can exchange and interpret data consistently and accurately. These standards allow AI tools to more easily integrate with the EHR, as well as use clinical data in a way that is useful for providers, Burchell said.
“We do need standards. Technology in healthcare is complex, and it’s about exchanging information in ways that it can be consumed easily on the other end — and so that it can be acted on. That takes standards,” she declared.
Without standards, AI systems risk miscommunication and poor performance across different settings, Burchell added.
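To make the role of such standards concrete, here is a minimal illustrative sketch (not drawn from the article) of what a shared clinical data format looks like in practice: a single heart-rate reading encoded as an HL7 FHIR-style Observation in Python. The patient reference and values are hypothetical; the field layout and the LOINC/UCUM codes follow the publicly documented FHIR Observation structure, which is what lets any conforming EHR or AI tool parse the same record the same way.

```python
import json
from typing import Optional

# Hypothetical example: a heart-rate reading as an HL7 FHIR-style Observation.
# Because the field names and coding systems are standardized, any conforming
# system can interpret this record regardless of which vendor produced it.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "8867-4",          # LOINC code for heart rate
            "display": "Heart rate",
        }]
    },
    "subject": {"reference": "Patient/example-123"},  # hypothetical patient ID
    "effectiveDateTime": "2025-07-28T09:30:00Z",
    "valueQuantity": {
        "value": 72,
        "unit": "beats/minute",
        "system": "http://unitsofmeasure.org",        # UCUM units
        "code": "/min",
    },
}

def read_heart_rate(obs: dict) -> Optional[float]:
    """Return the numeric value if this Observation is a heart-rate reading."""
    codings = obs.get("code", {}).get("coding", [])
    if any(c.get("code") == "8867-4" for c in codings):
        return obs.get("valueQuantity", {}).get("value")
    return None

print(json.dumps(observation, indent=2))
print("Heart rate:", read_heart_rate(observation))
```

Because the reading is keyed to a standard code rather than a vendor-specific field name, the same lookup works against records produced by any compliant system — the consistency Burchell describes as a precondition for reliable AI performance across settings.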
Little regard for patient consent
Burchell also raised concerns that the AI action plan doesn’t adequately address patient consent — particularly whether patients have a say in how their data is used or shared for AI purposes.
“We’ve seen states pass laws about how AI should be regulated. Where should there be transparency? Where should there be information about the training data that was used? Should patients be notified when AI is used in their diagnostic process or in their treatment determination? This doesn’t really address that,” she explained.
In fact, the plan suggests that the federal government could, in the future, withhold funds from states that pass regulations that get in the way of AI innovation, Burchell pointed out.
But without clear federal rules, states must fill the gap with their own AI laws — which creates a fragmented, burdensome landscape, she noted. To solve this problem, she called for a coherent federal framework to provide more consistent guardrails on issues like transparency and patient consent.
While the White House’s AI action plan lays the groundwork for faster innovation, Burchell and other experts agree it must be accompanied by stronger safeguards to ensure the responsible and equitable use of AI in healthcare.
AI Research
UK to receive $6.8B Google investment for AI development

Google, a subsidiary of Alphabet Inc., has announced plans to invest £5 billion (approximately $6.8 billion) in the UK over the next two years to boost the development of an AI economy in the country.
The tech giant shared the plan just as US President Donald Trump prepares to announce economic deals surpassing $10 billion during his visit to the long-standing US ally this week.
Google and AI rivals fuel UK tech surge
Not all of the investment will go to AI development; some will be set aside for a newly developed data center in Waltham Cross aimed at meeting surging demand for Google services such as Maps and Search. According to the tech giant, the investment is a game-changer that will create about 8,250 jobs for UK citizens annually.
Google’s rivals in the AI race, OpenAI and Nvidia, are also eyeing the UK and plan to make investments worth billions in the country’s data centers during Trump’s visit.
According to reports, the investment will be carried out in collaboration with Nscale Global Holdings Ltd., a London firm that operates large-scale data centers and is a major player in meeting Europe’s growing demand for AI infrastructure.
Trump’s visit to the UK strengthens the economies of the two nations
Earlier, on September 15, senior US officials revealed that the American president planned to announce economic deals exceeding $10 billion during his second visit to the United Kingdom.
“The trip to the U.K. is going to be incredible,” Trump told reporters Sunday. He said Windsor Castle is “supposed to be amazing” and added: “It’s going to be very exciting.”
The visit will feature a collaboration in science and technology, a sector anticipated to bring billions in new investments. The officials who shared these details about Trump’s trip wished to remain anonymous due to the confidential nature of the discussion.
They also said that Trump and UK Prime Minister Keir Starmer might announce a defense technology cooperation deal and steps to strengthen ties between the two countries’ major financial centers.
Some of these economic deals may be announced during a business reception hosted by Rachel Reeves, the Chancellor of the Exchequer, with both leaders present. Top US tech executives attending the event include Jensen Huang of Nvidia and Sam Altman of OpenAI, who will participate in roundtable talks on Thursday, September 18, at Chequers, the prime minister’s country residence.
These economic programs come alongside earlier efforts to sign a significant deal that would ease the construction of nuclear power plants: the two countries will rely on each other’s safety checks on reactor designs, which will accelerate the approval process.
Even though some economic deals are progressing smoothly, US officials have indicated that Trump’s announcements will likely not include a deal to loosen US tariffs on Scotch whisky, something Starmer has been actively pushing for.
The officials also noted that the announcements will likely not address Trump’s ongoing concerns about the UK government’s power to regulate US-based tech firms such as Apple and Alphabet over their control of smartphones.
AI Research
Researchers used AI to design the perfect phishing plot, what happened next shocked everyone

AI is increasingly being put to the test for its potential benefits, but a new experiment has shown how the same technology can also fuel online crime. A Reuters investigation, conducted in partnership with Harvard researcher Fred Heiding, has revealed that some of the world’s most widely used AI chatbots can be nudged into producing scam emails aimed at senior citizens.
In a controlled study, emails generated by these bots were sent to more than 100 elderly volunteers in the United States. While no money or personal data was taken, the results were troubling. About 11 per cent of the participants clicked on the links inside the phishing emails, suggesting that AI-generated scams can be as persuasive as those crafted by humans.
The fake charity experiment with Grok
The investigation began with a test on Grok, the chatbot developed by Elon Musk’s company xAI. Reporters asked it to create a message for older readers about a charity called the “Silver Hearts Foundation”. The mail looked convincing, speaking about dignity for seniors and urging them to join the mission. Without further prompting, Grok even added a line to create urgency: “Click now to act before it’s too late.” The charity did not exist, the entire email was designed to trick recipients.
Phishing: a growing global threat
Phishing, where people are deceived into revealing sensitive information or sending money, is one of the biggest challenges in cybersecurity. According to FBI figures, it is the most reported cybercrime in the US, and older people are among the worst affected. In 2023 alone, Americans over 60 lost nearly $5 billion to such fraud. The agency has also warned that generative AI tools can make these scams more effective and harder to detect.
Chatbots tested beyond Grok
The Reuters team went beyond Grok and tested five other major chatbots – OpenAI’s ChatGPT, Meta’s AI assistant, Google’s Gemini, Anthropic’s Claude and DeepSeek. Initially, most of them refused to generate phishing content. But with slight changes in the way requests were worded, such as describing the exercise as academic research or fiction writing, the chatbots eventually produced scam-like drafts.
Why AI makes scams easier
Heiding, who has studied phishing techniques for years, said this flexibility makes chatbots “potentially valuable partners in crime”. Unlike humans, they can generate dozens of variations instantly, helping criminals cut costs and scale up operations. In fact, Heiding’s earlier research showed that phishing emails written by AI could be just as effective in luring targets as those created manually.
When tested on seniors, five of the nine AI-generated emails drew clicks: two from Grok, two from Meta AI and one from Claude. None of the volunteers responded to ChatGPT’s or DeepSeek’s drafts. The study was not intended to rank which chatbot is most dangerous, but rather to show that several can be exploited for scams.
Tech firms acknowledge risks
Technology companies have acknowledged the concerns. Meta said it invests in safeguards to prevent misuse and regularly stress-tests its systems. Anthropic stated that using its chatbot Claude for scams violates its policies and accounts found misusing the tool are suspended. Google said it retrained Gemini after learning it had generated phishing content, while OpenAI has publicly admitted in past reports that its models can be misused for “social engineering”.
Security experts believe the issue lies in how companies balance user experience with safety. Chatbots are designed to be helpful, but stricter refusals could drive users towards rival products with fewer restrictions. This trade-off, researchers argue, creates room for misuse.
The problem is not confined to experiments. Survivors of scam operations in Southeast Asia told Reuters that they had been forced to use ChatGPT in real-world fraud schemes. Workers at such centres reportedly used the bot to polish responses, translate messages and build trust with victims.
Governments and regulators respond
Governments are beginning to take note. Some US states have passed laws against AI-generated fraud, though most target scammers themselves rather than the companies providing the technology. The FBI, in a recent alert, said criminals are now able to “commit fraud on a larger scale” because AI reduces the time and effort required to make scams believable.
AI Research
SEERai™ by Galorath Wins SiliconANGLE TechForward Award with Industry-First Agentic Artificial Intelligence
SEERai Recognized as the Industry’s First Agentic AI Platform Transforming Cost, Schedule, and Risk Planning in Secure Enterprise Environments
LONG BEACH, Calif., Sept. 16, 2025 /PRNewswire/ — Galorath, the premier AI-powered operational intelligence platform provider, today announced that SEERai™ has been named a winner in SiliconANGLE’s 2025 TechForward Awards. The platform was recognized in the “AI Tech – Generative AI & Foundation Models” category for its impact in enabling secure, explainable AI-driven planning across complex programs.
SEERai is the first commercially available agentic AI platform engineered for program-critical outcomes. Unlike generic AI copilots or disconnected estimation tools, SEERai uses a modular architecture of purpose-built agents, retrieval-augmented generation (RAG), and structured decision logic to deliver fully traceable outputs. It enables organizations to accelerate proposal timelines, standardize estimation practices, and scale expert insight—without compromising accuracy, auditability, or security.
“Being recognized by SiliconANGLE is a testament to Galorath’s ongoing commitment to innovation and impact,” said Charles Orlando, Chief Strategy Officer, Galorath Incorporated. “With rising costs, constrained budgets, and outdated tools testing the limits of traditional project planning, SEERai delivers an agentic AI solution that replaces static assumptions with accuracy, agility, and confidence.”
The TechForward Awards recognize the technologies and solutions driving business forward. As the trusted voice of enterprise and emerging tech, SiliconANGLE applies a rigorous editorial lens to highlight innovations reshaping how businesses operate in our rapidly changing landscape. As organizations face pressures to deliver projects faster, reduce costs, and improve outcomes across increasingly complex environments, traditional tools and approaches often fail to adapt to real-time changes, leaving teams struggling with inefficiencies, risks, and misalignment. Galorath’s award-winning SEERai solution is pioneering the future of AI for cost estimation, project planning, and risk management.
“These winners represent the most impressive achievements emerging from today’s fiercely competitive tech landscape, embodying the relentless drive and visionary thinking that pushes entire industries forward,” said John Furrier, co-founder and co-CEO of SiliconANGLE Media. “These are the solutions that business leaders trust to solve their most critical challenges. They’re not just products, they’re competitive advantages.”
The TechForward awards program honors both established enterprise solutions and breakthrough technologies defining the future of business, spanning AI innovation, security excellence, cloud transformation, data platform evolution and blockchain/crypto tech. SEERai was selected from a competitive field of nominees by a panel of industry experts and technology leaders. The complete list of winners can be found online at https://siliconangle.com/awards/.
About SiliconANGLE Media
SiliconANGLE Media is a recognized leader in digital media innovation, bringing together cutting-edge technology, influential content, strategic insights and real-time audience engagement. As the parent company of SiliconANGLE, theCUBE Network, theCUBE Research, CUBE365, theCUBE AI and theCUBE SuperStudios — such as those established in Silicon Valley and the New York Stock Exchange (NYSE) — SiliconANGLE Media transforms the way technology companies connect with their target markets. Founded by tech visionaries John Furrier and Dave Vellante, SiliconANGLE Media has built a powerful ecosystem of industry-leading digital media brands, with a reach of 10+ million elite tech professionals, 4+ million SiliconANGLE readers and 250,000+ social media subscribers. The company’s new, proprietary theCUBE AI LLM is breaking ground in audience interaction, leveraging CUBE365’s neural network to help technology companies make data-driven decisions and stay at the forefront of industry conversations.
About SEER® and SEERai
Galorath’s flagship project estimating software, SEER®, offers unparalleled capabilities in project cost forecasting, risk mitigation, and actionable insights, making it the go-to platform for project cost planning for hardware and software development, systems engineering, aerospace, and manufacturing companies. SEERai is Galorath’s modular agentic AI platform for estimation, sourcing, labor, schedule, and risk, standing out as a first-of-its-kind generative AI for digital engineering support. Combining its connection with the knowledge bases of SEER, along with secure, isolated integration of an organization’s backend systems, processes, databases, and projects, SEERai allows cost and project estimation professionals to use natural language to instantly generate actionable information and data for project and cost estimation, from Work Breakdown Structures (WBS) to project and cost estimation guidance and much more. For more information, visit https://galorath.com/ai.
About Galorath Incorporated
Leveraging four decades of in-market experience and success, Galorath transforms cost, scheduling, should-cost analysis, and project estimation, optimizing outcomes and achieving unparalleled efficiencies for public and private sector organizations worldwide. SEER®, Galorath’s flagship digital engineering platform, is trusted by industry giants like Accenture, NASA, Boeing, the U.S. Department of Defense, and BAE Systems (EU). SEER accelerates time to market, dramatically enhances project predictability and visibility, and ensures project costs are on track and on budget. For more information, visit https://galorath.com/.
All trademarks are the property of their respective owners.
SOURCE Galorath