
AI Research

High-Risk AI Models Need Military-Grade Security – War on the Rocks

The race to develop artificial general intelligence is accelerating, but America’s approach to securing it remains dangerously inadequate. While Washington celebrates its new “AI Action Plan,” which champions a light-touch regulatory model to foster innovation, Chinese intelligence services are very likely targeting American AI labs with sophisticated espionage operations. This official embrace of minimal oversight ignores a sobering reality: The country’s most advanced AI research — the very technology that will define the next century of global power — remains critically vulnerable to theft and sabotage.

The arithmetic is unforgiving: It takes years to build secure data centers, establish protected supply chains, and implement the kind of military-grade safeguards that could withstand determined nation-state actors. Meanwhile, leading AI models are advancing toward human-level capabilities on timelines measured in months, not years. Every day we delay implementing serious security measures is another day that critical AI research remains vulnerable to theft, sabotage, or worse.

Advocates of a light-touch approach — labs and their lobbyists, innovation-focused policymakers, and some conservatives — argue that firms already have strong incentives to act responsibly. On this view, there is no need for onerous regulations when companies already want to avoid Chinese domination and technical catastrophes (e.g., misaligned superintelligence). Pair that with export controls to slow China’s progress, and you have the makings of a winning formula: Let the United States innovate faster and keep the most powerful systems out of Beijing’s hands.

Anthropic CEO Dario Amodei recently proposed a compromise: mandatory transparency requirements that would force labs to disclose safety evaluations and mitigation plans, while preserving their freedom to innovate. A federal standard would be an easy fix, he says, because it would “codify what many major developers are already doing.” It’s a reasonable middle ground for a politically constrained moment. But transparency alone cannot solve the fundamental problem that AI labs developing potentially superintelligent systems are still operating like commercial tech companies when they should be treated like strategic national assets. In this race, half-measures are a formula for strategic failure.

Amodei is right to call for disclosure and oversight, but his proposal rests on a mistaken assumption: that transparency alone can manage threats from AI systems that may one day exceed human intelligence. As someone who has written extensively about the harms of excessive government secrecy, I value transparency deeply. But I also understand its limits as a solution to the urgent national security risks we face. Those include the control, alignment, and malicious-actor threats Amodei presents as examples of AI dangers. At least as important is the very real possibility that China will develop advanced artificial general intelligence before the United States, letting Beijing achieve military and economic superiority and a strategic monopoly on global power. Beijing might get there largely on its own, or through espionage and sabotage. We have extensive evidence of persistent Chinese efforts to steal intellectual property, including from leading tech firms building frontier AI models.

As things stand now, the leading AI labs “are the security equivalent of swiss cheese.” Gladstone AI’s April 2025 report, written with extraordinary inside access likely due to its relationship with the federal government, documents significant vulnerabilities at every level of model development: attacks that could paralyze data centers for less than $20,000; Chinese parts providing back-door access and sabotage opportunities, without alternative options because of China’s dominance of the hardware supply chain; and Chinese human and signals intelligence capabilities, which probably already provide access to critically important intellectual property, including model weights and architectures.

One example from the report describes “an attack that allows hackers to reconstruct the architecture of a small AI model using nothing but the power consumption profile of the hardware that runs it.” Much stronger “information extraction attacks” using “electromagnetic, sound, or vibrational signals” are also available. Beyond the strategic nightmare of China achieving an advanced artificial general intelligence monopoly, the consequences of such a security breach could be immediate and catastrophic for Americans, potentially targeting everything from financial markets to critical infrastructure. While China remains America’s primary competitor in AI and other domains, Russia’s highly capable intelligence services could also steal secrets and wreak havoc to get ahead.

As Amodei has publicly stated, the threat of Chinese industrial espionage is a primary concern for leading AI labs. This is not a distant threat — it is an active siege. For years, the FBI has been sounding the alarm, with former Director Chris Wray warning that China’s campaign of theft is “more brazen, more damaging than ever before,” forcing the bureau to open a new China-related counterintelligence investigation “every 12 hours.” While federal authorities have achieved notable successes — such as the recent indictment of a Chinese national for an alleged plot to steal proprietary AI technology from Google — these actions are fundamentally reactive. They reveal a strategy of catching spies after they’ve already penetrated the gates, which is inadequate when the goal ought to be to prevent the theft of nation-defining technology in the first place.

Amodei’s proposal might be politically viable in the short term. With regulation-wary Republicans in control of the White House and Congress, the notion of limited transparency requirements comes across as a reasonable compromise, “the best way to balance the considerations in play.” But transparency can’t protect AI labs from Chinese espionage and sabotage. Labs working toward advanced artificial general intelligence are not just commercial entities, like pharmaceutical firms, where disclosure and product safety are the primary regulatory goals. They are more like private nuclear facilities or bioweapons labs, sites of strategic national importance. Disclosure standards and post-hoc oversight are nowhere near enough. The problem isn’t just that AI labs are insecure. It’s that they are treated as commercial ventures when they are already operating as strategic sites targeted by rival intelligence services. A light-touch approach, with or without transparency requirements, is fundamentally misaligned with the scale of the national security risk. Asking commercial companies to defend themselves against a determined, state-level adversary is a recipe for failure.

While a full “Manhattan Project-like program” may not be necessary, the current approach is untenable. What we need now is a tiered risk governance framework that distinguishes between levels of danger and scales regulatory demands accordingly. Low-risk models would remain unregulated, with minimal required public disclosure, perhaps enough to allow civil society monitoring. Intermediate-risk models could operate under a regime of mandatory transparency, safety evaluations, and state-enforced secrecy for particularly sensitive assets (e.g., model weights, novel algorithms and architectures). High-risk models would require something closer to military-grade governance: not only technical safeguards like secure, government-audited data centers and a new classification system that treats models and the methods used to build them as state secrets, but also rigorous personnel security protocols. Personnel would require not just federal vetting and clearance, but also continuous security training, participation in insider threat awareness programs, and cultivation of a security-first culture.

Thresholds would be based on factors such as autonomous decision-making, strategic planning capabilities, goal preservation under adversarial conditions, and dual-use potential. To draw these distinctions, the White House should convene a task force composed of lab executives, independent computer scientists, and national security professionals from the intelligence community, the Department of Defense, the Department of Energy, and the Cybersecurity and Infrastructure Security Agency.
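To make the tiering logic concrete, here is a minimal sketch in Python of how such thresholds might be applied once capability evaluations exist. Every dimension, score, and cutoff below is hypothetical and invented for illustration; the proposal above names the factors, not the numbers, and an actual task force would have to define and calibrate them.

```python
# Hypothetical sketch of how a tiered risk framework might classify models.
# Every dimension, score, and threshold here is invented for illustration;
# a real task force would define and calibrate the factors and cutoffs.
from dataclasses import dataclass


@dataclass
class CapabilityProfile:
    autonomous_decision_making: float      # 0.0-1.0, from standardized evaluations
    strategic_planning: float              # 0.0-1.0
    goal_preservation_under_attack: float  # 0.0-1.0, robustness of goals under adversarial pressure
    dual_use_potential: float              # 0.0-1.0, e.g., cyber or bio uplift


def assign_tier(profile: CapabilityProfile) -> str:
    """Map a capability profile to a governance tier (illustrative cutoffs only)."""
    score = max(
        profile.autonomous_decision_making,
        profile.strategic_planning,
        profile.goal_preservation_under_attack,
        profile.dual_use_potential,
    )
    if score >= 0.8:
        return "high-risk: military-grade governance"
    if score >= 0.4:
        return "intermediate-risk: mandatory transparency, evaluations, state-enforced secrecy"
    return "low-risk: unregulated, minimal public disclosure"


if __name__ == "__main__":
    frontier_model = CapabilityProfile(0.9, 0.85, 0.7, 0.95)
    print(assign_tier(frontier_model))  # -> high-risk: military-grade governance
```

Taking the maximum across dimensions reflects the intuition that a single dangerous capability is enough to warrant the higher tier; a real framework might weight factors differently or use categorical gates rather than numeric scores.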

Moving from light-touch to tiered risk governance will face political resistance, especially in the current environment. However, the national security framing may attract enough support from defense-minded lawmakers to make progress possible, particularly since Congress is already grappling with the scale of this threat in hearings on China’s systematic theft of U.S. technology, including advanced AI. Crucially, this approach would reframe security spending not as a regulatory burden, but as a strategic co-investment by the government, strengthening the leading AI labs financially through federal partnerships while leaving the vast majority of AI development unregulated.

These are not radical proposals. We already treat nuclear facilities and cyberweapons with this level of precaution. The strategic stakes of advanced AI are no less serious, and the time to act is now. The proposed bipartisan Advanced AI Security Readiness Act is a critical first step. While the bill rightly tasks the NSA’s AI Security Center with designing an “AI Security Playbook to address vulnerabilities, threat detection, cyber and physical security strategies, and contingency plans for highly sensitive AI systems,” its success will depend on cooperation with the FBI’s Counterintelligence Division, which is responsible for stopping spies targeting labs on U.S. soil. Passing it would be a down payment on the robust security framework America needs.

 

Jason Ross Arnold is professor and chair of political science at Virginia Commonwealth University, with an affiliated appointment in the Computer Science Department. He is the author of Secrecy in the Sunshine Era: The Promise and Failures of U.S. Open Government Laws (2014), Whistleblowers, Leakers, and Their Networks, from Snowden to Samizdat (2019), and Uncertain Threats: The FBI, the New Left, and Cold War Intelligence (forthcoming, 2025).

Image: Midjourney






AI Research

Alberta Follows Up Its Artificial Intelligence Data Centre Strategy with a Levy Framework

Alberta is introducing a levy framework for data centres powering artificial intelligence technologies, the Province recently announced.

Effective by the end of 2026, a 2% levy on computer hardware will apply to grid-connected data centres of 75 megawatts or greater, according to a statement from Alberta.

The levy will be fully offset against provincial corporate income taxes, the government says. Once a data centre becomes profitable and pays corporate income tax in Alberta, the levy will not result in any additional tax burden.
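For readers who want the offset mechanics spelled out, here is a minimal sketch. Only the 2% levy rate on computer hardware and the full offset against provincial corporate income tax come from the announcement; the dollar figures and the credit-up-to-tax-owed behaviour are assumptions made for the sake of the example.

```python
# Illustrative arithmetic for the levy-and-offset mechanism described above.
# The 2% rate and the full offset against provincial corporate income tax come
# from the announcement; all dollar figures and the credit-up-to-tax-owed
# behaviour are assumptions for illustration only.
LEVY_RATE = 0.02


def net_provincial_burden(hardware_cost: float, corporate_tax_owed: float) -> dict:
    """Compute the levy and how much of it the corporate-tax offset absorbs."""
    levy = LEVY_RATE * hardware_cost
    offset = min(levy, corporate_tax_owed)       # levy credited against tax owed
    return {
        "levy": levy,
        "offset_against_corporate_tax": offset,
        "net_additional_burden": levy - offset,  # zero once tax owed >= levy
    }


# Pre-profit years: no Alberta corporate income tax owed, so the levy is paid in full.
print(net_provincial_burden(hardware_cost=500_000_000, corporate_tax_owed=0))

# Profitable years: tax owed exceeds the levy, so there is no additional burden.
print(net_provincial_burden(hardware_cost=500_000_000, corporate_tax_owed=40_000_000))
```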

Data centres of 75 megawatts or greater will be recognized as designated industrial properties, with property values assessed by the province. Land and buildings associated with data centres will be subject to municipal taxation.

The framework was forged through a six-week consultation with industry stakeholders, according to Nate Glubish, Minister of Technology and Innovation.

“Alberta’s government has a duty to ensure Albertans receive a fair deal from data centre investments,” the provincial minister remarked. “This approach strikes a balance that we believe is fair to industry and Albertans, while protecting Alberta’s competitive advantage.”

Glubish added that the Alberta government is also exploring other options. These include a payment-in-lieu-of-taxes program that would allow companies to make predictable annual payments instead of fluctuating levy amounts, as well as a deferral program to ease cash-flow pressures during construction and the early years of operation.

“After working closely with industry, we’re introducing a fair, predictable levy that ensures data centres pay their share for the infrastructure and services that support them,” commented Nate Horner, Minister of Finance.

“This approach provides stability for businesses while generating new revenue to support Alberta’s future,” he added.

The decision builds on the Alberta Artificial Intelligence Data Centre Strategy, introduced in 2024.

The strategy aims to establish Alberta as a data centre powerhouse within Canada and capture a larger share of the global AI data centre market, which is expected to exceed $820 billion by 2030.

However, the Province’s tactics have not gone uncriticized.




AI Research

Reimagining clinical AI: from clickstreams to clinical insights with EHR use metadata


AI Research

Minister Bae Kyung-hun opens GPU resources for AI research to foster Nobel laureates – Chosun Biz
