AI Research

Reimagining clinical AI: from clickstreams to clinical insights with EHR use metadata


    1 Brilliant Artificial Intelligence (AI) Stock Down 30% From Its All-Time High That’s a No-Brainer Buy


    ASML is one of the world’s most critical companies.

    Few companies’ products are as critical to the modern world’s technological infrastructure as those made by ASML. Without the chipmaking equipment the Netherlands-based manufacturer provides, much of the world’s most innovative technology wouldn’t be possible. That makes it one of the most important companies in the world, even if many people have never heard of it.

    Over the long term, ASML has been a profitable investment, but the stock has struggled recently — it’s down by more than 30% from the all-time high it touched in July 2024. I believe this pullback presents an excellent opportunity to buy shares of this key supporting player for the AI sector and other advanced technologies.  

    ASML has been a victim of government policies around the globe

    ASML makes lithography machines, which trace out the incredibly fine patterns of the circuits on silicon chips. Its top-of-the-line extreme ultraviolet (EUV) lithography machines are the only ones capable of printing the newest, most powerful, and most feature-dense chips. No other companies have been able to make EUV machines thus far. They are also highly regulated, as Western nations don’t want this technology going to China, so the Dutch and U.S. governments have put strict restrictions on the types of machines ASML can export to China or its allies. In fact, even tighter new regulations were put in place last year that prevented ASML from servicing some machines that it previously was allowed to sell to Chinese companies.

    As a result of these export bans, ASML’s sales to one of the world’s largest economies have been curtailed. This led to investors bidding the stock down in 2024 — a drop it still hasn’t recovered from.

    2025 has been a relatively strong year for ASML’s business, but tariffs have made it challenging to forecast where matters are headed. Management has been cautious with its guidance for the year as it is unsure of how tariffs will affect the business. In its Q2 report, management stated that tariffs had had a less significant impact in the quarter than initially projected. As a result, ASML generated 7.7 billion euros in sales, which was at the high end of its 7.2 billion to 7.7 billion euro guidance range. For Q3, the company says it expects sales of between 7.4 billion and 7.9 billion euros, but if tariffs have a significantly negative impact on the economic picture, it could come up short.

    Given all the planned spending on new chip production capacity to meet AI-related demand, investors would be wise to assume that ASML will benefit. However, the company is staying conservative in its guidance even as it prepares for growth. This conservative stance has caused the market to remain fairly bearish on ASML’s outlook even as all signs point toward a strong 2026.

    This makes ASML a buying opportunity at its current stock price.

    ASML’s valuation hasn’t been this low since 2023

    Relative to the past five years, ASML trades at historically low trailing and forward price-to-earnings (P/E) ratios.

    ASML P/E ratio data by YCharts.

    With expectations for ASML at low levels, investors shouldn’t be surprised if its valuation rises sometime over the next year, particularly if management’s commentary becomes more bullish as demand increases in line with chipmakers’ efforts to expand their production capacity.

    This could lift ASML back into its more normal valuation range in the mid-30s, which is perfectly acceptable given its growth level, considering that it has no direct competition.

    ASML is a great stock to buy now and hold for several years or longer, allowing you to reap the benefits of chipmakers increasing their production capacity. Just because the market isn’t that bullish on ASML now, that doesn’t mean it won’t be in the future. This rare moment offers an ideal opportunity to load up on shares of a stock that I believe is one of the best values in the market right now.



    AI’s not ‘reasoning’ at all – how this team debunked the industry hype



    ZDNET’s key takeaways

    • We don’t entirely know how AI works, so we ascribe magical powers to it.
    • Claims that Gen AI can reason are a “brittle mirage.”
    • We should always be specific about what AI is doing and avoid hyperbole.

    Ever since artificial intelligence programs began impressing the general public, AI scholars have been making claims for the technology’s deeper significance, even asserting the prospect of human-like understanding. 

    Scholars wax philosophical because even the scientists who created AI models such as OpenAI’s GPT-5 don’t really understand how the programs work — not entirely. 

    AI’s ‘black box’ and the hype machine

    AI programs such as LLMs are infamously “black boxes.” They achieve a lot that is impressive, but for the most part, we cannot observe all that they are doing when they take an input, such as a prompt you type, and produce an output, such as the college term paper you requested or the suggestion for your new novel.

    To fill that gap, scientists have applied colloquial terms such as “reasoning” to describe the way the programs perform. In the process, they have either implied or outright asserted that the programs can “think,” “reason,” and “know” in the way that humans do.

    In the past two years, the rhetoric has overtaken the science as AI executives have used hyperbole to twist what were simple engineering achievements. 

    OpenAI’s press release last September announcing their o1 reasoning model stated that, “Similar to how a human may think for a long time before responding to a difficult question, o1 uses a chain of thought when attempting to solve a problem,” so that “o1 learns to hone its chain of thought and refine the strategies it uses.”

    It was a short step from those anthropomorphizing assertions to all sorts of wild claims, such as OpenAI CEO Sam Altman’s comment, in June, that “We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence.”

    (Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

    The backlash from AI researchers

    There is a backlash building, however, from AI scientists who are debunking the assumptions of human-like intelligence via rigorous technical scrutiny. 

    In a paper published last month on the arXiv pre-print server and not yet reviewed by peers, the authors — Chengshuai Zhao and colleagues at Arizona State University — took apart the reasoning claims through a simple experiment. What they concluded is that “chain-of-thought reasoning is a brittle mirage,” and it is “not a mechanism for genuine logical inference but rather a sophisticated form of structured pattern matching.” 

    The term “chain of thought” (CoT) is commonly used to describe the verbose stream of output that you see when a large reasoning model, such as OpenAI’s o1 or DeepSeek’s R1, shows how it works through a problem before giving the final answer.

    That stream of statements isn’t as deep or meaningful as it seems, write Zhao and team. “The empirical successes of CoT reasoning lead to the perception that large language models (LLMs) engage in deliberate inferential processes,” they write. 

    But, “An expanding body of analyses reveals that LLMs tend to rely on surface-level semantics and clues rather than logical procedures,” they explain. “LLMs construct superficial chains of logic based on learned token associations, often failing on tasks that deviate from commonsense heuristics or familiar templates.”

    The term “chains of tokens” is a common way to refer to a series of elements input to an LLM, such as words or characters. 

    Testing what LLMs actually do

    To test the hypothesis that LLMs are merely pattern-matching rather than actually reasoning, the authors trained GPT-2, OpenAI’s older, open-source LLM from 2019, from scratch, using an approach they call “data alchemy.”


    The model was trained from the beginning to manipulate only the 26 letters of the English alphabet. That simplified corpus lets Zhao and team test the LLM with a set of very simple tasks, all of which involve manipulating sequences of those letters: for example, cyclically shifting every letter a certain number of positions, so that “APPLE” becomes “EAPPL.”
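    The letter-shifting task can be sketched in a few lines of Python (an illustrative toy, not the authors’ actual code; the function name is hypothetical):

```python
# Toy version of the symbolic tasks described above: rotate a letter
# sequence a fixed number of positions to the right, wrapping around.

def cyclic_shift(seq: str, k: int) -> str:
    """Rotate seq k positions to the right, so "APPLE" shifted by 1 is "EAPPL"."""
    k %= len(seq)
    return seq[-k:] + seq[:-k] if k else seq

print(cyclic_shift("APPLE", 1))   # EAPPL
print(cyclic_shift("APPLE", 13))  # 13 % 5 == 3, so PLEAP
```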


    Using the limited set of tokens and tasks, Zhao and team vary which tasks the language model sees in its training data versus which tasks appear only when the finished model is tested, such as, “Shift each element by 13 places.” It’s a test of whether the language model can reason its way to a solution even when confronted with new, never-before-seen tasks.
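    “Shift each element by 13 places” can be read as advancing each letter 13 positions through the 26-letter alphabet, wrapping from “Z” back to “A” (a ROT13-style transformation). This reading is an assumption based on the article’s description, not the paper’s verbatim definition:

    ```python
    def shift_letters(seq: str, places: int = 13) -> str:
        """Advance each uppercase letter `places` positions in the
        alphabet, wrapping from 'Z' back to 'A'."""
        return "".join(
            chr((ord(c) - ord("A") + places) % 26 + ord("A")) for c in seq
        )

    print(shift_letters("APPLE"))  # -> NCCYR
    ```

    A held-out task like this one is trivial to verify programmatically, which is what lets the authors separate a fluent-sounding reasoning chain from an actually correct answer.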

    They found that when tasks were not in the training data, the language model failed to perform them correctly using a chain of thought. Instead, the model fell back on patterns from tasks that were in its training data, and its “reasoning” sounded good, but the answers it generated were wrong.

    As Zhao and team put it, “LLMs try to generalize the reasoning paths based on the most similar ones […] seen during training, which leads to correct reasoning paths, yet incorrect answers.”

    Specificity to counter the hype

    The authors draw some lessons. 

    First: “Guard against over-reliance and false confidence,” they advise, because “the ability of LLMs to produce ‘fluent nonsense’ — plausible but logically flawed reasoning chains — can be more deceptive and damaging than an outright incorrect answer, as it projects a false aura of dependability.”

    Second, stress-test the AI model by trying out tasks that are unlikely to have been contained in the training data.


    What’s important about Zhao and team’s approach is that it cuts through the hyperbole and takes us back to the basics of understanding what exactly AI is doing. 

    When the original chain-of-thought research, “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models,” was published by Jason Wei and colleagues on the Google Brain team in 2022 — research that has since been cited more than 10,000 times — the authors made no claims about actual reasoning.

    Wei and team noticed that prompting an LLM to list the steps in a problem, such as an arithmetic word problem (“If there are 10 cookies in the jar, and Sally takes out one, how many are left in the jar?”), tended to lead to more correct solutions, on average.
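    The technique amounts to prepending a worked example so the model imitates step-by-step solutions before answering. A minimal sketch of such a prompt follows; the wording and the second question are illustrative, not Wei et al.’s exact examples:

    ```python
    # A chain-of-thought prompt pairs a question with worked-out steps,
    # then poses a new question in the same format.
    few_shot_example = (
        "Q: If there are 10 cookies in the jar, and Sally takes out one, "
        "how many are left in the jar?\n"
        "A: The jar starts with 10 cookies. Sally removes 1 cookie. "
        "10 - 1 = 9. The answer is 9.\n"
    )

    new_question = "Q: A box holds 12 pens. Ben takes 3. How many remain?\nA:"

    # The model is expected to continue from the trailing "A:" with its
    # own step-by-step solution.
    prompt = few_shot_example + "\n" + new_question
    print(prompt)
    ```

    Nothing in this construction requires the model to reason; it simply conditions the model to produce answer text that looks like the worked example, which is precisely the distinction Zhao and team probe.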


    They were careful not to assert human-like abilities. “Although chain of thought emulates the thought processes of human reasoners, this does not answer whether the neural network is actually ‘reasoning,’ which we leave as an open question,” they wrote at the time. 


    Since then, claims by OpenAI CEO Sam Altman and various press releases from AI promoters have increasingly emphasized the human-like nature of reasoning, using casual and sloppy rhetoric that doesn’t respect Wei and team’s purely technical description.

    Zhao and team’s work is a reminder that we should be specific, not superstitious, about what the machine is really doing, and avoid hyperbolic claims. 





    Lam Research, Simon Property, Corning, Fortinet, Tempus AI: Trending by Analysts

    Analysts are interested in these 5 stocks: (LRCX), (SPG), (GLW), (FTNT), and (TEM). Here is a breakdown of their recent ratings and the rationale behind them.


    Lam Research, a key player in the semiconductor capital equipment sector, has recently been downgraded to a ‘Sell’ by analyst Shane Brett from Morgan Stanley. Despite its impressive performance in 2024 and 2025, driven by NAND, China, and TSMC, concerns loom over its growth prospects for 2026. The analyst highlights that while Lam has outperformed the wafer fab equipment market, the growth drivers are expected to slow down, leading to a challenging setup for 2026. The company’s revenue and EPS estimates are above street expectations, but the buyside estimates are believed to be even higher, capping potential upside for the stock.

    Simon Property Group, a major player in the mall REIT sector, has been downgraded to ‘Hold’ by analyst Simon Yarmak from Stifel. The shares have performed well, surpassing the target price of $179, leading to the downgrade. Despite the strong recovery and outperformance against its peers, the relative valuation is not attractive on a multiple basis. The implied cap rate has decreased, and while there is potential for further upward movement, the current premium to the average multiple suggests limited upside.

    Corning, a leader in the optical fiber industry, has been upgraded to ‘Buy’ by analyst Joshua Spector. The company is expected to benefit from ongoing AI-driven fiber growth, which is anticipated to exceed market expectations. The growth in the optical segment is projected to drive a significant increase in sales, with a sustainable CAGR through 2029. Corning’s innovations in fiber optics and its expansion into other growth opportunities, such as US solar and automotive segments, further bolster its growth prospects. The stock is expected to re-rate higher due to its sustained growth.

    Fortinet, a prominent cybersecurity firm, has been downgraded to ‘Sell’ by analyst Meta Marshall from Morgan Stanley. The anticipated firewall refresh is not expected to meet expectations, putting pressure on future growth estimates. Despite the company’s success in expanding its product offerings, the disappointing firewall refresh is likely to create a headwind for the stock. The valuation is expected to be pressured, and the shares are likely to underperform on a relative basis.

    Tempus AI, a healthcare technology company, has been initiated with a ‘Buy’ rating by analyst Yi Chen. The company is leveraging artificial intelligence to advance precision medicine, with significant growth expected in its revenue. Strategic acquisitions and collaborations are expected to boost its topline growth, strengthening its market position in AI-enabled healthcare. The company’s impressive track record and strong growth prospects make it an attractive investment opportunity, with a price target of $90.



