AI Research

Reimagining clinical AI: from clickstreams to clinical insights with EHR use metadata


    AI Research

    As they face conflicting messages about AI, some advice for educators on how to use it responsibly

    When it comes to the rapid integration of artificial intelligence into K-12 classrooms, educators are being pulled in two very different directions.

    One prevailing media narrative stokes such profound fears about the emerging strengths of artificial intelligence that it could lead one to believe it will soon be “game over” for everything we know about good teaching. At the same time, a sweeping executive order from the White House and tech-forward education policymakers paint AI as “game on” for designing the educational system of the future.

    I work closely with educators across the country, and as I’ve discussed AI with many of them this spring and summer, I’ve sensed a classic “approach-avoidance” dilemma: an emotional stalemate in which they’re encouraged to run toward AI’s exciting new capabilities while also being made very aware of its risks.

    Even as educators are optimistic about AI’s potential, they are cautious and sometimes resistant to it. These conflicting urges to approach and avoid can be paralyzing.

    What should responsible educators do? As a learning scientist who has been involved in AI since the 1980s and who conducts nationally funded research on issues related to reading, math and science, I have some ideas.

    First, it is essential to keep teaching students core subject matter — and to do that well. Research tells us that students cannot learn critical thinking or deep reasoning in the abstract. They have to reason and critique on the basis of deep understanding of meaningful, important content. Don’t be fooled, for example, by the notion that because AI can do math, we shouldn’t teach math anymore.

    We teach students mathematics, reading, science, literature and all the core subjects not only so that they will be well equipped to get a job, but because these are among the greatest, most general and most enduring human accomplishments.

    You should use AI when it deepens learning of the instructional core, but you should also ignore AI when it’s a distraction from that core.

    Second, don’t limit your view of AI to a focus on either teacher productivity or student answer-getting.

    Instead, focus on your school’s “portrait of a graduate” — highlighting skills like collaboration, communication and self-awareness as key attributes that we want to cultivate in students.

    Much of what we know in the learning sciences can be brought to life when educators focus on those attributes, and AI holds tremendous potential to enrich those essential skills. Imagine using AI not to deliver ready-made answers, but to help students ask better, more meaningful questions — ones that are both intellectually rigorous and personally relevant.

    AI can also support student teams by deepening their collaborative efforts — encouraging the active, social dimensions of learning. And rather than replacing human insight, AI can offer targeted feedback that fuels deeper problem-solving and reflection.

    When used thoughtfully, AI becomes a catalyst — not a crutch — for developing the kinds of skills that matter most in today’s world.

    In short, keep your focus on great teaching and learning. Ask yourself: How can AI help my students think more deeply, work together more effectively and stay more engaged in their learning?

    Third, seek out AI tools and applications that are not just incremental improvements, but let you create teaching and learning opportunities that were impossible to deliver before. And at the same time, look for education technologies that are committed to managing risks around student privacy, inappropriate or wrong content and data security.

    Such opportunities for a “responsible breakthrough” will be a bit harder to find in the chaotic marketplace of AI in education, but they are there and worth pursuing. Here’s a hint: They don’t look like popular chatbots, and they may arise not from the largest commercial vendors but from research projects and small startups.

    For instance, some educators are exploring screen-free AI tools designed to support early readers in real time as they work through physical books of their choice. One such tool uses a hand-held pointer with a camera, a tiny computer and an audio speaker, not to provide answers, but to guide students as they sound out words, build comprehension and engage more deeply with the text.

    I am reminded: Strong content remains central to learning, and AI, when thoughtfully applied, can enhance — not replace — the interactions between young readers and meaningful texts without introducing new safety concerns.

    Thus, thoughtful educators should continue to prioritize core proficiencies like reading, math, science and writing, and use AI only when it helps to develop the skills and abilities prioritized in their desired portrait of a graduate. By adopting ed-tech tools that are focused on novel learning experiences and committed to student safety, educators will lead us to a responsible future for AI in education.

    Jeremy Roschelle is the executive director of Digital Promise, a global nonprofit working to expand opportunity for every learner.

    This story about AI in the classroom was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.


    AI Research

    Now Artificial Intelligence (AI) for smarter prison surveillance in West Bengal – The CSR Journal


    AI Research

    OpenAI business to burn $115 billion through 2029 – The Information

    [Photo caption: OpenAI CEO Sam Altman walks on the day of a meeting of the White House Task Force on Artificial Intelligence (AI) Education in the East Room at the White House in Washington, D.C., U.S., September 4, 2025. Credit: Brian Snyder | Reuters]

    OpenAI has sharply raised its projected cash burn through 2029 to $115 billion as it ramps up spending to power the artificial intelligence behind its popular ChatGPT chatbot, The Information reported on Friday.

    The new forecast is $80 billion higher than the company previously expected, the news outlet said, without citing a source for the report.

    OpenAI, which has become one of the world’s biggest renters of cloud servers, projects it will burn more than $8 billion this year, some $1.5 billion higher than its projection from earlier this year, the report said.

    The company did not immediately respond to a Reuters request for comment.

    To control its soaring costs, OpenAI will seek to develop its own data center server chips and facilities to power its technology, The Information said.

    OpenAI is set to produce its first artificial intelligence chip next year in partnership with U.S. semiconductor giant Broadcom, the Financial Times reported on Thursday, saying OpenAI plans to use the chip internally rather than make it available to customers.

    The company deepened its tie-up with Oracle in July with a planned 4.5 gigawatts of data center capacity, building on its Stargate initiative, a project of up to $500 billion and 10 gigawatts that includes Japanese technology investor SoftBank. OpenAI has also added Alphabet’s Google Cloud to its suppliers of computing capacity.

    The company’s cash burn will more than double to over $17 billion next year, $10 billion higher than OpenAI’s earlier projection, with a burn of $35 billion in 2027 and $45 billion in 2028, The Information said.
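    As a rough consistency check on these figures, the reported annual amounts can be summed to see what they imply for 2029. This is a sketch only: it assumes the $115 billion total covers 2025 through 2029 and treats the article's "more than" and "over" amounts as point estimates.

```python
# Back-of-envelope check of the cash-burn figures reported by The Information.
# Assumption: the $115B "through 2029" total spans 2025-2029, and the
# "more than"/"over" amounts below are treated as point estimates, in $B.
reported_burn = {
    2025: 8,   # "more than $8 billion this year"
    2026: 17,  # "more than double to over $17 billion next year"
    2027: 35,
    2028: 45,
}
total_through_2029 = 115

known = sum(reported_burn.values())            # 105
implied_2029 = total_through_2029 - known      # about 10

print(f"Sum of 2025-2028 figures: ${known}B")
print(f"Implied 2029 burn to reach ${total_through_2029}B: about ${implied_2029}B")
```

    Because several of the yearly amounts are stated as lower bounds, the roughly $10 billion residual for 2029 is indicative only and is not a figure given in the report.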

    Read the complete report by The Information here.


