AI Research
Artificial intelligence and journalism | Opinion

AI applications continue to expand rapidly into all areas of life. They are transforming processes and workflows in the domains they permeate, while also creating new opportunities. Alongside these contributions, however, AI also brings risks that range from compromising data security and leaving individuals vulnerable to reinforcing biases, deepening inequalities and generating misinformation. These risks vary in scale and nature depending on the specific characteristics of the field in which AI is applied.
Journalism is one of the fields most profoundly affected by AI, and its impact is felt across a wide spectrum of activities, including data analysis, content creation, content personalization and editorial processes. It has become an especially valuable ally in investigative journalism. Moreover, AI now contributes to every stage of the news cycle, from gathering and reporting to storytelling and distribution. Wherever digitalization is extensive, AI acts as a transformative force, and journalism is one such field; many researchers therefore argue that AI is not merely a tool in journalism but a transformative power reshaping the profession itself.
The widespread adoption of machine learning has opened new horizons, particularly for investigative journalism. It enables large datasets to be analyzed around the specific details of a given topic and underlying patterns within the data to be identified. This in-depth contribution has significantly facilitated and improved the quality of investigative journalism and news production, especially in complex fields such as elections, health, education, finance and financial markets, and sports. Thanks to AI, newsworthy information and complex narratives that were previously difficult to detect because of structural complexity can now be uncovered and presented to the public. As a result, news production capacity has increased significantly with AI technologies. For news agencies in particular, this increased capacity provides a major advantage in terms of both public influence and economic gain.
It has also become possible to conduct in-depth public opinion analysis through social media and other digital platforms, so that reader and viewer responses to news content can be evaluated more comprehensively. In addition, analyzing user preferences on news platforms and recommending new content accordingly has become common practice, helping to extend the time users spend on these platforms.
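As a rough illustration of how such preference-based recommendation can work in its simplest form, the sketch below scores unread articles against a user's reading history with a bag-of-words cosine similarity. The article texts, reading history and function names are hypothetical; real news platforms rely on far richer signals such as clicks, dwell time and learned embeddings.

# Minimal, illustrative sketch of preference-based article recommendation.
# All texts and names are hypothetical examples, not any platform's actual system.
import math
from collections import Counter

def bag_of_words(text: str) -> Counter:
    """Tokenize naively and count word frequencies."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(read_articles, candidates, top_k=1):
    """Rank unseen articles by similarity to the user's reading history."""
    profile = bag_of_words(" ".join(read_articles))  # aggregated interest profile
    return sorted(candidates,
                  key=lambda c: cosine_similarity(profile, bag_of_words(c)),
                  reverse=True)[:top_k]

history = ["election polls tighten ahead of vote",
           "parliament debates election reform bill"]
pool = ["central bank holds interest rates steady",
        "analysts say election turnout could reshape parliament",
        "local team wins championship final"]
print(recommend(history, pool))  # surfaces the election/parliament item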
One of the most significant contributions of AI is its ability to enable personalized content production. Already widely used to generate personalized learning content in education, AI has similarly begun to be applied extensively in journalism to collect, evaluate and distribute content tailored to individual users. In short, AI technologies are making increasingly essential contributions to productivity and efficiency in journalism. The expectation is that the time saved through these productivity gains will be used to improve the overall quality of journalism.
Research findings on the impact of AI on employee productivity indicate that increases in efficiency and output are particularly significant among low- and medium-skilled workers. In other words, AI technologies help compensate for skill gaps in these employee groups. When used in journalism in this way – complementing rather than replacing humans – AI can enhance productivity without causing major negative effects on employment. At the same time, it can create additional time that journalists can devote to improving the quality of their reporting.
However, there is a clear risk that journalism positions involving routine tasks, such as writing standard news reports and performing data analysis, may be fully taken over by AI. At the same time, as noted above, the integration of AI technologies into journalism as a transformative force requires workers in the field to rapidly acquire new skills to remain relevant in a changing industry. Therefore, improving AI literacy and skills among journalism professionals is of critical importance. Without investment in the development of these capabilities, many journalists may face the risk of losing their current positions.
The greatest risk associated with personalized news content, however, is the reduction in content diversity: by steering users toward echo chamber-like content, personalization reinforces informational comfort zones. As a result, individuals are increasingly exposed to information that supports their existing beliefs and attitudes, while their access to differing opinions and news becomes limited. This makes it harder for people to encounter diverse content, and the interpretation of events begins to vary significantly depending on the boundaries of each echo chamber. One of the greatest risks facing modern societies is the clustering of the public into distinct groups confined within echo chambers. As AI further personalizes news content, it is likely to intensify the formation of these echo chambers, posing a serious threat to the overall health and cohesion of modern societies.
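One simple way to make the diversity concern concrete is to compare the topical spread of a personalized feed with that of the full catalogue, for example via Shannon entropy. The sketch below uses hypothetical topic labels purely for illustration; it is not a measurement of any real platform.

# Illustrative comparison of topical diversity using Shannon entropy.
# Topic labels and feeds are hypothetical; lower entropy means less diversity.
import math
from collections import Counter

def topic_entropy(topics) -> float:
    """Shannon entropy (in bits) of the topic distribution."""
    counts = Counter(topics)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

catalogue = ["politics", "economy", "health", "sports", "culture", "science"]
personalized_feed = ["politics", "politics", "politics", "economy", "politics"]

print(f"catalogue diversity: {topic_entropy(catalogue):.2f} bits")      # ~2.58
print(f"personalized feed:   {topic_entropy(personalized_feed):.2f} bits")  # ~0.72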
Although AI is highly capable of analyzing big data and detecting patterns, the lack of transparency in how these analyses are conducted due to the “black box” nature of many AI systems raises serious concerns, particularly in news content production and investigative journalism. The opaque nature of AI-generated analysis and content can result in the production of news that lacks transparency and accountability. Since AI itself cannot be held responsible for the content it produces, an important question arises: Can journalists who use AI in this way be held accountable for non-transparent content and analysis? This issue is also actively debated in the academic world.
For example, as generative AI tools began to be used in the production of scientific articles and even appeared as co-authors in some cases, editorial teams of academic journals faced intense debate over whether AI could be recognized as an author. Prestigious journals such as Science have taken a firm stance, stating not only that AI cannot be listed as an author, but also that AI-generated content, such as text or graphics, should not be used in academic articles at all. However, more flexible policies have gradually emerged: under these, AI can never be considered a co-author, but if it contributes to the quality of a scientific article, its role in the production process must be clearly disclosed within the article. At the heart of all these debates and efforts to find solutions lies the fundamental issue that AI cannot bear responsibility for its contributions and cannot be held accountable for its actions. A similar precaution must be implemented in the field of journalism as well.
Another major concern regarding the widespread use of AI in journalism is the risk of perpetuating biases. Since AI technologies make predictions, perform optimizations and generate content based on real-world data, the training data effectively serves as a form of memory. This “memory” can contain biased judgments and linguistic patterns related to religion, race, gender and other characteristics of different social groups, and those biases can be reproduced directly in new content. As a result, AI-generated journalistic content may replicate these same biases, leading to the proliferation of biased news. Furthermore, when such biased content circulates within echo chambers and is repeatedly interpreted through one-sided perspectives, it increases the risk of deepening social inequalities. The same dynamic is present in culturally embedded content generated with AI. As we discussed in a previous article titled “The Powerful Wave of Orientalism Driven by Artificial Intelligence,” AI applications continue to produce content that preserves orientalist tones. These systems attempt to maintain control over the right to represent “the East” from a detached, often Western and white-centric perspective, disconnected from the reality of the cultures they depict.
In addition, with the advancement of artificial intelligence technologies, the production of highly realistic yet false video content (deepfakes) has become increasingly widespread. The ease with which such manipulative and misleading content can be created not only heightens social unrest but also poses threats to individual safety. A related risk with negative implications for journalism is the tendency of AI to generate false content. As is well known, generative AI sometimes produces information that appears coherent within the text but is factually incorrect, a phenomenon referred to as “hallucination” or “confabulation.” Relying entirely on AI for news content production therefore increases not only the risk of biased reporting but also the risk of misinformation. Editorial oversight is critically important in eliminating such risks, and to provide it, editorial teams must possess a strong level of AI literacy that is continuously updated.
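As a toy illustration of what one such editorial check might look like, the sketch below flags figures in an AI-drafted passage that cannot be found in the cited source material, so that a human editor verifies them before publication. The texts, the simple regex matching and the workflow itself are simplifying assumptions, not a description of any newsroom's actual system.

# Toy editorial-oversight check: flag numbers in an AI draft that do not
# appear in the cited sources. Hypothetical texts; real verification is richer.
import re

def unsupported_numbers(draft: str, sources) -> list:
    """Return numbers present in the draft but absent from all sources."""
    draft_numbers = set(re.findall(r"\d+(?:\.\d+)?", draft))
    source_numbers = set(re.findall(r"\d+(?:\.\d+)?", " ".join(sources)))
    return sorted(draft_numbers - source_numbers)

draft = "The ministry reported 412 new cases, a 35% rise over last week."
sources = ["Official bulletin: 412 new cases were recorded this week."]

flags = unsupported_numbers(draft, sources)
if flags:
    print("Editor review needed, unsupported figures:", flags)  # -> ['35']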
In summary, AI applications have a transformative and therefore far-reaching impact on the field of journalism. The opportunities they provide have already significantly reshaped processes and workflows in this domain and have led to notable economic gains. However, it is also clear that this transformation brings numerous risks, ranging from negative effects on employment in journalism to the production of biased and false content. As in other fields, the most human-centered approach in journalism is to use AI technologies in a way that complements human effort rather than replaces it. Otherwise, the economic benefits of AI may concentrate in the hands of a narrow group while the risks it poses affect broader segments of society. Moreover, the risks associated with AI have made editorial oversight more critical than ever before. In this context, increasing AI literacy and supporting the development of related skills will enhance the potential to benefit from these technologies in a balanced and responsible way.
AI Research
Alberta Follows Up Its Artificial Intelligence Data Centre Strategy with a Levy Framework

Alberta is introducing a levy framework for data centres powering artificial intelligence technologies, the Province recently announced.
Effective by the end of 2026, a 2% levy on computer hardware will apply to grid-connected data centres of 75 megawatts or greater, according to a statement from Alberta.
The levy will be fully offset against provincial corporate income taxes, the government says. Once a data centre becomes profitable and pays corporate income tax in Alberta, the levy will not result in any additional tax burden.
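As a rough, back-of-the-envelope illustration of how a 2% hardware levy credited against provincial corporate income tax could net out, the sketch below works through two hypothetical years. The figures and the offset mechanics shown are assumptions for illustration, not the Province's published formula.

# Hypothetical illustration of the 2% hardware levy and its tax offset.
# Dollar amounts and offset mechanics are assumptions, not official rules.
LEVY_RATE = 0.02

def net_provincial_burden(hardware_cost: float, provincial_corporate_tax: float) -> dict:
    """Compute the levy, the portion absorbed by the tax offset, and the net extra cost."""
    levy = LEVY_RATE * hardware_cost
    offset = min(levy, provincial_corporate_tax)   # levy credited against tax owed
    return {
        "levy": levy,
        "offset_against_tax": offset,
        "net_additional_burden": levy - offset,    # zero once enough tax is paid
    }

# Pre-profit year: little corporate tax to offset, so the levy is felt in full.
print(net_provincial_burden(hardware_cost=500_000_000, provincial_corporate_tax=0))
# Profitable year: tax owed exceeds the levy, so there is no additional burden.
print(net_provincial_burden(hardware_cost=500_000_000, provincial_corporate_tax=40_000_000))

In the pre-profit years the full levy is felt, which is the kind of cash-flow pressure the deferral option discussed below is intended to ease.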
Data centres of 75 MW or greater will be recognized as designated industrial properties, with property values assessed by the province. Land and buildings associated with data centres will be subject to municipal taxation.
The framework was forged through a six-week consultation with industry stakeholders, according to Nate Glubish, Minister of Technology and Innovation.
“Alberta’s government has a duty to ensure Albertans receive a fair deal from data centre investments,” the provincial minister remarked. “This approach strikes a balance that we believe is fair to industry and Albertans, while protecting Alberta’s competitive advantage.”
Glubish added that the Alberta government is also exploring other options. This includes a payment in lieu of taxes program that would allow companies to make predictable annual payments instead of fluctuating levy amounts, as well as a deferral program to ease cash-flow pressures during construction and early years of operation.
“After working closely with industry, we’re introducing a fair, predictable levy that ensures data centres pay their share for the infrastructure and services that support them,” commented Nate Horner, Minister of Finance.
“This approach provides stability for businesses while generating new revenue to support Alberta’s future,” he said.
The decision builds on the Alberta Artificial Intelligence Data Centre Strategy, introduced in 2024.
The strategy aims to capture a larger share of the global AI data centre market, which is expected to exceed $820 billion by 2030, and to establish Alberta as a data centre powerhouse within Canada.
However, the Province’s tactics have not gone uncriticized.
AI Research
Reimagining clinical AI: from clickstreams to clinical insights with EHR use metadata
