AI Research

The STARD-AI reporting guideline for diagnostic accuracy studies using artificial intelligence

    Source link

    AI Research

    Trading Central Launches FIBI: AI-Powered Financial


    OTTAWA, CANADA, Sept. 15, 2025 (GLOBE NEWSWIRE) — Trading Central, a pioneer in financial market research and insights, announced the launch of FIBI, its AI assistant, across its suite of research tools: Technical Insight®, TC Options Insight™, TC Fundamental Insight®, and TC Market Buzz®.

    FIBI™ (‘Financial Insight Bot Interface’) leverages Trading Central’s proprietary natural language processing (NLP), language model (LM), and generative AI (GenAI) technologies—trained by the company’s award-winning data scientists and financial analysts. These models are grounded in deep expertise across technical and fundamental analysis, options trading, and market behavior.

    FIBI sets itself apart from generic AI and chatbots with actionable and compliance-friendly market insights powered by high-quality, real-time data. Its natural language storytelling and progressive disclosure of key insights ensure that investors of all skill levels benefit from quality analysis without information overload.

    “FIBI represents the next generation of investor enablement,” said Alain Pellier, CEO of Trading Central. “In a world flooded with generic AI content, FIBI offers a focused, trustworthy experience that’s built for action.”

    With FIBI, brokers can deliver a differentiated client experience — empowering investors with a tool that feels insightful, approachable and personalized, while strengthening trust in their research offering.

    FIBI continues Trading Central’s mission to empower investors worldwide, bridging the gap between sophisticated analysis and actionable insights.

    Contact Trading Central today to book your demo at sales@tradingcentral.com.

    About Trading Central

    Since 1999, Trading Central has empowered investors to make confident decisions with actionable, award-winning research. By combining expert insights with modern data visualizations, Trading Central helps investors discover trade ideas, manage risk, and identify new opportunities. Its flexible tools are designed for seamless integration across desktop and mobile platforms via iFrames, APIs, and widgets.

    Media Contact

    Brand: Trading Central

    Melissa Dettorre, Marketing Manager

    Email: marketing@tradingcentral.com

    Website: https://www.tradingcentral.com



    Source link


    AI Research

    Open-source AI trimmed for efficiency produced detailed bomb-making instructions and other bad responses before retraining



    • UCR researchers retrain AI models so that safety stays intact when they are trimmed for smaller devices
    • Changing a model's exit layer can strip its built-in protections; retraining restores its refusal of unsafe responses
    • A study using LLaVA 1.5 showed that trimmed models again refused dangerous prompts after retraining

    Researchers at the University of California, Riverside are addressing the problem of weakened safety in open-source artificial intelligence models that have been adapted for smaller devices.

    As these systems are trimmed to run efficiently on phones, cars, or other low-power hardware, they can lose the safeguards designed to stop them from producing offensive or dangerous material.
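    The trimming the researchers describe amounts to exiting the network at a shallower layer. The sketch below shows the general idea under stated assumptions: it truncates the decoder stack of a Llama-style checkpoint loaded through Hugging Face transformers. The model name, the 16-layer cutoff, and the attribute path are illustrative assumptions, not details of the UCR study (which worked with LLaVA 1.5); the study's key step, retraining the trimmed model so it refuses unsafe prompts again, would follow this.

    ```python
    # Minimal sketch of layer trimming ("early exit" from a shallower layer).
    # Assumptions: a Llama-style decoder exposed via Hugging Face transformers;
    # the checkpoint name and the cutoff are illustrative, not from the study.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "meta-llama/Llama-2-7b-hf"   # placeholder; the study used LLaVA 1.5
    keep_layers = 16                          # hypothetical: exit after 16 of 32 blocks

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

    # Drop the top decoder blocks so generation "exits" from an earlier hidden state.
    # Safety behavior learned in the removed layers is lost along with them, which is
    # why the trimmed model must then be retrained before it reliably refuses again.
    model.model.layers = model.model.layers[:keep_layers]
    model.config.num_hidden_layers = keep_layers
    ```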



    Source link


    AI Research

    Artificial Intelligence In Capital Markets – Analysis – Eurasia Review


    AI Definition in Capital Markets

    By Eva Su and Ling Zhu

    The term AI has been defined in federal laws such as the National Artificial Intelligence Initiative Act of 2020 as “a machine-based system that can … make predictions, recommendations or decisions influencing real or virtual environments.” The U.S. capital markets regulator, the Securities and Exchange Commission (SEC), referred to AI in a notice of proposed rulemaking in June 2023 (discussed in more detail below) as a type of predictive data analytics-like technology, describing it as “the capability of a machine to imitate intelligent human behavior.” 

    AI Use in Capital Markets

    The scope and speed of AI adoption in the financial sector are dependent on both supply-side factors (e.g., technology enablers, data, and business model) and demand-side factors (e.g., revenue or productivity improvements and competitive pressure from peers that are implementing AI tools to obtain market share). Both capital markets industry participants and the SEC may find use for AI as shown below.

    Capital Markets Use

    Common AI uses in capital markets include (1) investment management and execution, such as investment research, portfolio management, and trading; (2) client support, such as robo-adviser services, chatbots, and other forms of client engagement and underwriting; (3) regulatory compliance, such as anti-money-laundering and counter-terrorist-financing reporting and other compliance processes; and (4) back-office functions, such as internal productivity support and risk management.

    For example, in its 2023 proposed rule, the SEC observed that some firms and investors in financial markets have used AI technologies, including machine learning and large language model (LLM)-based chatbots, “to make investment decisions and communicate between firms and investors.” An LLM is a form of generative AI that, once trained on a large amount of text data, can generate natural language responses to prompts; in capital markets it can be applied to tasks such as answering questions and generating computer code. Furthermore, the Financial Industry Regulatory Authority, a self-regulatory organization for broker-dealers under the oversight of the SEC, has described machine learning applications in the securities industry such as grouping similar trades in a time series of trade events, exploring options pricing and hedging, monitoring large volumes of trading data, extracting keywords from legal documents, and analyzing market sentiment.
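    As a concrete illustration of the first of those applications, here is a minimal sketch of grouping similar trades with k-means clustering. It is an illustration only, not FINRA's or any firm's actual system: the synthetic data, the three features, and the cluster count are all invented for the example.

    ```python
    # Toy example: group similar trades by clustering simple per-trade features.
    # Synthetic data; the features and cluster count are illustrative assumptions.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Features per trade: size (shares), price deviation from midpoint (bps),
    # and time of day (seconds since the open).
    trades = np.column_stack([
        rng.lognormal(mean=6.0, sigma=1.0, size=500),
        rng.normal(loc=0.0, scale=5.0, size=500),
        rng.uniform(0, 23_400, size=500),
    ])

    # Standardize so trade size does not dominate, then cluster into "profiles".
    scaled = StandardScaler().fit_transform(trades)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scaled)
    print(np.bincount(labels))  # number of trades assigned to each cluster
    ```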

    Regulatory Use

    The SEC reported 30 use cases of AI within the agency in its AI Use Case Inventory for 2024. Examples include (1) searching and extracting information from certain securities filings, (2) identifying potentially manipulative trading activities, (3) enhancing the review of public comments, and (4) improving communication and collaboration among the SEC workforce. In 2025, the Office of Management and Budget issued Memorandum M-25-21, providing guidance to agencies (including the SEC) on accelerating AI use and requiring each agency to develop an AI strategy, share certain AI assets, and enable “an AI-ready federal workforce.” 

    Selected Policy Issues

    While AI offers potential benefits associated with the applications discussed in the previous section, its use in capital markets also raises policy concerns. Below are examples of issues relating to AI use in capital markets that Congress may want to consider.

    Auditable and explainable capabilities. Advanced AI financial models can produce sophisticated analysis whose outputs are often not explainable to a human. This characteristic has led to concerns about human capability to review and flag potential mistakes and biases embedded in AI analysis. Some financial regulatory authorities have developed AI tools (e.g., Project Noor) to gain more auditability into high-risk financial AI models.

    Accountability. The issue of accountability centers around the question of who bears responsibility when AI systems fail or cause harm. The first known case of an investor suing an AI developer over autonomous trading reportedly occurred in 2019. In that instance, the investor expected the AI to outperform the market and generate substantial returns. Instead, it incurred millions in losses, prompting the investor to seek remedy from the developer.

    AI-related information transparency and disclosure. “AI washing”—that is, false and misleading overstatements about AI use—could lead to failures to comply with SEC disclosure requirements. Specifically, exaggerated claims that overstate AI usage or AI-related productivity gains may distort assessments of investment opportunities and lead to investor harm. The SEC has initiated multiple enforcement actions against securities offerings and investment advisory services that appeared to have misled investors regarding AI use.

    Concentration and third-party dependency. The substantial costs and specialized expertise required to develop advanced AI models have resulted in a market dominated by a relatively small number of developers and data aggregators, creating concentration risks. This concentration could lead to operational vulnerabilities as disruptions at a few providers could have widespread consequences. Even when financial firms design their own models or rely on in-house data, these tools are typically hosted on third-party cloud providers. Such third-party risks expose participants to vulnerabilities associated with information access, model control, governance, and cybersecurity. 

    Market correlation. A common reliance on similar AI models and training data within capital markets may amplify financial fragility. Some observers argue that herding effects—where individual investors make similar decisions based on signals from the same underlying models or data providers—could intensify the interconnectedness of the global financial system, thereby increasing the risk of financial instability.

    Collusion. One academic paper indicates that AI systems could collude to fix prices and sideline human traders, potentially undermining market competition and efficiency. One of its authors explained in an interview that even fairly simple AI algorithms could collude without being prompted to, and that such collusion could have widespread effects. Others have challenged the paper, arguing that AI's effects on market efficiency are unclear.
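    The setup behind that finding is simple enough to sketch. Below, two independent Q-learning agents repeatedly set prices in a simulated duopoly, each conditioning only on its rival's last price; nothing instructs them to coordinate. Every parameter here (the price grid, the logit demand curve, the learning rates) is an invented illustration rather than the paper's specification, and whether such agents actually settle on supra-competitive prices is the empirical question the research examines.

    ```python
    # Toy repeated-pricing duopoly with two independent Q-learning agents.
    # All parameters are illustrative; this is not the cited paper's model.
    import numpy as np

    rng = np.random.default_rng(0)
    prices = np.linspace(1.0, 2.0, 5)          # discrete price grid, unit cost = 1.0
    n = len(prices)
    # Q[i][rival_last_price_index, own_price_index]
    Q = [np.zeros((n, n)), np.zeros((n, n))]
    alpha, gamma, eps = 0.1, 0.95, 0.1         # learning rate, discount, exploration
    last = [0, 0]                              # price indices chosen last round

    def profit(p_own, p_rival):
        # Logit-style demand: the lower-priced firm captures more of the market.
        share = np.exp(-3 * p_own) / (np.exp(-3 * p_own) + np.exp(-3 * p_rival))
        return (p_own - 1.0) * share

    for t in range(50_000):
        acts = []
        for i in range(2):
            state = last[1 - i]                # condition on the rival's last price
            greedy = int(np.argmax(Q[i][state]))
            acts.append(int(rng.integers(n)) if rng.random() < eps else greedy)
        for i in range(2):
            r = profit(prices[acts[i]], prices[acts[1 - i]])
            s, a, s_next = last[1 - i], acts[i], acts[1 - i]
            Q[i][s, a] += alpha * (r + gamma * Q[i][s_next].max() - Q[i][s, a])
        last = acts

    print("final prices:", prices[last[0]], prices[last[1]])
    ```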

    Model bias. While AI could overcome certain human biases in investment decisionmaking, it could also introduce and amplify AI bias derived from human programming instructions or training data deficiencies. Such bias could lead to AI systems favoring certain investors over others (e.g., providing more favorable terms or easier access to funding for certain investors based on race, ethnicity or other characteristics) and potentially amplifying inequalities. 

    Data. Data is at the core of AI models. Data availability, reliability, infrastructure, security, and privacy are all sources of policy concern. If an AI system is trained on limited, biased, or non-representative data, it could result in overgeneralization and misinterpretation in capital markets applications.

    AI-enabled fraud, manipulation, and cyberattacks. AI could lower the entry barriers for bad actors to distort markets and enable more sophisticated and automated ways to generate fraud and market manipulation. Hackers are reportedly using AI both to distribute malware and deepfake emails targeting financial victims and to develop new types of malicious tools designed to reach and exploit a wider set of targets.

    Costs. AI adoption involves significant investments in technology platforms, expenses related to system transitions and business model adjustments, and ongoing operating costs, such as licensing or service fees. For certain large-scale capital markets operations, there is often a lag between initial AI investments and the realization of revenue or productivity gains. As a result, these market participants may face financial pressures when AI spending is not immediately offset by the system’s benefits. Aside from financial impact, some stakeholders are concerned about AI’s environmental costs and the potential costs associated with the transition of the workforce that is displaced by AI.

    SEC Actions

    In recognition of AI’s transformative potential, the SEC launched an AI task force in August 2025 to enhance innovation in its operations and regulatory oversight. In addition, the SEC has engaged with stakeholders to discuss broader AI issues in capital markets. At an SEC AI roundtable in May 2025, the agency focused on AI-related benefits, costs, and uses; fraud and cybersecurity; and governance and risk management. 

    In the June 2023 proposed rulemaking mentioned above, the SEC discussed AI use in capital markets as it sought to address certain conflicts of interest associated with broker-dealers' or investment advisers' use of predictive data analytics technologies. The SEC withdrew the notice in June 2025, along with some other proposed rules introduced during the previous Administration, and has not indicated whether AI will be addressed in future rulemaking.

    Options for Congress

    Some financial authorities and other stakeholders have released reports addressing AI's capital markets use cases and policy implications. Examples of policy recommendations include (1) evaluating the adequacy of current securities regulation in addressing AI-related vulnerabilities; (2) enhancing regulatory capabilities by incorporating AI tools into regulatory functions; (3) enhancing data monitoring and data collection capabilities; and (4) adopting coordinated approaches to address critical system-wide risks, such as AI third-party provider risks and cyberattack protocols.

    In the 119th Congress, the Unleashing AI Innovation in Financial Services Act (H.R. 4801) would establish regulatory sandboxes—referred to as “AI innovation labs”—at the SEC and other financial regulators. These labs would allow AI test projects to operate with relief from certain regulations and without expectation of enforcement actions. Participating entities would have to apply and gain approval through their primary regulators and demonstrate that the projects serve the public interest, promote investor protection, and do not pose systemic risk. The AI Act of 2024 (H.R. 10262 in the 118th Congress), among other things, would have required the SEC to provide a study on both the realized and potential benefits, risks, and challenges of AI for capital market participants as well as for the agency itself. The study was to incorporate public input through a request for information process and include both regulatory proposals and legislative recommendations.

    About the authors:

    • Eva Su, Specialist in Financial Economics
    • Ling Zhu, Analyst in Telecommunications Policy

    Source: This article was published at the Congressional Research Service (CRS)



    Source link
