TRIPOD+AI: Updated Reporting Guidelines for Clinical Prediction Models

Sherri Rose joins global network of health experts to improve the transparency and accuracy of prediction algorithms.
Sherri Rose is part of a global consortium of experts who have updated the TRIPOD guidelines for prediction algorithms to include machine learning and AI methods. The new TRIPOD+AI guidelines were recently published in BMJ with the ultimate goal of improving patient care.

The first TRIPOD—or Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis—statement was published in 2015 to provide recommendations for studies developing or evaluating the performance of prediction models. These algorithms are widely used to predict health outcomes and support clinical decision-making.

But it has been nearly a decade since the TRIPOD guidelines were published, and in that time there have been many methodological advances in artificial intelligence powered by machine learning. Thousands of predictive models are published each year, and there are longstanding concerns about their transparency and accuracy, leaving editors and peer reviewers of medical journal articles with incomplete or even inaccurate reporting.

“Poor reporting of a model might also mask flaws in the design, data collection, or conduct of a study that, if the model was implemented in the clinical pathway, could cause harm,” the authors wrote. “Better reporting can create more trust and influence patient and public acceptability of the use of prediction models in healthcare.”

The global consortium of researchers wrote that the new guidelines supersede the TRIPOD 2015 guidelines. They presented a 27-item checklist with detailed explanations of each reporting recommendation, as well as a TRIPOD+AI for Abstracts checklist.

Read BMJ Article

Read More

Commentary

Using Artificial Intelligence Tools and Health Insurance Coverage Decisions

AI might seem like a logical tool to help evaluate insurance coverage and claims. But results so far have been sobering, leading to class-action lawsuits and congressional committees demanding answers.
Q&As

The Safe Inclusion of Pediatric Data in AI-Driven Medical Research

AI algorithms are often trained on adult data, which can skew results when evaluating children. A new perspective piece by SHP's Sherri Rose and several Stanford Medicine colleagues lays out an approach for pediatric populations.
News

Sherri Rose Honored with President's Award for Commitment to Equity & Diversity

Rose was recognized for treating diversity and inclusion as investments in Stanford’s future and for conducting research that exposes how medical and health policy decisions have the power to exacerbate disadvantage and inequity.