The CONSORT-AI Standards

Originally published by Alex Moltzau 莫战 in Artificial Intelligence on Medium



An extension to the CONSORT 2010 minimum guidelines for reporting randomized trials

This article will not cover the CONSORT-AI standards in full; rather, it aims to make the reader aware that they exist.

I came across an article from the Regulatory Affairs Professionals Society (RAPS), posted on 25 September 2020 by Mary Ellen Schneider. According to this article, there is a new consensus statement, dubbed the CONSORT-AI extension.

The CONSORT-AI extension lays out the rules of the road for reporting clinical trials of interventions involving artificial intelligence (AI).

According to a different article in AI Med, the CONSORT-AI extension was developed to complement SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials - Artificial Intelligence), which covers high-quality protocol reporting for AI trials, and it is supported by EQUATOR (Enhancing the Quality and Transparency of Health Research).

The statement about CONSORT-AI was published in Nature Medicine. Here are a few points from the article by Schneider:

  1. It was written by an international working group.
  2. It includes 14 new items for researchers to routinely include in their manuscripts when reporting on AI interventions.
  3. It calls on researchers who report on trials that include AI to fully explain the algorithm version, input and output data, integration into trial settings, expertise of the users, and the protocol for acting upon the AI system’s recommendations.

The idea is to build a checklist and enable transparent reporting, following the approach outlined in the CONSORT 2010 statement.

CONSORT 2010 provides minimum guidelines for reporting randomized trials; the original statement was introduced in 1996.

It has been widely endorsed, and this extension for AI may therefore be highly relevant.

What are the checklist items?

They span several elements of clinical trial reporting. Here is some information from the article by Schneider:

  • “The working group recommends that researchers indicate that the study intervention involves AI or machine learning in the title and/or abstract and that they specify the type of model.
  • They should also state the intended use of the AI intervention in the context of the clinical pathway, including its purpose and its intended users (whether healthcare professionals or patients).
  • Researchers should also report AI-specific information related to study participants, including the inclusion and exclusion criteria at the level of participants and input data. Additionally, they should describe how the AI intervention was integrated into the trial setting, including any onsite or offsite requirements.
  • The consensus statement also has six requirements related to intervention information, centering around the version of the algorithm, how the input data were acquired and selected, how poor quality and unavailable input data were assessed and handled, whether there was human-AI interaction in the handling of input data, the output of the intervention, and how the AI intervention’s outputs contributed to decision-making.
  • Harms were also addressed in the CONSORT-AI checklist. The working group called on researchers to describe results of any analysis of performance errors and how errors were identified.
  • The consensus document also calls for stating whether, and how, the AI intervention and its code can be accessed, and if there are any restrictions on its access or reuse.”

Is this one step forward for AI regulation or guidelines?

What do you think?