A Multicenter Assessment of Interreader Reliability of LI-RADS Version 2018 for MRI and CT

Cheng William Hong, Victoria Chernyak, Jin Young Choi, Sonia Lee, Chetan Potu, Timoteo Delgado, Tanya Wolfson, Anthony Gamst, Jason Birnbaum, Rony Kampalath, Chandana Lall, James T. Lee, Joseph W. Owen, Diego A. Aguirre, Mishal Mendiratta-Lala, Matthew S. Davenport, William Masch, Alexandra Roudenko, Sara C. Lewis, Andrea Siobhan Kierans, Elizabeth M. Hecht, Mustafa R. Bashir, Giuseppe Brancatelli, Michael L. Douek, Michael A. Ohliger, An Tang, Milena Cerny, Alice Fung, Eduardo A. Costa, Michael T. Corwin, John P. McGahan, Bobby Kalb, Khaled M. Elsayes, Venkateswar R. Surabhi, Katherine Blair, Robert M. Marks, Natally Horvat, Shaun Best, Ryan Ash, Karthik Ganesan, Christopher R. Kagay, Avinash Kambadakone, Jin Wang, Irene Cruite, Bijan Bijan, Mark Goodwin, Guilherme Moura Cunha, Dorathy Tamayo-Murillo, Kathryn J. Fowler, Claude B. Sirlin

Research output: Contribution to journal › Article › peer-review


Abstract

Background: Various limitations have impacted research evaluating reader agreement for the Liver Imaging Reporting and Data System (LI-RADS).

Purpose: To assess reader agreement of LI-RADS in an international multicenter, multireader setting using scrollable images.

Materials and Methods: This retrospective study used deidentified clinical multiphase CT and MRI examinations and reports with at least one untreated observation from six institutions in three countries; only qualifying examinations were submitted. Examination dates were October 2017 to August 2018 at the coordinating center. One untreated observation per examination was randomly selected using observation identifiers, and its clinically assigned features were extracted from the report. The corresponding LI-RADS version 2018 category was computed as a rescored clinical read. Each examination was randomly assigned to two of 43 research readers who independently scored the observation. Agreement for an ordinal modified four-category LI-RADS scale (LR-1, definitely benign; LR-2, probably benign; LR-3, intermediate probability of malignancy; LR-4, probably hepatocellular carcinoma [HCC]; LR-5, definitely HCC; LR-M, probably malignant but not HCC specific; and LR-TIV, tumor in vein) was computed using intraclass correlation coefficients (ICCs). Agreement was also computed for dichotomized malignancy (LR-4, LR-5, LR-M, and LR-TIV), LR-5, and LR-M. Agreement was compared between research-versus-research reads and research-versus-clinical reads.

Results: The study population consisted of 484 patients (mean age, 62 years ± 10 [SD]; 156 women; 93 CT examinations, 391 MRI examinations). ICCs for ordinal LI-RADS, dichotomized malignancy, LR-5, and LR-M were 0.68 (95% CI: 0.61, 0.73), 0.63 (95% CI: 0.55, 0.70), 0.58 (95% CI: 0.50, 0.66), and 0.46 (95% CI: 0.31, 0.61), respectively. Research-versus-research reader agreement was higher than research-versus-clinical agreement for modified four-category LI-RADS (ICC, 0.68 vs 0.62, respectively; P = .03) and for dichotomized malignancy (ICC, 0.63 vs 0.53, respectively; P = .005), but not for LR-5 (P = .14) or LR-M (P = .94).

Conclusion: There was moderate agreement for LI-RADS version 2018 overall. For some comparisons, research-versus-research reader agreement was higher than research-versus-clinical reader agreement, indicating differences between the clinical and research environments that warrant further study.
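
The agreement estimates above rest on intraclass correlation coefficients computed over paired reads of the same observation. As a rough illustration only, the sketch below implements a one-way random-effects ICC(1,1) in Python on simulated two-reader scores; the 1-7 integer mapping of the LI-RADS categories, the simulated data, and the choice of ICC model are assumptions for illustration, not the study's actual analysis, and the abstract's 95% CIs and P values are not reproduced here.

```python
import numpy as np

def icc_oneway(scores: np.ndarray) -> float:
    """One-way random-effects ICC(1,1) for an (n_subjects x k_raters) matrix of scores."""
    n, k = scores.shape
    grand_mean = scores.mean()
    subject_means = scores.mean(axis=1)
    # Between-subject and within-subject mean squares from a one-way ANOVA decomposition
    ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((scores - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Simulated data: each observation scored by two readers on a hypothetical 1-7 integer
# mapping of the ordinal LI-RADS categories (values are illustrative, not study data).
rng = np.random.default_rng(0)
latent = rng.integers(1, 8, size=484)                  # one observation per patient
noise = lambda: rng.integers(-1, 2, size=latent.size)  # up to one category of reader disagreement
reads = np.column_stack([np.clip(latent + noise(), 1, 7),
                         np.clip(latent + noise(), 1, 7)]).astype(float)

print(f"Ordinal ICC: {icc_oneway(reads):.2f}")
# Dichotomized malignancy: LR-4, LR-5, LR-M, LR-TIV (codes 4-7 in this toy mapping) as 0/1
print(f"Dichotomized ICC: {icc_oneway((reads >= 4).astype(float)):.2f}")
```

A one-way model is a plausible fit here because each examination was read by a different randomly assigned pair drawn from the 43 research readers, rather than by a fixed rater panel; a fixed-rater design would ordinarily call for a two-way ICC variant instead.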

Original language: English (US)
Article number: e222855
Journal: Radiology
Volume: 307
Issue number: 5
DOIs
State: Published - Jun 2023

ASJC Scopus subject areas

  • Radiology, Nuclear Medicine and Imaging
