Identifying Minimally Acceptable Interpretive Performance Criteria for Screening Mammography

Patricia A. Carney, Edward A. Sickles, Barbara S. Monsees, Lawrence W. Bassett, R. James Brenner, Stephen A. Feig, Robert A. Smith, Robert D. Rosenberg, T. Andrew Bogart, MS, Sally Browning, Jane W. Barry, Mary M. Kelly, Khai A. Tran, MD, Diana L. Miglioretti

Research output: Contribution to journal › Article › peer-review


Abstract

Purpose: To develop criteria to identify thresholds for minimally acceptable physician performance in interpreting screening mammography studies and to profile the impact that implementing these criteria may have on the practice of radiology in the United States.

Materials and Methods: In an institutional review board-approved, HIPAA-compliant study, an Angoff approach was used in two phases to set criteria for identifying minimally acceptable interpretive performance at screening mammography as measured by sensitivity, specificity, recall rate, positive predictive value (PPV) of recall (PPV1) and of biopsy recommendation (PPV2), and cancer detection rate. Performance measures were considered separately. In phase I, a group of 10 expert radiologists considered a hypothetical pool of 100 interpreting physicians and conveyed their cut points of minimally acceptable performance. The experts were informed that a physician whose performance fell outside the cut points would receive a recommendation to consider additional training. During each round of scoring, all expert radiologists' cut points were summarized into a mean, median, mode, and range, which were presented back to the group. In phase II, normative performance data were shown to illustrate the potential impact the cut points would have on radiology practice. Rescoring continued until the experts reached consensus. Simulation methods were used to estimate the potential impact of performance improving to acceptable levels if effective additional training were provided.

Results: Final cut points to identify low performance were as follows: sensitivity less than 75%, specificity less than 88% or greater than 95%, recall rate less than 5% or greater than 12%, PPV1 less than 3% or greater than 8%, PPV2 less than 20% or greater than 40%, and cancer detection rate less than 2.5 per 1000 interpretations. The selected cut points would likely result in 18%-28% of interpreting physicians being considered for additional training on the basis of sensitivity and cancer detection rate, while the cut points for specificity, recall rate, PPV1, and PPV2 would likely affect 34%-49% of practicing interpreters. If underperforming physicians moved into the acceptable range, the expected effect would be detection of an additional 14 cancers per 100 000 women screened and a reduction of 880 false-positive examinations per 100 000 women screened.

Conclusion: This study identified minimally acceptable performance levels for interpreters of screening mammography studies. Interpreting physicians whose performance falls outside the identified cut points should be reviewed in the context of their specific practice settings and considered for additional training.
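The final cut points reported above can be applied mechanically to a physician's audit statistics. The sketch below encodes them as a screening function; the function name and dict-free interface are illustrative choices, not from the paper, but the numeric thresholds are exactly those reported in the Results.

```python
def flag_low_performance(sensitivity, specificity, recall_rate,
                         ppv1, ppv2, cancer_detection_rate):
    """Return the measures falling outside the acceptable ranges.

    All arguments are percentages except cancer_detection_rate,
    which is cancers detected per 1000 interpretations. Thresholds
    follow the published cut points: values strictly outside each
    range are flagged.
    """
    flags = []
    if sensitivity < 75:                       # sensitivity < 75%
        flags.append("sensitivity")
    if not (88 <= specificity <= 95):          # specificity < 88% or > 95%
        flags.append("specificity")
    if not (5 <= recall_rate <= 12):           # recall rate < 5% or > 12%
        flags.append("recall rate")
    if not (3 <= ppv1 <= 8):                   # PPV1 < 3% or > 8%
        flags.append("PPV1")
    if not (20 <= ppv2 <= 40):                 # PPV2 < 20% or > 40%
        flags.append("PPV2")
    if cancer_detection_rate < 2.5:            # < 2.5 per 1000 interpretations
        flags.append("cancer detection rate")
    return flags
```

For example, a physician with sensitivity 85%, specificity 92%, recall rate 9%, PPV1 4.5%, PPV2 28%, and 4.0 cancers detected per 1000 interpretations falls inside every range and would not be flagged, whereas a physician below the sensitivity cut point alone would be flagged only on that measure. As the abstract notes, a flag is a prompt for review in the context of the specific practice setting, not an automatic judgment.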

Original language: English (US)
Pages (from-to): 354-361
Number of pages: 8
Journal: Radiology
Volume: 255
Issue number: 2
DOIs
State: Published - May 2010

ASJC Scopus subject areas

  • Radiology, Nuclear Medicine and Imaging
