TY - JOUR
T1 - Measuring Preventive Care Delivery
T2 - Comparing Rates Across Three Data Sources
AU - Bailey, Steffani R.
AU - Heintzman, John D.
AU - Marino, Miguel
AU - Hoopes, Megan J.
AU - Hatch, Brigit A.
AU - Gold, Rachel
AU - Cowburn, Stuart C.
AU - Nelson, Christine A.
AU - Angier, Heather E.
AU - DeVoe, Jennifer E.
N1 - Funding Information:
This work was supported by grant R01HL107647 from the National Heart, Lung, and Blood Institute (registered as an observational study in clinicaltrials.gov, Identifier NCT02355132), grant UL1TR000128 from the National Center for Advancing Translational Sciences, grant K23DA037453 from the National Institute on Drug Abuse, and grant UL1RR024140 from the National Center for Research Resources. The funding agencies had no role in study design; collection, analysis, and interpretation of data; writing the report; or the decision to submit the report for publication. The research presented in this paper is that of the authors and does not reflect the official policy of NIH.
Publisher Copyright:
© 2016 American Journal of Preventive Medicine
PY - 2016/11/1
Y1 - 2016/11/1
N2 - Introduction: Preventive care delivery is an important quality outcome, and electronic data reports are being used increasingly to track these services. It is highly informative when electronic data sources are compared to information manually extracted from medical charts to assess validity and completeness. Methods: This cross-sectional study used a random sample of Medicaid-insured patients seen at 43 community health centers in 2011 to calculate standard measures of correspondence between manual chart review and two automated sources (electronic health records [EHRs] and Medicaid claims), comparing documentation of orders for and receipt of ten preventive services (n=150 patients/service). Data were analyzed in 2015. Results: Using manual chart review as the gold standard, automated EHR extraction showed near-perfect to perfect agreement (κ=0.96–1.0) for services received within the primary care setting (e.g., BMI, blood pressure). Receipt of breast and colorectal cancer screenings, services commonly referred out, showed moderate (κ=0.42) to substantial (κ=0.62) agreement, respectively. Automated EHR extraction showed near-perfect agreement (κ=0.83–0.97) for documentation of ordered services. Medicaid claims showed near-perfect agreement (κ=0.87) for hyperlipidemia and diabetes screening, and substantial agreement (κ=0.67–0.80) for receipt of breast, cervical, and colorectal cancer screenings, and influenza vaccination. Claims showed moderate agreement (κ=0.59) for chlamydia screening receipt. Medicaid claims did not capture ordered or unbilled services. Conclusions: Findings suggest that automated EHR and claims data provide valid sources for measuring receipt of most preventive services; however, ordered and unbilled services were primarily captured via EHR data and completed referrals were more often documented in claims data.
AB - Introduction: Preventive care delivery is an important quality outcome, and electronic data reports are being used increasingly to track these services. It is highly informative when electronic data sources are compared to information manually extracted from medical charts to assess validity and completeness. Methods: This cross-sectional study used a random sample of Medicaid-insured patients seen at 43 community health centers in 2011 to calculate standard measures of correspondence between manual chart review and two automated sources (electronic health records [EHRs] and Medicaid claims), comparing documentation of orders for and receipt of ten preventive services (n=150 patients/service). Data were analyzed in 2015. Results: Using manual chart review as the gold standard, automated EHR extraction showed near-perfect to perfect agreement (κ=0.96–1.0) for services received within the primary care setting (e.g., BMI, blood pressure). Receipt of breast and colorectal cancer screenings, services commonly referred out, showed moderate (κ=0.42) to substantial (κ=0.62) agreement, respectively. Automated EHR extraction showed near-perfect agreement (κ=0.83–0.97) for documentation of ordered services. Medicaid claims showed near-perfect agreement (κ=0.87) for hyperlipidemia and diabetes screening, and substantial agreement (κ=0.67–0.80) for receipt of breast, cervical, and colorectal cancer screenings, and influenza vaccination. Claims showed moderate agreement (κ=0.59) for chlamydia screening receipt. Medicaid claims did not capture ordered or unbilled services. Conclusions: Findings suggest that automated EHR and claims data provide valid sources for measuring receipt of most preventive services; however, ordered and unbilled services were primarily captured via EHR data and completed referrals were more often documented in claims data.
UR - http://www.scopus.com/inward/record.url?scp=84994174602&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84994174602&partnerID=8YFLogxK
U2 - 10.1016/j.amepre.2016.07.004
DO - 10.1016/j.amepre.2016.07.004
M3 - Article
C2 - 27522472
AN - SCOPUS:84994174602
SN - 0749-3797
VL - 51
SP - 752
EP - 761
JO - American Journal of Preventive Medicine
JF - American Journal of Preventive Medicine
IS - 5
ER -