TY - JOUR
T1 - Education Management Platform Enables Delivery and Comparison of Multiple Evaluation Types
AU - Thanawala, Ruchi M.
AU - Jesneck, Jonathan L.
AU - Seymour, Neal E.
N1 - Publisher Copyright:
© 2019 Association of Program Directors in Surgery
PY - 2019/11/1
Y1 - 2019/11/1
AB - Objective: The purpose of this study was to determine whether an automated platform for evaluation selection and delivery would increase participation from surgical teaching faculty in submitting resident operative performance evaluations. Design: We built a HIPAA-compliant, web-based platform to track resident operative assignments and to link embedded evaluation instruments to procedure type. The platform matched appropriate evaluations to surgeons’ scheduled procedures and delivered multiple evaluation types, including Ottawa Surgical Competency Operating Room Evaluation (O-Score) and Operative Performance Rating System (OPRS) evaluations. Prompts to complete evaluations were delivered through a system of automatic electronic notifications. We compared the time spent in the platform to complete each evaluation type. As a metric for the platform's effect on faculty participation, we considered a task that would typically be infeasible without workflow optimization: the evaluator could choose to complete multiple, complementary evaluations for the same resident in the same case. For cases with multiple evaluations, correlation was analyzed by Spearman rank test. Evaluation data were compared between PGY levels using repeated-measures ANOVA. Setting: The study took place at 4 general surgery residency programs: the University of Massachusetts Medical School-Baystate, the University of Connecticut School of Medicine, the University of Iowa Carver College of Medicine, and Maimonides Medical Center. Participants: From March 2017 to February 2019, the study included 70 surgical teaching faculty and 101 general surgery residents. Results: Faculty completed 1230 O-Score evaluations and 106 OPRS evaluations. Evaluations were completed quickly, with a median time of 36 ± 18 seconds for O-Score evaluations and 53 ± 51 seconds for OPRS evaluations. Within 1 minute, 89% of O-Score and 55% of OPRS evaluations were completed without optional comments; within 2 minutes, 99% of O-Score and 82% of OPRS evaluations were completed. For cases eligible for both evaluation types, attendings completed both evaluations in 74 of 221 (33%) cases. These paired evaluations correlated strongly on resident performance (Spearman coefficient = 0.84, p < 0.00001). Both evaluation types stratified operative skill level by program year (p < 0.00001). Conclusions: Evaluation initiatives can be hampered by the challenge of making multiple surgical evaluation instruments available when needed for appropriate clinical situations, including specific case types. As a test of the optimized evaluation workflow, and to lay the groundwork for future data-driven evaluation design, we tested the impact of simultaneously delivering 2 evaluation instruments via a secure web-based education platform. We measured the evaluation completion rates of faculty surgeon evaluators when rating resident operative performance, and how effectively the evaluation results could be analyzed and compared, taking advantage of highly integrated management of the evaluative information.
KW - Multiple evaluations
KW - O-Score
KW - OPRS
KW - Practice-Based Learning and Improvement
KW - Professionalism
KW - Resident operative evaluations
KW - Surgical data management
KW - Systems-Based Practice
UR - http://www.scopus.com/inward/record.url?scp=85071919461&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85071919461&partnerID=8YFLogxK
U2 - 10.1016/j.jsurg.2019.08.017
DO - 10.1016/j.jsurg.2019.08.017
M3 - Article
C2 - 31515199
AN - SCOPUS:85071919461
SN - 1931-7204
VL - 76
SP - e209
EP - e216
JO - Journal of Surgical Education
JF - Journal of Surgical Education
IS - 6
ER -