Objective structured clinical examinations provide valid clinical skills assessment in emergency medicine education.

Wallenstein J, Ander D - West J Emerg Med (2014)

Bottom Line: Evaluation of emergency medicine (EM) learners based on observed performance in the emergency department (ED) is limited by factors such as reproducibility and patient safety. We found a moderate and statistically-significant correlation between OSCE score and ED performance score [r(239) = 0.40, p < 0.001]. Our OSCE can be further improved by modifying testing items that performed poorly and by examining and maximizing the inter-rater reliability of our evaluation instrument.

View Article: PubMed Central - PubMed

Affiliation: Emory University, Department of Emergency Medicine, Atlanta, Georgia.

ABSTRACT

Introduction: Evaluation of emergency medicine (EM) learners based on observed performance in the emergency department (ED) is limited by factors such as reproducibility and patient safety. EM educators depend on standardized and reproducible assessments such as the objective structured clinical examination (OSCE). The validity of the OSCE as an evaluation tool in EM education has not been previously studied. The objective was to assess the validity of a novel management-focused OSCE as an evaluation instrument in EM education through demonstration of performance correlation with established assessment methods and case item analysis.

Methods: We conducted a prospective cohort study of fourth-year medical students enrolled in a required EM clerkship. Students enrolled in the clerkship completed a five-station EM OSCE. We used Pearson's coefficient to correlate OSCE performance with performance in the ED based on completed faculty evaluations. Indices of difficulty and discrimination were computed for each scoring item.
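The three statistics named in the Methods can be sketched in a few lines of Python. The data below are illustrative only (the study's student scores are not reproduced here). The item difficulty index is the percent of examinees credited on an item; the discrimination index contrasts an item's pass rate between the highest- and lowest-scoring groups. The top/bottom-27% split used here is a common convention and an assumption, since the paper's exact grouping is not stated in this excerpt.

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def difficulty_index(item_correct):
    """Percent of examinees who got the item correct (0-100 scale,
    matching the reported mean of 63.0)."""
    return 100.0 * sum(item_correct) / len(item_correct)

def discrimination_index(item_correct, total_scores, frac=0.27):
    """Difference in an item's pass rate between the top and bottom
    scoring groups (conventional top/bottom 27% split by total score)."""
    order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    k = max(1, int(frac * len(total_scores)))
    low = [item_correct[i] for i in order[:k]]    # lowest-scoring group
    high = [item_correct[i] for i in order[-k:]]  # highest-scoring group
    return sum(high) / k - sum(low) / k
```

A discrimination index near the reported mean of 0.52 indicates that high performers pass the item far more often than low performers, which is the behavior desired of a well-functioning test item.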

Results: We found a moderate and statistically-significant correlation between OSCE score and ED performance score [r(239) = 0.40, p < 0.001]. Of the 34 OSCE testing items, the mean index of difficulty was 63.0 (SD = 23.0) and the mean index of discrimination was 0.52 (SD = 0.21).

Conclusion: Student performance on the OSCE correlated with their observed performance in the ED, and indices of difficulty and discrimination demonstrated alignment with published best-practice testing standards. This evidence, along with other attributes of the OSCE, attests to its validity. Our OSCE can be further improved by modifying testing items that performed poorly and by examining and maximizing the inter-rater reliability of our evaluation instrument.

f2-wjem-16-121 (Figure 2): OSCE and ED performance score correlation. OSCE, objective structured clinical examination; ED, emergency department

Mentions: Mean OSCE score was 75.0 (SD = 7.8), and mean ED performance score was 81.6 (SD = 5.4). A positive correlation was found between OSCE score and ED performance score [r(239) = 0.40, p < 0.001], indicating a statistically-significant linear relationship between the two (Figure 2).

