Objective structured clinical examinations provide valid clinical skills assessment in emergency medicine education.

Wallenstein J, Ander D - West J Emerg Med (2014)

Bottom Line: Evaluation of emergency medicine (EM) learners based on observed performance in the emergency department (ED) is limited by factors such as reproducibility and patient safety. We found a moderate and statistically significant correlation between OSCE score and ED performance score [r(239) = 0.40, p < 0.001]. Our OSCE can be further improved by modifying testing items that performed poorly and by examining and maximizing the inter-rater reliability of our evaluation instrument.


Affiliation: Emory University, Department of Emergency Medicine, Atlanta, Georgia.

ABSTRACT

Introduction: Evaluation of emergency medicine (EM) learners based on observed performance in the emergency department (ED) is limited by factors such as reproducibility and patient safety. EM educators depend on standardized and reproducible assessments such as the objective structured clinical examination (OSCE). The validity of the OSCE as an evaluation tool in EM education has not been previously studied. The objective was to assess the validity of a novel management-focused OSCE as an evaluation instrument in EM education through demonstration of performance correlation with established assessment methods and case item analysis.

Methods: We conducted a prospective cohort study of fourth-year medical students enrolled in a required EM clerkship. Students completed a five-station EM OSCE. We used Pearson's correlation coefficient to correlate OSCE performance with performance in the ED, based on completed faculty evaluations. Indices of difficulty and discrimination were computed for each scoring item.
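As an illustration only (not the authors' code), the correlation step might look like the sketch below; the score vectors are simulated stand-ins for the 241 students implied by the reported r(239), since Pearson's r has n − 2 degrees of freedom.

```python
# Hedged sketch of the Pearson correlation analysis; all data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 241                                   # df = n - 2 = 239, as reported
osce = rng.normal(75, 8, size=n)          # hypothetical OSCE totals (percent)
ed = 0.4 * (osce - 75) + rng.normal(80, 7, size=n)  # hypothetical ED ratings

r, p = stats.pearsonr(osce, ed)           # Pearson product-moment correlation
print(f"r({n - 2}) = {r:.2f}, p = {p:.3g}")
```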

Results: We found a moderate and statistically significant correlation between OSCE score and ED performance score [r(239) = 0.40, p < 0.001]. Of the 34 OSCE testing items, the mean index of difficulty was 63.0 (SD = 23.0) and the mean index of discrimination was 0.52 (SD = 0.21).
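The paper reports these summary item statistics but not the formulas behind them. A minimal sketch, assuming the classical test-theory definitions (index of difficulty as the percentage of examinees credited on an item; index of discrimination as the upper-group minus lower-group proportion credited, with 27% extreme groups):

```python
import numpy as np

def item_analysis(item_scores, group_frac=0.27):
    """Classical test-theory item analysis (assumed definitions).

    item_scores: (n_students, n_items) array of 0/1 item credits.
    Returns per-item difficulty (percent credited, 0-100) and
    discrimination (upper-group minus lower-group proportion credited).
    """
    totals = item_scores.sum(axis=1)        # each student's total score
    order = np.argsort(totals)
    k = max(1, round(group_frac * len(totals)))
    lower = item_scores[order[:k]]          # bottom ~27% of students
    upper = item_scores[order[-k:]]         # top ~27% of students
    difficulty = item_scores.mean(axis=0) * 100
    discrimination = upper.mean(axis=0) - lower.mean(axis=0)
    return difficulty, discrimination
```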

Conclusion: Student performance on the OSCE correlated with their observed performance in the ED, and the indices of difficulty and discrimination demonstrated alignment with published best-practice testing standards. This evidence, along with other attributes of the OSCE, attests to its validity. Our OSCE can be further improved by modifying testing items that performed poorly and by examining and maximizing the inter-rater reliability of our evaluation instrument.


Figure 1 (f1-wjem-16-121). Descriptive example of the OSCE evaluation instrument (altered mental status/sepsis case). HPI, history of present illness; PMH, past medical history; IV, intravenous; OSCE, objective structured clinical examination.

Mentions: At the core of each of the five cases in our OSCE are pre-selected key historical features and physical exam findings, 3–5 critical actions (including diagnostic and therapeutic tasks), and specific communication objectives (such as giving bad news, discussing advance directives, and obtaining informed consent). Our task-based evaluation instrument is anchored to both quantitative and qualitative assessment of these specific tasks. Performance of the history and physical is scored based upon the number of key features and exam findings elicited. Performance of critical actions is evaluated based upon the number of actions performed, as well as the completeness and timeliness of each task. Communication and interpersonal skills are evaluated based on performance against a specific goal or task. A descriptive example of the evaluation instrument is shown in Figure 1. While we recognize the value of a global rating scale as an assessment tool, we specifically did not include global ratings in our assessment because student performance was assessed by our case facilitators. We felt that the facilitators had received appropriate training to perform task-based assessment but did not have the background or training to perform a global assessment of performance.
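As a purely illustrative aside, the task-based rubric described above could be modeled with a structure like the one below; every field name here is hypothetical, not taken from the authors' instrument.

```python
# Hypothetical data model for a task-based OSCE station rubric.
from dataclasses import dataclass, field

@dataclass
class CriticalAction:
    description: str        # e.g., "order broad-spectrum antibiotics"
    performed: bool = False
    complete: bool = False  # task carried out fully
    timely: bool = False    # task carried out within the expected window

@dataclass
class StationScore:
    history_items_elicited: int           # count of pre-selected HPI/PMH features
    exam_findings_elicited: int           # count of pre-selected physical findings
    critical_actions: list[CriticalAction] = field(default_factory=list)
    communication_goal_met: bool = False  # e.g., gave bad news appropriately

    def critical_action_points(self) -> int:
        # one point each for performed / complete / timely, per action
        return sum(a.performed + a.complete + a.timely
                   for a in self.critical_actions)
```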

