Decision-Making about Healthcare Related Tests and Diagnostic Strategies: User Testing of GRADE Evidence Tables.

Mustafa RA, Wiercioch W, Santesso N, Cheung A, Prediger B, Baldeh T, Carrasco-Labra A, Brignardello-Petersen R, Neumann I, Bossuyt P, Garg AX, Lelgemann M, Bühler D, Brozek J, Schünemann HJ - PLoS ONE (2015)

Bottom Line: Almost all participants preferred summarizing the results of systematic reviews of test accuracy in tabular format rather than plain text. Users generally preferred less complex tables but found presenting only sensitivity and specificity estimates too simplistic. Providing information about the clinical consequences of testing results was viewed as not feasible for authors of systematic reviews.


Affiliation: Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada; Departments of Internal Medicine and Biomedical & Health Informatics, University of Missouri-Kansas City, Kansas City, United States of America.

ABSTRACT

Objective: To develop guidance on what information to include and how to present it in tables summarizing the evidence from systematic reviews of test accuracy following the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach.

Methods: To design and refine the evidence tables, we used an iterative process based on the analysis of data from four rounds of discussions, feedback and user testing. During the final round, we conducted one-on-one user testing with target end users. We presented a number of alternative formats of evidence tables to participants and obtained information about users' understanding and preferences.

Results: More than 150 users participated in initial discussions and provided formal and informal feedback. Twenty users completed one-on-one user testing interviews. Almost all participants preferred summarizing the results of systematic reviews of test accuracy in tabular format rather than plain text. Users generally preferred less complex tables but found presenting only sensitivity and specificity estimates too simplistic. Users initially found the presentation of test accuracy for several values of prevalence confusing, but modifying the table layout and adding a sample clinical scenario for each prevalence reduced this confusion. Providing information about the clinical consequences of testing results was viewed as not feasible for authors of systematic reviews.
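The prevalence-specific presentations described in the Results rest on simple arithmetic: a test's sensitivity and specificity are applied to a hypothetical cohort at each assumed prevalence to yield expected counts of true and false test results. The sketch below illustrates that calculation; the accuracy values (0.90/0.80) and prevalence scenarios are made up for illustration and do not come from the study.

```python
# Illustrative arithmetic behind a prevalence-specific test-accuracy table
# (hypothetical numbers, not drawn from the paper).

def accuracy_per_1000(sensitivity, specificity, prevalence, cohort=1000):
    """Return (TP, FN, TN, FP) counts expected per `cohort` patients."""
    diseased = cohort * prevalence
    healthy = cohort - diseased
    tp = diseased * sensitivity   # cases correctly detected
    fn = diseased - tp            # cases missed
    tn = healthy * specificity    # non-cases correctly ruled out
    fp = healthy - tn             # false alarms
    return round(tp), round(fn), round(tn), round(fp)

# A hypothetical test (sensitivity 0.90, specificity 0.80) at three
# prevalences, as a table might present low-, medium- and high-risk settings.
for prev in (0.05, 0.20, 0.50):
    tp, fn, tn, fp = accuracy_per_1000(0.90, 0.80, prev)
    print(f"prevalence {prev:.0%}: TP={tp} FN={fn} TN={tn} FP={fp}")
```

Presenting the same test this way at several prevalences makes visible why users needed clinical scenarios attached to each prevalence: the absolute numbers of false positives and false negatives per 1000 patients change substantially even though sensitivity and specificity do not.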

Conclusion: We present the current formats for tables presenting test accuracy following the GRADE approach. These tables can be developed using the GRADEpro guideline development tool (www.guidelinedevelopment.org or www.gradepro.org) and are being further developed into electronic interactive tables that will suit the needs of different end users. The formatting of these tables, and how it influences result interpretation and decision-making, will be further evaluated in a randomized trial.

No MeSH data available.


Related in: MedlinePlus

Fig 1 (pone.0134553.g001): Outline of the rounds of feedback and user testing to develop GRADE diagnostic summary tables.

Mentions: We evaluated a variety of formats of the diagnostic evidence tables. We used an iterative process to modify the diagnostic evidence tables based on analysis of data from each round of feedback and user testing. Fig 1 summarizes the different rounds that led to the current suggested formats.

