Decision-Making about Healthcare Related Tests and Diagnostic Strategies: User Testing of GRADE Evidence Tables.

Mustafa RA, Wiercioch W, Santesso N, Cheung A, Prediger B, Baldeh T, Carrasco-Labra A, Brignardello-Petersen R, Neumann I, Bossuyt P, Garg AX, Lelgemann M, Bühler D, Brozek J, Schünemann HJ - PLoS ONE (2015)

Bottom Line: Almost all participants preferred summarizing the results of systematic reviews of test accuracy in tabular format rather than plain text. Users generally preferred less complex tables but found presenting only sensitivity and specificity estimates too simplistic. Providing information about the clinical consequences of testing results was viewed as not feasible for authors of systematic reviews.


Affiliation: Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada; Departments of Internal Medicine and Biomedical & Health Informatics, University of Missouri-Kansas City, Kansas City, United States of America.

ABSTRACT

Objective: To develop guidance on what information to include and how to present it in tables summarizing the evidence from systematic reviews of test accuracy following the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach.

Methods: To design and refine the evidence tables, we used an iterative process based on the analysis of data from four rounds of discussions, feedback and user testing. During the final round, we conducted one-on-one user testing with target end users. We presented a number of alternative formats of evidence tables to participants and obtained information about users' understanding and preferences.

Results: More than 150 users participated in initial discussions and provided formal and informal feedback. Twenty users completed one-on-one user testing interviews. Almost all participants preferred summarizing the results of systematic reviews of test accuracy in tabular format rather than plain text. Users generally preferred less complex tables but found presenting only sensitivity and specificity estimates too simplistic. Users initially found the presentation of test accuracy for several values of prevalence confusing, but modifying the table layout and adding a sample clinical scenario for each prevalence reduced this confusion. Providing information about the clinical consequences of testing results was viewed as not feasible for authors of systematic reviews.
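The idea behind presenting test accuracy at several prevalence values is that the same sensitivity and specificity translate into very different absolute numbers of correct and incorrect test results depending on how common the condition is. A minimal sketch of that calculation, using hypothetical accuracy and prevalence values (not figures from this study), expressed per 1000 patients tested as GRADE-style tables typically do:

```python
def outcomes_per_1000(sensitivity, specificity, prevalence, n=1000):
    """Expected counts of test outcomes per n patients tested."""
    diseased = prevalence * n
    healthy = n - diseased
    return {
        "true_positives": round(sensitivity * diseased),
        "false_negatives": round((1 - sensitivity) * diseased),
        "true_negatives": round(specificity * healthy),
        "false_positives": round((1 - specificity) * healthy),
    }

# Hypothetical test: 90% sensitivity, 80% specificity,
# shown at a low- and a high-prevalence scenario.
for prev in (0.05, 0.30):
    print(f"prevalence {prev:.0%}:", outcomes_per_1000(0.90, 0.80, prev))
```

At 5% prevalence most positive results are false positives, while at 30% prevalence the balance shifts markedly, which is why readers benefit from seeing each prevalence anchored to a concrete clinical scenario.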

Conclusion: We present the current formats for tables presenting test accuracy following the GRADE approach. These tables can be developed using the GRADEpro guideline development tool (www.guidelinedevelopment.org or www.gradepro.org) and are being further developed into electronic interactive tables that will suit the needs of different end users. The formatting of these tables, and how they influence result interpretation and decision-making, will be further evaluated in a randomized trial.


Fig 2 (pone.0134553.g002): Summary of the domains used for data analysis of user testing and feedback.

The formal one-on-one user testing was specifically intended to compare various formats of evidence tables and to collect user perspectives about the most useful and best possible presentation of information in tables. The results were summarized for test accuracy (TA) systematic reviews of single tests and of multiple tests compared either directly in the same studies or indirectly in different studies against the same reference standard. We used the domains summarized in Fig 2 for our data analysis, with the different table components as our guide for compiling feedback. We also analyzed comments that addressed a single test separately from those that addressed comparative tests.

