Data-Mining Electronic Medical Records for Clinical Order Recommendations: Wisdom of the Crowd or Tyranny of the Mob?

Chen JH, Altman RB - AMIA Jt Summits Transl Sci Proc (2015)

Bottom Line: We now present the first structured validation of such automatically generated content against an objective external standard by assessing how well the generated recommendations correspond to orders referenced as appropriate in clinical practice guidelines. We demonstrate that data-driven, automatically generated clinical decision support content can reproduce and optimize top-down constructs like order sets while largely avoiding inappropriate and irrelevant recommendations. This will be even more important when extrapolating to more typical clinical scenarios where well-defined external standards and decision support do not exist.


Affiliation: Center for Innovation to Implementation (Ci2i), Veterans Affairs Palo Alto Health Care System, Palo Alto, CA; Center for Primary Care and Outcomes Research (PCOR), Stanford University, Stanford, CA.

ABSTRACT
Uncertainty and variability are pervasive in medical decision making, with insufficient evidence-based medicine and inconsistent implementation where established knowledge exists. Clinical decision support constructs like order sets help distribute expertise, but are constrained by knowledge-based development. We previously produced a data-driven order recommender system to automatically generate clinical decision support content from structured electronic medical record data on >19K hospital patients. We now present the first structured validation of such automatically generated content against an objective external standard by assessing how well the generated recommendations correspond to orders referenced as appropriate in clinical practice guidelines. For example scenarios of chest pain, gastrointestinal hemorrhage, and pneumonia in hospital patients, the automated method identifies guideline reference orders with ROC AUCs (c-statistics) of (0.89, 0.95, 0.83), improving upon statistical prevalence benchmarks (0.76, 0.74, 0.73) and pre-existing human-expert-authored order sets (0.81, 0.77, 0.73) (P < 10^-30 in all cases). We demonstrate that data-driven, automatically generated clinical decision support content can reproduce and optimize top-down constructs like order sets while largely avoiding inappropriate and irrelevant recommendations. This will be even more important when extrapolating to more typical clinical scenarios where well-defined external standards and decision support do not exist.
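
As a rough illustration of how the headline numbers are computed, the sketch below scores a recommender's ranking of candidate orders against binary guideline-reference labels with ROC AUC (the c-statistic). This is a minimal sketch under assumed data, not the authors' pipeline; the order names, scores, and labels are placeholders.

```python
# Minimal sketch of the headline evaluation: score a recommender's
# ranking of candidate orders against binary guideline-reference labels
# with ROC AUC (the c-statistic). All names, scores, and labels below
# are illustrative placeholders, not the study's data or code.
from sklearn.metrics import roc_auc_score

# One row per candidate clinical order for a given admission diagnosis:
# a recommender score and a label marking whether the order is
# referenced as appropriate in the clinical practice guideline.
candidate_orders = {
    "troponin":    {"score": 0.92, "in_guideline": 1},
    "ecg_12_lead": {"score": 0.88, "in_guideline": 1},
    "cbc":         {"score": 0.80, "in_guideline": 0},
    "aspirin":     {"score": 0.75, "in_guideline": 1},
    "urinalysis":  {"score": 0.10, "in_guideline": 0},
}

labels = [v["in_guideline"] for v in candidate_orders.values()]
scores = [v["score"] for v in candidate_orders.values()]

# c-statistic: probability that a randomly chosen guideline order
# outranks a randomly chosen non-guideline order under these scores.
print(f"ROC AUC = {roc_auc_score(labels, scores):.2f}")  # 0.83 here
```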


Figure 2: Recommender accuracy (precision or recall) for predicting guideline reference orders as a function of the number of top-K recommendations considered (up to 100), when sorting by different score-ranking options (OR, PPV, prevalence, and presence in pre-authored order sets). Data labels are added for K = 10 and K = nO, where nO is the number of items available in the respective order sets.
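
The following sketch illustrates the precision/recall-at-top-K computation the figure describes, for one hypothetical ranking. The ranked list and guideline set are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of the accuracy-at-top-K computation in the figure:
# precision and recall of the K highest-ranked recommendations against
# the guideline reference set, for one hypothetical ranking.
def precision_recall_at_k(ranked_orders, guideline_orders, k):
    """Precision and recall of the top-k ranked orders with respect to
    the set of guideline reference orders."""
    hits = sum(1 for order in ranked_orders[:k] if order in guideline_orders)
    return hits / k, hits / len(guideline_orders)

# Orders sorted by one score-ranking option (e.g. OR, PPV, or prevalence).
ranked = ["troponin", "ecg_12_lead", "cbc", "aspirin", "urinalysis"]
guideline = {"troponin", "ecg_12_lead", "aspirin"}

for k in (1, 3, 5):
    p, r = precision_recall_at_k(ranked, guideline, k)
    print(f"K={k}: precision={p:.2f}, recall={r:.2f}")
# Precision falls and recall rises as K grows -- the tradeoff the
# figure illustrates.
```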

Mentions: Table 1 reports summary counts of the patient information available, guideline reference orders, and pre-authored order set items for each of the admission diagnoses considered. Table 2 contains recommendation examples for the chest pain admission diagnosis, with association statistics and reference labels. Figure 1 depicts ROC curves assessing discrimination of guideline reference orders. Figure 2 depicts recommendation accuracy as the number of top-K recommendations considered increases, illustrating the tradeoff between precision and recall and the performance at more practical small values of K.
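
For context on the association statistics mentioned above, here is a minimal sketch of how an odds ratio and PPV could be derived for a diagnosis-order pair from 2x2 co-occurrence counts. The counts and the exact statistic definitions are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the association statistics referenced above (the
# odds ratio and PPV reported per diagnosis-order pair in Table 2),
# computed from a 2x2 co-occurrence contingency table. The counts and
# exact definitions are assumptions, not the authors' implementation.
def association_stats(a, b, c, d):
    """2x2 contingency counts for one diagnosis-order pair:
        a = patients with the diagnosis who received the order
        b = patients with the diagnosis who did not
        c = patients without the diagnosis who received the order
        d = patients without the diagnosis who did not
    """
    ppv = a / (a + b)               # P(order | diagnosis)
    odds_ratio = (a * d) / (b * c)  # strength of association
    return odds_ratio, ppv

# e.g. troponin orders among chest pain admissions vs. all others
or_, ppv = association_stats(a=450, b=50, c=2000, d=17000)
print(f"OR = {or_:.1f}, PPV = {ppv:.2f}")  # OR = 76.5, PPV = 0.90
```

Larger odds ratios and PPVs flag orders more specifically tied to the diagnosis, which is what the score-ranking options compared in Figure 2 sort by.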

