A heuristic approach to determine an appropriate number of topics in topic modeling.

Zhao W, Chen JJ, Perkins R, Liu Z, Ge W, Ding Y, Zou W - BMC Bioinformatics (2015)

ABSTRACT

Background: Topic modelling is an active research field in machine learning. While mainly used to build models from unstructured textual data, it offers an effective means of data mining where samples represent documents and different biological endpoints or omics data represent words. Latent Dirichlet Allocation (LDA) is the most commonly used topic modelling method across a wide range of technical fields. However, model development can be arduous and tedious, requiring burdensome, systematic sensitivity studies to find the best set of model parameters, and time-consuming subjective evaluations are often needed to compare models. To date, research has yielded no easy way to choose the proper number of topics in a model short of a laborious iterative search.
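
The iterative search the background alludes to is, in practice, a grid search over candidate topic counts, refitting the model and scoring held-out perplexity at each point. The minimal sketch below illustrates that loop; scikit-learn is our assumed toolkit (the paper names no specific implementation), and load_documents() is a hypothetical loader.

    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.model_selection import train_test_split

    docs = load_documents()  # hypothetical loader: one string per document
    vectorizer = CountVectorizer(stop_words="english")
    X = vectorizer.fit_transform(docs)
    X_train, X_test = train_test_split(X, test_size=0.2, random_state=0)

    candidate_topics = [20, 40, 60, 80, 100]  # illustrative grid
    perplexities = []
    for k in candidate_topics:
        lda = LatentDirichletAllocation(n_components=k, random_state=0)
        lda.fit(X_train)
        perplexities.append(lda.perplexity(X_test))  # lower is better

Each grid point requires a full model fit, which is what makes the naive search so burdensome on large corpora.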

Methods and results: Based on an analysis of how statistical perplexity varies during topic modelling, this study proposes a heuristic approach to estimate the most appropriate number of topics. Specifically, the rate of perplexity change (RPC) as a function of the number of topics is proposed as a suitable selector. We test the stability and effectiveness of the proposed method on three markedly different types of ground-truth datasets: Salmonella next-generation sequencing, pharmacological side effects, and textual abstracts on computational biology and bioinformatics (TCBB) from PubMed.
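
To make the selector concrete: taking RPC at a grid point to be the absolute change in perplexity divided by the change in topic count since the previous point, the sketch below computes the RPC curve and applies one plausible change-point rule, stopping where RPC first stops decreasing. The rule is our hedged reading of the method, not a verbatim transcription of the authors' criterion.

    def rpc(topics, perplexities):
        """Absolute perplexity change per unit change in topic count."""
        return [abs(perplexities[i] - perplexities[i - 1]) / (topics[i] - topics[i - 1])
                for i in range(1, len(topics))]

    def select_num_topics(topics, perplexities):
        """Pick the first candidate at which RPC stops decreasing (assumed rule)."""
        r = rpc(topics, perplexities)  # r[i] is the RPC at topics[i + 1]
        for i in range(len(r) - 1):
            if r[i] < r[i + 1]:
                return topics[i + 1]
        return topics[-1]  # RPC fell monotonically; fall back to the largest candidate

    # With the grid and perplexity curve from the previous sketch:
    best_k = select_num_topics(candidate_topics, perplexities)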

Conclusion: The proposed RPC-based method is demonstrated to choose the best number of topics in three numerical experiments spanning widely different data types and database sizes. The work required was markedly less arduous than if full systematic sensitivity studies had been carried out with the number of topics as a parameter. We recognize that additional investigation is needed to substantiate the method's theoretical basis and to establish its generalizability across dataset characteristics.

Figure 3: Eight example topics obtained by LDA modeling with 40 topics on the TCBB dataset.

Mentions: The TCBB dataset, downloaded from the PubMed database, consists of 885 abstracts from ten years of publications in the journal IEEE Transactions on Computational Biology and Bioinformatics. Since no truth labels were available to classify the abstracts in a way that would allow clusters to be built and their purity computed, we used a qualitative approach to assess whether the RPC method could choose the best number of topics. Word clouds were used to represent the LDA-derived topic-word matrices, and these were, in turn, subjectively interpreted and evaluated to compare models built with different numbers of topics. Human assessment of topic model validity is common practice: topic meaning is subjectively interpreted from the topic-word multinomial distribution, and a word cloud is simply a way to visualize that distribution, with each word's probabilistic weight mapped to its font size. A model is judged to be of higher quality when its topic themes are more salient and distinguishable than those of competing models. The RPC-based method selected 40 as the most appropriate number of topics, so we compared the 40-topic model to models with 20 and 60 topics. Figure 3 gives word clouds for eight illustrative topics from the 40-topic model; results for the remaining 32 topics are similar (Suppl. Figure S1 in Additional file 1). Each of the eight word clouds in Figure 3 depicts a unique, distinguishable theme corresponding to a distinct research field within computational biology and bioinformatics. Consider Topic 8 (T8 in Figure 3) for a closer look: the salient theme is clearly estimation models, with most words recognizable as pertinent to that field of research. We also located a number of documents in the TCBB dataset that had their highest probabilistic association with Topic 8; these are listed in Table 5. Most of these papers were, indeed, subjectively judged to be primarily related to estimation models.
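
As a sketch of the qualitative check described above, the snippet below renders one topic's word distribution as a word cloud and lists the documents most strongly associated with it. It assumes a fitted LatentDirichletAllocation model named lda (e.g., refit at the selected topic count) plus the vectorizer and X from the earlier sketches; the wordcloud package is our illustrative choice, not necessarily the authors' tool.

    import numpy as np
    from wordcloud import WordCloud

    topic_idx = 7  # Topic 8 (T8 in Figure 3), zero-indexed
    vocab = vectorizer.get_feature_names_out()
    weights = lda.components_[topic_idx]
    weights = weights / weights.sum()  # normalize to a topic-word distribution

    # Font size tracks each word's probabilistic weight, as in Figure 3.
    freqs = dict(zip(vocab, weights))
    cloud = WordCloud(background_color="white").generate_from_frequencies(freqs)
    cloud.to_file("topic8_wordcloud.png")

    # Documents whose highest-probability topic is Topic 8 (cf. Table 5).
    doc_topic = lda.transform(X)  # rows are per-document topic distributions
    top_docs = np.flatnonzero(doc_topic.argmax(axis=1) == topic_idx)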

