A highly efficient multi-core algorithm for clustering extremely large datasets.

Kraus JM, Kestler HA - BMC Bioinformatics (2010)

Affiliation: Institute of Neural Information Processing, University of Ulm, 89069 Ulm, Germany.

ABSTRACT

Background: In recent years, the demand for computational power in computational biology has grown as microarray and other high-throughput technologies produce ever larger data sets, and this demand is likely to keep increasing. Standard data analysis algorithms, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches to parallelizing algorithms rely on network communication protocols and therefore require multiple computers. One answer to this problem is to exploit the intrinsic capabilities of current multi-core hardware and distribute the tasks among the different cores of a single computer.

Results: We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms, based on the design principles of transactional memory, for clustering gene expression microarray-type data and categorical SNP data. Our new shared-memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. The computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets, while preserving computational accuracy, compared to single-core implementations and a recently published network-based parallelization.
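The parallelization described here rests on transactional-memory design principles; those details are in the full text. Purely as a hypothetical illustration of the shared-memory idea (class and method names below are ours, not the authors'), the following Java sketch parallelizes the assignment step of k-means, the step that dominates running time, over a fixed thread pool. Each thread writes to a disjoint slice of the label array, so this particular step needs no synchronization at all.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Minimal sketch of a shared-memory parallel k-means assignment step.
    // Hypothetical code, not the authors' implementation: the paper builds on
    // transactional-memory design principles, whereas this sketch uses plain
    // data parallelism over disjoint index ranges.
    public class ParallelKMeansStep {

        // Assign each data point to its nearest centroid, in parallel.
        static int[] assign(double[][] data, double[][] centroids, int threads)
                throws InterruptedException {
            int[] labels = new int[data.length];
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            int chunk = (data.length + threads - 1) / threads;
            for (int t = 0; t < threads; t++) {
                final int from = t * chunk;
                final int to = Math.min(from + chunk, data.length);
                pool.execute(() -> {
                    for (int i = from; i < to; i++) {
                        labels[i] = nearest(data[i], centroids); // disjoint writes, no locking
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
            return labels;
        }

        // Index of the centroid with the smallest squared Euclidean distance to x.
        static int nearest(double[] x, double[][] centroids) {
            int best = 0;
            double bestDist = Double.POSITIVE_INFINITY;
            for (int c = 0; c < centroids.length; c++) {
                double d = 0.0;
                for (int j = 0; j < x.length; j++) {
                    double diff = x[j] - centroids[c][j];
                    d += diff * diff;
                }
                if (d < bestDist) { bestDist = d; best = c; }
            }
            return best;
        }

        public static void main(String[] args) throws InterruptedException {
            double[][] data = { {0.1}, {0.2}, {4.9}, {5.1} };
            double[][] centroids = { {0.0}, {5.0} };
            System.out.println(java.util.Arrays.toString(assign(data, centroids, 2)));
            // expected output: [0, 0, 1, 1]
        }
    }

The centroid update step can be parallelized in the same fashion with per-thread partial sums merged at the end; that merge is where conflicting updates arise and where the transactional-memory approach described in the paper comes into play.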

Conclusions: Most desktop computers, and even notebooks, provide at least dual-core processors. Our multi-core algorithms show that, with modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity analysis and cluster number estimation on a standard laboratory computer.
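The kind of experiment this enables can be sketched as follows: for each candidate number of clusters k, cluster the same data repeatedly with different random initializations and score how well pairs of runs agree; stable values of k give high agreement. The example below is hypothetical and self-contained: it uses a plain serial Lloyd iteration for brevity, and the Rand index as a stand-in for the MCA index used in the paper (whose definition is not reproduced here). All names are ours.

    import java.util.Arrays;
    import java.util.Random;

    // Hypothetical sketch of cluster-number estimation by repeated runs.
    public class StabilityScan {

        // One k-means run from random initial prototypes (serial, for brevity).
        static int[] kMeans(double[][] x, int k, long seed) {
            Random rnd = new Random(seed);
            double[][] c = new double[k][];
            for (int j = 0; j < k; j++) c[j] = x[rnd.nextInt(x.length)].clone();
            int[] label = new int[x.length];
            Arrays.fill(label, -1);
            for (int iter = 0; iter < 100; iter++) {
                boolean changed = false;
                for (int i = 0; i < x.length; i++) {            // assignment step
                    int best = 0;
                    double bd = Double.POSITIVE_INFINITY;
                    for (int j = 0; j < k; j++) {
                        double d = 0.0;
                        for (int m = 0; m < x[i].length; m++) {
                            double diff = x[i][m] - c[j][m];
                            d += diff * diff;
                        }
                        if (d < bd) { bd = d; best = j; }
                    }
                    if (label[i] != best) { label[i] = best; changed = true; }
                }
                if (!changed) break;
                double[][] sum = new double[k][x[0].length];    // update step
                int[] n = new int[k];
                for (int i = 0; i < x.length; i++) {
                    n[label[i]]++;
                    for (int m = 0; m < x[i].length; m++) sum[label[i]][m] += x[i][m];
                }
                for (int j = 0; j < k; j++)
                    if (n[j] > 0)
                        for (int m = 0; m < sum[j].length; m++) c[j][m] = sum[j][m] / n[j];
            }
            return label;
        }

        // Rand index: fraction of point pairs on which two partitions agree.
        static double randIndex(int[] a, int[] b) {
            long agree = 0, pairs = 0;
            for (int i = 0; i < a.length; i++)
                for (int j = i + 1; j < a.length; j++) {
                    if ((a[i] == a[j]) == (b[i] == b[j])) agree++;
                    pairs++;
                }
            return (double) agree / pairs;
        }

        public static void main(String[] args) {
            Random rnd = new Random(42);
            double[][] data = new double[300][1];              // toy 1-d data with
            for (int i = 0; i < data.length; i++)              // three well-separated
                data[i][0] = (i % 3) * 5 + rnd.nextGaussian(); // modes at 0, 5, 10
            int reps = 50;
            for (int k = 2; k <= 6; k++) {
                double sumAgreement = 0.0;
                for (int r = 0; r < reps; r++)
                    sumAgreement += randIndex(kMeans(data, k, r), kMeans(data, k, r + reps));
                System.out.printf("k=%d  mean pairwise agreement=%.3f%n", k, sumAgreement / reps);
            }
        }
    }

With the tenfold speedup reported above, such a scan becomes feasible on a desktop machine even for large data sets.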

Figure 4: Comparing different baseline distributions for clustering. Baselines for clustering an artificial data set of 50 one-dimensional points. For each partitioning into k = {1, ..., 50} clusters, the average value of the MCA index over 500 runs is plotted. The baselines are, from bottom to top: black = random label, red = simulated random label, green = random partition, blue = random prototype. The random label baseline is a lower bound for the MCA index, whereas the simulated random label and random partition baselines are much tighter. The data-driven random prototype baseline provides the tightest bound on the MCA index.

Mentions: In the following, we consider three baseline scenarios that we call the random label hypothesis, the random partition hypothesis, and the random prototype hypothesis (see Figure 4).
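The precise sampling schemes behind these hypotheses are given in the paper; as an illustration only, the sketch below shows plausible generators for the three baselines (the simulated random label variant from Figure 4 is omitted, and all names are ours). Note that only the random prototype baseline looks at the data itself, which is consistent with it providing the tightest bound in Figure 4.

    import java.util.Arrays;
    import java.util.Random;

    // Hypothetical baseline-partition generators; the exact schemes used in
    // the paper may differ.
    public class Baselines {
        static final Random RND = new Random();

        // Random label: each point independently gets a label uniform in 0..k-1.
        static int[] randomLabel(int n, int k) {
            int[] label = new int[n];
            for (int i = 0; i < n; i++) label[i] = RND.nextInt(k);
            return label;
        }

        // Random partition (one plausible reading): draw random cluster sizes
        // via k-1 random cut points, then shuffle labels over the points.
        static int[] randomPartition(int n, int k) {
            int[] cuts = new int[k + 1];
            cuts[0] = 0;
            cuts[k] = n;
            for (int j = 1; j < k; j++) cuts[j] = RND.nextInt(n + 1);
            Arrays.sort(cuts);
            int[] label = new int[n];
            int block = 0;
            for (int i = 0; i < n; i++) {
                while (i >= cuts[block + 1]) block++;
                label[i] = block;
            }
            for (int i = n - 1; i > 0; i--) {           // Fisher-Yates shuffle so
                int j = RND.nextInt(i + 1);             // membership is independent
                int tmp = label[i]; label[i] = label[j]; label[j] = tmp;
            }
            return label;
        }

        // Random prototype: draw k prototypes from the data (with replacement,
        // for brevity) and assign every point to its nearest prototype.
        static int[] randomPrototype(double[][] x, int k) {
            double[][] proto = new double[k][];
            for (int j = 0; j < k; j++) proto[j] = x[RND.nextInt(x.length)];
            int[] label = new int[x.length];
            for (int i = 0; i < x.length; i++) {
                int best = 0;
                double bd = Double.POSITIVE_INFINITY;
                for (int j = 0; j < k; j++) {
                    double d = 0.0;
                    for (int m = 0; m < x[i].length; m++) {
                        double diff = x[i][m] - proto[j][m];
                        d += diff * diff;
                    }
                    if (d < bd) { bd = d; best = j; }
                }
                label[i] = best;
            }
            return label;
        }

        public static void main(String[] args) {
            System.out.println(Arrays.toString(randomLabel(10, 3)));
            System.out.println(Arrays.toString(randomPartition(10, 3)));
        }
    }

Repeating such draws many times and averaging the MCA index against them yields baseline curves like those shown in Figure 4.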

