What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm

View Article: PubMed Central - PubMed

ABSTRACT

The K-means algorithm is one of the most popular clustering algorithms in current use, as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm, which we call MAP-DP (maximum a posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means, with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross-validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
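The abstract's central idea, that the number of clusters can be inferred from the data rather than fixed in advance, can be illustrated with a toy sketch. The code below is not the paper's MAP-DP updates; it is a deliberately simplified, DP-means-style hard-assignment rule (a known hard-clustering analogue of Dirichlet process mixtures) in which a point far from every existing centroid opens a new cluster. The penalty parameter `lam` here plays the role that the DP concentration parameter plays in the full probabilistic model.

```python
def dp_means_pass(points, lam):
    """One assignment pass of a simplified DP-means-style rule.

    A point whose squared Euclidean distance to every existing centroid
    exceeds lam opens a new cluster, so the number of clusters grows
    with the data instead of being fixed a priori as in K-means.
    """
    centroids = [points[0]]
    labels = []
    for p in points:
        # Squared distance from p to each current centroid.
        d2 = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
        k = min(range(len(centroids)), key=lambda i: d2[i])
        if d2[k] > lam:
            centroids.append(p)  # no centroid explains p well: open a new cluster
            k = len(centroids) - 1
        labels.append(k)
    return centroids, labels

# Two well-separated groups; note that no K is supplied.
pts = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
cents, labs = dp_means_pass(pts, lam=1.0)
print(len(cents))  # 2 clusters discovered
print(labs)        # [0, 0, 1, 1]
```

The full MAP-DP algorithm additionally weighs each assignment by cluster counts under the Dirichlet process prior and by a probabilistic likelihood, which is what lets it handle non-continuous data and outliers; this sketch only conveys the "K grows with the data" principle.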

No MeSH data available.


pone.0162259.g003: Clustering performed by K-means and MAP-DP for spherical, synthetic Gaussian data with outliers. All clusters have the same radius and density. There are two outlier groups, with two outliers in each group. K-means fails to find a good solution where MAP-DP succeeds, because K-means places some of the outliers in a separate cluster, inappropriately using up one of the K = 3 clusters. This happens even though all the clusters are spherical, of equal radius, and well separated.

Mentions: The Euclidean distance entails that the centroid of a cluster is the average of the coordinates of the data points in that cluster (algorithm line 15). Euclidean space is linear, which implies that small changes in the data result in proportionately small changes to the position of the cluster centroid. This is problematic when there are outliers, that is, points which are unusually far from the cluster centroid by comparison to the rest of the points in that cluster. Such outliers can dramatically impair the results of K-means (see Fig 3 and the discussion in Section 5.3).
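The outlier sensitivity described above can be seen directly in the centroid update itself. The following minimal sketch, using hypothetical coordinates chosen for illustration, computes the K-means centroid (the coordinate-wise mean) for a tight cluster, then recomputes it after adding two distant outliers of the kind shown in Fig 3:

```python
def centroid(points):
    """K-means centroid update: the coordinate-wise mean of the points
    currently assigned to a cluster (algorithm line 15)."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

# A tight cluster centred on the origin...
cluster = [(-0.1, 0.0), (0.1, 0.0), (0.0, -0.1), (0.0, 0.1)]
# ...plus one distant outlier group, as in Fig 3.
outliers = [(10.0, 10.0), (10.5, 9.5)]

print(centroid(cluster))             # (0.0, 0.0): centroid sits at the cluster centre
print(centroid(cluster + outliers))  # roughly (3.42, 3.25): dragged far off by 2 of 6 points
```

Because every point contributes equally to the mean, just two outliers move the centroid far outside the cluster; in a full K-means run this is what can cause one of the K clusters to be spent on the outlier group.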

