Multi-objective differential evolution for automatic clustering with application to micro-array data analysis.
View Article:
PubMed Central - PubMed
Affiliation: Dept. of Electronics and Telecommunication Engg, Jadavpur University, Kolkata, India; E-Mails: kaushik_s1988@yahoo.com ; kundu.debarati@gmail.com ; sayan88tito@gmail.com ; swagatamdas19@yahoo.co.in.
ABSTRACT
This paper applies the Differential Evolution (DE) algorithm to the task of automatic fuzzy clustering in a Multi-objective Optimization (MO) framework. It compares the performances of two multi-objective variants of DE on the fuzzy clustering problem, where two conflicting fuzzy validity indices are simultaneously optimized. The resultant Pareto-optimal set of solutions from each algorithm consists of a number of non-dominated solutions, from which the user can choose the most promising ones according to the problem specifications. A real-coded representation of the search variables, accommodating a variable number of cluster centers, is used for DE. The performances of the multi-objective DE variants have also been contrasted with those of two of the most well-known MO clustering schemes, namely the Non-dominated Sorting Genetic Algorithm (NSGA-II) and Multi-Objective Clustering with an unknown number of Clusters K (MOCK). Experimental results on six artificial and four real-life datasets of varying complexity indicate that DE holds immense promise as a candidate algorithm for devising MO clustering schemes.
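The Pareto-optimal set described above can be extracted with a simple non-dominance test over the two (minimized) validity-index values of each candidate partitioning. The following is a minimal Python sketch of that idea, not the paper's implementation; the function names and the example objective pairs are illustrative assumptions.

```python
# Sketch: selecting the non-dominated (Pareto) set when two clustering
# validity indices are both minimized. Names and data are illustrative.

def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(objs):
    """Return indices of the solutions not dominated by any other solution."""
    return [i for i, a in enumerate(objs)
            if not any(dominates(b, a) for j, b in enumerate(objs) if j != i)]

# Hypothetical (Jq, XBq) pairs for five candidate partitionings:
objs = [(1.0, 0.9), (0.8, 1.1), (1.2, 1.3), (0.7, 1.5), (0.9, 1.0)]
front = pareto_front(objs)  # non-dominated trade-off solutions
```

The user would then inspect the partitionings on `front` and pick one according to the problem specifications, exactly because no member of the front improves one index without worsening the other.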
Mentions: Note that while computing the upj values using equation (12), if d(m⃗p, Z⃗j) is equal to zero for some p, then uij is set to zero for all i = 1, 2, …, k with i ≠ p, while upj is set equal to one. Subsequently the centers encoded in a vector are updated using the following assignment:

(15)  m⃗p = Σ_{j=1}^{n} (upj)^q ⋅ Z⃗j  /  Σ_{j=1}^{n} (upj)^q

and the cluster membership values are recomputed. Note that the XBq index combines a global situation (the numerator) with a particular one (the denominator). The numerator is similar to Jm, but the denominator contains a factor that gives the separation between the two closest clusters. Hence this factor considers only the worst case, i.e., which two clusters are closest to each other, and ignores the other partitions. A greater value of the denominator (and hence a lower value of the whole index) signifies a better partitioning. Thus it is evident that the Jq and XBq indices should be simultaneously minimized in order to obtain good solutions. The two terms in the numerator and the denominator of XBq may not attain their best values for the same partitioning when the data has complex and overlapping clusters, such as remote-sensing images and micro-array data. Figure 1 shows, just for the sake of illustration, the final Pareto-optimal front (composed of non-dominated solutions) of one run of the MODE algorithm on the artificial dataset_3 (described in the next section), demonstrating the contradictory nature of the Jq and XBq indices.
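The center update of equation (15) and the Xie-Beni style index can be sketched in a few lines of NumPy. This is a minimal illustration under assumed conventions, not the paper's code: data points Z are an n×d array, centers m a k×d array, memberships u a k×n array, and q > 1 is the fuzzifier.

```python
# Sketch of the center update in equation (15) and an XB-style index,
# under assumed array shapes: Z (n, d), m (k, d), u (k, n). Illustrative only.
import numpy as np

def update_centers(u, Z, q):
    """m_p = sum_j (u_pj)^q * Z_j / sum_j (u_pj)^q  -- equation (15)."""
    w = u ** q                                   # (k, n) weighted memberships
    return (w @ Z) / w.sum(axis=1, keepdims=True)

def xie_beni(u, Z, m, q):
    """Global compactness (numerator) over the worst-case separation:
    the squared distance between the two closest centers (denominator)."""
    n = Z.shape[0]
    d2 = ((Z[None, :, :] - m[:, None, :]) ** 2).sum(axis=2)  # (k, n) sq. distances
    compactness = ((u ** q) * d2).sum()
    # minimum squared separation between any two distinct centers
    sep = ((m[None, :, :] - m[:, None, :]) ** 2).sum(axis=2)
    np.fill_diagonal(sep, np.inf)                # ignore self-separation
    return compactness / (n * sep.min())
```

The denominator's `sep.min()` is exactly the "worst case" factor discussed above: only the two closest clusters contribute, so a larger minimum separation drives the whole index down, signalling a better partitioning.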