General regression and representation model for classification.

Qian J, Yang J, Xu Y - PLoS ONE (2014)

Bottom Line: In real-world applications, this assumption does not hold. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.

View Article: PubMed Central - PubMed

Affiliation: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China.

ABSTRACT
Recently, regularized coding-based classification methods (e.g. SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of prior information (e.g. the correlations between representation residuals and representation coefficients) and specific information (a weight matrix of image pixels) to enhance classification performance. GRR uses generalized Tikhonov regularization and K Nearest Neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.
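The abstract's core ingredient, generalized Tikhonov regularization, admits a closed-form solution. The sketch below is a generic illustration under assumed notation, not the paper's implementation: `W` (residual weighting), `Q` (coefficient prior matrix), and the prior mean `x0` are placeholders standing in for the quantities GRR learns from the training data.

```python
import numpy as np

def generalized_tikhonov(A, y, W, Q, x0):
    """Minimize (y - Ax)^T W (y - Ax) + (x - x0)^T Q (x - x0).

    Closed form: x = x0 + (A^T W A + Q)^{-1} A^T W (y - A x0).
    """
    AtW = A.T @ W
    return x0 + np.linalg.solve(AtW @ A + Q, AtW @ (y - A @ x0))

# Sanity check: with W = I, Q = lam * I, x0 = 0, this reduces to
# ordinary ridge regression, x = (A^T A + lam I)^{-1} A^T y.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
y = rng.standard_normal(20)
lam = 0.1
x = generalized_tikhonov(A, y, np.eye(20), lam * np.eye(5), np.zeros(5))
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ y)
assert np.allclose(x, x_ridge)
```

Choosing a non-identity `W` is what lets the model account for correlated, heteroskedastic residuals, which is the gap in SRC/CRC that the paper identifies.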


pone-0115214-g002: An example showing a partial correlation matrix (only the top 50 of 121 entries selected) of the representation residuals.

Mentions: Most previous works, including RSC, SRC, CRC, CESR, etc., assume that the representation residuals are homoskedastic and mutually uncorrelated. In real-world applications, these assumptions do not hold. In particular, when the elements of the representation residuals have unequal variances and are correlated, the variance-covariance matrix of the representation residuals is no longer a scalar matrix, and hence there is no guarantee that the least-squares estimator is the most efficient within the class of linear unbiased estimators [46], [47]. Here, we give an example to demonstrate this point. Fig. 2 shows the example, where 200 samples of each class are selected from the CENPARMI dataset and each sample is coded on its top 200 neighbors drawn from the remaining samples. The correlation matrix map of the representation residuals is shown in Fig. 2, from which we can see that the representation residuals are indeed correlated.
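The diagnostic behind Fig. 2 can be sketched with synthetic data: stack residual vectors as rows of a matrix and inspect their correlation matrix. This is a minimal illustration with simulated residuals, not the authors' CENPARMI experiment; the mixing matrix `L` is an assumed device to induce correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 10                       # n residual vectors, each of dimension d
L = rng.standard_normal((d, d))      # mixing matrix: makes components correlated
E = rng.standard_normal((n, d)) @ L  # rows = simulated representation residuals

C = np.corrcoef(E, rowvar=False)     # d x d correlation matrix, as mapped in Fig. 2
# Unit diagonal and symmetry are guaranteed for a correlation matrix;
# sizeable off-diagonal entries indicate the residuals are correlated.
assert np.allclose(np.diag(C), 1.0)
assert np.allclose(C, C.T)
```

If the homoskedastic-uncorrelated assumption held, the off-diagonal entries would be near zero; nonnegligible off-diagonal structure is exactly what motivates replacing ordinary least squares with a weighted formulation.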

