General regression and representation model for classification.

Qian J, Yang J, Xu Y - PLoS ONE (2014)

Bottom Line: In real-world applications, this assumption does not hold. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.


Affiliation: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China.

ABSTRACT
Recently, regularized coding-based classification methods (e.g., SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of the prior information (e.g., the correlations between representation residuals and between representation coefficients) and the specific information (the weight matrix of image pixels) to enhance classification performance. GRR uses generalized Tikhonov regularization and K nearest neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.
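The generalized Tikhonov regularization mentioned above admits a closed-form solution. The following sketch (function name and default values are illustrative, not from the paper) shows the standard form, with P weighting the residual term and Q weighting the coefficient term:

```python
import numpy as np

def grr_coefficients(X, y, P, Q, lam=0.01):
    # Generalized Tikhonov objective:
    #   min_b  (y - X b)^T P (y - X b)  +  lam * b^T Q b
    # Setting the gradient to zero gives the closed form:
    #   b = (X^T P X + lam * Q)^{-1} X^T P y
    A = X.T @ P @ X + lam * Q
    return np.linalg.solve(A, X.T @ P @ y)
```

Note that with P and Q both set to the identity matrix, this reduces to ordinary ridge regression; the prior information in GRR enters precisely through non-identity choices of P and Q.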


pone-0115214-g003: The example shows the importance of P and Q for classification. (a) The classification result is correct when our model B-GRR uses the prior information P and Q, whereas CRC yields the wrong result. (b) The corresponding representation coefficients of B-GRR and CRC.

Mentions: In this section, we further analyze the role of P and Q in GRR. P is a symmetric matrix that is learned from the training set and can be decomposed as P = R^T R, where R is a non-singular transformation matrix used to eliminate the correlations between representation residuals. The matrix Q in the regularization term is also learned from the training set. The proposed model uses the Mahalanobis distance instead of the Euclidean distance to constrain the representation coefficients. It is believed that the Mahalanobis distance can provide better regularization than the Euclidean distance, since correlations exist between representation coefficients. Fig. 3(a) gives an example showing the role of P and Q. In this example, we represent the test sample "1" from the CENPARMI database and illustrate the reconstruction residual of each class. Based on the minimal class residual criterion, B-GRR, using the prior information contained in P and Q, achieves the correct result, while CRC fails without this information.
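The minimal class residual criterion described above can be sketched as follows. The decomposition P = R^T R means r^T P r = ||R r||^2, so measuring each class residual after applying R is equivalent to using the P-weighted (Mahalanobis-style) norm. Function and variable names here are hypothetical, not from the paper:

```python
import numpy as np

def classify_min_residual(X, labels, y, beta, R):
    # Assign y to the class whose training columns best reconstruct it,
    # with residuals measured in the whitened space r -> R r, so that
    # ||R r||^2 = r^T P r for P = R^T R.
    residuals = {}
    for c in set(labels):
        idx = [i for i, lab in enumerate(labels) if lab == c]
        recon = X[:, idx] @ beta[idx]          # class-c partial reconstruction
        residuals[c] = np.linalg.norm(R @ (y - recon))
    return min(residuals, key=residuals.get)   # minimal class residual wins
```

With R set to the identity, this reduces to the plain Euclidean residual rule used by CRC; a non-trivial R is what lets the prior information about residual correlations change the decision, as in the Fig. 3 example.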

