General regression and representation model for classification.

Qian J, Yang J, Xu Y - PLoS ONE (2014)

Bottom Line: In real-world applications, this assumption does not hold. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.

View Article: PubMed Central - PubMed

Affiliation: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China.

ABSTRACT
Recently, regularized coding-based classification methods (e.g. SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations among the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only retains the advantages of CRC, but also makes full use of prior information (e.g. the correlations between representation residuals and representation coefficients) and sample-specific information (a weight matrix over image pixels) to enhance classification performance. GRR uses generalized Tikhonov regularization and K nearest neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by an iterative algorithm that updates the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.
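To illustrate the generalized Tikhonov regularization step mentioned above: the coding vector minimizing ||y − Xa||² + ||Γa||² has the standard closed form a = (XᵀX + ΓᵀΓ)⁻¹Xᵀy, and with Γ = √λ·I this reduces to the ridge-style solution used by CRC. The NumPy sketch below is not the authors' implementation — the toy data and the choice of `Gamma` (standing in for whatever prior-derived regularizer GRR learns) are illustrative assumptions — it only shows the generic closed form:

```python
import numpy as np

def tikhonov_code(X, y, Gamma):
    """Solve min_a ||y - X a||^2 + ||Gamma a||^2 in closed form:
    a = (X^T X + Gamma^T Gamma)^{-1} X^T y."""
    A = X.T @ X + Gamma.T @ Gamma
    return np.linalg.solve(A, X.T @ y)

# Toy example: 5-dimensional samples, a dictionary of 3 training atoms.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))   # training dictionary (columns = atoms)
y = rng.standard_normal(5)        # test sample to be coded
Gamma = 0.1 * np.eye(3)           # Gamma = sqrt(lam) * I recovers ridge/CRC-style coding
a = tikhonov_code(X, y, Gamma)
```

In GRR the regularizer is learned from the training data (via the K-nearest-neighbor prior) rather than fixed to a scaled identity as in this sketch.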



pone-0115214-g014: The recognition rate curves of R-GRR versus the variation of the parameter K in the different experiments. (a) and (b) test images without occlusion; (c) test images with sunglasses; (d) test images with a scarf; (e) test images with sunglasses (sg-X) or a scarf (sc-X) in session X; (f) test images with block occlusion (50%); (g) test images with pixel corruption (90%).

Mentions: The performance of the proposed methods R-GRR and B-GRR under different parameter settings is evaluated in several recognition scenarios. The experimental settings are the same as those in Sections 5.2 and 5.3. In each experiment, we vary one parameter while fixing the others. Fig. 13 plots the recognition rates versus the parameter K on the CENPARMI and NUST603 databases; it shows that B-GRR achieves better recognition rates with a smaller K. Fig. 14 plots the recognition rates versus K in the face recognition experiments. Fig. 14 (a) and (b) show that, for face images without occlusion, setting K either relatively small or relatively large compared with the total number of training samples leads to higher performance. Fig. 14 (c) and (e) show that the recognition rates are not sensitive to variations of K. In Fig. 14 (f), the proposed method achieves its best results with K = 200 for test images with block occlusion, whereas R-GRR performs best with K = 550 for test images with pixel corruption, as shown in Fig. 14 (g). Generally speaking, a relatively small K suffices when the feature dimension is much lower than the number of training samples, while a relatively large K is needed when the feature dimension is much higher than the number of training samples. In this paper, we employ a leave-one-out cross-validation strategy to determine K in the training stage: each training sample in turn serves as the query while the remaining training samples form the gallery set, which yields a recognition rate over all training samples. We then choose the K that achieves the best recognition rate.
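The leave-one-out selection of K described above can be sketched as follows. This is a simplified stand-in that scores each candidate K with a plain K-nearest-neighbor majority vote rather than the full R-GRR classifier, and the function names and data are illustrative assumptions, not the paper's code:

```python
import numpy as np

def loo_accuracy(features, labels, K):
    """Leave-one-out accuracy of a K-nearest-neighbor majority vote:
    each sample in turn is the query; the rest form the gallery."""
    n = len(labels)
    correct = 0
    for i in range(n):
        d = np.linalg.norm(features - features[i], axis=1)
        d[i] = np.inf                       # exclude the query itself
        nn = np.argsort(d)[:K]              # indices of the K nearest gallery samples
        vals, counts = np.unique(labels[nn], return_counts=True)
        correct += vals[np.argmax(counts)] == labels[i]
    return correct / n

def select_K(features, labels, candidates):
    """Return the candidate K with the best leave-one-out accuracy."""
    return max(candidates, key=lambda K: loo_accuracy(features, labels, K))
```

For example, `select_K(train_features, train_labels, range(1, 600, 50))` would scan a grid of K values and return the one with the highest leave-one-out recognition rate, mirroring the selection procedure described above.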

