Competition improves robustness against loss of information.

Kermani Kolankeh A, Teichmann M, Hamker FH - Front Comput Neurosci (2015)

Bottom Line: Therefore, we compared four methods employing different competition mechanisms, namely, independent component analysis, non-negative matrix factorization with sparseness constraint, predictive coding/biased competition, and a Hebbian neural network with lateral inhibitory connections. The kind of competition mechanism also plays an important role in robustness. Global feedback inhibition, as employed in predictive coding/biased competition, has an advantage over local lateral inhibition learned by an anti-Hebb rule.

View Article: PubMed Central - PubMed

Affiliation: Department of Computer Science, Chemnitz University of Technology, Chemnitz, Germany.

ABSTRACT
A substantial number of works have aimed at modeling the receptive field properties of the primary visual cortex (V1). Their evaluation criterion is usually the similarity of the model response properties to the responses recorded from biological organisms. However, as several algorithms were able to demonstrate some degree of similarity to biological data based on the existing criteria, we focus on robustness against loss of information in the form of occlusions as an additional constraint for better understanding the algorithmic level of early vision in the brain. We investigate the influence of competition mechanisms on this robustness. Therefore, we compared four methods employing different competition mechanisms, namely, independent component analysis, non-negative matrix factorization with sparseness constraint, predictive coding/biased competition, and a Hebbian neural network with lateral inhibitory connections. Each of those methods is known to be capable of developing receptive fields comparable to those of V1 simple cells. Since directly measuring the robustness of methods with simple-cell-like receptive fields against occlusion is difficult, we measure robustness via classification accuracy on the MNIST handwritten digit dataset. For this, we trained all methods on the MNIST training set and tested them on MNIST test sets with different levels of occlusion. We observe that methods employing competitive mechanisms are more robust against loss of information. The kind of competition mechanism also plays an important role in robustness. Global feedback inhibition, as employed in predictive coding/biased competition, has an advantage over local lateral inhibition learned by an anti-Hebb rule.
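The evaluation pipeline described in the abstract, learning features on clean MNIST digits and then testing a classifier readout on occluded test digits, can be sketched as follows. This is a minimal illustration, not the authors' code: it uses scikit-learn's FastICA in place of the four compared models, a linear SVM readout, and a simple random-square occlusion scheme; the component count, subset sizes, and occlusion parameters are assumptions made for the example.

```python
# Minimal sketch (not the paper's implementation): learn FastICA features on
# clean MNIST digits, train a linear readout, and measure classification
# accuracy under increasing occlusion. The occlusion scheme (zeroing one
# randomly placed square patch per image) is illustrative only.
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.decomposition import FastICA
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def occlude(images, patch=10, side=28):
    """Zero out one randomly placed patch x patch square in each image."""
    out = images.reshape(-1, side, side).copy()
    for img in out:
        r, c = rng.integers(0, side - patch, size=2)
        img[r:r + patch, c:c + patch] = 0.0
    return out.reshape(len(images), -1)

# Load MNIST (70,000 digits, 28x28); use subsets to keep the sketch fast.
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0
X_train, y_train = X[:10000], y[:10000]
X_test, y_test = X[60000:62000], y[60000:62000]

# Learn 100 components on clean training digits.
ica = FastICA(n_components=100, max_iter=500, random_state=0)
H_train = ica.fit_transform(X_train)

# Linear readout on the component activations.
clf = LinearSVC(dual=False).fit(H_train, y_train)

# Robustness measure: accuracy on clean vs. increasingly occluded test digits.
for patch in (0, 6, 10, 14):
    X_occ = X_test if patch == 0 else occlude(X_test, patch=patch)
    acc = clf.score(ica.transform(X_occ), y_test)
    print(f"occlusion patch {patch:2d}px  accuracy {acc:.3f}")
```

The accuracy drop as the occluding patch grows serves as the robustness measure; in the paper, the same readout-under-occlusion comparison is applied to all four feature-learning methods rather than to FastICA alone.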





Figure 4: Visualization of the feedforward weights and the receptive fields of 100 units after training. Off-weights were subtracted from on-weights, and each plot is scaled so that white denotes the maximum value and black the minimum. (A) The feedforward weight matrices of the HNN and (B) its reverse correlation. (C) The component matrices of FastICA and (D) its reverse correlation. (E) The component matrices of NMFSC and (F) its reverse correlation. (G) The feedforward weights of PC/BC and (H) its reverse correlation.

Mentions: To verify whether the models represent the input data in a comparable way, we visualize the weight vectors and receptive fields of 100 units for each model (cf. Section 2.4). To visualize the weight vectors of the Hebbian neural network (HNN), we used the feedforward weight matrices, which show the driving stimulus of the neurons (Figure 4A). For FastICA, we visualize the mixing matrix V (Figure 4C). For NMFSC, we visualize the matrix V of basis vectors (Figure 4E). For PC/BC, we show the feedforward matrices (Figure 4G). For each method we also show the receptive fields estimated by reverse correlation (Figures 4B,D,F,H), which differ little from the visualizations of the weight matrices. All methods develop receptive fields with holistic forms of digits. However, in NMFSC not all units show digit-like shapes, which may result from the chosen level of sparseness, as mentioned in the methods.
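The two visualizations compared in Figure 4, tiled weight vectors and reverse-correlation receptive fields, can be sketched roughly as below. This is an illustrative example, not the paper's code: `W` stands in for any of the learned (100, 784) weight or component matrices, and the rectified linear response model, the white-noise stimulus distribution, and the stimulus count are assumptions for the example.

```python
# Illustrative sketch: (1) tile 100 weight vectors as 28x28 images, and
# (2) estimate each unit's receptive field by reverse correlation, i.e. the
# response-weighted average of white-noise stimuli.
import numpy as np
import matplotlib.pyplot as plt

def tile(vectors, side=28, grid=10):
    """Arrange grid x grid weight vectors into one large image."""
    canvas = np.zeros((grid * side, grid * side))
    for k, v in enumerate(vectors[:grid * grid]):
        r, c = divmod(k, grid)
        canvas[r * side:(r + 1) * side, c * side:(c + 1) * side] = v.reshape(side, side)
    return canvas

def reverse_correlation(response_fn, n_inputs=784, n_stimuli=20000, seed=0):
    """Estimate receptive fields as the response-weighted average of noise stimuli."""
    rng = np.random.default_rng(seed)
    stimuli = rng.random((n_stimuli, n_inputs))       # white-noise images
    responses = response_fn(stimuli)                  # shape (n_stimuli, n_units)
    return responses.T @ stimuli / responses.sum(axis=0)[:, None]

# Placeholder weights and a simple rectified linear response model, for illustration.
W = np.random.default_rng(1).normal(size=(100, 784))
rf = reverse_correlation(lambda s: np.maximum(s @ W.T, 0.0))

fig, axes = plt.subplots(1, 2, figsize=(10, 5))
axes[0].imshow(tile(W), cmap="gray")
axes[0].set_title("weight vectors")
axes[1].imshow(tile(rf), cmap="gray")
axes[1].set_title("reverse-correlation RFs")
for ax in axes:
    ax.axis("off")
plt.show()
```

With a linear response model the reverse-correlation estimate closely mirrors the weight vectors themselves, which is consistent with the observation above that the two visualizations differ little.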

