Visual attention model based on statistical properties of neuron responses.

Duan H, Wang X - Sci Rep (2015)

Bottom Line: Valuable regions are highlighted while redundant backgrounds are suppressed in the saliency maps obtained by the proposed model. Comparative results reveal that the proposed model outperforms several state-of-the-art models. This study provides insights into neuron-response-based saliency detection and may shed light on the neural mechanisms of early visual cortices underlying bottom-up visual attention.


Affiliation: [1] State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, P. R. China; [2] Science and Technology on Aircraft Control Laboratory, School of Automation Science and Electronic Engineering, Beihang University, Beijing 100191, P. R. China.

ABSTRACT
Visual attention is a mechanism of the visual system that selects relevant objects from a scene. Interactions among neurons in multiple cortical areas are considered to be involved in attentional allocation; however, the characteristics of the encoded features and of the neuron responses in these attention-related cortices remain unclear. This study therefore aims to demonstrate that unusual regions, which arouse more attention, generally evoke distinctive neuron responses. We hypothesize that visual saliency is obtained on the basis of neuron responses to contexts in natural scenes. To test this hypothesis, a bottom-up visual attention model based on the self-information of neuron responses is proposed. Four different color spaces are adopted, and a novel entropy-based combination scheme is designed to make full use of color information. In the saliency maps obtained by the proposed model, valuable regions are highlighted while redundant backgrounds are suppressed. Comparative results reveal that the proposed model outperforms several state-of-the-art models. This study provides insights into neuron-response-based saliency detection and may shed light on the neural mechanisms of early visual cortices underlying bottom-up visual attention.
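The abstract's two computational ideas, saliency from the self-information of neuron responses and an entropy-based scheme for combining maps from several color spaces, can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the response matrix is a random stand-in, and the inverse-entropy weighting is only one plausible reading of the combination scheme described in the abstract.

```python
import numpy as np

def self_information_saliency(responses, n_bins=64):
    """Saliency as the summed self-information (-log p) of each patch's
    neuron responses, with p estimated per response dimension from a
    histogram over all patches. Rare responses yield high saliency."""
    n_patches, n_dims = responses.shape
    saliency = np.zeros(n_patches)
    for d in range(n_dims):
        counts, edges = np.histogram(responses[:, d], bins=n_bins)
        probs = (counts + 1) / (counts.sum() + n_bins)  # Laplace smoothing
        idx = np.digitize(responses[:, d], edges[1:-1])  # bin index per patch
        saliency += -np.log(probs[idx])
    return saliency

def entropy_weighted_combination(maps, n_bins=32):
    """Fuse saliency maps from different color spaces, weighting each map
    by the inverse entropy of its value distribution (a low-entropy map
    has concentrated saliency and gets more weight)."""
    weights = []
    for m in maps:
        hist, _ = np.histogram(m, bins=n_bins)
        p = hist[hist > 0] / hist.sum()
        weights.append(1.0 / (-(p * np.log(p)).sum() + 1e-12))
    weights = np.asarray(weights) / np.sum(weights)
    return sum(w * m for w, m in zip(weights, maps))

rng = np.random.default_rng(0)
responses = rng.normal(size=(500, 16))  # stand-in: 500 patches, 16 neurons
responses[0] += 6.0                     # make one patch statistically unusual
s = self_information_saliency(responses)
print(s.argmax())                       # 0: the unusual patch is most salient

fused = entropy_weighted_combination([rng.random((8, 8)) for _ in range(4)])
```

The convex weights make the fused map stay within the range of its inputs, so the combination never amplifies any single color space beyond its own saliency values.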

Figure 3: Statistical properties of neuron responses to retina images. (a), (e), (i) are the original retina images; (b), (f), (g) are reconstructions of the original retina images; (c), (g), (l), from top to bottom, are the 17th, 33rd, 49th, 65th, 81st, and 97th dimensions of the neuron responses projected onto the corresponding pixels of the original retina images; (d), (h), (m), from top to bottom, are statistical histograms of the 17th, 33rd, 49th, 65th, 81st, and 97th dimensions of the neuron responses, with their outlines fitted by red curves. The horizontal axes represent the values of the neuron responses and the vertical axes the numbers of neurons with the corresponding responses. The original images in (a), (e), (i) were taken by X.H. Wang with a Canon IXUS 125HS digital camera.
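The histograms and fitted red outlines in the figure's last column can be reproduced in miniature. Sparse neuron responses typically cluster sharply around zero with heavy tails, so a Laplace density is used here as a stand-in for the fitted curve; both the synthetic data and the choice of distribution are assumptions, since this excerpt does not name the distribution the authors fitted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-in for one dimension of neuron responses: sparse
# responses peak tightly at zero with heavy tails.
responses = rng.laplace(loc=0.0, scale=0.5, size=2000)

# Histogram of the response values (the bars in the figure's histograms).
density, edges = np.histogram(responses, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Fit a Laplace density and evaluate it at the bin centers; this plays
# the role of the red outline curve fitted to each histogram.
loc, scale = stats.laplace.fit(responses)
outline = stats.laplace.pdf(centers, loc=loc, scale=scale)
```

With 2000 samples the fitted `loc` and `scale` land close to the generating parameters (0.0 and 0.5), and `outline` traces the histogram's peaked shape.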
© Copyright Policy - open-access


Mentions: As shown in Figure 2, patches sampled from the input image serve as stimuli to neurons with the trained connection weights. The original image can be reconstructed as the product of the neuron responses and the learned connection weights, and this simulation verifies the accuracy of the neuron responses. To investigate the statistical properties of the neuron responses, sampled responses to three different scenes are shown in Figure 3.
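The reconstruction step described above, recovering the image as the product of neuron responses and learned connection weights, can be sketched with a least-squares encoder standing in for the trained network. The dimensions and the random weights here are illustrative assumptions, not the paper's trained values.

```python
import numpy as np

rng = np.random.default_rng(2)
patch_dim, n_neurons, n_patches = 64, 100, 10  # e.g. 8x8 patches, 100 neurons

# Stand-ins for the learned connection weights (one column per neuron)
# and for patches sampled from the input image (one column per patch).
weights = rng.normal(size=(patch_dim, n_neurons))
patches = rng.normal(size=(patch_dim, n_patches))

# Neuron responses: coefficients encoding each patch under the weights.
# (Least squares stands in for the paper's trained encoding step.)
responses, *_ = np.linalg.lstsq(weights, patches, rcond=None)

# Reconstruction as the product of the weights and the responses.
reconstruction = weights @ responses
print(np.allclose(reconstruction, patches))  # True: 100 neurons span the 64-dim patch space
```

Because there are more neurons than patch dimensions, the code is overcomplete and the reconstruction is exact up to numerical error, which is what makes the reconstruction check a meaningful test of the responses.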

