Visual Saliency Models for Text Detection in Real World.

Gao R, Uchida S, Shahab A, Shafait F, Frinken V - PLoS ONE (2014)

Bottom Line: In the first stage, Itti's model is used to calculate the saliency map, and Otsu's global thresholding algorithm is applied to extract the salient region of interest. In the second stage, Itti's model is applied to that salient region to calculate the final saliency map. An experimental evaluation demonstrates that the proposed model outperforms Itti's model in terms of captured scene texts.


Affiliation: Department of Advanced Information Technology, Kyushu University, Fukuoka, Fukuoka, Japan.

ABSTRACT
This paper evaluates the degree of saliency of texts in natural scenes using visual saliency models. A large-scale scene image database with pixel-level ground truth is created for this purpose. Using this database and five state-of-the-art models, visual saliency maps that represent the degree of saliency of the objects are calculated. The receiver operating characteristic curve is employed to evaluate the saliency of scene texts as estimated by these models. A visualization of the distribution of scene texts and non-texts is given in the space constructed by three kinds of saliency maps, calculated using Itti's visual saliency model with intensity, color, and orientation features. This visualization indicates that text characters are more salient than their non-text neighbors and can be distinguished from the background; therefore, scene texts can be extracted from scene images. With this in mind, a new visual saliency architecture, named the hierarchical visual saliency model, is proposed. The hierarchical visual saliency model is based on Itti's model and consists of two stages. In the first stage, Itti's model is used to calculate a saliency map, and Otsu's global thresholding algorithm is applied to extract the salient region of interest. In the second stage, Itti's model is applied to that salient region to calculate the final saliency map. An experimental evaluation demonstrates that the proposed model outperforms Itti's model in terms of captured scene texts.
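
The two-stage pipeline described in the abstract can be summarized in a short sketch. This is only an illustration of the hierarchical idea, not the authors' implementation: `itti_like_saliency` is a hypothetical stand-in for Itti's full intensity/color/orientation model, and cropping the bounding box of the Otsu-thresholded mask is one plausible reading of "extract the salient region".

```python
import numpy as np
import cv2


def itti_like_saliency(image_bgr):
    """Crude stand-in for Itti's saliency model (assumption: any per-pixel
    saliency map in [0, 1] would fit here). Itti's intensity/color/orientation
    pyramids are not reproduced; this only measures local intensity contrast."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    blurred = cv2.GaussianBlur(gray, (0, 0), sigmaX=8)
    contrast = np.abs(gray - blurred)
    return contrast / (contrast.max() + 1e-6)


def hierarchical_saliency(image_bgr):
    # Stage 1: saliency map over the whole image.
    stage1 = itti_like_saliency(image_bgr)

    # Otsu's global threshold separates the salient region from the rest.
    stage1_u8 = np.uint8(255 * stage1)
    _, mask = cv2.threshold(stage1_u8, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Crop the bounding box of the salient region (one plausible reading of
    # "extract the salient region"; the exact rule is not pinned down here).
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return stage1
    top, bottom = ys.min(), ys.max() + 1
    left, right = xs.min(), xs.max() + 1

    # Stage 2: re-run the saliency model on the salient region only,
    # then paste the result back into a full-size map.
    stage2 = itti_like_saliency(image_bgr[top:bottom, left:right])
    final = np.zeros_like(stage1)
    final[top:bottom, left:right] = stage2
    return final


# Example usage (hypothetical file path):
# img = cv2.imread("scene.jpg")
# sal = hierarchical_saliency(img)
```
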


pone-0114539-g002: 50 examples of images randomly selected from the database. Each example consists of an input image (left) and a pixel-level ground truth image (right). (Copyrights of these figures are listed in the Acknowledgments.)

Mentions: In this paper, we aim at proving the saliency of scene texts via a large image database containing 3018 scene images with 96844 text characters in total. Fig. 2 shows some examples of natural scene images and their corresponding pixel-level ground truth images, randomly selected from the database. With this database and five state-of-the-art visual saliency models, saliency maps are calculated. Saliency values of pixels belonging to texts and non-texts are then compared via a quantitative evaluation: if text pixels have higher saliency values, scene texts themselves are proven to be salient. The quantitative evaluation is performed using receiver operating characteristic (ROC) analysis.
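
The ROC comparison of text and non-text pixels can be sketched as follows, assuming a saliency map and a pixel-level ground truth mask of the same size. The helper name `text_saliency_auc` is hypothetical, and scikit-learn's roc_curve/auc are used here as a stand-in for whatever ROC tooling the authors employed.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc


def text_saliency_auc(saliency_map, text_mask):
    """ROC analysis of how salient text pixels are versus non-text pixels.

    saliency_map : 2-D float array, higher values mean more salient.
    text_mask    : 2-D boolean array from the pixel-level ground truth,
                   True where a pixel belongs to a text character.
    Returns the area under the ROC curve (1.0 = text pixels always more
    salient than non-text pixels, 0.5 = no separation).
    """
    scores = saliency_map.ravel()
    labels = text_mask.ravel().astype(int)
    fpr, tpr, _ = roc_curve(labels, scores)
    return auc(fpr, tpr)
```
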

