A unified account of perceptual layering and surface appearance in terms of gamut relativity.

Vladusich T, McDonnell MD - PLoS ONE (2014)

Bottom Line: Such percepts are partly based on the way physical surfaces and media reflect and transmit light, and partly on the way the human visual system processes the complex patterns of light reaching the eye. Despite a great deal of previous experimental and modelling work on layered representation, however, a unified computational model of key perceptual demonstrations is still lacking. Here we present the first general computational model of perceptual layering and surface appearance--based on a broader theoretical framework called gamut relativity--that is consistent with these demonstrations.

View Article: PubMed Central - PubMed

Affiliation: Institute for Telecommunications Research, University of South Australia, Mawson Lakes, 5095, Australia; Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA, United States of America.

ABSTRACT
When we look at the world--or a graphical depiction of the world--we perceive surface materials (e.g. a ceramic black and white checkerboard) independently of variations in illumination (e.g. shading or shadow) and atmospheric media (e.g. clouds or smoke). Such percepts are partly based on the way physical surfaces and media reflect and transmit light, and partly on the way the human visual system processes the complex patterns of light reaching the eye. One way to understand how these percepts arise is to assume that the visual system parses patterns of light into layered perceptual representations of surfaces, illumination and atmospheric media, one seen through another. Despite a great deal of previous experimental and modelling work on layered representation, however, a unified computational model of key perceptual demonstrations is still lacking. Here we present the first general computational model of perceptual layering and surface appearance--based on a broader theoretical framework called gamut relativity--that is consistent with these demonstrations. The model (a) qualitatively explains striking effects of perceptual transparency, figure-ground separation and lightness, (b) quantitatively accounts for the role of stimulus- and task-driven constraints on perceptual matching performance, and (c) unifies two prominent theoretical frameworks for understanding surface appearance. The model thereby provides novel insights into the remarkable capacity of the human visual system to represent and identify surface materials, illumination and atmospheric media, which can be exploited in computer graphics applications.

No MeSH data available.


pone-0113159-g003: Two examples of image segmentations used to guide the computation of region luminance and contrast. (A) Adelson checkerboard image [1], modified with permission under the Creative Commons Attribution License. (B) Segmentation computed with a standard computer vision algorithm [84] (parameters: , ). (C) The algorithm returns region labels for each image region. (D) Region labels enable the calculation of mean pixel or luminance values within each segmented region. (E-H) Same as above, except applied to a simple version of the Anderson-Winawer display (adapted from http://www.psy.ritsumei.ac.jp/~akitaoka/AIC2009.html with permission).

Mentions: Fig. 3 illustrates how a standard segmentation algorithm from the computer vision literature [84] captures the intuition of a suitable segmentation for computing regional luminance and contrast in our analysis. The algorithm segments the Adelson checkerboard image and a simplified version of the Anderson-Winawer display into labelled regions, within which mean pixel (luminance) values are calculated. The segmented regions are thus characterised by differences in mean luminance, and each individual region is immediately surrounded by one or more regions with a different mean luminance value.
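The two-stage pipeline described above (segment the image into labelled regions, then average pixel values within each region) can be sketched as follows. This is a minimal illustration, not code from the paper: the graph-based segmentation algorithm of [84] is replaced by a trivial threshold labelling, and `region_means` is a hypothetical helper name.

```python
import numpy as np

# Toy two-region "display": dark left half, bright right half.
image = np.zeros((4, 6), dtype=float)
image[:, 3:] = 0.8

# Stand-in for the segmentation step of [84]: here a simple threshold
# assigns each pixel a region label (0 = dark region, 1 = bright region).
labels = (image > 0.5).astype(int)

def region_means(image, labels):
    """Mean pixel (luminance) value within each labelled region."""
    flat = labels.ravel()
    sums = np.bincount(flat, weights=image.ravel())   # per-label pixel sums
    counts = np.bincount(flat)                        # per-label pixel counts
    return sums / counts

means = region_means(image, labels)
# means[k] is the mean luminance of region k
```

In the paper's analysis the label map would instead come from the graph-based segmentation algorithm, and the resulting per-region means would feed the computation of regional luminance and contrast.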
