Perceptual influence of elementary three-dimensional geometry: (2) fundamental object parts.

Tamosiunaite M, Sutterlütti RM, Stein SC, Wörgötter F - Front Psychol (2015)

Bottom Line: Additionally, we controlled for segmentation reliability and found a clear trend that reliable convex segments have a high degree of nameability. In addition, we observed that other image-segmentation methods do not yield nameable entities. This indicates that convex-concave surface transitions may indeed form the basis for dividing objects into meaningful entities.

View Article: PubMed Central - PubMed

Affiliation: Faculty of Physics - Biophysics and Bernstein Center for Computational Neuroscience, University of Göttingen, Göttingen, Germany; Department of Informatics, Vytautas Magnus University, Kaunas, Lithuania.

ABSTRACT
Objects usually consist of parts, and the question arises whether there are perceptual features that allow breaking an object down into its fundamental parts without any additional (e.g., functional) information. As in the first paper of this sequence, we focus on the division of our world along convex-to-concave surface transitions. Here we use machine vision to produce convex segments from 3D scenes. We assume that a fundamental part is one that we can easily name while at the same time admitting no natural subdivision into smaller parts. Hence, in this experiment we presented the computer-vision-generated segments to our participants and asked whether they could identify and name them. Additionally, we controlled for segmentation reliability and found a clear trend that reliable convex segments have a high degree of nameability. In addition, we observed that other image-segmentation methods do not yield nameable entities. This indicates that convex-concave surface transitions may indeed form the basis for dividing objects into meaningful entities. It appears that other or further subdivisions do not carry such a strong semantic link to our everyday language, as there are no names for them.

No MeSH data available.



Figure 3: Left panels show all visual scenes (RGB images only) used for Experiment 2 of this study, together with their segmentations. Scenes were segmented by a state-of-the-art, bottom-up segmentation algorithm based on color similarities (Ben Salah et al., 2011), and the results show that these segments rarely correspond to objects (middle panels). Note that it is possible to train classifiers with object models or partial models to segment complex, compound objects in such scenes as well (Richtsfeld et al., 2012; Silberman et al., 2012; Ückermann et al., 2012); this, however, requires a human-defined training set. In contrast, we are strictly concerned here with model-free, bottom-up object segmentation. The 3D-segmentation used here, back-projected onto the images, is shown in the right panels.
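To illustrate why bottom-up color-based segmentation of the kind described above tends to split or merge objects, the following toy sketch (an assumption for illustration, not Ben Salah et al.'s algorithm) grows regions by flood fill, merging neighboring pixels while their color difference stays below a threshold. Because merging is driven by color alone, a change in illumination across a single object splits it into several segments:

```python
# Toy sketch of bottom-up, color-similarity segmentation by region growing.
# Pixels are merged while the color difference to the current pixel stays
# below a threshold -- which is exactly why such segments follow illumination
# and texture rather than object boundaries.
from collections import deque

def segment_by_color(img, thresh):
    """img: 2D grid of scalar 'colors'. Returns a grid of segment labels."""
    h, w = len(img), len(img[0])
    labels = [[-1] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue  # pixel already belongs to a segment
            labels[sy][sx] = next_label
            q = deque([(sy, sx)])
            while q:  # flood fill from the seed pixel
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and labels[ny][nx] == -1
                            and abs(img[ny][nx] - img[y][x]) <= thresh):
                        labels[ny][nx] = next_label
                        q.append((ny, nx))
            next_label += 1
    return labels

# A bright stripe (e.g., a highlight) across one physical "object" splits it:
img = [[10, 10, 90, 10, 10]]
print(segment_by_color(img, thresh=20))  # -> [[0, 0, 1, 2, 2]]
```

No choice of `thresh` repairs this: lowering it fragments textured surfaces further, while raising it merges distinct objects of similar color, matching the generic merge/split failures the caption reports.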

Mentions: One example scene, recorded with an RGB-D sensor ("Kinect") that produces 3D point-cloud data, is shown in Figure 2A. All other scenes are of comparable complexity (Figure 3). Using an advanced, model-free, color-based segmentation method (Ben Salah et al., 2011), one can see that the resulting image segments rarely correspond to objects in the scene (Figure 2B); the outcome is also highly dependent on illumination (see Figure 3, middle). Unwanted merging or splitting of objects occurs generically, regardless of the chosen segmentation parameters (e.g., "throat+face," "fridge-fragments," etc.; Figure 2B).
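The alternative pursued in this study segments 3D point-cloud data along convex-to-concave surface transitions. A common geometric criterion for such transitions (a sketch under that assumption; the paper's own implementation is not reproduced here) compares the surface normals of two adjacent patches along the line connecting their centroids: the edge between them is convex when the normals "open away" from each other, i.e., (n1 - n2) · (c1 - c2) > 0:

```python
# Sketch: classify the transition between two adjacent surface patches as
# convex, concave, or flat from their centroids (c1, c2) and unit normals
# (n1, n2), using the criterion (n1 - n2) . (c1 - c2) > 0 for convexity.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def transition(c1, n1, c2, n2, eps=1e-6):
    """Return 'convex', 'concave', or 'flat' for a pair of adjacent patches."""
    d = dot(sub(n1, n2), sub(c1, c2))
    if d > eps:
        return "convex"
    if d < -eps:
        return "concave"
    return "flat"

# Two outer faces meeting at a box corner: the normals open away -> convex.
print(transition((1, 0, 0), (1, 0, 0), (0, 0, 1), (0, 0, 1)))    # convex
# The inside of the same corner (normals point into the wedge) -> concave.
print(transition((1, 0, 0), (-1, 0, 0), (0, 0, 1), (0, 0, -1)))  # concave
```

Grouping patches that are pairwise connected by convex (or flat) transitions, and cutting at concave ones, yields the kind of convex segments that participants in Experiment 2 were asked to name.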

