On event-based optical flow detection.

Brosch T, Tschechne S, Neumann H - Front Neurosci (2015)

Bottom Line: Furthermore, a stage of surround normalization is incorporated. Together with the filtering, this defines a canonical circuit for motion feature detection. The theoretical analysis shows that such an integrated circuit reduces motion ambiguity in addition to decorrelating the representation of motion-related activations.

View Article: PubMed Central - PubMed

Affiliation: Faculty of Engineering and Computer Science, Institute of Neural Information Processing, Ulm University, Ulm, Germany.

ABSTRACT
Event-based sensing, i.e., the asynchronous detection of luminance changes, promises low-energy, high-dynamic-range, and sparse sensing. This stands in contrast to whole-image frame-wise acquisition by standard cameras. Here, we systematically investigate the implications of event-based sensing in the context of visual motion, or flow, estimation. Starting from a common theoretical foundation, we discuss different principal approaches for optical flow detection, ranging from gradient-based methods over plane-fitting to filter-based methods, and identify strengths and weaknesses of each class. Gradient-based methods for local motion integration are shown to suffer from the sparse encoding in address-event representations (AER). Approaches exploiting the local plane-like structure of the event cloud, on the other hand, are shown to be well suited. Within this class, filter-based approaches are shown to define a proper detection scheme which can also deal with the problem of representing multiple motions at a single location (motion transparency). A novel biologically inspired efficient motion detector is proposed, analyzed, and experimentally validated. Furthermore, a stage of surround normalization is incorporated. Together with the filtering, this defines a canonical circuit for motion feature detection. The theoretical analysis shows that such an integrated circuit reduces motion ambiguity in addition to decorrelating the representation of motion-related activations.
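The plane-fitting class of methods discussed above can be sketched in a few lines: each DVS event is a point (x, y, t), and a locally translating edge traces out a plane in this space whose spatial gradient of t encodes the inverse speed. The sketch below is illustrative only — it is not the detector proposed in this paper, and the function name is invented for this example. It fits the plane t = a·x + b·y + c by least squares and recovers velocity as ∇t / ‖∇t‖².

```python
import numpy as np

def fit_plane_flow(events):
    """Estimate local optical flow by fitting a plane t = a*x + b*y + c to an
    (x, y, t) event cloud, as in plane-fitting approaches to event-based flow.
    Returns (vx, vy) in pixels per time unit.  Illustrative sketch only."""
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, t, rcond=None)
    g2 = a * a + b * b          # squared norm of the spatial gradient of t
    if g2 < 1e-12:              # flat plane in t: no detectable motion
        return 0.0, 0.0
    return a / g2, b / g2       # velocity = grad(t) / |grad(t)|^2

# Events generated by a single edge moving rightward at 2 px per time unit:
t = np.linspace(0.0, 1.0, 50)
events = np.column_stack([2.0 * t, np.zeros_like(t), t])
vx, vy = fit_plane_flow(events)   # recovers approximately (2.0, 0.0)
```

Note that this is exactly the scheme that breaks under motion transparency: a single plane can only represent one velocity per location, which is why the filter-based approach is preferred for that case.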

No MeSH data available.


Related in: MedlinePlus

© Copyright Policy - open-access


Figure 10: Encoding of motion transparency in the proposed model. (A) Illustration of a single spatio-temporal filter (surfaces indicate F = ±0.0005 for red/blue). Note that this filter resembles measurements of, for example, the cat's striate cortex (e.g., DeAngelis et al., 1995, their Figure 2B). (B) Illustration of the preferred frequencies (surfaces indicate |F| = 0.1) of four filters of a filter bank in the Fourier domain [the red pair of ellipsoids corresponds to the Fourier spectrum of the filter shown in (A)]. Note that the combined ellipsoids sample the frequency space, with each pair responding to a certain speed and motion direction. (C) Stimulus consisting of a random dot pattern with dots moving to the right or to the top at equal speeds. (D) Motion histogram of filter responses. While it is not possible to fit a plane to the resulting event cloud, the proposed filter-based approach encodes both movement directions.

Mentions: Unlike the motion of opaque surfaces, transparent motion is perceived when multiple motions are presented in the same part of visual space. Few computational model mechanisms have been proposed in the literature that allow segregating multiple motions (see, e.g., Raudies and Neumann, 2010; Raudies et al., 2011, which include recent overviews). All such model approaches are based on frame-based inputs. For that reason, we investigate how transparent motion induced by random dot patterns moving in different directions is represented in event clouds originating from DVSs. In general, filter-based mechanisms are able to encode estimated motions for multiple directions at a single location. In contrast, without additional knowledge it is not possible to fit a plane at positions where two (or more) event clouds, generated by, for example, two crossing pedestrians, intersect. The filter mechanisms proposed in this work naturally encode motion directions within the uncertainty of the integration fields (cf. Figures 10A,B). In order to build such a filter bank, the frequency space in Figure 10B needs to be sampled properly, in accordance with the theoretical analysis outlined in Section 2 (cf. Table 1).

