On event-based optical flow detection.

Brosch T, Tschechne S, Neumann H - Front Neurosci (2015)

Bottom Line: Furthermore, a stage of surround normalization is incorporated. Together with the filtering, this defines a canonical circuit for motion feature detection. The theoretical analysis shows that such an integrated circuit reduces motion ambiguity in addition to decorrelating the representation of motion-related activations.


Affiliation: Faculty of Engineering and Computer Science, Institute of Neural Information Processing, Ulm University, Ulm, Germany.

ABSTRACT
Event-based sensing, i.e., the asynchronous detection of luminance changes, promises low energy consumption, high dynamic range, and sparse sensing. This stands in contrast to the frame-wise acquisition of whole images by standard cameras. Here, we systematically investigate the implications of event-based sensing in the context of visual motion, or flow, estimation. Starting from a common theoretical foundation, we discuss different principal approaches for optical flow detection, ranging from gradient-based methods through plane-fitting to filter-based methods, and identify the strengths and weaknesses of each class. Gradient-based methods for local motion integration are shown to suffer from the sparse encoding in address-event representations (AER). Approaches exploiting the local plane-like structure of the event cloud, on the other hand, are shown to be well suited. Within this class, filter-based approaches are shown to define a proper detection scheme which can also deal with the problem of representing multiple motions at a single location (motion transparency). A novel, biologically inspired, efficient motion detector is proposed, analyzed, and experimentally validated. Furthermore, a stage of surround normalization is incorporated. Together with the filtering, this defines a canonical circuit for motion feature detection. The theoretical analysis shows that such an integrated circuit reduces motion ambiguity in addition to decorrelating the representation of motion-related activations.



Figure 3: Rightward moving 1D edge illustrated in the x–t-domain. The velocity is defined by the direction and the speed of the spatio-temporal change. In the case depicted here, the direction is to the right and the speed is encoded by the angle θ between the x-axis and the normal vector n along the spatio-temporal gradient direction (measured in counter-clockwise rotation). Alternatively, for a contrast edge of known finite transition width Δx, the speed can be inferred from the time Δt it takes the contrast edge to pass a specific location on the x-axis.
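The two speed readouts described in the caption can be made concrete with a minimal sketch (Python; this is illustrative, not the authors' code, and the test values are assumptions): speed as tan θ from the spatio-temporal gradient angle, and speed as Δx/Δt from a known transition width and passage time.

import numpy as np

# Minimal sketch: the two speed readouts described in the Figure 3
# caption for a 1D edge in the x-t-domain.

def speed_from_angle(theta_rad):
    # Speed from the angle theta between the x-axis and the normal n
    # along the spatio-temporal gradient: s = sin(theta)/cos(theta).
    return np.tan(theta_rad)

def speed_from_transition(delta_x, delta_t):
    # Speed from a known finite transition width delta_x and the time
    # delta_t the edge needs to pass a fixed location on the x-axis.
    return delta_x / delta_t

theta = np.deg2rad(30.0)                       # illustrative angle
s1 = speed_from_angle(theta)                   # ~0.577 space units per time unit
s2 = speed_from_transition(s1 * 2.0, 2.0)      # a width/time pair with the same speed
print(s1, s2)

Both readouts agree for an ideal edge; the Δx/Δt form is the one that maps naturally onto event timestamps, since an event-based sensor reports exactly when the contrast transition passes a pixel.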

Mentions: When this gray-level transition moves through the origin at time t = 0 it generates a slanted line with normal n in the x–t-space (cf. Figure 3). The speed s of the moving contrast edge is given by s = sin(θ)/cos(θ) = tan(θ), where θ is the angle between n and the x-axis (this is identical to the angle between the edge tangent and the t-axis). For a stationary gray-level edge (zero speed) we get θ = 0 (i.e., the edge generated by the dark-light (DL) transition in the x–t-domain is located on the t-axis). Positive angles θ ∈ (0°, 90°) (measured in counterclockwise direction) define leftward motion, while negative angles define rightward motion. For illustrative purposes, we consider a DL contrast that is moving to the right (cf. Figure 3). The spatio-temporal gradient is maximal along the normal direction n = (cos θ, sin θ)ᵀ. The function g(x; t) describing the resulting space-time picture of the movement in the x–t-space is thus given as

g_σ^θ(x; t) = (c / (√(2π) σ)) ∫_{−∞}^{x⊥} exp(−ξ² / (2σ²)) dξ,

with x⊥ = x · cos θ − t · sin θ. The respective partial temporal and spatial derivatives are given as

(5) ∂/∂t g_σ^θ(x; t) = −(c / (√(2π) σ)) · exp(−x⊥² / (2σ²)) · sin θ,

(6) ∂/∂x g_σ^θ(x; t) = (c / (√(2π) σ)) · exp(−x⊥² / (2σ²)) · cos θ.
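Assuming the cumulative-Gaussian edge profile given above, a short sketch (Python; names and test values are my own assumptions) can check that the partials (5) and (6) indeed encode the speed: their ratio −(∂g/∂t)/(∂g/∂x) equals sin θ/cos θ = tan θ, independent of position, contrast c, and width σ.

import numpy as np

# Sketch under the edge model above: evaluate the analytic partial
# derivatives (5) and (6) and recover the edge speed as
# s = -g_t / g_x = tan(theta).

def edge_partials(x, t, theta, sigma=1.0, c=1.0):
    # Partial derivatives of g_sigma^theta at (x, t) for a smoothed
    # dark-light edge with transition scale sigma and contrast c.
    x_perp = x * np.cos(theta) - t * np.sin(theta)
    gauss = (c / (np.sqrt(2.0 * np.pi) * sigma)
             * np.exp(-x_perp**2 / (2.0 * sigma**2)))
    g_t = -gauss * np.sin(theta)   # Equation (5)
    g_x = gauss * np.cos(theta)    # Equation (6)
    return g_t, g_x

theta = np.deg2rad(25.0)
g_t, g_x = edge_partials(x=0.3, t=0.1, theta=theta)
print(np.isclose(-g_t / g_x, np.tan(theta)))  # True: s = tan(theta)

The common Gaussian factor cancels in the ratio, which is why gradient-based flow estimation needs only the direction, not the magnitude, of the spatio-temporal gradient.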

