Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades.

Orchard G, Jayawant A, Cohen GK, Thakor N - Front Neurosci (2015)

Bottom Line: We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches.


Affiliation: Singapore Institute for Neurotechnology (SINAPSE), National University of Singapore, Singapore; Temasek Labs, National University of Singapore, Singapore.

ABSTRACT
Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collected and labeled from existing sources. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches.
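The conversion loop the abstract outlines (display a static image on a monitor, saccade the sensor over it, record the resulting events) can be summarized in code. The sketch below is a minimal Python illustration, not the authors' implementation: display_image, command_saccade, and read_events are hypothetical stubs standing in for the monitor, pan-tilt, and ATIS interfaces, and Event is an illustrative record type.

```python
from typing import Iterable, List, NamedTuple

class Event(NamedTuple):
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # 1 = brightness increase, 0 = decrease
    t: int         # timestamp, e.g., in microseconds

def display_image(image) -> None:
    """Hypothetical stub: present a static image on the LCD monitor."""
    pass

def command_saccade(target) -> None:
    """Hypothetical stub: issue one pan-tilt motor command."""
    pass

def read_events() -> List[Event]:
    """Hypothetical stub: read events the ATIS produced during the motion."""
    return []

def convert_image(image, trajectory: Iterable) -> List[Event]:
    """Record one event stream by saccading the sensor over a displayed image."""
    display_image(image)
    events: List[Event] = []
    for target in trajectory:      # move the sensor, not the image
        command_saccade(target)
        events.extend(read_events())
    return events
```

The key point the abstract makes is visible in the loop: the image stays static while the sensor moves, so all events arise from genuine sensor motion rather than monitor refresh artifacts.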



Figure 2: (A) A picture of the ATIS mounted on the pan-tilt unit used in the conversion system. (B) The ATIS positioned to view the LCD monitor.

Mentions: Our conversion system relies on the Asynchronous Time-based Image Sensor (ATIS; Posch et al., 2011) for recording. To control motion of the ATIS, we constructed our own pan-tilt mechanism, shown in Figure 2. The mechanism consists of two Dynamixel MX-28 motors connected using a bracket. Each motor allows programming of a target position, speed, and acceleration. A custom housing for the ATIS, including a lens mount and a connection to the pan-tilt mechanism, was 3D printed. The motors themselves sit on a 3D-printed platform that places the middle of the sensor at a height of 19 cm, high enough to line up with the vertical center of the monitor when the monitor is adjusted to its lowest possible position. The motors interface directly, over a differential pair, to an Opal Kelly XEM6010 board containing a Xilinx Spartan-6 LX150 Field Programmable Gate Array (FPGA). The Opal Kelly board also serves as an interface between the ATIS and the host PC. Whenever a motor command is executed, the FPGA inserts a marker into the event stream from the ATIS to indicate the time at which the motor command was executed. The entire sensor setup was placed at a distance of 23 cm from the monitor and enclosed in a cupboard to attenuate the effects of changing ambient light. A Computar M1214-MP2 2/3″ 12 mm f/1.4 lens was used.
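The FPGA's marker insertion amounts to time-ordering motor-command markers into the sensor's event stream, so that a reader of the recording can tell which events were produced during each movement. Below is a minimal software analogue of that behavior, assuming both streams are already sorted by timestamp; Event and merge_markers are illustrative names, not the authors' code or file format.

```python
import heapq
from typing import List, NamedTuple

class Event(NamedTuple):
    t: int          # timestamp in microseconds
    kind: str       # "atis" for sensor events, "marker" for motor commands
    payload: tuple  # (x, y, polarity) for ATIS events, (pan, tilt) for markers

def merge_markers(atis_events: List[Event], motor_markers: List[Event]) -> List[Event]:
    """Merge motor-command markers into the ATIS event stream by timestamp,
    mirroring what the FPGA does in hardware."""
    return list(heapq.merge(atis_events, motor_markers, key=lambda e: e.t))

# Example: two sensor events straddling a motor command issued at t = 1000 us.
stream = merge_markers(
    [Event(990, "atis", (12, 34, 1)), Event(1010, "atis", (12, 35, 0))],
    [Event(1000, "marker", (45, 0))],
)
```

Because both streams are timestamp-sorted, a single merge pass is enough to segment the recording into per-saccade intervals downstream.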

