An Embedded AER Dynamic Vision Sensor
for Low-Latency Pole Balancing
Jörg Conradt, Tobi Delbruck, Matthew Cook
{conradt, tobi, cook}@ini.phys.ethz.ch
Institute of Neuroinformatics, ETH / University Zurich
Winterthurerstrasse 190, CH-8057 Zürich
Animals far outperform current technology when reacting to visual stimuli, demonstrating astonishingly fast reaction times at remarkably low processing cost. Current real-time vision-based robotic control approaches, in contrast, typically require substantial computational resources to extract relevant information from the sequences of images provided by a video camera. Most of the information in consecutive images is redundant, which often makes the vision processing algorithms the limiting factor in high-speed robot control. As an example, robotic balancing of large poles is a well-known exercise in current robotics research, but balancing arbitrarily small poles (such as a pencil, which is too small for a human to balance) has not yet been achieved, due to limitations in vision processing.
At the Institute of Neuroinformatics we developed an analog silicon retina (http://siliconretina.ini.uzh.ch), which, in contrast to conventional video cameras, reports only individual events ("spikes") from pixels whose illumination changes within their field of view. Transmitting only the "on" and "off" spike events, instead of full image frames, drastically reduces the amount of data processing required to react to environmental changes. This information encoding is directly inspired by the spike-based information transfer from the human eye to visual cortex.
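To make the event representation concrete, the sketch below shows one way such address-events could be represented and decoded on the receiving processor. The bit layout and names are illustrative assumptions, not the sensor's actual AER wire format:

```c
/* Minimal sketch of decoding an address-event ("spike") stream.
   The 16-bit address layout below is a hypothetical example, not
   the silicon retina's actual AER format. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint8_t  x, y;       /* pixel coordinates of the event         */
    bool     on;         /* true = "on" (brighter), false = "off"  */
    uint32_t timestamp;  /* time of the event in microseconds      */
} dvs_event;

/* Unpack one raw address-event word (assumed layout:
   bit 0 = polarity, bits 1-7 = x, bits 8-14 = y). */
static dvs_event decode_event(uint16_t address, uint32_t timestamp)
{
    dvs_event e;
    e.on        = (address & 0x01) != 0;
    e.x         = (uint8_t)((address >> 1) & 0x7F);
    e.y         = (uint8_t)((address >> 8) & 0x7F);
    e.timestamp = timestamp;
    return e;
}
```

Because each event is a few bytes and arrives only when a pixel sees change, the processor handles kilobytes per second of relevant data rather than megabytes per second of mostly redundant frames.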
In our demonstration, we address the challenging problem of balancing an arbitrary standard pencil, based solely on visual information. A stereo pair of silicon retinas reports vision events caused by the moving pencil, which is standing on its tip on an actuated table. Then our processing algorithm extracts the pencil position and angle without ever using a "full scene" visual representation, but simply by processing only the spikes relevant to the pencil's motion.
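As an illustration of event-driven tracking (a sketch of the general idea, not the algorithm from our paper), each incoming event can nudge a set of exponentially decayed image moments, from which the pencil's centroid and tilt angle follow in closed form, with no frame ever being assembled:

```c
/* Sketch of per-event line tracking via decayed image moments.
   All names and the decay scheme are illustrative assumptions. */
#include <math.h>

typedef struct {
    double s, sx, sy, sxx, syy, sxy; /* decayed 0th/1st/2nd moments */
    double decay;                    /* forgetting factor, e.g. 0.995 */
} line_tracker;

/* Fold one event at pixel (x, y) into the moment estimates. */
static void tracker_update(line_tracker *t, double x, double y)
{
    double d = t->decay;
    t->s   = d * t->s   + 1.0;
    t->sx  = d * t->sx  + x;
    t->sy  = d * t->sy  + y;
    t->sxx = d * t->sxx + x * x;
    t->syy = d * t->syy + y * y;
    t->sxy = d * t->sxy + x * y;
}

/* Centroid and orientation (radians from horizontal) of the event
   cloud, read from the principal axis of its covariance matrix.
   Assumes at least one event has been processed (t->s > 0). */
static void tracker_estimate(const line_tracker *t,
                             double *cx, double *cy, double *angle)
{
    double mx  = t->sx / t->s,       my  = t->sy / t->s;
    double cxx = t->sxx / t->s - mx * mx;
    double cyy = t->syy / t->s - my * my;
    double cxy = t->sxy / t->s - mx * my;
    *cx = mx;
    *cy = my;
    *angle = 0.5 * atan2(2.0 * cxy, cxx - cyy);
}
```

The update costs a handful of multiply-adds per event, so the estimate can be refreshed at the full event rate, which is what makes sub-millisecond reaction to the pencil's motion feasible.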
Our system uses neurally inspired hardware and a neurally inspired form of communication to achieve a difficult goal. Thus, it is truly a Neural Information Processing System.
More details can be found in our ISCAS paper and our NIPS poster.
Demonstration video (balancing): available on YouTube.
Demonstration video (changing background): available on YouTube.