PRED18: Dataset and Further Experiments with DAVIS Event Camera in Predator-Prey Robot Chasing

by Diederik Paul Moeys, et al.

Machine vision systems using convolutional neural networks (CNNs) for robotic applications are increasingly being developed. Conventional vision CNNs are driven by camera frames at a constant sample rate, resulting in a fixed tradeoff between latency and power consumption. This paper describes further work on the first experiments with a closed-loop robotic system that integrates a CNN with a Dynamic and Active Pixel Vision Sensor (DAVIS) in a predator/prey scenario. The DAVIS, mounted on the predator Summit XL robot, produces frames at a fixed 15 Hz frame rate and Dynamic Vision Sensor (DVS) histograms, each containing 5k ON and OFF events, at a variable frame rate ranging from 15 to 500 Hz depending on the robot speeds. In contrast to conventional frame-based systems, the latency and processing cost depend on the rate of change of the image. The CNN is trained offline on the 1.25 h labeled dataset to recognize the position and size of the prey robot in the predator's field of view. During inference, combining the ten output classes of the CNN allows extracting the analog position vector of the prey relative to the predator with a mean 8.7 estimation error. The system is compatible with conventional deep learning technology, but achieves a variable latency-power tradeoff that adapts automatically to the scene dynamics. Finally, the robustness of the algorithm is investigated, and a human performance comparison and a deconvolution analysis are presented.
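The key idea behind the variable frame rate is that a DVS "frame" is emitted after a fixed count of events rather than on a fixed clock, so a fast-moving scene produces frames more often than a static one. A minimal sketch of this fixed-count accumulation is given below; the function name, sensor resolution, and event layout are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def accumulate_dvs_histograms(events, sensor_shape=(180, 240), frame_size=5000):
    """Illustrative sketch (not the paper's code): bin a stream of DVS
    events into 2D histograms with a fixed event count per frame.

    `events` is an (N, 3) array of (x, y, polarity) rows, polarity 0 (OFF)
    or 1 (ON). A frame is emitted every `frame_size` events, so the frame
    rate scales with scene activity instead of a fixed sample clock.
    Returns a list of (2, H, W) histograms (one channel per polarity).
    """
    frames = []
    hist = np.zeros((2, *sensor_shape), dtype=np.uint16)
    count = 0
    for x, y, pol in events:
        hist[int(pol), int(y), int(x)] += 1  # bin event at its pixel
        count += 1
        if count == frame_size:              # fixed event count reached
            frames.append(hist)
            hist = np.zeros((2, *sensor_shape), dtype=np.uint16)
            count = 0
    return frames
```

With this scheme, the time span covered by each histogram shrinks as the robots move faster, which is what yields the 15-500 Hz effective frame-rate range described in the abstract.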



Steering a Predator Robot using a Mixed Frame/Event-Driven Convolutional Neural Network


ColibriUAV: An Ultra-Fast, Energy-Efficient Neuromorphic Edge Processing UAV-Platform with Event-Based and Frame-Based Cameras


Dynamic Vision Sensor integration on FPGA-based CNN accelerators for high-speed visual classification


DDD20 End-to-End Event Camera Driving Dataset: Fusing Frames and Events with Deep Learning for Improved Steering Prediction


Event-based Agile Object Catching with a Quadrupedal Robot


Dynamic Vision Sensors for Human Activity Recognition


Bringing A Robot Simulator to the SCAMP Vision System

