Motion segmentation and perception

ED sensing allows the observation of the full trajectory of a moving object. This capability can be exploited to improve the behaviour of robots during manipulation and grasping, as well as during interaction with persons and objects in collaborative tasks. Despite the inherent segmentation of moving objects and the spatio-temporal information available from the ED sensory stream, in a robotic scenario, where the robot moves in a cluttered environment, a large number of events arise from the edges in the scene. We are developing methods to robustly select a salient target (using stimulus-driven models of selective attention) and track it with probabilistic filtering in the event-space, as well as methods to compute the motion of objects and discount events due to ego-motion.


Independent Motion Detection – Valentina Vasco

We want the neuromorphic iCub to interact with a dynamic environment in which humans, objects and the robot itself simultaneously move. Perceiving the motion of a target is crucial for a successful interaction, for example for avoidance or for attentional interaction with human collaborators handling objects, and event-driven vision offers a rich source of information for developing fast and low-power vision algorithms for robots in dynamic environments. Visual tracking is easily achieved with a stationary event camera, as only the motion of the target causes events to be produced. However, the movement of the robot causes events to be generated by all contrast in the field of view, and identifying the motion of a target becomes more difficult. My goal is to segment events caused by ego-motion from those caused by independent object motion. Such a technique has wide applicability for event-driven algorithms in robotics. For example, identifying independent object motion makes detection and tracking of moving objects simple. Alternatively, segmentation of ego-motion events can be used to remove outliers and improve event-based visual odometry methods in dynamic scenes. The proposed method detects and tracks corners in the event stream and learns the statistics of their motion as a function of the robot’s joint velocities when no independently moving objects are present. During robot operation, independently moving objects are identified by discrepancies between the corner velocities predicted from ego-motion and the measured corner velocities. The method segments independently moving corner events with a precision higher than 90%, robustly across changing speeds of the target and the robot and across different objects.
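As a rough illustration of this idea only, the Python sketch below uses a simple linear model as a stand-in for the learned motion statistics: it predicts corner velocities from the robot's joint velocities and flags corners whose measured velocity deviates beyond a threshold. All names, the linear model and the threshold are assumptions, not the actual implementation.

import numpy as np

class EgoMotionModel:
    """Linear map from robot joint velocities to expected (ego-motion) corner
    velocities; a simplified stand-in for the learned motion statistics."""

    def __init__(self):
        self.W = None

    def fit(self, joint_vels, corner_vels):
        # joint_vels:  (N, J) joint velocities recorded with a static scene
        # corner_vels: (N, 2) measured corner velocities (vx, vy) in px/s
        X = np.hstack([joint_vels, np.ones((len(joint_vels), 1))])  # add bias term
        self.W, *_ = np.linalg.lstsq(X, corner_vels, rcond=None)

    def predict(self, joint_vel):
        x = np.append(joint_vel, 1.0)
        return x @ self.W

def is_independent(model, joint_vel, measured_vel, threshold_px_s=5.0):
    # A corner is labelled "independently moving" when its measured velocity
    # deviates from the ego-motion prediction by more than the threshold.
    residual = np.linalg.norm(np.asarray(measured_vel) - model.predict(joint_vel))
    return residual > threshold_px_s

In the real setting the ego-motion statistics would also depend on the corner's image position and depth; the sketch deliberately omits this to keep the discrepancy test visible.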



Object Detection and Attention – Massimiliano Iacono

This project aims at implementing attention and object detection on the iCub robot using the embedded event-driven sensors. The main goal is the fast computation of the attention point used to focus the robot’s gaze in a human-like fashion. This can be useful in HRI scenarios where the human requires the robot to focus on something, for instance to show the robot an object to recognise or to grasp. Our setup includes a prototypical sensor embedding both traditional and event-driven cameras. Exploiting this setup, a fast attention system based on the event stream can trigger a recognition algorithm on the frame cameras or, vice versa, once something is detected in the RGB domain, a tracking algorithm can run on the event-driven sensor. We are currently investigating the best techniques to tackle these problems, taking advantage of the dual-camera setup. In a recent work we used deep learning to recognise objects within the RGB images and used their bounding boxes as ground truth for the events. The dataset collected in this way can then be used to train a convolutional neural network on the event-based data, so that objects can be recognised using solely the event camera. In the future we want to build upon this and study techniques more suited to the event stream, such as spiking neural networks, to take into account temporal information and build visual pipelines with extremely short latency.
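A minimal sketch of such a labelling pipeline is given below, under the assumption that a fixed homography H relating the RGB and event cameras is available from calibration; the function names, sensor resolution and homography-based transfer are illustrative choices, not the project's actual implementation.

import numpy as np

def events_to_frame(events, shape=(240, 304), window=0.03, t_ref=None):
    """Accumulate events (t, x, y, polarity) from a short time window into a
    2D count image that a conventional CNN can consume."""
    t_ref = events[-1, 0] if t_ref is None else t_ref
    recent = events[events[:, 0] > t_ref - window]
    frame = np.zeros(shape, dtype=np.float32)
    np.add.at(frame, (recent[:, 2].astype(int), recent[:, 1].astype(int)), 1.0)
    return frame

def transfer_bbox(bbox_rgb, H):
    """Map an RGB-space bounding box (x0, y0, x1, y1) into event-camera
    coordinates with a homography H (assumed known from calibration)."""
    corners = np.array([[bbox_rgb[0], bbox_rgb[1], 1.0],
                        [bbox_rgb[2], bbox_rgb[3], 1.0]])
    mapped = (H @ corners.T).T
    mapped /= mapped[:, 2:3]
    return mapped[:, :2].astype(int).ravel()  # (x0, y0, x1, y1)

def make_labelled_sample(events, bbox_rgb, label, H):
    # The detector's RGB bounding box becomes the ground-truth crop of the
    # event frame; pairs (crop, label) are accumulated into the training set.
    frame = events_to_frame(events)
    x0, y0, x1, y1 = transfer_bbox(bbox_rgb, H)
    return frame[y0:y1, x0:x1], label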



Event-Driven Tracking – Arren Glover

The “object tracking” research directive aims to produce robust solutions to event-based tracking specifically for robotic scenarios, in which robot movement results in the movement of the entire scene rather than of a single object. The vision algorithms are tightly integrated with the control of the robot to produce smooth reactions to the current and future positions of the target while it is in motion. This project complements the attention and independent motion detection projects, to form a complete and robust solution for understanding scene dynamics with event-driven perception for robotics.
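Purely as an illustration of the coupling between tracking and control (not the project's actual tracker), the sketch below keeps a smoothed constant-velocity estimate of the target position from incoming events and extrapolates it ahead of an assumed control latency, so the gaze controller is driven by the predicted rather than the last observed position.

import numpy as np

class ConstantVelocityTracker:
    """Smoothed position/velocity estimate of the target, updated from the
    positions of events attributed to the target."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha        # smoothing factor for event updates
        self.pos = None           # estimated target position (px)
        self.vel = np.zeros(2)    # estimated target velocity (px/s)
        self.last_t = None

    def update(self, t, event_xy):
        xy = np.asarray(event_xy, dtype=float)
        if self.pos is None:
            self.pos, self.last_t = xy, t
            return
        dt = max(t - self.last_t, 1e-6)
        inst_vel = (xy - self.pos) / dt
        self.vel = (1 - self.alpha) * self.vel + self.alpha * inst_vel
        self.pos = (1 - self.alpha) * self.pos + self.alpha * xy
        self.last_t = t

    def predict(self, latency=0.05):
        # Aim the gaze where the target will be after the control latency,
        # not where it was last seen.
        return self.pos + self.vel * latency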



Dynamic Affordances and Control – Marco Monforte

Grasping and manipulation actions are prone to errors caused by imprecise perception and control. In humans, such errors are counterbalanced by adaptive behaviours that build on knowledge of the results of actions and on the early detection of failures, which allows for the execution of corrective actions. The goal of this project is to exploit machine learning to learn the dynamics/behaviours of objects during clumsy or imprecise manipulation and grasping, and to use such learned behaviours to plan and perform corrective actions. A bad grasp will cause the object to fall in specific ways; the robot can then learn the association between specific action parameters (for example hand pre-shaping, direction of grasping, object 3D configuration, etc.) and the trajectory of the falling object, effectively learning its “falling affordances”. The robot can then use this information to better plan the next grasping action on the same object and to perform corrective actions.
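As a purely illustrative sketch of how such an association could be stored and queried (the k-nearest-neighbour model, the trajectory descriptor and all names below are assumptions, not the project's method):

import numpy as np

class FallingAffordanceModel:
    """Stores (grasp parameters, observed falling-trajectory descriptor) pairs
    and predicts the outcome of new grasp candidates by k-nearest neighbours."""

    def __init__(self, k=3):
        self.k = k
        self.params, self.outcomes = [], []

    def add_trial(self, grasp_params, trajectory_descriptor):
        # One grasp attempt: action parameters and a fixed-length descriptor
        # of the object's trajectory as observed by the event camera.
        self.params.append(np.asarray(grasp_params, dtype=float))
        self.outcomes.append(np.asarray(trajectory_descriptor, dtype=float))

    def predict(self, grasp_params):
        # Average the outcomes of the k most similar past grasps.
        P = np.stack(self.params)
        d = np.linalg.norm(P - np.asarray(grasp_params, dtype=float), axis=1)
        nearest = np.argsort(d)[:self.k]
        return np.stack(self.outcomes)[nearest].mean(axis=0)

    def best_grasp(self, candidates, stable_outcome):
        # Choose the candidate grasp whose predicted outcome is closest to the
        # desired stable outcome (e.g. zero post-grasp displacement).
        errors = [np.linalg.norm(self.predict(c) - stable_outcome) for c in candidates]
        return candidates[int(np.argmin(errors))]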