Figure 1: Our method segments a set of events produced by an event-based camera (Left, with a color image of the scene for illustration) into the different moving objects causing them (Right: pedestrian, cyclist, and the camera's ego-motion, shown in color). We propose an iterative clustering algorithm (Middle block) that jointly estimates the motion parameters θ and the event-cluster membership probabilities P that best explain the scene, yielding motion-compensated event images for all clusters (Right).
Abstract
In contrast to traditional cameras, whose pixels have a common exposure time, event-based cameras are novel bio-inspired sensors whose pixels work independently and asynchronously output intensity changes (called "events") with microsecond resolution. Since events are caused by the apparent motion of objects, event-based cameras sample visual information based on the scene dynamics and are therefore a more natural fit than traditional cameras to acquire motion, especially at high speeds, where traditional cameras suffer from motion blur. However, distinguishing between events caused by different moving objects and those caused by the camera's ego-motion is a challenging task. We present the first per-event segmentation method for splitting a scene into independently moving objects. Our method jointly estimates the event-object associations (i.e., the segmentation) and the motion parameters of the objects (or the background) by maximizing an objective function that builds upon recent results on event-based motion compensation. We provide a thorough evaluation of our method on a public dataset, outperforming the state of the art by as much as 10%. We also show the first quantitative evaluation of a segmentation algorithm for event cameras, yielding around 90% accuracy at 4 pixels relative displacement.
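To make the alternating scheme of Figure 1 concrete, the following is a minimal sketch (not the authors' implementation) of joint estimation via motion compensation, assuming a simple 2D constant-velocity warp per cluster. The symbols theta and P follow the figure caption; all function names (warp_events, iwe, segment_events) and parameter choices are illustrative. Each iteration sharpens every cluster's probability-weighted image of warped events (the theta-step) and then redistributes event responsibilities according to how well each motion model explains each event (the P-step).

    import numpy as np

    def warp_events(xy, t, t_ref, theta):
        # Transport each event (x, y, t) to the reference time under a
        # constant image-plane velocity theta (pixels per second).
        return xy - (t - t_ref)[:, None] * theta[None, :]

    def iwe(xy_warped, weights, shape):
        # Image of warped events: accumulate per-event weights at the nearest pixel.
        img = np.zeros(shape)
        ix = np.clip(np.rint(xy_warped[:, 0]).astype(int), 0, shape[1] - 1)
        iy = np.clip(np.rint(xy_warped[:, 1]).astype(int), 0, shape[0] - 1)
        np.add.at(img, (iy, ix), weights)
        return img

    def segment_events(xy, t, n_clusters, shape, n_iters=10):
        # Alternate between updating motion parameters theta and
        # event-cluster membership probabilities P.
        rng = np.random.default_rng(seed=0)
        n, t_ref = len(t), t.min()
        P = rng.dirichlet(np.ones(n_clusters), size=n)       # soft assignments, rows sum to 1
        theta = rng.normal(0.0, 20.0, size=(n_clusters, 2))  # one 2D velocity per cluster
        offsets = np.stack(np.meshgrid(np.linspace(-5, 5, 5),
                                       np.linspace(-5, 5, 5)), -1).reshape(-1, 2)
        for _ in range(n_iters):
            support = np.empty((n, n_clusters))
            for j in range(n_clusters):
                # theta-step: local grid search for the velocity whose weighted IWE
                # is sharpest, using image variance as the contrast objective.
                scores = [np.var(iwe(warp_events(xy, t, t_ref, theta[j] + d),
                                     P[:, j], shape))
                          for d in offsets]
                theta[j] += offsets[int(np.argmax(scores))]
                # Per-event support under model j: the IWE value at the event's
                # warped location, so events landing on well-compensated edges score high.
                w = warp_events(xy, t, t_ref, theta[j])
                img = iwe(w, P[:, j], shape)
                ix = np.clip(np.rint(w[:, 0]).astype(int), 0, shape[1] - 1)
                iy = np.clip(np.rint(w[:, 1]).astype(int), 0, shape[0] - 1)
                support[:, j] = img[iy, ix]
            # P-step: normalize supports across clusters into membership probabilities.
            P = support / np.maximum(support.sum(axis=1, keepdims=True), 1e-9)
        return theta, P

The full method is not limited to constant-velocity warps and optimizes its objective more carefully than this grid search; the sketch is only meant to convey the alternation between theta and P.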
Supplementary Material
Accompanying video: https://youtu.be/0q6ap_OSBAk. We encourage the reader to view the additional experiments and theory in the supplementary material.