Geometric vision for event cameras

Regular cameras suffer from disadvantages under certain conditions. For example, they are unable to capture blur-free images in highly dynamic or low-illumination conditions, and they cannot produce clear images when different parts of the observed scene have substantially different illumination (i.e. their dynamic range is limited). At MPL, we have therefore started to investigate a still relatively new, bio-inspired visual sensor called an event camera or dynamic vision sensor. Our aim is to push the envelope of visual perception solutions towards highly challenging applications, such as autonomous race car driving or the tracking of highly dynamic objects.

Event cameras are bio-inspired sensors that perform well in challenging illumination conditions and have very high temporal resolution. Rather than measuring frame by frame, the pixels of an event camera operate independently and asynchronously. Each pixel measures changes of the logarithmic brightness and returns them in the highly discretised form of time-stamped events, each indicating a relative change by a fixed quantity since the last event.
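To make this measurement model concrete, here is a minimal single-pixel sketch: an (idealised) event is emitted whenever the log-brightness has drifted by a fixed contrast threshold since the last event. The function name, threshold value, and input signal are purely illustrative, not a model of any particular sensor.

```python
import numpy as np

def simulate_events(log_intensity, timestamps, contrast_threshold=0.2):
    """Single-pixel event generation: emit a timestamped event whenever the
    log-brightness has drifted by the contrast threshold since the last event."""
    events = []
    ref = log_intensity[0]  # log-brightness at the last emitted event
    for t, log_i in zip(timestamps[1:], log_intensity[1:]):
        while log_i - ref >= contrast_threshold:   # brightness increase -> ON event
            ref += contrast_threshold
            events.append((t, +1))
        while ref - log_i >= contrast_threshold:   # brightness decrease -> OFF event
            ref -= contrast_threshold
            events.append((t, -1))
    return events

# Example: a pixel watching a sinusoidally varying brightness.
t = np.linspace(0.0, 1.0, 1000)
log_i = np.log(1.5 + 0.5 * np.sin(2 * np.pi * 5 * t))
print(simulate_events(log_i, t)[:5])
```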

Though the potential of event cameras in highly dynamic or challenging illumination conditions is evident, the complicated nature of the sensor data makes reliable, real-time SLAM a particularly hard problem. MPL has contributed novel algorithms for event camera calibration, mapping, pose estimation, and event-based SLAM.


Event camera calibration

Camera calibration is an important prerequisite for solving 3D computer vision problems. Traditional methods rely on static images of a calibration pattern, which poses a practical challenge for event cameras: they require image change to produce measurements in the first place. The current standard for event camera calibration therefore relies on flashing patterns. These have the advantage of simultaneously triggering events at all reprojected pattern feature locations, but such patterns are difficult to construct or use in the field. We present the first dynamic event camera calibration algorithm, which calibrates directly from events captured during relative motion between camera and calibration pattern. The method is propelled by a novel feature extraction mechanism for calibration patterns, and leverages existing calibration tools before optimizing all parameters through a multi-segment continuous-time formulation. The resulting method is highly convenient and reliably calibrates from data sequences spanning less than 10 seconds.
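As a rough illustration of why motion-based calibration is feasible, the sketch below accumulates a short temporal slice of events into a grey image on which a standard pattern detector (here OpenCV's circle-grid finder) can run. This is a simplified stand-in, not the paper's actual feature extraction mechanism; the (t, x, y, p) event layout and the pattern size are assumptions.

```python
import numpy as np
import cv2

def accumulate_event_frame(events, height, width, slice_s=0.03):
    """Accumulate a short temporal slice of (t, x, y, p) events into a grey
    image on which a conventional pattern detector can operate."""
    frame = np.zeros((height, width), dtype=np.float32)
    t0 = events[0][0]
    for t, x, y, p in events:
        if t - t0 > slice_s:
            break
        frame[y, x] += 1.0
    frame = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX)
    return frame.astype(np.uint8)

# Synthetic example: events clustered around two hypothetical pattern dots.
rng = np.random.default_rng(0)
events = sorted((rng.uniform(0.0, 0.02), int(px + rng.normal(0, 1)),
                 int(py + rng.normal(0, 1)), 1)
                for px, py in [(40, 40), (80, 40)] for _ in range(200))
frame = accumulate_event_frame(events, 120, 160)

# A circle grid could then be detected and fed to standard calibration tools
# (pattern size (4, 11) is purely illustrative):
# found, centers = cv2.findCirclesGrid(frame, (4, 11),
#                                      flags=cv2.CALIB_CB_ASYMMETRIC_GRID)
```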

K. Huang, Y. Wang, and L. Kneip. Dynamic Event Camera Calibration. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021. [pdf] [code] [youtube] [bilibili]


Stereo depth estimation and mapping

In one of our collaborations, we developed a solution to the problem of 3D reconstruction from data captured by a stereo event-camera rig moving in a static scene, as in the context of stereo Simultaneous Localization and Mapping (SLAM). The proposed method optimizes an energy function designed to exploit the small-baseline spatio-temporal consistency of events triggered across both stereo image planes. In another recent work, we explore the stereo depth estimation problem again, this time from a hybrid RGB-event camera setup. The method relies on deep learning and employs an attention module.
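The following sketch conveys the flavour of event-based stereo matching under a rectified setup: time surfaces (exponentially decayed maps of the latest event time per pixel) serve as a matchable representation, and the disparity is chosen where left and right patches agree best. This is a didactic simplification, not the energy formulation of the papers; function names and parameters are illustrative.

```python
import numpy as np

def time_surface(events, height, width, t_now, tau=0.03):
    """Exponentially decayed map of the most recent event time at each pixel,
    a common matchable representation for event streams."""
    last_t = np.full((height, width), -np.inf)
    for t, x, y, p in events:
        last_t[y, x] = t
    return np.exp(-(t_now - last_t) / tau)  # pixels without events decay to 0

def disparity_at(ts_left, ts_right, x, y, patch=3, max_disp=60):
    """Pick the disparity whose right-image patch agrees best with the left
    patch; rectification is assumed, so the search runs along the same row."""
    h = patch // 2
    ref = ts_left[y - h:y + h + 1, x - h:x + h + 1]
    best_d, best_err = 0, np.inf
    for d in range(max_disp):
        if x - d - h < 0:
            break
        cand = ts_right[y - h:y + h + 1, x - d - h:x - d + h + 1]
        err = float(np.sum((ref - cand) ** 2))
        if err < best_err:
            best_d, best_err = d, err
    return best_d
```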

Y. Zhou, G. Gallego, H. Rebecq, L. Kneip, H. Li, and D. Scaramuzza. Semi-Dense 3D Reconstruction with a Stereo Event Camera. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, September 2018. [pdf]

Y.-F. Zuo, L. Cui, X. Peng, Y. Xu, S. Gao, X. Wang, and L. Kneip. Accurate Depth Estimation from a Hybrid Event-RGB Stereo Setup. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021.


Globally optimal motion estimation

The works below address several motion estimation problems with event cameras. The flow of the events is modelled by a general homographic warping in a space-time volume, and the objective is formulated as a maximisation of contrast within the image of warped events (a sketch of this objective is given after the list). The following problems have been solved:

  • Camera rotation estimation
  • Planar motion estimation with a downward facing camera
  • Optical flow
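The sketch below illustrates the contrast-maximisation objective for the rotation case: events are warped back to a reference time under a candidate angular velocity, accumulated into an image, and scored by variance; the true motion maximises this contrast because events from the same edge align. It is a minimal illustration under a first-order rotational flow model, not the papers' implementation.

```python
import numpy as np

def contrast(ang_vel, events, K, height, width):
    """Score a candidate angular velocity by the variance (contrast) of the
    image of warped events; events are (t, x, y, p) with t relative to the
    reference time. A first-order rotational flow model is assumed."""
    fx, fy, cx, cy = K
    wx, wy, wz = ang_vel
    img = np.zeros((height, width))
    for t, x, y, p in events:
        xn, yn = (x - cx) / fx, (y - cy) / fy          # normalised coordinates
        # rotational image flow at (xn, yn), warped back to the reference time
        u = -t * (wx * xn * yn - wy * (1 + xn ** 2) + wz * yn)
        v = -t * (wx * (1 + yn ** 2) - wy * xn * yn - wz * xn)
        xi = int(round((xn + u) * fx + cx))
        yi = int(round((yn + v) * fy + cy))
        if 0 <= xi < width and 0 <= yi < height:
            img[yi, xi] += 1.0                          # image of warped events
    return np.var(img)
```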

The core contribution of these works is a globally optimal solution to these generally non-convex problems, which removes the dependency on a good initial guess that plagues prior local optimisation methods. The methods rely on branch-and-bound optimisation and employ novel, efficient, recursive upper and lower bounds derived for six different contrast estimation functions.
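For intuition, a generic best-first branch-and-bound skeleton is sketched below. The bounding and splitting functions are left abstract here; the papers' contribution lies precisely in deriving efficient recursive bounds for the contrast functions, which this sketch does not reproduce.

```python
import heapq

def branch_and_bound_max(root, upper_bound, lower_bound, split, tol=1e-3):
    """Best-first branch-and-bound maximisation over parameter boxes.
    `upper_bound`/`lower_bound` bound the objective on a box and `split`
    subdivides it; all three are abstract callables in this sketch."""
    best_val, best_box = lower_bound(root), root
    queue = [(-upper_bound(root), 0, root)]   # max-heap via negated bounds
    counter = 1                               # tie-breaker for the heap
    while queue:
        neg_ub, _, box = heapq.heappop(queue)
        if -neg_ub <= best_val + tol:
            break                             # no remaining box can do better
        for child in split(box):
            lb = lower_bound(child)
            if lb > best_val:
                best_val, best_box = lb, child
            ub = upper_bound(child)
            if ub > best_val + tol:
                heapq.heappush(queue, (-ub, counter, child))
                counter += 1
    return best_box, best_val
```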

X. Peng, Y. Wang, L. Gao, and L. Kneip. Globally-Optimal Event Camera Motion Estimation. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, August 2020. [pdf] [youtube] [bilibili]

X. Peng, L. Gao, Y. Wang, and L. Kneip. Globally-Optimal Contrast Maximisation for Event Cameras. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2021.