Mobile Perception Lab

Welcome to the website of the Mobile Perception Lab led by Prof. Laurent Kneip. Our lab was founded in September 2017 and is part of the School of Information Science and Technology (SIST) at ShanghaiTech.

MPL members




Our mission is to develop next-generation 3D perception systems for mobile applications such as robots, intelligent vehicles, and intelligence augmentation devices (e.g. augmented reality headsets). Through our research, we hope to provide smart mobile machines and devices with the ability to see, to understand their environment using the sense of vision, and to perform useful tasks in the complex, unstructured, and dynamically changing environments in which we live and work. We work on robust, accurate, and efficient localization, 3D scene geometry perception, and scene understanding solutions, where the generated models of the environment cover everything from low-level geometry to higher-level aspects such as scene composition, object poses or shapes, dynamics, and semantic meaning. We achieve this goal by combining classical model- and optimization-based methods with modern, data-driven techniques. We currently concentrate many of our efforts on the development of related solutions for neuromorphic cameras. Our goal is to enable energy-efficient sensing and robotic vision in highly challenging scenarios involving extreme dynamics or illumination conditions.


MPL is part of the ShanghaiTech Automation and Robotics Center (STAR Center). Feel free to explore our webpages in order to learn more about our mission. Also note that MPL very much welcomes interested people from the outside as well as our own undergraduate students to get in touch; we are always open to collaboration.

News

January, 2024

TPAMI paper accepted on globally optimal consensus maximization!


Accelerating Globally Optimal Consensus Maximization in Geometric Vision, by Zhang Xinyue, Peng Liangzu, Xu Wanting, and Laurent Kneip. Consensus maximization is fundamental to many geometric registration problems, but globally optimal solutions are typically time-consuming to obtain. In this work, we present a general improvement of the computational complexity of the branch-and-bound paradigm through a trick that makes use of interval stabbing. Click here for the paper.
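To give a rough idea of the primitive involved, the sketch below (a generic illustration, not the paper's implementation) shows interval stabbing: given inlier intervals for a one-dimensional model parameter, a simple sweep over sorted endpoints finds the value covered by the largest number of intervals in O(n log n), which is what allows the bounds inside branch and bound to be evaluated efficiently.

    # Minimal sketch of interval stabbing, the core primitive named above
    # (illustrative only, not the paper's code).

    def max_stabbing(intervals):
        """Return (best_value, count) where best_value stabs the most intervals."""
        events = []
        for lo, hi in intervals:
            events.append((lo, +1))   # interval opens
            events.append((hi, -1))   # interval closes
        # process openings before closings at equal coordinates so touching
        # intervals are both counted
        events.sort(key=lambda e: (e[0], -e[1]))
        best_value, best_count, running = None, 0, 0
        for x, delta in events:
            running += delta
            if running > best_count:
                best_count, best_value = running, x
        return best_value, best_count

    # toy usage: residual-derived inlier intervals for a 1-D model parameter
    print(max_stabbing([(0.0, 2.0), (1.0, 3.0), (1.5, 2.5)]))  # -> (1.5, 3)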


December, 2023

TRO paper accepted on cross-modal, map-based event camera tracking


Cross-Modal Semi-Dense 6-DoF Tracking of an Event Camera in Challenging Conditions, by Zuo Yifan, Xu Wanting, Wang Xia, Wang Yifu, and Laurent Kneip. While event cameras are good for motion estimation, they may not necessarily be the best sensor for mapping. Fortunately, in many scenarios a prior map exists. In this work, we investigate how to track an event camera against map priors created by a regular camera. The map is given in a semi-dense representation, and the approach is an extension of our previous event-depth camera method DEVO. Click here for the paper and code.


December, 2023

Two papers accepted for 3DV'24!


1st paper: Event-based Visual Odometry on Non-holonomic Ground Vehicles, by Xu Wanting, Zhang Si'ao, Cui Li, Peng Xin, and Laurent Kneip. Presents reliable, purely event-based visual odometry on planar ground vehicles by employing the constrained non-holonomic motion model of Ackermann steering platforms. Owing to the use of event cameras, the algorithm significantly outperforms the more traditional frame-based alternative in challenging illumination scenarios. [pdf] [youtube] [code]
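As a rough illustration of the motion model exploited in the first paper (a hedged sketch, not the paper's solver), Ackermann-steering platforms locally move along circular arcs, so the planar relative pose has a single rotational degree of freedom up to scale, and the translation direction bisects the heading change:

    import numpy as np

    # Minimal sketch of the Ackermann (non-holonomic) motion model assumed above.
    # The relative planar pose after turning by theta along an arc is fully
    # determined by theta and the arc radius; the chord direction equals theta/2.

    def ackermann_relative_pose(theta, arc_radius):
        """Relative planar pose (R, t) of a vehicle after turning by theta [rad]
        along an arc of given radius, expressed in the starting frame
        (x forward, y left)."""
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        chord = 2.0 * arc_radius * np.sin(theta / 2.0)       # chord length
        t = chord * np.array([np.cos(theta / 2.0),           # chord direction
                              np.sin(theta / 2.0)])          # bisects theta
        return R, t

    R, t = ackermann_relative_pose(np.deg2rad(10.0), arc_radius=8.0)
    print(R, t)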

2nd paper: Relative Pose for Nonrigid Multi-Perspective Cameras: The Static Case, by Li Min, Yang Jiaqi, and Laurent Kneip. In this paper, we demonstrate that non-rigid camera attachments in multi-perspective camera setups deforming under the influence of external forces (e.g. gravity) need not lead to errors in relative pose estimation. On the contrary, if a correct physical model is considered, it even enables the identification of the direction of gravity without any further prior assumptions. [pdf]


November, 2023

TRO paper accepted on event-based visual-inertial odometry


In this work, we combine some of our recent achievements on geometric low-level solvers for efficient relative event-camera displacement estimation (i.e. dynamics) with inertial readings in a tightly coupled scheme. As a result, we obtain continuous, metrically scaled, genuine visual odometry results for an event camera-IMU setup. Authors: Xu Wanting, Peng Xin, and Laurent Kneip. Click here for the paper.


October, 2023

Weblinks for our ICCV'23 paper are out!


The weblinks for our ICCV'23 paper "5-point minimal solver for event camera relative motion estimation" are now all out. In this work, we provide the exact, minimal parametric form of the manifold location of the events generated by the observation of a line under (locally) constant linear velocity. This enables the first minimal solver for events, which reveals partial velocity information. The work is complemented by a linear solver allowing the fusion of multiple partial measurements into a complete 3D velocity estimate. We conceive this contribution as the first cornerstone in building a foundational geometric theory for event-based visual odometry. Click here for the paper, video, poster, and project webpage.


October, 2023

Oral at ICCV'23!


We are excited to have an oral paper accepted at ICCV, in which we present a novel geometric solver for event-based visual odometry. Stay tuned for further information on the paper. Kudos to MPL PhD students Gao Ling and Su Hang for this fine achievement.


September, 2023

Check out our IROS'23 contribution on pose graph optimization in SLAM


"Scale jump-aware pose graph relaxation for monocular SLAM with re-initializations" addresses an intricate problem of monocular SLAM: If a tracking loss and a re-initialization occurs in a forward-exploration scenario, a sub-map with inconsistent scale may be created. Even if multiple such reinitializations occur, our hybrid pose graph optimization (HPGO) is able to reconcile a globally consistent map if loop closures occur. See code, video and paper, which explains the exact conditions under which the global scale can be recovered. Kudos to Yuan Runze and our collaborators from Midea robozone for this interesting outcome.


September, 2023

Welcoming new students to MPL!


Welcome Zhang Si'ao, Wu Shaoxun, Feng Yunlong, and Yan Dongxue to our lab. We look forward to working with you over the coming years.


July 31st, 2023

Thank you Zuo Yifan!


Zuo Yifan's long-term visit to MPL comes to an end. He did great work during his stay, with several contributions on event-based vision at top robotics conferences. Thank you for staying with us, and all the best for the future!


July 20th, 2023

Prof Kneip visits Google Zurich


July 5th, 2023

Prof Kneip gives invited talk at the Huawei Corporate Research Center in Zurich


Thanks to Huawei and Li Yuanyou for inviting Prof Kneip as a guest speaker to their annual CV&ML workshop. Another great chance to present our advances in neuro-morphic vision.


June 6th, 2023

Prof Kneip gives keynote speech at EAC Suzhou


During this talk, Prof Kneip introduces his contributions to localization, mapping, and environment perception solutions for multi-perspective cameras mounted on self-driving cars.


June 5th, 2023

Prof Kneip visits RPG


Prof Kneip spends 12 weeks at RPG in Zurich to collaborate on the development of a new foundational geometric theory for event-based vision. The stay includes a visit to Tobi Delbruck's lab and several companies within the local ecosystem around event-based vision, including Huawei, Inivation, and Synsense.


May 29th, 2023

MPL @ ICRA


Members of MPL and the STAR center attend ICRA. This marks the first conference visit "outside" China in a very long time, and it has been a special opportunity to catch up with friends and collaborators. This year, we present a contribution on a new topic, learning-based control of legged robots.


May 15th, 2023

Prof. Richard Hartley hosted in Shanghai


Prof. Richard Hartley spends 5 weeks in Shanghai, gives a talk at ShanghaiTech University, and collaborates with MPL on neural shape generation for shape regularization in geometric optimization. Prof. Richard Hartley can be considered one of the fathers of geometric computer vision, and is the co-author of the "bible" of computer vision, the book Multiple View Geometry. What an honour!


May 8th, 2023

Defense of Chen Yu and Cao Jinyue!


Chen Yu and Cao Jinyue joined MPL only for a relatively short time, which makes their work and contributions all the more impressive. We wish them all the best for the future!


April 25th, 2023

Defense of Wei Jiaxin!


Congratulations to Wei Jiaxin, who successfully defended her thesis. She has been doing absolutely outstanding work as a graduate student at MPL, co-authoring three papers (two as first author) and contributing to another three works that are currently under review and result from different external collaborations. What a performance! We are glad to see that she will move on to an outstanding opportunity as a PhD student at TUM, working with Prof. Leutenegger, a former collaborator and friend of Prof. Kneip. We can't wait to see what she will produce in the near future.


December 7th, 2022

Prof Kneip gives Banquet speech at ACCV'2022


Prof Kneip is an invited speaker at the Banquet of ACCV'2022 and provides an overview of the challenges and opportunities in event-based computer vision.


September 23rd, 2022

Another three papers at IROS'2022 in Kyoto, Japan


1st paper: VECtor: A Versatile Event-Centric Benchmark for Multi-Sensor SLAM, by L. Gao, Y. Liang, J. Yang, S. Wu, C. Wang, J. Chen, and L. Kneip. A comprehensive set of benchmark datasets captured with a fully hardware-synchronized multi-sensor setup containing an event-based stereo camera, a regular stereo camera, multiple depth sensors, and an inertial measurement unit. Note that this work also appears as an RA-L paper. [dataset] [pdf] [youtube] [bilibili]

2nd paper: Accurate Instance-Level CAD Model Retrieval in a Large-Scale Database, by J. Wei, L. Hu, C. Wang, and L. Kneip. The key insight of this work is given by the importance of geometric reranking of initial retrieval results. [pdf] [youtube] [bilibili] [supplementary material]

3rd paper: Multical: Spatiotemporal Calibration for Multiple IMUs, Cameras and LiDARs, by X. Zhi, J. Hou, Y. Lu, L. Kneip, and S. Schwertfeger. [code]


September 23rd, 2022

Prof Kneip will be an area chair for CVPR 2023


September 22nd, 2022

Prof Kneip gives invited talk at the MFI 2022 Workshop on Neuromorphic Event Sensor Fusion

Please go to the webpage here to find more information on the list of invited speakers and presented papers.


September 19th, 2022

New grant from NSFC!

MPL wins a new grant from NSFC to support our research on Spatial AI. It is a "Research Fund for International Excellent Young Scientists".


September 17th, 2022

Paper accepted for ACCV!

Spotlights: Probing Shapes from Spherical Viewpoints. Inspired by spherical multi-view scanners, we propose a novel sampling model called Spotlights to represent a 3D shape as a compact 1D array of depth values. Feel free to read the draft on arXiv already!


September 5th, 2022

Welcome Dai Zijia and Liu Tao as new graduate students at MPL!


August 11th, 2022

Stereye releases Polar 3D scanner

Our partner Stereye releases a new, easy-to-use 3D content capturing device, which is a result of the collaboration with MPL. The platform will be used for capturing accurate 3D models and the automatic generation of HD maps. It targets multiple industries such as construction, AR, and robotics, to name just a few. Feel free to reach out and give it a try!


July 2022

We are thrilled to announce our new collaboration with Midea.

Together with Midea, we will explore new approaches to object-level SLAM and spatial AI. It makes us particularly happy to be able to continue our collaboration with Dr. Huang Kun, an alumnus of MPL!


May 19th, 2022

Congratulations to Huang Kun, who successfully defended his PhD thesis

His thesis is entitled "Continuous Multiple View Geometry for Smart Vehicles". He worked on continuous-time parametrizations for visual SLAM, with related applications ranging from non-holonomic trajectory estimation for AVs to the intrinsic calibration of dynamic vision sensors (open-source release). Many thanks to the reviewers Yuchao Dai, Guofeng Zhang, Li Jiamao, and Xu Lan.


May 6th, 2022

Professor Laurent Kneip gives an invited talk on event-based vision at ICDIP 2022

The talk is online on bilibili, and can be accessed freely here.


February 23rd, 2022

Congratulations to Prof Kneip, who has just been elevated to the grade of IEEE Senior Member.


February 8th, 2022

MPL receives a new grant from the General Program of the 'Shanghai Innovation in Science and Technology' Natural Science Foundation

This grant will further promote our research on event-based vision.


February 1st, 2022

MPL has 3 papers accepted for ICRA 2022!

-DEVO: Depth-Event Camera Visual Odometry in Challenging Conditions (PDF Youtube Bilibili)
-Accurate calibration of multi-perspective cameras from a generalization of the hand-eye constraint (PDF Youtube Bilibili)
-FP-Loc: Lightweight and Drift-free Floor Plan-assisted LiDAR Localization (PDF Youtube Bilibili)

Well done to all co-authors, and more videos and material will follow.


January 19th, 2022

Peng Xin successfully defends her thesis

Another female candidate graduates from the Mobile Perception Lab: Dr Peng Xin. During her PhD, she worked on novel geometric theories for multi-perspective cameras and dynamic vision sensors. Her work was strongly inspired by the work of Prof. Kneip's former advisor, Prof. Davide Scaramuzza, and led up to a real highlight: a TPAMI publication on globally optimal motion estimation with event cameras. Her work was ranked "excellent" by all 5 reviewers. Kudos to Peng Xin for this outstanding achievement, and many thanks to the reviewers, Prof Davide Scaramuzza from UZH, Prof Liu Ming from HKUST, Prof Dai Yuchao from NWPU Xian, Prof Li Jiamao from CAS, and Prof Sören Schwertfeger from our robotics center @ ShanghaiTech. Peng Xin will join other MPL alumni at Motovis Intelligent Technologies in Shanghai and continue to investigate related topics.


January 17th, 2022

Award for research excellence at SIST

The School of Information Science and Technology (SIST) of ShanghaiTech University gives one of their annual awards for research excellence to Professor Kneip. Kudos to all MPL members for this achievement.


January 11th, 2022

Release of our event camera calibration toolbox

We are releasing our dynamic event camera calibration toolbox, which merely needs a static circular calibration pattern. Please visit our Software page, or go directly to the github repository:
https://github.com/MobilePerceptionLab/EventCalib
PDF Youtube Bilibili


November 19th, 2021

Hu Lan successfully defends her thesis

Big congratulations to Hu Lan for successfully defending her PhD thesis entitled "Object-level reasoning in RGB-D camera registration and SLAM". She worked relentlessly during her PhD and started at a time when the lab did not even physically exist; she was indeed the first MPL student here at ShanghaiTech. Her thesis is a complete and consistent work with many publications in top conferences and journals, and her work has recently been cited by Professor Andrew Davison. Many thanks to the reviewers, Prof. Stefan Leutenegger from TU Munich, Prof Liu Ming from HKUST, Prof Dai Yuchao from NWPU Xian, Prof Li Jiamao from CAS, and Prof Sören Schwertfeger from our own robotics center. Dr Hu Lan will join other MPL alumni at Motovis Intelligent Technologies in Shanghai and continue to investigate related topics.


October 29th, 2021

Andrew Davison gives talk in our SIST Distinguished Lecture Series

We are very honoured to host Professor Andrew Davison from Imperial College London for an online lecture in our SIST Distinguished Lecture Series. Professor Davison is the (co-)author of MonoSLAM, DTAM, KinectFusion, SLAM++, DynamicFusion, Fusion++, CodeSLAM, iMAP, and Node-SLAM. He is without doubt one of the great inspirations for our work, especially our research line on spatial AI.


October 16th, 2021

Paper accepted for BMVC'21

Happy to announce the first fruits of our research on event-based speed sensing: the "Continuous Event-Line Constraint for Closed-Form Velocity Initialization". It directly solves for the dynamics (velocity) of a dynamic vision sensor.
PDF


September, 2021

Glad to announce that Gao Ling is now officially a PhD student of MPL


July 29th, 2021

MPL research is hot in AR.

After recently being given the honor of speaking at Magic Leap, Prof. Kneip is now also giving a talk at Microsoft Research.


July 26th, 2021

Outstanding undergraduate thesis award for Su Hang!

MPL's newly incoming graduate student Su Hang received this year's best undergraduate thesis award here at SIST, ShanghaiTech, for his work on high-speed object tracking with event cameras. He will continue to work on this exciting topic during his graduate studies at MPL.


July 26th, 2021

Welcome Yuan Runze and Su Hang as new graduate students at MPL! Both are from our own undergraduate program.


July 3rd, 2021

Congratulations to MPL member Ouyang Zhanpeng who just graduated as a Master in Computer Science from ShanghaiTech University.

Earlier this year, Wu Peng already received his degree. In this regard, it is probably also worth mentioning the truly outstanding offers that MPL students may receive on the job market. Wu Peng joined Intel Shanghai, while Ouyang Zhanpeng will join a brand-new AR-oriented division at Beijing KuaiShou Technology Co., Ltd.


July 1st, 2021

And yet another hattrick for MPL, with three accepted papers at IROS'2021.

Please check out our most recent contributions on semantic localization and event-based vision:

  • Monte-Carlo Localization in Underground Parking Lots using Parking Slot Numbers, Li Cui, Chunyan Rong, Jingyi Huang, Andre Rosendo, Laurent Kneip
    (Youtube Bilibili)
  • Dynamic Event Camera Calibration, Huang Kun, Yifu Wang, and Laurent Kneip
    (PDF Youtube Bilibili)
  • Accurate depth estimation from a hybrid event-RGB stereo setup, Yi-Fan Zuo, Li Cui, Xin Peng, Yanyu Xu, Shenghua Gao, Xia Wang, and Laurent Kneip.

June, 2021

Prof. Kneip is invited to give a talk at WAIC, 2021!


June 1st, 2021

Prof. Kneip appears in the local media

The Xinmin Evening News from Shanghai published a report about the work of MPL!


April 6th, 2021

Welcome Wang Yifu, who will join us as a post-doc/senior research member here at MPL!

Wang Yifu has been working with Prof. Laurent Kneip since 2016 and obtained his PhD from the Australian National University under Prof. Kneip's supervision.


February 28th, 2021

Publication at CVPR 2021!

Ongoing collaboration between Prof. Laurent Kneip and Dr. Ji Zhao from TuSimple leads to new advancements in rotation averaging:

  • Robust and Efficient Global Rotation Averaging with the Burer Monteiro Method, by Y. Chen, J. Zhao, and L. Kneip. (PDF)

February 28th, 2021

MPL likes hattricks!

Another hattrick publication at ICRA'21. Feel free to check out our new achievements on vision for self-driving cars and semantic SLAM:

  • Robust SRIF-based LiDAR-IMU Localization for Autonomous Vehicles, by K. Li, Z. Ouyang, L. Hu, D. Hao, and L. Kneip (outperforming LEGO-SLAM!)
    (Youtube Bilibili)
  • B-splines for Purely Vision-based Localization and Mapping on Non-holonomic Ground Vehicles, by K. Huang, Y. Wang, and L. Kneip
    (PDF Youtube Bilibili)
  • Point Set Registration With Semantic Region Association Using Cascaded Expectation Maximization, by L. Hu, J. Wei, Z. Ouyang, and L. Kneip

January 29th, 2021

New journal publication in Springer Journal of Mathematical Imaging and Vision!

Please check out our latest JMIV publication, where we solve the chicken-and-egg problem of scan alignment with extremely low overlap ratios and symmetry plane estimation.
(PDF)


January 15th, 2021

First pure MPL publication in TPAMI!

Glad to announce one of the first fruits of our event-camera based research. Although it is one of the first fruits, it is already a very sweet one: A publication in the IEEE Transactions on Pattern Analysis and Machine Intelligence. Feel free to check out our work on "Globally-Optimal Contrast Maximisation for Event Cameras", by Peng Xin, Ling Gao, Yifu Wang, and Laurent Kneip.
(Material from related conference paper: PDF Youtube Bilibili)


September, 2020

Prof. Kneip is area chair for both ICRA and CVPR 2021!


September 12th, 2020

Big congratulations to Prof Laurent Kneip!

MPL is immensely pleased to announce that our director has just been promoted to the tenured Associate Professor level within the School of Information Science and Technology here at ShanghaiTech!


August 20th, 2020

Paper presented at ECCV 2020!

MPL is presenting one of its first works on event-based computer vision this year at ECCV'2020: a globally optimal solution to event-camera based motion estimation.
(PDF Youtube Bilibili)


July 26th, 2020

Laurent Kneip is an area chair of 3DV 2020 and CVPR 2021!


July 25th, 2020

Laurent Kneip gives a commencement address at the 2020 ShanghaiTech graduation ceremony ... in Chinese!

Please check out the link here


July 7th, 2020

Excited to announce the first fruits of our work on event based cameras!

MPL just got one paper accepted for ECCV'2020. The paper is called "Globally-Optimal Event Camera Motion Estimation", and it is the first globally optimal solution to continuous motion estimation with an event camera mounted on an AGV.
(PDF Youtube Bilibili)


June 16th, 2020

New publications at ICRA and CVPR!

MPL has had a very successful start into the new year 2020, with a hattrick at ICRA and two publications at CVPR.


May 21st, 2020

Congratulations to Cao Yuchen (Joshua) and Yu Peihong for successfully defending their graduation thesis.

This is a special moment: Cao Yuchen is the first graduate student since the debut of our lab at ShanghaiTech. Please also check out our new alumni section under the People page, where you will be able to follow the paths of past MPL members and collaborators.


August 16, 2019

Laurent Kneip and MPL receive a grant from the International Young Scientist Program of the National Natural Science Foundation of China.

The award adds to another grant received earlier this year from the General Program of the 'Shanghai Innovation in Science and Technology' Natural Science Foundation, and will further support our research on semantic, higher-level SLAM.


July 30, 2019

Paper accepted for 3DV'19

Our innovative application of an artificial neural network to improve the numerical stability of Groebner basis solvers will be presented orally at 3DV in Quebec City, Canada.


June 24, 2019

New graduate student

Welcome Ling Gao, one of ShanghaiTech's own outstanding undergraduate students, who joins MPL.


June 20, 2019

Paper accepted for IROS'19

We have an exciting contribution on "articulated" multi-perspective cameras accepted for IROS'19. The method enables motion estimation with non-overlapping cameras distributed over a truck (i.e. tractor and trailer), and handles the fact that these two bodies move w.r.t. each other.


March 28, 2019

Welcoming new members to the group

Welcome Dr Min Li, who is starting as a Research Assistant Professor at MPL. Also welcome Li Cui, who will start as a graduate student at MPL in 2019.


March 12, 2019

Two papers accepted to CVPR'19!

Special congratulations to Huang Kun from MPL on this fine achievement!


February 26, 2019

Congratulations to Dr Zhou Yi!

The first PhD student working under the supervision of Prof Laurent Kneip officially graduated.


January 27, 2019

Laurent Kneip at an NII Shonan meeting

Laurent Kneip was an invited speaker at an NII Shonan meeting on optimization in computer vision, and presented MPL's latest research achievements.


Good start into 2019!

Check out our new contributions to TPAMI and TRO!

TPAMI paper "Minimal case relative pose computation using ray-point-ray features" presents new insights on two-view geometry. TRO paper "Canny-VO: Visual Odometry with RGB-D Cameras based on Geometric 3D-2D Edge Alignment" (PDF) summarizes our long efforts on semi-dense motion estimation for RGBD cameras.


November 5, 2018

1st prize at the IEEE Australia Postgraduate Student Paper Competition.


September 15, 2018

New graduate student

Welcome Xu Wanting from Xinjiang University to join MPL.


July 9, 2018

ECCV paper on event-based vision accepted!

The fruits of our collaboration with RPG at the University of Zurich.


June 25, 2018

New PhD student

Peng Xin from CAS has joined MPL as a new PhD student. Welcome to the team!


June 14, 2018

PAMI paper accepted

Our paper on globally optimal correspondence-less camera resectioning, which won a Marr Prize Honourable Mention, has been accepted to PAMI!


March 30, 2018

Oral at CVPR'18!

Our work on globally optimal, non-minimal calibrated relative pose will be presented orally at CVPR'18. (PDF)


March 23, 2018

New graduate student

Welcome Zhanpeng Ouyang from Nanchang University to join our team!


March 23, 2018

MPL web presence goes live!

We are excited to get started, and here is where you will be able to check out on our progress as we go. We'd love to get feedback from you!


December, 2017

Marr Prize Honourable Mention

Our ICCV'17 work "Globally-Optimal Inlier Set Maximisation for Simultaneous Camera Pose and Feature Correspondence" (PDF) received a Marr Prize Honourable Mention! The Marr Prize is the most prestigious best paper award in the computer vision community.


Video Highlights

3DV'2024: Event-based visual odometry on non-holonomic ground vehicles.

Reliable, purely event-based visual odometry on planar ground vehicles by employing the constrained non-holonomic motion model of Ackermann steering platforms. Owing to the use of event cameras, the algorithm significantly outperforms the more traditional frame-based alternative in challenging illumination scenarios. Links: Youtube, PDF, and code.



ICCV'2023: 5-point minimal solver for event camera relative motion estimation.

In this work, we present the first minimal solver for relative motion estimation from events. The work provides a clear understanding of the manifold location of the events generated by the observation of a line under a locally constant linear velocity assumption. The manifold parameters are notably given by the camera dynamics. Links: Youtube, PDF, and project webpage.



IROS'2023: Scale jump-aware pose graph relaxation for monocular SLAM with re-initializations

This work addresses an intricate problem of monocular SLAM: if a tracking loss and a re-initialization occur in a forward-exploration scenario, a sub-map with inconsistent scale may be created. Even if multiple such re-initializations occur, our hybrid pose graph optimization (HPGO) is able to recover a globally consistent map once loop closures occur. Links: Youtube, PDF, and code.



IROS'2022: Accurate instance-level CAD model retrieval in large-scale databases

Online presentation for MPL's IROS'22 contribution on accurate instance-level retrieval of CAD models from real-world observations of objects. The key insight is given by the importance of geometric reranking of the initial retrieval results, which is particularly useful in large-scale databases. Additional supplementary material is also available. Links: Youtube, Bilibili, Supplementary, and PDF.



ICRA'2022: FP-Loc: Lightweight and drift-free floor plan-assisted LiDAR localization

FP-Loc is a novel framework for floor plan-based, full six degree-of-freedom LiDAR localization. The approach relies on robust ceiling and ground plane detection, which solves part of the pose and supports the segmentation of vertical structure elements such as walls and pillars. The core of the method consists of a novel nearest neighbour data structure for an efficient look-up and registration of nearest vertical structure elements from the floor plan. Links: Youtube, Bilibili, and PDF.
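For intuition, the following sketch (a generic nearest-neighbour association using a k-d tree, not the paper's specialized data structure) shows how horizontally projected LiDAR points could be matched to the nearest wall segment of a floor plan:

    import numpy as np
    from scipy.spatial import cKDTree

    # Minimal sketch of floor-plan association (illustrative only): densely sample
    # the wall segments of a 2-D floor plan, index the samples, and query with
    # projected LiDAR points to find the nearest vertical structure element.

    def build_wall_index(segments, step=0.05):
        """segments: list of ((x1, y1), (x2, y2)) wall segments from the floor plan."""
        samples, labels = [], []
        for k, (a, b) in enumerate(segments):
            a, b = np.asarray(a, float), np.asarray(b, float)
            n = max(int(np.linalg.norm(b - a) / step), 2)
            for s in np.linspace(0.0, 1.0, n):
                samples.append(a + s * (b - a))   # point sampled along segment k
                labels.append(k)
        return cKDTree(np.array(samples)), np.array(labels)

    def associate(points_2d, tree, labels):
        """Return, for each projected LiDAR point, the distance to and index of
        the nearest wall segment."""
        dist, idx = tree.query(points_2d)
        return dist, labels[idx]

    # toy usage: two walls forming a corner, three query points
    tree, labels = build_wall_index([((0, 0), (5, 0)), ((5, 0), (5, 4))])
    print(associate(np.array([[1.0, 0.2], [4.9, 2.0], [2.5, 1.0]]), tree, labels))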



ICRA'2022: DEVO: Depth-event camera VO in challenging conditions

This video introduces our latest event-camera based self-localization method, which is able to operate in challenging illumination conditions. The depth of events is directly read from a depth camera, which enables the registration of events across time. As demonstrated in the video, the framework successfully handles a large variety of different conditions, including low illumination, high-dynamic range, and fast motion. Links: PDF, Youtube and Bilibili.



ICRA'2022: Accurate multi-perspective camera calibration

Multi-perspective cameras are important in the context of autonomous cars, as they are often equipped with surround-view camera systems. MPL has been putting continued effort into multi-perspective camera calibration frameworks. This video introduces our latest calibration method, which relies on an external tracking system. The video shows the required hardware and other practical aspects of the calibration procedure. Links: PDF, Youtube and Bilibili.



Video on our ETAM approach

In this video, we present a solution for spatial motion estimation of an event camera observing non-planar scenes. It is a contrast maximization approach, but it maximizes ray intersections instead of the more common contrast maximization in the image plane. This enables motion estimation in arbitrarily structured environments. Links: PDF, Youtube, Bilibili.



IROS'2021: dynamic event camera calibration

Novel dynamic event camera calibration solution. The framework detects regular calibration board features under motion and subsequently registers them to the event stream using a continuous-time trajectory parametrization (B-splines). The framework is able to accurately perform intrinsic camera calibration with only several seconds of recorded data. Links: PDF, Youtube, Bilibili, and Code.
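As a rough sketch of the continuous-time representation mentioned above (illustrative only; names and layout are not the toolbox API), a uniform cubic B-spline turns a handful of control points into a smooth trajectory that can be evaluated at the timestamp of any individual event or detected feature:

    import numpy as np

    # Minimal sketch of a uniform cubic B-spline trajectory (hypothetical helper,
    # not MPL's toolbox code). Control points are assumed equally spaced in time.

    def cubic_bspline_point(ctrl, t, dt):
        """Evaluate a uniform cubic B-spline trajectory at time t.
        ctrl: (N, D) control points spaced dt apart in time (N >= 4)."""
        i = int(np.floor(t / dt))                 # index of the spline segment
        i = int(np.clip(i, 0, len(ctrl) - 4))
        u = t / dt - i                            # normalized time in [0, 1)
        b = np.array([(1 - u) ** 3,
                      3 * u ** 3 - 6 * u ** 2 + 4,
                      -3 * u ** 3 + 3 * u ** 2 + 3 * u + 1,
                      u ** 3]) / 6.0              # cubic B-spline basis weights
        return b @ ctrl[i:i + 4]

    # toy usage: a 1-D trajectory with control points placed 1 s apart
    ctrl = np.array([[0.0], [1.0], [2.0], [4.0], [7.0]])
    print(cubic_bspline_point(ctrl, t=1.5, dt=1.0))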



IROS'2021: global localization in underground parkings

This video introduces Monte-Carlo localization in underground parking lots using parking slot numbers as semantic landmarks. The latter are detected using OCR, and localization is performed by a particle filter. Links: Youtube, and Bilibili.
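For readers unfamiliar with the approach, the following simplified sketch (hypothetical names, heavily reduced measurement model) shows one predict/update/resample step of a particle filter in which the observation is the map position of a recognized parking slot number:

    import numpy as np

    # Minimal sketch of Monte-Carlo localization with a semantic landmark
    # (illustrative only; the actual system uses a richer measurement model).

    def mcl_step(particles, weights, odom, slot_xy, meas_range, sigma=0.5):
        """One predict/update/resample step.
        particles: (N, 3) array of [x, y, yaw]; odom: [dx, dy, dyaw] in the robot
        frame; slot_xy: map position of the detected slot number; meas_range:
        measured distance to that slot."""
        n = len(particles)
        # predict: apply odometry in each particle's frame, plus noise
        c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
        particles[:, 0] += c * odom[0] - s * odom[1] + np.random.normal(0, 0.05, n)
        particles[:, 1] += s * odom[0] + c * odom[1] + np.random.normal(0, 0.05, n)
        particles[:, 2] += odom[2] + np.random.normal(0, 0.01, n)
        # update: weight by agreement between predicted and measured slot distance
        pred_range = np.linalg.norm(particles[:, :2] - slot_xy, axis=1)
        weights *= np.exp(-0.5 * ((pred_range - meas_range) / sigma) ** 2)
        weights /= weights.sum()
        # resample (multinomial resampling keeps the sketch short)
        idx = np.random.choice(n, n, p=weights)
        return particles[idx], np.full(n, 1.0 / n)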



ICRA'2021: bundle adjustment for cars

This video introduces our B-spline based continuous-time parametrization for bundle adjustment on cars. The temporal, differentiable parametrization is used for a low-dimensional parametrization of the non-holonomic vehicle motion. Links: PDF, Youtube, and Bilibili.



ICRA'2021: agile Lidar-IMU localization

In this video, we present a novel solution for Lidar-IMU based localization in HD maps. The algorithm reliably tracks even highly agile motion captured by an aggressively moving, drifting platform in an indoor environment. Links: Youtube, and Bilibili.



ECCV'2020: globally optimal contrast maximization

The video of our ECCV'2020 paper on Globally Optimal Event Camera Motion Estimation is now online. The method provides a globally optimal solution to contrast maximization using branch and bound. Links: Youtube, Bilibili, and PDF.
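The objective being optimized can be sketched as follows (a generic illustration of contrast maximization, not the branch-and-bound search from the paper): events are warped to a reference time under a candidate motion, accumulated into an image, and scored by the image variance. The globally optimal method bounds this objective over motion-parameter intervals instead of sampling it.

    import numpy as np

    # Minimal sketch of the contrast-maximization objective (illustrative only;
    # the motion model here is a simple constant 2-D image velocity).

    def contrast(events, flow, resolution=(180, 240)):
        """events: (N, 3) array of [x, y, t]; flow: candidate 2-D image velocity
        (px/s) used to warp every event back to t = 0."""
        img = np.zeros(resolution)
        x = events[:, 0] - flow[0] * events[:, 2]          # warp x back to t = 0
        y = events[:, 1] - flow[1] * events[:, 2]          # warp y back to t = 0
        xi = np.clip(np.round(x).astype(int), 0, resolution[1] - 1)
        yi = np.clip(np.round(y).astype(int), 0, resolution[0] - 1)
        np.add.at(img, (yi, xi), 1.0)                      # accumulate warped events
        return img.var()                                   # sharper image = higher contrast

    # toy usage: events from a point moving at 20 px/s score highest near flow = (20, 0)
    t = np.linspace(0.0, 1.0, 200)
    ev = np.stack([100 + 20 * t, 90 + 0 * t, t], axis=1)
    print(contrast(ev, (20.0, 0.0)) > contrast(ev, (0.0, 0.0)))  # -> True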



ICRA'2020: reliable motion estimation with surround-view camera systems

The video introduces a new method for reliable relative vehicle displacement estimation from vehicle-mounted surround-view multi-camera systems. Links: Youtube, and Bilibili.



ICRA'2020: globally optimal correspondence-less planar motion estimation

This video presents a solution to visual odometry for planar ground vehicles. The platform is equipped with a downward-facing camera, and thus observes noisy texture with substantial image-to-image optical flow. We present a globally-optimal correspondence-less solution. Links: PDF, Youtube, and Bilibili.



CVPR'2020: globally optimal generalized essential matrix estimation

This paper presents an extension of our previous CVPR'18 paper. It extends the convex optimization based approach to generalized cameras (e.g. multi-camera systems). Links: PDF, Youtube, and Bilibili.



CVPR'19: Motion estimation of non-holonomic ground vehicles from a single feature correspondence measured over n views

Extension of the popular 1-point RANSAC method by Scaramuzza et al. to feature tracks measured over n densely sampled views. A single feature track can be used in order to measure the rotational displacement of the car. Links: PDF, Youtube, and Bilibili.



IROS'2019: relative pose with articulated multi-perspective cameras

This video presents relative pose estimation for articulated multi-perspective cameras. The method addresses a special case in which the cameras are distributed over the front and back parts of an articulated, non-holonomic vehicle (e.g. a truck with tractor and trailer). Links: Youtube, and Bilibili.