A Survey of SLAM with Multiple Sensor Fusion

PhD Qualifying Examination


Title: "A Survey of SLAM with Multiple Sensor Fusion"

by

Mr. Zhuofei HUANG


Abstract:

In recent years, visual odometry techniques, which aim to estimate the metric 
six degrees-of-freedom (6-DOF) pose with high accuracy and robustness, have 
achieved excellent results. However, these approaches struggle to maintain 
tracking over long periods, and the trajectory estimate accumulates drift. 
Most visual SLAM systems have difficulty localizing when too few feature 
points are detected, and cannot maintain scale consistency throughout the 
tracking process. To address these problems, additional sensors such as depth 
sensors or inertial sensors are commonly fused to provide more information and 
enhance the performance of visual SLAM. Depth sensors provide depth maps 
aligned with the RGB images, so the 3D positions of image pixels are directly 
available in such RGB-D SLAM systems. A monocular visual-inertial system 
(VINS), consisting of a camera and a low-cost inertial measurement unit (IMU), 
forms the minimal sensor suite for metric 6-DOF state estimation; it yields 
better pose estimates when visual tracking is lost and resolves the scale 
ambiguity of monocular SLAM.
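
As a rough illustration of why the IMU resolves the monocular scale ambiguity 
(a generic sketch of the standard visual-inertial alignment argument, not a 
formula taken from any specific system surveyed here), integrating the 
accelerometer between consecutive keyframes i and i+1 relates the up-to-scale 
visual positions \bar{p} to metric quantities:

    s\,\bar{p}_{i+1} = s\,\bar{p}_i + v_i\,\Delta t_i + \tfrac{1}{2}\,g\,\Delta t_i^2 + R_i\,\Delta p_{i,i+1},

where s is the metric scale, v_i the body velocity, g the gravity vector, R_i 
the keyframe rotation, and \Delta p_{i,i+1} the preintegrated IMU position 
increment. Stacking such equations over several keyframes yields a linear 
system in which s, g, and the velocities become observable.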

In this survey we present a tightly-coupled Visual-Inertial Simultaneous 
Localization and Mapping system that can be applied to any monocular camera 
configuration. We also propose an IMU initialization method that computes the 
scale, the gravity direction, the velocities, and the gyroscope bias with high 
accuracy within a few seconds, based on a set of keyframes processed by visual 
structure-from-motion (SfM). In the back-end non-linear optimization, we add 
motion constraints between consecutive IMU body frames on top of the visual 
bundle adjustment, which by itself only involves reprojection-error 
constraints between 3D points and camera frames. Additionally, in many 
traditional SLAM systems, robust feature extractors such as ORB work well for 
the visual tracking task but are less satisfactory for pose estimation. We 
therefore propose a state-of-the-art learning-based feature extractor that is 
better suited to the pose estimation task.
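
To make the tightly-coupled back-end concrete, a sketch of the kind of joint 
cost such systems typically minimize (an illustrative formulation under common 
assumptions, not the exact objective of the surveyed methods) combines 
reprojection residuals with IMU preintegration residuals over the keyframe 
states (R_i, p_i, v_i, b_i) and map points X_j:

    \min_{\mathcal{X}} \; \sum_{i,j} \rho\!\left( \left\| \pi(R_i, p_i, X_j) - x_{ij} \right\|^2_{\Sigma_{ij}} \right)
    \; + \; \sum_{i} \left\| r_{\mathrm{IMU}}\!\left(R_i, p_i, v_i, b_i,\; R_{i+1}, p_{i+1}, v_{i+1}\right) \right\|^2_{\Sigma_{i,i+1}},

where \pi(\cdot) projects map point X_j into keyframe i, x_{ij} is the 
observed feature location, r_IMU is the preintegrated IMU residual between 
consecutive body frames, \rho is a robust kernel, and the \Sigma terms are the 
measurement covariances. The second sum is the additional inter-frame motion 
constraint added on top of the purely visual bundle adjustment.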


Date:			Thursday, 17 September 2020

Time:                  	4:00pm - 6:00pm

Zoom meeting: https://zoom.us/j/3262443469?pwd=NjNvOHVPVzAxV3VmUWw3WUhiMVlkUT09

Committee Members:	Prof. Long Quan (Supervisor)
 			Prof. Albert Chung (Chairperson)
 			Prof. Pedro Sander
 			Prof. Chiew-Lan Tai


**** ALL are Welcome ****