Inertial and Imaging Sensor Fusion for Autonomous Vehicles
The emerging field of autonomous driving demands accurate positioning and trustworthy self-localization, which in turn require new technologies and hardware. The common approach is to fuse all available information sources: GPS, an IMU (tri-axial gyroscope and accelerometer), an odometer, and perception sensors (camera, lidar, radar). This tutorial will discuss the trade-offs between inertial (IMU) and perception-based sensors for Autonomous Vehicles (AVs) that require 10 cm positional accuracy 100% of the time without GPS. Perception sensors provide centimeter-level accuracy but require powerful graphics processors to perform feature recognition. IMUs, in contrast, allow navigation that is completely independent of external references (e.g., satellites, road markers, geomaps, databases) and immune to weather conditions. MEMS IMUs are attractive for their low size, weight, power, and cost (SWaP-C), but they are suitable only for short-term inertial dead reckoning because their positional error grows with time. Long-term navigation with MEMS IMUs is nevertheless possible when they are aided by the perception sensors already available on an AV to correct the IMU drift. This tutorial will present various approaches to solving the vehicle localization problem using a MEMS IMU aided by Visual Odometry (VO). The trade-offs between 1) wheel odometry, 2) camera odometry, 3) lidar odometry, and 4) radar odometry will be covered in detail. Visual Odometry in autonomous vehicles works by exploiting the geometric consistency of surrounding stationary objects to determine the vehicle's track in 3D. The implications of VO for correcting IMU drift, as well as for 3D map generation, will be discussed during the tutorial. Finally, the tutorial will address the hardware and, primarily, software challenges of developing an ambitious navigation system that fuses all available input sensors (IMU, camera, lidar, wheel speed) to obtain an accurate vehicle position on the map without GPS.
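The two central claims above, that unaided MEMS dead reckoning accumulates position error over time while perception-sensor aiding keeps it bounded, can be sketched numerically. The following is a minimal 1-D illustration, not a real navigation filter: the 2 mg accelerometer bias, the 1 Hz VO fix rate, and the perfect-reset correction are all assumed values chosen for clarity.

```python
import numpy as np

# Simulate a vehicle at constant velocity, so the true acceleration is zero.
# A small constant accelerometer bias (assumed: 2 mg) makes the dead-reckoned
# position error grow roughly as 0.5 * bias * t^2.
dt = 0.01                  # IMU sample period: 100 Hz
t_end = 60.0               # one minute of driving
bias = 2e-3 * 9.81         # 2 mg bias in m/s^2 (assumed value)
steps = int(t_end / dt)

true_acc = np.zeros(steps)       # constant-velocity motion
meas_acc = true_acc + bias       # biased IMU measurement

def dead_reckon(acc, dt, vo_period=None):
    """Double-integrate acceleration; optionally apply an idealized
    Visual Odometry position/velocity fix every vo_period seconds."""
    v = p = 0.0
    errors = []
    for k, a in enumerate(acc):
        v += a * dt
        p += v * dt
        if vo_period and (k + 1) % int(vo_period / dt) == 0:
            p = 0.0   # VO position fix (ground truth is 0 in this toy setup)
            v = 0.0   # velocity correction from VO frame-to-frame motion
        errors.append(abs(p))
    return np.array(errors)

unaided = dead_reckon(meas_acc, dt)
aided = dead_reckon(meas_acc, dt, vo_period=1.0)   # 1 Hz VO fixes (assumed)

print(f"unaided 60 s position error: {unaided[-1]:.1f} m")
print(f"VO-aided max position error: {aided.max():.3f} m")
```

With these numbers the unaided error reaches tens of meters within a minute, while 1 Hz aiding keeps it at the centimeter level, which is why the tutorial's 10 cm target is plausible for an aided MEMS IMU but not for an unaided one.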