LIO-Livox (A Robust LiDAR-Inertial Odometry for Livox LiDAR)

This repository implements a robust LiDAR-inertial odometry system for Livox LiDAR. The system uses only a single Livox LiDAR with a built-in IMU. It has a robust initialization module that is independent of the sensor motion: it can be initialized from a static state, a dynamic state, or a mixture of the two. The system achieves highly robust performance. It can pass through a 4 km tunnel and run on the highway at high speed (about 80 km/h) using a single Livox Horizon. Moreover, it is also robust to dynamic objects such as cars, bicycles, and pedestrians, and obtains high localization precision even in traffic jams. The mapping result is precise even when most of the FOV is occluded by vehicles. Demonstration videos of the system can be found on YouTube and Bilibili. NOTE: Images are only used for demonstration and are not used in the system.

Developer: Livox

System architecture

The system consists of two ROS nodes: ScanRegistration and PoseEstimation.

  • The class "LidarFeatureExtractor" of the node "ScanRegistration" extracts corner features, surface features, and irregular features from the raw point cloud.
  • In the node "PoseEstimation", the main thread aims to estimate sensor poses, while another thread in the class "Estimator" uses the class "MapManager" to build and manage feature maps.

The system is mainly designed for car platforms in large-scale outdoor environments. Users can easily run the system with a Livox Horizon LiDAR.

The system starts with the node "ScanRegistration", where feature points are extracted. Before the feature extraction, dynamic objects are removed from the raw point cloud, since urban scenes usually contain many dynamic objects that affect system robustness and precision. For the dynamic object filter, we use a fast point cloud segmentation method. Euclidean clustering is applied to group points into clusters, and the raw point cloud is divided into ground points, background points, and foreground points. Foreground points are considered dynamic objects and are excluded from the feature extraction process. Thanks to the dynamic object filter, the system remains highly robust in dynamic scenes.
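
As a rough illustration only, the sketch below shows how such a foreground filter could be built with PCL's Euclidean clustering on the non-ground points. The function name, thresholds, and the footprint-based foreground test are placeholders and are not taken from the actual ScanRegistration implementation.

// Hedged sketch of a dynamic-object filter: after ground removal, cluster the
// remaining points and treat small, compact clusters as foreground (potentially
// dynamic) objects. All thresholds are illustrative placeholders.
#include <pcl/common/common.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>
#include <vector>

pcl::PointCloud<pcl::PointXYZI>::Ptr
removeForeground(const pcl::PointCloud<pcl::PointXYZI>::Ptr &non_ground) {
  pcl::search::KdTree<pcl::PointXYZI>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZI>);
  tree->setInputCloud(non_ground);

  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<pcl::PointXYZI> ec;
  ec.setClusterTolerance(0.5);   // meters; placeholder value
  ec.setMinClusterSize(30);
  ec.setMaxClusterSize(100000);
  ec.setSearchMethod(tree);
  ec.setInputCloud(non_ground);
  ec.extract(clusters);

  pcl::PointCloud<pcl::PointXYZI>::Ptr background(new pcl::PointCloud<pcl::PointXYZI>);
  for (const auto &c : clusters) {
    // Crude foreground test: clusters with a small horizontal footprint are
    // treated as potential dynamic objects (cars, pedestrians) and dropped.
    Eigen::Vector4f min_pt, max_pt;
    pcl::getMinMax3D(*non_ground, c, min_pt, max_pt);
    const float dx = max_pt.x() - min_pt.x();
    const float dy = max_pt.y() - min_pt.y();
    if (dx < 8.0f && dy < 8.0f) continue;  // likely foreground: skip
    for (const auto idx : c.indices) background->push_back((*non_ground)[idx]);
  }
  return background;
}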

In open scenarios, usually few features can be extracted, leading to degeneracy in certain degrees of freedom. To tackle this problem, we developed a feature extraction process that makes the distribution of feature points wide and uniform. A uniform and wide distribution provides more constraints on all 6 degrees of freedom, which helps eliminate degeneracy. Besides, some irregular points also provide information in feature-less scenes, so we also extract irregular features as a class for the point cloud registration. Feature points are classified into three types, corner features, surface features, and irregular features, according to their local geometric properties. We first extract points with large curvature and isolated points on each scan line as corner points. Then principal component analysis (PCA) is performed to classify surface features and irregular features, as shown in the following figure. For points at different distances, the thresholds are set to different values in order to make the distribution of points in space as uniform as possible.
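
The following is a minimal sketch of a PCA-based planarity test of this kind, applied to a point's local neighborhood with Eigen. The eigenvalue-ratio threshold is a placeholder, not the value used in LidarFeatureExtractor.

// Hedged sketch: if the smallest eigenvalue of the neighborhood covariance is
// much smaller than the others, the neighborhood is locally planar (surface
// feature); otherwise the point is kept as an irregular feature.
#include <Eigen/Dense>
#include <vector>

enum class FeatureType { Surface, Irregular };

FeatureType classifyNeighborhood(const std::vector<Eigen::Vector3d> &neighbors) {
  Eigen::Vector3d mean = Eigen::Vector3d::Zero();
  for (const auto &p : neighbors) mean += p;
  mean /= static_cast<double>(neighbors.size());

  Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
  for (const auto &p : neighbors) {
    const Eigen::Vector3d d = p - mean;
    cov += d * d.transpose();
  }
  cov /= static_cast<double>(neighbors.size());

  // Eigen returns the eigenvalues sorted in increasing order.
  Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(cov);
  const Eigen::Vector3d ev = es.eigenvalues();

  // Planarity test: smallest eigenvalue much smaller than the middle one.
  return (ev(0) < 0.1 * ev(1)) ? FeatureType::Surface : FeatureType::Irregular;
}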

In the node "PoseEstimation", the motion distortion of the point cloud is compensated using IMU preintegration or constant velocity model. Then the IMU initialization module is performed. If the Initialization is successfully finished, the system will switch to the LIO mode. Otherwise, it run with LO mode and initialize IMU states. In the LO mode, we use a frame-to-model point cloud registration to estimate the sensor pose. Inspired by ORB-SLAM3, a maximum a posteriori (MAP) estimation method is adopted to jointly initialize IMU biases, velocities, and the gravity direction. This method doesn't need a careful initialization process. The system can be initialized with an arbitrary motion. This method takes into account sensor uncertainty, which obtains the optimum in the sense of maximum posterior probability. It achieves efficient, robust, and accurate performance. After the initialization, a tightly coupled slding window based sensor fusion module is performed to estimate IMU poses, biases, and velocities within the sliding window. Simultaneously, an extra thread builds and maintains the global map in parallel.

Prerequisites

The build and run steps below assume a ROS (catkin) workspace with the livox_ros_driver package installed, since that driver is used to run the LiDAR. Check this repository's package manifest and CMakeLists.txt for the exact library dependencies.

Compilation

cd ~/catkin_ws/src
git clone https://github.com/Livox-SDK/LIO-Livox
cd ..
catkin_make

Run with bag files:

Run the launch file:

cd ~/catkin_ws
source devel/setup.bash
roslaunch lio_livox horizon.launch

Play your bag files:

rosbag play YOUR_ROSBAG.bag

Run with your device:

Run your LiDAR with livox_ros_driver

cd ~/catkin_ws
source devel/setup.bash
roslaunch livox_ros_driver livox_lidar_msg.launch

Run the launch file:

cd ~/catkin_ws
source devel/setup.bash
roslaunch lio_livox horizon.launch

Notes:

The current version of the system is only adapted for the Livox Horizon. In theory, it should be able to run directly with a Livox Avia, but we haven't done enough tests. Besides, the system doesn't provide an interface for the Livox Mid series; if you want to use a Mid-40 or Mid-70, you can try livox_mapping.

The topic of point cloud messages is /livox/lidar and its type is livox_ros_driver/CustomMsg.
The topic of IMU messages is /livox/imu and its type is sensor_msgs/Imu.
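
For reference only, a minimal standalone subscriber (not part of this package; the node name is made up here) can be used to confirm that the two topics are arriving with the expected message types:

// Hedged sketch: subscribe to the LiDAR and IMU topics consumed by the system
// and print basic information about each incoming message.
#include <ros/ros.h>
#include <sensor_msgs/Imu.h>
#include <livox_ros_driver/CustomMsg.h>

void lidarCallback(const livox_ros_driver::CustomMsg::ConstPtr &msg) {
  ROS_INFO("LiDAR frame with %u points", msg->point_num);
}

void imuCallback(const sensor_msgs::Imu::ConstPtr &msg) {
  ROS_INFO("IMU message at %f", msg->header.stamp.toSec());
}

int main(int argc, char **argv) {
  ros::init(argc, argv, "topic_check");  // hypothetical helper node name
  ros::NodeHandle nh;
  ros::Subscriber lidar_sub = nh.subscribe("/livox/lidar", 10, lidarCallback);
  ros::Subscriber imu_sub = nh.subscribe("/livox/imu", 100, imuCallback);
  ros::spin();
  return 0;
}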

There are some parameters in launch files:

  • IMU_Mode: choose the IMU information fusion strategy; there are 3 modes:
    • 0 - do not use IMU information: pure LiDAR odometry, with motion distortion removed using a constant velocity model
    • 1 - use IMU preintegration to remove motion distortion
    • 2 - tightly couple IMU and LiDAR information
  • Extrinsic_Tlb: the extrinsic parameter between the LiDAR and the IMU, in SE3 form. If you want to use an external IMU, you need to calibrate your own sensor suite and change this parameter to your own extrinsics (see the sketch after this list).
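
Extrinsic_Tlb is described above only as an SE3 transform; how it is serialized is not specified here, so check horizon.launch for the exact layout. The snippet below is just a sketch of composing such a 4x4 homogeneous matrix from a calibrated rotation and translation with Eigen; the numbers are placeholders, not a real calibration.

// Hedged sketch: build T_lb (LiDAR-IMU extrinsic) as a 4x4 homogeneous matrix.
#include <Eigen/Dense>
#include <Eigen/Geometry>
#include <iostream>

int main() {
  // Placeholder calibration values, not a real LiDAR-IMU calibration.
  Eigen::Quaterniond q_lb(1.0, 0.0, 0.0, 0.0);  // rotation part of T_lb
  Eigen::Vector3d t_lb(0.05, 0.02, -0.03);      // translation part, in meters

  // Compose the 4x4 homogeneous (SE3) matrix T_lb.
  Eigen::Matrix4d T_lb = Eigen::Matrix4d::Identity();
  T_lb.block<3, 3>(0, 0) = q_lb.normalized().toRotationMatrix();
  T_lb.block<3, 1>(0, 3) = t_lb;

  // Print the matrix; copy its 16 entries into the launch file in whatever
  // order horizon.launch expects.
  std::cout << T_lb << std::endl;
  return 0;
}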

There are also some parameters in the config file:

  • Use_seg: choose the segmentation mode for dynamic object filtering; there are 2 modes:
    • 0 - do not use the segmentation method; you can choose this mode if there are few dynamic objects in your data
    • 1 - use the segmentation method to remove dynamic objects

Acknowledgements

Thanks to the following works:

  • LOAM (LOAM: Lidar Odometry and Mapping in Real-time)
  • VINS-Mono (VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator)
  • LIO-mapping (Tightly Coupled 3D Lidar Inertial Odometry and Mapping)
  • ORB-SLAM3 (ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM)
  • LiLi-OM (Towards High-Performance Solid-State-LiDAR-Inertial Odometry and Mapping)
  • MSCKF_VIO (Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight)
  • horizon_highway_slam
  • livox_mapping
  • livox_horizon_loam

Support

You can get support from Livox in the following ways:

  • Send an email to cs@livoxtech.com with a clear description of your problem and your setup
  • Report issues on GitHub