# Integrate 3D sensing in robot_localization package

Hi,

I came across this image of the robot_localization package, and I wonder whether it is possible to include 3D sensing data in the sensor fusion, as shown in the picture. I didn't find anything about incorporating 3D sensing data in the package's documentation.

Best,



The robot_localization package is essentially a Kalman filter that fuses sensor data into a single filtered state estimate. It currently accepts messages of type nav_msgs/Odometry, sensor_msgs/Imu, geometry_msgs/PoseWithCovarianceStamped, and geometry_msgs/TwistWithCovarianceStamped, from a plethora of sources. Where those sources come from is up to you. The package supports full 3D localization, as well as 2D localisation when two_d_mode is set to true. Please see the documentation for how to use this. Referring specifically to your picture:

• Odometry would be supplied by a tachometer or wheel encoders, combined with the robot's kinematic constraints
• IMU would come from an Inertial Measurement Unit, for which the manufacturer most likely provides a driver
• GPS provides a pose in a global reference frame, which is produced by navsat_transform_node, also part of the package
• Camera odometry would come from a visual odometry package such as rtabmap or viso
• 3D sensing appears to represent an Xbox Kinect in this image, from which you could run the visual odometry above, or even Iterative Closest Point (ICP) odometry by interpreting the depth data as a point cloud
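To make the ICP idea concrete, here is a minimal 2D point-to-point ICP sketch in plain NumPy. This is not a ROS node and omits everything a real one needs (outlier rejection, a proper nearest-neighbour index, covariance estimation); it just shows the core step: estimating the rigid transform between two successive scans, which is the relative motion an ICP odometry node would then publish as nav_msgs/Odometry.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(src, dst, iters=30):
    """Point-to-point ICP: align cloud src onto dst, return accumulated (R, t)."""
    R_tot, t_tot = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest-neighbour correspondences (fine for a sketch)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t   # compose with previous estimate
    return R_tot, t_tot

# Simulate two successive "scans": the robot rotated 3 deg and moved 5 cm in x.
rng = np.random.default_rng(0)
scan0 = rng.uniform(-1, 1, size=(100, 2))
theta = np.deg2rad(3.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.05, 0.0])
scan1 = scan0 @ R_true.T + t_true

R_est, t_est = icp(scan0, scan1)
print(np.round(R_est, 4), np.round(t_est, 4))
```

In a real pipeline the depth image would first be projected into a point cloud, and the per-frame transforms would be integrated over time to form the odometry estimate.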

Based on the sources you supply to the state estimation nodes, the package will provide a 3D (or 2D) estimate of your position. It is often used in conjunction with gmapping for Simultaneous Localisation And Mapping (SLAM).
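As an illustration, a trimmed ekf_localization_node parameter file fusing wheel odometry, an IMU, and ICP odometry from a depth camera might look like the sketch below. The topic names are placeholders for whatever your drivers actually publish, and the exact choice of fused variables depends on your sensors; each `~N_config` vector selects which of the 15 state variables that source contributes.

```yaml
frequency: 30
two_d_mode: false          # true restricts the estimate to the X-Y plane

map_frame: map
odom_frame: odom
base_link_frame: base_link
world_frame: odom

# Each config vector selects, in order:
# [x, y, z, roll, pitch, yaw,
#  vx, vy, vz, vroll, vpitch, vyaw,
#  ax, ay, az]

odom0: /wheel/odometry     # placeholder topic name
odom0_config: [false, false, false,
               false, false, false,
               true,  true,  false,
               false, false, true,
               false, false, false]

imu0: /imu/data            # placeholder topic name
imu0_config: [false, false, false,
              true,  true,  true,
              false, false, false,
              true,  true,  true,
              false, false, false]

odom1: /icp/odometry       # e.g. velocities from depth-camera ICP; placeholder
odom1_config: [false, false, false,
               false, false, false,
               true,  true,  true,
               false, false, false,
               false, false, false]
```
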

Thanks, Grant.
