Build a map with Husky, laser and Kinect/RealSense
Hi all,
I'm trying to build a 3D map with my mobile robot (a Husky A200) using a SICK LMS-111 laser and a RealSense R200 (an RGB-D camera similar to the Kinect). Typically, I drive the robot along a specific path using a remote transmitter.
I'm able to build a 2D map in RViz from the data coming from the /scan topic using hector_mapping, and this works very well since it doesn't require odometry data.
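For completeness, I start it with something like this (a rough sketch; these are the standard hector_mapping parameters, with _odom_frame pointed at base_link so that no odometry source is needed):

$ rosrun hector_mapping hector_mapping _scan_topic:=/scan _base_frame:=base_link _odom_frame:=base_link _map_frame:=map _pub_map_odom_transform:=true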
I'm also able to build a 3D map with rtabmap using only the /camera/color/image_raw, /camera/depth/image_raw and /camera/color/camera_info topics coming from the RealSense sensor; this method also works very well and the map is very accurate.
With the RealSense I can display the visual odometry, and with the SICK the laser odometry.
Now I would like to know whether it is possible to fuse both the RealSense and the SICK laser into a single map, in order to check whether the odometry computed from each sensor matches, and to integrate the 2D data from the laser into the 3D map built with the camera sensor.
Do you think it is possible to achieve this? Do I have to use gmapping?
If needed, I can also record the /imu/data topic coming from my XSENS IMU, the wheel encoder and velocity data, and the Husky odometry data.
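For example, I would record something like this (the Husky odometry topic name here is just my best guess for a standard A200 setup and may differ on my robot):

$ rosbag record /scan /tf /imu/data /camera/color/image_raw /camera/depth/image_raw /camera/color/camera_info /husky_velocity_controller/odom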
I hope you can help me. If you want, I can upload some short bag files.
EDIT: after matlabbe's suggestion, I'm trying to use RTAB-Map with subscribe_scan enabled, but I need help specifying the correct transformation between the laser frame and base_link, since I'm getting this error:
rtabmapviz: Could not get transform from base_link to laser after 0.100000 seconds!
To start RTAB-Map, I use the following commands:
$ roscore
$ rosparam set use_sim_time true
$ rosrun tf static_transform_publisher 0 0 0 -1.5707963 0 -1.5707963 base_link camera_color_optical_frame 100
$ rosrun tf static_transform_publisher 0 0 0 0 0 0 camera_color_optical_frame camera_depth_optical_frame 100
$ roslaunch rtabmap_ros rgbd_mapping.launch rgb_topic:=/camera/color/image_raw depth_registered_topic:=/camera/depth/image_raw camera_info_topic:=/camera/color/camera_info compressed:=true frame_id:=base_link rtabmap_args:="--delete_db_on_start" estimation:=1
$ rosbag play --clock test1.bag (the bag contains both the RealSense and the laser data)
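To actually enable the scan subscription, I append these arguments to the rgbd_mapping.launch call (assuming my version of rtabmap_ros exposes subscribe_scan and scan_topic as launch arguments; otherwise I would set the equivalent parameters inside the launch file):

$ roslaunch rtabmap_ros rgbd_mapping.launch rgb_topic:=/camera/color/image_raw depth_registered_topic:=/camera/depth/image_raw camera_info_topic:=/camera/color/camera_info compressed:=true frame_id:=base_link subscribe_scan:=true scan_topic:=/scan rtabmap_args:="--delete_db_on_start" estimation:=1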
At the end of rgbd_mapping.launch, I added this code:
<node pkg="tf" type="static_transform_publisher" name="base_to_realsense"
args="0 0 0 -1.5707963 0 -1.5707963 /base_link /realsense_frame 100" />
This works well if I only use the RealSense data, but as soon as I enable the scan data I get the transformation error above, since I don't know how to correctly define the base_link -> laser transform. Can you help me, please?
The SICK laser is mounted 50 centimeters below the RealSense, on the same vertical axis.
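Since my camera static transform above uses a zero translation (so the camera optical frame sits at the base_link origin), I suppose the laser ends up 0.5 m below base_link. My best guess for the missing transform is the following (assuming the scan frame_id is laser, as the error message suggests, and that the LMS-111 is mounted upright so no rotation is needed), but I'm not sure it's correct:

<node pkg="tf" type="static_transform_publisher" name="base_to_laser"
      args="0 0 -0.5 0 0 0 /base_link /laser 100" />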
Is it also possible to build a 3D map with gmapping?
What I want to achieve is to combine the 3D map built from the camera with the 2D map built from the laser.