
Build a map with Husky, laser and Kinect/RealSense

asked 2016-07-16 04:17:24 -0600

Marcus Barnet

updated 2016-07-18 14:58:32 -0600

Hi to all,

I'm trying to build a 3D map with my mobile robot (Husky A200) using a SICK LMS-111 laser and a RealSense R200 (a camera sensor similar to the Kinect). Typically, I drive the robot along a specific path with a remote transmitter.

I'm able to build a 2D map in RViz from the /scan topic using hector_mapping, and this works very well since I don't need odometry data.
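For reference, hector_mapping can be launched on the /scan topic roughly like this. This is a sketch: the frame names (base_link) are assumptions matching my setup, and setting odom_frame to base_link tells hector_mapping not to expect external odometry:

$ rosrun hector_mapping hector_mapping _base_frame:=base_link _odom_frame:=base_link _scan_topic:=scan _pub_map_odom_transform:=true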

I'm also able to build a 3D map in rtabmap using only the /camera/color/image_raw, /camera/depth/image_raw and /camera/color/camera_info topics from the RealSense sensor; this method also works very well and the map is very accurate.

With the RealSense I can display the visual odometry, and with the SICK the laser odometry.

Now, I would like to know whether it is possible to fuse the RealSense and SICK laser data into a single map, so that I can check whether the odometry computed from each sensor agrees, and integrate the 2D laser data into the 3D map built from the camera sensor.

Do you think it is possible to achieve this? Do I have to use gmapping?

If needed, I can also record /imu/data from my XSENS IMU, the encoder and velocity data, and the Husky odometry.

I hope you can help me. If you want I can upload some short bag files.


EDIT: following matlabbe's suggestion, I'm trying to use rtabmap with subscribe_scan enabled, but I need help specifying the correct transformation between the laser frame and base_link, since I'm getting this error:

rtabmapviz: Could not get transform from base_link to laser after 0.100000 seconds!

To start rtabmap, I use the following commands:

$ roscore
$ rosparam set use_sim_time true
$ rosrun tf static_transform_publisher 0 0 0 -1.5707963 0 -1.5707963 base_link camera_color_optical_frame 100
$ rosrun tf static_transform_publisher 0 0 0 0 0 0 camera_color_optical_frame camera_depth_optical_frame 100
$ roslaunch rtabmap_ros rgbd_mapping.launch rgb_topic:=/camera/color/image_raw depth_registered_topic:=/camera/depth/image_raw camera_info_topic:=/camera/color/camera_info compressed:=true frame_id:=base_link rtabmap_args:="--delete_db_on_start" estimation:=1
$ rosbag play --clock test1.bag (the bag contains both RealSense and laser data)
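Before playback, rosbag info confirms which topics the bag actually contains:

$ rosbag info test1.bag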

At the end of rgbd_mapping.launch, I added this code:

<node pkg="tf" type="static_transform_publisher"  name="base_to_realsense"
      args="0 0 0 -1.5707963 0 -1.5707963 /base_link /realsense_frame 100" />

This works well if I only use the RealSense data, but if I enable the scan data I get the transformation error above, since I don't know how to define the laser transform correctly. Can you help me, please?

The SICK laser is located 50 centimeters lower than the RealSense and it is mounted on the same vertical axis.


Comments

Is it possible to build a 3D map with gmapping?

Marcus Barnet (2016-07-16 12:59:39 -0600)

What I want to achieve is to combine the 3D map with the 2D map.

Marcus Barnet (2016-07-16 13:18:22 -0600)

1 Answer


answered 2016-07-18 08:30:29 -0600

matlabbe

updated 2016-07-18 20:16:40 -0600

Hi,

I'll try to break down some solutions you can explore. I assume your laser is mounted horizontally and that you are operating on flat (2D) terrain.

1) hector_mapping provides ICP odometry and a 2D occupancy grid map, but it doesn't do loop closure detection/correction (someone correct me if I am wrong).

2) gmapping provides a 2D occupancy grid map with loop closure detection/correction, but it requires odometry as input. You could feed it the ICP odometry from hector_mapping.

3) rtabmap_ros provides a 3D point cloud map and a 2D occupancy grid map (see the subscribe_scan option) with loop closure detection/correction. It requires odometry as input, but a visual odometry node is provided for convenience; you could also use the ICP odometry from hector_mapping instead. See the SetupOnYourRobot tutorial for example configurations.

Well, solution 3 can provide what you want; a sketch of the launch command is shown below. For odometry, if you want to fuse visual odometry and/or ICP odometry and/or the IMU together (sensor fusion), you may want to look at the robot_localization package.
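For example, here is a sketch of how the launch command from the question could be extended with the scan enabled. Whether rgbd_mapping.launch exposes subscribe_scan and scan_topic as arguments depends on your rtabmap_ros version; subscribe_scan is in any case a parameter of the rtabmap node, and the scan topic can be remapped directly:

$ roslaunch rtabmap_ros rgbd_mapping.launch rgb_topic:=/camera/color/image_raw depth_registered_topic:=/camera/depth/image_raw camera_info_topic:=/camera/color/camera_info frame_id:=base_link subscribe_scan:=true scan_topic:=/scan rtabmap_args:="--delete_db_on_start"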

EDIT 1: TF problem

You need a static_transform_publisher between the /base_link and /laser frames. I'm not sure I understand the relation between /camera_color_optical_frame and /realsense_frame. What is the frame_id set in your /camera/color/image_raw topic? Assuming it is /camera_color_optical_frame, that the laser is 10 cm above /base_link, and that the RGB camera is on the same vertical axis 50 cm above the laser:

<node pkg="tf" type="static_transform_publisher"  name="base_to_laser"
      args="0 0 0.1 0 0 0 /base_link /laser 100" />
<node pkg="tf" type="static_transform_publisher"  name="base_to_color"
      args="0 0 0.6 -1.5707963 0 -1.5707963 /base_link /camera_color_optical_frame 100" />
<node pkg="tf" type="static_transform_publisher"  name="color_to_depth"
      args="0 0 0 0 0 0 /camera_color_optical_frame /camera_depth_optical_frame 100" />

You would have:

/base_link -----> /laser
           \
             ---> /camera_color_optical_frame -> /camera_depth_optical_frame
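Once the three publishers are running, you can verify the tree with the standard tf tools (frame names as above):

$ rosrun tf tf_echo base_link laser   # prints the transform if it resolves
$ rosrun tf view_frames               # writes frames.pdf showing the whole TF tree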

EDIT 2: Sensor fusion

Use robot_localization and configure it to output the TF /odom -> /base_link along with the fused odometry (/odometry/filtered); a minimal sketch follows. If you use rgbd_odometry, set its publish_tf param to false to avoid publishing the same odom TF twice. There is a basic example (which still needs some modifications) in the launch file sensor_fusion.launch for IMU + visual odometry. Please read the guidelines in robot_localization's tutorials to tune its parameters for your sensors.
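A minimal sketch of such an ekf_localization_node configuration, fusing visual odometry with the IMU. The topic names (/rtabmap/odom, /imu/data) and the choice of which fields to fuse are assumptions to adapt to your setup:

<node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization">
  <param name="frequency" value="30"/>
  <param name="odom_frame" value="odom"/>
  <param name="base_link_frame" value="base_link"/>
  <param name="world_frame" value="odom"/>

  <!-- visual odometry (set publish_tf to false on rgbd_odometry): fuse x, y, z -->
  <param name="odom0" value="/rtabmap/odom"/>
  <rosparam param="odom0_config">[true,  true,  true,
                                  false, false, false,
                                  false, false, false,
                                  false, false, false,
                                  false, false, false]</rosparam>

  <!-- IMU: fuse orientation and angular velocity -->
  <param name="imu0" value="/imu/data"/>
  <rosparam param="imu0_config">[false, false, false,
                                 true,  true,  true,
                                 false, false, false,
                                 true,  true,  true,
                                 false, false, false]</rosparam>
</node>

rtabmap would then use the fused /odometry/filtered as its odometry input (by remapping its odom topic) instead of the raw visual odometry.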

cheers


Comments

Thank you for your help. I think option 3 is suitable for me. I can get odometry data from my Husky since it publishes an odom topic. How can I feed the odometry data into rtabmap? Is there any tutorial?

Marcus Barnet (2016-07-18 08:54:26 -0600)

Do you think it is possible to include IMU data in rtabmap? Is there any documentation for this?

Marcus Barnet (2016-07-18 08:57:27 -0600)

I updated the answer above with a link to a tutorial showing different configurations. You may want to look at the Kinect+Laser+Odom one. To merge the IMU and Husky odometry into a fused odometry for rtabmap, see the robot_localization package.

matlabbe (2016-07-18 09:48:51 -0600)

I edited my question to include the error I get when I use rtabmap with the scan enabled. I tried to read up on how to specify the transformation between the laser and base_link, but I didn't come up with a working solution. Can you help me, please?

Marcus Barnet (2016-07-18 14:51:04 -0600)

Thank you a lot, matlabbe! It works very nicely now! You really are a ROS master! I've read on 'SetupOnYourRobot' that I can also use my IMU (/imu/data) to perform sensor fusion (RealSense + laser + IMU). Do I have to use robot_localization, or can I use rtabmap alone?

Marcus Barnet (2016-07-18 18:57:59 -0600)

How can I include my /imu/data so that it looks like an odometry message?

Marcus Barnet (2016-07-18 19:00:28 -0600)

updated answer above...

matlabbe (2016-07-18 20:16:49 -0600)

Thank you! I marked the question as solved. I will open another topic about robot_localization to avoid going off topic here. Thank you again for your help!

Marcus Barnet (2016-07-19 05:39:56 -0600)
