
How to get pose from rtabmap rgbd_odometry with Kinect

asked 2018-03-14 19:17:22 -0500 by EduardoCota

updated 2018-03-14 22:14:41 -0500 by jayess


I am having trouble using rgbd_odometry from rtabmap_ros package.

I would like to subscribe to the published topic odom (nav_msgs/Odometry) so that I can compute the pose of my robot over time.

Using laser_scan_matcher and hokuyo, for example, I subscribe to a topic named pose_stamped (geometry_msgs/PoseStamped) with this same goal.

I don't care about mapping right now. I just want the pose over time, so that I can plot the path my robot traveled.
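Once /odom is being published, the planar pose can be read straight out of each nav_msgs/Odometry message: position gives (x, y), and heading comes from the orientation quaternion. A minimal sketch in plain Python (the function name is illustrative; the formula is the standard quaternion-to-yaw conversion):

```python
import math

def quat_to_yaw(x, y, z, w):
    """Heading (rotation about z) from a unit quaternion,
    as carried in nav_msgs/Odometry pose.pose.orientation."""
    return math.atan2(2.0 * (w * z + x * y),
                      1.0 - 2.0 * (y * y + z * z))

# Example: a quaternion for a 90-degree rotation about z
yaw = quat_to_yaw(0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4))
```

In a ROS node this would be applied to `msg.pose.pose.orientation` inside the subscriber callback.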

I am typing the following commands:

roslaunch openni_launch openni.launch


rosrun rtabmap_ros rgbd_odometry rgb/image:=/camera/rgb/image_raw dep/image:=/camera/depth/image rgb/camera_info:=/camera/rgb/camera_info

I can echo the three remapped topics above, but I keep receiving this warning about base_link:

[ WARN] [1521072121.902733559]: odometry: Could not get transform from base_link to camera_rgb_optical_frame (stamp=1521072121.798015) after 0.100000 seconds ("wait_for_transform_duration"=0.100000)! Error="canTransform: target_frame base_link does not exist.. canTransform returned after 0.100796 timeout was 0.1."

Topic /odom shows nothing.

Could anyone please give me a hint on how to start solving this?

I use Ubuntu 16.04 and ROS Kinetic.

Thank you in advance.



Are you sure that base_link exists? Can you post your tf tree?

jayess  ( 2018-03-14 22:15:47 -0500 )

Thank you for answering! @matlabbe's suggestion worked for me.

EduardoCota  ( 2018-03-15 20:04:02 -0500 )

1 Answer


answered 2018-03-15 10:33:10 -0500 by matlabbe


The warning means odometry cannot find a base_link frame in your TF tree. If you don't publish one, tell rgbd_odometry to use the camera's own frame instead:

$ rosrun rtabmap_ros rgbd_odometry rgb/image:=/camera/rgb/image_raw depth/image:=/camera/depth/image rgb/camera_info:=/camera/rgb/camera_info _frame_id:=camera_link

However, make sure the depth image topic is registered to the RGB camera, as in this tutorial:

$ roslaunch openni_launch openni.launch depth_registration:=true
$ rosrun rtabmap_ros rgbd_odometry rgb/image:=/camera/rgb/image_rect_color depth/image:=/camera/depth_registered/image_raw rgb/camera_info:=/camera/rgb/camera_info _frame_id:=camera_link
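For the original goal of plotting the path, a minimal rospy node that records positions from the resulting /odom topic could look like the sketch below. The node name and the matplotlib plotting step are illustrative, not from the thread; the import guard only exists so the pose-extraction logic is usable outside a ROS install:

```python
# Records (x, y) from nav_msgs/Odometry messages on /odom for later plotting.
try:
    import rospy
    from nav_msgs.msg import Odometry
    HAVE_ROS = True
except ImportError:
    HAVE_ROS = False  # allows the callback logic to be exercised without ROS

path = []  # accumulated (x, y) positions

def odom_callback(msg):
    # nav_msgs/Odometry carries the pose in msg.pose.pose
    p = msg.pose.pose.position
    path.append((p.x, p.y))

if __name__ == "__main__" and HAVE_ROS:
    rospy.init_node("path_recorder")  # hypothetical node name
    rospy.Subscriber("/odom", Odometry, odom_callback)
    rospy.spin()
    # After Ctrl-C, plot the recorded path, e.g.:
    #   import matplotlib.pyplot as plt
    #   xs, ys = zip(*path)
    #   plt.plot(xs, ys); plt.axis("equal"); plt.show()
```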


Thank you very much, @matlabbe! It worked, and now I can subscribe to /odom and plot the position over time. However, is there any way to improve the visual odometry from the Kinect? The results are poor so far.

EduardoCota  ( 2018-03-15 20:02:01 -0500 )

The source version can give slightly better results. Keep in mind that scanning surfaces without strong visual features will make the odometry drift a lot more. See more scanning tips here.

matlabbe  ( 2018-03-16 11:32:24 -0500 )

Do you mean the source version from the rtabmap_ros package? I already built from source. Thank you for the tips!

EduardoCota  ( 2018-03-16 11:43:20 -0500 )

Yes, this is what I meant. You can also try playing with the GFTT parameters: $ rosrun rtabmap_ros rgbd_odometry --params | grep GFTT/. For example, add the parameter _GFTT/QualityLevel:=0.001. Increasing GFTT/MinDistance can give more uniformly distributed features in the images.

matlabbe  ( 2018-03-16 12:08:10 -0500 )

@matlabbe, thank you again! I get the odometry, but it still stops during some tests. At first, I thought I was moving the robot too fast for the VO to match features, but it happens even when moving slowly. Since I'm using an Intel NUC, would you say it stops because of my system's capacity?

EduardoCota  ( 2019-01-28 19:27:50 -0500 )

If there are no visual features in the field of view of the camera, even if you move very slowly, VO will fail.

matlabbe  ( 2019-01-29 15:41:10 -0500 )

Ok! Thank you very much, @matlabbe! Have you published any paper that explains how frame-to-map VO works?

EduardoCota  ( 2019-02-14 07:09:16 -0500 )

Search for "RTAB-Map as an Open-Source Lidar and Visual SLAM Library for Large-Scale and Long-Term Online Operation"; a link to the preprint is on the project page.

matlabbe  ( 2019-02-14 15:16:34 -0500 )
