Odometry with RGBDSLAM for mapping white walls

Hi guys!

I'm trying to map a room that has some white (textureless) wall areas using a TurtleBot + Kinect, and the problem is that the low number of keypoints on those walls makes the robot lose its reference every time it moves a bit.

I saw this question related to my problem, but couldn't see any difference with ICP activated when mapping the wall. I was wondering whether it's possible to "merge" two packages like RGBDSLAM and robot_pose_ekf to build the map with the odometry information.

If the robot's position information could guide the matching of point clouds in rgbdslam, I guess it could map the big white wall properly.
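
Something like the sketch below is what I have in mind: a small relay node that takes the fused pose from robot_pose_ekf and republishes it as a nav_msgs/Odometry message that a SLAM front end could subscribe to. The topic and frame names are only my assumptions (robot_pose_ekf's default output should be /robot_pose_ekf/odom_combined), and I haven't tested it:

#!/usr/bin/env python
# Rough sketch (untested): relay robot_pose_ekf's fused pose as nav_msgs/Odometry,
# so it could be fed to a SLAM node that expects an odometry topic.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped
from nav_msgs.msg import Odometry

pub = None

def ekf_callback(msg):
    odom = Odometry()
    odom.header = msg.header                # keep the EKF timestamp and frame
    odom.child_frame_id = "base_footprint"  # assumed robot base frame
    odom.pose = msg.pose                    # fused pose + covariance from the EKF
    pub.publish(odom)

if __name__ == "__main__":
    rospy.init_node("ekf_to_odometry")
    pub = rospy.Publisher("/odom_combined", Odometry, queue_size=10)
    rospy.Subscriber("/robot_pose_ekf/odom_combined",
                     PoseWithCovarianceStamped, ekf_callback)
    rospy.spin()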

Since I'm not an expert on ROS packages and subscribed topics, I'd like some advice from anyone who knows about or has tried something like this before.

Is there another method I could use to do this?

Thanks in advance!

EDIT:

I've tested some parameters and it's quite noticeable that this beta version maps the white walls better than the stable one, but I couldn't work out where the robot odometry topic is supposed to come from.

In my case the TurtleBot odometry topic is /odom (instead of the default /pose), but I guess RGBDSLAM uses the Kinect's estimate instead, because the position that RGBDSLAM shows in the GUI, with /odom set in the "odometry_tpc" parameter, is very imprecise, and that is reflected in the point cloud matching.
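
To at least check what /odom carries before handing it to rgbdslam, a quick sketch like this should do (assuming it really is a nav_msgs/Odometry topic):

#!/usr/bin/env python
# Quick check (sketch): print the poses arriving on /odom, to verify that the
# topic given to rgbdslam's "odometry_tpc" really carries the wheel odometry.
import rospy
from nav_msgs.msg import Odometry

def odom_callback(msg):
    p = msg.pose.pose.position
    rospy.loginfo("frame=%s  x=%.3f  y=%.3f", msg.header.frame_id, p.x, p.y)

rospy.init_node("odom_echo")
rospy.Subscriber("/odom", Odometry, odom_callback)
rospy.spin()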

When it starts capturing frames, it works for a while until a loop happens; even though the program keeps adding nodes, no more frames are added to the point cloud map, and it stops building about 20 cm ahead of the robot (even if the robot keeps moving). I looked at this question, which seems related to the loop:

Loop in TF

And this is my tf tree:

http://i.imgur.com/oZhUAvz.jpg

Checking this answer and this one, I tried raising subscriber_queue_size (with no success) and changing the image from RGB to monochrome (it's faster, but the loop still happens), and also looked at these suggested steps:

  • Have rgbdslam reconstruct the point cloud -> use an empty topic for the parameter "topic_points". The two images are sent more quickly than the ~10 MB cloud.
  • If this fails, try setting compressed image transport. I have never tried this though.
  • If that fails too, you can modify the constructor in openni_listener, so that it listens to point-cloud-only data. Then copy and modify the stereoCallback so that it reconstructs the RGB image from the cloud.

I have no idea how to do those, nor this other one (my rough guess at part of it is sketched below the quote):

  • Avoid the unsynchronized dropping of messages: Write and run a proxy node on the robot that combines color image, depth image and camera info in a custom message and sends that. Then on the other machine, write a proxy that splits the message and sends it again on three topics to rgbdslam. Then either all or nothing is dropped.
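
This is how I understand the first half of that suggestion (synchronizing the three streams on the robot side), as a rough sketch that uses message_filters instead of a custom combined message. The topic names are only my guesses from the openni driver, and the custom message plus the splitting proxy on the other machine are still missing:

#!/usr/bin/env python
# Rough sketch of the robot-side proxy: only republish colour image, depth image
# and camera info when their timestamps (approximately) match, so the receiver
# gets either the whole set or nothing.
# (Uses message_filters instead of a custom combined message; topic names are
# assumptions and will likely need to be adapted.)
import rospy
import message_filters
from sensor_msgs.msg import CameraInfo, Image

rospy.init_node("rgbd_sync_proxy")

pub_rgb = rospy.Publisher("/sync/rgb/image_color", Image, queue_size=5)
pub_depth = rospy.Publisher("/sync/depth/image", Image, queue_size=5)
pub_info = rospy.Publisher("/sync/rgb/camera_info", CameraInfo, queue_size=5)

def synced_callback(rgb, depth, info):
    # All three messages belong to (almost) the same timestamp here.
    pub_rgb.publish(rgb)
    pub_depth.publish(depth)
    pub_info.publish(info)

sub_rgb = message_filters.Subscriber("/camera/rgb/image_color", Image)
sub_depth = message_filters.Subscriber("/camera/depth_registered/image", Image)
sub_info = message_filters.Subscriber("/camera/rgb/camera_info", CameraInfo)

sync = message_filters.ApproximateTimeSynchronizer(
    [sub_rgb, sub_depth, sub_info], queue_size=10, slop=0.05)
sync.registerCallback(synced_callback)

rospy.spin()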

Is it possible to combine the TurtleBot IMU data with the point cloud matching? Thanks!

EDIT 2:

The loop problem was related to wrong parameters in robot_odometry.launch, specifically odom_frame_name and base_frame_name.

I'm now able to create a map using the odometry information, but there's still a problem: the floor contains many "holes".

(image: the resulting map, with holes in the floor)

I'd like to know how I can capture the whole floor without raising the max_depth parameter (raising it would bring other problems, because above 3.8 meters of max_depth the Kinect starts capturing the ceiling). Is that related to the ground_truth algorithm? How does that work in rgbdslam?

This is my robot_odometry.launch file:

<!-- This file contains the settings to use odometry messages from a robot.
     To get the cloud, you can subscribe to the online_cloud_out_topic (subsampled
     according to cloud_creation_skip_step) or to the original point cloud from the
     openni driver. -->
<launch>

<node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" cwd="node" required="true" output="screen">

<!-- Input data settings-->
<param name="config/topic_image_mono"              value="/camera/rgb/image_mono"/> <!--could also be color -->
<param name="config/topic_image_depth"             value="/camera/depth_registered/sw_registered/image_rect_raw"/>


<param name="config/feature_extractor_type"        value="ORB"/><!-- also available: SIFT, SIFTGPU, SURF, SURF128 (extended SURF), ORB. -->
<param name="config/feature_detector_type"         value="ORB"/><!-- also available: SIFT, SURF, GFTT (good features to track), ORB. -->
<param name="config/detector_grid_resolution"      value="10"/><!-- detect on a 3x3 grid (to spread ORB keypoints and parallelize SIFT and SURF) -->
<param name="config/max_keypoints"                 value="600"/><!-- Extract no more than this many keypoints -->
<param name="config/max_matches"                   value="300"/><!-- Keep the best n matches (important for ORB to set lower than max_keypoints) -->

<param name="config/min_sampled_candidates"        value="4"/><!-- Frame-to-frame comparisons to random frames (big loop closures) -->
<param name="config/predecessor_candidates"        value="4"/><!-- Frame-to-frame comparisons to sequential frames-->
<param name="config/neighbor_candidates"           value="4"/><!-- Frame-to-frame comparisons to graph neighbor frames-->
<param name="config/ransac_iterations"             value="100"/>
<param name="config/cloud_creation_skip_step"      value="5"/><!-- subsample the images' pixels (in both, width and height), when creating the cloud (and therefore reduce memory consumption) -->

<param name="config/cloud_display_type"            value="POINTS"/><!-- Show pointclouds as points (as opposed to TRIANGLE_STRIP) -->
<param name="config/pose_relative_to"              value="largest_loop"/><!-- optimize only a subset of the graph: "largest_loop" = Everything from the earliest matched frame to the current one. Use "first" to optimize the full graph, "inaffected" to optimize only the frames that were matched (not those inbetween for loops) -->
<param name="config/backend_solver"                value="pcg"/><!-- pcg is faster and good for continuous online optimization, cholmod and csparse are better for offline optimization (without good initial guess)-->
<param name="config/optimizer_skip_step"           value="1"/><!-- optimize only every n-th frame -->       

<!-- ODOMETRY RELATED -->
<param name="config/odometry_tpc"                  value="/odom"/><!-- should be of type nav_msgs/Odometry -->
<param name="config/use_robot_odom"                value="true"/><!-- activate -->
<param name="config/odom_frame_name"               value="odom_frame"/><!-- the fixed coordinate frame according to odometry -->
<param name="config/base_frame_name"               value="base_frame"/><!-- the robot's position, era base_link padrao -->
<param name="config/odometry_information_factor"   value="1"/><!-- weight for the odometry edges in the pose graph -->

<param name="config/keep_all_nodes"                value="true"/><!-- assume zero motion if no motion could be found and continue -->

<param name="config/fixed_camera"                  value="true"/> <!--is the kinect fixed with respect to base, or can it be moved (false makes sense only if transform betw. base_frame and openni_camera is sent via tf)-->

<param name="matcher_type"                          value="BRUTEFORCE"/>
<param name="use_odom_for_prediction"               value="true"/>
<param name="config/maximum_depth"                 value="3.8"/>

</node>
</launch>