
RGBDSLAM Pointcloud with wrong transformation

asked 2019-08-07 01:35:01 -0500

usamamaq

updated 2019-08-09 01:12:51 -0500

Hello, I am using RGBDSLAM v2. I publish batch clouds to octomap_server, which builds the octomap I use for path planning and navigation. My maximum robot speed during mapping is 0.25 m/s. The problem: during mapping, one of the point clouds gets published with a wrong transform (as far as I understand), and it destroys the whole octomap. Previously I was using online_clouds, and I thought the issue was that global optimization does not apply to online_clouds, but it still happens with batch_clouds. Here are the launch file settings I am using:

<launch>
  <node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" cwd="node" required="true" output="log"> 
    <!-- Input data settings-->
    <param name="config/topic_image_mono"              value="/kinect/rgb/image_raw"/> 
    <param name="config/topic_image_depth"             value="/kinect/depth/image_raw"/>
    <param name="config/camera_info_topic"             value="/kinect/rgb/camera_info"/>
    <!--<param name="config/topic_image_depth"             value="/camera/depth_registered/sw_registered/image_rect_raw"/>-->
    <param name="config/topic_points"                  value=""/> <!--if empty, pointcloud will be reconstructed from image and depth -->

    <param name="config/feature_extractor_type"        value="ORB"/><!-- also available: SIFT, SIFTGPU, SURF, SURF128 (extended SURF), ORB. -->
    <param name="config/feature_detector_type"         value="ORB"/><!-- also available: SIFT, SURF, GFTT (good features to track), ORB. -->
    <param name="config/detector_grid_resolution"      value="3"/><!-- detect on a 3x3 grid (to spread ORB keypoints and parallelize SIFT and SURF) -->
    <param name="config/max_keypoints"                 value="600"/><!-- Extract no more than this many keypoints -->
    <param name="config/max_matches"                   value="300"/><!-- Keep the best n matches (important for ORB to set lower than max_keypoints) -->
    <param name="config/min_matches"               value="20"/><!--"Don't try RANSAC if less than this many matches (if using SiftGPU and GLSL you should use max. 60 matches)") -->
    <param name="config/min_sampled_candidates"        value="4"/><!-- Frame-to-frame comparisons to random frames (big loop closures) -->
    <param name="config/predecessor_candidates"        value="4"/><!-- Frame-to-frame comparisons to sequential frames-->
    <param name="config/neighbor_candidates"           value="4"/><!-- Frame-to-frame comparisons to graph neighbor frames-->
    <param name="config/ransac_iterations"             value="100"/>
    <param name="config/cloud_creation_skip_step"      value="2"/><!-- subsample the images' pixels (in both, width and height), when creating the cloud (and therefore reduce memory consumption) -->
    <param name="config/max_dist_for_inliers"          value="2.0"/>
    <param name="config/encoding_bgr"                  value="false"/>
    <param name="config/cloud_display_type"            value="TRIANGLE_STRIP"/><!-- Render clouds as TRIANGLE_STRIP (alternative: POINTS) -->
    <param name="config/pose_relative_to"              value="largest_loop"/><!-- optimize only a subset of the graph: "largest_loop" = Everything from the earliest matched frame to the current one. Use "first" to optimize the full graph, "inaffected" to optimize only the frames that were matched (not those inbetween for loops) -->
    <param name="config/backend_solver"                value="pcg"/><!-- pcg is faster and good for continuous online optimization, cholmod and csparse are better for offline optimization (without good initial guess)-->
    <param name="config/optimizer_skip_step"           value="1"/><!-- optimize only every n-th frame -->
    <param name="g2o_transformation_refinement"        value="0"/><!--Use g2o to refine the ransac ... -->
    <!-- ... (remainder of the launch file truncated in the original post) -->
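As a quick aside on the settings above, cloud_creation_skip_step subsamples the image in both width and height, so a value of 2 keeps only a quarter of the pixels per cloud. A minimal sketch of the arithmetic (the 640x480 resolution is an assumption for a standard Kinect):

```python
# cloud_creation_skip_step subsamples both image axes, so the number of
# points per cloud shrinks quadratically with the skip step.
width, height = 640, 480   # assumed Kinect RGB/depth resolution
skip = 2                   # matches cloud_creation_skip_step above
points = (width // skip) * (height // skip)
print(points)  # 76800, versus 307200 at full resolution
```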

1 Answer


answered 2019-08-07 07:03:39 -0500

It's not quite clear to me what the problem is. One cloud is badly aligned, which messes up your octomap? With batch clouds you will get what you see in rgbdslam's GUI. So if the problem (the badly aligned cloud) is visible there, it will also be in your output.

Maybe you can post screenshots of what the problem looks like?


Comments

I can't upload the image because I don't have >5 points on ROS Answers. But you are right: the point cloud is not aligned, which destroys the octomap, and I have to re-initiate the whole mapping process. In the GUI, unaligned point clouds do appear, but they are corrected afterwards (maybe due to global map optimization on subsequent frames). I have an image if you can tell me some other way to send it to you.

usamamaq ( 2019-08-07 08:03:57 -0500 )

I followed @jayess in upvoting your question, as I don't understand why this would be happening. You should get exactly what you see. Could you give some context information (how long are you processing, when and how do you trigger sending of the batch clouds, ...)?

Felix Endres ( 2019-08-08 09:07:44 -0500 )

Thanks a lot @jayess and @felix for upvoting. By "how long am I processing", what I understand is the rate at which I am publishing /batch_clouds, i.e. <param name="send_clouds_rate" value="5"/>. As for how I trigger the sending of batch clouds: at the start, I send this from Python code:

import rospy
from rgbdslam.srv import rgbdslam_ros_ui  # service type shipped with rgbdslam_v2

rospy.wait_for_service('/rgbdslam/ros_ui')
try:
    trigger = rospy.ServiceProxy('/rgbdslam/ros_ui', rgbdslam_ros_ui)
    trigger_msg = trigger('send_all')
except rospy.ServiceException as error:
    rospy.logerr("send_all service call failed: %s", error)

And I do get the batch clouds, which I can see in RViz. But the problem occurs randomly, and the attached picture shows the misaligned point cloud I am referring to (I edited the question).

I think it occurs when feature tracking does not have many features to track (when features are out of the Kinect sensor's depth range), and it then produces a wrong transform for the point cloud. But @felix, you are the boss.

usamamaq ( 2019-08-09 01:11:21 -0500 )
  • It may well be that the cause of the problem is what you say. Since it looks like you are running a simulation, it is also possible that parts of the environment have exactly the same features (from reused textures) and are thus wrongly matched.
  • Batch clouds are not meant to be sent periodically, in case you run that code in a loop. They are meant to be triggered just once, at the end.
Felix Endres ( 2019-08-12 07:07:34 -0500 )

Ok, noted your first point. I will try to bring more variation into the environment.

I am subscribing to batch_clouds, and they are used by octomap_server to update the octomap, which in turn is used for navigation. So when you say "it is meant to be triggered just once at the end", I don't understand that: I have to trigger it once with send_all at the start, and then it keeps sending batch clouds. Is that wrong? Should I stop them from publishing and then call send_all again?
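For context, the hookup I am using looks roughly like this (a sketch from memory; octomap_server's input topic is cloud_in, while the /rgbdslam/batch_clouds topic name and /map frame are assumptions based on the defaults):

```xml
<launch>
  <node pkg="octomap_server" type="octomap_server_node" name="octomap_server">
    <!-- feed rgbdslam's batch clouds into octomap_server -->
    <remap from="cloud_in" to="/rgbdslam/batch_clouds"/>
    <param name="frame_id" value="/map"/>
  </node>
</launch>
```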

usamamaq ( 2019-08-18 09:35:56 -0500 )

And with the way I am using it (periodically), will there be any difference between online_clouds and batch_clouds in terms of map optimization and the "position sanity check"?

As I have seen, there is a delay in the position estimate (/tf) as well when I subscribe to batch_clouds (like you have explained on GitHub by saying "The clouds sent are actually the same as before, but the according transformation - by default from /map to /openni_camera - is sent out on /tf."). Am I right in this understanding? The delay in /tf is a problem for me during navigation.

usamamaq ( 2019-08-18 09:40:23 -0500 )

"Batch clouds" is a convenience feature that is meant to be used after mapping. When you trigger it:

  • all clouds that have been computed and stored within rgbdslam_v2 are sent out at send_clouds_rate
  • simultaneously, the current estimates of the camera poses that belong to the clouds are sent out on /tf
  • both clouds and poses are timestamped with the original image timestamps
    • That means data from the past (during mapping) is sent out again on /tf.
    • This might interfere with other nodes.
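Since replayed batch clouds carry past timestamps, a downstream node can guard against mixing them with live data. A minimal sketch (plain Python; the 2-second tolerance is an arbitrary assumption, and in a real node the times would come from the message header stamp and rospy.Time.now()):

```python
def is_replayed(stamp_secs, now_secs, max_age_secs=2.0):
    """Return True if a message stamped at stamp_secs (seconds) is older
    than max_age_secs relative to now_secs, i.e. likely replayed data."""
    return (now_secs - stamp_secs) > max_age_secs

# A cloud stamped 0.5 s ago looks live:
print(is_replayed(stamp_secs=99.5, now_secs=100.0))  # False
# A cloud re-sent with its original mapping-time stamp looks replayed:
print(is_replayed(stamp_secs=40.0, now_secs=100.0))  # True
```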

The "position sanity check", as you call it, is just the optimizer making use of new information. When the current image "A" is received, it is matched against past images. Afterwards, newer images are matched against A and against others (before and after A). The resulting network of estimates leads to corrections of the pose of A. If you use the batch clouds feature periodically, the information is sent ...(more)

Felix Endres ( 2019-08-20 09:48:04 -0500 )

Thanks a lot for detailed reply. It has cleared a lot of things.

usamamaq ( 2019-08-20 12:28:35 -0500 )
