
rtab-map to scan the PLAIN wall in front of the robot

asked 2019-08-21 11:18:31 -0500

akosodry

updated 2019-08-23 00:45:03 -0500


I'm working on a robotic arm that can be equipped with a RealSense or a ZED RGB-D camera.

The camera is attached to the end-effector of the robot, so I am able to make a "zig-zag" (left-to-right) movement to scan a 2 m x 2 m area in front of the robot. (The executed movement is quite slow.)

The area in front of the robot is always a plain white wall with some obstacles (boxes).

My first question: what is the most effective way to get a good PointCloud2 of the environment? (I need both RGB and XYZ data.)

I tried using the popular rtab-map package to create the cloud output, but I could not get good results. I assume that due to the plain wall (i.e., no feature points for registering the frames), the algorithm produced unacceptable output. I tried decreasing the cloud_voxel_size and decimation parameters and increasing DetectionRate, but could not achieve better results.

So if rtab-map is the most suitable package for this problem, could you please help me set it up? Which parameters should I change?

I'm not interested in localization, nor in merging maps. I just want to scan the 2 m x 2 m area and get PointCloud2 data of that area.

UPDATE (after the answer): current outcome with

cloud_voxel_size=0.005, DetectionRate=2, MaxFeatures=-1, ProximityBySpace=false, AngularUpdate=0, LinearUpdate=0

ZED: unacceptable

scanned 2 m x 2 m wall with obstacles, ZED

RealSense: quite good (the smaller FOV resulted in empty parts/gaps that can easily be corrected by modifying the zig-zag scanning path)

scanned 2 m x 2 m wall with obstacles, RealSense

Thank you in advance for the help.


1 Answer


answered 2019-08-21 15:57:16 -0500

matlabbe

If TF of the arm is available and the camera is linked to that TF tree, you can use TF directly to scan the area. To do so with rtabmap:

<launch>
  <arg name="rgb_topic"         default="/camera/color/image_raw"/>
  <arg name="depth_topic"       default="/camera/aligned_depth_to_color/image_raw"/>
  <arg name="camera_info_topic" default="/camera/color/camera_info"/>
  <arg name="approx_sync"       default="false"/>

  <group ns="rtabmap">
    <node name="rtabmap" pkg="rtabmap_ros" type="rtabmap" output="screen" args="--delete_db_on_start">
      <param name="frame_id"      type="string" value="base_link"/> <!-- base frame of the arm -->
      <param name="odom_frame_id" type="string" value="base_link"/> <!-- same as frame_id, as the base of the arm doesn't move -->
      <param name="approx_sync"   type="bool"   value="$(arg approx_sync)"/>

      <remap from="rgb/image"       to="$(arg rgb_topic)"/>
      <remap from="depth/image"     to="$(arg depth_topic)"/>
      <remap from="rgb/camera_info" to="$(arg camera_info_topic)"/>

      <!-- RTAB-Map's parameters -->
      <param name="Rtabmap/DetectionRate"     type="string" value="2"/>     <!-- Hz -->
      <param name="Kp/MaxFeatures"            type="string" value="-1"/>    <!-- Disable loop closure detection -->
      <param name="RGBD/ProximityBySpace"     type="string" value="false"/> <!-- Disable proximity detection -->
      <param name="RGBD/AngularUpdate"        type="string" value="0.0"/>   <!-- Always add frames -->
      <param name="RGBD/LinearUpdate"         type="string" value="0.0"/>   <!-- Always add frames -->
      <param name="Grid/VoxelSize"            type="string" value="0.05"/>  <!-- m; voxel filter on the output cloud -->
    </node>
  </group>
</launch>
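For intuition about Grid/VoxelSize: it voxel-filters the assembled cloud, so a 0.05 m setting cannot preserve structure finer than about 5 cm. A minimal numpy sketch of what a voxel-grid filter does (an illustration only, not RTAB-Map's actual implementation):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one representative point (the centroid) per occupied voxel."""
    # Assign each point to a voxel by integer-dividing its coordinates.
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points that share a voxel key and average them.
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse)
    centroids = np.zeros((len(uniq), 3))
    for dim in range(3):
        centroids[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return centroids

# A flat 2 m x 2 m wall sampled every 5 mm: 160,000 points.
xs, ys = np.meshgrid(np.arange(0, 2, 0.005), np.arange(0, 2, 0.005))
wall = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
print(len(voxel_downsample(wall, 0.05)))   # 40 x 40 = 1600 voxels survive
print(len(voxel_downsample(wall, 0.005)))  # nearly all points kept
```

With obstacles only a few centimeters deep, a 0.05 m voxel averages most of that relief away, which is why a smaller voxel size shows noticeably more detail.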


@matlabbe Thank you very much! I tested it and it seems to be working, but due to VoxelSize=5 cm it's quite difficult to evaluate the quality. I tried setting Grid/VoxelSize=0.005, but the outcome is the same. Should I replace it with param name="cloud_voxel_size"? Q2: if I increase DetectionRate, does that mean the algorithm will compute a better point cloud representation? Q3: is the ZED or the RealSense the more suitable device for the task?

Thank you once more!

akosodry  ( 2019-08-22 03:13:18 -0500 )

@matlabbe Currently these are the results (see UPDATE in my post); cloud_voxel_size=0.005 is set. The results are quite bad; it's quite difficult to post-filter them. Can you recommend some other parameters to change? The obstacles are around 12 x 20 cm (w, h), and their depth varies from 4 cm to 12 cm.

akosodry  ( 2019-08-22 03:33:32 -0500 )

@matlabbe It seems to me that the RealSense (D435) provides better results...

akosodry  ( 2019-08-22 04:05:58 -0500 )

If there are misalignments, it would be an issue with TF or bad time synchronization between the camera stamps and TF. For textureless surfaces, the RealSense will work better than the ZED. Increasing the detection rate won't generate better clouds; there will just be more frames added to the map. Regarding accuracy: if you look only at the raw live point cloud of the camera, does it meet your requirements (to see whether it is a sensor limitation)?

matlabbe  ( 2019-08-22 21:10:06 -0500 )
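One concrete way to check the "sensor limitation" question above is to fit a plane to a single raw frame of the wall and look at the RMS point-to-plane deviation; if that already exceeds the obstacle depth you need to resolve, no mapping parameter will recover it. A numpy sketch, assuming the cloud has already been converted to an N x 3 array (e.g. via sensor_msgs.point_cloud2.read_points):

```python
import numpy as np

def plane_rms(points):
    """Fit a plane to an Nx3 cloud (least squares via SVD) and return the
    RMS point-to-plane distance, in the input's units (meters for ROS)."""
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector belonging to the
    # smallest singular value of the centered point set.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    dists = (points - centroid) @ normal
    return np.sqrt(np.mean(dists ** 2))

# Example: a flat wall 0.55 m away with 2 mm of simulated depth noise.
rng = np.random.RandomState(42)
xy = rng.rand(5000, 2) * 2.0
z = 0.55 + rng.randn(5000) * 0.002
cloud = np.column_stack([xy, z])
print("RMS deviation: %.4f m" % plane_rms(cloud))  # ~0.002
```

An RMS around a few millimeters would be well below the 4-12 cm obstacle depths mentioned above, pointing to TF/synchronization rather than the sensor as the source of misalignment.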

@matlabbe Thanks for the reply. The raw point cloud of the camera looks good/acceptable. I am scanning 55 cm in front of the wall, and it looks quite good (no big fluctuations). I updated the post again; indeed, the ZED produces quite a bad outcome, but the RealSense is more or less okay. Can you recommend some other key parameters to change to improve it even more?

Your answer really helped me a lot, thank you!

akosodry  ( 2019-08-23 00:39:09 -0500 )

The RealSense cloud looks good. If time is not an issue, you could move the arm, stop, take a picture, then move to the next waypoint. This limits motion blur in the RGB images if you want good color quality. The /rtabmap/pause and /rtabmap/resume services can be used to avoid acquiring images while the arm is moving (call pause, move to the next waypoint, call resume, wait 1 s, call pause, move to the next waypoint, call resume, ...). In contrast, if you want to move very fast at the cost of motion blur in the RGB images, you could increase Rtabmap/DetectionRate to acquire frames more often (there is less motion blur in the IR images of the RealSense camera, so the depth may still be good enough).

matlabbe  ( 2019-08-23 08:46:09 -0500 )
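The pause/move/resume cycle described above can be sketched as plain Python. The /rtabmap/pause and /rtabmap/resume names are the services mentioned in the comment; in a real node each ("call", name) action would be dispatched through a rospy.ServiceProxy (std_srvs/Empty), and move_to is a hypothetical placeholder for whatever motion interface the arm uses:

```python
def scan_sequence(waypoints):
    """Yield the ordered actions of one stop-and-go scan: pause mapping,
    move the arm, resume, then dwell so rtabmap can grab a still frame."""
    for wp in waypoints:
        yield ("call", "/rtabmap/pause")
        yield ("move_to", wp)          # placeholder for the arm's motion API
        yield ("call", "/rtabmap/resume")
        yield ("wait", 1.0)            # let rtabmap acquire a blur-free frame
    yield ("call", "/rtabmap/pause")   # stop acquiring once the scan is done

# Two waypoints of a zig-zag pass, as (x, y) targets on the wall.
actions = list(scan_sequence([(0.0, 0.0), (0.1, 0.0)]))
for action in actions:
    print(action)
```

Keeping the sequencing pure like this makes it easy to unit-test the scan logic before wiring it to the real services and the arm controller.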

@matlabbe VERY big thanks for the help!

akosodry  ( 2019-08-23 09:40:05 -0500 )
