
How to use canonical_scan_matcher with Kinect

asked 2011-03-23 00:13:14 -0600

tom

updated 2016-10-24 09:00:36 -0600

ngrennan

I'm interested in using canonical_scan_matcher to estimate odometry with just the Kinect sensor. Has anybody out there already tried that?

I'm using pointcloud_to_laserscan with openni_kinect to provide fake laser scans with this launchfile (I haven't made any changes to openni_node.launch nor kinect_frames.launch):

  <!-- kinect and frame ids -->
  <include file="$(find openni_camera)/launch/openni_node.launch"/>

  <!-- openni manager -->
  <node pkg="nodelet" type="nodelet" name="openni_manager" output="screen" respawn="true" args="manager"/>

  <!-- throttling -->
  <node pkg="nodelet" type="nodelet" name="pointcloud_throttle" args="load pointcloud_to_laserscan/CloudThrottle openni_manager">
    <param name="max_rate" value="2"/>
    <remap from="cloud_in" to="/camera/depth/points"/>
    <remap from="cloud_out" to="cloud_throttled"/>
  </node>

  <!-- fake laser -->
  <node pkg="nodelet" type="nodelet" name="kinect_laser" args="load pointcloud_to_laserscan/CloudToScan openni_manager">
    <param name="output_frame_id" value="/openni_depth_frame"/>
    <remap from="cloud" to="cloud_throttled"/>
  </node>

This seems to work nicely, though I don't have a real laser scanner to compare the results against. Naturally, the big difference between a laser scanner and the fake laser scan I'm obtaining is the much narrower field of view (57 degrees instead of 240).

How should I configure canonical_scan_matcher to get the best out of it with the Kinect? I'm using the following launchfile:

  <node pkg="canonical_scan_matcher" type="csm_node" name="csm_node" output="screen" />
  <node pkg="tf" type="static_transform_publisher" name="base_link_to_laser" args="0.0 0.0 0.0 0 0 0.0 /base_link /openni_depth_frame 40" />
  <node pkg="rviz" type="rviz" name="rviz"/>

but I guess I'm still missing something. AFAIK, /openni_depth_frame is a right-handed Cartesian coordinate frame with the z axis pointing out of the IR camera's lens. /scan is being published in that same frame.

Do you, nice people, see any mistakes in the files provided? canonical_scan_matcher is publishing tf, but the results I see in rviz are definitely not satisfactory. In the room I'm testing in, the Kinect sees three walls, the two right angles between them, and some other minor features; shouldn't that be enough?


2 Answers


answered 2011-03-23 03:46:36 -0600

Your overall idea and approach is correct.

I think you are running into a tf problem here. You are publishing a static transform to /openni_depth_frame, but (if you are using the openni launch files) there is already a static transform being published from openni_camera to openni_depth_frame. Try it without the static-transform line in your launch file. You want your setup to be the following:

world_frame: world
base_frame: openni_camera
scan_frame: openni_depth_frame

Pass the world_frame and base_frame parameters to csm.
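A minimal sketch of what that could look like in a launchfile, assuming csm_node reads world_frame and base_frame as private parameters (the parameter names follow the suggestion above and are not verified against the package source):

  <launch>
    <!-- canonical scan matcher with the frame setup suggested above;
         parameter names are an assumption, check the csm_node docs -->
    <node pkg="canonical_scan_matcher" type="csm_node" name="csm_node" output="screen">
      <param name="world_frame" value="world"/>
      <param name="base_frame" value="openni_camera"/>
    </node>
  </launch>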

Once you get your tf's working, we can look into what matching parameters work best.


answered 2011-03-23 21:29:40 -0600

tom

updated 2011-03-24 02:00:18 -0600

Ok, I figured it out. As Ivan suggested, I needed to revise my tf tree. Here is a working csm launchfile:

  <node pkg="canonical_scan_matcher" type="csm_node" name="csm_node" output="screen" />
  <node pkg="tf" type="static_transform_publisher" name="base_link_to_openni" args="0.0 0.0 0.0 0 0 0.0 /base_link /openni_camera 100" />
  <node pkg="rviz" type="rviz" name="rviz"/>

The static transform from /base_link to /openni_camera needs to be published because none of the preceding launchfiles defines a /base_link frame, and csm requires it. Maybe it is somehow possible to point csm at /openni_camera instead of /base_link; that would be cleaner for this robot configuration, i.e. when there is no robot, just a Kinect sensor. But for more complex robot configurations, /base_link will be needed anyway, I guess.
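If csm does allow overriding its base frame via a parameter, the cleaner alternative speculated about above might look like this sketch (the base_frame parameter name is an assumption, not confirmed for this package); it would make the identity static transform unnecessary:

  <!-- hypothetical: point csm directly at the Kinect frame instead of /base_link -->
  <node pkg="canonical_scan_matcher" type="csm_node" name="csm_node" output="screen">
    <param name="base_frame" value="openni_camera"/>
  </node>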

I suggest selecting /world as the Fixed Frame in rviz. The odometry estimation works. Naturally, when carrying the Kinect around a room, the estimated odometry drifts over time: it's not possible to hold the Kinect perfectly parallel to the ground, the Kinect's field of view is narrow, and there are probably further reasons. I'll probably try this approach with an IMU when I get one.

