
Revision history

You should be able to use the ccny_rgbd tools with simulated Kinect data, as long as that data arrives in the same format as the OpenNI image topics. The visual_odometry and keyframe_mapper tools require three incoming topics:

  • /rgbd/rgb: a rectified, debayered RGB image, 8UC3 encoding
  • /rgbd/depth: a rectified depth image, registered with the RGB image, 16UC1 encoding (in millimeters)
  • /rgbd/info: a CameraInfo topic (applies to both images, since they are registered)

If the topics are coming from the OpenNI driver, you need to add the following remappings to the visual_odometry_node and keyframe_mapper_node launch files:

<remap from="/rgbd/rgb"   to="/camera/rgb/image_rect_color"/>
<remap from="/rgbd/depth" to="/camera/depth_registered/image_rect_raw"/>
<remap from="/rgbd/info"  to="/camera/rgb/camera_info"/>
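
For example, a minimal launch snippet applying these remaps might look like the following. The node type and name here are illustrative, based on the package's example launch files; adjust them to match your setup.

```xml
<launch>
  <!-- Visual odometry from ccny_rgbd, remapped to the OpenNI driver topics -->
  <node pkg="ccny_rgbd" type="visual_odometry_node" name="visual_odometry_node" output="screen">
    <remap from="/rgbd/rgb"   to="/camera/rgb/image_rect_color"/>
    <remap from="/rgbd/depth" to="/camera/depth_registered/image_rect_raw"/>
    <remap from="/rgbd/info"  to="/camera/rgb/camera_info"/>
  </node>
</launch>
```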

Finally, you should set the fixed_frame and base_frame parameters. The defaults are odom and camera_link. In your case, since this is a real robot, the base frame may be different, for example base_link. You need to ensure that a tf transform exists between the base_frame and the optical frame of the camera. ccny_rgbd assumes this transform is static, and caches it.
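
If nothing publishes that transform yet, a static_transform_publisher can provide it. The frame names and the all-zero offsets below are placeholders; measure the actual offsets on your robot.

```xml
<!-- Static transform from the robot base to the camera's optical frame.
     args are: x y z yaw pitch roll parent_frame child_frame period_in_ms -->
<node pkg="tf" type="static_transform_publisher" name="base_to_camera_tf"
      args="0 0 0 0 0 0 base_link camera_rgb_optical_frame 100"/>
```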

EDIT: The visual odometry will publish a tf, but this might not be what you need for your application, especially if you are using robot_pose_ekf to fuse odometry sources.

If this is the case, disable tf publishing (using the publish_tf parameter), and have robot_pose_ekf subscribe to the odom topic of type nav_msgs/Odometry published by visual_odometry_node.
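
A sketch of that wiring is below. The parameter and topic names follow the robot_pose_ekf wiki conventions (it listens for visual odometry on the "vo" topic); verify them against your installed versions, and check where your visual_odometry_node actually publishes its odom topic.

```xml
<!-- Visual odometry with tf publishing disabled -->
<node pkg="ccny_rgbd" type="visual_odometry_node" name="visual_odometry_node">
  <param name="publish_tf" value="false"/>
</node>

<!-- robot_pose_ekf fusing the visual odometry -->
<node pkg="robot_pose_ekf" type="robot_pose_ekf" name="robot_pose_ekf">
  <param name="vo_used" value="true"/>
  <!-- Point robot_pose_ekf's "vo" input at the visual odometry output -->
  <remap from="vo" to="odom"/>
</node>
```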

EDIT 2: I still can't determine the exact format of the simulated Kinect data, but it appears the only available depth topic uses 32FC1 encoding. A ticket has been opened to add support for this in ccny_rgbd.
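
Until that support lands, one workaround is to republish the depth topic yourself, converting from 32FC1 (meters, float32) to 16UC1 (millimeters, uint16). The conversion itself is just scaling and casting; the function below is a sketch (the name is mine), and in a full node you would wrap it with cv_bridge subscriber/publisher callbacks.

```python
import numpy as np

def depth_32fc1_to_16uc1(depth_m):
    """Convert a 32FC1 depth image (meters, NaN = no reading)
    to a 16UC1 depth image (millimeters, 0 = no reading)."""
    # Replace NaNs (and infinities) with 0, then scale meters -> millimeters
    depth_mm = np.nan_to_num(depth_m, nan=0.0) * 1000.0
    # Round, then clip to the uint16 range before casting to avoid wrap-around
    return np.clip(np.rint(depth_mm), 0, 65535).astype(np.uint16)
```

When republishing, remember to set the Image message's encoding field to "16UC1" so subscribers interpret the data correctly.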

EDIT 3: Support for 32FC1 images has been added (see the ticket for details). The launch files now also include clearer instructions that summarize this answer.