To expand on rbtying's answer:

In ROS, transformations between different coordinate frames, such as the optical frame of the Kinect and the base of the robot arm, are handled by the tf library. You only have to specify the transform between those two frames once; tf can then compute the transformation between any pair of connected frames.

Long answer:

First, create a URDF file of your arm and upload it to the parameter server (param name "robot_description"). Then have a node publish the joint_states of the current arm configuration, and run the robot_state_publisher to combine the URDF and joint states into tf messages. Now you should be able to visualize your arm state in RViz.
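
As a minimal sketch of this step, a launch file along the following lines should work; the package name my_arm_description and the URDF path are assumptions, so adapt them to your setup:

<launch>
  <!-- Load the URDF onto the parameter server as "robot_description" -->
  <param name="robot_description" textfile="$(find my_arm_description)/urdf/my_arm.urdf" />

  <!-- Publish joint_states; replace this with your arm's driver node if it already publishes them -->
  <node pkg="joint_state_publisher" type="joint_state_publisher" name="joint_state_publisher" />

  <!-- Combine the URDF and joint_states into tf messages -->
  <node pkg="robot_state_publisher" type="robot_state_publisher" name="robot_state_publisher" />
</launch>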

Next, you have to specify the pose of your Kinect relative to the arm. Assuming that this is a fixed transform, you can use the static_transform_publisher. For example, let's take a look at kinect_camera/launch/kinect_with_tf.launch:

<launch>
  <!-- static_transform_publisher args: x y z yaw pitch roll frame_id child_frame_id period_in_ms -->
  <node pkg="tf" type="static_transform_publisher" name="world_to_kinect_rgb" 
    args="0 0 1.10 -1.57 0 -1.57 /world /kinect_rgb 10" />
  <node pkg="tf" type="static_transform_publisher" name="kinect_rgb_to_kinect_depth" 
    args="-0.03 0.0 0.0 0.0 0.0 0.0 /kinect_rgb /kinect_depth 10" />

  <include file="$(find kinect_camera)/launch/kinect_node.launch" />
</launch>

The first node specification gives the transform between the /world frame and the /kinect_rgb frame. Here, you have to replace /world with the base link of your robot arm, say /arm_base_link, if that is the name given in your URDF model. To find the correct parameters for that transform, you can use the tf_publisher_gui instead of the static_transform_publisher. Now you should be able to visualize both your robot model and the Kinect point cloud together in RViz. Fiddle around with the parameters until your robot model lines up with the actual arm visible in the point cloud.
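
For example, assuming your arm's base frame is /arm_base_link and the Kinect sits somewhere above and in front of it (the numeric values here are placeholders; yours will differ), the first node from the launch file above would become:

<node pkg="tf" type="static_transform_publisher" name="arm_base_to_kinect_rgb" 
  args="0.2 0.0 0.5 -1.57 0 -1.57 /arm_base_link /kinect_rgb 10" />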

Now, whenever you have a point in the optical frame of the Kinect, you can use tf to transform it into the base frame of the robot arm.
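
Here is a minimal Python sketch of that last step; the frame names and the sample point are assumptions, so use whatever your Kinect driver and URDF actually publish:

#!/usr/bin/env python
import rospy
import tf
from geometry_msgs.msg import PointStamped

rospy.init_node('kinect_point_transformer')
listener = tf.TransformListener()

# A point as seen in the Kinect's optical frame (assumed frame name)
p = PointStamped()
p.header.frame_id = '/kinect_rgb'
p.header.stamp = rospy.Time(0)  # use the latest available transform
p.point.x, p.point.y, p.point.z = 0.1, 0.0, 1.0

# Wait until tf knows the chain from /kinect_rgb to /arm_base_link
listener.waitForTransform('/arm_base_link', '/kinect_rgb', rospy.Time(0), rospy.Duration(4.0))

p_base = listener.transformPoint('/arm_base_link', p)
rospy.loginfo('Point in arm base frame: %s', p_base.point)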
