
How to add Kinect sensor input to a URDF model?

asked 2011-02-20 09:18:55 -0500

updated 2016-10-24 08:33:18 -0500

ngrennan

I'm able to run the Kinect demo as shown here:

http://www.ros.org/wiki/kinect/Tutori...

and can also separately visualize my URDF model. The question now is how to combine the two, so that the model and the sensor data appear together. I have a link in the model for the camera location, so presumably the Kinect topic's frame needs to be attached to it somehow.


1 Answer


answered 2011-02-20 10:58:14 -0500

mmwise

updated 2011-02-22 07:51:25 -0500

In your URDF, create a fixed joint between the link that represents the Kinect in your model and the frame_id being published by the Kinect node.

For example:

<link name="kinect_link">
  <visual>
    <geometry>
      <box size="0.064 0.121 0.0381" />
    </geometry>
    <material name="Blue" />
  </visual>
  <inertial>
    <mass value="0.0001" />
    <origin xyz="0 0 0" />
    <inertia ixx="0.0001" ixy="0.0" ixz="0.0"
             iyy="0.0001" iyz="0.0"
             izz="0.0001" />
  </inertial>
</link>

<joint name="kinect_depth_joint" type="fixed">
  <origin xyz="0 0.028 0" rpy="0 0 0" />
  <parent link="kinect_link" />
  <child link="kinect_depth_frame" />
</joint>

<link name="kinect_depth_frame">
  <inertial>
    <mass value="0.0001" />
    <origin xyz="0 0 0" />
    <inertia ixx="0.0001" ixy="0.0" ixz="0.0"
             iyy="0.0001" iyz="0.0"
             izz="0.0001" />
  </inertial>
</link>

<joint name="depth_optical_joint" type="fixed">
  <origin xyz="0 0 0" rpy="${-M_PI/2} 0 ${-M_PI/2}" />
  <parent link="kinect_depth_frame" />
  <child link="kinect_depth_optical_frame" />
</joint>

<link name="kinect_depth_optical_frame">
  <inertial>
    <mass value="0.0001" />
    <origin xyz="0 0 0" />
    <inertia ixx="0.0001" ixy="0.0" ixz="0.0"
             iyy="0.0001" iyz="0.0"
             izz="0.0001" />
  </inertial>
</link>
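Two caveats about the snippet above. First, `${-M_PI/2}` is a xacro expression, so the file has to be processed with xacro and `M_PI` must be defined as a property. Second, the launch file further down sets `openni_rgb_optical_frame` to `kinect_rgb_optical_frame`, which this snippet never defines. A hedged sketch of those missing pieces follows; the RGB offset value here is an assumption (the comments below suggest roughly 3 cm to the right), so measure your own sensor:

```xml
<!-- xacro property used by the optical-frame joints; define once near the top
     of the file (requires xmlns:xacro on the root <robot> element) -->
<xacro:property name="M_PI" value="3.14159265358979" />

<!-- RGB camera frame; the 0.03 m offset is an assumption, not a calibrated value -->
<joint name="kinect_rgb_joint" type="fixed">
  <origin xyz="0 -0.03 0" rpy="0 0 0" />
  <parent link="kinect_link" />
  <child link="kinect_rgb_frame" />
</joint>

<link name="kinect_rgb_frame">
  <inertial>
    <mass value="0.0001" />
    <origin xyz="0 0 0" />
    <inertia ixx="0.0001" ixy="0.0" ixz="0.0"
             iyy="0.0001" iyz="0.0"
             izz="0.0001" />
  </inertial>
</link>

<!-- same camera-to-optical rotation convention as the depth frame above -->
<joint name="rgb_optical_joint" type="fixed">
  <origin xyz="0 0 0" rpy="${-M_PI/2} 0 ${-M_PI/2}" />
  <parent link="kinect_rgb_frame" />
  <child link="kinect_rgb_optical_frame" />
</joint>

<link name="kinect_rgb_optical_frame">
  <inertial>
    <mass value="0.0001" />
    <origin xyz="0 0 0" />
    <inertia ixx="0.0001" ixy="0.0" ixz="0.0"
             iyy="0.0001" iyz="0.0"
             izz="0.0001" />
  </inertial>
</link>
```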

Then, based on the above URDF, launch the Kinect node with the following parameters:

  <node pkg="openni_camera" type="openni_node" name="openni_camera" output="screen" respawn="true" >
    <param name="device_type" value="1" />
    <param name="registration_type" value="1" />
    <param name="point_cloud_resolution" value="1" />
    <param name="openni_depth_optical_frame" value="kinect_depth_optical_frame" />
    <param name="openni_rgb_optical_frame" value="kinect_rgb_optical_frame" />
    <param name="image_input_format" value="5" />
    <rosparam command="load" file="$(find openni_camera)/info/openni_params.yaml" />
  </node>
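For the model and the point cloud to actually line up in rviz, something also has to broadcast the URDF's fixed joints on TF. A minimal sketch using robot_state_publisher; the package path here is a placeholder for your own robot description, and the node type name may differ between ROS releases:

```xml
<!-- load the URDF onto the parameter server (path is a placeholder) -->
<param name="robot_description"
       command="$(find xacro)/xacro.py '$(find my_robot_description)/urdf/my_robot.urdf.xacro'" />

<!-- publish every fixed joint in the URDF as a TF transform -->
<node pkg="robot_state_publisher" type="state_publisher" name="robot_state_publisher" />
```

With this running, rviz can resolve the chain from your robot's base link through `kinect_link` down to `kinect_depth_optical_frame`, so the cloud published by the openni node renders in the right place relative to the model.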

Comments

I assume that on the Kinect itself the depth frame and the RGB frame should be in different places (depth in the center, RGB to the right?).
JediHamster (2011-02-20 20:23:22 -0500)
Yes, they are offset from each other; we have the rgb_frame about 3 cm to the right.
mmwise (2011-02-21 04:48:51 -0500)

By doing this, will I be able to move my robot model with the TF data coming from the Kinect?

onurtuna (2016-06-10 03:16:40 -0500)

I tried to do this and installed openni_camera and openni_launch via sudo apt-get, but ran into a problem: "file does not exist [/opt/ros/kinetic/share/openni_camera/info/openni_params.yaml]"

mancer (2018-05-11 04:05:53 -0500)

