Inaccuracy in depth pointcloud from RGB-D camera

asked 2018-10-25 04:19:27 -0500 by percy_liu

updated 2018-10-25 21:59:00 -0500

Hi,

I am using ROS Kinetic. I am trying to get a point cloud of a door from an RGB-D camera on a robot. The camera description is shown below.

<?xml version="1.0"?>

<robot xmlns:xacro="http://ros.org/wiki/xacro">  
    <!--
     The asus_camera_model macro only adds the model; it does not also add
     the openni gazebo plugin. See the 'asus_camera' macro below for that.
     -->
    <xacro:macro name="realsense_camera_model" params="name parent *origin">
      <joint name="${name}_joint" type="fixed">
        <xacro:insert_block name="origin" />
        <parent link="${parent}"/>
        <child link="${name}_link"/>
      </joint>

      <link name="${name}_link">
        <inertial>
          <mass value="0.200" />
          <origin xyz="0 0 0" rpy="0 0 0" />
          <inertia ixx="5.8083e-4" ixy="0" ixz="0" iyy="3.0833e-5" iyz="0" izz="5.9083e-4" />
        </inertial>
        <visual>
          <origin xyz="0 0.051 0" rpy="0 0 0" />
          <geometry>
            <box size="0.007 0.130 0.02" />
          </geometry>
          <material name="DarkGrey">
            <color rgba="0.3 0.3 0.3 1"/>
          </material>    
        </visual>
        <collision>
          <origin xyz="0 0.051 0" rpy="0 0 0" />
          <geometry>
            <box size="0.007 0.130 0.02" />
          </geometry>
        </collision>
      </link>

      <gazebo reference="${name}_link">
        <material>Gazebo/Blue</material>
      </gazebo> 
    </xacro:macro>

    <!--
     The asus_camera macro adds the model and also adds
     the openni gazebo plugin.
    -->
   <xacro:macro name="realsense_camera" params="name parent *origin">
     <xacro:realsense_camera_model name="${name}" parent="${parent}">
       <xacro:insert_block name="origin" />
     </xacro:realsense_camera_model>

    <gazebo reference="${name}_link">
      <sensor type="camera" name="${name}_rgb">
        <always_on>1</always_on>
        <update_rate>30</update_rate>
        <camera>
          <horizontal_fov>${70.0 * pi/180.0}</horizontal_fov>
          <image>
            <format>B8G8R8</format>
            <width>1920</width>
            <height>1080</height>
          </image>
          <clip>
            <near>0.001</near>
            <far>100</far>
          </clip>
          <distortion>
            <k1>0.0</k1>
            <k2>0.0</k2>
            <k3>0.0</k3>
            <p1>-0.0</p1>
            <p2>-0.00</p2>
            <center>0.5 0.5</center>
          </distortion>
        </camera>
        <plugin name="${name}_camera_depth_controller" filename="libgazebo_ros_camera.so">
          <robotNamespace>$(arg robot_name)/sensor/${name}/rgb</robotNamespace>
          <imageTopicName>image_raw</imageTopicName>
          <cameraInfoTopicName>camera_info</cameraInfoTopicName>
          <frameName>${name}_rgb_optical_frame</frameName>
        </plugin>
      </sensor>

      <sensor type="depth" name="${name}_depth">
        <update_rate>20</update_rate>
        <camera>
          <horizontal_fov>${59.0 * pi/180.0}</horizontal_fov>
          <image>
            <format>B8G8R8</format>
            <width>640</width>
            <height>480</height>
          </image>
          <clip>
            <near>0.5</near>
            <far>10.0</far>
          </clip>
        </camera>
        <plugin name="${name}_camera_rgb_controller" filename="libgazebo_ros_openni_kinect.so">
          <robotNamespace>$(arg robot_name)/sensor/${name}</robotNamespace>
          <imageTopicName>ir/image_raw</imageTopicName>
          <cameraInfoTopicName>ir/camera_info</cameraInfoTopicName>
          <depthImageTopicName>depth/image_raw</depthImageTopicName>
          <depthImageCameraInfoTopicName>depth/camera_info</depthImageCameraInfoTopicName>
          <pointCloudTopicName>depth/points</pointCloudTopicName>
          <frameName>${name}_depth_optical_frame</frameName>
        </plugin>
      </sensor>
    </gazebo>
  </xacro:macro>
</robot>

The depth camera has three topics:

/camera/depth/camera_info
/camera/depth/image_raw
/camera/depth/points
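
For reference, the TF frame these messages are stamped with can be read straight from the message headers; this is a quick sanity check that the cloud is being interpreted in the intended frame (topic name taken from the list above):

rostopic echo -n 1 /camera/depth/camera_info | grep frame_id

This should print whatever the plugin's frameName tag resolved to (here ${name}_depth_optical_frame).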

But the generated point cloud sits slightly to the left of the real door. Why is there a difference in location, and how can I eliminate it?

Thanks a lot


Comments

"But the generated pointcloud is a little bit left to the real door" Could you clarify that a bit? How did you measure this offset? Is the camera mounted on a robot?

NEngelhard ( 2018-10-25 05:50:42 -0500 )

My first guess would be that the transform that defines the location of the RGB-D sensor on the robot is in slightly the wrong place or at slightly the wrong angle; this will be very sensitive to angular errors. As @NEngelhard asked, the point cloud is in the wrong place relative to what? The image?

PeteBlackerThe3rd ( 2018-10-25 06:51:05 -0500 )

I also used a LIDAR to generate a point cloud of the door, and I believe the LIDAR is accurate. The differences are between the point clouds from these two sensors. I cannot upload a picture due to limited points, but the link should work.

percy_liu ( 2018-10-25 20:45:52 -0500 )

I wonder who downvotes such a question...

percy_liu, I see a comment about an "asus_camera" macro, but not the macro itself. Also, the link does not work for me.

Humpelstilzchen ( 2018-10-26 01:06:50 -0500 )

If you have different sensors but the data do not match, wouldn't that point to (at best) a need for extrinsic calibration of both sensors?

gvdhoorn ( 2018-10-26 04:44:02 -0500 )

1 Answer


answered 2018-10-26 09:48:56 -0500

As @gvdhoorn has said, this looks like the extrinsic calibration of your two sensors is incorrect; fixing this should solve your problem.

The extrinsic calibration defines the position and orientation of your sensors within your robot. If the relative pose defined in your robot description is not the same as on the real robot, then you will see misalignment like yours. A good start would be to double-check your measurements and the optical centers of the sensors to make sure they are correct.
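
As an illustration only: the camera's mounting pose in the xacro above is whatever origin block gets passed in when the macro is instantiated. A minimal sketch of such an instantiation, where base_link and the xyz/rpy values are placeholders rather than measured values:

<!-- hypothetical mounting pose: replace base_link and the xyz/rpy with measured values -->
<xacro:realsense_camera name="camera" parent="base_link">
  <origin xyz="0.10 0.0 0.50" rpy="0 0 0"/>
</xacro:realsense_camera>

With the robot description loaded, you can compare what TF reports against a physical measurement on the robot, e.g.

rosrun tf tf_echo base_link camera_link

(camera_link being the ${name}_link the macro creates for name="camera"), and adjust the xyz/rpy until the two agree. Small rpy errors are the usual culprit, since an angular offset grows with the distance to the target.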

Hope this helps.

