Inaccuracy in depth pointcloud from RGB-D camera
Hi,
I am using ROS Kinetic. I am trying to get a point cloud of a door from an RGB-D camera on a robot. The robot description is shown below.
<?xml version="1.0"?>
<robot xmlns:xacro="http://ros.org/wiki/xacro">
<!--
The asus_camera_model macro only adds the model; it does not add
the openni gazebo plugin. See the 'asus_camera' macro below for that
-->
<xacro:macro name="realsense_camera_model" params="name parent *origin">
<joint name="${name}_joint" type="fixed">
<xacro:insert_block name="origin" />
<parent link="${parent}"/>
<child link="${name}_link"/>
</joint>
<link name="${name}_link">
<inertial>
<mass value="0.200" />
<origin xyz="0 0 0" rpy="0 0 0" />
<inertia ixx="5.8083e-4" ixy="0" ixz="0" iyy="3.0833e-5" iyz="0" izz="5.9083e-4" />
</inertial>
<visual>
<origin xyz="0 0.051 0" rpy="0 0 0" />
<geometry>
<box size="0.007 0.130 0.02" />
</geometry>
<material name="DarkGrey">
<color rgba="0.3 0.3 0.3 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0.051 0" rpy="0 0 0" />
<geometry>
<box size="0.007 0.130 0.02" />
</geometry>
</collision>
</link>
<gazebo reference="${name}_link">
<material>Gazebo/Blue</material>
</gazebo>
</xacro:macro>
<!--
The asus_camera macro adds the model and also adds
the openni gazebo plugin.
-->
<xacro:macro name="realsense_camera" params="name parent *origin">
<xacro:realsense_camera_model name="${name}" parent="${parent}">
<xacro:insert_block name="origin" />
</xacro:realsense_camera_model>
<gazebo reference="${name}_link">
<sensor type="camera" name="${name}_rgb">
<always_on>1</always_on>
<update_rate>30</update_rate>
<camera>
<horizontal_fov>${70.0 * pi/180.0}</horizontal_fov>
<image>
<format>B8G8R8</format>
<width>1920</width>
<height>1080</height>
</image>
<clip>
<near>0.001</near>
<far>100</far>
</clip>
<distortion>
<k1>0.0</k1>
<k2>0.0</k2>
<k3>0.0</k3>
<p1>-0.0</p1>
<p2>-0.00</p2>
<center>0.5 0.5</center>
</distortion>
</camera>
<plugin name="${name}_camera_rgb_controller" filename="libgazebo_ros_camera.so">
<robotNamespace>$(arg robot_name)/sensor/${name}/rgb</robotNamespace>
<imageTopicName>image_raw</imageTopicName>
<cameraInfoTopicName>camera_info</cameraInfoTopicName>
<frameName>${name}_rgb_optical_frame</frameName>
</plugin>
</sensor>
<sensor type="depth" name="${name}_depth">
<update_rate>20</update_rate>
<camera>
<horizontal_fov>${59.0 * pi/180.0}</horizontal_fov>
<image>
<format>B8G8R8</format>
<width>640</width>
<height>480</height>
</image>
<clip>
<near>0.5</near>
<far>10.0</far>
</clip>
</camera>
<plugin name="${name}_camera_depth_controller" filename="libgazebo_ros_openni_kinect.so">
<robotNamespace>$(arg robot_name)/sensor/${name}</robotNamespace>
<imageTopicName>ir/image_raw</imageTopicName>
<cameraInfoTopicName>ir/camera_info</cameraInfoTopicName>
<depthImageTopicName>depth/image_raw</depthImageTopicName>
<depthImageCameraInfoTopicName>depth/camera_info</depthImageCameraInfoTopicName>
<pointCloudTopicName>depth/points</pointCloudTopicName>
<frameName>${name}_depth_optical_frame</frameName>
</plugin>
</sensor>
</gazebo>
</xacro:macro>
</robot>
The depth camera has three topics:
/camera/depth/camera_info
/camera/depth/image_raw
/camera/depth/points
But the generated point cloud is a little bit to the left of the real door. Why is there a difference in location, and how can I eliminate it?
Thanks a lot
Asked by percy_liu on 2018-10-25 04:19:27 UTC
Answers
As @gvdhoorn has said, this looks like the extrinsic calibration of your two sensors is incorrect; fixing this should solve your problem.
The extrinsic calibration defines the position and orientation of your sensors within your robot. If the relative position defined in your robot description is not the same as on the real robot, you will see misalignment like yours. A good start would be double-checking your measurements and the optical centers of the sensors to make sure they're correct.
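As a rough sketch (the parent link `base_link` and all numeric values below are assumptions for illustration, not taken from your robot), the extrinsic calibration lives in the `*origin` block passed to the camera macro from the question, so that is where a correction would go:

```xml
<!-- Hypothetical instantiation of the realsense_camera macro.
     base_link and the xyz/rpy values are placeholders: measure
     them on the physical robot. -->
<xacro:realsense_camera name="camera" parent="base_link">
  <!-- A couple of centimetres of lateral (y) error, or a small
       yaw error in rpy, is enough to shift the whole cloud
       sideways relative to the real door. -->
  <origin xyz="0.10 0.00 0.50" rpy="0 0 0"/>
</xacro:realsense_camera>
```

Note the sensitivity to yaw: at a range of 3 m, a 1 degree yaw error already displaces the cloud laterally by about 3 × tan(1°) ≈ 5 cm. You can inspect the transform your description actually produces with `rosrun tf tf_echo base_link camera_depth_optical_frame` and compare it against a tape-measure check of the physical mounting.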
Hope this helps.
Asked by PeteBlackerThe3rd on 2018-10-26 09:48:56 UTC
Comments
"But the generated pointcloud is a little bit left to the real door" Could you clarify that a bit? How did you measure this offset? Is the camera mounted on a robot?
Asked by NEngelhard on 2018-10-25 05:50:42 UTC
My first guess would be that the transform that defines the location of the RGB-D sensor on the robot is in slightly the wrong place or has the wrong angle; the cloud's position will be very sensitive to angular errors. As @NEngelhard asked, the point cloud is in the wrong place relative to what? The image?
Asked by PeteBlackerThe3rd on 2018-10-25 06:51:05 UTC
I also used a LIDAR to generate a point cloud of the door, and I believe the LIDAR is accurate. The difference is between the point clouds produced by these two sensors. I cannot upload a picture due to limited points, but the link should be working.
Asked by percy_liu on 2018-10-25 20:45:52 UTC
I wonder who downvotes such a question...
percy_liu, I see a comment about an "asus_camera" macro, but not the macro itself. Also, the link does not work for me.
Asked by Humpelstilzchen on 2018-10-26 01:06:50 UTC
If you have different sensors but the data do not match, wouldn't that point to (at best) a need for extrinsic calibration of both sensors?
Asked by gvdhoorn on 2018-10-26 04:44:02 UTC