Point cloud misaligned with gazebo sensor plugin

asked 2022-11-25 08:20:02 -0600 by NotARobot

updated 2022-11-26 10:46:07 -0600 by Mike Scheutzow

Greetings. I have inserted a camera into my Gazebo simulation via the Gazebo plugin. The camera works as intended; however, the generated point cloud does not match the camera image. Since the Octomap is built from the point cloud, the collision map is also misaligned. I've aligned the Gazebo camera plugin via an optical link attached to the camera link (mesh and collision).

How do I get the proper orientation of the point cloud? Since the image in RViz matches the viewed object, I would expect the point cloud to behave the same way.

Images: Gazebo scene, RViz camera image, RViz Octomap

Camera macro:

<?xml version="1.0"?>
<robot xmlns:xacro="http://wiki.ros.org/xacro">

   <xacro:macro name="generate_camera" params="prefix attachment_link package_name:=modproft_camera_description camera_type:=canon_powershot_g7_mk3 debug:=false">

    <xacro:include filename="$(find ${package_name})/urdf/inc/camera_common.xacro"/>
    <xacro:include filename="$(find ${package_name})/urdf/inc/materials.xacro"/>

    <xacro:read_camera_data model_parameter_file="$(find ${package_name})/config/${camera_type}.yaml"/>

    <!-- Todo: Fix Prefix ${prefix} != &quot; &quot;-->
    <xacro:if value="${prefix != &quot; &quot;}">
      <xacro:property name="sensor_reference" value="${prefix}${sensor_name}"/>
    </xacro:if>
    <xacro:if value="${prefix == &quot; &quot;}">
      <xacro:property name="sensor_reference" value="${sensor_name}"/>
    </xacro:if>

    <!-- Define gazebo sensor -->
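    <!-- Here both the gazebo reference and the plugin's frame_name point to the virtual/optical frame link -->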
    <gazebo reference="${sensor_reference}_virtual_frame_link">
      <sensor name="${sensor_reference}" type="${sensor_type}" >
        <always_on>${sensor_always_on}</always_on>
        <update_rate>${sensor_update_rate}</update_rate>
        <camera name="${sensor_reference}">
          <horizontal_fov>${sensor_horizontal_fov}</horizontal_fov>
          <image>
            <width>${sensor_image_width}</width>
            <height>${sensor_image_height}</height>
            <format>${sensor_image_format}</format>
          </image>
          <clip>
            <near>${sensor_image_clip_near}</near>
            <far>${sensor_image_clip_far}</far>
          </clip>
          <distortion>
            <k1>${sensor_image_distortion_k1}</k1>
            <k2>${sensor_image_distortion_k2}</k2>
            <k3>${sensor_image_distortion_k3}</k3>
            <p1>${sensor_image_distortion_p1}</p1>
            <p2>${sensor_image_distortion_p2}</p2>
            <center>${sensor_image_distortion_center}</center>
          </distortion>
        </camera>
          <plugin filename="libgazebo_ros_camera.so" name="${sensor_reference}_controller">
            <ros>
                <imageTopicName>/${sensor_reference}/color/image_raw</imageTopicName>
                <cameraInfoTopicName>/${sensor_reference}/color/camera_info</cameraInfoTopicName>
                <depthImageTopicName>/${sensor_reference}/depth/image_raw</depthImageTopicName>
                <depthImageCameraInfoTopicName>/${sensor_reference}/depth/camera_info</depthImageCameraInfoTopicName>
                <pointCloudTopicName>/${sensor_reference}/points</pointCloudTopicName>
            </ros>
            <camera_name>${sensor_reference}</camera_name>
            <frame_name>${sensor_reference}_virtual_frame_link</frame_name>
            <hack_baseline>${sensor_hack_baseline}</hack_baseline>
            <min_depth>${sensor_min_depth}</min_depth>
          </plugin>
      </sensor>
    </gazebo>

    <!-- Links -->
    <link name="${sensor_reference}_link">
      <visual>
        <origin rpy="${camera_visual_origin_rpy}" xyz="${camera_visual_origin_xyz}"/>
        <geometry>
          <mesh filename="package://${package_name}/meshes/${camera_visual_geometry_mesh}" scale="${camera_visual_geometry_scale}"/>
        </geometry>
      </visual>
      <collision>
        <origin rpy="${camera_collision_origin_rpy}" xyz="${camera_collision_origin_xyz}"/>
        <geometry>
          <mesh filename="package://${package_name}/meshes/${camera_visual_geometry_mesh}" scale="${camera_visual_geometry_scale}"/>
        </geometry>
        <material name="${camera_visual_material}"/>
      </collision>
      <inertial>
        <mass value="${camera_inertial_mass}"/>
        <origin rpy="${camera_inertial_origin_rpy}" xyz="${camera_inertial_origin_xyz}"/>
        <inertia ixx="${camera_inertial_inertia_ixx}" ixy="${camera_inertial_inertia_ixy}" ixz="${camera_inertial_inertia_ixz}" iyy="${camera_inertial_inertia_iyy}" iyz="${camera_inertial_inertia_iyz}" izz="${camera_inertial_inertia_izz}"/>
      </inertial>
    </link>

    <link name="${sensor_reference}_virtual_frame_link">
      <!-- Todo: Fix debug condition block-->
      <xacro:if value="${debug != 'False'}">
        <visual>
          <geometry>
            <mesh filename="package://${package_name}/meshes/${virtual_visual_geometry_mesh}" scale="${virtual_visual_geometry_scale}"/>
          </geometry>
        </visual>
      </xacro:if>
    </link>

    <!-- Joints -->
    <joint type="fixed" name="${prefix}${attachment_link}_${sensor_name}_joint">
      <origin rpy="${cameraattachment_joint_origin_rpy}" xyz="${cameraattachment_joint_origin_xyz}"/>
      <child link="${sensor_reference}_link"/>
      <parent link="${prefix}${attachment_link}"/>
    </joint>

    <joint type="fixed" name="${prefix}${attachment_link}_${sensor_name}_virtual_frame_joint">
      <origin ...
(more)

3 Answers

answered 2022-11-28 09:06:34 -0600 by NotARobot

Thank you for your answers.

It seems that Gazebo has an issue with the plugin. As long as the problem persists, the workaround is to use the optical link as the frame reference (<frame_name>${sensor_reference}_virtual_frame_link</frame_name>), but not as the Gazebo plugin reference, which should point to the camera link instead (<gazebo reference="${sensor_reference}_link">). With that you are able to rotate the optical link as you see fit without impacting the Octomap. This solution was found here.
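
In short, the relevant change looks like this (same property names as in the full macro below; unrelated tags elided):

<gazebo reference="${sensor_reference}_link">  <!-- sensor attached to the camera body link -->
  <sensor name="${sensor_reference}" type="${sensor_type}">
    ...
    <plugin filename="libgazebo_ros_camera.so" name="${sensor_reference}_controller">
      ...
      <frame_name>${sensor_reference}_virtual_frame_link</frame_name>  <!-- data reported in the optical frame -->
    </plugin>
  </sensor>
</gazebo>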

Also, an important note (because it didn't occur to me right away): do not publish a static transform for the camera link if you add the camera via URDF. Your robot_state_publisher already publishes that transform for you, so publishing an additional static transform leads to an inconsistent tf2 tree.
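
To illustrate the kind of thing to avoid, here is a sketch of a static transform publisher entry that would conflict with robot_state_publisher (assuming a ROS 2 XML launch file; the frame names and the identity transform are placeholders, not taken from the macro):

<launch>
  <!-- Don't do this for a link that is already connected by a fixed joint in the URDF: -->
  <!-- robot_state_publisher already broadcasts that transform, and a second publisher -->
  <!-- makes the tf2 tree inconsistent. -->
  <node pkg="tf2_ros" exec="static_transform_publisher"
        args="0 0 0 0 0 0 world camera_link"/>
</launch>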

The whole camera macro with solution:

<?xml version="1.0"?>
<robot xmlns:xacro="http://wiki.ros.org/xacro">

   <xacro:macro name="generate_camera" params="prefix attachment_link package_name:=modproft_camera_description camera_type:=canon_powershot_g7_mk3 debug:=false">

    <xacro:include filename="$(find ${package_name})/urdf/inc/camera_common.xacro"/>
    <xacro:include filename="$(find ${package_name})/urdf/inc/materials.xacro"/>

    <xacro:read_camera_data model_parameter_file="$(find ${package_name})/config/${camera_type}.yaml"/>

    <!-- Todo: Fix Prefix ${prefix} != &quot; &quot;-->
    <xacro:if value="${prefix != &quot; &quot;}">
      <xacro:property name="sensor_reference" value="${prefix}${sensor_name}"/>
    </xacro:if>
    <xacro:if value="${prefix == &quot; &quot;}">
      <xacro:property name="sensor_reference" value="${sensor_name}"/>
    </xacro:if>

    <!-- Define gazebo sensor -->
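    <!-- Changed: the sensor is now attached to the camera body link rather than the virtual/optical frame link -->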
    <gazebo reference="${sensor_reference}_link">
      <sensor name="${sensor_reference}" type="${sensor_type}" >
        <always_on>${sensor_always_on}</always_on>
        <update_rate>${sensor_update_rate}</update_rate>
        <camera name="${sensor_reference}">
          <horizontal_fov>${sensor_horizontal_fov}</horizontal_fov>
          <image>
            <width>${sensor_image_width}</width>
            <height>${sensor_image_height}</height>
            <format>${sensor_image_format}</format>
          </image>
          <clip>
            <near>${sensor_image_clip_near}</near>
            <far>${sensor_image_clip_far}</far>
          </clip>
          <distortion>
            <k1>${sensor_image_distortion_k1}</k1>
            <k2>${sensor_image_distortion_k2}</k2>
            <k3>${sensor_image_distortion_k3}</k3>
            <p1>${sensor_image_distortion_p1}</p1>
            <p2>${sensor_image_distortion_p2}</p2>
            <center>${sensor_image_distortion_center}</center>
          </distortion>
        </camera>
        <plugin filename="libgazebo_ros_camera.so" name="${sensor_reference}_controller">
          <ros>
              <imageTopicName>/${sensor_reference}/color/image_raw</imageTopicName>
              <cameraInfoTopicName>/${sensor_reference}/color/camera_info</cameraInfoTopicName>
              <depthImageTopicName>/${sensor_reference}/depth/image_raw</depthImageTopicName>
              <depthImageCameraInfoTopicName>/${sensor_reference}/depth/camera_info</depthImageCameraInfoTopicName>
              <pointCloudTopicName>/${sensor_reference}/points</pointCloudTopicName>
          </ros>
          <camera_name>${sensor_reference}</camera_name>
          <frame_name>${sensor_reference}_virtual_frame_link</frame_name>
          <hack_baseline>${sensor_hack_baseline}</hack_baseline>
          <min_depth>${sensor_min_depth}</min_depth>
        </plugin>
      </sensor>
    </gazebo>

    <!-- Links -->
    <link name="${sensor_reference}_link">
      <visual>
        <origin rpy="${camera_visual_origin_rpy}" xyz="${camera_visual_origin_xyz}"/>
        <geometry>
          <mesh filename="package://${package_name}/meshes/${camera_visual_geometry_mesh}" scale="${camera_visual_geometry_scale}"/>
        </geometry>
      </visual>
      <collision>
        <origin rpy="${camera_collision_origin_rpy}" xyz="${camera_collision_origin_xyz}"/>
        <geometry>
          <mesh filename="package://${package_name}/meshes/${camera_visual_geometry_mesh}" scale="${camera_visual_geometry_scale}"/>
        </geometry>
        <material name="${camera_visual_material}"/>
      </collision>
      <inertial>
        <mass value="${camera_inertial_mass}"/>
        <origin rpy="${camera_inertial_origin_rpy}" xyz="${camera_inertial_origin_xyz}"/>
        <inertia ixx="${camera_inertial_inertia_ixx}" ixy="${camera_inertial_inertia_ixy}" ixz="${camera_inertial_inertia_ixz}" iyy="${camera_inertial_inertia_iyy}" iyz="${camera_inertial_inertia_iyz}" izz="${camera_inertial_inertia_izz}"/>
      </inertial>
    </link>

    <link name="${sensor_reference}_virtual_frame_link">
      <!-- Todo Fix debug condition block-->
      <xacro:if value="${debug != 'False'}">
        <visual>
          <geometry>
            <mesh filename="package://${package_name}/meshes/${virtual_visual_geometry_mesh}" scale="${virtual_visual_geometry_scale}"/>
          </geometry>
        </visual>
      </xacro:if>
    </link> ...
(more)
answered 2022-11-26 10:40:37 -0600 by Mike Scheutzow

The orientation you have defined for your camera's optical frame does not match the ROS convention. The optical frame has its origin at the image sensor, with +z forward, +x to the right, and +y downward. This is documented in REP 103.

See also #q395930.
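
For reference, the conventional way to get this orientation in a URDF is a fixed joint that rotates the camera body frame (x forward, z up) into the optical frame (z forward, y down); a minimal sketch with illustrative link names (not taken from the macro above):

<joint name="camera_optical_joint" type="fixed">
  <!-- rpy = (-pi/2, 0, -pi/2) maps body x-forward onto optical z-forward, with x right and y down -->
  <origin xyz="0 0 0" rpy="${-pi/2} 0 ${-pi/2}"/>
  <parent link="camera_link"/>
  <child link="camera_optical_frame"/>
</joint>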


Comments

That is true. How do I adjust for the ROS convention while keeping the camera image where it is? If I change the orientation of the optical frame, the camera image changes as well, since the sensor is oriented looking in the positive X direction, as you can see. Can I tell Gazebo within the URDF to orient the depth measurements separately? Also, if I configure the Octomap to generate its points from the camera's depth image, the same result appears.

Edit: Furthermore, if I rotate the sensor according to the ROS convention, the Octomap is oriented correctly with respect to the camera, but the scene is rotated by 90 degrees about the X axis, so it is not the scene in front of the camera.

NotARobot ( 2022-11-28 01:39:01 -0600 )
answered 2022-11-27 06:38:15 -0600 by Davies Ogunsina

Can you kindly check the orientation of the camera? It has to be properly oriented.

