
octomap maps objects of the XY plane into the Z plane

asked 2021-07-20 20:05:26 -0500

Yokai-

updated 2021-07-20 20:44:01 -0500

I'm trying to build a map of a square boundary using Octomap. I am on ROS Kinetic and using a Kinect. The image data received from the camera is accurate, so I think the camera orientation is correct and it is not facing upwards.

However, when I map the environment and display it in RViz using a MarkerArray, the four sides of the square boundary (Turtlebot3 Plaza, for reference) are mapped and meshed together upwards in the Z direction. I'm attaching an Imgur link here, as I can't post a snapshot yet: http://imgur.com/gallery/S9HQVuB

The plugin file:

 <gazebo reference="kinect_link">
      <sensor name="kinect_link_camera" type="depth">
        <update_rate>20</update_rate>
        <camera>
          <horizontal_fov>1.047198</horizontal_fov>
          <image>
            <width>640</width>
            <height>480</height>
            <format>R8G8B8</format>
          </image>
          <clip>
            <near>0.05</near>
            <far>3</far>
          </clip>
        </camera>
        <plugin name="kinect_link_controller" filename="libgazebo_ros_openni_kinect.so">
          <baseline>0.2</baseline>
          <alwaysOn>true</alwaysOn>
          <updateRate>1.0</updateRate>
          <cameraName>${camera_name}_ir</cameraName>
          <imageTopicName>/color/image_raw</imageTopicName>
          <cameraInfoTopicName>/color/camera_info</cameraInfoTopicName>
          <depthImageTopicName>/depth/image_raw</depthImageTopicName>
          <depthImageInfoTopicName>/depth/camera_info</depthImageInfoTopicName>
          <pointCloudTopicName>/depth/points</pointCloudTopicName>
          <frameName>kinect_link</frameName>
          <pointCloudCutoff>0.5</pointCloudCutoff>
          <pointCloudCutoffMax>3.0</pointCloudCutoffMax>
          <distortionK1>0.00000001</distortionK1>
          <distortionK2>0.00000001</distortionK2>
          <distortionK3>0.00000001</distortionK3>
          <distortionT1>0.00000001</distortionT1>
          <distortionT2>0.00000001</distortionT2>
          <CxPrime>0</CxPrime>
          <Cx>0</Cx>
          <Cy>0</Cy>
          <focalLength>0</focalLength>
          <hackBaseline>0</hackBaseline>
        </plugin>
      </sensor>
    </gazebo>

1 Answer


answered 2021-07-25 18:46:43 -0500

Mike Scheutzow

updated 2021-07-25 19:08:49 -0500

You need to correctly specify the transform for how the kinect is mounted on the robot (base_link -> kinect_link), and you need to specify the transform for the data the kinect outputs (kinect_link -> kinect_depth).

I have not used a Kinect myself, but the frames for depth cameras often use the z-axis as the forward (depth) direction. You will need to search to figure out which way the x- and y-axes point.
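A common way to handle this in a URDF/xacro is to keep `kinect_link` in the usual ROS body convention (x forward, z up) and attach a separate optical frame as a fixed child, rotated by rpy = (-π/2, 0, -π/2). A sketch, where the frame names, mounting offset, and `kinect_depth_optical_frame` are illustrative, not taken from the question's files:

```xml
<!-- Kinect body frame: ROS convention, x forward, z up -->
<joint name="kinect_joint" type="fixed">
  <parent link="base_link"/>
  <child link="kinect_link"/>
  <origin xyz="0.1 0 0.2" rpy="0 0 0"/>  <!-- example mounting offset -->
</joint>

<!-- Optical frame: z forward, x right, y down (REP 103 "_optical" convention) -->
<joint name="kinect_optical_joint" type="fixed">
  <parent link="kinect_link"/>
  <child link="kinect_depth_optical_frame"/>
  <origin xyz="0 0 0" rpy="-1.5708 0 -1.5708"/>
</joint>
<link name="kinect_depth_optical_frame"/>
```

With something like this in place, the plugin's `<frameName>` would point at the optical frame instead of `kinect_link`, so the point cloud data is interpreted in the optical convention while the camera images remain usable.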

Update: also take a look at question #q206823


Comments

@Mike Scheutzow, I tried the solution given in the answer to the question you mentioned, but it didn't work. When used as a camera, my Kinect gives pictures in the correct orientation for both /color/image_raw and /depth/image_raw. The scan only gets projected into the Z direction when building the octomap.

I know from other questions that the Kinect has z as its forward axis, but if I set that:

  1. My image points towards the floor
  2. Octomap says the octree is empty

I generated the URDF with a plugin and don't know much about URDF syntax yet. I tried changing the rpy and xyz values, but that breaks the camera joint instead. I need to somehow change the axes of the Kinect so that z faces forward while still using it as a normal camera, which in my experience images objects along the X direction.

Yokai- (2021-07-28 16:30:00 -0500)

The Kinect is not the same thing as a sensor inside the Kinect. I provided a link that shows what another user thinks the transform should be.

Mike Scheutzow (2021-07-28 20:14:18 -0500)

You appear to be sending the raw sensor data to the octomap. Typically sensor data has to be transformed to be useful, so either the transform being applied is wrong, or the transform is not being done at all. One reason the transform could be wrong is that your Transform Frame Tree contains an error.

Mike Scheutzow (2021-07-29 06:49:14 -0500)
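The symptom described above (a square boundary ending up stacked along Z) is exactly what happens when optical-frame depth data is interpreted as if it were body-frame data. A small sketch in plain Python, applying the standard fixed rotation rpy = (-π/2, 0, -π/2) between an optical frame and a ROS body frame; the numbers here are illustrative, not from the question's setup:

```python
import math

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# URDF rpy is applied as R = Rz(yaw) * Ry(pitch) * Rx(roll).
# Optical -> body rotation: roll = -pi/2, pitch = 0, yaw = -pi/2.
R = mat_mul(rot_z(-math.pi / 2), rot_x(-math.pi / 2))

# A point 2 m straight ahead of the camera, expressed in the optical
# frame (z is the depth axis):
p_optical = [0.0, 0.0, 2.0]

# Misinterpreted as body-frame data, this point sits 2 m *up* along z --
# the "walls stacked in the Z direction" symptom.
print("raw (wrong):", p_optical)

# Correctly rotated into the body frame, it sits 2 m *forward* along x:
p_body = mat_vec(R, p_optical)
print("transformed:", [round(c, 6) for c in p_body])  # -> [2.0, 0.0, 0.0]
```

Normally tf applies this rotation automatically, provided the frame tree actually contains the optical-frame transform and the plugin stamps its data with that frame.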
