libgazebo_ros_openni_kinect.so depth pointcloud wrong tf
Hello, I am new to ROS. I am trying to simulate a robot in Gazebo, which works fine so far, but I have a problem simulating a Kinect camera. The simulated data itself looks correct, but when I display the results in rviz the transform is wrong. Can anyone give me a hint where I went wrong?
Gazebo: 6.1.0, ROS: Indigo, rviz: 1.11.8
model.sdf
<link name="CameraLeft_Link">
<pose>0.29 0.0 1.068 0.05 0.535 0.13</pose>
<inertial>
<mass>0.1</mass>
</inertial>
<collision name="collision">
<geometry>
<box>
<size>0.1 0.1 0.1</size>
</box>
</geometry>
</collision>
<visual name="visual">
<pose>0 0 0 1.57 0 1.57</pose>
<geometry>
<mesh>
<uri>model://b45/meshes/asus.dae</uri>
</mesh>
</geometry>
</visual>
<sensor type="depth" name="openni_camera_camera">
<always_on>1</always_on>
<visualize>true</visualize>
<camera>
<horizontal_fov>1.3962634</horizontal_fov>
<image>
<width>640</width>
<height>480</height>
<format>R8G8B8</format>
</image>
<clip>
<near>0.1</near>
<far>100</far>
</clip>
</camera>
<plugin name="camera_controller" filename="libgazebo_ros_openni_kinect.so">
<alwaysOn>true</alwaysOn>
<updateRate>30.0</updateRate>
<cameraName>cameraLeft</cameraName>
<frameName>/CameraLeft_Link</frameName>
<imageTopicName>CameraLeft/rgb/image_raw</imageTopicName>
<depthImageTopicName>CameraLeft/depth/image_raw</depthImageTopicName>
<pointCloudTopicName>CameraLeft/depth/points</pointCloudTopicName>
<cameraInfoTopicName>CameraLeft/rgb/camera_info</cameraInfoTopicName>
<depthImageCameraInfoTopicName>CameraLeft/depth/camera_info</depthImageCameraInfoTopicName>
<pointCloudCutoff>0.8</pointCloudCutoff>
<pointCloudCutoffMax>3.5</pointCloudCutoffMax>
</plugin>
</sensor>
</link>
robot.urdf
<robot name="autreim">
<link name="base_link">
</link>
<joint name="laser" type="fixed">
<origin xyz="0.46 0 0.62" rpy="0 0 0"/>
<parent link="base_link"/>
<child link="base_laser_link"/>
</joint>
<link name="base_laser_link">
</link>
<joint name="cameraLeft" type="fixed">
<origin xyz="0.29 0.0 1.068" rpy="0.05 0.535 0.13"/>
<parent link="base_link"/>
<child link="CameraLeft_Link"/>
</joint>
<link name="CameraLeft_Link">
</link>
<joint name="cameraRight" type="fixed">
<origin xyz="0.305 -0.22 0.975" rpy="1.52 0.4 -1.08"/>
<parent link="base_link"/>
<child link="CameraRight_Link"/>
</joint>
<link name="CameraRight_Link">
</link>
<joint name="scheuerteller" type="fixed">
<origin xyz="0.33 -0.06 0" rpy="0 0 0"/>
<child link="scheuerteller_link"/>
<parent link="base_link"/>
</joint>
<link name="scheuerteller_link"/>
<joint name="footprint" type="fixed">
<origin xyz="0 0 0" rpy="0 0 0"/>
<child link="base_link"/>
<parent link="base_footprint"/>
</joint>
<link name="base_footprint"/>
<joint name="ximu" type="fixed">
<origin xyz="0 0 0" rpy="0 0 0"/>
<parent link="base_link"/>
<child link="imu"/>
</joint>
<link name="imu"/>
<joint name="wheelodom" type="fixed">
<origin xyz="0 0 0" rpy="0 0 0"/>
<child link="wheelodom"/>
<parent link="base_link"/>
</joint>
<link name="wheelodom"/>
</robot>
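For reference: ROS image-pipeline consumers expect depth and point-cloud data in a separate optical frame (z forward, x right, y down, per REP 103), not in the body-convention camera link (x forward, y left, z up). A common pattern is to add a fixed optical-frame joint to the URDF and point the plugin's frameName at that frame instead; a sketch under that assumption (the *_optical names here are made up, not from the original post):

```xml
<joint name="cameraLeft_optical" type="fixed">
  <!-- fixed-axis rpy that rotates the body convention (x forward, z up)
       into the optical convention (z forward, y down) -->
  <origin xyz="0 0 0" rpy="-1.5708 0 -1.5708"/>
  <parent link="CameraLeft_Link"/>
  <child link="CameraLeft_optical_frame"/>
</joint>
<link name="CameraLeft_optical_frame"/>
```

The plugin's frameName would then be set to CameraLeft_optical_frame.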
Have you tried changing the orientation of the corresponding joint?
When I change it, the point cloud fits the laser scan. But why?
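Most likely because of the optical-frame convention: the plugin fills the point cloud assuming an optical frame (z forward, x right, y down), while CameraLeft_Link uses the body convention (x forward, y left, z up), so rviz renders the cloud rotated. The fixed-axis rotation rpy = (-pi/2, 0, -pi/2) maps one convention onto the other, which is why adjusting the joint orientation makes the cloud line up. A quick numeric check in plain Python (illustrative only, not from the original post):

```python
import math

def rot_x(a):
    # rotation matrix about the x axis
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(a):
    # rotation matrix about the z axis
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(R, v):
    # rotate vector v, rounding away floating-point noise
    return tuple(round(sum(R[i][k] * v[k] for k in range(3)), 6)
                 for i in range(3))

# URDF rpy is fixed-axis roll-pitch-yaw: R = Rz(yaw) @ Ry(pitch) @ Rx(roll).
# Body -> optical rotation: rpy = (-pi/2, 0, -pi/2); pitch is 0, so Ry is omitted.
R = matmul(rot_z(-math.pi / 2), rot_x(-math.pi / 2))

# Optical z (viewing direction) lands on body x (forward) ...
assert apply(R, (0, 0, 1)) == (1.0, 0.0, 0.0)
# ... optical x (image right) on body -y, optical y (image down) on body -z.
assert apply(R, (1, 0, 0)) == (0.0, -1.0, 0.0)
assert apply(R, (0, 1, 0)) == (0.0, 0.0, -1.0)
print("optical -> body axis mapping checks out")
```

So the data was never wrong; it was expressed in a frame whose axes did not match the frame the plugin stamped it with.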