
An answer from David Butterworth solved my problem! Thanks, David.


I found that if you modify the visual/collision mesh so that it is aligned with your joint origin, then everything works. Don't apply extra translations/rotations to the mesh from within your URDF, because that is broken. The orientation of the sensor data should come from the Gazebo macro, and that macro should contain only translations, plus the two joint rotations for the extra pair of camera frames.
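
To illustrate the idea, here is a minimal sketch of such a macro. This is not the actual PR2 macro: the frame names, mesh path, and offsets are placeholders. The internal joints carry only xyz translations, with rotations confined to the optical-frame joints:

    <!-- A minimal sketch, not the actual PR2 macro: frame names, the mesh
         path, and offsets are placeholders. -->
    <xacro:macro name="kinect_sensor" params="name parent *origin">

      <!-- Mounting joint: the inserted origin block is the ONLY place the
           sensor as a whole gets translated/rotated. -->
      <joint name="${name}_joint" type="fixed">
        <xacro:insert_block name="origin"/>
        <parent link="${parent}"/>
        <child link="${name}_link"/>
      </joint>

      <!-- The mesh is already aligned with the joint origin, so the visual
           needs no extra <origin> translation/rotation of its own. -->
      <link name="${name}_link">
        <visual>
          <geometry>
            <mesh filename="package://my_description/meshes/kinect.dae"/>
          </geometry>
        </visual>
      </link>

      <!-- Depth frame: translation only. -->
      <joint name="${name}_depth_joint" type="fixed">
        <origin xyz="0 0.0125 0" rpy="0 0 0"/>
        <parent link="${name}_link"/>
        <child link="${name}_depth_frame"/>
      </joint>
      <link name="${name}_depth_frame"/>

      <!-- Optical frame: one of the only two rotations inside the macro
           (the other is the matching RGB optical frame, omitted here). -->
      <joint name="${name}_depth_optical_joint" type="fixed">
        <origin xyz="0 0 0" rpy="-1.5708 0 -1.5708"/>
        <parent link="${name}_depth_frame"/>
        <child link="${name}_depth_optical_frame"/>
      </joint>
      <link name="${name}_depth_optical_frame"/>

    </xacro:macro>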

My Gazebo macro is based on the PR2 one, but with the visual/collision meshes fixed and re-scaled as described above. The end result is that any translation/rotation is applied only at the head_mount_kinect joint, which orients everything else, and you can successfully pitch the sensor up/down or mount it vertically.
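
So, roughly, all of the mounting geometry ends up in the one origin block passed to the macro. The names below follow the PR2 convention, but the offsets are made up:

    <!-- Hypothetical usage: the sensor sits 25 cm up and is pitched down
         30 degrees (0.5236 rad), all expressed at this one joint. -->
    <xacro:kinect_sensor name="head_mount_kinect" parent="head_plate_frame">
      <origin xyz="0.0 0.0 0.25" rpy="0 0.5236 0"/>
    </xacro:kinect_sensor>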

I'm guessing that in your situation, the PointCloud data is actually in the correct place, but because of the bug with the meshes, the visual model is in the wrong place.