Wouldn't this be a job for gazebo_ros_p3d? See #q222033.


Edit:

No. This publishes the robot odometry, which is different from the robot pose w.r.t. the Gazebo world.

The code seems to indicate otherwise. See here. The pose.position.{x, y, z} elements are not calculated, but are initialised directly from the world pose of the configured link.

If you set the gaussianNoise parameter to 0, it should be absolute ground truth (ie: the pose in the Gazebo world). That isn't even strictly necessary, though: noise is only added to the linear and angular velocity estimates. But it might still be nice to do if you want to use this as input to another node that takes covariance into account.

Other answers (such as #q258748) also seem to confirm/suggest that.
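
For illustration, a minimal gazebo_ros_p3d block as it could look inside a URDF. The plugin name, body name, topic name and update rate below are placeholders for whatever your robot uses (the snippet in #q258748 is similar); the relevant parts here are frameName and gaussianNoise:

    <gazebo>
      <plugin name="ground_truth_odom" filename="libgazebo_ros_p3d.so">
        <alwaysOn>true</alwaysOn>
        <updateRate>50.0</updateRate>
        <bodyName>base_link</bodyName>            <!-- link whose world pose gets published -->
        <topicName>ground_truth/state</topicName> <!-- nav_msgs/Odometry output topic -->
        <frameName>world</frameName>              <!-- frame the pose is expressed in -->
        <gaussianNoise>0.0</gaussianNoise>        <!-- noise only affects the velocity estimates -->
        <xyzOffsets>0 0 0</xyzOffsets>
        <rpyOffsets>0 0 0</rpyOffsets>
      </plugin>
    </gazebo>

With something like that in place, the plugin publishes nav_msgs/Odometry messages on the configured topic whose pose is the link's pose in the Gazebo world.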


Edit 2:

I agree with you that this plugin is publishing the world pose of the robot. Nevertheless, if I run it (inserting the lines specified in #q258748 into my URDF), I don't get a TF between the /map and /odom frames, which is what I really need.

Getting the odometry data out of Gazebo is only the first step. Once you have that working, you should be able to use fake_localization to do the rest.

The base_pose_ground_truth subscription would get its data from the p3d plugin.
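
A sketch of how that could be wired up in a launch file, assuming the p3d plugin publishes on ground_truth/state and your frames are named map, odom and base_link (adjust the remapping and parameters to your setup):

    <launch>
      <node pkg="fake_localization" type="fake_localization" name="fake_localization" output="screen">
        <!-- fake_localization subscribes to base_pose_ground_truth (nav_msgs/Odometry);
             remap that to the topic published by gazebo_ros_p3d -->
        <remap from="base_pose_ground_truth" to="ground_truth/state" />
        <param name="global_frame_id" value="map" />
        <param name="odom_frame_id" value="odom" />
        <param name="base_frame_id" value="base_link" />
      </node>
    </launch>

fake_localization then broadcasts the map -> odom transform based on that ground-truth odometry, which should give you the missing TF.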