Revision history
I wanted to use octomap_server but it looks like it will only accept pointcloud as an input, not pose. Am I missing something here?

You seem to imply that octomap_server should take in both pose and cloud and then use the received pose to properly position the received cloud relative to what it has already 'seen'.

In reality, PointCloud2 is a stamped message: it contains a std_msgs/Header. That header carries both a frame_id field, which should name the frame in which the pointcloud was captured, and a stamp field, which should record the time at which the cloud was captured.
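To make that concrete, here is a rough Python sketch of the two header fields octomap_server relies on. These are illustrative stand-ins, not the real ROS message classes, and the frame name is made up:

```python
from dataclasses import dataclass

# Illustrative stand-ins for std_msgs/Header and the header portion of
# sensor_msgs/PointCloud2 -- NOT the real rospy classes.
@dataclass
class Header:
    stamp: float      # capture time (real ROS uses a secs/nsecs pair)
    frame_id: str     # name of the frame the data was captured in

@dataclass
class PointCloud2:
    header: Header
    # ... field descriptions and the packed point data would follow ...

cloud = PointCloud2(header=Header(stamp=1700000000.25, frame_id="laser_link"))
print(cloud.header.frame_id)  # the frame a consumer would look up in TF
```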

octomap_server takes the received pointcloud, extracts the stamp and the frame_id, and then uses tf2 to transform the cloud into the common frame it uses for all clouds (strictly speaking it doesn't do this itself, but relies on functionality in tf2_sensor_msgs and tf2_ros; the result is the same).
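What that transform step amounts to, mathematically, is applying the sensor-frame → map-frame rigid-body transform to every point in the cloud. A simplified sketch in plain Python (yaw-only rotation and made-up pose values, rather than the full quaternion math tf2 performs):

```python
import math

def transform_cloud(points, yaw, tx, ty, tz):
    """Apply a rigid-body transform (yaw rotation about z, then a
    translation) to a list of (x, y, z) points -- a simplified
    stand-in for transforming a cloud from its frame_id into map."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * x - s * y + tx, s * x + c * y + ty, z + tz)
            for x, y, z in points]

# A cloud captured in the sensor frame, with the sensor sitting at
# (2, 0, 0.5) in the map frame and rotated 90 degrees about z:
sensor_cloud = [(1.0, 0.0, 0.0)]
map_cloud = transform_cloud(sensor_cloud, math.pi / 2, 2.0, 0.0, 0.5)
# -> approximately (2.0, 1.0, 0.5) in the map frame
```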

There is no need to supply any poses as those are retrieved from the TF system.

You can see this in the documentation as well: octomap_server lists the sensor data frame → /map transform under Required tf Transforms:

Required transform of sensor data into the global map frame if you do scan integration. This information needs to be available from an external SLAM or localization node.

So as long as you provide a transform that octomap_server can use, everything should work.

My plan is to use an rplidar with hector_mapping to generate a pose topic, then combine this pose and the pointcloud to produce the octomap.

I'm not sure, but if the "camera pose" for the lidar is not fixed (so you can't use a URDF or static_transform_publisher), you'll have to get it some other way. Using a localisation method such as hector_mapping could work. As far as I know though, hector_mapping is a 2D method only.
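For completeness: if the sensor were rigidly mounted relative to a frame that is already in the TF tree, a static transform would be enough. A hypothetical ROS 1 launch snippet (frame names and offsets made up):

```xml
<!-- Hypothetical example: publish a fixed base_link -> laser_link offset.
     Args: x y z yaw pitch roll parent_frame child_frame period_ms -->
<node pkg="tf" type="static_transform_publisher" name="laser_broadcaster"
      args="0.1 0 0.2 0 0 0 base_link laser_link 100" />
```

Since your sensor pose is not fixed, though, a localisation node has to publish that transform instead.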

Perhaps something like rtabmap_ros or cartographer would be better suited.

If you're going to use multiple sources for pose estimation, you may find robot_localization can help.