I'm not entirely sure what you mean by

i just put a couple of points in a straight line

but I think I understand where the confusion is coming from. When the costmap receives the cloud, each point in your pointcloud will be processed as an observation. If marking is enabled, the costmap will then mark an obstacle at the point. If clearing is enabled, it will raytrace clear space in between the robot and the point. So if you send an empty pointcloud, nothing at all will happen, because you have no observations. Similarly, a lack of points along a direction does not automatically clear the space either. This may not be obvious from the documentation, so it helps to dig into the source when something isn't clear.
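To make the marking/clearing behaviour concrete, here is a sketch of an obstacle layer observation source for costmap_2d. The parameter names are the real ones, but the topic and range values are placeholders, not taken from your setup:

```yaml
obstacle_range: 2.5        # mark obstacles from points closer than this
raytrace_range: 3.0        # raytrace clear space along each ray up to this distance
observation_sources: my_cloud
my_cloud:
  topic: /camera/depth/points   # placeholder topic
  data_type: PointCloud2
  marking: true            # insert an obstacle at each observed point
  clearing: true           # clear the cells between the robot and each point
```

With `clearing: true` but an empty cloud, there are no rays to trace, so nothing gets cleared.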

You may think 'no points means no obstacles', however the pointcloud doesn't actually contain any information about free space, only the points that are present. An empty cloud may also represent a sensor malfunction, or a blind spot in the sensor's FOV, so the costmap can't assume clear space and clear the obstacles.

The way I got around this issue is to convert the pointcloud into a laserscan, with max_sensor_range along rays where no point is detected. This is essentially doing your own raytracing. Sadly, the pointcloud_to_laserscan package that used to do this doesn't exist anymore (it used to be in the original turtlebot repo), but I've added it back to perception_pcl. These changes aren't released into ROS yet, so you can clone and compile the pointcloud_to_laserscan package yourself.
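If you'd rather not pull in the package, the conversion itself is small. Here is a minimal sketch of the idea as a hypothetical rospy node (topic names, FOV and ranges are assumptions, and it assumes the cloud is already in a horizontally mounted sensor frame; this is not the pointcloud_to_laserscan implementation itself):

```python
#!/usr/bin/env python
# Sketch: project a PointCloud2 onto a planar LaserScan, filling rays with
# no return at max range so the costmap can raytrace-clear them.
import math
import rospy
from sensor_msgs.msg import LaserScan, PointCloud2
import sensor_msgs.point_cloud2 as pc2

ANGLE_MIN, ANGLE_MAX = -math.pi / 2, math.pi / 2   # assumed sensor FOV
ANGLE_INC = math.radians(0.5)
RANGE_MAX = 5.0                                     # assumed max sensor range

def cloud_cb(cloud):
    n = int((ANGLE_MAX - ANGLE_MIN) / ANGLE_INC)
    # Start every ray at max range: "no point detected" becomes free space.
    ranges = [RANGE_MAX] * n
    for x, y, z in pc2.read_points(cloud, field_names=('x', 'y', 'z'),
                                   skip_nans=True):
        angle = math.atan2(y, x)
        if angle < ANGLE_MIN or angle > ANGLE_MAX:
            continue
        i = int((angle - ANGLE_MIN) / ANGLE_INC)
        r = math.hypot(x, y)
        if 0 <= i < n and r < ranges[i]:
            ranges[i] = r        # keep the closest return per ray
    scan = LaserScan()
    scan.header = cloud.header   # assumes a planar sensor frame
    scan.angle_min, scan.angle_max = ANGLE_MIN, ANGLE_MAX
    scan.angle_increment = ANGLE_INC
    scan.range_min = 0.1
    # Slightly above the fill value, so max-range rays still count as valid
    # readings that the costmap will use for clearing.
    scan.range_max = RANGE_MAX + 0.01
    scan.ranges = ranges
    scan_pub.publish(scan)

rospy.init_node('cloud_to_scan_sketch')
scan_pub = rospy.Publisher('scan', LaserScan, queue_size=1)
rospy.Subscriber('cloud', PointCloud2, cloud_cb)
rospy.spin()
```

Feed the resulting scan to the costmap as a LaserScan observation source with clearing enabled, and the max-range rays will clear the stale obstacles for you.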