
Cover an angle range from -210 to -150 degrees for a laser scan

asked 2016-09-21 06:23:19 -0600

Mehdi.

updated 2016-09-21 08:54:39 -0600

I have a sensor mounted at the rear of my mobile base, facing backwards. It publishes a point cloud, and I am trying to create a fake laser scan from it using the pointcloud_to_laserscan package.

The problem is that, relative to base_footprint, I need the range from -210 to -150 degrees (the sensor has a horizontal field of view of 60 degrees, facing backwards) to be collapsed into a laser scan. Trying (converted to radians, of course)

min_angle: -210
max_angle: -150

and visualizing the data in RViz, it seems that only the range from -180 to -150 degrees is taken into account. When I use

min_angle: -180
max_angle: +180

it works fine, but I get a lot of useless points that cover areas the sensor does not see (these get assigned the value Inf).
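My guess at why only -180 to -150 survives (illustrative Python only, not the package's actual code): the point angles are presumably computed with atan2, which only ever returns values in (-pi, pi], so the part of my requested window below -pi can never match any point's angle.

```python
import math

# Requested scan limits, converted to radians.
angle_min = math.radians(-210)   # ~ -3.665 rad, below -pi
angle_max = math.radians(-150)   # ~ -2.618 rad

def point_angle(x, y):
    """Angle of a point as atan2 reports it: always in (-pi, pi]."""
    return math.atan2(y, x)

# A point that is geometrically at -190 deg (inside the requested
# -210..-150 window) is reported by atan2 as +170 deg instead.
theta = point_angle(math.cos(math.radians(-190)),
                    math.sin(math.radians(-190)))

print(round(math.degrees(theta), 1))     # 170.0
print(angle_min <= theta <= angle_max)   # False: the point is rejected
```

So every point on the "far" side of straight-back (between -210 and -180 degrees) would be reported with a positive angle near +180 degrees and fall outside the [angle_min, angle_max] window.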

A less data-hungry alternative would be to create a second frame, base_footprint_rotated, rotated by 180 degrees relative to base_footprint (around the z axis), and use it as the reference frame when defining the region of interest for the laser scan, as

min_angle: -30
max_angle: +30
target_frame: base_footprint_rotated
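The arithmetic behind this trick, as a quick sketch: yawing the reference frame by 180 degrees shifts every point angle by pi (re-wrapped into (-pi, pi]), which maps my -210..-150 window onto -30..+30.

```python
import math

def wrap(a):
    """Wrap an angle into (-pi, pi]."""
    while a <= -math.pi:
        a += 2.0 * math.pi
    while a > math.pi:
        a -= 2.0 * math.pi
    return a

# Shift both window edges by pi (the 180 deg frame rotation).
lo = wrap(math.radians(-210) + math.pi)
hi = wrap(math.radians(-150) + math.pi)

print(round(math.degrees(lo)), round(math.degrees(hi)))   # -30 30
```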

Is there any other way for overcoming this limitation?



[..] angles in ROS are defined from -Pi until +Pi [..]

This may be a limitation of some components, but I'd be very surprised if this is true globally. There is no such limit. Angles (ambiguous) can be any value. Also: ROS standardised on quaternions, and this seems like something else.

gvdhoorn  ( 2016-09-21 07:33:57 -0600 )

I would say it is a limitation of the laser scan's min and max angle definition.

Mehdi.  ( 2016-09-21 07:45:07 -0600 )

That makes more sense :). Perhaps you could update your question?

gvdhoorn  ( 2016-09-21 08:02:55 -0600 )

1 Answer


answered 2016-09-21 09:23:42 -0600

Airuno2L

updated 2016-09-21 09:28:41 -0600

I think the customary way this is handled is that your robot's URDF includes the sensor's pose, and the sensor's tf is published by robot_state_publisher. Then, when your lidar (or other sensor) publishes its point cloud, that tf frame is referenced in the point cloud messages' headers.

If you don't have a URDF for the robot (or don't want to mess around with editing it), then your idea of setting the target_frame for the pointcloud_to_laserscan node to a static transformation is a good way to achieve what you want. But it is a hack, because only pointcloud_to_laserscan will know the correct orientation. Using the URDF is the correct way, because then all nodes will know the sensor points backwards. In other words, if you add a node in the future that needs that data, you'll have to do a similar hack again unless you edit the URDF.



When I mentioned base_footprint_rotated, it was indeed a new link in the robot's URDF. The sensor also has a link in the URDF, but I don't think it makes sense to use the sensor's link as target_frame. The reason is that parameters like max_height and min_height become very cumbersome to define

Mehdi.  ( 2016-09-21 09:36:05 -0600 )

if they are not expressed relative to the floor (here base_footprint). They might even make no sense at all: z in the robot's frame is the height, but the z axis in the camera's frame is the depth, and you don't want to change that.

Mehdi.  ( 2016-09-21 09:36:22 -0600 )

They shouldn't be cumbersome - min_height is just the distance from the sensor down to the floor, and max_height is the distance from the sensor up to the top of the robot (plus a little bit for clearance).

Airuno2L  ( 2016-09-21 10:07:53 -0600 )

So when the min and max heights are considered, are they internally converted to the "floor" frame? It is not clear to me in which coordinate system those min and max values are defined.

Mehdi.  ( 2016-09-21 11:05:32 -0600 )

They are with respect to the sensor frame. As long as your sensor is mounted level and the sensor frame in the URDF uses the standard convention, everything should just work; you could measure the values with a measuring tape.

Airuno2L  ( 2016-09-21 11:15:26 -0600 )

To be clear, your min number would be negative, the sensor is at 0, and the max number would be positive.

Airuno2L  ( 2016-09-21 11:16:47 -0600 )
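As a concrete sketch of that convention (the mounting geometry below is made up, just to illustrate the signs):

```python
# Hypothetical geometry: sensor mounted 0.40 m above the floor,
# robot top 0.30 m above the sensor, plus 0.05 m of clearance.
sensor_height_above_floor = 0.40
robot_top_above_sensor = 0.30
clearance = 0.05

# min_height / max_height expressed in the sensor frame (sensor at z = 0):
min_height = -sensor_height_above_floor          # negative: down to the floor
max_height = robot_top_above_sensor + clearance  # positive: top of robot + clearance

print(min_height, max_height)
```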

So you mean they are defined in the base_footprint frame, with an offset representing the z coordinate of the sensor in the base_footprint frame (height meaning the z coordinate, as far as I know). But I think I will still use base_footprint as target and let TF do the conversion for me, just to avoid confusion.

Mehdi.  ( 2016-09-21 11:37:15 -0600 )

After all, you want to define the min and max heights of the obstacles you want to detect relative to your mobile base's frame.

Mehdi.  ( 2016-09-21 11:37:53 -0600 )
