
How to incorporate moveable sensor (Lidar) into navigation stack

asked 2016-03-22 09:47:30 -0600

b2meer

I am building a robot that will navigate using the ROS navigation stack with move_base. I am using a LIDAR-Lite 2 laser rangefinder to detect obstacles and for mapping as well. I have two questions regarding the use of this lidar.

  1. Should I use LaserScan data (in the obstacle_layer) or Range message data (in the RangeSensorLayer) to put obstacles into the costmap? I am confused because a lidar is generally considered a laser sensor, so it should be providing data in LaserScan format, but this specific lidar provides range data.

  2. I am using a motor to rotate this lidar to cover a 180-degree scan area. How can I let my obstacle layer know the current orientation of the lidar, so that each obstacle point is inserted at the correct place in the costmap? In my robot's URDF I can attach this sensor to one frame, but it will then be fixed in place according to the URDF. Is there any method to resolve this issue?


2 Answers


answered 2016-03-22 11:06:43 -0600

spmaniato

updated 2016-03-24 10:48:54 -0600

You should use LaserScan messages, imo, and take care of the spinning logic in your laser scan publisher. Each LaserScan message's ranges field will be populated with, e.g., 180 degrees' worth of LiDAR-Lite measurements.

Here are some guidelines and examples:

Keep in mind that you'll need some way of keeping track of your motor's / LiDAR's orientation, e.g. some sort of encoder (unless you're using a stepper motor).
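As a rough illustration of what such a publisher would compute, here is a minimal sketch. It returns a plain dict so it runs without ROS; the keys mirror the sensor_msgs/LaserScan fields, and the range limits are assumed values rather than the LiDAR-Lite spec:

```python
import math

def build_scan_fields(ranges, sweep_duration_s,
                      angle_min=-math.pi / 2, angle_max=math.pi / 2):
    """Assemble the sensor_msgs/LaserScan fields for one sweep of a
    spinning single-beam rangefinder. A plain dict is returned so this
    sketch runs without ROS; in a real node you would copy these values
    into a sensor_msgs.msg.LaserScan message and publish it."""
    n = len(ranges)
    return {
        'angle_min': angle_min,
        'angle_max': angle_max,
        'angle_increment': (angle_max - angle_min) / (n - 1),
        'time_increment': sweep_duration_s / (n - 1),  # delay between beams
        'scan_time': sweep_duration_s,
        'range_min': 0.05,   # assumed limits -- check your sensor's datasheet
        'range_max': 40.0,
        'ranges': list(ranges),
    }

# Example: 181 readings over a 180-degree sweep at 2 Hz (0.5 s per sweep)
fields = build_scan_fields([1.0] * 181, 0.5)
```

In a real node, the ranges list would be filled from the encoder-synchronized LiDAR readings accumulated over one sweep, and one message would be published per sweep.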



I'm using a stepper motor to rotate my lidar lite, for design reasons. The stepper motor is fixed at a rotation rate of 2 Hz. The method you've described would therefore introduce a lag of 0.5 seconds per sweep. This isn't much lag, but can it be reduced further?

b2meer ( 2016-03-24 03:49:23 -0600 )

That's what the LaserScan message's time_increment field is for :-) Check out this detailed explanation:

spmaniato ( 2016-03-24 10:45:08 -0600 )
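To make the role of time_increment concrete, here is a small sketch using the 2 Hz / 0.5 s figures from the comment above. Consumers reconstruct each beam's measurement time as header.stamp + i * time_increment, so the half-second sweep is not treated as a single stale snapshot:

```python
# time_increment encodes when each beam in a LaserScan was measured,
# relative to header.stamp. Figures match the 2 Hz stepper discussed
# above: a 0.5 s sweep sampled into 181 beams.
sweep_time_s = 0.5
n_beams = 181
time_increment = sweep_time_s / (n_beams - 1)

def beam_time_offset(i):
    """Seconds between header.stamp and the moment beam i was taken."""
    return i * time_increment

# The first beam is at the stamp itself; the last was taken a full
# half-second later. Consumers such as laser_geometry can use this
# per-beam timing instead of assuming all beams were simultaneous.
```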

Alright, thank you very much for your help.

b2meer ( 2016-03-28 03:27:56 -0600 )

answered 2017-12-09 12:14:58 -0600

R. Tellez

Regarding the rotating-lidar part of the question, I created an example that shows how to detect the obstacle closest to the whole system at any position of the rotation. The trick is to reference the obstacle to base_link, using tf to transform from the laser reference frame to the base_link reference frame.

I have done this in Python, using tf2_ros and tf2_geometry_msgs.

1st. Initialize the tf buffer and listener somewhere in your init function

import rospy
import tf2_ros

tf_buffer = tf2_ros.Buffer(rospy.Duration(1200.0))
tf_listener = tf2_ros.TransformListener(tf_buffer)

2nd. Do the actual transformation

import tf2_geometry_msgs

transform = tf_buffer.lookup_transform(target_frame,                  # destination frame
                                       pose_stamped.header.frame_id,  # source frame
                                       rospy.Time(0),                 # get the tf at the first available time
                                       rospy.Duration(1.0))           # wait for up to 1 second

pose_transformed = tf2_geometry_msgs.do_transform_pose(pose_stamped, transform)

You can watch the video here that shows a full example and how it looks in Rviz.
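A complementary piece, not covered by the snippets above, is getting the rotating laser frame into tf in the first place. The standard route is to model the spinning mount as a continuous joint in the URDF and publish its angle on /joint_states so robot_state_publisher broadcasts the transform for you; alternatively you can broadcast it yourself. The helper below computes the quaternion for a yaw rotation, and the commented lines sketch the broadcast (the frame names and read_motor_angle() are hypothetical placeholders):

```python
import math

def yaw_to_quaternion(yaw):
    """Quaternion (x, y, z, w) for a rotation of yaw radians about the
    Z axis -- i.e. the spinning-lidar joint. Equivalent to
    tf.transformations.quaternion_from_euler(0, 0, yaw)."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

# In a node you would broadcast this each time the stepper steps.
# Frame names and read_motor_angle() below are placeholders:
#
#   br = tf2_ros.TransformBroadcaster()
#   t = geometry_msgs.msg.TransformStamped()
#   t.header.stamp = rospy.Time.now()
#   t.header.frame_id = 'lidar_mount'   # fixed link from the URDF
#   t.child_frame_id = 'laser'          # the rotating sensor frame
#   (t.transform.rotation.x, t.transform.rotation.y,
#    t.transform.rotation.z, t.transform.rotation.w) = \
#       yaw_to_quaternion(read_motor_angle())
#   br.sendTransform(t)
```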





This example is nice, but it doesn't really seem to be related to the question. The OP has a single-beam (point) lidar, but this example shows a line scanner.

ahendrix ( 2017-12-09 13:26:38 -0600 )
