
[tf] Different laser position for obstacle avoidance

asked 2016-10-13 20:23:07 -0500

Jam

updated 2016-10-31 20:32:23 -0500

Hi all,
Currently I am following this tutorial to implement a wall-following algorithm.
During the wall-following stage, I need to do obstacle avoidance.
When the laser is placed at the center of the robot,
it is simple to tell that the robot has hit an obstacle: one of the laser ranges is smaller than the robot radius.

But when the laser is moved forward about 10 cm (because of the robot design),
it becomes difficult to calculate the obstacle distance.

[image: different laser position.jpg]

I have some simple ideas for this problem:
(1) correct every laser ray by multiplying by some coefficients (sin, cos, ...)
- CPU load too high?
(2) use another sensor (ultrasonic, IR, etc.) mounted on the side of the robot
- may have some blind range?
(3) build a custom costmap to calculate the obstacle distance from the robot
- the resolution would need to be 0.5 cm; is that possible?

Does anyone know how to do obstacle avoidance when the laser is not at the center of the robot?
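For idea (1), the per-ray correction is just the law of cosines, so the CPU cost is a few multiplications per ray. A minimal sketch in plain Python (no ROS), assuming the 10 cm forward offset from the question; the function name is mine, not from any library:

```python
import math

def range_from_center(r, theta, offset=0.10):
    """Distance from the robot center to the point a laser ray hits.

    r      -- range reported by the laser (m)
    theta  -- ray angle in the laser frame (rad), 0 = straight ahead
    offset -- how far the laser sits in front of the robot center (m)

    The obstacle in the laser frame is (r*cos(theta), r*sin(theta)).
    Shifting by the offset along x gives its position in the robot
    frame; the Euclidean norm reduces to the law of cosines.
    """
    return math.sqrt(r * r + offset * offset + 2.0 * r * offset * math.cos(theta))
```

For example, a ray straight ahead (theta = 0) with r = 0.5 m ends 0.6 m from the center, while a ray straight behind (theta = pi) ends 0.4 m from it, so you would compare this corrected value (not the raw range) against the robot radius.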



Were you able to compile the code from the tutorial? I keep getting errors with the #i, which also looks very weird to me.

gijsje170  ( 2016-10-31 06:43:34 -0500 )

The code has some "style" issues; after correction it should be fine:
rangeslink -> ranges[i]
diffE = (distMin - wallDistance) - r -> diffE = (distMin - wallDistance) - e

Jam  ( 2016-10-31 20:36:39 -0500 )

Hi, thanks for your response. I was able to compile the code, but it just keeps driving in circles. Maybe the FOV of my Kinect is just too small, at only 58 degrees?

gijsje170  ( 2016-11-01 04:59:48 -0500 )

Hi, @gijsje170
The tutorial scenario is a 360° laser at the center top of the robot,
which it uses to calculate the distance between the robot and the wall.
So if you want to use a Kinect, it might have to be set to face the wall.
Also, with some code modification, you should be able to do the wall-following algorithm.

Jam  ( 2016-11-01 23:50:33 -0500 )

1 Answer


answered 2016-10-13 23:32:45 -0500

Shay

Well, you may need to check the tf package.

tf is a package that lets the user keep track of multiple coordinate frames over time. tf maintains the relationship between coordinate frames in a tree structure buffered in time, and lets the user transform points, vectors, etc. between any two coordinate frames at any desired point in time.

For example, the origin of the base_link frame is at the center of the robot, and laser_frame is 10 cm in front of base_link. tf can maintain the relationship between base_link and laser_frame. When you get the obstacle distance to the laser, you can calculate the distance to your robot center by listening to the tf base_link->laser_frame. You can set up the tf with a simple static_transform_publisher; for example, in the launch file you can add this:

       <node pkg="tf" type="static_transform_publisher" name="base_to_laser" args="0.1 0 0 0 0 0 base_link laser_frame 50" />

Then you need to write a ROS node that listens to the tf base_link->laser_frame and calculates what you want.
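To illustrate what that node ends up computing, here is the base_link->laser_frame transform from the launch snippet above (0.1 m forward, no rotation) applied by hand as a 2D homogeneous transform, in plain Python. The helper names are mine for illustration; a real node would obtain the transform from a tf listener instead of hardcoding it:

```python
import math

def make_transform(x, y, yaw):
    """Build a 3x3 homogeneous transform for a 2D translation (x, y) and rotation yaw."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, x],
            [s,  c, y],
            [0,  0, 1]]

def apply_transform(T, px, py):
    """Map a point from the child frame (laser_frame) into the parent frame (base_link)."""
    return (T[0][0] * px + T[0][1] * py + T[0][2],
            T[1][0] * px + T[1][1] * py + T[1][2])

# base_link -> laser_frame: 0.1 m forward, matching the static_transform_publisher args
T = make_transform(0.1, 0.0, 0.0)

# An obstacle the laser sees 0.5 m straight ahead of itself ...
ox, oy = apply_transform(T, 0.5, 0.0)
# ... is 0.6 m from the robot center.
dist = math.hypot(ox, oy)
```

The same matrix multiply works for every ray of the scan, so once the node has the transform it can convert the whole LaserScan into base_link coordinates and compare distances against the robot radius there.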



Question Tools

1 follower


Asked: 2016-10-13 20:23:07 -0500

Seen: 305 times

Last updated: Oct 31 '16