
# [tf]different laser position for obstacle avoidance

Hi all,
Currently I am following this tutorial to implement a wall-following algorithm.
During the wall-following stage, I need to do obstacle avoidance.
When the laser is mounted at the center of the robot,
it is simple to detect that the robot hits an obstacle: one of the laser ranges becomes smaller than the robot radius.

But when the laser is moved about 10 cm forward (because of the robot design),
it becomes difficult to calculate the obstacle distance from the robot center.

(image: different laser position.jpg)

(1) Correct every laser ray with some trigonometric coefficients (sin, cos, ...)
(2) Use another sensor (ultrasonic, IR, etc.) mounted on the side of the robot
- may have some blind range?
(3) Build a custom costmap to calculate the obstacle distance from the robot
- the resolution would need to be 0.5 cm; is that possible?
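Option (1) is straightforward geometry: shift each ray's endpoint by the laser's forward offset and measure from the robot center. A minimal sketch, assuming the laser sits 10 cm ahead of the center; the function name, the scan parameters, and the example ranges below are hypothetical, not from the tutorial:

```python
import math

def min_distance_from_center(ranges, angle_min, angle_increment, offset=0.10):
    """For each ray, convert (range, angle) in the laser frame to a point,
    shift it by the laser's forward offset, and measure from the robot center."""
    best = float("inf")
    for i, r in enumerate(ranges):
        if math.isinf(r) or math.isnan(r):
            continue  # no return on this ray
        theta = angle_min + i * angle_increment
        # Endpoint in the laser frame, translated into the robot-center frame
        x = r * math.cos(theta) + offset
        y = r * math.sin(theta)
        best = min(best, math.hypot(x, y))
    return best

# A 0.5 m return straight ahead is 0.6 m from the robot center,
# but a 0.5 m return straight behind is only 0.4 m from it.
closest = min_distance_from_center([0.5, 0.5], angle_min=0.0,
                                   angle_increment=math.pi)  # -> 0.4
```

You would then compare `closest` against the robot radius instead of comparing raw ranges, which is exactly where the centered-laser shortcut breaks down.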

Does anyone know how to do obstacle avoidance when the laser is not mounted at the center of the robot?
Thanks!


Were you able to compile the code from the tutorial? I keep getting errors with the #i, which also looks very weird to me.

( 2016-10-31 06:43:34 -0600 )

@gijsje170
The code has some "style" issues; after correction it should be fine.
For example:
diffE = (distMin - wallDistance) - r -> diffE = (distMin - wallDistance) - e

( 2016-10-31 20:36:39 -0600 )

Hi, thanks for your response. I was able to compile the code, but the robot just keeps driving in circles. Maybe the FOV of my Kinect is just too small, at only 58 degrees?

( 2016-11-01 04:59:48 -0600 )

Hi @gijsje170,
The tutorial scenario is a 360° laser mounted at the center top of the robot,
which is used to calculate the distance between the robot and the wall.
So if you want to use a Kinect, it might need to be set to face the wall.
With some code modification, you should still be able to implement the wall-following algorithm.

( 2016-11-01 23:50:33 -0600 )


Well, you may need to check the tf package.

tf is a package that lets the user keep track of multiple coordinate frames over time. tf maintains the relationship between coordinate frames in a tree structure buffered in time, and lets the user transform points, vectors, etc between any two coordinate frames at any desired point in time.

For example, say the origin of the base_link frame is at the center of the robot, and laser_frame is 10 cm in front of base_link. tf can maintain the relationship between base_link and laser_frame. When you get the obstacle distance from the laser, you can calculate the distance to your robot center by listening to the base_link->laser_frame transform. You can set up the transform with a simple static_transform_publisher; for example, you can add this to your launch file:

    <launch>
      <node pkg="tf" type="static_transform_publisher" name="base_to_laser"
            args="0.1 0 0 0 0 0 base_link laser_frame 50" />
    </launch>


And then you need to write a ROS node that listens to the base_link->laser_frame transform and calculates what you want.
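Once the node has looked up the transform, applying it is one homogeneous-matrix multiply. A sketch of just that math in plain Python (no ROS required), assuming the static transform published above (translation (0.1, 0, 0), no rotation); the obstacle point is a hypothetical laser reading:

```python
import numpy as np

# Homogeneous transform that maps points expressed in laser_frame into
# base_link coordinates, matching the static publisher above:
# translation (0.1, 0, 0), identity rotation.
T_base_laser = np.array([
    [1.0, 0.0, 0.0, 0.1],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Hypothetical reading: an obstacle 0.3 m directly to the left of the laser.
obstacle_in_laser = np.array([0.0, 0.3, 0.0, 1.0])

# Express the point in base_link, then measure from the robot center.
obstacle_in_base = T_base_laser @ obstacle_in_laser
distance_from_center = np.linalg.norm(obstacle_in_base[:3])  # sqrt(0.1^2 + 0.3^2)
```

In a real node you would get the translation and a quaternion from `tf.TransformListener.lookupTransform` and build the matrix with `tf.transformations`, rather than hard-coding it as done here.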
