
Merge the output of an ultrasound sensor with Hokuyo laser output for navigation

asked 2015-04-01 16:37:17 -0500 by Naman

updated 2015-04-01 18:39:37 -0500

Hi all,

I have a mobile robot and I would like it to navigate around the building. I already have a 2D map of the building, rotary encoders for odometry, and an IMU/UWB for localization. I will be using a Hokuyo laser sensor and ultrasound sensors for navigation: the Hokuyo will be the main navigation sensor, and the ultrasound sensors will detect obstacles the Hokuyo cannot see (overhangs, etc.). So my question is: how can I merge the output of the ultrasound sensors, which is of type sensor_msgs/Range, with the Hokuyo output, which is of type sensor_msgs/LaserScan?
I think one way would be to convert each Range message to a LaserScan message and then modify the original LaserScan from the Hokuyo to incorporate it, producing a final merged LaserScan to use for navigation. But I don't know how hard that will be, especially merging two LaserScan messages into one. Is there another (easier or better) way to accomplish this? Or, if this is the only way, what is the best way to go about it?
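For concreteness, here is a minimal sketch of that conversion idea, assuming the sonar publishes sensor_msgs/Range on a /sonar topic (the topic names and the ray count are placeholders, not anything from my actual setup): each Range reading is republished as a small fan of identical rays spread across the sonar's cone.

#!/usr/bin/env python
# Hypothetical sketch: republish each sensor_msgs/Range reading as a
# sensor_msgs/LaserScan fanned across the sonar's cone. The topic names
# (/sonar, /sonar_scan) and NUM_RAYS are placeholder assumptions.
import rospy
from sensor_msgs.msg import Range, LaserScan

NUM_RAYS = 5  # arbitrary: spread a few identical rays over the cone

def callback(rng):
    scan = LaserScan()
    scan.header = rng.header  # keep the sonar's frame_id and timestamp
    scan.angle_min = -rng.field_of_view / 2.0
    scan.angle_max = rng.field_of_view / 2.0
    scan.angle_increment = rng.field_of_view / (NUM_RAYS - 1)
    scan.range_min = rng.min_range
    scan.range_max = rng.max_range
    scan.ranges = [rng.range] * NUM_RAYS  # same distance for every ray
    pub.publish(scan)

rospy.init_node('range_to_scan')
pub = rospy.Publisher('/sonar_scan', LaserScan, queue_size=10)
rospy.Subscriber('/sonar', Range, callback)
rospy.spin()

Even with this, the hard part remains: interleaving these rays with the Hokuyo scan, since the two sensors have different frames and timestamps.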

Thanks in advance.
Naman Kumar


2 Answers


answered 2015-04-02 01:18:40 -0500 by Humpelstilzchen

updated 2015-04-06 10:23:27 -0500

You could just use a range sensor layer: http://wiki.ros.org/range_sensor_layer

The idea is that the laser adds to the obstacle layer as usual and the sonars add to a separate range sensor layer; both are then combined into the costmap.

Of course, you need to switch your configuration to layered costmaps first.

My global_costmap_params.yaml (partial):

plugins:
  - {name: static_layer, type: 'costmap_2d::StaticLayer'}
  - {name: obstacle_layer, type: 'costmap_2d::ObstacleLayer'}
  - {name: inflation_layer, type: 'costmap_2d::InflationLayer'}

My local_costmap_params.yaml (partial):

plugins:
  - {name: obstacle_layer, type: 'costmap_2d::ObstacleLayer'}
  - {name: sonar_layer,   type: 'range_sensor_layer::RangeSensorLayer'}
  - {name: inflation_layer, type: 'costmap_2d::InflationLayer'}

sonar_layer:
  topics: ["/sonar_forward_lower"]
  no_readings_timeout: 1.0

Comments

Thanks @Humpelstilzchen! Just one more thing: why did you use range_sensor_layer only in the local costmap and not in the global costmap? It could also be used in the global costmap if we want sonar information for global planning, right?

Naman ( 2015-04-07 09:32:50 -0500 )

Sure, it can be used. I decided against it because of the sensor's high inaccuracy (it only reports the nearest obstacle in its field of view).

Humpelstilzchen ( 2015-04-07 17:51:52 -0500 )

Thanks @Humpelstilzchen! I am facing a problem when I use the ultrasound sensor for global planning with range_sensor_layer. If possible, can you look at http://answers.ros.org/question/20680... ?

Naman ( 2015-04-08 08:28:46 -0500 )

@Humpelstilzchen, do you know how to merge an ultrasound sensor into the local_costmap of a TurtleBot3?

Here is local_costmap_params.yaml

local_costmap:
  global_frame: odom
  robot_base_frame: base_footprint

  update_frequency: 10.0
  publish_frequency: 10.0
  transform_tolerance: 0.5

  static_map: false
  rolling_window: true
  width: 3
  height: 3
  resolution: 0.05

And here is costmap_common_params_burger.yaml

obstacle_range: 3.0
raytrace_range: 3.5

footprint: [[-0.105, -0.105], [-0.105, 0.105], [0.041, 0.105], [0.041, -0.105]]

robot_radius: 0.105

inflation_radius: 1.0
cost_scaling_factor: 3.0

map_type: costmap
observation_sources: scan
scan: {sensor_frame: base_scan, data_type: LaserScan, topic: scan, marking: true, clearing: true}

Would you please assist?

Kaveh ( 2022-03-05 13:08:58 -0500 )

You should create a new question for this.

Humpelstilzchen ( 2022-03-06 02:25:16 -0500 )

answered 2015-04-02 00:16:16 -0500 by 2ROS0

I think your best bet is to convert sensor_msgs/Range to sensor_msgs/PointCloud, which is a much more favorable type for navigation. Something similar has been implemented in this package.
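As an illustration of that idea (a minimal sketch, not the linked package itself): each Range reading becomes a one-point cloud along the sensor's x axis, expressed in the sonar's own frame. The /sonar and /sonar_cloud topic names are assumptions.

#!/usr/bin/env python
# Hypothetical sketch: republish each sensor_msgs/Range reading as a
# single-point sensor_msgs/PointCloud in the sonar's frame.
# Topic names are placeholder assumptions.
import rospy
from sensor_msgs.msg import Range, PointCloud
from geometry_msgs.msg import Point32

def callback(rng):
    cloud = PointCloud()
    cloud.header = rng.header  # same frame_id and stamp as the Range reading
    cloud.points = [Point32(x=rng.range, y=0.0, z=0.0)]
    pub.publish(cloud)

rospy.init_node('range_to_cloud')
pub = rospy.Publisher('/sonar_cloud', PointCloud, queue_size=10)
rospy.Subscriber('/sonar', Range, callback)
rospy.spin()

The costmap can then consume this as a PointCloud observation source, with TF handling the transform out of the sonar frame.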

Alternatively, you do not need to merge the data for your navigation system at all. You can run navigation logic for both types of sensor input in parallel and have a simple state machine handle the higher-level decision making.


Comments

Thanks @2ROS0. But I think using PointCloud for navigation instead of LaserScan will be more computationally expensive, right? Thanks @Humpelstilzchen! Do you know of any reference that explains the range sensor layer in detail, e.g., how to integrate it with the navigation stack and use it alongside a laser sensor?

Naman ( 2015-04-03 08:58:25 -0500 )
