
Navigation stack in a busy room

asked 2011-04-04 21:56:37 -0500

Daniel Stonier

Has anyone had any experience running the navigation stack in a room with a lot of dynamic obstacles (e.g. people in a crowded shop)? I'm looking for advice more in terms of its ability to localise rather than any concerns about obstacle avoidance/path planning.

We'll probably do some testing very shortly, but I thought it would be good to know where to concentrate our efforts in advance if others have had similar experiences.


Comments

Hey Daniel, could you share what you did to sort this situation out? I am trying to implement something similar and am wondering what to use. New to ROS :)

Usman Arif (2015-09-21 01:26:44 -0500)

2 Answers


answered 2011-04-04 22:38:59 -0500

In that type of environment you probably want the laser scanner or Kinect to be above head height, or pointing towards the ceiling. The SEEGRID robots have a similar issue with dynamic environments, and if you check their videos you can see that the cameras are pointing upwards. Even in very dynamic environments, things at higher elevations don't tend to change much from day to day.

If you're using a Kinect you could have it pointing forwards, in which case you can still use it for detecting and avoiding obstacles, but then also create a fake laser scan from the slice of the cloud a couple of meters above ground level, which can be fed into the navigation stack for localisation (a rough sketch of one way to do this is below).
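As a minimal sketch only: it assumes the cloud has already been transformed into a frame with z pointing up (e.g. base_link via tf), and the topic names, height band and range limits are placeholders you would need to adapt. The pointcloud_to_laserscan package does something along these lines if you'd rather not roll your own node.

    #!/usr/bin/env python
    # Sketch of a "fake laser" node: take a Kinect point cloud, keep only points
    # in a height band well above people's heads, and publish them as a LaserScan.
    # Assumes the cloud is expressed in a frame with z up (e.g. base_link);
    # topic names, the height band and the range limits are placeholders.
    import math
    import rospy
    from sensor_msgs.msg import LaserScan, PointCloud2
    from sensor_msgs import point_cloud2

    MIN_Z, MAX_Z = 1.8, 2.2           # height band (metres) above most people
    ANGLE_MIN, ANGLE_MAX = -0.5, 0.5  # roughly the Kinect's ~57 deg horizontal FOV
    NUM_BINS = 640

    def cloud_callback(cloud):
        increment = (ANGLE_MAX - ANGLE_MIN) / NUM_BINS
        ranges = [float('inf')] * NUM_BINS
        for x, y, z in point_cloud2.read_points(cloud, field_names=('x', 'y', 'z'),
                                                skip_nans=True):
            if not (MIN_Z <= z <= MAX_Z):
                continue                       # ignore everything outside the band
            angle = math.atan2(y, x)
            if not (ANGLE_MIN <= angle < ANGLE_MAX):
                continue
            i = int((angle - ANGLE_MIN) / increment)
            ranges[i] = min(ranges[i], math.hypot(x, y))  # closest return per bin

        scan = LaserScan()
        scan.header = cloud.header
        scan.angle_min, scan.angle_max = ANGLE_MIN, ANGLE_MAX
        scan.angle_increment = increment
        scan.range_min, scan.range_max = 0.45, 6.0        # rough Kinect limits
        scan.ranges = ranges
        scan_pub.publish(scan)

    if __name__ == '__main__':
        rospy.init_node('fake_high_scan')
        scan_pub = rospy.Publisher('high_scan', LaserScan, queue_size=1)
        rospy.Subscriber('cloud_in', PointCloud2, cloud_callback, queue_size=1)
        rospy.spin()

You could then point amcl at the resulting scan topic for localisation while the full cloud still feeds the costmaps for obstacle avoidance.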


answered 2011-04-07 15:11:04 -0500

clark

One point to add: in an environment with lots of moving objects or people, a tilting laser will most likely be unable to cope, because its mechanical tilt is not fast enough to keep up with the motion, so all you get is a distorted shape. In contrast, the Kinect has the potential to work in this situation: it involves no mechanically moving parts and can therefore capture environment changes at a reasonable rate. One possible approach is to apply an additional filter to the Kinect point clouds to remove the ground, people, etc. (not sure whether there is an existing package for this) before feeding them to the navigation module; a rough sketch of the idea is below.
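This is only a sketch of the filtering idea, not a ready-made package: a plain z-threshold stands in for proper ground-plane segmentation, the cloud is assumed to already be in a frame with z up, and the topic names and threshold are placeholders. Removing people reliably would need an actual detection or segmentation step rather than a threshold; pcl_ros also provides filter nodelets (e.g. a pass-through filter) that can do this kind of thing without writing code.

    #!/usr/bin/env python
    # Rough sketch of a pre-navigation cloud filter (assumed names/thresholds).
    # Drops points near the floor; the cloud is assumed to be in a frame with
    # z pointing up (e.g. base_link). Removing people would need a proper
    # detection/segmentation step, which this does not attempt.
    import rospy
    from sensor_msgs.msg import PointCloud2
    from sensor_msgs import point_cloud2

    GROUND_Z = 0.05   # points below this height (metres) are treated as floor

    def cloud_callback(cloud):
        kept = [(x, y, z)
                for x, y, z in point_cloud2.read_points(
                    cloud, field_names=('x', 'y', 'z'), skip_nans=True)
                if z > GROUND_Z]
        # Rebuild a cloud from the surviving xyz points and republish it.
        cloud_pub.publish(point_cloud2.create_cloud_xyz32(cloud.header, kept))

    if __name__ == '__main__':
        rospy.init_node('cloud_prefilter')
        cloud_pub = rospy.Publisher('cloud_filtered', PointCloud2, queue_size=1)
        rospy.Subscriber('cloud_in', PointCloud2, cloud_callback, queue_size=1)
        rospy.spin()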

