
Hello,

PLEASE NOTE THIS IS NOT MY HOMEWORK, I AM JUST ASKING FOR DIRECTIONS

I am quite new to ROS. I'm working on a research project where we have to move a mid-sized vehicle (~1 m² footprint) from point A to point B along a curved outdoor track, using ROS Kinetic on Ubuntu 16.04 LTS.

I considered using OpenCV for edge/line detection, but the track is very wide, so there aren't many reference points, and there are no lanes either.

I have recently been looking into the ROS navigation stack as well as SLAM. They looked very promising until I realized they are probably better suited for indoor use (correct me if I am wrong). These are the sensors I have:

1. Doppler sensor (50m range, noise resistant)
2. GPS INS (up to 2cm position accuracy)
3. LEDDAR VU8 unit (20 degrees beam, up to 185m range, not a 360 LIDAR)
4. Many ultrasonic proximity detectors (~50 degrees beam, 7m range)
5. Stereoscopic cameras

So that's my sensor payload, and I have them all integrated into ROS. I am, however, slightly confused about where to go from here. I understand how SLAM and then the navigation stack work, but they use a 360-degree laser scanner for mapping/localisation, and all the examples I have seen are done indoors.

The only idea I currently have is to use the GPS INS sensor to follow a pre-recorded set of waypoints; however, that wouldn't account for any changes in the environment (like the obstacle avoidance you get from SLAM/the navigation stack).
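(For concreteness, that waypoint-following idea looks roughly like this in plain Python; the kinematics, gains, and waypoints below are invented for illustration. On the real vehicle the pose would come from the GPS INS and the command would go out over the CAN bus.)

```python
import math

def heading_error(pose, waypoint):
    """Signed angle (rad) from the vehicle's heading to the waypoint."""
    x, y, yaw = pose
    wx, wy = waypoint
    bearing = math.atan2(wy - y, wx - x)
    err = bearing - yaw
    return math.atan2(math.sin(err), math.cos(err))  # wrap to [-pi, pi]

def follow(pose, waypoints, speed=1.0, dt=0.1, tol=0.5, gain=1.5, max_steps=10000):
    """Drive through waypoints with a simple proportional steering law."""
    x, y, yaw = pose
    for wx, wy in waypoints:
        for _ in range(max_steps):
            if math.hypot(wx - x, wy - y) <= tol:
                break  # close enough, move on to the next waypoint
            yaw += gain * heading_error((x, y, yaw), (wx, wy)) * dt
            x += speed * math.cos(yaw) * dt
            y += speed * math.sin(yaw) * dt
    return x, y, yaw

# Toy run: drive 5 m east, then turn and drive to (5, 5).
x, y, yaw = follow((0.0, 0.0, 0.0), [(5.0, 0.0), (5.0, 5.0)])
```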

I am not exactly sure where to start looking. Can it be done with the ROS navigation stack? Are there any other navigation packages that I can research?

Again, I am not asking for a solution; I would be extremely grateful if any of you could provide any links, sources, books, suggestions or directions.

Thanks!



To answer your direct question: Yes. There are other packages to use for navigation, but the Navigation Stack is still a good way to start, even if you will need to move to a few different nodes to finish your project. I think you'll be frustrated if you start off trying to create a custom solution for your specific use case. I really think it's better to gain some experience with a configuration that is known to work. Use the Navigation Stack as your "hello world" app. Get the robot working and tune it to give good results. This could be done indoors with a temporary 360 lidar if needed, but you could also use the stereo cameras to get a laser scan message with some effort.

The GPS is good enough that you could use the robot_localization package instead of AMCL for localization. I'm assuming that you have encoders on the drive motors and that you are using wheels, but even if you don't, it may be OK. http://wiki.ros.org/robot_localization
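As a rough sketch of what that looks like (parameter names follow the robot_localization wiki; all topic names below are placeholders for your setup), an ekf_localization_node config fusing wheel odometry, IMU, and GPS might be:

```yaml
# Sketch of an ekf_localization_node config.  The GPS enters via
# navsat_transform_node's /odometry/gps output; topic names are invented.
frequency: 30
two_d_mode: true          # outdoor ground vehicle; ignore z, roll, pitch

odom0: /wheel/odometry
odom0_config: [false, false, false,   # x, y, z
               false, false, false,   # roll, pitch, yaw
               true,  true,  false,   # vx, vy, vz
               false, false, true,    # vroll, vpitch, vyaw
               false, false, false]   # ax, ay, az

imu0: /imu/data
imu0_config: [false, false, false,
              false, false, true,     # absolute yaw
              false, false, false,
              false, false, true,     # yaw rate
              true,  false, false]    # forward acceleration

odom1: /odometry/gps      # produced by navsat_transform_node
odom1_config: [true,  true,  false,   # absolute x, y from GPS
               false, false, false,
               false, false, false,
               false, false, false,
               false, false, false]
```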

For obstacle detection, there really is no substitute for a 360 Lidar, but you have enough sensors that it could be made to work. You'll use costmaps to integrate the different sensor data into a form that the Navigation Stack/move_base can use for obstacle detection and avoidance.
http://wiki.ros.org/costmap_2d/Tutori...
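As a rough illustration, a costmap common-parameters file wiring several sensors into the obstacle layer might look like this. Topic and value choices here are invented; the LEDDAR and sonars are assumed to be republished as LaserScan/PointCloud2, since the obstacle layer does not consume sensor_msgs/Range directly (there is a separate range_sensor_layer for that):

```yaml
# Sketch of costmap_2d common params feeding multiple sensors into the
# obstacle layer.  All topic names and ranges are placeholders.
obstacle_range: 6.0       # mark obstacles out to 6 m
raytrace_range: 8.0       # clear free space out to 8 m
robot_radius: 0.6         # ~1 m^2 vehicle, plus margin

observation_sources: leddar_scan sonar_cloud stereo_points

leddar_scan:
  topic: /leddar/scan
  data_type: LaserScan
  marking: true
  clearing: true

sonar_cloud:
  topic: /sonar/points
  data_type: PointCloud2
  marking: true
  clearing: false         # wide sonar beams tend to clear too aggressively

stereo_points:
  topic: /stereo/points2
  data_type: PointCloud2
  marking: true
  clearing: true
```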


Thank you for your answer! Sadly, I do not have a 360 LIDAR unit. I can't really gather much mapping data with a stationary (non-rotating) LIDAR, can I (and the same goes for the cameras)? The way I understand the navigation stack, it always needs a map, which I can't get without a 360 LIDAR, or can I?

( 2018-07-06 13:23:02 -0600 )

And yes, I am using wheels; it's a 4-wheel vehicle with its own vehicle control system controlled via CAN bus, so controlling the actuators, motors and servos is as easy as sending a single command.

( 2018-07-06 13:24:53 -0600 )

You don't have to do SLAM. You could simply draw a map if the environment is simple. It may actually be required anyway, as it sounds like the path you need to follow doesn't contain any landmarks the lidar would see. ...continued...

( 2018-07-06 13:25:58 -0600 )

The map.yaml file you'll learn about in the tutorials sets the resolution of the map, and you can draw the map itself in GIMP. I would probably use maps.google to get an overhead view of the area and trace over it to make the map.
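For reference, a map_server map.yaml for such a hand-drawn map might look like this (filename and values illustrative):

```yaml
# Sketch of a map_server map.yaml for a map traced by hand in GIMP.
image: track.pgm          # grayscale image of the track (placeholder name)
resolution: 0.1           # metres per pixel
origin: [0.0, 0.0, 0.0]   # [x, y, yaw] of the lower-left pixel in the map frame
occupied_thresh: 0.65     # pixels darker than this count as obstacles
free_thresh: 0.196        # pixels lighter than this count as free space
negate: 0
```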

( 2018-07-06 13:28:17 -0600 )

So, as far as I understand: I do the mapping by drawing it manually, the localisation with the GPS sensor, and obstacle avoidance with the ultrasonics/LIDAR? Drawing the map is part of the costmap_2d package, am I correct?

( 2018-07-06 13:33:17 -0600 )

#1 - yes

#2 - Not really. Costmap will use the map you provide, in conjunction with sensor data, to build a cost map that move-base will use for planning.

( 2018-07-06 19:22:44 -0600 )

Have a look at this repo: https://github.com/nickcharron/waypoi... . It's specifically for doing outdoor GPS waypoint navigation with a Clearpath Husky robot, but it's a nice, generic example of using the robot_localization package (ekf_localization_node, navsat_transform_node) to fuse GPS, IMU, and wheel odometry. You can run the whole thing in simulation, just to get a feel for how the whole thing hangs together. With all of this stuff, the devil is in the (configuration) details, and that repo has a nice, working set of config files for outdoor waypoint navigation. Makes a great jumping-off point.


( 2018-07-09 02:14:06 -0600 )

Okay, you've got a really nice GPS that'll make your life a lot easier. Do you have a compass and IMU? Either of these sensors could easily give you heading and body-tilt information, effectively solving the localisation part of the problem.

You should be able to build a very accurate localisation estimate from the above sensors, which means you don't necessarily have to perform full SLAM. If your route is approximately defined using GPS waypoints, then you could use a reactive route-planning approach using just the stereo cameras. Here you would process the two images to get a point cloud, then use that cloud to produce a DEM (digital elevation map), which can then be used to fairly easily determine which areas can be safely driven. This will give you a path for the robot to follow.
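The DEM step can be sketched in plain Python. Everything here is illustrative (the function names, cell size, and the toy point cloud are invented): bin the stereo points into a grid, then call a cell drivable if the height spread inside it is small.

```python
import math

def build_dem(points, cell=0.25):
    """Bin (x, y, z) points into a grid, keeping (min, max) height per cell."""
    dem = {}
    for x, y, z in points:
        key = (int(math.floor(x / cell)), int(math.floor(y / cell)))
        if key in dem:
            lo, hi = dem[key]
            dem[key] = (min(lo, z), max(hi, z))
        else:
            dem[key] = (z, z)
    return dem

def drivable(dem, max_step=0.15):
    """A cell is traversable if the height spread within it is small."""
    return {key for key, (lo, hi) in dem.items() if hi - lo <= max_step}

# Toy cloud: flat ground along x, plus one 0.5 m tall obstacle near x = 1.1 m.
cloud = [(i * 0.05, 0.0, 0.0) for i in range(40)]
cloud += [(1.1, 0.0, 0.5)]
dem = build_dem(cloud)
safe = drivable(dem)
```

The obstacle lands in grid cell (4, 0), which gets excluded from the drivable set while the flat cells remain.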

Your remaining lower-resolution sensors can be used for reactive collision avoidance.

Hope this gets you started.


£2600 for the GPS sensor :) The IMU is integrated into the GPS sensor, and I am able to read everything the IMU senses. Could you elaborate a little further on how the GPS and stereo cameras are supposed to work together? Also, why do I need an elevation map if I can just measure tilt with the IMU?

( 2018-07-06 13:16:27 -0600 )


Also, is there any way to create a GPS map for an area? For example, take a square snapshot of London from Google Maps and convert it into a grid of GPS coordinates with x/y dimensions, something like 10000x10000 as an example?

( 2018-07-06 13:28:34 -0600 )
1. The elevation map records the height of the terrain at a grid of points; it's a simple form of 3D map. Your tilt sensor simply measures the tilt of the vehicle at the current time. I don't see why you wouldn't need both.
( 2018-07-07 02:14:53 -0600 )
1. You don't need to store all the GPS coordinates (I assume you mean latitude and longitude angles) for a grid. There is a fairly simple mathematical function to do the conversion from local coordinates (x, y in metres) to lat/long and vice versa.
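That conversion can be sketched with a flat-Earth (equirectangular) approximation, which is accurate enough over a track a few kilometres across. The function names here are invented, and a spherical Earth radius is assumed; for higher precision you'd use a proper WGS-84/UTM conversion, as navsat_transform_node does.

```python
import math

EARTH_R = 6371000.0  # mean Earth radius in metres (spherical approximation)

def latlon_to_xy(lat, lon, origin_lat, origin_lon):
    """Local (x east, y north) in metres from an origin point."""
    x = math.radians(lon - origin_lon) * EARTH_R * math.cos(math.radians(origin_lat))
    y = math.radians(lat - origin_lat) * EARTH_R
    return x, y

def xy_to_latlon(x, y, origin_lat, origin_lon):
    """Inverse of latlon_to_xy."""
    lat = origin_lat + math.degrees(y / EARTH_R)
    lon = origin_lon + math.degrees(x / (EARTH_R * math.cos(math.radians(origin_lat))))
    return lat, lon

# Round trip: 100 m east, 50 m north of a point in London.
lat, lon = xy_to_latlon(100.0, 50.0, 51.5074, -0.1278)
x2, y2 = latlon_to_xy(lat, lon, 51.5074, -0.1278)
```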
( 2018-07-07 02:20:11 -0600 )

@Hypomania Hello, I recently used GPS for Husky simulation navigation. I edited the GPS waypoints myself, read them in, and let it drive, but the Husky in the simulator did not move, and the terminal displayed an error message. I don't know where the problem is.

process[outdoor_waypoint_nav/gps_waypoint-1]: started with pid [12814]
[outdoor_waypoint_nav/gps_waypoint-1] killing on exit
[outdoor_waypoint_nav/gps_waypoint-1] escalating to SIGTERM



You should be asking this as a new question, not as an answer to a previous question.

( 2019-05-25 15:24:49 -0600 )