
Obstacle avoidance algorithm for turtlebot

asked 2013-03-17 02:05:20 -0600 by Devasena Inupakutika

updated 2014-01-28 17:15:45 -0600 by ngrennan

I need information on using RGBD-SLAM for obstacle avoidance on the TurtleBot. If I launch kinect+rgbdslam.launch in one terminal after launching openni.launch in another, and then use rostopic pub on cmd_vel to move the TurtleBot, will it detect and avoid obstacles as it drives? Is an obstacle avoidance algorithm already implemented in RGBD-SLAM?
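For reference, driving the base from the command line looks roughly like this. This is a sketch: the exact topic name is an assumption and varies by TurtleBot setup (it may be /cmd_vel_mux/input/teleop rather than /cmd_vel), and it needs a running ROS master and base driver.

```
# Drive forward at 0.2 m/s, publishing at 10 Hz
# (topic name assumed; adapt to your TurtleBot's velocity topic)
rostopic pub -r 10 /cmd_vel geometry_msgs/Twist '{linear: {x: 0.2, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.0}}'
```

Note that publishing velocities this way is open-loop: nothing in this command reacts to obstacles.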


3 Answers

answered 2013-03-17 15:26:49 -0600

No, RGBD-SLAM only provides localization and mapping. That is, it tries to build a map and figure out where the robot is within that map. What you're asking about is path planning. If you look at the diagram on this page, it's the move_base package that does that planning.
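For reference, once move_base is running with a map, a goal can be sent for a quick test by publishing a PoseStamped on move_base_simple/goal; move_base then plans a path that avoids obstacles in its costmaps. The frame and coordinates below are placeholders:

```
rostopic pub /move_base_simple/goal geometry_msgs/PoseStamped \
  '{header: {frame_id: "map"}, pose: {position: {x: 1.0, y: 0.5, z: 0.0}, orientation: {w: 1.0}}}'
```

For real applications the move_base actionlib interface is preferable, since it reports whether the goal was reached.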

I think this would be a good question to read: http://answers.ros.org/question/9712/how-to-use-rgbd-6d-slam-for-path-planning-and-navigation-with-kinect/.

-Jon


Comments

Hi Jon, Thanks a lot for the information. I have gone through the tutorials related to setting up and configuring navigation stack for robot. I just want to put it down here, please let me know if I am missing something here: I will be creating a separate package specific to the robot I am working

Devasena Inupakutika ( 2013-03-18 07:13:28 -0600 )

on, then create a robot configuration file and adapt the launch file to the sensor nodes my robot uses and their sensor_msgs types (point clouds for the Kinect, etc.), build it, and then try running the launch file for my robot's navigation stack. Is this how I should proceed?

Devasena Inupakutika ( 2013-03-18 07:16:34 -0600 )

I think so, pretty much. I don't know what you've done already, but it makes sense to first make sure you can drive the robot around. Then make sure you get the robot working with RVIZ. Then make sure you can get the laser scan working in RVIZ. Then gmapping. Then the autonomous navigation.

Jon Stephan ( 2013-03-18 16:54:14 -0600 )

If you jump right into the navigation stack, it will be difficult to debug any problems.

Jon Stephan ( 2013-03-18 16:54:47 -0600 )
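Jon's incremental checklist might translate into commands like the following. The package and launch file names here are illustrative assumptions (they differ between TurtleBot versions and ROS distributions), and each step should be verified before moving to the next:

```
roslaunch turtlebot_bringup minimal.launch        # 1. drive the robot around (teleop)
rosrun rviz rviz                                  # 2. visualize the robot model and TF in RViz
rostopic echo /scan                               # 3. confirm laser scans arrive (also add a LaserScan display in RViz)
rosrun gmapping slam_gmapping scan:=scan          # 4. build a map with gmapping
roslaunch turtlebot_navigation amcl_demo.launch map_file:=/path/to/map.yaml   # 5. autonomous navigation
```

Debugging one layer at a time this way makes it much easier to tell whether a failure is in the driver, TF, the sensor pipeline, or the navigation stack itself.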

Hi Jon, I have tried all the steps (laser scanning, TF, and publishing odometry; since I am using a Kinect rather than a planar laser, I used the point cloud to laser scan package for the laser scans) and got all of them working in RViz. Hence, I think I can now get SLAM working through

Devasena Inupakutika ( 2013-03-19 13:05:12 -0600 )

SLAM gMapping.

Devasena Inupakutika ( 2013-03-19 13:05:39 -0600 )

Hi Jon, for SLAM to work with the Kinect, I have created three launch files: one for kinect_laser to transform point cloud data to laser scan data, one for the link between the Kinect and base_link (a static_transform_publisher), and one for slam_gmapping with all the required parameters.

Devasena Inupakutika ( 2013-03-22 01:48:47 -0600 )
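A sketch of what those three launch elements could look like, combined into a single file. The node types, topic names, parameters, and transform values below are assumptions to adapt to the actual robot (in particular, pointcloud_to_laserscan has shipped as both a node and a nodelet depending on the ROS distribution):

```xml
<launch>
  <!-- 1. Convert the Kinect point cloud to a planar laser scan
       (package/type names vary by distro; topic is an assumption) -->
  <node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node" name="kinect_laser">
    <remap from="cloud_in" to="/camera/depth/points"/>
  </node>

  <!-- 2. Static transform between base_link and the camera frame
       (args: x y z yaw pitch roll parent child period_ms; values are placeholders) -->
  <node pkg="tf" type="static_transform_publisher" name="base_to_kinect"
        args="0.1 0.0 0.3 0 0 0 base_link camera_link 100"/>

  <!-- 3. 2D SLAM on the synthesized scan -->
  <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping">
    <param name="base_frame" value="base_link"/>
    <param name="odom_frame" value="odom"/>
  </node>
</launch>
```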

Everything worked fine, and I could save the map using map_saver. I think I can now do autonomous navigation on this known map. Thanks a lot for the help and the links for 2D SLAM with the Kinect.

Devasena Inupakutika ( 2013-03-22 01:51:00 -0600 )
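For readers following along, the map-saving step mentioned above is typically done with map_server's map_saver while gmapping is still running (the output name is a placeholder):

```
rosrun map_server map_saver -f my_map   # writes my_map.pgm and my_map.yaml
```

The resulting YAML file is what gets passed to the navigation stack's map_server when localizing against the saved map later.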
answered 2015-03-26 07:41:56 -0600 by sophye_turtlebot

Hi, I started working on RGBD-SLAM with a TurtleBot on ROS Hydro and Ubuntu 12.04. I followed this link to install RGBD-SLAM: http://felixendres.github.io/rgbdslam_v2/

Afterwards I got a GUI window exactly like the one in that link, with two messages at the bottom: "waiting for visual image on topic" and "waiting for depth image on topic". There is no picture; I think it is a problem with the topics in the launch files. Can someone help me and send me the launch files openni+rgbdslam.launch, openni.launch, and rgbdslam.launch?

answered 2014-06-12 23:48:05 -0600 by Francis Dom

Hi, I'm using an Occipital sensor (an RGB-D sensor like the Kinect) on a Parrot AR.Drone for navigation using a SLAM algorithm. What do I do for obstacle avoidance? Is it handled by the SLAM, or should I use additional sensors? And if I use additional sensors, how do I make them part of the SLAM algorithm?


Comments

This is not an answer to the original question. Please create a new question specific to your use-case and do not post questions as answers, as this is confusing to other readers.

Stefan Kohlbrecher ( 2014-06-13 05:24:52 -0600 )


Stats

Asked: 2013-03-17 02:05:20 -0600

Seen: 5,225 times

Last updated: Mar 26 '15