ar_kinect + Gmapping / SLAM on Turtlebot
Hi
A quick question regarding the ar_kinect package and the tutorial for gmapping and autonomous navigation on the turtlebot.
When running the autonomous navigation launch file 'amcl_demo.launch', it appears to set the following arguments to false:
rgb_processing, depth_registration and depth_processing.
Firstly, does anyone know why? If I set these to true, it fails to correctly load the map and fails to set the 'tf' from 'map' to, I think, 'odom' or 'base_link'.
The reason I ask is that I need these to be true (I think) in order to get the ar_kinect package to work correctly.
I have tried it with a standard 3dsensor.launch approach and ar_kinect does work, but not with the above approach where everything is set to 'false'.
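For reference, this is roughly the change I am talking about; it is only a sketch of how the 3dsensor.launch include inside amcl_demo.launch would look with the processing flags turned back on, with file paths and argument names assumed from the standard turtlebot_navigation / turtlebot_bringup layout:

<!-- Sketch only: 3dsensor.launch include with processing enabled.
     Paths and argument names are assumptions, not copied from my setup. -->
<include file="$(find turtlebot_bringup)/launch/3dsensor.launch">
  <arg name="rgb_processing"     value="true" />
  <arg name="depth_registration" value="true" />
  <arg name="depth_processing"   value="true" />
  <arg name="scan_topic"         value="/scan" />
</include>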
I was hoping to have a scenario where I can manually map a room, then write a python script / node which can initialise the pose of the turtlebot to a starting point and navigate it to a staging area, where I could use 'fergs' ar_kinect stuff as a means to better align / orientate it.
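Something along the lines of the sketch below is what I had in mind for that node: publish an initial pose estimate for AMCL on initialpose (the same message RViz's "2D Pose Estimate" tool sends) and then send a move_base goal via actionlib. The coordinates, frame names and node name are placeholders, not values from my actual setup.

#!/usr/bin/env python
# Sketch of a node that seeds AMCL with an initial pose and then drives
# the turtlebot to a staging area via move_base.
import rospy
import actionlib
from geometry_msgs.msg import PoseWithCovarianceStamped
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def main():
    rospy.init_node('stage_and_go')

    # Initial pose estimate for AMCL (assumed starting point at the map origin).
    init_pub = rospy.Publisher('initialpose', PoseWithCovarianceStamped, latch=True)
    init = PoseWithCovarianceStamped()
    init.header.frame_id = 'map'
    init.header.stamp = rospy.Time.now()
    init.pose.pose.position.x = 0.0
    init.pose.pose.position.y = 0.0
    init.pose.pose.orientation.w = 1.0
    init.pose.covariance[0] = 0.25    # x variance
    init.pose.covariance[7] = 0.25    # y variance
    init.pose.covariance[35] = 0.07   # yaw variance
    init_pub.publish(init)
    rospy.sleep(2.0)                  # give AMCL time to re-seed its particles

    # Navigate to the staging area where ar_kinect would take over alignment.
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 2.0   # placeholder staging coordinates
    goal.target_pose.pose.position.y = 1.0
    goal.target_pose.pose.orientation.w = 1.0
    client.send_goal(goal)
    client.wait_for_result()

if __name__ == '__main__':
    main()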
Any help or light on the subject is appreciated.
CdrWolfe
Asked by cdrwolfe on 2014-03-08 10:48:34 UTC
Answers
Does anybody have an idea of how to run amcl / autonomous mapping and octomapping at the same time?
Asked by cdrwolfe on 2014-03-10 14:48:25 UTC
Comments
Please don't use answers for comments. Add extra information to your question by editing it or commenting on it.
Asked by bit-pirate on 2014-03-11 14:20:37 UTC
Comments