Xtion - how to use depth data
Hi All,
I’ve finally got my Raspberry Pi to talk to the Xtion.
It’s successfully publishing the following messages and I can subscribe to them in rviz:
- /openni2_camera/depth/camera_info
- /openni2_camera/depth/image_raw
- /openni2_camera/rgb/camera_info
- /openni2_camera/rgb/image_raw
My question now is how do I best utilise this data to create maps and navigate the robot (I suspect there are several answers)? Would it be best to use depthimage_to_laserscan ( http://wiki.ros.org/depthimage_to_las... ) and feed that into gmapping? Will that lose data from the depth image?
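If you go the depthimage_to_laserscan route, a minimal launch sketch might look like the following. The node and parameter names come from the depthimage_to_laserscan wiki; the topic remappings and the `output_frame_id` value are assumptions based on the topics listed above, so adjust them to your setup:

```xml
<launch>
  <node pkg="depthimage_to_laserscan" type="depthimage_to_laserscan" name="depth_to_scan">
    <!-- Remap to the topics the Xtion driver is actually publishing -->
    <remap from="image" to="/openni2_camera/depth/image_raw"/>
    <remap from="camera_info" to="/openni2_camera/depth/camera_info"/>
    <!-- Number of pixel rows used to generate the scan -->
    <param name="scan_height" value="10"/>
    <!-- TF frame of the fake scan; must match your robot's TF tree -->
    <param name="output_frame_id" value="camera_depth_frame"/>
  </node>
</launch>
```

As for losing data: yes, by design. Only a horizontal band of `scan_height` rows of the depth image is used to build the scan, so everything above and below that band is discarded. For 2D mapping with gmapping that is usually acceptable and much cheaper than processing the full cloud.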
Or should I convert it into a point cloud (in which case, what should I use to map that data)?
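For the point-cloud route, each depth pixel is back-projected through the pinhole camera model using the intrinsics (fx, fy, cx, cy) published in camera_info. A minimal NumPy sketch of that maths, just to show what the conversion does — the function name and the toy values are purely illustrative, and in a real graph the depth_image_proc nodelets do this for you:

```python
import numpy as np

def depth_to_points(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (in metres) to an N x 3 point array
    using the pinhole model from the camera_info K matrix."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Tiny synthetic example: a 2x2 depth image with toy intrinsics
depth = np.array([[1.0, 2.0], [0.0, 4.0]])
pts = depth_to_points(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(pts.shape)  # (3, 3) -- the zero-depth pixel is dropped
```

In practice you would let depth_image_proc publish a sensor_msgs/PointCloud2 rather than doing this by hand, and then feed the cloud into a 3D mapping package such as octomap_server.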
Any advice would be great
Many Thanks
Mark
Some of the links I've been looking at:
http://answers.ros.org/question/18996...
Shouldn't there be depth_registered/points and depth/points topics? If not, I think the first step would be to get those working.
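Those points topics normally come from the depth_image_proc nodelets that openni2_launch starts, not from the bare openni2_camera driver node (which would explain why only the raw image and camera_info topics show up). A sketch of how to check, assuming you switch to the full openni2_launch pipeline (the `depth_registration` argument name is taken from its openni2.launch):

```
roslaunch openni2_launch openni2.launch depth_registration:=true
rostopic list | grep points
```

Note the launch-file pipeline publishes under the /camera namespace by default rather than /openni2_camera.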
Not sure - I haven't seen anything about those yet.
Hi,
I am using a Structure Sensor, which is a depth sensor. Can I use gmapping with this sensor on a Parrot drone running the PX4 autopilot?