Robotics StackExchange | Archived questions

Xtion - how to use depth data

Hi All,

I’ve finally got my Raspberry Pi to talk to the Xtion.

It’s successfully publishing the following messages and I can subscribe to them in rviz:

My question now is how do I best utilise this data to create maps and navigate the robot (I suspect there are several answers)? Would it be best to use depthimage_to_laserscan (http://wiki.ros.org/depthimage_to_laserscan) and feed that into gmapping? Will that lose data from the depth image?
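For reference, depthimage_to_laserscan only uses a horizontal band of the depth image (its scan_height parameter), so anything above or below that band is discarded. The sketch below is not the package's implementation, just a minimal illustration of the conversion it performs: take one row of the depth image and project it into a planar LaserScan via the pinhole model. The topic names are assumptions and depend on how the Xtion driver is launched.

```python
#!/usr/bin/env python
# Minimal sketch of the depth-row -> LaserScan idea (not the real package).
# Topic names below are assumptions; adjust to what your Xtion driver publishes.
import math

import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import CameraInfo, Image, LaserScan


class DepthRowToScan(object):
    def __init__(self):
        self.bridge = CvBridge()
        self.info = None
        self.pub = rospy.Publisher('scan', LaserScan, queue_size=1)
        rospy.Subscriber('camera/depth/camera_info', CameraInfo, self.on_info)
        rospy.Subscriber('camera/depth/image_raw', Image, self.on_depth)

    def on_info(self, msg):
        self.info = msg  # intrinsics: fx = K[0], cx = K[2]

    def on_depth(self, msg):
        if self.info is None:
            return
        depth = np.asarray(self.bridge.imgmsg_to_cv2(msg), dtype=np.float32)
        row = depth[depth.shape[0] // 2, :]      # only the middle row is kept
        if msg.encoding == '16UC1':
            row = row / 1000.0                   # mm -> m
        fx, cx = self.info.K[0], self.info.K[2]
        cols = np.arange(row.size)
        # Pinhole model: the ray through column u makes angle atan2(u - cx, fx)
        # with the optical axis. Image columns run left -> right while laser
        # angles increase counter-clockwise, so negate and reverse the order.
        angles = -np.arctan2(cols - cx, fx)
        ranges = row * np.sqrt(((cols - cx) / fx) ** 2 + 1.0)  # depth -> range along the ray

        scan = LaserScan()
        scan.header = msg.header                 # the real package also remaps the frame
        scan.angle_min = float(angles[-1])
        scan.angle_max = float(angles[0])
        scan.angle_increment = (scan.angle_max - scan.angle_min) / (row.size - 1)
        scan.range_min, scan.range_max = 0.45, 10.0
        scan.ranges = ranges[::-1].tolist()
        self.pub.publish(scan)


if __name__ == '__main__':
    rospy.init_node('depth_row_to_scan')
    DepthRowToScan()
    rospy.spin()
```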

Or should I convert it into a point cloud? (In which case, what should I use to map that data?)
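For the point-cloud route, depth_image_proc provides nodelets that do this conversion on the ROS side; the sketch below only illustrates the underlying back-projection (every valid depth pixel becomes an x, y, z point), again with assumed topic names. Note that gmapping itself expects a 2D LaserScan, so mapping a full 3D cloud would typically mean looking at something like octomap_server instead.

```python
#!/usr/bin/env python
# Sketch of the back-projection a depth -> point cloud conversion performs.
# Topic names are assumptions; adjust to what your Xtion driver publishes.
import numpy as np
import rospy
import sensor_msgs.point_cloud2 as pc2
from cv_bridge import CvBridge
from sensor_msgs.msg import CameraInfo, Image, PointCloud2


class DepthToCloud(object):
    def __init__(self):
        self.bridge = CvBridge()
        self.info = None
        self.pub = rospy.Publisher('points', PointCloud2, queue_size=1)
        rospy.Subscriber('camera/depth/camera_info', CameraInfo, self.on_info)
        rospy.Subscriber('camera/depth/image_raw', Image, self.on_depth)

    def on_info(self, msg):
        self.info = msg  # intrinsics: fx, fy, cx, cy from K

    def on_depth(self, msg):
        if self.info is None:
            return
        depth = np.asarray(self.bridge.imgmsg_to_cv2(msg), dtype=np.float32)
        if msg.encoding == '16UC1':
            depth /= 1000.0                      # mm -> m
        fx, fy = self.info.K[0], self.info.K[4]
        cx, cy = self.info.K[2], self.info.K[5]
        u, v = np.meshgrid(np.arange(depth.shape[1]), np.arange(depth.shape[0]))
        valid = np.isfinite(depth) & (depth > 0)  # drop NaN / zero readings
        z = depth[valid]
        x = (u[valid] - cx) * z / fx
        y = (v[valid] - cy) * z / fy
        cloud = pc2.create_cloud_xyz32(msg.header, np.column_stack((x, y, z)))
        self.pub.publish(cloud)


if __name__ == '__main__':
    rospy.init_node('depth_to_cloud')
    DepthToCloud()
    rospy.spin()
```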

Any advice would be great

Many Thanks

Mark

Some of the links I've been looking at:

http://answers.ros.org/question/189963/gmapping-problem/

http://wiki.ros.org/depth_image_proc

http://wiki.ros.org/gmapping

Asked by MarkyMark2012 on 2014-08-15 02:43:35 UTC

Comments

Shouldn't there be depth_registered/points and depth/points topics? If not, I think the first step would be to get these to work.

Asked by atp on 2014-08-15 08:58:00 UTC

Not sure - I haven't seen anything about those yet.

Asked by MarkyMark2012 on 2014-08-15 17:09:29 UTC
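If it isn't clear which topics the driver is actually publishing, a quick check along these lines may help. This is only a sketch; the /camera namespace is an assumption and depends on the launch file used (the same check can be done with rostopic on the command line).

```python
#!/usr/bin/env python
# Sketch: report which of the expected depth point-cloud topics are published.
# The '/camera' namespace is an assumption; adjust to your launch files.
import rospy

if __name__ == '__main__':
    rospy.init_node('depth_topic_check', anonymous=True)
    published = [name for name, _ in rospy.get_published_topics()]
    for wanted in ('/camera/depth/points', '/camera/depth_registered/points'):
        status = 'present' if wanted in published else 'missing'
        print('%s: %s' % (wanted, status))
```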

Hi,

I am using a Structure Sensor, which is a depth sensor. May I use gmapping with the sensor on a Parrot drone with PX4 Autopilot?

Asked by Francis Dom on 2014-10-26 07:28:17 UTC

Answers