Xtion - how to use depth data

asked 2014-08-15 02:43:35 -0500

MarkyMark2012

Hi All,

I’ve finally got my raspberry pi to talk to the Xtion

It’s successfully publishing the following messages and I can subscribe to them in rviz:

  • /openni2_camera/depth/camera_info
  • /openni2_camera/depth/image_raw
  • /openni2_camera/rgb/camera_info
  • /openni2_camera/rgb/image_raw

My question now is: how do I best utilise this data to create maps and navigate the robot (I suspect there are several answers)? Would it be best to use depthimage_to_laserscan ( http://wiki.ros.org/depthimage_to_las... ) and feed that into gmapping? Will that lose data from the depth image?
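For reference, a depthimage_to_laserscan node is usually wired up in a launch file by remapping its `image` and `camera_info` inputs to the depth topics. A sketch, assuming the topic names from the list above (the node/parameter names are from the wiki page; `camera_depth_frame` is a placeholder frame id you would replace with your own):

```xml
<launch>
  <!-- Convert the depth image into a fake laser scan for gmapping -->
  <node pkg="depthimage_to_laserscan" type="depthimage_to_laserscan"
        name="depth_to_scan">
    <remap from="image" to="/openni2_camera/depth/image_raw"/>
    <remap from="camera_info" to="/openni2_camera/depth/camera_info"/>
    <!-- Frame id published in the scan header; adjust to your TF tree -->
    <param name="output_frame_id" value="camera_depth_frame"/>
  </node>
</launch>
```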

Or should I convert it into a point cloud? (In which case, what should I use to map that data?)
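For what it's worth, the conversion itself is just the pinhole camera model applied to each depth pixel, using the intrinsics from the `camera_info` topic (depth_image_proc does this for you on the ROS side). A minimal numpy sketch, with made-up intrinsics for illustration:

```python
import numpy as np

def depth_to_pointcloud(depth_m, fx, fy, cx, cy):
    """Project a depth image (in metres) to an N x 3 point cloud
    using the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth_m > 0            # zero depth means "no reading"
    z = depth_m[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.column_stack((x, y, z))

# Toy 2x2 depth image; fx/fy/cx/cy here are hypothetical values --
# in practice read them from /openni2_camera/depth/camera_info.
depth = np.array([[1.0, 0.0],
                  [2.0, 1.0]])
cloud = depth_to_pointcloud(depth, fx=570.3, fy=570.3, cx=0.5, cy=0.5)
print(cloud.shape)  # (3, 3) -- three valid pixels, xyz each
```

Note that depthimage_to_laserscan effectively keeps only a horizontal band of these points, which is why feeding gmapping a scan discards most of the depth image.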

Any advice would be great

Many Thanks

Mark

Some of the links I've been looking at:

http://answers.ros.org/question/18996...

http://wiki.ros.org/depth_image_proc

http://wiki.ros.org/gmapping


Comments

Shouldn't there be depth_registered/points and depth/points topics? If not, I think the first step would be to get these to work.

atp ( 2014-08-15 08:58:00 -0500 )

Not sure - not seen anything about those yet.

MarkyMark2012 ( 2014-08-15 17:09:29 -0500 )

Hi

I am using a Structure Sensor, which is a depth sensor. May I use gmapping with the sensor on a Parrot drone with PX4 Autopilot?

Francis Dom ( 2014-10-26 07:28:17 -0500 )