
How to map with kinect in kinetic and freenect?

asked 2016-07-04 15:55:57 -0500

matrhint

I am running Ubuntu 16.04 with ROS Kinetic, using a Kinect (Xbox) for point cloud data. I want to generate a map, but I cannot figure out how to make a map of an environment. I am hoping that I can use the Kinect alone and spoof the odometry and rotation from its video stream. I am using freenect because I have read that openni/openni2 no longer support the Kinect. I have run `roslaunch freenect_launch freenect.launch`, then `rosrun rviz rviz`, and I am able to see some point cloud data. I have tried changing the various reference-frame settings for the Kinect to try to get map data.

So, is there a way I can make a map with just the Kinect and no "real" odometry (essentially without a robot connected)? What are the steps to build a map? And if I am asking horribly bad questions (or even if the questions aren't too bad) and should be using the wiki instead, how am I supposed to know where to find the answers on my own?

Thank you.


1 Answer


answered 2016-07-05 13:25:47 -0500

Steven_Daniluk

Yes, it is possible to generate a map using only a Kinect.

My suggestion would be to familiarize yourself with the various packages available: understand what they do, what data they require, their pros and cons, etc. This requires some research on your own, but it will pay off.

For instance, if you want to perform SLAM there are two packages that come to mind: gmapping and hector_mapping. Looking at their published topics, both give you the map that you desire, but they differ in their subscribed topics (i.e. the data they require). The point I am trying to make is that, starting from those requirements, you can fill in the gaps needed to use these packages.

With gmapping you need odometry, and if you look around there is a package called laser_scan_matcher that can "fake" odometry data from laser scans. So the next step is to produce a laser scan from the Kinect, which can be achieved with depthimage_to_laserscan or pointcloud_to_laserscan.
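As a rough sketch only, that gmapping pipeline might be wired together in a single launch file like the one below. The topic remapping and parameter values are assumptions based on the default freenect_launch output topics; verify them with `rostopic list` on your own system before relying on them.

```xml
<launch>
  <!-- Kinect driver: publishes depth images (assumed on /camera/depth/image_raw) -->
  <include file="$(find freenect_launch)/launch/freenect.launch"/>

  <!-- Convert the depth image into a 2D laser scan on /scan -->
  <node pkg="depthimage_to_laserscan" type="depthimage_to_laserscan"
        name="depthimage_to_laserscan">
    <remap from="image" to="/camera/depth/image_raw"/>
  </node>

  <!-- "Fake" odometry by matching consecutive laser scans -->
  <node pkg="laser_scan_matcher" type="laser_scan_matcher_node"
        name="laser_scan_matcher">
    <param name="use_odom" value="false"/>
    <param name="use_imu" value="false"/>
  </node>

  <!-- SLAM: gmapping consumes /scan plus the odom TF and publishes /map -->
  <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping">
    <param name="base_frame" value="base_link"/>
    <param name="odom_frame" value="odom"/>
  </node>
</launch>
```

Frame names like `base_link` and `odom` are illustrative defaults; they must match whatever TF tree your setup actually publishes.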

hector_mapping does not require odometry, but it does require scan data, which you already know how to get from the gmapping description above.
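The hector_mapping route is simpler since no odometry source is needed. A minimal sketch, again assuming the default freenect depth topic and illustrative frame names, could look like this (note that setting `odom_frame` equal to `base_frame` is the usual workaround when there is no odometry):

```xml
<launch>
  <!-- Kinect driver (assumed depth topic: /camera/depth/image_raw) -->
  <include file="$(find freenect_launch)/launch/freenect.launch"/>

  <!-- Depth image to laser scan, published on /scan -->
  <node pkg="depthimage_to_laserscan" type="depthimage_to_laserscan"
        name="depthimage_to_laserscan">
    <remap from="image" to="/camera/depth/image_raw"/>
  </node>

  <!-- hector_mapping estimates the pose by scan matching alone -->
  <node pkg="hector_mapping" type="hector_mapping" name="hector_mapping">
    <param name="scan_topic" value="scan"/>
    <param name="base_frame" value="base_link"/>
    <param name="odom_frame" value="base_link"/>
  </node>
</launch>
```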

I'm not able to give you step-by-step instructions on how to do this, and there are obviously some steps missing from my brief description above. But if you do some searches related to the above packages and Kinect sensors, you should be able to find enough information on how to do this, as well as the pros and cons of the different approaches. After a quick look, here are some similar answers that may help you:

2D SLAM with gmapping and openni_kinect

how to create a 2D map from laser scan data

SLAM without odometry: gmapping or hector_slam?



Thank you so much for your help.

matrhint (2016-07-05 15:12:39 -0500)


Last updated: Jul 05 '16