I assume you want to use the gmapping package in ROS to do the mapping. gmapping requires odometry from your robot; you can follow this tutorial to see how to set that up.
You also need to convert the PointCloud2 from the Kinect to a LaserScan, since that is what gmapping uses as input. To do this you can use the pointcloud_to_laserscan package, for example with a node like the one sketched below.
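A minimal sketch of that conversion in your launch file, assuming your Kinect driver publishes the depth cloud on /camera/depth/points (check yours with rostopic list) and that your ROS release ships the standalone pointcloud_to_laserscan_node (older releases only provide it as a nodelet, so the type name may differ):

<node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node" name="pointcloud_to_laserscan">
  <remap from="cloud_in" to="/camera/depth/points"/>
  <remap from="scan" to="/scan"/>
</node>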
Once you have those two things set up, you should be ready to run gmapping; a minimal launch sketch follows below. Let me know if you have further questions.
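To run gmapping you start the slam_gmapping node and point it at your scan topic. A minimal sketch, assuming your odometry frame is called odom and your robot frame base_link; adjust these to whatever your robot actually publishes:

<node pkg="gmapping" type="slam_gmapping" name="slam_gmapping">
  <param name="base_frame" value="base_link"/>
  <param name="odom_frame" value="odom"/>
  <remap from="scan" to="/scan"/>
</node>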
EDIT: Forgot to mention: you also need a transform from your Kinect frame to base_link. Basically, base_link is the position of your robot and the Kinect frame is the position of your Kinect, and you need to define the relation between the two with a transform. You can use the static_transform_publisher for this, as follows:
<node pkg="tf" type="static_transform_publisher" name="base_to_kinect" args="0 0 0 0 0 0 base_link kinect_depth_frame 100" />
Those arguments are the x y z offset and the yaw pitch roll rotation between the base of your robot and the Kinect, followed by the parent frame, the child frame, and the publishing period in milliseconds. For more information on the static_transform_publisher, check here.
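Once everything is running, you can verify that the transform tree is connected with

rosrun tf view_frames

which listens for a few seconds and writes a frames.pdf of all published frames. gmapping needs an unbroken chain from the laser frame (here kinect_depth_frame) through base_link to odom.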