Run Navigation Stack with Maps from Kinect Fusion [closed]
Hello everyone,
I am a real newbie to ROS, so if my question is stupid or inappropriate, please be patient with me.
I have a robot (ER1) with a Kinect as its sensor running under ROS, and a server running Kinect Fusion, which can generate maps. My question is: is it possible to use the Navigation Stack with the maps generated by KinFu? The tutorial says that the Navigation Stack needs either LaserScan or PointCloud messages as its source of sensor information, but converting the Kinect data to LaserScan or PointCloud is not easy for me (and is also a concern for performance reasons).
Thanks in advance for any answers.
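For what it's worth, the usual way to feed Kinect data to the 2D nav stack is to extract a fake laser scan from the depth image (the `depthimage_to_laserscan` package does this). A minimal sketch of the core projection, assuming a pinhole camera model with hypothetical intrinsics (`fx`, `cx` are not from the original post):

```python
import math

def depth_row_to_ranges(depth_row_m, fx, cx):
    """Convert one row of a depth image (in meters) into laser-style ranges.

    For pixel column u with depth z, the ray angle is atan2(u - cx, fx),
    and the range along that ray is z / cos(angle). This is the same
    projection idea depthimage_to_laserscan is built around.
    """
    ranges = []
    for u, z in enumerate(depth_row_m):
        angle = math.atan2(u - cx, fx)
        ranges.append(z / math.cos(angle))
    return ranges

# Toy example: a flat wall 1 m away, three pixels wide, camera center at u=1.
# fx ~ 525 is roughly a Kinect's focal length in pixels (an assumption here).
ranges = depth_row_to_ranges([1.0, 1.0, 1.0], fx=525.0, cx=1.0)
```

The center pixel maps straight ahead (range equals depth), while off-center pixels get slightly longer ranges, which matches laser geometry.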
What is the output from KinFu?
Hi David Lu, a voxel grid. KinFu uses the information from the depth images generated by the Kinect and reconstructs them into a voxel grid. Sorry that I didn't see your comment until today.
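One common way to get from a 3D voxel grid to something the 2D nav stack can use is to collapse the voxels in the robot's height band onto a plane. A minimal sketch (the function name, the boolean-grid representation, and the height-band approach are my assumptions, not something from KinFu itself):

```python
import numpy as np

def project_voxels_to_occupancy(voxels, z_min, z_max):
    """Collapse a 3D boolean voxel grid (x, y, z) into a 2D occupancy map.

    A cell is marked occupied (100) if any voxel within the height band
    [z_min, z_max) is filled, and free (0) otherwise. A real system would
    also track unknown space (-1), which is skipped here for brevity.
    """
    band = voxels[:, :, z_min:z_max]    # keep only the height slice
    occupied = band.any(axis=2)         # collapse the z axis
    return np.where(occupied, 100, 0).astype(np.int8)

# Toy 4x4x4 volume with a single occupied voxel in column (x=1, y=2)
vox = np.zeros((4, 4, 4), dtype=bool)
vox[1, 2, 1] = True
grid = project_voxels_to_occupancy(vox, z_min=0, z_max=3)
```

The `int8` values 0/100/-1 match the convention the nav stack's map format uses for free/occupied/unknown cells.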
Is that a specific ROS message?
Hi David, the maps have the datatype nav_msgs::OccupancyGrid, 1024 × 1024 pixels (5 meter × 5 meter). You can see a screenshot here. Thanks.
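Given those numbers, the map metadata is easy to derive: 5 m across 1024 cells gives a resolution of roughly 0.0049 m/cell, and `nav_msgs/OccupancyGrid` stores its `data` array row-major. A small sketch of that bookkeeping (plain Python standing in for the actual message fields):

```python
def occupancy_grid_info(width_m, height_m, cells_x, cells_y):
    """Compute the metadata fields a nav_msgs/OccupancyGrid carries.

    resolution is meters per cell; width/height are cell counts.
    """
    return {
        "resolution": width_m / cells_x,  # m per cell
        "width": cells_x,
        "height": cells_y,
    }

def cell_index(x, y, width):
    """Row-major index into OccupancyGrid.data for cell (x, y)."""
    return y * width + x

# The 5 m x 5 m, 1024 x 1024 map from the thread
info = occupancy_grid_info(5.0, 5.0, 1024, 1024)
```

With these values, `info["resolution"]` comes out just under 5 mm per cell, which is much finer than the default costmap resolutions the nav stack tutorials use, so downsampling may be worth considering.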
Hi! I am also interested in using KinFu for 3D mapping and robot navigation. Could you please provide some tips on how to integrate KinFu with the ROS nav stack? Did you use the KinfuLS-ROS-Wrapper from here? Thanks!