[SOLVED] SLAM with kinect2

asked 2016-03-29 06:25:44 -0500 by Kailegh, updated 2016-06-01 05:56:05 -0500

Hi! I am trying to do a 3D reconstruction using a Kinect v2. I have installed iai_kinect2 correctly and I am able to get the point cloud. I have read that RGBDSLAM allows you to do the 3D reconstruction, but I think it uses the openni package instead of iai_kinect2.

  • So I am not sure if I can still use RGBDSLAM with the Kinect v2.
  • Is there any other way to make a 3D reconstruction with ROS and a Kinect v2?

Thanks a lot!!


1 Answer


answered 2016-03-29 07:34:19 -0500 by MichaelKorn

I tested the Kinect v2 with Kinect Fusion (KinFu) and RGBDSLAM. For KinFu it was necessary to implement a lens distortion model, because the new Kinect exhibits much more lens distortion than the old Kinect.

Overall, the results with the new Kinect are worse than with the old Kinect due to more noise and reflections (see my post on the PCL mailing list).

At the bottom you can find the rgbdslam launch file which I used with kinect2_bridge. The file is meant for recorded datasets; you can open a bag file with the rgbdslam GUI. But I used it with live data, too. Unfortunately, I do not remember whether this launch file yields good results, so you should check whether distortion is taken into account. I recorded data with:

rosrun kinect2_bridge kinect2_bridge
rosrun image_view image_view image:=/kinect2/qhd/image_color_rect
rosbag record -O kinect_file /tf /kinect2/qhd/image_color_rect /kinect2/qhd/camera_info /kinect2/qhd/image_depth_rect
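To process such a recording offline, one option is to replay the bag while rgbdslam is running (a sketch; the bag can also be opened directly from the rgbdslam GUI, as noted above):

rosparam set use_sim_time true
rosbag play --clock kinect_file.bag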

rgbdslam_kinect2.launch:

<launch>
<node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" cwd="node" required="true" output="screen"> 
<!-- Input data settings-->
<param name="config/topic_image_mono"              value="/kinect2/qhd/image_color_rect"/>  
<param name="config/camera_info_topic"             value="/kinect2/qhd/camera_info"/>

<param name="config/topic_image_depth"             value="/kinect2/qhd/image_depth_rect"/>

<param name="config/topic_points"                  value=""/> <!--if empty, poincloud will be reconstructed from image and depth -->

<!-- These are the default values of some important parameters -->
<param name="config/feature_extractor_type"        value="SIFTGPU"/><!-- also available: SIFT, SIFTGPU, SURF, SURF128 (extended SURF), ORB. -->
<param name="config/feature_detector_type"         value="SIFTGPU"/><!-- also available: SIFT, SURF, GFTT (good features to track), ORB. -->
<param name="config/detector_grid_resolution"      value="3"/><!-- detect on a 3x3 grid (to spread ORB keypoints and parallelize SIFT and SURF) -->

<param name="config/optimizer_skip_step"           value="15"/><!-- optimize only every n-th frame -->
<param name="config/cloud_creation_skip_step"      value="2"/><!-- subsample the images' pixels (in both, width and height), when creating the cloud (and therefore reduce memory consumption) -->

<param name="config/backend_solver"                value="csparse"/><!-- pcg is faster and good for continuous online optimization, cholmod and csparse are better for offline optimization (without good initial guess)-->

<param name="config/pose_relative_to"              value="first"/><!-- optimize only a subset of the graph: "largest_loop" = Everything from the earliest matched frame to the current one. Use "first" to optimize the full graph, "inaffected" to optimize only the frames that were matched (not those inbetween for loops) -->

<param name="config/maximum_depth"           value="2"/>
<param name="config/subscriber_queue_size"         value="20"/>

<param name="config/min_sampled_candidates"        value="30"/><!-- Frame-to-frame comparisons to random frames (big loop closures) -->
<param name="config/predecessor_candidates"        value="20"/><!-- Frame-to-frame comparisons to sequential frames-->
<param name="config/neighbor_candidates"           value="20"/><!-- Frame-to-frame comparisons to graph neighbor frames-->
<param name="config/ransac_iterations"             value="140"/>

<param name="config/g2o_transformation_refinement"           value="1"/>
<param name="config/icp_method"           value="gicp"/>  <!-- icp, gicp ... -->

<!--
<param name="config/max_rotation_degree"           value="20"/>
<param name="config/max_translation_meter"           value="0.5"/>

<param name="config/min_matches"           value="30"/>   

<param name="config/min_translation_meter"           value="0.05"/>
<param name="config/min_rotation_degree"           value="3"/>
<param name="config/g2o_transformation_refinement"           value="2"/>
<param name="config/min_rotation_degree"           value="10"/>

<param name="config/matcher_type"         value="SIFTGPU"/>
 -->
</node>
</launch>
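Assuming the file above is saved as rgbdslam_kinect2.launch in the rgbdslam package's launch directory, it can be started with:

roslaunch rgbdslam rgbdslam_kinect2.launch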

Comments

Thanks a lot, now I can see the 3D mapping. However, I do not really like it, and it does not refresh: if a person walks in front of the camera it is recorded, and even after the person goes away they are still shown in the map. Are there any other methods to do 3D mapping using Ubuntu and ROS?

Kailegh (2016-03-29 10:12:51 -0500)

You need to adjust the parameters, and you need to move the Kinect: there are thresholds for minimal movement before new key frames are created (see the sketch below). But I don't know of any 3D reconstruction algorithm for dynamic environments in ROS; the only exception is KinFu, but that is very experimental.

MichaelKorn (2016-03-29 10:24:37 -0500)
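For reference, the movement thresholds mentioned above correspond to parameters that appear commented out in the launch file; a sketch with illustrative (not tested) values that would create key frames on smaller motions:

<param name="config/min_translation_meter"  value="0.02"/><!-- new node after 2 cm of translation -->
<param name="config/min_rotation_degree"    value="1"/><!-- ... or after 1 degree of rotation -->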

I think I am going to try rtabmap_ros (http://wiki.ros.org/rtabmap_ros) and I will tell you if I get to see a 3D map. EDIT: I have made some tries and so far I think this package works better. I am still working to sort out some issues, but I think the reconstruction is more realistic.

Kailegh (2016-03-30 08:48:27 -0500)
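A minimal sketch for trying rtabmap_ros with the kinect2_bridge topics (assuming rtabmap.launch exposes the rgb_topic, depth_topic and camera_info_topic arguments; check the rtabmap_ros wiki for the exact interface of your version):

roslaunch kinect2_bridge kinect2_bridge.launch publish_tf:=true
roslaunch rtabmap_ros rtabmap.launch \
    rgb_topic:=/kinect2/qhd/image_color_rect \
    depth_topic:=/kinect2/qhd/image_depth_rect \
    camera_info_topic:=/kinect2/qhd/camera_info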

@MichaelKorn: Thanks for posting this. The settings not related to KinectV2 (i.e. those after the first three params) seem rather slow to me. For online SLAM I'd suggest these default settings.

Felix Endres (2016-05-20 08:47:03 -0500)

Yes, you are right, the settings are slow. I needed them for a good offline reconstruction; computation time was not limited.

MichaelKorn (2016-05-20 09:39:36 -0500)
