RGBDSLAM in Gazebo environment

asked 2016-06-17 09:38:35 -0500 by anna

updated 2016-06-21 09:57:22 -0500

Dear Felix Endres, dear ROS users and developers, I would like to know whether it is possible to use the rgbdslamv2 package in a simulated outdoor environment. I am trying to use it with a simulated Kinect and the gazebo_ros plugin gazebo_ros_openni_kinect, remapping the topic names to the ones used by the plugin. This is my launch file (rgbdslam.launch):

<launch>
  <node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" cwd="node" required="true" output="screen"> 
    <!-- Input data settings-->
    <param name="config/topic_image_mono"              value="/Robot0/camera/rgb/image_raw"/> 
    <param name="config/topic_image_depth"             value="/Robot0/camera/depth/image_raw"/>
    <param name="config/topic_points"                  value="/Robot0/camera/depth/points"/> <!--if empty, poincloud will be reconstructed from image and depth -->
    <param name="camera_info_topic"                    value="/Robot0/camera/rgb/camera_info"/>

    <param name="fixed_frame_name"                     value="/Robot0/map"/>
    <param name="base_frame_name"                      value="/Robot0/base_link"/>

    <!-- These are the default values of some important parameters -->
    <param name="config/feature_extractor_type"        value="ORB"/><!-- also available: SIFT, SIFTGPU, SURF, SURF128 (extended SURF), ORB. -->
    <param name="config/feature_detector_type"         value="ORB"/><!-- also available: SIFT, SURF, GFTT (good features to track), ORB. -->
    <param name="config/detector_grid_resolution"      value="3"/><!-- detect on a 3x3 grid (to spread ORB keypoints and parallelize SIFT and SURF) -->
    <param name="config/max_keypoints"                 value="600"/><!-- Extract no more than this many keypoints -->
    <param name="config/max_matches"                   value="300"/><!-- Keep the best n matches (important for ORB to set lower than max_keypoints) -->

    <param name="config/min_sampled_candidates"        value="4"/><!-- Frame-to-frame comparisons to random frames (big loop closures) -->
    <param name="config/predecessor_candidates"        value="4"/><!-- Frame-to-frame comparisons to sequential frames-->
    <param name="config/neighbor_candidates"           value="4"/><!-- Frame-to-frame comparisons to graph neighbor frames-->
    <param name="config/ransac_iterations"             value="100"/>
    <param name="config/cloud_creation_skip_step"      value="2"/><!-- subsample the images' pixels (in both, width and height), when creating the cloud (and therefore reduce memory consumption) -->


    <param name="config/cloud_display_type"            value="POINTS"/><!-- Show pointclouds as points (as opposed to TRIANGLE_STRIP) -->
    <param name="config/pose_relative_to"              value="largest_loop"/><!-- optimize only a subset of the graph: "largest_loop" = Everything from the earliest matched frame to the current one. Use "first" to optimize the full graph, "inaffected" to optimize only the frames that were matched (not those inbetween for loops) -->
    <param name="config/backend_solver"                value="pcg"/><!-- pcg is faster and good for continuous online optimization, cholmod and csparse are better for offline optimization (without good initial guess)-->
    <param name="config/optimizer_skip_step"           value="1"/><!-- optimize only every n-th frame -->
  </node>
</launch>

But something is wrong. The point clouds from the plugin are not organized, so the plugin sets height to 1 in the sensor_msgs::PointCloud2 messages. I have therefore modified the original code in openni_listener.cpp in this way:

void OpenNIListener::kinectCallback (const sensor_msgs::ImageConstPtr& visual_img_msg,
                                     const sensor_msgs::ImageConstPtr& depth_img_msg,   
                                     const sensor_msgs::PointCloud2ConstPtr& sensor_point_cloud) 
{
  /// \callgraph
  ScopedTimer s(__FUNCTION__);
  ROS_DEBUG("Received data from kinect");
  ROS_WARN_ONCE_NAMED("eval", "First RGBD-Data Received");
  ParameterServer* ps = ParameterServer::instance();

  ////////////////////////////////////////////////////////////////////////////////////////////////////
  //Pointcloud reorganization into structured matrix
  sensor_msgs::PointCloud2::Ptr point_cloud = boost::make_shared<sensor_msgs::PointCloud2>();
  if(sensor_point_cloud->height == 1 && sensor_point_cloud->width == visual_img_msg->width * visual_img_msg->height)
  {
    ROS_WARN_STREAM("Reorganization of the poincloud as structured");
    pcl::PointCloud<pcl::PointXYZRGB> pcl_cloud;
    pcl::fromROSMsg(*sensor_point_cloud, pcl_cloud);
    pcl_cloud.width = visual_img_msg->width;
    pcl_cloud.height = visual_img_msg->height;
    pcl ...
(more)

Comments

Hello Anna. Seems correct. What is in the RGB & depth images you feed into the callback? Are these simulated too? I assume you see the ROS_WARN you've put in on the console, right? If you set the point cloud topic parameter in the launch file to an empty string, what happens? Felix

Felix Endres (2016-06-21 09:56:54 -0500)
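For reference, Felix's suggestion of an empty cloud topic would look like this in the launch file above; per the file's own comment, rgbdslam then reconstructs the point cloud from the image and depth topics:

<param name="config/topic_points" value=""/> <!-- empty: reconstruct the cloud from image and depth -->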

Could you please run rosrun tf view_frames with your setup running and attach a screenshot of the generated PDF?

Felix Endres (2016-06-21 09:59:03 -0500)

I've found two errors in my launch file: param name="fixed_frame_name" must become param name="config/fixed_frame_name". After this change, using SIFTGPU and reducing the speed of the robot, it seems to execute correctly. The frequency of publishing tf is <= 2 Hz. How can I speed it up?

anna (2016-06-23 02:37:12 -0500)
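Read against the launch file above, the fix would look like the snippet below; the comment names only one of the two errors, so the second line is a guess that the analogous base_frame_name parameter (which also lacks the config/ prefix) is the other one:

<param name="config/fixed_frame_name"              value="/Robot0/map"/>
<param name="config/base_frame_name"               value="/Robot0/base_link"/>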

I can't attach the frames.pdf generated by tf view_frames (the site gives me a warning because I have less than 5 karma points), but it looks correct.

Thanks a lot for your kindness. Anna

anna (2016-06-23 02:37:31 -0500)

Speedup: Use ORB, reduce the ..._candidates settings, reduce the ransac iterations, try cholmod instead of pcg.

Felix Endres (2016-06-24 13:42:52 -0500)
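A sketch of those suggestions as parameters of the launch file above; the parameter names all appear there, but the numeric values below are illustrative guesses, not values from the thread:

<param name="config/feature_extractor_type"        value="ORB"/>
<param name="config/feature_detector_type"         value="ORB"/>
<param name="config/min_sampled_candidates"        value="2"/><!-- fewer frame-to-frame comparisons -->
<param name="config/predecessor_candidates"        value="2"/>
<param name="config/neighbor_candidates"           value="2"/>
<param name="config/ransac_iterations"             value="50"/>
<param name="config/backend_solver"                value="cholmod"/>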

Thanks Felix. At last, I have two more questions. First, the tf I get is from base_link to map. Is it possible to set rgbdslam to publish the inverse transform automatically? Second, is it possible to use rgbdslam just to perform the registration (without building the OctoMap)? Best

anna (2016-06-25 00:04:33 -0500)

1) In principle yes, but it would mess up the tf tree if you have odometry: you'd then have two parent frames for one child frame. 2) Not sure what you mean. The registration is the original point clouds + their transformation.

Felix Endres (2016-06-28 03:33:18 -0500)
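If the inverted transform is only needed for lookups rather than for republishing, note that tf already inverts along the tree: lookupTransform accepts the frames in either order. A minimal C++ sketch, assuming the frame names from the launch file above (the node name is hypothetical):

#include <ros/ros.h>
#include <tf/transform_listener.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "inverse_tf_lookup"); // hypothetical node name
  ros::NodeHandle nh;
  tf::TransformListener listener;

  ros::Rate rate(10.0);
  while (nh.ok())
  {
    tf::StampedTransform transform;
    try
    {
      // Asking for map relative to base_link yields the inverse of the
      // base_link-to-map transform, without anyone publishing it inverted.
      listener.lookupTransform("/Robot0/base_link", "/Robot0/map",
                               ros::Time(0), transform);
      ROS_INFO("map in base_link: x=%.3f y=%.3f z=%.3f",
               transform.getOrigin().x(),
               transform.getOrigin().y(),
               transform.getOrigin().z());
    }
    catch (tf::TransformException& ex)
    {
      ROS_WARN("%s", ex.what());
    }
    rate.sleep();
  }
  return 0;
}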