RGBDSLAM in gazebo environment

Dear Felix Endres, dear ROS users and developers, I would like to know whether it is possible to use the rgbdslam_v2 package in a simulated outdoor environment. I am trying to use it with a simulated Kinect and the gazebo_ros plugin gazebo_ros_openni_kinect. I remapped the topic names to match the ones used by the plugin. This is my launch file (rgbdslam.launch):

<launch>
  <node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" cwd="node" required="true" output="screen">

    <!-- Input data settings -->
    <param name="config/topic_image_mono"              value="/Robot0/camera/rgb/image_raw"/>
    <param name="config/topic_image_depth"             value="/Robot0/camera/depth/image_raw"/>
    <param name="config/topic_points"                  value="/Robot0/camera/depth/points"/> <!-- if empty, the point cloud will be reconstructed from image and depth -->
    <param name="camera_info_topic"                    value="/Robot0/camera/rgb/camera_info"/>

    <param name="fixed_frame_name"                     value="/Robot0/map"/>
    <param name="base_frame_name"                      value="/Robot0/base_link"/>

    <!-- These are the default values of some important parameters -->
    <param name="config/feature_extractor_type"        value="ORB"/><!-- also available: SIFT, SIFTGPU, SURF, SURF128 (extended SURF), ORB. -->
    <param name="config/feature_detector_type"         value="ORB"/><!-- also available: SIFT, SURF, GFTT (good features to track), ORB. -->
    <param name="config/detector_grid_resolution"      value="3"/><!-- detect on a 3x3 grid (to spread ORB keypoints and parallelize SIFT and SURF) -->
    <param name="config/max_keypoints"                 value="600"/><!-- Extract no more than this many keypoints -->
    <param name="config/max_matches"                   value="300"/><!-- Keep the best n matches (important for ORB to set lower than max_keypoints) -->

    <param name="config/min_sampled_candidates"        value="4"/><!-- Frame-to-frame comparisons to random frames (big loop closures) -->
    <param name="config/predecessor_candidates"        value="4"/><!-- Frame-to-frame comparisons to sequential frames -->
    <param name="config/neighbor_candidates"           value="4"/><!-- Frame-to-frame comparisons to graph neighbor frames -->
    <param name="config/ransac_iterations"             value="100"/>
    <param name="config/cloud_creation_skip_step"      value="2"/><!-- subsample the images' pixels (in both width and height) when creating the cloud, to reduce memory consumption -->

    <param name="config/cloud_display_type"            value="POINTS"/><!-- Show point clouds as points (as opposed to TRIANGLE_STRIP) -->
    <param name="config/pose_relative_to"              value="largest_loop"/><!-- optimize only a subset of the graph: "largest_loop" = everything from the earliest matched frame to the current one. Use "first" to optimize the full graph, "inaffected" to optimize only the frames that were matched (not those in between for loops) -->
    <param name="config/backend_solver"                value="pcg"/><!-- pcg is faster and good for continuous online optimization; cholmod and csparse are better for offline optimization (without a good initial guess) -->
    <param name="config/optimizer_skip_step"           value="1"/><!-- optimize only every n-th frame -->
  </node>
</launch>

But something is wrong. The point clouds are not dense, so the plugin sets height to 1 in the sensor_msgs::PointCloud2 message (i.e., the cloud arrives unorganized). So I modified the original code in openni_listener.cpp in this way:

void OpenNIListener::kinectCallback (const sensor_msgs::ImageConstPtr& visual_img_msg,
                                     const sensor_msgs::ImageConstPtr& depth_img_msg,   
                                     const sensor_msgs::PointCloud2ConstPtr& sensor_point_cloud) 
{
  /// \callgraph
  ScopedTimer s(__FUNCTION__);
  ROS_DEBUG("Received data from kinect");
  ROS_WARN_ONCE_NAMED("eval", "First RGBD-Data Received");
  ParameterServer* ps = ParameterServer::instance();

  ////////////////////////////////////////////////////////////////////////////////////////////////////
  //Pointcloud reorganization into structured matrix
  sensor_msgs::PointCloud2::Ptr point_cloud = boost::make_shared<sensor_msgs::PointCloud2>();
  if(sensor_point_cloud->height == 1 && sensor_point_cloud->width == visual_img_msg->width * visual_img_msg->height)
  {
    ROS_WARN_STREAM("Reorganizing the point cloud as structured");
    pcl::PointCloud<pcl::PointXYZRGB> pcl_cloud;
    pcl::fromROSMsg(*sensor_point_cloud, pcl_cloud);
    pcl_cloud.width = visual_img_msg->width;
    pcl_cloud.height = visual_img_msg->height;
    pcl::toROSMsg(pcl_cloud, *point_cloud);
  }
  else
    point_cloud = boost::make_shared<sensor_msgs::PointCloud2>(*sensor_point_cloud);
  ////////////////////////////////////////////////////////////////////////////////////////////////////

...

Now I can visualize the point cloud in the GUI, but rgbdslam does not publish the tf from /Robot0/base_link to /Robot0/map, and it does not publish anything on the /rgbdslam/batch_clouds or /rgbdslam/online_clouds topics.

Can you help me? Thanks a lot

Anna