
anna's profile - activity

2019-11-29 00:46:15 -0500 received badge  Famous Question (source)
2019-11-29 00:46:15 -0500 received badge  Notable Question (source)
2019-11-29 00:46:15 -0500 received badge  Popular Question (source)
2017-04-07 02:16:00 -0500 received badge  Famous Question (source)
2017-04-07 02:16:00 -0500 received badge  Notable Question (source)
2016-10-19 00:25:48 -0500 received badge  Famous Question (source)
2016-06-30 05:47:07 -0500 received badge  Notable Question (source)
2016-06-28 07:47:13 -0500 received badge  Popular Question (source)
2016-06-25 00:04:33 -0500 commented question RGBDSLAM in gazebo environment

Thanks Felix. At last, I have two more questions. First, the tf I look for is from base_link to map. Is it possible to configure rgbdslam to publish the inverse transform automatically? Second, is it possible to use rgbdslam only to perform the registration (without building the octomap)? Best

2016-06-24 23:55:15 -0500 received badge  Popular Question (source)
2016-06-24 13:40:12 -0500 received badge  Student (source)
2016-06-23 02:37:31 -0500 commented question RGBDSLAM in gazebo environment

I can't attach the frame.pdf generated by tf view_frames (the page gives me a warning because I have less than 5 points), but it seems correct.

Thanks a lot for your kindness, Anna

2016-06-23 02:37:12 -0500 commented question RGBDSLAM in gazebo environment

I've found two errors in my launch file: param name="fixed_frame_name" must become param name="config/fixed_frame_name". After this change, using SIFTGPU and reducing the speed of the robot, it seems to execute correctly. The tf publishing frequency is <= 2 Hz. How can I speed it up?

2016-06-18 01:01:27 -0500 received badge  Enthusiast
2016-06-17 09:38:35 -0500 asked a question RGBDSLAM in gazebo environment

Dear Felix Endres, dear ROS users and developers, I would like to know if it is possible to use the rgbdslam_v2 package in a simulated outdoor environment. I tried it with a simulated Kinect and the gazebo_ros plugin gazebo_ros_openni_kinect, remapping the topic names to match the ones used by the plugin. This is my launch file (rgbdslam.launch):

<launch>
  <node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" cwd="node" required="true" output="screen"> 
    <!-- Input data settings-->
    <param name="config/topic_image_mono"              value="/Robot0/camera/rgb/image_raw"/> 
    <param name="config/topic_image_depth"             value="/Robot0/camera/depth/image_raw"/>
    <param name="config/topic_points"                  value="/Robot0/camera/depth/points"/> <!--if empty, pointcloud will be reconstructed from image and depth -->
    <param name="camera_info_topic"                    value="/Robot0/camera/rgb/camera_info"/>

    <param name="fixed_frame_name"                     value="/Robot0/map"/>
    <param name="base_frame_name"                      value="/Robot0/base_link"/>

    <!-- These are the default values of some important parameters -->
    <param name="config/feature_extractor_type"        value="ORB"/><!-- also available: SIFT, SIFTGPU, SURF, SURF128 (extended SURF), ORB. -->
    <param name="config/feature_detector_type"         value="ORB"/><!-- also available: SIFT, SURF, GFTT (good features to track), ORB. -->
    <param name="config/detector_grid_resolution"      value="3"/><!-- detect on a 3x3 grid (to spread ORB keypoints and parallelize SIFT and SURF) -->
    <param name="config/max_keypoints"                 value="600"/><!-- Extract no more than this many keypoints -->
    <param name="config/max_matches"                   value="300"/><!-- Keep the best n matches (important for ORB to set lower than max_keypoints) -->

    <param name="config/min_sampled_candidates"        value="4"/><!-- Frame-to-frame comparisons to random frames (big loop closures) -->
    <param name="config/predecessor_candidates"        value="4"/><!-- Frame-to-frame comparisons to sequential frames-->
    <param name="config/neighbor_candidates"           value="4"/><!-- Frame-to-frame comparisons to graph neighbor frames-->
    <param name="config/ransac_iterations"             value="100"/>
    <param name="config/cloud_creation_skip_step"      value="2"/><!-- subsample the images' pixels (in both, width and height), when creating the cloud (and therefore reduce memory consumption) -->


    <param name="config/cloud_display_type"            value="POINTS"/><!-- Show pointclouds as points (as opposed to TRIANGLE_STRIP) -->
    <param name="config/pose_relative_to"              value="largest_loop"/><!-- optimize only a subset of the graph: "largest_loop" = Everything from the earliest matched frame to the current one. Use "first" to optimize the full graph, "inaffected" to optimize only the frames that were matched (not those inbetween for loops) -->
    <param name="config/backend_solver"                value="pcg"/><!-- pcg is faster and good for continuous online optimization, cholmod and csparse are better for offline optimization (without good initial guess)-->
    <param name="config/optimizer_skip_step"           value="1"/><!-- optimize only every n-th frame -->
  </node>
</launch>

But something is wrong. The pointclouds are not dense, so the plugin sets height to 1 in sensor_msgs::PointCloud2. I have therefore modified the original code in openni_listener.cpp in this way:

void OpenNIListener::kinectCallback (const sensor_msgs::ImageConstPtr& visual_img_msg,
                                     const sensor_msgs::ImageConstPtr& depth_img_msg,   
                                     const sensor_msgs::PointCloud2ConstPtr& sensor_point_cloud) 
{
  /// \callgraph
  ScopedTimer s(__FUNCTION__);
  ROS_DEBUG("Received data from kinect");
  ROS_WARN_ONCE_NAMED("eval", "First RGBD-Data Received");
  ParameterServer* ps = ParameterServer::instance();

  ////////////////////////////////////////////////////////////////////////////////////////////////////
  //Pointcloud reorganization into structured matrix
  sensor_msgs::PointCloud2::Ptr point_cloud = boost::make_shared<sensor_msgs::PointCloud2>();
  if(sensor_point_cloud->height == 1 && sensor_point_cloud->width == visual_img_msg->width * visual_img_msg->height)
  {
    ROS_WARN_STREAM("Reorganization of the pointcloud as structured");
    pcl::PointCloud<pcl::PointXYZRGB> pcl_cloud;
    pcl::fromROSMsg(*sensor_point_cloud, pcl_cloud);
    pcl_cloud.width = visual_img_msg->width;
    pcl_cloud.height = visual_img_msg->height;
    pcl ...
(more)
2016-06-17 09:16:15 -0500 asked a question RGBDSLAM in Gazebo environment

Dear Felix Endres, dear ROS users

I would like to know if it's possible to use RGBDSLAM in a simulated outdoor environment. I use a simulated Kinect with the plugin provided in gazebo_ros (gazebo_ros_openni_kinect). Then I launch rgbdslam.launch, changing some params to remap the topics used by the camera. This is my launch file:

<launch>
  <node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" cwd="node" required="true" output="screen">

<param name="fixed_frame_name"                     value="/Robot0/map"/>
<param name="base_frame_name"                      value="/Robot0/base_link"/>

<!-- These are the default values of some important parameters -->
<param name="config/feature_extractor_type"        value="ORB"/><!-- also available: SIFT, SIFTGPU, SURF, SURF128 (extended SURF), ORB. -->
<param name="config/feature_detector_type"         value="ORB"/><!-- also available: SIFT, SURF, GFTT (good features to track), ORB. -->
<param name="config/detector_grid_resolution"      value="3"/><!-- detect on a 3x3 grid (to spread ORB keypoints and parallelize SIFT and SURF) -->
<param name="config/max_keypoints"                 value="600"/><!-- Extract no more than this many keypoints -->
<param name="config/max_matches"                   value="300"/><!-- Keep the best n matches (important for ORB to set lower than max_keypoints) -->

<param name="config/min_sampled_candidates"        value="4"/><!-- Frame-to-frame comparisons to random frames (big loop closures) -->
<param name="config/predecessor_candidates"        value="4"/><!-- Frame-to-frame comparisons to sequential frames-->
<param name="config/neighbor_candidates"           value="4"/><!-- Frame-to-frame comparisons to graph neighbor frames-->
<param name="config/ransac_iterations"             value="100"/>
<param name="config/cloud_creation_skip_step"      value="2"/><!-- subsample the images' pixels (in both, width and height), when creating the cloud (and therefore reduce memory consumption) -->


<param name="config/cloud_display_type"            value="POINTS"/><!-- Show pointclouds as points (as opposed to TRIANGLE_STRIP) -->
<param name="config/pose_relative_to"              value="largest_loop"/><!-- optimize only a subset of the graph: "largest_loop" = Everything from the earliest matched frame to the current one. Use "first" to optimize the full graph, "inaffected" to optimize only the frames that were matched (not those inbetween for loops) -->
<param name="config/backend_solver"                value="pcg"/><!-- pcg is faster and good for continuous online optimization, cholmod and csparse are better for offline optimization (without good initial guess)-->
<param name="config/optimizer_skip_step"           value="1"/><!-- optimize only every n-th frame -->

  </node>
</launch>

The first error I note comes from the gazebo_ros plugin: if the pointcloud is not dense, it automatically changes width and height from structured to unstructured data (for example, image_raw has width = 640 and height = 480, but the points arrive as 307200 x 1). I've modified openni_listener.cpp to remove this error, in the kinectCallback:

void OpenNIListener::kinectCallback (const sensor_msgs::ImageConstPtr& visual_img_msg,
                                     const sensor_msgs::ImageConstPtr& depth_img_msg,
                                     const sensor_msgs::PointCloud2ConstPtr& sensor_point_cloud)
{
  //Pointcloud reorganization into structured matrix
  sensor_msgs::PointCloud2::Ptr point_cloud = boost::make_shared<sensor_msgs::PointCloud2>();
  if(sensor_point_cloud->height == 1 && sensor_point_cloud->width == visual_img_msg->width * visual_img_msg->height)
  {
    ROS_WARN_STREAM("Reorganization of the pointcloud as structured");
    pcl::PointCloud<pcl::PointXYZRGB> pcl_cloud;
    pcl::fromROSMsg(*sensor_point_cloud, pcl_cloud);
    pcl_cloud.width = visual_img_msg->width;
    pcl_cloud.height = visual_img_msg->height;
    pcl::toROSMsg(pcl_cloud, *point_cloud);
  }
  else
    point_cloud = boost::make_shared<sensor_msgs::PointCloud2>(*sensor_point_cloud);
  ...
}

This is the only modification I make to the original code. Now I can see the images and the estimated features, but I get some errors, and /rgbdslam/online_clouds and /rgbdslam/batch_clouds are both empty.

Can you help me? Thanks a lot

Anna

2016-06-14 10:38:24 -0500 answered a question Errors in RGBDSLAM_v2 compiling from source

Thanks to all, I've solved it. I had two versions of OpenCV on my computer, and the build system picked the wrong one.
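For anyone hitting the same conflict: one way to force a particular OpenCV install is to point CMake at it explicitly before find_package runs. A minimal sketch (the path and version are examples for an OpenCV 2.4 install, not taken from rgbdslam_v2's actual CMakeLists):

```cmake
# Point CMake at the intended OpenCV before find_package runs.
# OpenCV_DIR must contain OpenCVConfig.cmake; the path below is an example.
set(OpenCV_DIR "/usr/local/share/OpenCV")
find_package(OpenCV 2.4 REQUIRED)
message(STATUS "Using OpenCV ${OpenCV_VERSION} from ${OpenCV_DIR}")
```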

Anna

2016-06-13 10:20:07 -0500 asked a question Errors in RGBDSLAM_v2 compiling from source

Hi to all, I have some problems compiling the RGBDSLAM_v2 package from source (https://github.com/felixendres/rgbdsl...). I've tried to compile the code as explained in Installation from Scratch, but I get these errors:

In file included from /home/anna/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/node.cpp:32:0:
/home/anna/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/sift_gpu_wrapper.h:49:40: error: ‘cv::vector’ has not been declared
  void detect(const cv::Mat& image, cv::vector<cv::KeyPoint>& keypoints, std::vector<float>& descriptors, const cv::Mat& mask = cv::Mat()) const;
                                        ^
/home/anna/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/sift_gpu_wrapper.h:49:46: error: expected ‘,’ or ‘...’ before ‘<’ token
  void detect(const cv::Mat& image, cv::vector<cv::KeyPoint>& keypoints, std::vector<float>& descriptors, const cv::Mat& mask = cv::Mat()) const;
                                              ^
/home/anna/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/node.cpp: In constructor ‘Node::Node(const cv::Mat&, const cv::Mat&, const cv::Mat&, const CameraInfoConstPtr&, myHeader, cv::Ptr<cv::Feature2D>, cv::Ptr<cv::Feature2D>)’:
/home/anna/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/node.cpp:151:65: error: no matching function for call to ‘SiftGPUWrapper::detect(cv::Mat&, std::vector<cv::KeyPoint>&, std::vector<float>&)’
     siftgpu->detect(gray_img, feature_locations_2d_, descriptors);
                                                                 ^
/home/anna/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/node.cpp:151:65: note: candidate is:
In file included from /home/anna/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/node.cpp:32:0:
/home/anna/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/sift_gpu_wrapper.h:49:7: note: void SiftGPUWrapper::detect(const cv::Mat&, int) const
  void detect(const cv::Mat& image, cv::vector<cv::KeyPoint>& keypoints, std::vector<float>& descriptors, const cv::Mat& mask = cv::Mat()) const;
       ^
/home/anna/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/sift_gpu_wrapper.h:49:7: note:   candidate expects 2 arguments, 3 provided
/home/anna/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/node.cpp:170:67: error: no matching function for call to ‘SiftGPUWrapper::detect(cv::Mat&, std::vector<cv::KeyPoint>&, std::vector<float>&)’
       siftgpu->detect(gray_img, feature_locations_2d_, descriptors);
                                                                   ^
/home/anna/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/node.cpp:170:67: note: candidate is:
In file included from /home/anna/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/node.cpp:32:0:
/home/anna/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/sift_gpu_wrapper.h:49:7: note: void SiftGPUWrapper::detect(const cv::Mat&, int) const
  void detect(const cv::Mat& image, cv::vector<cv::KeyPoint>& keypoints, std::vector<float>& descriptors, const cv::Mat& mask = cv::Mat()) const;
       ^
/home/anna/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/sift_gpu_wrapper.h:49:7: note:   candidate expects 2 arguments, 3 provided
/home/anna/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/node.cpp: In constructor ‘Node::Node(cv::Mat, cv::Ptr<cv::Feature2D>, cv::Ptr<cv::Feature2D>, pcl::PointCloud<pcl::PointXYZRGB>::Ptr, cv::Mat)’:
/home/anna/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/node.cpp:283:65: error: no matching function for call to ‘SiftGPUWrapper::detect(cv::Mat&, std::vector<cv::KeyPoint>&, std::vector<float>&)’
     siftgpu->detect(gray_img, feature_locations_2d_, descriptors);
                                                                 ^
/home/anna/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/node.cpp:283:65: note: candidate is:
In file included from /home/anna/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/node.cpp:32:0:
/home/anna/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/sift_gpu_wrapper.h:49:7: note: void SiftGPUWrapper::detect(const ...
(more)
2016-06-13 10:02:14 -0500 answered a question RGBDSLAM Indigo support

Hi to all, I have the same problems. Did you find a solution? I've tried to install the RGBDSLAM_v2 package as written in the README.txt on GitHub, on ROS Indigo and Ubuntu 14.04. Thanks, best Anna