
xuningy's profile - activity

2017-08-15 15:26:22 -0500 marked best answer rgbdslam runs without issues, but octomap does not connect/receive pointcloud (transforms?)

I installed rgbdslam and octomap and have been trying to run rgbdslam with octomap. The eventual goal is to get the MarkerArray on /occupied_cells_vis_array as expected. Here are some things I've done:

METHOD 1

  1. Tried to run kinect+rgbdslam.launch and octomap_server.launch.
  2. Opened RViz; nothing is received on /octomap_point_cloud_array, /occupied_cells_vis_array, /rgbdslam/aggregate_clouds, or /rgbdslam/batch_clouds. The only PointCloud2 topic that displays output is /camera/depth_registered/points.

METHOD 2

  1. Tried to run rgbdslam_octomap.launch. It worked as well, but since color_octomap_server_node is not found in the octomap_server package (I believe some posts say you need the experimental version of octomap, while others say octomap_server is colored by default), I edited the launch file so that it uses octomap_server_node instead.
  2. It runs as well, but gives the same problem as above.

METHOD 3

While running the above scenarios, I ran the following commands:

Running roswtf reports an unconnected node subscription: */rgbdslam: */cloud_in

Running rosrun tf view_frames gives camera_link -> /camera_rgb_frame -> /camera_rgb_optical_frame and camera_link -> /camera_depth_frame -> /camera_depth_optical_frame. Only these two branches.

Running rostopic list shows the expected topics: /cloud_in /octomap_point_cloud_array /occupied_cells_vis_array /rgbdslam/batch_clouds /rgbdslam/aggregate_clouds /tf

Running rosservice call /rgbdslam/ros_ui send_all did not change the above results.

I think it is a mapping issue and that the point clouds are not being transformed or remapped correctly; I am not sure which part of the pointcloud pipeline is failing. Please give me some pointers on how I can solve this problem.

EDIT 05/28/14 +1h: The issue lies in the transform. /map is not present in my frames.pdf view. Not sure how to solve this problem either.
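A minimal sketch of one possible workaround, assuming it is acceptable for testing to pin /map to camera_link with an identity transform so that lookups against /map succeed (frame names are taken from the tf tree above; the node name is made up):

#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "map_frame_publisher");  // hypothetical node name
  ros::NodeHandle nh;
  tf::TransformBroadcaster br;
  ros::Rate rate(20);
  while (nh.ok())
  {
    tf::Transform t;
    t.setIdentity();
    // Publish /map as the parent of camera_link so octomap_server can resolve its frame.
    br.sendTransform(tf::StampedTransform(t, ros::Time::now(), "/map", "camera_link"));
    rate.sleep();
  }
  return 0;
}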

EDIT 06/04/14: Solved building octomap using rgbdslam data by recording a .bag file, launching octomap_server, and viewing it in RViz. No fix for continuous rgbdslam mapping and trajectory output found yet.

rgbdslam_octomap.launch

<launch>
  <env name="ROSCONSOLE_CONFIG_FILE" value="$(find rgbdslam)/log.conf"/>

  <!--might only work with the experimental octomap (as of May 11)-->
    <include file="$(find openni_launch)/launch/openni.launch"/>
    <node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" cwd="node" required="false" output="log" > 
      <param name="config/topic_image_mono"              value="/camera/rgb/image_color"/>
      <param name="config/topic_points"                  value="/camera/rgb/points"/> <!--if empty, poincloud will be reconstructed from image and depth -->
      <param name="config/wide_topic"                    value=""/>;
      <param name="config/wide_cloud_topic"              value=""/>;
      <param name="config/drop_async_frames"             value="true"/> <!-- Check association of depth and visual image, reject if not in sync -->
      <param name="config/feature_detector_type"         value="SIFTGPU"/><!-- If SIFTGPU is enabled in CMakeLists.txt, use SURF here -->
      <param name="config/feature_extractor_type"        value="SIFTGPU"/><!-- If SIFTGPU is enabled in CMakeLists.txt, use SURF here -->
      <param name="config/matcher_type"                  value="FLANN"/> <!-- FLANN (not avail for ORB features), SIFTGPU (only for SIFTGPU detector)  or BRUTEFORCE-->
      <param name="config/max_keypoints"                 value="700"/><!-- Extract no more than this many keypoints (not honored by SIFTGPU)-->
      <param name="config/min_keypoints"                 value="300"/><!-- Extract no less than this many ... -->
      <param name="config/nn_distance_ratio"             value="0.6"/> <!-- Feature correspondence is valid if distance to nearest neighbour is smaller than this parameter times the distance to the 2nd neighbour ...
(more)
2016-11-14 02:57:39 -0500 received badge  Self-Learner (source)
2016-05-08 15:13:48 -0500 marked best answer Implementing ColorOcTree in OctoMap using RGBDSLAM

I am implementing a ColorOcTree in OctoMap. By default, octomap_server colors the visualization as a heightmap, as done by this piece of code in octomap_server/src/OctomapServer.cpp, in OctomapServer::publishAll(const ros::Time& rostime):

// create marker:
if (publishMarkerArray){
  unsigned idx = it.getDepth();
  assert(idx < occupiedNodesVis.markers.size());

  geometry_msgs::Point cubeCenter;
  cubeCenter.x = x;
  cubeCenter.y = y;
  cubeCenter.z = z;

  occupiedNodesVis.markers[idx].points.push_back(cubeCenter);
  if (m_useHeightMap){
    double minX, minY, minZ, maxX, maxY, maxZ;
    m_octree->getMetricMin(minX, minY, minZ);
    m_octree->getMetricMax(maxX, maxY, maxZ);

    double h = (1.0 - std::min(std::max((cubeCenter.z-minZ)/ (maxZ - minZ), 0.0), 1.0)) *m_colorFactor;
    occupiedNodesVis.markers[idx].colors.push_back(heightMapColor(h));
  }

I am seeking to change the height map into a per-voxel color Octomap.

From what is stated in the OctoMap paper, I can use the child class of OcTree (ColorOcTree). From what I understand, ColorOcTree contains all the methods of OcTree, plus average color information per node. So, to get a colored octomap, I would rewrite OctomapServer.cpp from an OcTree-based structure to a ColorOcTree-based one.

My question is: is rewriting OctomapServer.cpp like this the correct method, or is there a better way to construct a per-voxel colored octomap?
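For reference, a hedged sketch of the ColorOcTree insertion API (assuming octomap is built with ColorOcTree support; the method names come from octomap's ColorOcTree.h, and the helper function itself is only illustrative):

#include <stdint.h>
#include <octomap/ColorOcTree.h>

// Illustrative helper: mark a voxel occupied and fold this point's color into the
// node's running color average.
void insertColoredPoint(octomap::ColorOcTree& tree,
                        float x, float y, float z,
                        uint8_t r, uint8_t g, uint8_t b)
{
  tree.updateNode(octomap::point3d(x, y, z), true);
  tree.averageNodeColor(x, y, z, r, g, b);
}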


EDIT: I finished changing OctomapServer.cpp and the associated nodes to ColorOcTree; however, I notice that the color information isn't taken from the point cloud, as seen in this code in OctomapServer.cpp.

It seems like it is converting the sensor_msgs::PointCloud2 message into a pcl::PointCloud.

void OctomapServer::insertCloudCallback(const sensor_msgs::PointCloud2::ConstPtr& cloud){
  ros::WallTime startTime = ros::WallTime::now();
  //
  // ground filtering in base frame
  //
  PCLPointCloud pc; // input cloud for filtering and ground-detection
  pcl::fromROSMsg(*cloud, pc);

The second question then is: I need to extract the RGB data from either the PointCloud or the PointCloud2 and have it in a float32 representation. Do I use PointXYZRGB instead of PointXYZ when reading the point cloud (and if so, how)? Or is there a better way to do it?
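A minimal sketch of the PointXYZRGB route, assuming the incoming PointCloud2 actually carries an rgb field (as /camera/depth_registered/points does); the function name is only illustrative:

#include <stdint.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>  // pcl/ros/conversions.h on older PCL
#include <sensor_msgs/PointCloud2.h>

void extractColors(const sensor_msgs::PointCloud2::ConstPtr& cloud_msg)
{
  pcl::PointCloud<pcl::PointXYZRGB> pc;
  pcl::fromROSMsg(*cloud_msg, pc);            // keeps the packed rgb field

  for (size_t i = 0; i < pc.points.size(); ++i)
  {
    const pcl::PointXYZRGB& pt = pc.points[i];
    // PCL exposes the packed float rgb as separate 8-bit channels.
    uint8_t r = pt.r, g = pt.g, b = pt.b;
    // ... store r, g, b alongside the voxel at (pt.x, pt.y, pt.z)
  }
}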


EDIT2: I changed PointXYZ to PointXYZRGB and got the associated RGB data. However, after saving that data into the ColorOcTree and sending it as a ColorRGBA message (also float32; I divided each RGB channel by 255.0) on /occupied_cells_vis_array, I get blue voxels only. Can anyone explain why it is only outputting blue voxels?
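A hedged sketch of the ColorRGBA conversion (assuming the per-voxel color is already stored as 8-bit channels). A common cause of uniform or swapped colors is unpacking the packed float rgb field in the wrong byte order; it is also worth setting ColorRGBA::a explicitly, since message fields default to 0:

#include <stdint.h>
#include <std_msgs/ColorRGBA.h>

std_msgs::ColorRGBA toColorRGBA(uint8_t r, uint8_t g, uint8_t b)
{
  std_msgs::ColorRGBA c;
  c.r = r / 255.0f;
  c.g = g / 255.0f;
  c.b = b / 255.0f;
  c.a = 1.0f;   // alpha stays 0 (fully transparent) if left unset
  return c;
}

The returned color can then be pushed into occupiedNodesVis.markers[idx].colors in the same place the heightmap color is pushed in the code above.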

2016-02-13 04:27:05 -0500 received badge  Famous Question (source)
2016-02-13 04:27:05 -0500 received badge  Notable Question (source)
2016-01-25 15:07:56 -0500 received badge  Supporter (source)
2015-11-17 19:54:47 -0500 received badge  Taxonomist
2015-11-10 06:59:32 -0500 received badge  Famous Question (source)
2015-06-26 08:59:13 -0500 received badge  Famous Question (source)
2014-10-29 13:31:20 -0500 received badge  Teacher (source)
2014-10-29 13:31:20 -0500 received badge  Self-Learner (source)
2014-10-04 15:02:28 -0500 received badge  Notable Question (source)
2014-08-19 15:09:07 -0500 received badge  Famous Question (source)
2014-08-13 12:26:46 -0500 received badge  Popular Question (source)
2014-08-12 21:38:32 -0500 received badge  Student (source)
2014-08-12 10:56:11 -0500 received badge  Popular Question (source)
2014-08-08 11:14:46 -0500 received badge  Famous Question (source)
2014-08-08 11:11:45 -0500 received badge  Famous Question (source)
2014-08-08 01:06:20 -0500 received badge  Notable Question (source)
2014-08-07 13:19:21 -0500 answered a question RViz does not display topics from a separate node

SOLVED. Had -nan values somewhere in my 90,000 points.

// Skip any pose whose orientation contains NaN or Inf values; these were what
// kept the displays from showing up in RViz.
if (isinf(pose.pose.orientation.x) || isnan(pose.pose.orientation.x) ||
    isinf(pose.pose.orientation.y) || isnan(pose.pose.orientation.y) ||
    isinf(pose.pose.orientation.z) || isnan(pose.pose.orientation.z) ||
    isinf(pose.pose.orientation.w) || isnan(pose.pose.orientation.w) ){
  ROS_WARN("Point %d [%0.5f, %0.5f, %0.5f | %0.5f, %0.5f, %0.5f, %0.5f] ", i, pose.pose.position.x, pose.pose.position.y, pose.pose.position.z, pose.pose.orientation.x, pose.pose.orientation.y, pose.pose.orientation.z, pose.pose.orientation.w);
  continue;
}
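An alternative, hedged sketch: strip the NaN points from the input cloud before normal estimation instead of skipping bad poses afterwards (pcl::removeNaNFromPointCloud lives in pcl/filters/filter.h; the helper name is illustrative):

#include <vector>
#include <pcl/filters/filter.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

void dropNaNs(pcl::PointCloud<pcl::PointXYZ>::Ptr cloud)
{
  std::vector<int> kept_indices;  // indices of the points that survived the filter
  pcl::removeNaNFromPointCloud(*cloud, *cloud, kept_indices);
}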
2014-08-07 12:31:28 -0500 received badge  Popular Question (source)
2014-08-07 12:11:59 -0500 commented answer RViz does not display topics from a separate node

Hi, thanks for pointing out the first point -- I had it in my code but was trying to avoid copy-pasting the #include lines. I tried the next two suggestions; neither worked, and neither topic is displaying.

2014-08-07 11:05:27 -0500 asked a question RViz does not display topics from a separate node

I posted in another thread about RViz not displaying a PoseArray; it turns out the issue is not just the PoseArray but the other displays from the same node too. Thread here: http://answers.ros.org/question/189041

The PointCloud2 topic is also not displaying.

Just a bit of context: the node takes in a point cloud "/point_cloud_centers", calculates a PoseArray of normals, and also outputs a normal-filtered point cloud (limited in z). The point cloud was displaying fine until I added the normals PoseArray to be published to RViz. The fixed frame "/map" is provided by another node, which is already running and also provides /point_cloud_centers.

Also, both published topics are carrying data, as shown by rostopic echo /normal_filtered_pcl and rostopic echo /normal_vectors. My point cloud is about 90,000 points, if that matters at all. Thanks!

Edit: Thanks to bvbdort for pointing out that I didn't attach my declarations. They've been added at the top (I forgot this part was right underneath the include statements).

ros::Publisher pub;
ros::Publisher poseArrayPub;
geometry_msgs::PoseArray poseArray; // particles as PoseArray (preallocated)

void normalCallback (const sensor_msgs::PointCloud2ConstPtr& cloud)
{

  sensor_msgs::PointCloud2 output_normals;
  sensor_msgs::PointCloud2 cloud_normals;
  sensor_msgs::PointCloud2 cloud_filtered;
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud2 (new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZRGBNormal>::Ptr cloud_pass (new pcl::PointCloud<pcl::PointXYZRGBNormal>);
  pcl::PointCloud<pcl::PointXYZRGBNormal>::Ptr cloud_filtered2 (new pcl::PointCloud<pcl::PointXYZRGBNormal>);   

  // Start making result

  pcl::fromROSMsg (*cloud, *cloud2);

  poseArray.header.stamp = ros::Time::now();
  poseArray.header.frame_id = cloud2->header.frame_id; //EDITED
  ROS_INFO_STREAM("poseArray.header: frame=" << poseArray.header.frame_id); //Outputs "/map"

  // estimate normals
  pcl::NormalEstimation<pcl::PointXYZ, pcl::PointNormal> ne;
  ne.setInputCloud(cloud2);
  ne.setKSearch (24);
  pcl::PointCloud<pcl::PointNormal>::Ptr normals (new pcl::PointCloud<pcl::PointNormal>);
  ne.compute(*normals);



  /******************Display filtered cloud based on height********/
  pcl::toROSMsg (*normals, output_normals);
  pcl::concatenateFields (*cloud, output_normals, cloud_normals);

  pcl::fromROSMsg (cloud_normals, *cloud_pass);



   // Create the filtering object
    pcl::PassThrough<pcl::PointXYZRGBNormal> pass;
    pass.setInputCloud (cloud_pass);
    pass.setFilterFieldName ("z");
    pass.setFilterLimits (0.0, 1.0);
    pass.filter (*cloud_filtered2);


    pcl::toROSMsg (*cloud_filtered2, cloud_filtered);

    ROS_INFO("points: %lu\n", cloud_filtered2->points.size());

    // Publish the data
    pub.publish (cloud_filtered);
  /***************************************************************/



  /***********************publish normal vectors************************/
  for(size_t i = 0; i<normals->points.size(); ++i)
  {

    // Declare goal output pose
    geometry_msgs::PoseStamped pose;
    geometry_msgs::Quaternion msg;

    tf::Vector3 axis_vector(normals->points[i].normal[0], normals->points[i].normal[1], normals->points[i].normal[2]);
    tf::Vector3 up_vector(0.0, 0.0, 1.0);

    tf::Vector3 right_vector = axis_vector.cross(up_vector);
    right_vector.normalized();
    tf::Quaternion q(right_vector, -1.0*acos(axis_vector.dot(up_vector)));
    q.normalize();
    tf::quaternionTFToMsg(q, msg);

    pose.pose.position.x = cloud2->points[i].x;
    pose.pose.position.y = cloud2->points[i].y;
    pose.pose.position.z = cloud2->points[i].z;

    pose.pose.orientation = msg;

    poseArray.poses.push_back(pose.pose);

  }

    poseArrayPub.publish(poseArray);
    ROS_INFO("poseArray size: %i", poseArray.poses.size());
    /***************************************************************/

}

int main (int argc, char** argv)
{
  // Initialize ROS
  ros::init (argc, argv, "normal_filter");
  ros::NodeHandle nh;

  ros::Rate loop_rate(10);
  // Create a ROS subscriber for the input point cloud
  ros::Subscriber ...
(more)
2014-08-06 14:36:23 -0500 commented answer PoseArray receiving data, not displaying in RViz

Added full code above.

2014-08-06 14:18:28 -0500 received badge  Notable Question (source)
2014-08-06 14:15:00 -0500 commented answer PoseArray receiving data, not displaying in RViz

So I've tried the script in your edit, and it works perfectly for me too. One thing to notice is that in RViz you can see (in the Displays panel) that messages are being received; I do not see that in RViz for my own code. A second follow-up question: the PoseArray is published from a separate node from my main node, which provides two pieces of information: a point cloud and the fixed frame. Could this have contributed? Should I move the entire piece of code into my main node?

2014-08-06 08:27:55 -0500 commented answer PoseArray receiving data, not displaying in RViz

I've added the topic results and frames.pdf to my original post. Let me know what else might help! Thanks!

2014-08-06 08:15:53 -0500 received badge  Popular Question (source)
2014-08-05 17:24:42 -0500 commented answer PoseArray receiving data, not displaying in RViz

Hi, thanks for the answer. I have tried it, to no avail, with a known, published "/map" (my fixed frame), which I can see in my /tf tree. The poses are _not_ coming from a sensor source; they are computed in code, and the values display correctly with "rostopic echo /poseArrayTopic" as mentioned in the original post. It is still not displaying in RViz. Could there be other reasons?

2014-08-05 14:55:37 -0500 asked a question PoseArray receiving data, not displaying in RViz

I am constructing a PoseArray message with a number of orientations (not from a sensor) and sending it to RViz for display. Using rostopic echo /poseArrayTopic, I see that all the messages are being received and have the correct values.

I believe the header is the issue. I have tried the following: poseArray.header.stamp = cloud->header.stamp; poseArray.header.frame_id = "/map";, running the code with and without specifying a fixed frame, and setting the stamp to ros::Time::now(); none of these seemed to work.

Anyone got an idea?
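For comparison, a minimal PoseArray publisher sketch that does show up in RViz (assumptions: the RViz fixed frame is set to "map" and the topic name matches the question; the node name and pose values are made up for illustration):

#include <ros/ros.h>
#include <geometry_msgs/PoseArray.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "pose_array_demo");   // hypothetical node name
  ros::NodeHandle nh;
  ros::Publisher pub = nh.advertise<geometry_msgs::PoseArray>("/poseArrayTopic", 1);
  ros::Rate rate(10);

  while (nh.ok())
  {
    geometry_msgs::PoseArray arr;
    arr.header.stamp = ros::Time::now();
    arr.header.frame_id = "map";   // must match an existing tf frame / the RViz fixed frame

    geometry_msgs::Pose p;
    p.position.x = 1.0;
    p.orientation.w = 1.0;         // identity quaternion; NaN or zero-length quaternions
                                   // are a typical reason RViz shows nothing
    arr.poses.push_back(p);

    pub.publish(arr);
    rate.sleep();
  }
  return 0;
}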

Edit: from rostopic echo /poseArrayTopic:

 - 
    position: 
      x: 6.28499984741
      y: 5.35500001907
      z: 0.495000004768
    orientation: 
      x: -0.0
      y: 0.668825093651
      z: -0.417154147527
      w: 0.615349828393
  - 
    position: 
      x: 6.31500005722
      y: 5.35500001907
      z: 0.495000004768
    orientation: 
      x: -0.0
      y: 0.237173257736
      z: -0.912005751022
      w: 0.334655578046

and etc.

my TF Tree: https://www.dropbox.com/s/4079bf2go1p...

Edit: I decided to attach my full node code here.

ros::Publisher poseArrayPub;
geometry_msgs::PoseArray poseArray; 

void 
cloud_cb (const sensor_msgs::PointCloud2ConstPtr& cloud)
{

  sensor_msgs::PointCloud2 output_normals;
  sensor_msgs::PointCloud2 cloud_normals;
  sensor_msgs::PointCloud2 cloud_filtered;
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud2 (new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZRGBNormal>::Ptr cloud_pass (new pcl::PointCloud<pcl::PointXYZRGBNormal>);
  pcl::PointCloud<pcl::PointXYZRGBNormal>::Ptr cloud_filtered2 (new pcl::PointCloud<pcl::PointXYZRGBNormal>);   


  pcl::fromROSMsg (*cloud, *cloud2);


  poseArray.poses.clear(); // Clear last block perception result
  poseArray.header.stamp = ros::Time::now();
  poseArray.header.frame_id = "/map";

  ROS_INFO_STREAM("poseArray.header: frame=" << poseArray.header.frame_id);
  // estimate normals
  pcl::NormalEstimation<pcl::PointXYZ, pcl::PointNormal> ne;
  ne.setInputCloud(cloud2);
  ne.setKSearch (24);
  pcl::PointCloud<pcl::PointNormal>::Ptr normals (new pcl::PointCloud<pcl::PointNormal>);
  ne.compute(*normals);

  // The number assignment in the for loop is not correct.... Can't tell why....
  for(size_t i = 0; i<normals->points.size(); ++i)
  {
    normals->points[i].x = cloud2->points[i].x;
    normals->points[i].y = cloud2->points[i].y;
    normals->points[i].z = cloud2->points[i].z;

    geometry_msgs::PoseStamped pose;
    geometry_msgs::Quaternion msg;

    // extracting surface normals
    tf::Vector3 axis_vector(normals->points[i].normal[0], normals->points[i].normal[1], normals->points[i].normal[2]);
    tf::Vector3 up_vector(0.0, 0.0, 1.0);

    tf::Vector3 right_vector = axis_vector.cross(up_vector);
    right_vector.normalized();
    tf::Quaternion q(right_vector, -1.0*acos(axis_vector.dot(up_vector)));
    q.normalize();
    tf::quaternionTFToMsg(q, msg);

    //adding pose to pose array
    pose.pose.position.x = normals->points[i].x;
    pose.pose.position.y = normals->points[i].y;
    pose.pose.position.z = normals->points[i].z;
    pose.pose.orientation = msg;
    poseArray.poses.push_back(pose.pose);
  }

  poseArrayPub.publish(poseArray);
  ROS_INFO("poseArray size: %i", poseArray.poses.size()); //this outputs 92161

}

int main (int argc, char** argv)
{
  // Initialize ROS
  ros::init (argc, argv, "normal_filter");
  ros::NodeHandle nh;

  ros::Rate loop_rate(60);

  // Create a ROS subscriber for the input point cloud
  ros::Subscriber sub = nh.subscribe ("/point_cloud_centers", 1, cloud_cb);
  poseArrayPub = nh.advertise<geometry_msgs::PoseArray>("/normal_vectors", 1);

  // Spin
  ros::spin();
  loop_rate.sleep();
}

Edit: the problem is in the numbers; when I do a straightforward assignment like this, it runs fine.

  for (int ...
(more)
2014-07-24 04:05:02 -0500 received badge  Notable Question (source)
2014-07-24 04:05:02 -0500 received badge  Famous Question (source)
2014-07-07 10:53:15 -0500 asked a question ray casting in humanoid_localization

I have the humanoid_localization node running with an octomap, a Hokuyo, and a Kinect. I am able to see the PoseArrays, the Pose, my tf transforms, etc. When I move the Kinect, I see that the Pose changes accordingly. The orientation isn't right (issue 1), and on top of that I keep getting the error:

ERROR: Raycasting in direction (0,0,0) is not possible!

(the same message is repeated many times)

As per this post answers.ros.org ... (more)

2014-07-07 10:47:27 -0500 received badge  Notable Question (source)
2014-07-07 10:47:27 -0500 received badge  Popular Question (source)
2014-07-03 08:39:24 -0500 asked a question rgbdslam provides transform from /map to /camera_link... what happens in localization?

I am looking at the frames that tf is providing. In rgbdslam, the tree shows /camera_link -> /map, which is ambiguous. I understand that /map is what octomap_server listens on. However, I have another node up and running that provides the transform chain /map -> /odom -> /base_link -> /camera_link, which is the tree suggested by REP 105. How do I reconcile the differences? Or are these conflicting transforms that I should run separately (i.e., obtain the octomap via rgbdslam first)?

Side note: I see that rgbdslam's fixed frame is /map. Is there a setting I can change so that the two trees are reconciled?

Thanks!