
Yo's profile - activity

2014-01-28 17:25:41 -0500 marked best answer Orthogonal Distance Regression Plane for a given PointCloud -- Am I Doing This Correctly?

[Note: At karthik's suggestion, I have also posted this to the PCL-users list.]

I've written the following method to compute the orthogonal distance regression plane for a given PointCloud, using the method detailed here.

However, visualizing it in rviz has proven to be a bit tricky, probably mostly due to my inexperience with quaternions. What I've got below seems to mostly work, but in some cases the plane displayed by rviz (represented by a set of PoseStamped axes oriented along the plane's normal vector -- call this vector n) is more off than I would expect, so I'm still unsure whether I'm doing everything properly.

Would someone please help check me on this?

The two major places I'm uncertain about are my implementation of the fitting algorithm from the above link (e.g. am I using Eigen::JacobiSVD and the related matrices to compute the right quantities?) and my calculation of the orientation quaternion.

Here's the idea behind my calculation of the quaternion. As described on Wikipedia, the quaternion q is related to a rotation by:

q = cos(a/2) + u * sin(a/2)

Where a is the angle to rotate by, and u is the unit vector about which to rotate.

So I assumed the 'starting' vector would be (0,0,1), which should be rotated by q until it is pointing in the same direction as the plane normal. Thus, the rotation axis u is the cross product of (0,0,1) and n, and a is the angle between them, computed as a = arcsine((0,0,1) DOT n).

Thank you!

void Converter::publishLSPlane(pcl::PointCloud<pcl::PointXYZ> points, std_msgs::Header header) {

    if (points.size() >= 3) {

        Eigen::MatrixXd m(points.size(), 3);

        // Compute centroid
        double centroid_x = 0;
        double centroid_y = 0;
        double centroid_z = 0;
        BOOST_FOREACH(const pcl::PointXYZ p, points) {
            centroid_x += p.x;
            centroid_y += p.y;
            centroid_z += p.z;
        }
        centroid_x /= points.size();
        centroid_y /= points.size();
        centroid_z /= points.size();

        // Define m
        int i=0;
        BOOST_FOREACH(const pcl::PointXYZ p, points) {
            m(i,0) = p.x - centroid_x;
            m(i,1) = p.y - centroid_y;
            m(i,2) = p.z - centroid_z;
            i++;
        }

        // Compute SVD
        Eigen::JacobiSVD<Eigen::MatrixXd> svd(m, Eigen::ComputeThinV);

        const int last = svd.cols() - 1;
        double normal_x = svd.matrixV()(0,last);
        double normal_y = svd.matrixV()(1,last);
        double normal_z = svd.matrixV()(2,last);

        ROS_INFO("Centroid: [%f, %f, %f]", centroid_x, centroid_y, centroid_z);
        ROS_INFO("Normal:   [%f, %f, %f]\n", normal_x, normal_y, normal_z);

        // Publish the normal for display in rviz
        geometry_msgs::PoseStamped normal;
        normal.header = header;
        normal.pose.position.x = centroid_x;
        normal.pose.position.y = centroid_y;
        normal.pose.position.z = centroid_z;

        // Yes I realize some of the math here could probably be simplified using 
        // trig identities, but for now I just want it to work. ;)
        normal.pose.orientation.w = std::cos(0.5 * std::asin(normal_z));
        normal.pose.orientation.x = -normal_y * std::sin(0.5 * std::asin(normal_z));
        normal.pose.orientation ...
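For comparison, here is a sketch of how I think the same orientation could be computed with Eigen's setFromTwoVectors, which (if I understand it correctly) does the cross-product, angle, and normalization bookkeeping internally, so there is no asin-versus-acos decision to get wrong. The helper function name is just something I made up for illustration:

#include <Eigen/Geometry>
#include <std_msgs/Header.h>
#include <geometry_msgs/PoseStamped.h>

// Sketch only: build the pose whose z-axis points along the fitted plane normal.
geometry_msgs::PoseStamped makeNormalPose(double cx, double cy, double cz,
                                          double nx, double ny, double nz,
                                          const std_msgs::Header& header) {
    // Rotate the unit z-axis onto the (not necessarily unit-length) normal;
    // setFromTwoVectors normalizes its inputs internally.
    Eigen::Quaterniond q;
    q.setFromTwoVectors(Eigen::Vector3d::UnitZ(), Eigen::Vector3d(nx, ny, nz));

    geometry_msgs::PoseStamped pose;
    pose.header = header;
    pose.pose.position.x = cx;
    pose.pose.position.y = cy;
    pose.pose.position.z = cz;
    pose.pose.orientation.x = q.x();
    pose.pose.orientation.y = q.y();
    pose.pose.orientation.z = q.z();
    pose.pose.orientation.w = q.w();
    return pose;
}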
2014-01-28 17:23:29 -0500 marked best answer How to Properly Orient Kinect Data from Turtlebot in RVIZ?

This has been bugging me for a while now, and it's starting to make things quite difficult.

The point cloud data from the Turtlebot (and, if I remember correctly, stuff like the odometry data) comes in with 'depth' oriented along the z-axis, while 'depth' relative to the Turtlebot lies in the x/y plane.

How do I get this to display and work properly together, especially in rviz?

Currently in rviz, the point cloud emanates from the 'top' of the Turtlebot, and points up in the air.

I'm sure there's got to be some sort of quick solution to this, so I figured I'd better ask before I go writing all of my own nodes and stuff to try to compensate for it.

My best guess is that there's some simple setup step I'm not doing properly. If this isn't actually a problem for you -- if it 'just works' for you -- would you kindly walk me through how you set up and run your Turtlebot? I'm pretty sure I'm following the Turtlebot bringup tutorials properly, but from what I can tell they don't cover making it work with rviz.
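In case it's relevant, the workaround I've been considering is publishing a static transform myself that rotates the optical convention (z forward, x right, y down) into the robot convention (x forward, y left, z up), something like the line below. I'm only guessing at the frame names and the 0.3 m sensor height, so please correct me if the real Turtlebot tf tree already provides this:

$ rosrun tf static_transform_publisher 0 0 0.3 -1.5707963 0 -1.5707963 /base_link /openni_camera 100

(The arguments are x y z yaw pitch roll, followed by the parent frame, the child frame, and the publish period in milliseconds.)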

Thanks in advance!

2013-01-07 17:55:24 -0500 received badge  Famous Question (source)
2013-01-07 17:55:24 -0500 received badge  Notable Question (source)
2012-12-05 01:35:04 -0500 received badge  Great Question (source)
2012-11-27 05:43:46 -0500 received badge  Notable Question (source)
2012-11-27 05:43:46 -0500 received badge  Popular Question (source)
2012-11-27 05:43:46 -0500 received badge  Famous Question (source)
2012-10-19 08:47:34 -0500 received badge  Taxonomist
2012-08-22 22:55:42 -0500 received badge  Good Question (source)
2012-08-22 20:04:28 -0500 received badge  Famous Question (source)
2012-08-16 04:55:37 -0500 received badge  Famous Question (source)
2012-08-14 07:06:08 -0500 received badge  Popular Question (source)
2012-08-02 22:59:22 -0500 received badge  Nice Question (source)
2012-07-11 06:07:32 -0500 received badge  Notable Question (source)
2012-07-11 00:35:24 -0500 marked best answer [PARTLY UNSOLVED] Raw Kinect Depth Data

I'd like to get the raw depth data off of the Kinect on my Turtlebot, using ROS.

What I've been doing is running roslaunch openni_camera openni_node.launch, and then looking at the published /camera/depth/image_raw.

Online research has indicated that the Kinect should spit out 11-bit values. However, it seems this topic spits out something like 14-bit values...

I made a histogram of all the depth values in 183 depth images I collected, covering a wide variety of depths. The 16-bit values that come out of the /camera/depth/image_raw topic (actually 8-bit values that I have to reassemble into 16-bit) take the value zero plus a seemingly logarithmic distribution of exactly 799 values between 342 and 9757 -- exactly 800 distinct values in total.

So I'm confused about whether the values I'm getting are actually the "raw" output from the Kinect (which I want), or if something between me and the Kinect is doing some processing to it that I don't want done. (I'm using it for some very specific scientific mapping applications, and I want to know exactly what is happening to my data.)

What I've found online says that the values should range from 0 to 2^11.

If the data is straight off the Kinect, then it's odd that 2^13 < 9757 < 2^14.

If the data is simply being scaled or something, then I'd think that there'd be somewhere between 2^10 and 2^11 different values, instead of exactly 800.

Can anyone explain to me how to get the real raw data off the Kinect using OpenNI through ROS, or if I'm doing it correctly, can you explain what's going on and why it's not coming out as I would expect?

Thank you so much!

Update: sebsch answered the first part -- No, the OpenNI driver doesn't publish what I want. But it seems the suggested package won't work for my needs either. Still searching for a way to do it.
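For reference, this is roughly how I am reassembling the 8-bit pairs into 16-bit values for the histogram; the node and topic names are just what I happen to use, and corrections are welcome if I'm combining the bytes incorrectly:

#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <stdint.h>

// Sketch: rebuild the 16-bit depth values from /camera/depth/image_raw and
// print the nonzero min/max of each frame.
void depthCallback(const sensor_msgs::ImageConstPtr& msg) {
    uint16_t min_val = 65535, max_val = 0;
    for (uint32_t row = 0; row < msg->height; ++row) {
        const uint8_t* line = &msg->data[row * msg->step];
        for (uint32_t col = 0; col < msg->width; ++col) {
            // Combine the two bytes according to the advertised byte order.
            uint16_t d = msg->is_bigendian
                ? (uint16_t)((line[2*col] << 8) | line[2*col + 1])
                : (uint16_t)((line[2*col + 1] << 8) | line[2*col]);
            if (d == 0) continue;  // zero appears to mean "no reading"
            if (d < min_val) min_val = d;
            if (d > max_val) max_val = d;
        }
    }
    ROS_INFO("Depth range this frame: %u .. %u", (unsigned)min_val, (unsigned)max_val);
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "depth_inspector");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/camera/depth/image_raw", 1, depthCallback);
    ros::spin();
    return 0;
}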

2012-06-02 11:07:15 -0500 received badge  Notable Question (source)
2012-04-23 16:50:49 -0500 received badge  Popular Question (source)
2012-04-10 23:17:12 -0500 received badge  Famous Question (source)
2012-04-06 16:34:00 -0500 received badge  Popular Question (source)
2012-04-04 06:21:52 -0500 commented answer Minimal Topics for RGBDSLAM?

I'm using openni_launch, since, as I noted in Edit 2, openni_camera was not providing registered depth images. I'm also using ROS Electric. I'm not using /camera/rgb/points because I want to keep the bag filesize as low as possible.

2012-03-31 04:56:39 -0500 received badge  Stellar Question (source)
2012-03-31 03:19:11 -0500 received badge  Great Question (source)
2012-03-27 06:57:15 -0500 commented answer Minimal Topics for RGBDSLAM?

Awesome, thank you, Felix! :) I'm still having some trouble however -- please see Edit 2 in the question. :)

2012-03-27 06:57:09 -0500 edited question Minimal Topics for RGBDSLAM?

I'd like to record some bag files of Kinect data and then run RGBDSLAM on them later. (The computer I'm doing the collection with isn't up to doing the RGBDSLAM itself.)

The problem is that I want to keep the bag files as small as possible, so I want to collect the minimal amount of data necessary.

In particular, that means I would like to record to the bag file the smallest number of topics possible, and preferably only image topics and the like -- PointCloud2 messages are quite large in comparison.

Can someone tell me exactly which topics are necessary for running RGBDSLAM off a bag file, and walk me through how to get RGBDSLAM to use only them? (Also, I'm a little confused about whether I should be using the dynamic_reconfigure reconfigure_gui to turn on OpenNI's depth registration or not...)

I thought I needed only the RGB and depth image topics, and possibly the camera info topics, but when I tried running RGBDSLAM off of only those, it kept saying it was waiting for various other things. I think maybe I just need to change the parameters in the launch file to tell it that those are the only inputs I want it to use, but I can't figure out how to do that properly.

Any help would be greatly appreciated! :)

Edit:

As a specific example of one of the things I've tried, I ran the following:

$ rosbag record /camera/depth_registered/image_rect_raw /camera/rgb/image_rect_color /camera/rgb/camera_info

I then ran the RGBDSLAM GUI and pressed the space key. The status bar displayed:

Processing. Waiting for motion information.

I then started playback of the bag file. Nothing changed.
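One thing I haven't ruled out is a timing problem: if RGBDSLAM needs simulated time when fed from a bag, I assume the playback would need to look roughly like this instead of a plain rosbag play (the filename is a placeholder) -- please correct me if that's wrong:

$ rosparam set use_sim_time true
$ rosbag play --clock /home/full/path/and/filename.bag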

Edit 2:

I followed Felix's directions, and everything worked swell... except for one problem. The depth images were not registered with the RGB images, so the textures in the resulting map were all incorrect.

So I tried again, this time using roslaunch openni_launch openni.launch, using the dynamic_reconfigure reconfigure_gui to turn on depth registration (/camera/driver --> depth registration checkbox), and then using the following call to rosbag to record the bag file:

$ rosbag record -o /home/full/path/and/filename_prefix /tf /camera/rgb/image_color /camera/rgb/camera_info /camera/depth_registered/image /camera/depth/camera_info

I then ran the following launch file on this bag file (pressing the spacebar to start the processing once the GUI came up):

<launch>
  <node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" cwd="node" required="false" output="log" >
    <!-- see rgbslam_sample_config.launch for all available parameters and their default values -->
    <param name="config/topic_image_mono"    value="/camera/rgb/image_color"/>
    <param name="config/topic_image_depth"   value="/camera/depth_registered/image"/>
    <param name="config/topic_points"        value=""/> <!-- if empty, pointcloud will be reconstructed from image and depth -->
    <param name="config/camera_info_topic"   value="/camera/rgb/camera_info"/>
    <param name="config/bagfile_name"        value="/home/full/path/and/filename.bag"/>
    <param name="config/start_paused"        value="true"/>
    <param name="config/wide_topic"          value=""/>
    <param name="config/wide_cloud_topic"    value=""/>
    <param name="config/drop_async_frames"   value="true"/> <!-- Check association of ...
2012-03-25 13:24:11 -0500 received badge  Favorite Question (source)
2012-03-23 06:58:01 -0500 marked best answer Typical Methods of Building Large 3D Maps?

I'd like to build a 3D map of our lab (a number of rooms and hallways and cubicles, etc) using a Kinect and/or Turtlebot.

I've looked into a number of methods (right now I'm looking at possibly using RGBDSLAM), but I'd like to know if there are any typical methods people use.

Is RGBDSLAM a good choice for this? Are there other standard methods people use? What are their pros and cons?

(Related side-note: I tried using a Turtlebot with gmapping to build a 2D map of one section of the office, and it came out horribly. It also refused to work properly with any sort of loop closures involving hallways, I think largely because of the Turtlebot's horrible odometry.)

Edit: I Heart Robotics provided some very useful information, but if anyone has more information to offer, I'd very much appreciate it! :)