
yayafree186's profile - activity

2012-11-07 21:01:32 -0500 received badge  Famous Question (source)
2012-11-07 21:01:32 -0500 received badge  Popular Question (source)
2012-11-07 21:01:32 -0500 received badge  Notable Question (source)
2012-09-18 17:10:23 -0500 received badge  Popular Question (source)
2012-09-18 17:10:23 -0500 received badge  Notable Question (source)
2012-09-18 17:10:23 -0500 received badge  Famous Question (source)
2011-12-29 19:17:57 -0500 received badge  Editor (source)
2011-12-29 19:14:10 -0500 asked a question question about publishXYZPointCloud() in openni_camera

I'm using the newest version of openni_kinect from http://www.ros.org/wiki/openni_kinect. To find out how the point cloud is computed from the depth image, I read through the source code and found that openni_nodelet.cpp has a function publishXYZPointCloud():

centerX = (cloud_msg->width >> 1) - 0.5f;
centerY = (cloud_msg->height >> 1) - 0.5f;
float constant = 0.001 / device_->getDepthFocalLength (depth_width_);
pt.x = (u - centerX) * depth_md[depth_idx] * constant;
pt.y = (v - centerY) * depth_md[depth_idx] * constant;
pt.z = depth_md[depth_idx] * 0.001;

According to the pinhole camera model, this function does the right job. But the question is, it seems to use the camera's built-in intrinsic parameters, i.e. centerX = (depth_width_ >> 1) - 0.5f; centerY = (depth_height_ >> 1) - 0.5f;, and openni_device.cpp also hard-codes rgb_focal_length_SXGA_ = 1050;, so a calibration file cannot affect the result. I think this is a bug. What's your opinion?
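To make my point concrete, here is how I would expect the back-projection to look if it used calibrated intrinsics (fx, fy, cx, cy, e.g. taken from a camera_info message) instead of the built-in image center and hard-coded focal length. This is just a sketch; the function name and the intrinsic values in the comment are my own illustration, not code from openni_camera:

```cpp
#include <cassert>
#include <cmath>

struct Point3f { float x, y, z; };

// Hypothetical back-projection with calibrated intrinsics instead of the
// hard-coded centerX/centerY and rgb_focal_length_SXGA_ = 1050.
// depth_mm is the raw depth value in millimetres.
Point3f backProject(int u, int v, float depth_mm,
                    float fx, float fy, float cx, float cy)
{
    Point3f pt;
    pt.z = depth_mm * 0.001f;     // millimetres -> metres
    pt.x = (u - cx) * pt.z / fx;  // pinhole model: x = (u - cx) * z / fx
    pt.y = (v - cy) * pt.z / fy;
    return pt;
}
```

With typical Kinect-like values (fx = fy = 525, cx = 320, cy = 240, which are only assumed here), a pixel at the principal point with depth 1000 mm should come back as (0, 0, 1).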

Question 2: is the z value we get from the depth image the distance from the target point to the camera origin, or only the z-coordinate of that point? Imagine an experiment: we have a plane parallel to the camera's image plane. If z is the latter, every point on the plane has the same z; on the other hand, if z is the former, points on the plane have different z values because their distances to the camera origin differ. The follow-up problem is: if z is the latter, the formula in publishXYZPointCloud() is right; otherwise it is wrong.
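The distinction in my thought experiment can be checked numerically. The helper below back-projects a pixel under the assumption that z is the z-coordinate (projective depth) and returns the Euclidean distance to the camera origin; if that assumption holds, a fronto-parallel plane reports the same z at every pixel while the Euclidean distance grows towards the image corners. The intrinsics (fx = fy = 525, cx = 320, cy = 240) are only assumed for illustration:

```cpp
#include <cassert>
#include <cmath>

// Assuming the depth value z is the z-coordinate of the point (not the ray
// length), back-project pixel (u, v) and return the Euclidean distance from
// the point to the camera origin.
float euclideanDistance(int u, int v, float z,
                        float fx, float fy, float cx, float cy)
{
    float x = (u - cx) * z / fx;
    float y = (v - cy) * z / fy;
    return std::sqrt(x * x + y * y + z * z);
}
```

At the image center the distance equals z, while at the corner pixel (0, 0) it is strictly larger, which is exactly the difference between the two interpretations.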

Question 3: to get an XYZRGB point cloud, we need to align the depth image to the RGB image. How is this done? I read through the code in openni_camera but didn't find any clue. Or maybe this is done in the Kinect's hardware? What's more, to do the alignment I think we need the calibration between the IR camera and the RGB camera, so if it is done in the Kinect's hardware, we cannot change it through calibration.
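This is what I would expect the alignment math to look like, whether it happens in hardware or in software. R and t below are a hypothetical extrinsic calibration (rotation and translation from the IR frame to the RGB frame); since I could not find this in openni_camera, everything here is my own sketch, not the package's actual code:

```cpp
#include <cassert>
#include <cmath>

struct Pixel { int u, v; };

// Sketch of depth-to-RGB registration: transform a 3D point from the IR
// (depth) camera frame into the RGB camera frame, then project it with the
// RGB intrinsics to find the matching colour pixel.
// R is a row-major 3x3 rotation, t a translation in metres.
Pixel registerDepthPoint(float x, float y, float z,
                         const float R[9], const float t[3],
                         float fx_rgb, float fy_rgb,
                         float cx_rgb, float cy_rgb)
{
    // p_rgb = R * p_ir + t
    float xr = R[0]*x + R[1]*y + R[2]*z + t[0];
    float yr = R[3]*x + R[4]*y + R[5]*z + t[1];
    float zr = R[6]*x + R[7]*y + R[8]*z + t[2];
    // project with the RGB pinhole intrinsics
    Pixel p;
    p.u = (int)std::lround(fx_rgb * xr / zr + cx_rgb);
    p.v = (int)std::lround(fy_rgb * yr / zr + cy_rgb);
    return p;
}
```

As a sanity check, with an identity rotation and zero translation a point on the optical axis should land on the RGB principal point, so with this kind of function a recalibrated R and t would directly change the alignment, which is the heart of my question.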

Sorry for so many questions; I don't know if I've made myself clear. Thanks in advance for any comments.

2011-12-27 10:34:36 -0500 asked a question problem with gicp in rgbdslam

Hi there. I managed to get rgbdslam running with both gicp_code and gicp_bin. The problem is, when using gicp_code the program runs for some time (about 3 or 4 seconds) and then shuts down. When I use gicp_bin, after I hit the space bar I get: Non-zero return value from ./gicp/test_gicp... Identity transformation is returned instead of GICP result... sh: ./gicp/test_gicp: not found. Could anybody figure out why? I'm using the rgbdslam checked out from openslam.org. My CPU is an i5 with 4 GB of memory, running Ubuntu 10.04.

thanks in advance.