Convert depth image to sensor_msgs::PointCloud2
Hi all,
I have an Asus Xtion Pro RGB-D camera, and I'm using this node to acquire images, which has tremendous performance improvements w.r.t. OpenNI2.
To convert the depth image into a point cloud I'm following the draft tutorial at:
http://docs.ros.org/jade/api/sensor_m...
So far, this is what I have:
void frameCallback(const sensor_msgs::ImageConstPtr& msg) {
  if (!gotInfo) return;

  vector<float> points;
  cv_bridge::CvImageConstPtr image = cv_bridge::toCvShare(msg);
  for (int i = 0; i < image->image.rows; i++) {
    const ushort* row_ptr = image->image.ptr<ushort>(i);
    for (int j = 0; j < image->image.cols; j++) {
      ushort id = row_ptr[j];
      if (id != 0) {
        float d = depth_scale * id;
        Eigen::Vector3f image_point(j * d, i * d, d);
        Eigen::Vector3f camera_point = invK * image_point;
        points.push_back(camera_point.x());
        points.push_back(camera_point.y());
        points.push_back(camera_point.z());
      }
    }
  }
  int n_points = points.size();

  // Create a PointCloud2
  sensor_msgs::PointCloud2 cloud_msg;
  sensor_msgs::PointCloud2Modifier modifier(cloud_msg);
  modifier.setPointCloud2Fields(3, "x", 1, sensor_msgs::PointField::FLOAT32,
                                   "y", 1, sensor_msgs::PointField::FLOAT32,
                                   "z", 1, sensor_msgs::PointField::FLOAT32);
  modifier.setPointCloud2FieldsByString(1, "xyz");
  modifier.resize(n_points);
  sensor_msgs::PointCloud2Iterator<float> iter_x(cloud_msg, "x");
  sensor_msgs::PointCloud2Iterator<float> iter_y(cloud_msg, "y");
  sensor_msgs::PointCloud2Iterator<float> iter_z(cloud_msg, "z");
  cloud_msg.height = 1;
  cloud_msg.width = n_points;
  cloud_msg.header.frame_id = msg->header.frame_id;
  cloud_msg.header.seq = msg->header.seq;
  cloud_msg.header.stamp = msg->header.stamp;
  for (size_t i = 0; i < n_points; ++i, ++iter_x, ++iter_y, ++iter_z) {
    *iter_x = points[3 * i + 0];
    *iter_y = points[3 * i + 1];
    *iter_z = points[3 * i + 2];
    cerr << *iter_x << " " << *iter_y << " " << *iter_z << endl;
  }
  cloud_pub.publish(cloud_msg);
  std::cerr << ".";
}
but in RViz I see three point clouds, and if I print the generated points I can see that they are triplicated. I'm fairly sure the bug is in my use of sensor_msgs::PointCloud2Iterator, but since the tutorial I mentioned doesn't give better clues on how to use the iterators, I'm stuck.
Does anyone know the correct way to populate the sensor_msgs::PointCloud2?
Thanks
Mind me asking, what does the depth_scale variable refer to?
It's a conversion factor; it's needed when depth images are stored as CV_16U OpenCV matrices, where each raw value is the depth in millimetres. It should be depth_scale = 1e-3 to convert to metres.
Hi, can someone tell me what invK is? I am trying to understand the calculation and how the code works.
@Brolin invK is the inverse of the camera matrix K: http://ksimek.github.io/2013/08/13/in...
@schizzz8 So invK is the inverse of the camera matrix, right?