Hi,

I found a solution for the problem and want to present it here, in order to motivate you to improve it. :-) I used the

  • openni
  • openni_launch
  • depth_image_proc
  • pcl_ros
  • pcl_conversions

packages for ROS Hydro on Kubuntu 12.10. I used an ASUS Xtion for my setup.

1. Launch file

The solution combines four core elements:

  1. a nodelet generating images and point clouds from the sensor
  2. a filter merging the 3D point cloud and the RGB image information
  3. a filter that evaluates the depth values and adapts the RGB values
  4. a transformation that generates an image from the PointCloud2.

The launch file at the end of this description illustrates the start-up of these elements. I want to add some comments:

1. The standard launch file $(find openni_launch)/launch/openni.launch has to be adapted, because the following filter only accepts image and cloud data with the same frame id. I changed lines 7 and 8 of that file

 <arg name="rgb_frame_id"   default="$(arg tf_prefix)/$(arg camera)_rgb_optical_frame" />
 <arg name="depth_frame_id" default="$(arg tf_prefix)/$(arg camera)_depth_optical_frame" />

to

 <arg name="rgb_frame_id"   default="$(arg tf_prefix)/$(arg camera)_optical_frame" />
 <arg name="depth_frame_id" default="$(arg tf_prefix)/$(arg camera)_optical_frame" />

in a new file myopenni.launch.

2. The depth_image_proc package offers nodelets for PointCloud2 processing. We use one of them to fuse the 3D point cloud data with the corresponding RGB information. Have a look at the documentation at http://wiki.ros.org/depth_image_proc

3.+4. The last two points are combined in the pointCloudToImage node. I wrote some lines of C++ that convert the ROS message to a pcl::PCLPointCloud2 and then to a pcl::PointCloud<pcl::PointXYZRGB> object. The node blacks out the RGB values of all points whose z coordinate lies outside a given range. Afterwards the point cloud is converted to a ROS image.

   #include <iostream>
   #include "ros/ros.h"
   #include <sensor_msgs/PointCloud2.h>
   #include <sensor_msgs/Image.h>
   #include <pcl/point_types.h>
   #include "pcl_conversions/pcl_conversions.h"

   // Points outside this z range (in meters) get their color blacked out.
   #define Z_MIN 1.0
   #define Z_MAX 1.5

   ros::Publisher pub_cloud;
   ros::Publisher pub_image;

   void cloudCallback(const sensor_msgs::PointCloud2 &ros_cloud)
   {
     // ROS message -> pcl::PCLPointCloud2 -> pcl::PointCloud<pcl::PointXYZRGB>
     pcl::PCLPointCloud2 pcl_pc2;
     pcl_conversions::toPCL(ros_cloud, pcl_pc2);
     pcl::PointCloud<pcl::PointXYZRGB>::Ptr pcl_cloud(new pcl::PointCloud<pcl::PointXYZRGB>);
     pcl::fromPCLPointCloud2(pcl_pc2, *pcl_cloud);

     uint8_t r = 0;
     uint8_t g = 0;
     uint8_t b = 0;
     int32_t rgb = (r << 16) | (g << 8) | b;

     // Black out every point outside [Z_MIN, Z_MAX]. Note: assigning this
     // integer to the float rgb field only works here because the bit
     // pattern of 0.0f is all zeros, i.e. black.
     for (size_t i = 0; i < pcl_cloud->points.size(); ++i)
     {
       if ((pcl_cloud->points[i].z > Z_MAX) or (pcl_cloud->points[i].z < Z_MIN))
       {
         pcl_cloud->points[i].rgb = rgb;
       }
     }

     // Back to a ROS PointCloud2 and publish it.
     pcl::toPCLPointCloud2(*pcl_cloud, pcl_pc2);
     sensor_msgs::PointCloud2 ros_pc2;
     pcl_conversions::fromPCL(pcl_pc2, ros_pc2);
     ros_pc2.header.frame_id = "camera_optical_frame";
     pub_cloud.publish(ros_pc2);

     // Extract the rgb field of the (organized) cloud into an image and publish it.
     pcl::PCLImage pcl_image;
     pcl::toPCLPointCloud2(pcl_pc2, pcl_image);
     sensor_msgs::Image ros_image;
     pcl_conversions::fromPCL(pcl_image, ros_image);
     pub_image.publish(ros_image);
   }

   int main(int argc, char **argv)
   {
     ros::init(argc, argv, "PointCloud2Filter");
     ros::NodeHandle n;
     // The topic names below are remapped in the launch file.
     ros::Subscriber sub = n.subscribe("input", 1000, cloudCallback);
     pub_cloud = n.advertise<sensor_msgs::PointCloud2>("output_cloud", 30);
     pub_image = n.advertise<sensor_msgs::Image>("output_image", 30);
     ros::spin();
     return 0;
   }

And the launch file ...

 <launch>
   <include file="$(find RoboMix)/launch/myopenni.launch" />

  <node pkg="nodelet" type="nodelet" name="pcl_manager" args="manager" output="screen" />
  <node pkg="nodelet" type="nodelet" name="fusion" args="load depth_image_proc/point_cloud_xyzrgb pcl_manager" output="screen">
    <remap from="rgb/camera_info" to="/camera/rgb/camera_info"/>
    <remap from="rgb/image_rect_color" to="/camera/rgb/image_rect_color"/>
    <remap from="depth_registered/image_rect" to="/camera/depth/image_rect"/>
    <remap from="depth_registered/points" to="/XYZRGBcloud"/>
  </node>

  <node pkg="RoboMix" type="pointCloudToImage" name="ImageFilter">
    <remap from="input" to="/XYZRGBcloud" />
    <remap from="output_image" to="/resulting_image" />
  </node>

  <node pkg="image_view" type="image_view" name="Visualisation" args="image:=/resulting_image" output="screen">
  </node>   

</launch>
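
To build the pointCloudToImage node, the catkin build configuration needs roughly the following. This is a sketch only: the package name RoboMix and the source path src/pointCloudToImage.cpp are assumptions, so adjust them to your package layout.

```cmake
# CMakeLists.txt fragment (sketch) for the hypothetical RoboMix package
find_package(catkin REQUIRED COMPONENTS
  roscpp
  sensor_msgs
  pcl_ros
  pcl_conversions
)
find_package(PCL REQUIRED)

catkin_package()

include_directories(${catkin_INCLUDE_DIRS} ${PCL_INCLUDE_DIRS})

add_executable(pointCloudToImage src/pointCloudToImage.cpp)
target_link_libraries(pointCloudToImage ${catkin_LIBRARIES} ${PCL_LIBRARIES})
```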

Feel free to improve!
