I'm no expert on the openni stack in ROS, so you may get better answers later.
First, determine which topics and message types your object recognition node needs. For my application, these are `/camera/rgb/image_color`, `/camera/depth_registered/points`, `/camera/depth_registered/camera_info`, and `/camera/depth_registered/image`. I think some of the tabletop object detection applications only require the point cloud. You probably don't need to simulate the whole set of nodelets that you'd get with `roslaunch openni_launch openni.launch`.
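If your simulator already publishes its data under different topic names, a remap in your launch file can present them under the openni-style names the recognition node expects. A minimal sketch (the `my_simulator` package, `sim_camera` node, and its output topic names are hypothetical placeholders for your own setup):

```xml
<launch>
  <!-- "my_simulator" / "sim_camera" are placeholders for your own node -->
  <node pkg="my_simulator" type="sim_camera" name="sim_camera" output="screen">
    <!-- present the simulator's output under the topic names that
         openni_launch would normally provide -->
    <remap from="rgb_out"    to="/camera/rgb/image_color"/>
    <remap from="points_out" to="/camera/depth_registered/points"/>
  </node>
</launch>
```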
Next, convert the information you have from your simulator into `sensor_msgs/PointCloud2` and/or `sensor_msgs/Image` messages. The best way to do this will depend on your incoming data format. If you have to send images, I find it easier to work with `cv::Mat` as the data type and use cv_bridge to convert to a ROS message just before publishing.
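For the point cloud case, the `data` field of a `sensor_msgs/PointCloud2` is just a packed byte buffer whose layout is described by the message's `fields`. Leaving the ROS message classes aside, the packing step itself can be sketched with Python's `struct` module, assuming the common layout of three little-endian float32 fields (x, y, z) at offsets 0, 4, and 8:

```python
import struct

def pack_xyz(points):
    """Pack (x, y, z) tuples into the byte layout of a PointCloud2
    with three float32 fields at offsets 0, 4, 8 (point_step = 12)."""
    # '<fff' = little-endian, three 32-bit floats per point
    return b''.join(struct.pack('<fff', x, y, z) for x, y, z in points)

points = [(0.0, 0.1, 1.5), (0.2, -0.1, 1.4)]
data = pack_xyz(points)
# each point occupies point_step = 12 bytes
assert len(data) == 12 * len(points)
```

In the actual message you would also fill in `height`/`width`, `point_step = 12`, `row_step = point_step * width`, and the three `PointField` entries; `is_bigendian` stays `False` to match the `<` format used above.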