
Simulate an OpenNI-compatible camera

asked 2013-10-21 13:57:01 -0500 by Hendrik Wiese

updated 2013-11-18 16:47:52 -0500 by tfoote

And again, hi folks,

It seems that for built-in object recognition, an OpenNI-compatible camera is the way to go. I don't have such a camera here at home, but what I do have is a simulator that's capable of producing simulated captured image data (RGB and depth).

I could send that data to ROS. But I have no clue how to convert it so that ROS interprets it as real image data coming from an OpenNI-compatible camera.

Any idea how to do that, i.e. how to use simulated image/depth data for OpenNI and hence for object recognition?

Thanks again and again and again! You're a great community!

Cheers, Hendrik


2 Answers


answered 2013-10-22 14:39:32 -0500 by lindzey

I'm no expert on the openni stack in ROS, so you may get better answers later.

First, determine which topics and message types your object recognition node needs. For my application, these are /camera/rgb/image_color, /camera/depth_registered/points, /camera/depth_registered/camera_info, and /camera/depth_registered/image. I think some of the tabletop object detection applications only require the point cloud. You probably don't need to simulate the whole set of nodelets that you'd get with roslaunch openni_launch openni.launch.
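For illustration, a minimal sketch of a fake camera node that just advertises those topics (the node name and the idea of publishing from your TCP receive code are my assumptions, not anything from an existing package):

    #include <ros/ros.h>
    #include <sensor_msgs/Image.h>
    #include <sensor_msgs/CameraInfo.h>
    #include <sensor_msgs/PointCloud2.h>

    int main(int argc, char** argv)
    {
      // "fake_openni_camera" is a made-up node name for this sketch.
      ros::init(argc, argv, "fake_openni_camera");
      ros::NodeHandle nh;

      // Advertise only the topics your recognition node actually subscribes to,
      // using the same names the openni_launch driver would use.
      ros::Publisher rgb_pub    = nh.advertise<sensor_msgs::Image>("/camera/rgb/image_color", 1);
      ros::Publisher depth_pub  = nh.advertise<sensor_msgs::Image>("/camera/depth_registered/image", 1);
      ros::Publisher info_pub   = nh.advertise<sensor_msgs::CameraInfo>("/camera/depth_registered/camera_info", 1);
      ros::Publisher points_pub = nh.advertise<sensor_msgs::PointCloud2>("/camera/depth_registered/points", 1);

      // Actual publishing would happen wherever your simulator data arrives,
      // e.g. in a TCP receive loop or callback.
      ros::spin();
      return 0;
    }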

Next, convert the information you get from your simulator into sensor_msgs/PointCloud2 and/or sensor_msgs/Image messages. The best way to do this will depend on your incoming data format. If you have to send images, I find it easiest to work with cv::Mat as a data type and use cv_bridge to convert to a ROS message just before publishing.
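A minimal sketch of that cv_bridge route, assuming the simulator frame has already been copied into a cv::Mat (the publishFrame helper and the frame_id are assumptions you'd adapt to your setup):

    #include <ros/ros.h>
    #include <cv_bridge/cv_bridge.h>
    #include <opencv2/core/core.hpp>
    #include <sensor_msgs/image_encodings.h>

    // Hypothetical helper: called once per RGB frame received from the simulator.
    void publishFrame(ros::Publisher& pub, const cv::Mat& rgb)
    {
      cv_bridge::CvImage bridge;
      bridge.header.stamp    = ros::Time::now();
      bridge.header.frame_id = "camera_rgb_optical_frame";  // assumed frame id
      bridge.encoding        = sensor_msgs::image_encodings::RGB8;
      bridge.image           = rgb;
      pub.publish(bridge.toImageMsg());  // converts to sensor_msgs/Image
    }

The same pattern should work for the depth image, with the encoding set to 16UC1 or TYPE_32FC1 depending on what your simulator provides.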


Comments

Okay, I have raw RGB8 image data available from the simulator. I send it to a ROS node via TCP, which converts it to sensor_msgs/Image. I can see the image in RViz (it's upside down, but I'll deal with that later). I'll try to combine that with the object recognition stack and keep you informed. Thx!
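For the upside-down image, a single cv::flip before publishing usually does it; a minimal sketch (flipUpsideDown is a hypothetical helper, with raw standing in for the frame as received):

    #include <opencv2/core/core.hpp>

    // Flip a frame that arrives upside down from the simulator.
    cv::Mat flipUpsideDown(const cv::Mat& raw)
    {
      cv::Mat flipped;
      cv::flip(raw, flipped, 0);  // flipCode 0 mirrors around the x-axis
      return flipped;
    }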

Hendrik Wiese ( 2013-10-22 17:58:01 -0500 )

answered 2013-10-22 03:32:59 -0500 by davinci

Try simulating a Kinect in Gazebo. That way you get a simulated world with a Kinect generating data from it.


Comments

That would require switching to Gazebo, which I'd like to avoid since I already have a working simulator (V-REP by Coppelia Robotics). I can send raw image data to ROS via TCP, but I don't know how to translate that data into an image format that's compatible with `openni`.

Hendrik Wiese ( 2013-10-22 04:07:47 -0500 )
