depth wrong using openni_camera depth image

asked 2011-03-02 07:35:08 -0500 by Bruce

updated 2016-10-24 09:02:14 -0500 by ngrennan

Hi,

I installed Ubuntu 10.10 natively on my MacBook Pro, along with ROS unstable and openni_kinect. (I know unstable is a little old, but I am sure that is not the reason.)

I can only get a very low frame rate from the Kinect camera: with the command rosbag record -a, the rate is only about 3 fps. Even when I record a single topic it is still low, about 8 fps, and the bag file is extremely large.

I also have another problem: the depth image I get from my Kinect contains only a few distinct values, like 0, 1, 2, 3, 4, 5.

Can you help me with this?

Regards, Wei


Comments

Are you logging on the same computer? http://answers.ros.org/question/220/kinect-data-via-router-to-host-pc It looks like your hardware might be having trouble keeping up, especially if logging fewer topics improves performance. Could you provide more details on exactly what you're running?
tfoote  ( 2011-03-02 07:41:17 -0500 )
Also, please try to ask one question at a time; otherwise questions are harder to index and find later. You can edit this question to remove the second question, and then ask it independently.
tfoote  ( 2011-03-02 07:42:10 -0500 )

2 Answers

answered 2011-06-17 10:26:50 -0500 by Mac

The depth image isn't (usually) what you want; you usually want the point cloud, which has units of meters.

The slow recording is trickier to diagnose; are there known issues with USB speeds on macbooks in Linux?
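To make the "units of meters" point concrete, here is a minimal standalone sketch of the pinhole back-projection the point-cloud pipeline performs for every depth pixel. The intrinsics below (FX, FY, CX, CY) are illustrative placeholder values I am assuming, not a real Kinect calibration:

```python
# Hedged sketch: back-project one depth pixel (in meters) to a 3D point in
# the camera frame using the pinhole model. Intrinsics are assumed values.
FX, FY = 525.0, 525.0   # focal lengths in pixels (placeholder)
CX, CY = 319.5, 239.5   # principal point for a 640x480 image (placeholder)

def depth_pixel_to_point(u, v, z_m):
    """Map pixel (u, v) with metric depth z_m to a camera-frame (x, y, z)."""
    x = (u - CX) * z_m / FX
    y = (v - CY) * z_m / FY
    return (x, y, z_m)

# A pixel at the image centre with 1 m depth lands on the optical axis:
print(depth_pixel_to_point(319.5, 239.5, 1.0))  # -> (0.0, 0.0, 1.0)
```

The published point cloud is just this computation applied across the whole depth image, which is why its values come out directly in meters.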

answered 2011-07-14 09:46:34 -0500 by Poofjunior

I had the same problem. Taking the point-cloud approach is one way of doing it, but I was trying to work purely in Python, so that wouldn't work for me.

As for the Kinect appearing to round the depth values, that was easily fixed. We discovered that, when converting to OpenCV, we used the call

bridge.imgmsg_to_cv(data, "mono8")

where bridge is a cv_bridge CvBridge. However, "mono8" requests 8-bit integers, while the camera publishes 32-bit floating-point depths, so the call should be:

bridge.imgmsg_to_cv(data, "32FC1")


Stats

Asked: 2011-03-02 07:35:08 -0500

Seen: 840 times

Last updated: Jul 14 '11