
How much horsepower is needed to run the Kinect stack?

asked 2011-07-03 01:35:21 -0600 by ringo42

updated 2016-10-24 09:01:43 -0600 by ngrennan

I have the Kinect installed, but rviz is really slow: a frame every few seconds. I'm assuming my graphics card is causing it. I have a Dell Inspiron 9400 with an ATI Mobility Radeon X1400. Any idea if this is good enough? Is there any way to spec a computer/video card combo, or a way to test a computer? My PC is a dual-core 1.8 GHz with 2 GB of RAM; that is close to the specs of the netbook that comes with the TurtleBot. Could it be something else in my system causing the slowness? The CPUs run at 98% when the Kinect is running.



3 Answers


answered 2011-07-03 04:57:28 -0600 by Mac

updated 2011-07-03 04:59:28 -0600

There are two (semi-independent) issues here: rviz and the Kinect proper.

Fire up the OpenNI drivers (without rviz) and run rostopic hz /camera/rgb/points; this will tell you how quickly your machine can actually turn Kinect data into point clouds. If that number is smaller than you need (and the CPU is spiked), the answer is that your computer isn't fast enough.

How quickly rviz can visualize those clouds is a different question, and is more about your graphics card. There is substantial CPU overhead in the serialize -> transmit -> deserialize step to get the data into rviz (note that a Kinect point cloud, at frame rate, is about 300 MB/s), which could also be a problem if just getting the data is already maxing out your CPU.
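That 300 MB/s figure is easy to sanity-check. A back-of-the-envelope sketch, assuming a full 640x480 cloud at 30 Hz and 32 bytes per point (a padded XYZRGB layout, as is typical for sensor_msgs/PointCloud2; the exact padding varies by driver):

```python
# Sanity check of the ~300 MB/s bandwidth figure for a full-rate Kinect cloud.
# Assumptions: 640x480 points per cloud, 30 clouds/s, 32 bytes per point.
points_per_cloud = 640 * 480          # 307,200 points
bytes_per_point = 32                  # padded x, y, z, rgb
frames_per_second = 30

bytes_per_second = points_per_cloud * bytes_per_point * frames_per_second
print(bytes_per_second / 1e6)         # ~295 MB/s
```

So even before rviz draws a single point, roughly 300 MB/s has to be serialized, shipped over loopback, and deserialized, which is why the copy overhead dominates on a slow CPU.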

If the problem is just the GPU (rostopic hz gives you satisfactory rates and you have CPU to spare), you could apply a VoxelGrid filter to intelligently downsample your point cloud; that might help rviz keep up.
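The idea behind the filter can be sketched in a few lines: bin points into cubic voxels of a given leaf size and keep one centroid per occupied voxel. This is a simplified stand-in for pcl's VoxelGrid filter (the real one also averages extra fields such as RGB and is heavily optimized):

```python
from collections import defaultdict

def voxel_grid_filter(points, leaf_size):
    """Downsample a cloud by replacing all points that fall into the
    same cubic voxel with their centroid (simplified VoxelGrid sketch)."""
    voxels = defaultdict(list)
    for x, y, z in points:
        # Integer voxel coordinates identify the cube a point falls in.
        key = (int(x // leaf_size), int(y // leaf_size), int(z // leaf_size))
        voxels[key].append((x, y, z))
    return [
        (sum(p[0] for p in pts) / len(pts),
         sum(p[1] for p in pts) / len(pts),
         sum(p[2] for p in pts) / len(pts))
        for pts in voxels.values()
    ]

# Two nearby points collapse into one centroid; the distant point survives.
cloud = [(0.01, 0.02, 1.04), (0.03, 0.01, 1.06), (0.50, 0.50, 2.00)]
print(len(voxel_grid_filter(cloud, leaf_size=0.1)))  # 2
```

On a dense Kinect cloud, most of the 300,000 points land in already-occupied voxels, so the output rviz has to draw shrinks dramatically.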

Around the release of E-turtle, two things will happen. First, the new OpenNI drivers (currently partly available as openni_camera_unstable) will have a "record player" mode, allowing you to store the raw depth and RGB images and produce point clouds later via bag playback (meaning you can slow everything down without losing data). You can already roll your own version of this to some degree; I use something similar on my netbook-based robot. Second, the drivers will become nodelets, meaning you can do away with the serialize-transmit-deserialize overhead in your own nodes (although not with rviz).



rostopic hz /camera/rgb/points says 8 or 9 and the CPUs are at 80-90%, so I guess that means my CPUs don't have the power to handle it. So how does the netbook with the TurtleBot do it?
ringo42 (2011-07-03 05:15:39 -0600)

By not doing it at full frame rate, I assume.
Mac (2011-07-03 09:15:50 -0600)

The TurtleBot works at full frame rate, and takes advantage of nodelets to avoid serializing and deserializing data. It's also only using the b&w point cloud. For doing minimal work, one Atom core is about the minimum. In general you need to be very careful about any extra copies of the data.
tfoote (2011-07-03 11:21:26 -0600)

answered 2011-07-03 05:16:45 -0600 by Chad Rockey

updated 2011-07-03 05:19:43 -0600

We ran Octomap + Kinect + lots of other tools on an i5; I think that was 4 threads at ~3 GHz. That setup used about 0.123374032 horsepower.

The biggest change for me was just making the point cloud smaller. At full resolution you get over 300,000 points per scan; at the smallest resolution you get 19,200, which was more than enough for me. Be sure to check out the Reducing Kinect Resolution answer.
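Those counts follow directly from the image resolutions, assuming (my assumption, consistent with the numbers quoted) that full resolution is VGA and the smallest mode is QQVGA:

```python
# Point counts per scan at the assumed driver resolutions.
full = 640 * 480     # VGA: 307,200 points, the "over 300,000" above
small = 160 * 120    # QQVGA: 19,200 points
print(full, small, full // small)   # a 16x reduction
```

A 16x reduction in points means roughly 16x less data to serialize, ship, and render, before any filtering is even applied.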

As for visualization: with 300,000 points at 30 Hz, I know for sure that two GTX 460s in SLI can do it. :P However, my laptop with an Nvidia Quadro could visualize it, but it ran hot and sometimes lagged out. The most important thing is to make sure your graphics drivers are up to date, on the latest version you can find.

Finally, watch out for the VoxelGrid filter. If you set the resolution too small, it will slow you down instead of helping you out. The algorithm is O(n^3) in the grid resolution and has to do operations for each cube, so if you pick something small like 0.01 m, you'll basically be checking every millilitre (cm^3) of your Kinect's volume. I found 0.2 m worked well, and 0.1 m is about the lowest I would push it.
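The leaf-size warning is easy to quantify. Assuming, purely for illustration, a working volume of about 4 m per side (the Kinect's practical range), the number of candidate voxels grows cubically as the leaf shrinks:

```python
VOLUME_SIDE_M = 4.0   # assumed working volume per axis, for illustration only

def voxel_count(leaf_size_m):
    """Number of cubic voxels of the given leaf size in the assumed volume."""
    per_axis = VOLUME_SIDE_M / leaf_size_m
    return per_axis ** 3

print(round(voxel_count(0.2)))    # ~8,000 voxels
print(round(voxel_count(0.01)))   # ~64,000,000 voxels: 8000x the work
```

Halving the leaf size multiplies the voxel count by eight, which is why going from 0.2 m to 0.01 m turns a cheap filter into a bottleneck.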

Also, running everything as nodelets will help. I know they can be confusing, so take a look at my launch files: Main Launch and Filters Launch.


answered 2011-07-03 11:23:49 -0600 by tfoote

updated 2011-07-03 11:25:20 -0600

Processing the Kinect can be done on processors as small as Atoms or ARM chips (only the most recent generations).

The Asus Eee PC 1215N has enough CPU to do Kinect processing and run the navigation stack, and I have even seen demos of a Kinect plugged into a PandaBoard doing navigation.

Visualizing the point clouds is another topic entirely. That basically requires a discrete graphics card with proper drivers to support 3D acceleration, though it doesn't need to be top end unless you want to render many frames at once.



