
Reduce Kinect Resolution

asked 2011-06-22 02:26:57 -0600 by patrick_hammer

updated 2016-10-24 09:02:16 -0600 by ngrennan

I want to reduce the resolution of the Kinect to speed up processing. For example, ignoring every other pixel would halve the resolution. Is there an easy way to do this with the current drivers?

On a similar note: is it possible to look at only one vertical or horizontal line of the image and pretend my Kinect is a laser scanner? Thanks!


3 Answers


answered 2011-06-22 03:07:24 -0600 by Chad Rockey

Yes, the resolution of the Kinect can be decreased. At the default 640x480, you get over 300,000 points per scan. That was too much data for me to process, so in my launch files I configured my Kinect like this:

<rosparam command="load" file="$(find openni_camera)/info/openni_params.yaml" />
<param name="rgb_frame_id" value="/openni_rgb_optical_frame" />
<param name="depth_frame_id" value="/openni_depth_optical_frame" />
<param name="use_indices" value="false" />
<param name="depth_registration" value="false" />
<param name="image_mode" value="8" />
<param name="depth_mode" value="8" />
<param name="debayering" value="0" />
<param name="depth_time_offset" value="0" />
<param name="image_time_offset" value="0" />

Note that here I am NOT using the RGB camera for pointclouds at all, only for the image. This is the minimum amount of information I was able to obtain from the Kinect, and it corresponds to roughly 19,200 points (160x120) per scan.

However, this was STILL too much information. I tried pointcloud_to_laserscan along with voxel_grid filters, but I wasn't very happy with the results.

The node I wrote is very similar to pointcloud_to_laserscan, but it still publishes a PointCloud2. It adjusts the number of "slices" evenly through a range, so it gives the appearance of multiple laser scans. I haven't written it as a nodelet, but feel free to try it.
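
The slicing idea can be sketched in a few lines of NumPy. This is a hypothetical standalone illustration, not the node's actual source; the function name is my own, and the parameters mirror the rosparams shown in the launch snippet further down:

```python
import numpy as np

def slice_cloud(points, field=1, num_slices=5, slice_width=0.01,
                start=-1.0, end=1.0):
    """Keep only points whose `field` coordinate (default 1 = y) falls
    inside one of `num_slices` evenly spaced bands between `start` and
    `end`, giving the appearance of several stacked laser scans."""
    centers = np.linspace(start, end, num_slices)
    vals = points[:, field]
    keep = np.zeros(len(points), dtype=bool)
    for c in centers:
        # A point survives if it lies within half a slice width of a band.
        keep |= np.abs(vals - c) <= slice_width / 2.0
    return points[keep]

# Example: only the point at y = 0.0 lies on one of the 3 band centers.
cloud = np.array([[0.0, 0.0, 1.0],
                  [0.0, 0.5, 1.0],
                  [0.0, 0.3, 1.0]])
sliced = slice_cloud(cloud, num_slices=3, slice_width=0.02,
                     start=-1.0, end=1.0)
```

In the real node this filtering would run inside a PointCloud2 callback, but the band-selection logic is the core of it.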

Videos: this node mapping; this node mapping with a full-scan comparison.

Be sure to run dynamic_reconfigure on this node; it makes tuning very easy. Here are my default values:

<node pkg="pcl_decimator" type="pcl_decimator" respawn="true" name="pcl_slicer" output="screen">
  <rosparam>
    field_name: y
    num_slices: 5
    slice_width: 0.01
    start_threshold: -1.0
    end_threshold: 1.0
  </rosparam>
</node>



Wow, just changing the launch files as mentioned above made a huge difference. I went from seeing changes in rviz every few seconds to near real time. rostopic hz /camera/rgb/points was reporting 8 or 9 Hz with the CPUs at 98%; now it is at 17 Hz with the CPUs at 60%. I'll try your node next. Thanks!
ringo42 (2011-07-03 05:42:07 -0600)
If you want to do even better, leave depth_registration off and subscribe to /camera/depth/points; it will do less work since it won't have to attach colors to each point. It won't look as cool, but it's still good for mapping/object detection.
Chad Rockey (2011-07-03 15:09:06 -0600)

answered 2011-06-22 02:47:00 -0600 by dornhege

Look at rosrun dynamic_reconfigure reconfigure_gui. You can adjust the resolution there while the driver is running; the values can also be set as parameters.

Depending on what you need, an alternative would be a downsampling nodelet. That way you would get an even distribution of points in space.
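
The idea behind voxel-grid downsampling can be sketched with NumPy. This is a simplified illustration, not the nodelet's code: a real voxel-grid filter (e.g. pcl's VoxelGrid) keeps the centroid of each occupied voxel, whereas this sketch keeps the first point seen per voxel:

```python
import numpy as np

def voxel_downsample(points, leaf=0.05):
    """Keep one point per occupied cubic voxel of side `leaf` meters,
    yielding a roughly even spatial distribution of points."""
    # Assign each point to an integer voxel coordinate.
    voxels = np.floor(points / leaf).astype(np.int64)
    # Keep the first point that landed in each distinct voxel.
    _, idx = np.unique(voxels, axis=0, return_index=True)
    return points[np.sort(idx)]

cloud = np.array([[0.00, 0.00, 1.00],
                  [0.01, 0.01, 1.01],   # same 5 cm voxel as the first point
                  [0.20, 0.00, 1.00]])  # a different voxel
down = voxel_downsample(cloud, leaf=0.05)
```

Unlike slicing, this reduces density uniformly in all three dimensions, which is why it preserves an even distribution of points in space.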

For faking a laser, have a look at this package: pointcloud_to_laserscan.


answered 2011-06-22 04:37:56 -0600

You can do this with openni_camera alone using the (undocumented) use_indices parameter. See my answer to this question for details. The advantage over pointcloud_to_laserscan-type approaches is that you cut out the unused information as early as possible in the pipeline, i.e. the driver doesn't spend CPU cycles projecting a bunch of depth values to 3D points that never get used anyway. Depending on how much you need to drop your CPU usage, that might be important (it was in my application, which is why I implemented that feature).
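
Since use_indices is undocumented, the following only illustrates the underlying idea: a "laser-scanner" subset of a 640x480 depth image is just the flat pixel indices of a narrow horizontal band, which an indices-based filter could then restrict projection to. The function name and parameters here are my own:

```python
# Flat pixel indices for a horizontal band of rows in a depth image,
# the kind of subset an indices-based filter could limit projection to.
WIDTH, HEIGHT = 640, 480

def band_indices(center_row, half_height=1, width=WIDTH):
    """Return row-major indices for rows center_row-half_height through
    center_row+half_height, spanning the full image width."""
    rows = range(center_row - half_height, center_row + half_height + 1)
    return [r * width + c for r in rows for c in range(width)]

# Three rows around the vertical center of the image: 1,920 of the
# 307,200 pixels, so the driver would project ~0.6% of the depth values.
idx = band_indices(240, half_height=1)
```
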

