asked 2012-01-23 08:28:11 -0500 by Alper Aydemir
I'm trying to compress a depth-map stream from the Kinect sensor. The requirement is that, at 10 fps, the uplink should be approximately 1.5 Mbit/s, which is what most ordinary people have at home (granted, it depends on the location). I've tried plain bz2'ing each depth map (~50 KB each), and encoding the depth map as a smooth gradient in RGB space and running plain video compression on it. With ffmpeg's lossless compression it's still prohibitively big, but it seemed to get there with lossy compression. Of course, it depends on how lossy.
Has anyone looked into this? Note that I'm not handling point clouds or tracking the camera position. Has anyone experimented with lossy compression rates?
Are there other ways of doing this? The language of choice is C++.
If you're using the OpenNI driver included with ROS and have image_transport_plugins installed, it will automatically publish JPEG/PNG-compressed streams, as well as Theora, all of which you can customize using dynamic_reconfigure.
I'm not doing this via ROS; think of it as a separate library. The PNG compression does much better than raw, of course, but it's not enough for the requirements I posted. One thing it misses is the temporal redundancy between consecutive frames.
This seems to be an open research question: there are recent papers on it that compare against JPEG 2000. It seems the only option is to implement those and see.
Last updated: Jan 24 '12