That processor is not likely to run any existing 3D or 2D SLAM implementation at any reasonable framerate. In fact, I'd guess it would struggle just running the openni driver and pulling in pointcloud data from the Xtion, but that would be up to you to verify.

If this is running on a robot, you may have some luck grabbing the pointcloud, or even just the depth and rgb image streams, on this processor and doing the SLAM processing remotely on a beefier PC. You could stream the depth and rgb images (compressed) over the network to a desktop/workstation and run the rest of the openni processing stack (generating the pointcloud) and the SLAM node there.
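As a rough sketch of that split (assuming the Xtion is driven by openni2_launch, the default /camera/... topic names, and that image_transport_plugins is installed on both machines; adjust all of this to your setup):

    # On the robot: run only the camera driver, publishing raw depth/rgb images.
    # The compressed/compressedDepth transports are provided by image_transport_plugins.
    roslaunch openni2_launch openni2.launch depth_registration:=true

    # On the desktop/workstation: point at the robot's ROS master
    # (replace <robot> with the robot's hostname or IP).
    export ROS_MASTER_URI=http://<robot>:11311

    # Pull the streams over the network via the compressed transports and
    # republish them locally as raw images for downstream nodes.
    rosrun image_transport republish compressed \
        in:=/camera/rgb/image_raw raw out:=/camera/rgb/image_raw_relay
    rosrun image_transport republish compressedDepth \
        in:=/camera/depth_registered/image_raw raw out:=/camera/depth_registered/image_raw_relay

Pointcloud generation (e.g. depth_image_proc) and whichever SLAM package you pick would then subscribe to the *_relay topics on the workstation, so the robot's processor only has to run the driver and the compression.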