2011-12-07 02:46:37 -0500 | received badge | ● Nice Answer (source) |
2011-05-10 22:31:41 -0500 | commented answer | What tf information does vslam_system need? @tom: I've been using pre-recorded image sequences with `vslam_system/src/run_stereo.cpp`. |
2011-04-21 20:06:49 -0500 | received badge | ● Necromancer (source) |
2011-04-20 22:11:05 -0500 | received badge | ● Nice Answer (source) |
2011-04-17 07:29:49 -0500 | received badge | ● Necromancer (source) |
2011-04-04 04:48:43 -0500 | received badge | ● Editor (source) |
2011-04-04 03:32:03 -0500 | answered a question | What tf information does vslam_system need? If you're following the tutorial then you'll be using:
If you look at the launch file, this is launching the binary compiled from the source. So to answer your question, there are no tf inputs needed. You may be able to get it to broadcast odometry data as tf's if you provide the |
2011-03-25 22:35:53 -0500 | answered a question | kinect's rgb camera with camera_calibration The intrinsic calibration for the Kinect is already available in the Openni driver. This answer describes where to find it. You'll want to use the 'camera/rgb/camera_info' topic, which gives a sensor_msgs/CameraInfo message. |
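The `camera/rgb/camera_info` topic mentioned above carries a sensor_msgs/CameraInfo message, whose camera matrix K is published as a row-major 3x3 matrix flattened to nine floats. A minimal sketch of pulling the intrinsics out of that field (the function name and the sample values are illustrative; the numbers are the commonly cited OpenNI defaults for the Kinect RGB camera, not values read from any particular device):

```python
# Sketch: extracting intrinsics from a sensor_msgs/CameraInfo-style K field.
# K is row-major: [fx, 0, cx,  0, fy, cy,  0, 0, 1]
def intrinsics_from_k(K):
    """Return (fx, fy, cx, cy) from a flattened 3x3 camera matrix."""
    fx, cx = K[0], K[2]
    fy, cy = K[4], K[5]
    return fx, fy, cx, cy

# Example with the often-quoted OpenNI defaults (assumed, not measured):
fx, fy, cx, cy = intrinsics_from_k(
    [525.0, 0.0, 319.5,
     0.0, 525.0, 239.5,
     0.0, 0.0, 1.0])
print(fx, fy, cx, cy)  # → 525.0 525.0 319.5 239.5
```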
2011-03-23 05:43:33 -0500 | commented answer | camera_calibration for multiple camera setup The stereo calibration is inherited from OpenCV, and was never designed for the "general" use case. http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html ...what you're asking for is non-trivial. Unfortunately I don't have time to help at the moment, sorry. |
2011-03-22 02:21:59 -0500 | commented answer | camera_calibration for multiple camera setup Hi Koen, this is the "answer" box. Perhaps modify your question rather than adding an answer that poses more questions. |
2011-03-22 02:20:22 -0500 | answered a question | camera_calibration for multiple camera setup Hi Koen, for the extrinsic calibration of multiple cameras it is considerably more difficult to generate feature correspondences and initial poses for the optimisation. In the stereo-vision case multiple views of a checkerboard are ideal: we can make assumptions about the relative views, e.g. the cameras are roughly pointing in the same direction and share the same "up". After extracting the checkerboard corners it's easy to establish the feature correspondences, and given the checkerboard size we can also estimate the relative camera pose. You'll probably need to calibrate each camera's intrinsics separately, and then investigate either 1) using some sort of fiducial marker, or 2) a rich feature descriptor (e.g. SURF) and a lot of random objects in your scene to generate the feature correspondences for bundle adjustment. This really becomes a full bundle adjustment problem. Most of the code you'll need is in this stack. If you have a small separation between your cameras you might get away with a checkerboard pattern with one corner modified so you can uniquely identify "up" on the board. You could then generate initial relative camera pose estimates and feature correspondences from that. Edit: The camera_pose_calibration package recently appeared on the ROS wiki. It looks like what you're after. |
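The "estimate the relative camera pose" step above is just the composition of each camera's view of the same checkerboard. A minimal numpy sketch under that assumption (the function names are illustrative, not from any ROS package; the board poses would in practice come from a PnP solve on the detected corners):

```python
import numpy as np

def pose_to_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_pose(T_board_in_cam1, T_board_in_cam2):
    """Transform taking points in camera 1's frame to camera 2's frame,
    given each camera's pose of the same checkerboard."""
    return T_board_in_cam2 @ np.linalg.inv(T_board_in_cam1)

# Toy example: board 1 m in front of camera 1; camera 2 sits 0.1 m to
# camera 1's right with the same orientation, so it sees the board at x = -0.1.
T1 = pose_to_T(np.eye(3), [0.0, 0.0, 1.0])
T2 = pose_to_T(np.eye(3), [-0.1, 0.0, 1.0])
T_1_to_2 = relative_pose(T1, T2)
print(T_1_to_2[:3, 3])  # → [-0.1  0.   0. ]
```

A pose recovered this way is only good enough to seed the bundle adjustment; the optimisation then refines it against all the feature correspondences.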
2011-03-21 23:17:47 -0500 | received badge | ● Teacher (source) |
2011-03-21 23:09:50 -0500 | answered a question | VSlam error while running tutorial.bag This is a duplicate of this problem and answer. |
2011-03-21 22:58:16 -0500 | answered a question | vslam tutorial CameraInfo error The same problem occurs doing the tutorial on Diamondback with the bag file as of 22nd March (md5sum is f5aece448b7af00a38a993eb71400806). To check: To fix: To play the modified bag: |
2011-03-21 22:44:02 -0500 | answered a question | VSLAM based navigation with Kinect, where do I begin? |
2011-03-21 21:13:14 -0500 | received badge | ● Autobiographer |
2011-03-21 20:25:58 -0500 | received badge | ● Supporter (source) |