
rreid's profile - activity

2011-12-07 02:46:37 -0500 received badge  Nice Answer (source)
2011-05-10 22:31:41 -0500 commented answer What tf information does vslam_system need?
@tom: I've been using pre-recorded image sequences with `vslam_system/src/run_stereo.cpp`.
2011-04-21 20:06:49 -0500 received badge  Necromancer (source)
2011-04-20 22:11:05 -0500 received badge  Nice Answer (source)
2011-04-17 07:29:49 -0500 received badge  Necromancer (source)
2011-04-04 04:48:43 -0500 received badge  Editor (source)
2011-04-04 03:32:03 -0500 answered a question What tf information does vslam_system need?

If you're following the tutorial, you'll be using: `roslaunch vslam_system stereo_vslam.launch`

If you look at the stereo_vslam.launch file, it has:

<node name="stereo_vslam_node"...

This is launching the binary compiled from the source vslam_system/src/nodes/stereo_vslam_node.cpp. If you check that file, you'll see that it only subscribes to these topics:

// Synchronize inputs
l_image_sub_.subscribe(it_, "left/image_rect", 1);
l_info_sub_ .subscribe(nh_, "left/camera_info", 1);
r_image_sub_.subscribe(it_, "right/image_rect", 1);
r_info_sub_ .subscribe(nh_, "right/camera_info", 1);
sync_.connectInput(l_image_sub_, l_info_sub_, r_image_sub_, r_info_sub_);

So to answer your question, there are no tf inputs needed. You may be able to get it to broadcast odometry data as tf transforms if you provide the base_footprint transform. However, that code (in the latest svn) is disabled with an `if (0)`.
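For intuition, the four-input `sync_` above only fires its callback once all four topics have a message with a matching timestamp. That exact-time policy can be sketched outside ROS as a small buffer (a hypothetical illustration only, not the real `message_filters` implementation, which is C++ and more involved):

```python
# Sketch of an exact-timestamp synchronizer over four input "topics".
# Hypothetical illustration -- not the actual message_filters code.

class ExactTimeSync:
    def __init__(self, topics, callback):
        self.topics = topics
        self.callback = callback
        # one {stamp: message} buffer per topic
        self.buffers = {t: {} for t in topics}

    def add(self, topic, stamp, msg):
        self.buffers[topic][stamp] = msg
        # fire the callback once every topic has a message at this stamp
        if all(stamp in self.buffers[t] for t in self.topics):
            self.callback(stamp, [self.buffers[t].pop(stamp) for t in self.topics])

pairs = []
sync = ExactTimeSync(
    ["left/image_rect", "left/camera_info", "right/image_rect", "right/camera_info"],
    lambda stamp, msgs: pairs.append((stamp, msgs)),
)
sync.add("left/image_rect", 1.0, "L-img")
sync.add("left/camera_info", 1.0, "L-info")
sync.add("right/image_rect", 1.0, "R-img")
sync.add("right/camera_info", 1.0, "R-info")  # completes the set for t=1.0
```

Until all four buffers hold a message for the same stamp, nothing is emitted; this is why vslam_system stalls if any one of the four topics is missing or its timestamps never line up.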

2011-03-25 22:35:53 -0500 answered a question kinect's rgb camera with camera_calibration

The intrinsic calibration for the Kinect is already available in the OpenNI driver. This answer describes where to find it.

You'll want to use the `camera/rgb/camera_info` topic, which gives a `sensor_msgs/CameraInfo` message.
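To show what the `K` matrix carried in a `sensor_msgs/CameraInfo` message is used for, here is the standard pinhole projection. Note the focal lengths and principal point below are illustrative placeholders, not the Kinect's actual calibration values:

```python
# Pinhole projection using the 3x3 intrinsic matrix K from a CameraInfo message.
# fx, fy, cx, cy below are made-up placeholder values, not real Kinect numbers.

def project(K, point):
    """Project a 3D point (camera frame, metres) to pixel coordinates."""
    x, y, z = point
    fx, cx = K[0][0], K[0][2]
    fy, cy = K[1][1], K[1][2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

K = [[500.0,   0.0, 320.0],
     [  0.0, 500.0, 240.0],
     [  0.0,   0.0,   1.0]]

u, v = project(K, (0.1, -0.2, 2.0))  # 10 cm right, 20 cm up, 2 m ahead
```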

2011-03-23 05:43:33 -0500 commented answer camera_calibration for multiple camera setup
The stereo calibration is inherited from OpenCV, and was never designed for the "general" use case. http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html ...what you're asking for is non-trivial. Unfortunately I don't have time to help at the moment, sorry.
2011-03-22 02:21:59 -0500 commented answer camera_calibration for multiple camera setup
Hi Koen, this is the "answer" box. Perhaps modify your question rather than adding an answer that poses more questions.
2011-03-22 02:20:22 -0500 answered a question camera_calibration for multiple camera setup

Hi Koen, for the extrinsic calibration of multiple cameras it is considerably more difficult to generate feature correspondences and initial poses for the optimisation.

In the stereo-vision case multiple views of a checkerboard are ideal: we can make assumptions about the relative views, e.g. that the cameras are roughly pointing in the same direction and share the same "up". After extracting the checkerboard corners it's easy to establish the feature correspondences. Given the checkerboard size we can also estimate the relative camera pose.

You'll probably need to calibrate each camera's intrinsics separately, and then investigate either 1) using some sort of fiducial marker, or 2) a rich feature descriptor (e.g. SURF) and a lot of random objects in your scene to generate the feature correspondences for bundle adjustment. This really becomes a full bundle adjustment problem. Most of the code you'll need is in this stack.

If you have a small separation between your cameras you might get away with a checkerboard pattern with one corner modified so you can uniquely identify "up" on the board. You could then generate initial relative camera pose estimates and feature correspondences from that.

Edit: The camera_pose_calibration package recently appeared on the ROS wiki. It looks like what you're after.
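To give a flavour of the pose-initialisation step mentioned above, here is a toy 2D version of recovering a relative rotation and translation from known point correspondences, in closed form (a least-squares/Procrustes sketch; real multi-camera calibration works in 3D with noisy, partial matches and then refines everything with bundle adjustment):

```python
import math

# Toy 2D rigid alignment: given correspondences q_i = R(theta) * p_i + t,
# recover theta and t in closed form. Illustration only -- far simpler than
# the 3D multi-camera problem discussed in the answer.

def rigid_align_2d(ps, qs):
    n = len(ps)
    pcx = sum(p[0] for p in ps) / n  # centroid of the first point set
    pcy = sum(p[1] for p in ps) / n
    qcx = sum(q[0] for q in qs) / n  # centroid of the second point set
    qcy = sum(q[1] for q in qs) / n
    s_cross = s_dot = 0.0
    for (px, py), (qx, qy) in zip(ps, qs):
        px, py = px - pcx, py - pcy
        qx, qy = qx - qcx, qy - qcy
        s_cross += px * qy - py * qx
        s_dot += px * qx + py * qy
    theta = math.atan2(s_cross, s_dot)       # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = qcx - (c * pcx - s * pcy)           # translation = q_c - R * p_c
    ty = qcy - (s * pcx + c * pcy)
    return theta, (tx, ty)

# Synthetic check: rotate some "checkerboard corners" by 30 deg and shift them.
theta_true = math.radians(30)
c, s = math.cos(theta_true), math.sin(theta_true)
ps = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
qs = [(c * x - s * y + 0.5, s * x + c * y - 0.25) for x, y in ps]
theta, t = rigid_align_2d(ps, qs)
```

With exact correspondences this recovers the pose exactly; the hard part in the multi-camera case is producing those correspondences in the first place.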

2011-03-21 23:17:47 -0500 received badge  Teacher (source)
2011-03-21 23:09:50 -0500 answered a question VSlam error while running tutorial.bag

This is a duplicate of this problem and answer.

2011-03-21 22:58:16 -0500 answered a question vslam tutorial CameraInfo error

The same problem occurs when doing the tutorial on Diamondback with the bag file as of 22nd March. (md5sum is f5aece448b7af00a38a993eb71400806)

To check:

rosbag check vslam_tutorial.bag
The following migrations need to occur:
 * From: sensor_msgs/CameraInfo [1b5cf7f984c229b6141ceb3a955aa18f]
   To:   sensor_msgs/CameraInfo [c9a58c1b0b154e0e6da7578cb991d214]

To fix:

rosbag fix vslam_tutorial.bag vslam_tutorial_diamondback.bag

To play the modified bag:

rosbag play --clock vslam_tutorial_diamondback.bag
2011-03-21 22:44:02 -0500 answered a question VSLAM based navigation with Kinect, where do I begin?
  1. ASUS are supposed to be releasing the Xtion PRO Developer Kit, a PrimeSense SDK-based device. It may be easier to obtain than the PrimeSense SDK. Details are sketchy for a Q2 2011 release. Note that some photos show it without the color sensor (i.e. depth only).

  2. Have you tried playing with the RGBD-6D-SLAM entry in the recent OpenNI competition?

  3. Given bandwidth limitations, you may want to decimate the point cloud. This tutorial may be a good starting point.
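On point 3, the decimation step can be sketched as a simple voxel-grid filter: bucket points into fixed-size cells and keep one averaged point per cell (an illustrative sketch, not PCL's actual `VoxelGrid` implementation):

```python
from collections import defaultdict

# Voxel-grid decimation sketch: average all points that fall in the same
# leaf-sized cell. Illustration only; PCL's VoxelGrid filter does this in C++.

def voxel_decimate(points, leaf=0.05):
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(c // leaf) for c in p)  # integer cell index per axis
        cells[key].append(p)
    out = []
    for pts in cells.values():
        n = len(pts)
        out.append(tuple(sum(c[i] for c in pts) / n for i in range(3)))
    return out

# Two points in the same 5 cm cell collapse to one; the far point survives.
cloud = [(0.01, 0.01, 0.01), (0.02, 0.02, 0.02), (0.97, 0.98, 0.99)]
reduced = voxel_decimate(cloud, leaf=0.05)
```

Larger leaf sizes trade spatial resolution for bandwidth, which is usually the right trade when streaming Kinect clouds over a network.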

2011-03-21 21:13:14 -0500 received badge  Autobiographer
2011-03-21 20:25:58 -0500 received badge  Supporter (source)