Monocular VSLAM

asked 2011-05-05 20:32:32 -0500 by M Aravindh

updated 2011-08-22 02:18:11 -0500

I am unable to get vslam_system's run_mono to work on the 3VD & Versailles Rond datasets. It fails with "Returning a small amount of good points: 0, minGoodPts = 30".

Also, after running SBA the error is on the order of 10^3:

"Running SFM
Final error: 2074.372325"

What could be going wrong? Has anyone managed to run "run_mono" on any published dataset?


Sorry, I did not post additional detail last time.
I am running Ubuntu 10.10 with ROS Diamondback. I installed the latest stable version of the vslam stack and tried to run the monocular pipeline.
The previous post was, however, from a Fedora 14 system and used the run_mono executable provided with the ROS package vslam_stack.

This time, I have compiled a small snippet of code (sketched below) that adds frames to an instance of VslamSystemMono using the addFrame function specified in the API. I initially encountered the same error as mentioned here (link).
I corrected this problem with the same fix provided yesterday (link).
The monocular pipeline is, however, unable to find good matches. The number of inliers is ~0-4, which is less than the minimum number of inliers (30). The error after running SBA is also very high (on the order of 10^3).

I used camera parameters - 1.2 1.2 0 0 1
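A simplified sketch of the snippet (the header paths, the vocabulary-file constructor arguments, and the CamParams field names here follow my reading of the run_mono sources, so treat them as assumptions):

// Sketch: feeding frames into the monocular pipeline via addFrame.
// Header paths, the vocabulary-file constructor arguments, and the
// CamParams field order (fx fy cx cy tx) are assumptions -- check
// them against your copy of vslam_system and frame_common.
#include <opencv2/highgui/highgui.hpp>
#include <frame_common/camparams.h>
#include <vslam_system/vslam_mono.h>
#include <cstdio>

int main(int argc, char** argv)
{
  // argv[1], argv[2]: vocabulary tree / weights files, as in run_mono
  vslam::VslamSystemMono vslam(argv[1], argv[2]);

  frame_common::CamParams cam;
  cam.fx = 1.2;  cam.fy = 1.2;   // the parameters above: 1.2 1.2 0 0 1
  cam.cx = 0.0;  cam.cy = 0.0;
  cam.tx = 1.0;

  for (int i = 3; i < argc; ++i)
  {
    cv::Mat img = cv::imread(argv[i], 0);  // grayscale frame
    if (!vslam.addFrame(cam, img))         // produces the output below
      printf("addFrame returned false on %s\n", argv[i]);
  }
  return 0;
}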
Posted below is the output corresponding to two calls to the addFrame function of the VslamSystemMono class.

[ INFO] [1314008351.193699104]: Added frame to Mono Vslam system
called PoseEstimator2d::estimate for frames 1 and 2
Number of points: 92, number of matches: 100
extractPnPData: frame1.pts.size() = 92
extractPnPData: good points 0
Running SFM
Final error: 33044.761933
Dumping SFM returned r
3.128635 
-0.279981 
-0.019429
Dumping SFM return T
267.161316 
3770.465820 
9.712806 
The number of epipolar inliers: 53
53 points before filtering
35 points left after filtering
sba got 35 points
Node projection matrix
1.2   0   0  -0
  0 1.2   0  -0
  0   0   1  -0
Node projection matrix
    1.18084   -0.213058  -0.0148276     320.594
  -0.213064    -1.18093 0.000851121     4524.56
  -0.012286  0.00149597   -0.999923     9.71281
Added 35 points, 35 tracks
sba.tracks.size = 35
costProj done
sba.tracks.size = 35
costProj done
Errors after sba initialization: 168.142901 896.277610
[SBAsys] Cameras: 2   Points: 35   Projections: 70  Hessian elements: 105
[SBAsys] Average 2 projections per track and 35 projections per camera
[SBAsys] Checking camera order in projections...ok
[SBAsys] Number of invalid projections: 0
[SBAsys] Number of null tracks: 0  Number of single tracks: 0   Number of double tracks: 35
[SBAsys] Number of disconnected cameras: 0
[SBAsys] Number of cameras with <5 projs: 0
[SBAsys] Number of camera connections: 0  which is 0 connections per camera and 50 percent matrix fill
[SBAsys] Min connection points: 1000000  Max connection points: 0  Average connection points: -nan
[SBAsys] Total connection projections: 0
[SBAsys] Connections with less than 5 points: 0   10 pts: 0   20 pts: 0   50 pts: 0
sba pointer: 0xbfc1d35c

Removed 22 projections > 2 pixels ...

Comments

You need to include more information for us to be able to help you. Please see the guidelines for asking questions. http://www.ros.org/wiki/Support#Guidelines_for_asking_a_question
tfoote (2011-06-17 11:34:19 -0500)

3 Answers


answered 2011-09-02 11:03:48 -0500 by Helen

vslam is unsupported experimental research code, and more specifically, the monocular pose estimation only works for specific types of camera motion (i.e., it will not work on a dataset with unconstrained motion).

Unfortunately, there are no current plans to improve the functionality of monocular vslam. However, patches are welcome.


answered 2011-09-08 03:24:22 -0500 by RobotRocks

Hi, I think I've found a solution.

The problem apparently is in the initialization process. Landmark initialization is delayed (we have to know the first landmarks' 3D positions, including depth, with sufficient accuracy before they can be added to the map), so we need to acquire two frames with a specific motion and distance between them. Try to move the camera in pure translation parallel to the image plane, make sure there are enough landmarks, and make the distance between the matched keypoints of the two frames sufficiently large. For example, if the detected landmarks are far from the camera, make the movement between the frames longer and faster in order to achieve a better depth computation. I'm still getting a bad image match, but the program proceeds to calculate PnP in the next steps (you can see that in the program output further down), so I'm assuming that the initialization is done.
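The intuition behind the baseline advice, as a back-of-the-envelope sketch (plain pinhole geometry, not vslam code; the focal length and depth are made-up numbers): with pure sideways translation the two frames act like a stereo pair with baseline b, so a landmark at depth z yields a disparity of about f*b/z pixels, and each pixel of matching error corrupts the recovered depth by roughly z^2/(f*b).

// Back-of-the-envelope: why distant landmarks need a longer baseline.
// Pure pinhole stereo geometry -- not part of the vslam code.
#include <cstdio>

int main()
{
  const double f  = 500.0;  // focal length in pixels (assumed)
  const double z  = 20.0;   // landmark depth in meters (assumed)
  const double dd = 1.0;    // ~1 px of matching error

  for (double b = 0.1; b <= 1.6; b *= 2)  // baseline in meters
  {
    double d  = f * b / z;                // expected disparity (px)
    double dz = (z * z) / (f * b) * dd;   // depth error per px of error
    printf("baseline %.1f m -> disparity %.1f px, depth error ~%.2f m\n",
           b, d, dz);
  }
  return 0;
}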

Removed 301 projections > 2 pixels error
trans = 5.323930 -3.030132 0.601436
229 points left after sba
finished pose estimation
Dumping tvec in SetPose
-0.691666 
0.722190 
-0.006294 
Dumping rvec in SetPose
0.061715 
0.068404 
-0.289008 
minz = 5.583058, maxz = 193.049911
Frame1: mean z = 19.460421, std z = 12.736793
Frame2: mean z = 19.646008, std z = 12.887172
Input number of matches 964, inliers 39
goodCount 39
filtered inliers: 5
_inliers.size() = 0
[Stereo VO] Inliers: 39  Nodes: 13   Points: 31
[SetupSys] 10 disconnected nodes
[SetupSys] 10 disconnected nodes
[SetupSys] 10 disconnected nodes
[SetupSys] 10 disconnected nodes

******** Bad image match: 

[ INFO] [1315480509.979110334]: In callback, seq = 1092
called PoseEstimator2d::estimate for frames 12 and 13
Number of points: 1331, number of matches: 1035
extractPnPData: frame1.pts.size() = 1331
extractPnPData: good points 35
Running PnP
finished pose estimation
Dumping tvec in SetPose
-0.736821 
0.210926 
-0.235437 
Dumping rvec in SetPose
0.015324 
0.089048 
-0.191074

After that, I'm having more problems that I'm trying to solve now. I'm posting one here in case anyone wants to help:

mono_vslam_node: /home/XXXX/ROSPackages/slam/vslam_system/trunk/posest/src/pe2d.cpp:319: virtual int pe::PoseEstimator2d::estimate(const frame_common::Frame&, const frame_common::Frame&, const std::vector<cv::DMatch, std::allocator<cv::DMatch> >&): Assertion `frame1.pts.size() == frame1.goodPts.size() == frame1.kpts.size()' failed.

Comments

Regarding the assertion error: I was initially getting this assertion too. The chained use of "==" is the cause (see the sketch below). Just replace the assertion with: assert(frame1.pts.size() == frame1.goodPts.size()); assert(frame1.goodPts.size() == frame1.kpts.size());
M Aravindh (2011-09-10 23:21:46 -0500)
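For anyone wondering why the original assertion compiles but misfires: == is left-associative in C++, so a == b == c parses as (a == b) == c, comparing the boolean result of the first comparison (0 or 1) against c. A self-contained illustration:

// Why assert(a == b == c) is wrong in C++: == is left-associative,
// so the expression parses as (a == b) == c -- a bool (0 or 1)
// compared against c.
#include <cassert>
#include <cstddef>

int main()
{
  std::size_t pts = 92, goodPts = 92, kpts = 92;

  // (pts == goodPts) is true -> promotes to 1, and 1 != 92, so the
  // chained form would fire even though all three sizes are equal:
  // assert(pts == goodPts == kpts);

  // The fix from the comment above: two separate assertions.
  assert(pts == goodPts);
  assert(goodPts == kpts);
  return 0;
}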
Thank you Aravindh, it works. Now I will try to fix the problems with the bad image matches and the bad quaternions with NaNs :p One question: do you get the frame movement and the point cloud in rviz? (I can only see the first frame; I think this is caused by the bad matching.)
RobotRocks (2011-09-12 22:26:05 -0500)
I too see only the first frame, and no points in the point cloud either.
M Aravindh (2011-09-13 18:49:06 -0500)

answered 2011-09-02 10:29:16 -0500 by Mac

Can you get this working on very small datasets? VSLAM can be very sensitive to camera speed; try taking some data of your own and moving the camera very, very slowly. (I'm not familiar with the datasets you mention, so it may be that they require more robustness than VSLAM provides.)


Comments

I tried it off the integrated camera on my laptop. The motion consisted of bumpy translation along a corridor - I was carrying the laptop by hand. I recall that the usb_cam node was not publishing proper camera info. Could this be a problem?
M Aravindh (2011-09-10 23:28:02 -0500)
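If you suspect usb_cam is publishing an empty calibration, a quick check is to dump the intrinsics from the camera_info topic (a minimal roscpp sketch; the topic name /usb_cam/camera_info is an assumption, adjust it to whatever your driver advertises):

// Quick check that the camera driver publishes a real calibration:
// an all-zero K matrix means vslam has no usable intrinsics.
// The topic name is an assumption -- adjust to your driver's.
#include <ros/ros.h>
#include <sensor_msgs/CameraInfo.h>

void infoCb(const sensor_msgs::CameraInfo::ConstPtr& msg)
{
  // K = [fx 0 cx; 0 fy cy; 0 0 1], stored row-major
  ROS_INFO("fx=%f fy=%f cx=%f cy=%f",
           msg->K[0], msg->K[4], msg->K[2], msg->K[5]);
  if (msg->K[0] == 0.0)
    ROS_WARN("K is all zeros: the camera is uncalibrated");
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "camera_info_check");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/usb_cam/camera_info", 1, infoCb);
  ros::spin();
  return 0;
}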
