Missing Images in the Vocabulary tree DocDatabase? [closed]

asked 2011-07-11 01:18:19 -0500 by NikolasEngelhard

updated 2011-07-11 01:39:19 -0500

Hello

I want to use the vt::DocDatabase for loop closing during SLAM with a Kinect. For every new image, I quantize its descriptors into visual words, query the database for the N best-matching documents, and use the number of inliers from posest to verify each match. I've noticed a strange behaviour: if I feed in a video stream, I'd expect the most recent frames to be the best-matching ones, so they should (almost) always appear among the top candidates. But I get something like:

[ INFO] [1310389286.092157966]: doc_id: 0
[ INFO] [1310389286.124423473]: doc_id: 1
[ INFO] [1310389286.157670327]: node 3 has potential neighbour: 0, #: 143
[ INFO] [1310389286.300146131]: doc_id: 2
[ INFO] [1310389286.327752705]: node 4 has potential neighbour: 0, #: 124
[ INFO] [1310389286.418847237]: node 4 has potential neighbour: 1, #: 115
[ INFO] [1310389286.503794645]: doc_id: 3
[ INFO] [1310389286.536062712]: node 5 has potential neighbour: 2, #: 116
[ INFO] [1310389286.635030183]: node 5 has potential neighbour: 0, #: 101
[ INFO] [1310389286.736272983]: node 5 has potential neighbour: 1, #: 106
[ INFO] [1310389286.831540906]: doc_id: 4
[ INFO] [1310389286.857269082]: node 6 has potential neighbour: 3, #: 125
[ INFO] [1310389286.946156944]: node 6 has potential neighbour: 2, #: 114
[ INFO] [1310389287.034078273]: node 6 has potential neighbour: 1, #: 117
[ INFO] [1310389287.125192919]: node 6 has potential neighbour: 0, #: 104
[ INFO] [1310389287.262271656]: doc_id: 5
[ INFO] [1310389287.289952757]: node 7 has potential neighbour: 2, #: 121
[ INFO] [1310389287.386291909]: node 7 has potential neighbour: 1, #: 132
[ INFO] [1310389287.475690740]: node 7 has potential neighbour: 0, #: 136
[ INFO] [1310389287.559115427]: node 7 has potential neighbour: 4, #: 122
[ INFO] [1310389287.646361971]: node 7 has potential neighbour: 3, #: 130
[ INFO] [1310389287.740120155]: doc_id: 6
[ INFO] [1310389287.778942874]: node 8 has potential neighbour: 5, #: 145
[ INFO] [1310389287.875678025]: node 8 has potential neighbour: 2, #: 102
[ INFO] [1310389287.958846919]: node 8 has potential neighbour: 0, #: 122
[ INFO] [1310389288.056358920]: node 8 has potential neighbour: 1, #: 130
[ INFO] [1310389288.148655284]: node 8 has potential neighbour: 4, #: 117
[ INFO] [1310389288.267859780]: node 8 has potential neighbour: 3, #: 107

N is set to 100, so I expect to get back all previously inserted images, sorted by matching quality. In a perfect setting, I'd expect the list 3, 2, 1, 0 for image 4. (The '#' entry is the number of inliers returned by posest.)

But the database never returns the two previous images.
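To make "never returns" concrete, this is the kind of check I could run right after each query. The helper name is made up and it assumes the vt::Matches type from the vocabulary_tree package (each match carrying an id), so treat it as a sketch:

 #include <set>
 #include <ros/console.h>
 #include <vocabulary_tree/database.h>

 // Hypothetical helper: report every previously inserted document that the
 // database did not return for the current query. Because find() runs
 // before insert(), ids 0 .. current_doc_id-1 are already in the database.
 void reportMissingDocs(const vt::Matches& matches, int current_doc_id)
 {
   std::set<int> returned;
   for (size_t m = 0; m < matches.size(); ++m)
     returned.insert((int)matches[m].id);

   for (int d = 0; d < current_doc_id; ++d)
     if (returned.count(d) == 0)
       ROS_WARN("doc %i is missing from the candidate list of doc %i", d, current_doc_id);
 }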

Has anyone experienced this behaviour, or should I look for a bug in my implementation?

It boils down to:

 // Quantize every descriptor of the new frame into a visual word.
 vt::Document words(frame.dtors.rows);
 for (int i = 0; i < frame.dtors.rows; ++i) {
   words[i] = place_db_.getVocTree()->quantize(frame.dtors.row(i));
 }

 // Query the database for the 100 best-matching documents,
 // then insert the new document so later frames can find it.
 vt::Matches matches;
 place_db_.getDocDatabase()->find(words, 100, matches);

 int doc_id = place_db_.getDocDatabase()->insert(words);
 ROS_INFO("doc_id: %i", doc_id);

 // Verify each candidate via the inlier count from posest.
 inlier = posest_.estimate(frame, frames(matches[i ...
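For what it's worth, here is the kind of minimal, self-contained test I would use to check whether find() itself ever drops recently inserted documents, independent of the rest of my pipeline. It is only a sketch: I'm assuming the vt::Database interface and header layout of the vocabulary_tree package (constructor taking the vocabulary size, insert() returning a DocId, Match carrying id and score), so the exact signatures may need adjusting.

 #include <vocabulary_tree/database.h>
 #include <stdint.h>
 #include <cstdio>

 int main()
 {
   const uint32_t num_words = 10000;      // assumed vocabulary size
   vt::Database db(num_words);

   for (uint32_t frame = 0; frame < 10; ++frame)
   {
     // 300 synthetic words per frame; consecutive frames share ~250 of
     // them, so they overlap strongly like frames of a video stream.
     vt::Document words(300);
     for (uint32_t i = 0; i < words.size(); ++i)
       words[i] = (frame * 50 + i) % num_words;

     // Query before inserting, exactly like in the loop-closure code above.
     vt::Matches matches;
     db.find(words, 100, matches);

     vt::DocId doc_id = db.insert(words);
     printf("doc_id %u: %u candidates\n", doc_id, (unsigned)matches.size());
     for (size_t m = 0; m < matches.size(); ++m)
       printf("  candidate %u, score %f\n", matches[m].id, matches[m].score);
   }
   return 0;
 }

If every earlier doc id shows up here for each query but not with real data, the problem is probably in how I build the Document per frame (e.g. the descriptor type handed to quantize) rather than in the database itself. As far as I understand the inverted-file scoring, only documents that share at least one word with the query can appear in the result at all.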

Closed for the following reason: "question is not relevant or outdated" by tfoote
close date 2012-02-21 18:37:39