Missing Images in the Vocabulary tree DocDatabase?
Hello
I want to use the vt::DocDatabase for loop closure during SLAM with a Kinect. For every new image I quantize its descriptors into words, query the database for the N best matches, and use the number of inliers returned by posest to verify each candidate. I've noticed some strange behaviour: if I insert a video stream, I'd expect the most recent frames to be the best-matching ones, so they should (almost) always appear among the top matches. But I get something like:
[ INFO] [1310389286.092157966]: doc_id: 0
[ INFO] [1310389286.124423473]: doc_id: 1
[ INFO] [1310389286.157670327]: node 3 has potential neighbour: 0, #: 143
[ INFO] [1310389286.300146131]: doc_id: 2
[ INFO] [1310389286.327752705]: node 4 has potential neighbour: 0, #: 124
[ INFO] [1310389286.418847237]: node 4 has potential neighbour: 1, #: 115
[ INFO] [1310389286.503794645]: doc_id: 3
[ INFO] [1310389286.536062712]: node 5 has potential neighbour: 2, #: 116
[ INFO] [1310389286.635030183]: node 5 has potential neighbour: 0, #: 101
[ INFO] [1310389286.736272983]: node 5 has potential neighbour: 1, #: 106
[ INFO] [1310389286.831540906]: doc_id: 4
[ INFO] [1310389286.857269082]: node 6 has potential neighbour: 3, #: 125
[ INFO] [1310389286.946156944]: node 6 has potential neighbour: 2, #: 114
[ INFO] [1310389287.034078273]: node 6 has potential neighbour: 1, #: 117
[ INFO] [1310389287.125192919]: node 6 has potential neighbour: 0, #: 104
[ INFO] [1310389287.262271656]: doc_id: 5
[ INFO] [1310389287.289952757]: node 7 has potential neighbour: 2, #: 121
[ INFO] [1310389287.386291909]: node 7 has potential neighbour: 1, #: 132
[ INFO] [1310389287.475690740]: node 7 has potential neighbour: 0, #: 136
[ INFO] [1310389287.559115427]: node 7 has potential neighbour: 4, #: 122
[ INFO] [1310389287.646361971]: node 7 has potential neighbour: 3, #: 130
[ INFO] [1310389287.740120155]: doc_id: 6
[ INFO] [1310389287.778942874]: node 8 has potential neighbour: 5, #: 145
[ INFO] [1310389287.875678025]: node 8 has potential neighbour: 2, #: 102
[ INFO] [1310389287.958846919]: node 8 has potential neighbour: 0, #: 122
[ INFO] [1310389288.056358920]: node 8 has potential neighbour: 1, #: 130
[ INFO] [1310389288.148655284]: node 8 has potential neighbour: 4, #: 117
[ INFO] [1310389288.267859780]: node 8 has potential neighbour: 3, #: 107
N is set to 100, so I expect the database to return all previously inserted images, sorted by matching quality. In a perfect setting I'd expect the list 3, 2, 1, 0 for image 4. (The '#' entry is the number of inliers returned by posest.)
But the database never returns the two immediately preceding images as its top matches.
Has anyone experienced this behaviour, or do I need to check my implementation?
It boils down to:
// Quantize each descriptor of the current frame into a visual word
vt::Document words(frame.dtors.rows);
for (int i = 0; i < frame.dtors.rows; ++i) {
    words[i] = place_db_.getVocTree()->quantize(frame.dtors.row(i));
}
// Query the database for the 100 best-matching documents,
// then insert the current document itself
vt::Matches matches;
place_db_.getDocDatabase()->find(words, 100, matches);
int doc_id = place_db_.getDocDatabase()->insert(words);
ROS_INFO("doc_id: %i", doc_id);
// Geometric verification with posest (truncated)
inlier = posest_.estimate(frame, frames(matches[i ...
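The verification step then continues roughly like the sketch below. This is a simplified reconstruction, not my exact code: I'm assuming that vt::Match exposes the matched document id and similarity score as members id and score (that's how I read the vocabulary_tree headers, but I may be wrong), that frames_ maps a doc id back to its frame, and that min_inliers is a placeholder threshold.

for (size_t i = 0; i < matches.size(); ++i) {
    // Raw similarity score from the vocabulary tree, before verification
    ROS_INFO("match %zu: doc %u, score %f",
             i, (unsigned)matches[i].id, matches[i].score);

    // posest returns the number of geometric inliers between the two frames
    int inlier = posest_.estimate(frame, frames_[matches[i].id]);
    if (inlier > min_inliers)
        ROS_INFO("potential neighbour: %u, #: %i",
                 (unsigned)matches[i].id, inlier);
}

If the raw scores already rank the most recent documents near the bottom, the problem would be in the database ranking (or in how I quantize) rather than in the posest verification.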