Visual Odometry and Mapping in Google Tango!
Hi,
I am interested to know what algorithms are used for visual odometry, i.e. the magic behind the Google Tango project.
What is the algorithm behind pose estimation/visual odometry?
Do they have a loop-closure mechanism? If so, what method is behind it?
In the video they show a reconstruction of the stairs of a multi-storey building; that's awesome. How are they able to reduce the error/drift accumulation in their visual odometry pipeline?
If they have a loop-closure method, how are they able to reject the false loop closures that may occur between similar-looking places in the multi-storey case? Is it by checking only places in an area close to their current pose?
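For what it's worth, the gating idea suggested above (only accepting loop-closure candidates near the current pose estimate) is a common heuristic in SLAM systems. Here is a minimal illustrative sketch of it; the function name, threshold, and data layout are my own assumptions, not anything known about Tango's actual pipeline:

```python
import math

def gate_loop_closures(current_pose, candidates, max_dist=5.0):
    """Keep only candidates whose stored pose lies within max_dist metres
    of the current pose estimate. Including the height (z) means two
    visually identical stairwells on different floors are still rejected."""
    cx, cy, cz = current_pose
    accepted = []
    for place_id, (x, y, z) in candidates:
        if math.dist((cx, cy, cz), (x, y, z)) <= max_dist:
            accepted.append(place_id)
    return accepted

# Two visually similar stairwells, one on the current floor and one 3 m above:
candidates = [("stairs_floor1", (1.0, 0.5, 0.0)),
              ("stairs_floor2", (1.0, 0.5, 3.0))]
print(gate_loop_closures((0.0, 0.0, 0.0), candidates, max_dist=2.0))
```

The catch, of course, is that the gate depends on the drifting pose estimate itself, so real systems combine it with appearance verification rather than relying on it alone.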
Do they use other sensors like an IMU? If yes, how are they able to couple that sensor to the visual odometry pipeline?
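As background for the IMU question: a common *loosely coupled* approach fuses the fast-but-drifting IMU prediction with the slower, drift-free visual estimate, e.g. via a complementary filter. The sketch below shows this on a single heading angle; all names and constants are illustrative assumptions, and Tango's actual (unpublished) fusion is likely far more sophisticated (e.g. an EKF over the full state):

```python
def fuse_heading(prev_heading, gyro_rate, dt, vo_heading, alpha=0.98):
    """Complementary filter step: propagate the heading with the gyro
    (high-rate, drifts over time), then pull the result toward the
    visual-odometry heading (low-rate, but does not drift)."""
    predicted = prev_heading + gyro_rate * dt          # IMU integration
    return alpha * predicted + (1.0 - alpha) * vo_heading  # VO correction

heading = 0.0
for _ in range(100):
    # Gyro reports 0.1 rad/s; VO keeps observing the true heading of 1.0 rad.
    heading = fuse_heading(heading, 0.1, 0.01, 1.0)
print(round(heading, 3))  # converges toward the VO heading
```

Tightly coupled schemes instead feed raw IMU measurements into the same estimator as the visual features, which is harder to implement but more accurate.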
Can it work both indoors and outdoors? What is the range up to which depth reconstruction can take place?
Can it work in a completely dark environment without any illumination?
Thank you so much for your time...
@Dirk Thomas @tfoote sorry to disturb you, but I guess you might know the answers.
I'm pretty sure the answers to most of these questions are not yet public, and I suspect the people responsible have seen this question and chosen not to respond. You may be able to answer a few of these for yourself by watching the demo videos closely.