
viso2 mono_odometer data problem

asked 2013-06-16 10:24:31 -0500 by Jia

I am using the viso2 mono_odometer. I am fairly sure the packages are set up correctly, because rxgraph shows the topics connected properly. The problem is that the position reported by mono_odometer is always around zero (values on the order of 1e-300), no matter how I move my computer. By the way, I am testing with my laptop's integrated camera via gscam. Thanks in advance for any ideas.

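For reference, a minimal roslaunch sketch of the kind of setup described in the question (gscam grabbing the integrated laptop camera, mono_odometer consuming the image stream) could look like the following. The GStreamer pipeline, namespace, and topic names are assumptions for illustration, not the asker's actual configuration; verify the real topic names with rostopic list. Note that, as the discussion below points out, mono_odometer actually needs rectified images; this sketch only mirrors the wiring described in the question.

    <launch>
      <!-- Sketch only: pipeline string, namespace, and topic names are assumptions. -->

      <!-- gscam grabs the integrated webcam via GStreamer and publishes the raw
           image plus camera_info (it needs a valid camera calibration to be useful). -->
      <node pkg="gscam" type="gscam" name="gscam" ns="camera">
        <env name="GSCAM_CONFIG"
             value="v4l2src device=/dev/video0 ! ffmpegcolorspace"/>
      </node>

      <!-- mono_odometer subscribes to an image topic (remapped here) and the
           camera_info published next to it. -->
      <node pkg="viso2_ros" type="mono_odometer" name="mono_odometer">
        <remap from="image" to="/camera/image_raw"/>
      </node>
    </launch>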

1 Answer


answered 2013-06-17 06:21:22 -0500 by Stephan

Jia,

please read the limitations section on this wiki page. To make the viso2 mono_odometer work, you have to move the camera parallel to the ground and make sure there are features on the ground as well as above it. If you want a "freehand" odometry system, viso2 is not what you are looking for, as it is designed for wide-angle cameras rigidly mounted on cars. For a small-scale odometry system I would always recommend an RGBD sensor (Microsoft Kinect, Asus Xtion, or similar) together with fovis_ros or ccny_rgbd.

If you want to stick with mono, maybe PTAM suits your needs better? It does not give you metric results (the scale factor of the motion is unknown), but it should handle arbitrary motions better.
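Because mono_odometer recovers the absolute scale from the ground plane, it also needs to know roughly how high the camera is above the ground and how it is pitched. Below is a minimal sketch, assuming the ~camera_height and ~camera_pitch parameters documented on the viso2_ros wiki; check that page for the exact names, defaults, and units before relying on them.

    <node pkg="viso2_ros" type="mono_odometer" name="mono_odometer">
      <!-- Height of the camera above the ground plane, in meters (assumed name). -->
      <param name="camera_height" value="1.0"/>
      <!-- Pitch of the camera towards the ground, in radians (assumed name). -->
      <param name="camera_pitch" value="0.0"/>
    </node>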


Comments

Yeah, I read the limitations. I expected it to give me some results, not just zeros. Even with those limitations it should still detect some matches; I checked the number of matches and there are enough detected points. Am I missing a step?

Jia (2013-06-17 07:53:05 -0500)

Are you sure there are enough matches on the ground to estimate the ground plane? Are you doing translational movement? (Almost) pure rotations will not work.

Stephan (2013-06-17 08:40:54 -0500)

I printed out the number of matches in the source code and it looks fine. I tried translational movement, but as I said, the position stays around zero and the data looks more like noise. My actual goal is to get the velocity of the camera. I also just tried PTAM, and I am having trouble calibrating it.

Jia (2013-06-17 09:29:44 -0500)

Another thing: in the tutorial the frame relationships are world -> odom -> base_link -> camera, but I don't specify the world frame. Is that okay? Also, when I run tf_echo I find no transformation between any of the frames, and odom has no parent frame. Any suggestions about that?

Jia (2013-06-17 09:54:29 -0500)

If you have no base_link, just set the ~base_link_frame_id parameter of the odometer to the frame_id that is in your image messages. You should then see tf transforms published from /odom to that same frame_id. Is your camera well calibrated? You have to use rectified images as input for mono_odometer.

Stephan (2013-06-17 13:59:20 -0500)
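Putting the last two comments together, a sketch of a launch file that rectifies the camera images with image_proc and feeds the rectified topic to mono_odometer, with ~base_link_frame_id set to the camera's frame_id, might look like this. The namespace, topic names, and the /camera frame_id are assumptions; they have to match what is actually in your image message headers (check with rostopic echo).

    <launch>
      <!-- Sketch only: namespaces, topic names, and frame ids are assumptions. -->

      <!-- image_proc runs in the camera namespace and turns image_raw plus
           camera_info into image_rect (this requires a calibrated camera). -->
      <node pkg="image_proc" type="image_proc" name="image_proc" ns="camera"/>

      <node pkg="viso2_ros" type="mono_odometer" name="mono_odometer">
        <!-- Feed the rectified image to the odometer. -->
        <remap from="image" to="/camera/image_rect"/>
        <!-- Use the frame_id from the image headers as the base link, so tf
             gets published from /odom to that frame. -->
        <param name="base_link_frame_id" value="/camera"/>
      </node>
    </launch>

If the tf tree still looks disconnected in tf_echo after this, checking that the frame_id in the image headers exactly matches the value given to ~base_link_frame_id is usually the first thing to verify.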
