
quaternion_from_euler, pan/tilt, which order?

asked 2016-11-24 05:32:29 -0500 by Hendrik Wiese, updated 2016-11-24 06:10:55 -0500

Hey everybody,

Before I turn my brain into a slimy mess, it's probably easier to admit I'm failing and ask...

I have a camera pan/tilt system whose pose I'd like to broadcast through tf. The order of the axes is: first pan (left/right, yaw), then, in the panned frame, tilt (up/down, pitch). How do I translate this into a quaternion? Is quaternion_from_euler the best way to do it, and if so, in what order do I have to pass the pan and tilt angles to get the correct quaternion back? There are too many permutations to try them all, and I'm really struggling to figure it out by pure thinking.
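For what it's worth, a minimal sketch of that ordering, assuming a REP 103 style mount frame (x forward, y left, z up) so that pan is a rotation about z and tilt a rotation about the already-panned y axis; the angle values are just placeholders:

    import math
    from tf.transformations import (quaternion_from_euler,
                                    quaternion_about_axis,
                                    quaternion_multiply)

    # Placeholder angles; in practice these come from the pan/tilt joint feedback.
    pan = math.radians(30.0)    # yaw about the mount's z axis
    tilt = math.radians(-10.0)  # pitch about the y axis of the panned frame

    # 'rzyx' = rotating (intrinsic) axes: first z (pan), then the new y (tilt), then x (unused).
    q = quaternion_from_euler(pan, tilt, 0.0, axes='rzyx')

    # The same thing spelled out as two elementary rotations, pan applied first:
    q_check = quaternion_multiply(quaternion_about_axis(pan, (0, 0, 1)),
                                  quaternion_about_axis(tilt, (0, 1, 0)))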

Thanks for your help!

//edit: oh, just to mention this: the axes of the tf frame should correspond to the image processing requirements. That is: the frame x- and y-axes should match the image x- and y-axes, and the z-axis should point away from the camera, into the image. Or in other words, if I looked through the camera, the x-axis should point to the right, the y-axis down, and the z-axis away from me.

//edit2: z-axis direction wrong (right-hand convention), sorry... corrected!

//edit3: dammit, it was correct in the first place... corrected again... I'm so confused!
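For reference, the optical convention described above (x right, y down, z forward) is a constant rotation away from a REP 103 body frame; a sketch of that fixed quaternion, assuming the parent frame has x forward, y left, z up:

    import math
    from tf.transformations import quaternion_from_euler

    # Fixed-axis RPY (-pi/2, 0, -pi/2) turns x-forward/y-left/z-up into
    # x-right/y-down/z-forward, i.e. the usual *_optical_frame convention.
    q_body_to_optical = quaternion_from_euler(-math.pi / 2.0, 0.0, -math.pi / 2.0, axes='sxyz')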


Comments

the axes of the tf frame should correspond to the image processing requirements.

This does not make sense to me, or at least: I don't understand why this should be a requirement for the orientation of the TF frames that represent the pan-tilt-unit. I'd first solve the problem of representing ..

gvdhoorn (2016-11-24 06:33:16 -0500)

.. pan-tilt pose (relative to some origin of the pan-tilt model), then insert (an) additional frame(s) to end up with the orientation requirements your vision algorithm has. Separating responsibilities like that would seem to make sense, and might make things easier to reason about.

gvdhoorn (2016-11-24 06:34:42 -0500)

This is because I'm reconstructing a point cloud from the image(s). The point cloud needs the correct tf frame orientation to be represented correctly in the world frame.

Hendrik Wiese (2016-11-24 07:21:54 -0500)
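A sketch of the separation suggested in the comments above: one tf frame for the pan/tilt pose and a fixed child frame for the optical convention. The frame names and angles are hypothetical; in practice the angles would come from the unit's joint feedback:

    #!/usr/bin/env python
    import math
    import rospy
    import tf
    from tf.transformations import quaternion_from_euler

    rospy.init_node('pan_tilt_tf_sketch')
    br = tf.TransformBroadcaster()
    rate = rospy.Rate(30)

    pan, tilt = math.radians(20.0), math.radians(-5.0)  # placeholders

    while not rospy.is_shutdown():
        now = rospy.Time.now()
        # Pan/tilt mechanism relative to its mount (REP 103 body axes).
        br.sendTransform((0.0, 0.0, 0.0),
                         quaternion_from_euler(pan, tilt, 0.0, axes='rzyx'),
                         now, 'camera_link', 'pan_tilt_base')
        # Constant rotation from the camera body into the optical convention.
        br.sendTransform((0.0, 0.0, 0.0),
                         quaternion_from_euler(-math.pi / 2.0, 0.0, -math.pi / 2.0, axes='sxyz'),
                         now, 'camera_optical_frame', 'camera_link')
        rate.sleep()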

1 Answer


answered 2016-11-24 09:53:32 -0500 by Hendrik Wiese

I've solved this puzzler, step by step. First I rotated the camera coordinate system so that it matches the camera in the 0/0 (pan/tilt) orientation according to the requirements mentioned above, and created a fixed quaternion for that rotation. Then I empirically figured out which axis I had to rotate by which angle, and multiplied those rotations into the current quaternion one after the other.

Now I have the correct orientation of the tf transform of the camera. Tough one.
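In code, the approach described here might look roughly like the sketch below; the fixed alignment quaternion and the rotation axes are assumptions based on the conventions discussed above, not the exact values used:

    import math
    from tf.transformations import (quaternion_from_euler,
                                    quaternion_about_axis,
                                    quaternion_multiply)

    # Assumed fixed quaternion aligning the camera at pan=0, tilt=0 with the
    # optical convention (x right, y down, z forward).
    q_fixed = quaternion_from_euler(-math.pi / 2.0, 0.0, -math.pi / 2.0, axes='sxyz')

    pan, tilt = math.radians(15.0), math.radians(5.0)  # placeholders
    q = quaternion_about_axis(pan, (0, 0, 1))                           # pan about the mount's z axis
    q = quaternion_multiply(q, quaternion_about_axis(tilt, (0, 1, 0)))  # tilt in the panned frame
    q = quaternion_multiply(q, q_fixed)                                 # finally, into the optical frame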



Stats

Asked: 2016-11-24 05:32:29 -0500

Seen: 886 times

Last updated: Nov 24 '16