
Having an artificial robot in the scene

asked 2019-06-11 03:10:28 -0500 by Jägermeister
updated 2019-06-11 03:11:10 -0500

I'm trying to use a node which requires the full pose of the camera I am using, as well as the TF between the camera and the robot. Since I don't have an actual robot here, I'd like to create an artificial one in the URDF and pretend that it is there. This way I could use the code without having the physical robot, and the TF lookups should still work, I hope. I could pretend the robot is placed somewhere close to the camera, set its coordinates to (0, 0, 0), and then enter the full pose of the camera accordingly.

In short, three things are required according to the authors:

  1. A 3D point cloud provider - I have this already (a stereo camera).
  2. An estimate of the 6-degree-of-freedom pose of the system. This can be given using the ROS tf tree (the default option). In this case the code will look for a transform from the world frame to the frame the above point cloud is given in. A separate ROS node is required to provide this transform.
  3. The transformation from the output of 2) to the frame the point cloud in 1) is given in. If the same sensor is used for both, this can just take the default value of identity.

In order to do these, I think I need to know how to add an artificial robot base to the scene and pretend that it is there, so that I can use TF and realize the scenario given above.
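
As a first step, I suppose I need to find out which frame my point clouds are actually stamped with, since that is the frame the transform has to end at. I guess this can be checked with rostopic (the topic name here is just what my stereo camera publishes; yours will differ):

    rostopic echo -n1 /stereo/points2/header/frame_id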

Any thoughts?


Comments

Any thoughts?

Use a regular static_transform_publisher to publish just the transform to your camera frame (ie: the one set in the frame_id of the pointclouds)?
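
For example (the numbers and frame names here are made up; the child frame must match your clouds' frame_id, the six numbers are x y z yaw pitch roll, and the trailing 100 is the republish period in ms):

    rosrun tf static_transform_publisher 0.2 0.0 1.0 0 0 0 world camera_optical_frame 100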

gvdhoorn  ( 2019-06-11 03:29:59 -0500 )

In that case, what would be my camera's position and orientation? I mean, where would my reference point be? Should I take an arbitrary point in the workspace (a corner?), measure the distances from the camera to that corner, and then give the values to static_transform_publisher's args?

Jägermeister  ( 2019-06-11 03:40:19 -0500 )

This is going to be pedantic, but:

what would be my camera's position and orientation?

whatever you want them to be.

I mean, where would be my reference point?

wherever you'd want it to be.

You don't have a robot, but even if you did, the same questions could be asked: where would the robot's reference (ie: origin) be? That would seem to be just as arbitrary as any other point you choose.

You could even just make your camera's origin the origin by broadcasting an identity transform (ie: 0 0 0 0 0 0 <something> <some_camera_link>).
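
Concretely, something like this (the frame names are whatever you pick; with the old tf tool the trailing 100 is the republish period in ms):

    rosrun tf static_transform_publisher 0 0 0 0 0 0 world camera_link 100

or with tf2_ros, which latches the transform so no period argument is needed:

    rosrun tf2_ros static_transform_publisher 0 0 0 0 0 0 world camera_link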


Edit: you write:

I'm trying to use a node which requires the full pose of the camera [..] as well as the TF between the camera and the robot

the TF "between the camera and the robot", combined with all of the robot's transforms is the full ...(more)

gvdhoorn  ( 2019-06-11 03:46:57 -0500 )

If I had a robot, everything would have been measured relative to the robot base. I would then find my camera's X, Y, Z by measuring the distances from the base. But now that I don't have a robot, I can simply pretend that there is one, and then measure the distances from that arbitrary point (corner of the station, for instance) to the camera, I guess. That would be my camera's position. As for the orientation, I think there is no way to find it out other than using a QR code or a calibration pattern.

Jägermeister  ( 2019-06-11 03:53:51 -0500 )

But now that I don't have a robot, I can simply pretend that there is one, and then measure the distances from that arbitrary point (corner of the station, for instance) to the camera, I guess. That would be my camera's position

well, yes. But I wouldn't "pretend". That's not needed. There is no need for a robot whatsoever. The only thing needed is a known transform from some reference point to your camera's frames.

As for the orientation, I think there is no way to find it out but to use a QR code or a calibration pattern.

Yes, that would be a good way to approach this.

Note that depending on how things are set up and how you interpret them, you'd get the transform from the marker to your camera's frames. So the marker now becomes the origin.

gvdhoorn  ( 2019-06-11 04:12:49 -0500 )

You're right, there is no need for a robot. I should let the marker be that reference point and get all transformations accordingly. Then, what I need to do is place the marker somewhere, find its pose, and then the camera's.

Jägermeister  ( 2019-06-11 04:20:04 -0500 )

1 Answer


answered 2019-07-12 05:02:37 -0500 by Jägermeister

It turns out the only thing you need in this kind of situation is a static transform publisher, which can stand in for your robot, your camera, or whatever you want it to be:

rosrun tf static_transform_publisher 0.0063345755 -0.0315891403 0.6954298219  0.7681816 0.6275346 -0.0602996 -0.1116304 small_calibration_pattern monocular_camera 1000
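
For reference, the arguments in this form are x y z, the quaternion qx qy qz qw, the parent and child frame ids, and the publish period in milliseconds. To verify the transform is actually being broadcast, tf_echo can be used:

    rosrun tf tf_echo small_calibration_pattern monocular_camera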
