Having an artificial robot in the scene
I'm trying to use a node which requires the full pose of the camera I am using, as well as the TF between the camera and the robot. Since I don't have an actual robot here, I'd like to create an artificial one in the URDF and pretend that it is there. This way I could run the code without the physical robot, and the TF would still work, I hope? I could pretend the robot is placed somewhere close to the camera, set its coordinates to (0,0,0), and then enter the full pose of the camera accordingly.
In short, 3 things are required according to the authors:
1. A 3D point-cloud provider - I have this already (a stereo camera).
2. An estimate of the 6-degree-of-freedom pose of the system. This can be given using the ROS TF tree (the default option). In this case the code will look for a transform from the world frame to the frame the above point cloud is given in. A separate ROS node is required to publish this transform.
3. The transformation from the output of 2) to the frame the point cloud in 1) is given in. If the same sensor is used for both, this can just take the default value of identity.
In order to do this, I think I need to know how to add an artificial robot base to the scene and pretend that it is there, so that I can use TF and realize the scenario given above.
Any thoughts?
Use a regular static_transform_publisher to publish just the transform to your camera frame (i.e. the one set in the frame_id of the point clouds)?

In that case, what would my camera's position and orientation be? I mean, where would my reference point be? Should I take an arbitrary point in the workspace (a corner?), measure the distances from the camera to that corner, and then give those values as the static_transform_publisher's args?
This is going to be pedantic, but:
whatever you want them to be.
wherever you'd want it to be.
You don't have a robot, but even if you did, the same questions could be asked: where would the robot's reference (ie: origin) be? That would seem to be just as arbitrary as any other point you choose.
You could even just make your camera's origin the origin by broadcasting an identity transform (i.e. 0 0 0 0 0 0 <something> <some_camera_link>).

Edit: you write:
the TF "between the camera and the robot", combined with all of the robot's transforms is the full …
If I had a robot, everything would be measured relative to the robot base. I would then find my camera's X, Y, Z by measuring the distances from the base. But now that I don't have a robot, I can simply pretend that there is one, and measure the distances from that arbitrary point (the corner of the station, for instance) to the camera, I guess. That would be my camera's position. As for the orientation, I think there is no way to find it other than to use a QR code or a calibration pattern.
Well, yes. But I wouldn't "pretend". That's not needed. There is no need for a robot whatsoever. The only thing needed is a known transform from some reference point to your camera's frames.
Yes, that would be a good way to approach this.
Note that depending on how things are set up and how you interpret them, you'd get the transform from the marker to your camera's frames. So the marker now becomes the origin.
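To make the "marker becomes the origin" idea concrete, here is a small sketch of the arithmetic in plain Python (no ROS; the pure-yaw rotation and all numbers are made up for illustration). If a detector gives you the marker's pose in the camera frame as a homogeneous transform, the camera's pose with the marker as origin is simply the inverse of that transform:

```python
import math

def make_transform(yaw, tx, ty, tz):
    """4x4 homogeneous transform: rotation about z by `yaw`, then translation."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0, tx],
            [s,  c, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def invert(T):
    """Inverse of a rigid transform: rotation R^T, translation -R^T t."""
    R = [row[:3] for row in T[:3]]
    t = [row[3] for row in T[:3]]
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    mt = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return [Rt[0] + [mt[0]], Rt[1] + [mt[1]], Rt[2] + [mt[2]],
            [0.0, 0.0, 0.0, 1.0]]

def compose(A, B):
    """Matrix product of two 4x4 transforms (chaining frames)."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Example: marker seen 1 m in front of the camera, rotated 90 deg about z.
T_cam_marker = make_transform(math.pi / 2, 0.0, 0.0, 1.0)
T_marker_cam = invert(T_cam_marker)  # camera pose, marker frame as origin
```

This is exactly what TF does under the hood when you look up a transform in the opposite direction to the one that was published.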
You're right, there is no need for a robot. I should let the marker be the reference point and express all transformations accordingly. Then what I need to do is place the marker somewhere, find its pose, and then the camera's.