
The easiest way is two static transform publishers in a launch file (http://wiki.ros.org/tf#static_transform_publisher):

<node pkg="tf" type="static_transform_publisher"
    name="camera1_broadcaster" 
    args="x1 y1 z1 yaw1 pitch1 roll1 base_link camera1_link 100" />

<node pkg="tf" type="static_transform_publisher" 
    name="camera2_broadcaster" 
    args="x2 y2 z2 yaw2 pitch2 roll2 base_link camera2_link 100" />

Replace x1 y1 etc. with the translation and rotation values you've determined for each camera relative to the base_link frame on your system (which can be anywhere you like). If you want to tweak those values, kill the roslaunch that is running the broadcasters, edit them, and relaunch. They could also be replaced with roslaunch args and passed in from the command line to make that relaunch cycle a little more convenient.
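If you want to expose those numbers as roslaunch args so they can be overridden from the command line, it can look like this sketch (the arg names and the package/launch file names in the comment are just placeholders):

```xml
<!-- camera1 pose, overridable from the command line, e.g.
     roslaunch my_cameras cameras.launch cam1_x:=0.1 cam1_yaw:=1.57 -->
<arg name="cam1_x" default="0.0" />
<arg name="cam1_y" default="0.0" />
<arg name="cam1_z" default="0.0" />
<arg name="cam1_yaw" default="0.0" />
<arg name="cam1_pitch" default="0.0" />
<arg name="cam1_roll" default="0.0" />

<node pkg="tf" type="static_transform_publisher"
    name="camera1_broadcaster"
    args="$(arg cam1_x) $(arg cam1_y) $(arg cam1_z)
          $(arg cam1_yaw) $(arg cam1_pitch) $(arg cam1_roll)
          base_link camera1_link 100" />
```

The same pattern repeats for camera2 with its own set of args.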

And camera1 would be set up to use camera1_link in the Image header frame_id, and likewise for camera2.
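How you set that frame_id depends on the camera driver; many drivers expose it as a private parameter. For example, with usb_cam it would look roughly like this (the exact parameter name varies per driver, so check its wiki page):

```xml
<node pkg="usb_cam" type="usb_cam_node" name="camera1">
    <param name="camera_frame_id" value="camera1_link" />
</node>
```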

Later you might want to replace this with a urdf/xacro-derived robot_description that creates these two frames along with the other parts of your system they are attached to.
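A urdf version of the same two frames is just a pair of fixed joints off base_link; a minimal sketch (robot name, link geometry, and origin values are placeholders, and robot_state_publisher would then publish these as tf for you):

```xml
<robot name="my_robot" xmlns:xacro="http://www.ros.org/wiki/xacro">
  <link name="base_link" />
  <link name="camera1_link" />
  <link name="camera2_link" />

  <joint name="camera1_joint" type="fixed">
    <parent link="base_link" />
    <child link="camera1_link" />
    <origin xyz="0 0 0" rpy="0 0 0" />
  </joint>

  <joint name="camera2_joint" type="fixed">
    <parent link="base_link" />
    <child link="camera2_link" />
    <origin xyz="0 0 0" rpy="0 0 0" />
  </joint>
</robot>
```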

Or later maybe you have a live calibration process that determines where camera1 is relative to camera2, and you want to be able to change the transforms live, with no disruption from editing values and killing and relaunching nodes. In that case you'd probably set up a python node with a service or topic it listens to for the transform values, plus a transform broadcaster using the python tf api, but that is the least easy option here.
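One wrinkle with the python tf route: the broadcaster's sendTransform wants a quaternion, while static_transform_publisher takes yaw/pitch/roll on the command line. tf.transformations.quaternion_from_euler does the conversion for you inside a node; as a standalone sketch (assuming the usual ZYX yaw-pitch-roll convention), the math amounts to:

```python
import math

def quaternion_from_yaw_pitch_roll(yaw, pitch, roll):
    """Convert ZYX (yaw-pitch-roll) Euler angles to a quaternion (x, y, z, w)."""
    cy, sy = math.cos(yaw * 0.5), math.sin(yaw * 0.5)
    cp, sp = math.cos(pitch * 0.5), math.sin(pitch * 0.5)
    cr, sr = math.cos(roll * 0.5), math.sin(roll * 0.5)
    x = sr * cp * cy - cr * sp * sy
    y = cr * sp * cy + sr * cp * sy
    z = cr * cp * sy - sr * sp * cy
    w = cr * cp * cy + sr * sp * sy
    return (x, y, z, w)

# A 90 degree yaw about z:
q = quaternion_from_yaw_pitch_roll(math.pi / 2, 0.0, 0.0)
```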
