
How to use calibration result of easy_handeye

asked 2022-10-17 09:28:24 -0500 by akumar3.1428

updated 2022-10-30 09:36:54 -0500 by lucasw

I have calibrated my camera using easy_handeye and am using publish.launch to publish the calibration result. However, I would like to know how to use this result in my own code, where I want to pick and place objects with a robotic arm.

Here is the list of publishers publishing the hand-eye calibration result, and the subscribers consuming it:

$ rostopic info /tf_static 
Type: tf2_msgs/TFMessage

Publishers: 
 * /xarm_realsense_handeyecalibration_eye_on_hand/handeye_publisher_akumar_ThinkPad_P17_Gen_2i_114376_5000672119570238070 (http://akumar-ThinkPad-P17-Gen-2i:40065/)
 * /robot_state_publisher (http://akumar-ThinkPad-P17-Gen-2i:34185/)
 * /camera/realsense2_camera_manager (http://akumar-ThinkPad-P17-Gen-2i:39165/)
 * /xarm_realsense_handeyecalibration_eye_on_hand/handeye_publisher_akumar_ThinkPad_P17_Gen_2i_114681_6997610453558885147 (http://akumar-ThinkPad-P17-Gen-2i:38563/)

Subscribers: 
 * /move_group (http://akumar-ThinkPad-P17-Gen-2i:44351/)
 * /xarm_move_group_planner (http://akumar-ThinkPad-P17-Gen-2i:38529/)
 * /rviz_akumar_ThinkPad_P17_Gen_2i_114681_6050782860401126934 (http://akumar-ThinkPad-P17-Gen-2i:36855/)

Looking at the output above, it seems that xarm_move_group_planner or move_group uses the calibration result internally, so I should not need additional publishers or subscribers to transform the data. Please correct me if I am wrong.
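For context, what the hand-eye calibration gives is a single fixed transform (camera w.r.t. robot base). A minimal sketch in plain numpy (frame names and numbers here are made up for illustration, not from the calibration above) of how that transform composes with a marker pose detected in the camera frame:

```python
import numpy as np

def make_tf(R, t):
    """Build a 4x4 homogeneous transform from rotation matrix R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# base <- camera: the fixed hand-eye calibration result (illustrative numbers)
T_base_cam = make_tf(np.eye(3), [0.4, 0.0, 0.6])

# camera <- marker: a pose reported by a marker detector (illustrative numbers)
T_cam_marker = make_tf(np.eye(3), [0.0, 0.1, 0.5])

# base <- marker: the pose a motion planner actually needs as a goal
T_base_marker = T_base_cam @ T_cam_marker
```

This composition is exactly what tf does for you behind the scenes once the calibration is broadcast on /tf_static, which is why a lookup between the base and marker frames is usually all that is needed.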


Comments

How can I use this calibration result for my code, where I want to pick and place objects using a robotic arm?

I strongly recommend going through the MoveIt Tutorials. Once you get the idea, you can ask a specific question.

ravijoshi  ( 2022-10-18 07:23:51 -0500 )

I would like to use the calibrated result (camera on base) with the MoveIt Python API. How can I do that? Please let me know if you want more detail. I have followed a few tutorials; to be more specific, I have calibrated an eye-on-base setup and got the YAML file. Do I have to write any code, or do I just need to publish the data (https://github.com/IFL-CAMP/easy...) and the MoveIt API will itself use it to do the required transform?

akumar3.1428  ( 2022-10-19 08:45:16 -0500 )

1 Answer


answered 2022-11-03 12:44:28 -0500


This issue has been resolved by using tf.TransformListener() and publishing the transformed data in the base frame. Example code:

        # 'listener' is a tf.TransformListener and 'marker_wrt_base' a rospy.Publisher
        # created elsewhere; look up the marker pose in the robot base frame:
        (trans, rot) = listener.lookupTransform('/link_base', '/marker_1', rospy.Time(0))
        # pack translation (x, y, z) and quaternion (x, y, z, w) into one array
        val = np.array([trans[0], trans[1], trans[2], rot[0], rot[1], rot[2], rot[3]], dtype=np.float64)
        marker_wrt_base.publish(val)
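The snippet above needs a running ROS graph, but the math it relies on can be shown standalone. A minimal sketch in plain Python (the helper names quat_rotate and transform_point are my own, not from tf) of applying the (trans, rot) pair returned by lookupTransform, with the quaternion in tf's (x, y, z, w) order, to a point expressed in the marker frame:

```python
import math

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (qx, qy, qz, qw)."""
    qx, qy, qz, qw = q
    # v' = v + qw * t + q_vec x t, where t = 2 * (q_vec x v)
    t = (2.0 * (qy * v[2] - qz * v[1]),
         2.0 * (qz * v[0] - qx * v[2]),
         2.0 * (qx * v[1] - qy * v[0]))
    return (v[0] + qw * t[0] + qy * t[2] - qz * t[1],
            v[1] + qw * t[1] + qz * t[0] - qx * t[2],
            v[2] + qw * t[2] + qx * t[1] - qy * t[0])

def transform_point(trans, rot, p):
    """Apply a (trans, rot) pair, as returned by lookupTransform, to point p."""
    r = quat_rotate(rot, p)
    return (r[0] + trans[0], r[1] + trans[1], r[2] + trans[2])
```

For example, with an identity rotation the point is simply shifted by trans; this is the same operation tf performs when it maps a point from the /marker_1 frame into /link_base.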


Stats

Asked: 2022-10-17 09:28:24 -0500

Seen: 87 times

Last updated: Nov 03 '22