
Robotic arm object tracking

asked 2017-10-03 14:03:23 -0500 by Abdu

updated 2017-10-07 09:37:56 -0500


I tested the PhantomX Pincher arm with RViz using this repo (which is the same as this one), and everything works fine.

Now I want the arm to track an object using a normal camera, or even a Kinect. How can the arm track an object based on, for example, its color? What should I do to get started?



Not sure if this is what you want, but you could try find_object_2d

Humpelstilzchen ( 2017-10-05 03:01:41 -0500 )

Thanks a lot, but what should I do? Do I need to detect the object's 3D position, publish it on a ROS topic, and then have the arm read the object's position and follow it? Is that the process?

Abdu ( 2017-10-05 08:07:48 -0500 )

You will probably also have to do some transformation between coordinate frames. You can recognize the position of the object with respect to your camera, but your robot arm probably requires a different coordinate frame.

l4ncelot ( 2017-10-05 09:19:52 -0500 )

1 Answer


answered 2017-10-08 04:54:56 -0500 by jorge

updated 2019-05-16 07:58:57 -0500

Hi @Abdu, you essentially have the answer in the previous comments. I'll just try to summarize the steps here:

  1. Use an object detector that provides the 3D pose of the object you want to track. find_object_2d looks like a good option, though I use ORK.
  2. Use MoveIt! to reach the object pose: you can request this through one of its several interfaces. For example, in Python you would call `set_pose_target` and then `go` on a `MoveGroupCommander` object.
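The two steps above can be sketched with the MoveIt! Python interface. Note this is only a sketch: the `/object_pose` topic and the "arm" planning group name are assumptions for illustration, not anything from the original question.

```python
#!/usr/bin) env python
# Sketch: send the arm to each detected object pose with MoveIt!.
# Assumptions: a detector publishes PoseStamped on /object_pose,
# and the MoveIt! planning group is named "arm".
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

def on_object_pose(pose_msg):
    # MoveIt! resolves pose_msg.header.frame_id through TF, so the
    # pose may be expressed in the camera frame.
    group.set_pose_target(pose_msg)
    group.go(wait=True)
    group.stop()
    group.clear_pose_targets()

if __name__ == "__main__":
    moveit_commander.roscpp_initialize([])
    rospy.init_node("object_tracker")
    group = moveit_commander.MoveGroupCommander("arm")
    rospy.Subscriber("/object_pose", PoseStamped, on_object_pose, queue_size=1)
    rospy.spin()
```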

As @l4ncelot said, you must be careful with the object's pose frame. But you can provide a stamped pose to MoveIt!, so unless you make a mistake, it will be able to resolve the frame relative to the arm's reference frame.

All this sends the arm to the current object pose; if what you want is to move the arm to a position next to the object, then you must do some TF operations to calculate that pose. For example, once you have the object pose, you can transform it into the arm's base reference frame and calculate a pose closer to the arm but with the same heading.
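That frame change plus offset can be illustrated with plain homogeneous transforms. This is a minimal numpy sketch, not TF code; the camera mounting, object position, and 5 cm back-off are made-up example values.

```python
import numpy as np

def make_transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Assumed example: camera mounted 0.5 m above the arm base, looking
# straight down (camera z-axis points down: 180 deg rotation about x).
R_base_cam = np.array([[1.0,  0.0,  0.0],
                       [0.0, -1.0,  0.0],
                       [0.0,  0.0, -1.0]])
T_base_cam = make_transform(R_base_cam, [0.0, 0.0, 0.5])

# Object detected 0.3 m in front of the camera (along the camera z-axis).
p_cam = np.array([0.0, 0.0, 0.3, 1.0])

# Express the object position in the arm base frame.
p_base = T_base_cam @ p_cam

# A pose "closer to the arm but with the same heading":
# back off 5 cm along the base-to-object direction (assumed offset).
direction = p_base[:3] / np.linalg.norm(p_base[:3])
p_target = p_base[:3] - 0.05 * direction
```

In a real system you would get `T_base_cam` from a TF lookup (e.g. `tf2_ros`) rather than hard-coding it.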

Anyway, if you provide some more details of what you are trying to achieve, it would help us give a better answer. But I hope this is enough to start hacking!

UPDATE 16 May 2019: To pick an object with MoveIt!, to my understanding, you have two main options:

  1. Add a collision object to the planning scene where you have detected one (as I do here). Then you can use the high-level commands pick and place.
  2. Send the arm to the estimated pose and control the gripper yourself, as done here.
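Option 1 could look roughly like the following sketch; the object name, box size, frame, and "arm" group are all assumptions for illustration.

```python
# Sketch: add a collision object at a detected pose, then pick it.
# Assumed names: "arm" planning group, "target_box" object, base_link frame.
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

moveit_commander.roscpp_initialize([])
rospy.init_node("pick_demo")
scene = moveit_commander.PlanningSceneInterface()
group = moveit_commander.MoveGroupCommander("arm")

# In practice this pose comes from your object detector.
detected_pose = PoseStamped()
detected_pose.header.frame_id = "base_link"
detected_pose.pose.position.x = 0.2
detected_pose.pose.position.z = 0.05
detected_pose.pose.orientation.w = 1.0

# Register the object as a collision box in the planning scene.
scene.add_box("target_box", detected_pose, size=(0.04, 0.04, 0.04))
rospy.sleep(1.0)  # give the planning scene time to update

# High-level pick: MoveIt! plans the approach, grasp, and retreat.
group.pick("target_box")
```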



Thanks for your reply, @jorge. For more details: I want to attach a normal webcam to the end-effector of the PhantomX Pincher arm, and then have the arm track an object based on color. I suggest keeping it simple and using a normal webcam, so we would only have two dimensions, x and y; there is no z axis.

Abdu ( 2017-10-09 19:59:27 -0500 )

The tough part is how to use OpenCV to control the five servos.

Abdu ( 2017-10-09 20:00:05 -0500 )

@jorge To give you examples of what I want to achieve, please check these links; you will see something similar involving OpenCV and a robotic arm: 4-motions and 6-motions.

Abdu ( 2017-10-10 09:22:07 -0500 )

My answer is essentially valid for those examples, I think. Using inverse kinematics, you don't need to calculate the position of every servo, just an end-effector pose.

jorge ( 2017-10-10 09:41:50 -0500 )

However, to keep the gaze on an object rather than go to it, you just need to calculate the pitch and yaw of the end-effector pose (tb_arm cannot roll) and choose reasonable x, y, z coordinates (physically attainable, of course).

jorge ( 2017-10-10 09:44:52 -0500 )
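The gaze idea above boils down to two `atan2` calls. A minimal pure-Python sketch, assuming x-forward, y-left, z-up axes and no roll (the frame conventions are an assumption):

```python
import math

def gaze_angles(eye, target):
    """Yaw and pitch (radians) that point the end-effector at `target`,
    assuming x forward, y left, z up, and no roll."""
    dx = target[0] - eye[0]
    dy = target[1] - eye[1]
    dz = target[2] - eye[2]
    yaw = math.atan2(dy, dx)                    # rotation about z
    pitch = math.atan2(dz, math.hypot(dx, dy))  # elevation above the xy-plane
    return yaw, pitch
```

For example, an object straight ahead of the end-effector at the same height gives yaw = pitch = 0.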

Hi @jorge, I have almost decided what I want to do, and the idea is now clearer. Could you please take a look at the linked question and give me your guidance?

Abdu ( 2017-10-20 08:47:42 -0500 )

@ Does this task require hand-eye calibration, or would using only TF be sufficient to transform the object pose from the camera frame to the robot base frame?

nd ( 2019-05-12 06:22:32 -0500 )

What do you mean by "hand-eye calibration"? Something like in this tutorial? Good question... I would say yes, unless you happen to have an extremely accurate robot model.

jorge ( 2019-05-15 06:27:05 -0500 )
