
Pick-and-Place using Logitech C920 problem

asked 2018-07-21 10:35:27 -0600

artemiialessandrini

Greetings to the ROS Community!

I'm working on a pick-and-place procedure using a UR5 arm and a Logitech C920.

I'm detecting simple box-like objects and using cv_bridge to publish each object's pose (obj_No, x, y, z=0, yaw = object angle).

The questions are:

  1. How do I calibrate the camera so that the robot can actually move to the published object pose?

I cannot find any proper explanation or tools for extrinsic calibration. Do I have to use TF plus something else, and basically set up the camera frame relative to the world frame and the robot frame as well?

  2. Another obvious question: can I make the camera setup more robust to accidental bumps that change its frame, or does the camera have to be absolutely fixed to achieve a precise result?

My apologies for the beginner's vocabulary :)

Best Regards!


1 Answer


answered 2018-07-23 03:10:49 -0600

To answer both your questions: yes, there are solutions, but you'll need to get your hands dirty and build some of them yourself. I'm assuming your camera is fixed w.r.t. the workspace and views the working volume of the robot.

  1. We have such a system, and we place a fiducial marker taped in place on the table. We use ar_track_alvar, but there are a few packages to choose from. We then measure, as accurately as possible, where this marker is relative to the robot base, and set this up as a static transform. Then, when you detect the location of the marker in the camera frame, you can calculate the extrinsic calibration you want.
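The static transform mentioned above can be published from a launch file. This is a minimal sketch, assuming the marker frame is named `ar_marker_0` and the translation values are placeholders you would replace with your own hand measurements:

```xml
<launch>
  <!-- Hand-measured pose of the taped-down marker relative to the robot base.
       args: x y z yaw pitch roll parent_frame child_frame (placeholder values) -->
  <node pkg="tf2_ros" type="static_transform_publisher" name="base_to_marker"
        args="0.45 -0.20 0.0 0 0 0 base_link ar_marker_0" />
</launch>
```

With this in place, the marker detection closes the TF chain from the camera to the robot base.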

  2. Using the process described above, our system automatically calibrates itself every time it's started. This avoids having to worry about the camera being knocked while the system is not in use.
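The math behind this self-calibration is just a composition of homogeneous transforms: the camera pose in the base frame is the known base-to-marker transform composed with the inverse of the detected camera-to-marker transform. A minimal NumPy sketch, with placeholder values standing in for the hand measurement and the detector output:

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# T_base_marker: measured by hand and published as the static transform (placeholders)
T_base_marker = make_T(np.eye(3), [0.45, -0.20, 0.0])

# T_cam_marker: reported by the marker detector (e.g. ar_track_alvar) at startup
T_cam_marker = make_T(np.eye(3), [0.10, 0.05, 0.80])

# Camera pose in the robot base frame: compose base->marker with marker->camera
T_base_cam = T_base_marker @ np.linalg.inv(T_cam_marker)

# Any point seen in the camera frame can now be expressed in the base frame.
# As a sanity check, the marker centre should map back to its measured position.
p_cam = np.array([0.10, 0.05, 0.80, 1.0])
p_base = T_base_cam @ p_cam
```

In a real system you would feed the detector's full 6-DOF pose (rotation included) into `make_T` rather than identity rotations.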

Hope this helps.


Comments

Thank you for your response!

In this way, is it correct to say that we use the marker's size to calculate an object's size, too? Additionally, what if the table surface changes its angular orientation? Do we need to recalculate roll, pitch and yaw relative to the robot, too?

artemiialessandrini ( 2018-07-25 00:08:03 -0600 )

Using ar_track_alvar you just have to define the size of the marker in cm. Then it calculates the distance, position and orientation for you. It gives you a full 6 DOF pose of the marker relative to the camera.
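Since the marker's physical size is known, its apparent size in the image also gives you a metres-per-pixel scale for sizing objects on the table. A minimal sketch under the simplifying assumption that the camera looks straight down at the table (all numbers are placeholders; with a tilted camera you would use the full 6-DOF marker pose instead):

```python
# Known physical marker size vs. its measured size in the image
marker_side_m = 0.05          # 5 cm marker, as configured in the detector
marker_side_px = 100.0        # measured side length of the marker in pixels

scale = marker_side_m / marker_side_px   # metres per pixel on the table plane

# Apply the same scale to a detected object's bounding box
object_width_px = 240.0
object_width_m = object_width_px * scale
```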

PeteBlackerThe3rd ( 2018-07-25 02:09:32 -0600 )

Marking this as the answer, everything's explained. And going to use markers then, great!

Two additional questions I want to ask:

  1. "<...> but you need to get your hands dirty" - could I get some references or tags about those other methods?
  2. Is this method going to work alongside standard OpenCV contour detection?
artemiialessandrini ( 2018-07-25 03:54:04 -0600 )

1) I meant that there is a method to solve your problem; it's just not a turn-key package you can plug in and configure. Hence you'll have to write your own node using available libraries to get this going.

PeteBlackerThe3rd ( 2018-07-25 05:12:34 -0600 )

2) I believe the tag detection thresholds the image to a binary image and then detects the marker, but I haven't looked at the algorithm in detail. You can add your own node alongside to process the camera image using any method you like.
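Whatever detector produces the object pixels (OpenCV contours or otherwise), the yaw the question asked about can be estimated from those points. A hypothetical NumPy sketch using PCA, where the principal axis of the point cloud gives the object's in-plane angle (sign-ambiguous by 180 degrees, which is usually fine for a symmetric box):

```python
import numpy as np

def object_yaw(points):
    """Estimate the in-plane angle of an elongated blob from its pixel
    coordinates (N x 2 array): the eigenvector of the covariance matrix
    with the largest eigenvalue points along the object's long axis."""
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centred.T))
    major = eigvecs[:, np.argmax(eigvals)]
    return np.arctan2(major[1], major[0])

# Sanity check: a synthetic box of points rotated by 30 degrees
angle = np.deg2rad(30.0)
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
box = np.array([[x, y] for x in np.linspace(-2, 2, 20)
                       for y in np.linspace(-0.5, 0.5, 5)]) @ R.T
yaw = object_yaw(box)
```

OpenCV's `cv2.minAreaRect` on a contour gives a comparable angle directly if you are already in the OpenCV pipeline.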

PeteBlackerThe3rd ( 2018-07-25 05:14:00 -0600 )
