
setOrigin: broadcast a TF transform for a filtered cluster box

asked 2019-03-07 02:39:24 -0500

artemiialessandrini

updated 2019-03-07 03:56:32 -0500

Greetings!

I'm currently working on Pick and Place using Kinect camera, detecting a box from the PointCloud.

As shown in the pictures below, several PCL filtering steps successfully detect the box-like cluster (the biggest cluster on the table).

Gazebo

RViz box cluster detection

In my understanding, the goal of the pipeline is to publish the result as a TF transform that the arm can move to and grasp.

Unfortunately, when I broadcast the transform, the origin ends up at a corner of the box rather than at its centre/top, which is not good for moveit_commander to work with.

Following Source 1 and Source 2, the tf broadcaster is set up as follows:

    static tf::TransformBroadcaster br;
    tf::Transform part_transform;

    // In tf::Vector3(x, y, z), x, y and z should be calculated
    // from the point cloud filtering results
    part_transform.setOrigin(tf::Vector3(sor_cloud_filtered->at(0).x,
                                         sor_cloud_filtered->at(0).y,
                                         sor_cloud_filtered->at(0).z));

    tf::Quaternion q;
    q.setRPY(0, 0, 0);
    part_transform.setRotation(q);

    // Broadcast the transform
    br.sendTransform(tf::StampedTransform(part_transform, ros::Time::now(),
                                          world_frame, "part"));

Am I mistaken at some point? Has anyone faced the same problem of the TF ending up in the corner?

Feel free to ask for clarification on any point!

Best Regards,

Artemii


Comments

Could you please attach your images directly to the question? I've given you sufficient karma for that.

gvdhoorn  ( 2019-03-07 03:48:39 -0500 )

Thank you, @gvdhoorn! Attached directly now.

artemiialessandrini  ( 2019-03-07 03:57:34 -0500 )

1 Answer


answered 2019-03-07 03:31:54 -0500

The code you have shown us is actually taking the location of the first point in the point cloud, so it is not certain where on the cluster that point will lie. We would not expect it to be the centre of the cluster. Because point clouds from the Kinect are structured, that point is likely to land on a corner, as you have seen, but it won't always.

You're trying to determine the grasp pose required to pick up the block; unfortunately, this problem is a fair bit more complicated than your current solution handles. What happens if the block is placed at a diagonal angle? Ideally you want to estimate both the location and the orientation of the object so you can reliably determine the grasp pose.

A simple solution that ignores orientation for now is to find the centre of the cluster. To do this, find the minimum and maximum x, y and z values in the point cloud; the centre is then halfway between these min and max values. This method only works if the box is axis-aligned, and it does not determine the box's size.
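The min/max midpoint idea can be sketched without the PCL dependency as follows; a toy `Point` struct stands in for the PCL cloud (in PCL itself, `pcl::getMinMax3D` from `pcl/common/common.h` computes the same bounds), and the function name is invented for the example:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Point { float x, y, z; };

// Centre of the axis-aligned bounding box of a cluster:
// halfway between the per-axis min and max values.
// Only a sensible grasp target if the box is axis-aligned.
Point boundingBoxCentre(const std::vector<Point>& cloud) {
    Point min_pt = cloud.front();
    Point max_pt = cloud.front();
    for (const Point& p : cloud) {
        min_pt.x = std::min(min_pt.x, p.x);
        min_pt.y = std::min(min_pt.y, p.y);
        min_pt.z = std::min(min_pt.z, p.z);
        max_pt.x = std::max(max_pt.x, p.x);
        max_pt.y = std::max(max_pt.y, p.y);
        max_pt.z = std::max(max_pt.z, p.z);
    }
    return Point{(min_pt.x + max_pt.x) / 2.0f,
                 (min_pt.y + max_pt.y) / 2.0f,
                 (min_pt.z + max_pt.z) / 2.0f};
}
```

The resulting point would replace `sor_cloud_filtered->at(0)` in the `setOrigin` call from the question.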

A more complex but robust method would be to use RANSAC to detect the faces of the cube, and/or extract the edges of the cloud and use RANSAC to detect the cube's edge lines. This way you could estimate the position, orientation and size of the cube, which would allow a suitable grasp to be found in the vast majority of cases, but it is a complex process to develop.
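In PCL, this kind of model fitting is typically done with `pcl::SACSegmentation` configured with `SACMODEL_PLANE`. As a self-contained illustration of the RANSAC principle itself (not of the PCL API), here is a toy 2D line fit: propose a model from a minimal sample of points, count inliers within a tolerance, and keep the best model. The function name and tolerance are invented for the example, and it tries every point pair exhaustively rather than sampling randomly, to keep the demo deterministic:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt2  { double x, y; };
struct Line { double slope, intercept; };

// RANSAC-style line fit: each point pair proposes a candidate line,
// and the candidate with the most inliers (points within inlier_tol
// of the line) wins, so outliers cannot drag the model off.
Line fitLineRansac(const std::vector<Pt2>& pts, double inlier_tol,
                   std::size_t* inlier_count = nullptr) {
    Line best{0.0, 0.0};
    std::size_t best_inliers = 0;
    for (std::size_t i = 0; i < pts.size(); ++i) {
        for (std::size_t j = i + 1; j < pts.size(); ++j) {
            double dx = pts[j].x - pts[i].x;
            if (std::fabs(dx) < 1e-9) continue;  // skip vertical pairs
            Line cand{(pts[j].y - pts[i].y) / dx, 0.0};
            cand.intercept = pts[i].y - cand.slope * pts[i].x;
            std::size_t inliers = 0;
            for (const Pt2& p : pts)
                if (std::fabs(p.y - (cand.slope * p.x + cand.intercept)) <
                    inlier_tol)
                    ++inliers;
            if (inliers > best_inliers) {
                best_inliers = inliers;
                best = cand;
            }
        }
    }
    if (inlier_count) *inlier_count = best_inliers;
    return best;
}
```

The same consensus idea scales up to fitting planes to the faces of the box: sample three points instead of two, and measure point-to-plane distance instead of vertical offset.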

Hope this gives you some ideas.


Comments

Thank you, @PeteBlackerThe3rd, for your response! The answer is nice and clear; now I understand. You mentioned RANSAC is a complex process, but are there any simpler solutions for this task in ROS, or would you still recommend attempting RANSAC? Except for AR_Markers :)

artemiialessandrini  ( 2019-03-07 04:06:53 -0500 )

AR_Markers are a good solution if they suit your use case. I don't know of an existing box-detection algorithm, although it's something someone should probably make. I would recommend having a look at RANSAC if you have time, though; it's a powerful tool.

PeteBlackerThe3rd  ( 2019-03-07 04:24:16 -0500 )

Got it! But what if my final goal is to detect only one object from a set of objects on the workspace/table? The description says: "RANSAC can only estimate one model for a particular data set".

artemiialessandrini  ( 2019-03-07 04:30:45 -0500 )
