How to get object coordinates to grip objects

asked 2017-03-06 09:51:10 -0500

UllusParvus gravatar image

updated 2017-03-06 11:42:00 -0500

Hi all,

I am working on a project that aims to implement object recognition and gripper-arm control for the KUKA youBot. I have already implemented a first working draft of the object recognition (based on color filters, the Hough transform, etc.). What I am struggling with now is getting the "real-world coordinates" of the recognized object(s) so I can pass them to the robot (and the gripper arm), which should then navigate to those coordinates and grip the object (a simple ball, for example). The robot uses an Asus Xtion Pro mounted on top of the gripper arm.

There is a topic /camera/depth/image_raw, which gives me the distance from the sensor for each pixel, and a topic /camera/depth/points, which provides point cloud data.
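For the depth-image route, the usual step is to back-project a pixel into a 3D point using the pinhole camera model. Here is a minimal sketch of that math; the intrinsics fx, fy, cx, cy would normally come from the camera's CameraInfo topic (the K matrix), and the numeric values below are made-up placeholders, not the Xtion's actual calibration:

```python
# Back-project a pixel (u, v) with depth d into a 3D point in the camera's
# optical frame, using the pinhole model. In ROS the intrinsics come from
# the sensor_msgs/CameraInfo message: K = [fx 0 cx; 0 fy cy; 0 0 1].

def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Return (x, y, z) in metres in the camera optical frame.

    depth is the value from the depth image at (u, v), already converted
    to metres (the raw image may be uint16 millimetres, so check encoding).
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    return (x, y, z)

# Example with placeholder intrinsics for a 640x480 camera:
point = pixel_to_point(u=320, v=240, depth=1.0,
                       fx=570.0, fy=570.0, cx=319.5, cy=239.5)
print(point)
```

This gives the object position in the camera's optical frame; you still need a tf transform to express it in the robot's base frame.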

I already found this post, in which pcl_ros::transformPointCloud is recommended. I played around with it a bit but didn't get very far.
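Conceptually, what pcl_ros::transformPointCloud (or tf2 for a single point) does is apply a rigid transform between frames. A minimal numeric sketch of that step, with a purely hypothetical rotation R and translation t (in a real node you would look these up with tf2, e.g. between the camera frame and the base frame, rather than hard-coding them):

```python
import numpy as np

# Apply a rigid transform (R, t) to map a 3D point from the camera frame
# into the robot base frame: p_base = R @ p_cam + t. In ROS this transform
# is published on tf; the values below are illustrative placeholders only.

def transform_point(point_cam, R, t):
    """Map a 3D point from the camera frame into the base frame."""
    return R @ np.asarray(point_cam, dtype=float) + np.asarray(t, dtype=float)

# Hypothetical setup: camera 0.4 m above the base, looking straight down
# (a 180-degree rotation about the x axis).
R = np.array([[1.0,  0.0,  0.0],
              [0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])
t = np.array([0.0, 0.0, 0.4])

p_base = transform_point([0.1, 0.0, 0.3], R, t)
```

Since your camera is mounted on the arm, the camera-to-base transform changes as the arm moves, which is exactly why looking it up through tf at the detection timestamp matters.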

Could anyone give me a hint on how best to extract the "real-world coordinates" I need to grip the recognized objects?


1 Answer


answered 2018-03-04 18:01:33 -0500

Once you have a detection in 2D using some method (Hough transform, YOLO, etc.), the next step is to find the 3D points that correspond to your 2D pixels, producing a point cloud. You can take the centroid of that cloud as the object position, and obtain an orientation by applying a technique such as PCA (principal component analysis) or CAD model fitting. The end product is the object pose; however, you then need a graspable pose, which depends on your manipulator and gripper. Ideally you would get this graspable pose from a grasp planner, or, if you want to bypass that step, simply grasp every object by its center.

As you can see, it is quite complex and there is no straightforward answer. What I recommend is to use a ready-made (out-of-the-box) perception algorithm that outputs an object pose, such as object recognition mean circle, object recognition kitchen, or find object 2D (all of them give you a 3D pose).
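The centroid-plus-PCA step described above can be sketched in a few lines of NumPy. The points here are synthetic; in practice they would be the 3D points from /camera/depth/points that fall inside your 2D detection:

```python
import numpy as np

# Estimate a rough object pose from a 3D point cluster: the mean of the
# points is the position, and the eigenvector of the covariance matrix
# with the largest eigenvalue (first principal component) gives the
# dominant axis of the object, which can serve as a rough orientation.

def object_pose_from_points(points):
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)          # 3x3 covariance of centred points
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigh: symmetric matrix
    major_axis = eigvecs[:, np.argmax(eigvals)]  # direction of largest spread
    return centroid, major_axis

# Synthetic elongated cluster along x, 0.5 m in front of the camera:
pts = [[x, 0.01 * x, 0.5] for x in np.linspace(0.0, 0.2, 20)]
centroid, axis = object_pose_from_points(pts)
```

For a near-spherical object like the ball in the question, the principal axes are not meaningful and the centroid alone is usually enough, which is one reason grasping everything by the center is a reasonable shortcut.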

