How to get object coordinates to grip objects
Hi all,
I am working on a project that aims to implement object recognition and gripper-arm control for the KUKA youBot. I already have a first working draft of the object recognition (based on color filters, the Hough transform, etc.). What I am struggling with now is obtaining the "real-world coordinates" of the recognized object(s) so I can pass them to the robot (and the gripper arm), which should then navigate to those coordinates and grip the object (a simple ball, for example). The robot uses an Asus Xtion Pro mounted on top of the gripper arm.
There is a topic /camera/depth/image_raw that gives me, for each pixel, the distance to the sensor. I also see a topic /camera/depth/points that provides point cloud data.
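To make my question more concrete: my understanding is that a pixel from the depth image can be back-projected into a 3D point in the camera frame using the pinhole model, roughly like the sketch below. The intrinsics (fx, fy, cx, cy) here are made-up placeholder values; I assume the real ones would come from the camera's calibration (the /camera/depth/camera_info topic). Please correct me if this is the wrong approach.

```python
# Sketch: back-project a depth-image pixel into a camera-frame 3D point.
# fx, fy, cx, cy are placeholder intrinsics -- the real values should be
# read from the /camera/depth/camera_info topic, not hard-coded.

def pixel_to_camera_frame(u, v, depth, fx, fy, cx, cy):
    """Convert pixel (u, v) with metric depth into camera-frame (x, y, z)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    return (x, y, z)

# A pixel at the principal point lies on the optical axis,
# so x and y come out as zero and z is just the depth.
print(pixel_to_camera_frame(319.5, 239.5, 0.8, 570.3, 570.3, 319.5, 239.5))
```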
I already found this post, which recommends using pcl_ros::transformPointCloud. I played with it a bit but didn't get very far.
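If I understand correctly, what pcl_ros::transformPointCloud does under the hood is apply a rigid transform (rotation R, translation t) that maps points from the camera frame into a target frame such as the arm's base frame. The numbers below are invented for illustration (a camera looking straight down, mounted 0.5 m above the base origin); in ROS the actual transform would come from tf, since the camera is mounted on the arm.

```python
import numpy as np

# Sketch of the frame change: p_target = R @ p_camera + t.
# R and t are assumed example values, NOT the youBot's real calibration --
# in practice tf supplies the camera-to-base transform.

def transform_point(point_cam, R, t):
    """Map a 3D point from the camera frame into the target frame."""
    return R @ np.asarray(point_cam) + t

# Camera pointing straight down: camera depth (z) maps to -z of the base,
# and the camera sits 0.5 m above the base origin (made-up mounting).
R = np.array([[1.0,  0.0,  0.0],
              [0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])
t = np.array([0.0, 0.0, 0.5])

p_base = transform_point((0.1, 0.0, 0.4), R, t)
# A point 0.4 m below the camera ends up 0.1 m above the base origin.
```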
Could anyone point me to a good approach for extracting these "real-world coordinates" so the robot can grip the recognized objects?