
how to pick and place object with Moveit and Kinect

asked 2020-02-25 22:38:17 -0500

bkmen97

updated 2020-02-25 22:42:39 -0500

I have a robot that is controlled through MoveIt!, but I don't know how to integrate it with a Kinect for object detection and pick-and-place. I have read about PCL and tabletop_object, but I still haven't found a good solution. Can you suggest something? Thanks.


2 Answers


answered 2020-02-26 03:08:23 -0500

SSar

Hey,

The find_object_2d package is a great way to get into object detection and pick-and-place pipelines. Basically, you run the find_object_3d node, which subscribes to the /color and /depth topics and publishes the detected objects' properties (such as id, width, and height) to the /objects topic. The UI is pretty simple and straightforward: you just take a snapshot of the image and draw a bounding box around the object in the frame.
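Consuming that topic looks roughly like this minimal sketch (the 12-floats-per-object layout of the Float32MultiArray comes from the find_object_2d documentation; double-check it against your version):

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import Float32MultiArray

def objects_cb(msg):
    # find_object_2d packs each detection as 12 floats:
    # [id, width, height, h11, h12, h13, h21, h22, h23, h31, h32, h33]
    data = msg.data
    for i in range(0, len(data), 12):
        obj_id, width, height = data[i], data[i + 1], data[i + 2]
        rospy.loginfo("object %d: %.0f x %.0f px", int(obj_id), width, height)

rospy.init_node("object_listener")
rospy.Subscriber("/objects", Float32MultiArray, objects_cb)
rospy.spin()
```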

The best thing? The node also publishes each detected object's translation and rotation via tf. You can look up the transform from the object frame to the camera frame (and on to the base frame) and plan your pipeline from there.
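For example, a minimal lookup sketch (it assumes the object's frame is published as object_1, following find_object_2d's object_<id> naming, and that your robot's base frame is base_link; adjust both to your setup):

```python
#!/usr/bin/env python
import rospy
import tf

rospy.init_node("object_tf_lookup")
listener = tf.TransformListener()
rate = rospy.Rate(1.0)
while not rospy.is_shutdown():
    try:
        # Pose of the detected object expressed in the robot's base frame
        trans, rot = listener.lookupTransform("base_link", "object_1", rospy.Time(0))
        rospy.loginfo("object at xyz=%s quat=%s", trans, rot)
    except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
        pass
    rate.sleep()
```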

Hope that helps!


Comments

Thanks for your suggestion, I'm trying out the find_object_2d package.

bkmen97 ( 2020-02-26 21:45:33 -0500 )

Please accept the answer if it solved your problem

fvd ( 2020-02-29 11:45:31 -0500 )

answered 2020-02-25 23:43:12 -0500

What sort of object do you want to perform pick and place on? You first need a reliable 3D detection of the object. If the object has a primitive shape, PCL can be handy. I have not used a Kinect, so I don't know how precise/accurate its point clouds are; in any case, try to hold the camera as close as possible to the object scene.
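For a primitive shape, a RANSAC model fit is usually enough. A sketch using the python-pcl bindings (method names follow the python-pcl examples, and scene.pcd is a placeholder for your captured cloud):

```python
import pcl

# Load a captured scene (placeholder filename) and fit a cylinder with RANSAC
cloud = pcl.load("scene.pcd")
seg = cloud.make_segmenter_normals(ksearch=50)
seg.set_model_type(pcl.SACMODEL_CYLINDER)
seg.set_method_type(pcl.SAC_RANSAC)
seg.set_normal_distance_weight(0.1)
seg.set_max_iterations(1000)
seg.set_distance_threshold(0.05)   # inlier threshold in meters
seg.set_radius_limits(0.0, 0.1)    # expected cylinder radius range
indices, coefficients = seg.segment()
cylinder = cloud.extract(indices)  # points belonging to the fitted cylinder
```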

If you're comfortable with CNNs, you can train a CNN model to detect your object in the image, and then fuse the image with the point cloud to "reflect" the detection information onto the point cloud; this way you get object clusters in the point cloud.
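The "reflection" step boils down to projecting each 3D point into the image and keeping the ones that land inside the detected box. A sketch (it assumes the cloud is already expressed in the camera's optical frame; points, cam_info, and bbox are placeholders for your data):

```python
import numpy as np
from image_geometry import PinholeCameraModel

def points_in_bbox(points, cam_info, bbox):
    """Keep 3D points whose projection falls inside a 2D detection box.

    points:   Nx3 array in the camera optical frame
    cam_info: sensor_msgs/CameraInfo from the Kinect driver
    bbox:     (u_min, v_min, u_max, v_max) from the CNN detector
    """
    model = PinholeCameraModel()
    model.fromCameraInfo(cam_info)
    u_min, v_min, u_max, v_max = bbox
    keep = []
    for p in points:
        if p[2] <= 0:          # behind the camera / invalid depth
            continue
        u, v = model.project3dToPixel(p)
        if u_min <= u <= u_max and v_min <= v <= v_max:
            keep.append(p)
    return np.asarray(keep)
```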

Once you have the object clusters in the point cloud, you can construct a 3D box around each cluster, which gives you the final 3D detection.
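The simplest such box is axis-aligned, e.g. (cluster being an Nx3 NumPy array from the previous step):

```python
import numpy as np

def axis_aligned_box(cluster):
    """Axis-aligned 3D bounding box (center and size) of a point cluster."""
    lo, hi = cluster.min(axis=0), cluster.max(axis=0)
    center = (lo + hi) / 2.0
    size = hi - lo
    return center, size
```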

You may consider using this repo for constructing a minimal-volume 3D box around each object cluster: LidarPerception/object_builders_lib.

Alternatively, you can skip the CNN part entirely and use a clustering/segmentation algorithm to cluster the point cloud directly. Some preprocessing, such as ground-plane removal, makes the clustering algorithm's job easier; see the sketch below.
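A sketch of that route, again with the python-pcl bindings (parameter values are just starting points to tune):

```python
import pcl

cloud = pcl.load("scene.pcd")  # placeholder scene capture

# Remove the dominant plane (table/ground) with RANSAC
seg = cloud.make_segmenter()
seg.set_model_type(pcl.SACMODEL_PLANE)
seg.set_method_type(pcl.SAC_RANSAC)
seg.set_distance_threshold(0.01)
plane_indices, _ = seg.segment()
objects = cloud.extract(plane_indices, negative=True)

# Euclidean clustering on what is left
tree = objects.make_kdtree()
ec = objects.make_EuclideanClusterExtraction()
ec.set_ClusterTolerance(0.02)   # 2 cm
ec.set_MinClusterSize(100)
ec.set_MaxClusterSize(25000)
ec.set_SearchMethod(tree)
cluster_indices = ec.Extract()
```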

Again from LidarPerception, they have a great repo of clustering algorithms targeted at point clouds: LidarPerception/segmenters_lib.

Once you have the 3D detections, transform the object poses from the camera frame to base_link and then follow the MoveIt pick-and-place pipeline tutorial; you will likely need to modify very few things.
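A sketch of that last step (camera_rgb_optical_frame, the hard-coded pose, and the "arm" group name are placeholders for your detector output and MoveIt config):

```python
#!/usr/bin/env python
import sys
import rospy
import tf2_ros
import tf2_geometry_msgs
import moveit_commander
from geometry_msgs.msg import PoseStamped

rospy.init_node("pick_from_detection")
moveit_commander.roscpp_initialize(sys.argv)

buf = tf2_ros.Buffer()
tf2_ros.TransformListener(buf)
rospy.sleep(1.0)  # let the tf buffer fill

# Detected object pose in the camera frame (values come from your detector)
pose = PoseStamped()
pose.header.frame_id = "camera_rgb_optical_frame"
pose.pose.position.x, pose.pose.position.y, pose.pose.position.z = 0.0, 0.0, 0.8
pose.pose.orientation.w = 1.0

# Express the pose in base_link for planning
transform = buf.lookup_transform("base_link", pose.header.frame_id,
                                 rospy.Time(0), rospy.Duration(1.0))
pose_in_base = tf2_geometry_msgs.do_transform_pose(pose, transform)

# Hand it to MoveIt as a motion target for the grasp approach
group = moveit_commander.MoveGroupCommander("arm")
group.set_pose_target(pose_in_base)
group.go(wait=True)
```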

