object detection with turtlebot [closed]

asked 2012-05-08 07:31:55 -0500

Jerneja Mislej

updated 2012-05-08 11:23:00 -0500

I am new to ROS. I am using Electric on Ubuntu 10.10, and my robot hardware platform is a TurtleBot.

I need to make the TurtleBot alter its trajectory towards a goal given to it on a map built with gmapping. The aim is to make it slalom around objects that were not originally on the map. I am still not sure how to carry out the whole thing. I am considering inserting a fake obstacle on the inner side of the object, either by faking a laser scan or, if possible, by publishing on move_base/.../inflated_obstacles, thereby forcing it to go around; or perhaps publishing intermediate goals on the outer side of the obstacle. In either case I need to detect the object on the way to the goal and get its coordinates.
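To make the intermediate-goal idea concrete, here is a rough sketch of the geometry I have in mind (the function name and the clearance value are just placeholders of mine, not from any ROS package): given the robot, the goal, and a detected obstacle, place a waypoint beside the obstacle on the side away from the direct path.

```python
import math

def detour_goal(robot, goal, obstacle, clearance=0.5):
    """Place an intermediate goal offset perpendicular to the
    robot-to-goal line, on the far side of the obstacle, so the
    planner is pulled around it. All points are (x, y) tuples."""
    dx, dy = goal[0] - robot[0], goal[1] - robot[1]
    length = math.hypot(dx, dy)
    # unit vector perpendicular to the direct robot-to-goal path
    px, py = -dy / length, dx / length
    # signed distance of the obstacle from the robot-goal line
    s = (obstacle[0] - robot[0]) * px + (obstacle[1] - robot[1]) * py
    # detour on the opposite side of the line from the obstacle's offset
    side = -1.0 if s > 0 else 1.0
    return (obstacle[0] + side * clearance * px,
            obstacle[1] + side * clearance * py)
```

If this is workable, I imagine the resulting point could be sent as a geometry_msgs/PoseStamped on move_base_simple/goal, but I am not sure whether that is the right way to inject intermediate goals.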

I am already having problems with the basic detection/segmentation of the plane and the objects on it. I am currently trying tabletop_object_detector with the tabletop_segmentation launch file, but I don't know how to use it properly, and beyond the basic documentation I can't find any tutorials on it. I would really appreciate guidance from somebody who has experience with this. I have the map built, autonomous navigation on the map works, and so does avoidance of newly placed obstacles. How do I run this segmentation, what do I need to launch, and what else, such as PCL, has to be launched with it?
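As I understand it, the segmentation separates the dominant plane from the points above it. Here is a toy version of that idea, assuming a known horizontal plane instead of the RANSAC plane fit the package presumably does; the function and thresholds are my own placeholders:

```python
def split_plane_and_objects(points, plane_z=0.0, tol=0.02):
    """Split a list of (x, y, z) points into those lying on a
    horizontal plane at height plane_z (within tol metres) and
    those sticking out of it, which I would treat as objects."""
    plane, objects = [], []
    for x, y, z in points:
        if abs(z - plane_z) <= tol:
            plane.append((x, y, z))
        else:
            objects.append((x, y, z))
    return plane, objects
```

Is this roughly what tabletop_segmentation does internally, just with the plane estimated from the cloud rather than assumed?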

What about other object detection packages, like roboearth or cmvision — would these be easier to use? Or could I get the coordinates of the object directly from the laser, since I know I will be dealing with a simple plane and anything sticking out of it will be an object? If anybody has experience with anything similar, any help would do... I am really desperate...
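For the laser-only approach, this is the kind of processing I have in mind (a sketch of my own, not from any package): convert the scan's ranges to 2D points in the laser frame and group consecutive returns into clusters, taking each cluster's centroid as an object's coordinates.

```python
import math

def scan_to_clusters(ranges, angle_min, angle_inc, max_range=4.0, gap=0.3):
    """Convert laser ranges to 2D points in the laser frame and group
    consecutive returns whose spacing is below `gap` metres into
    clusters; return each cluster's centroid as a candidate object."""
    clusters, current, prev = [], [], None
    for i, r in enumerate(ranges):
        if math.isinf(r) or r >= max_range:
            # invalid or out-of-range reading: close the open cluster
            if current:
                clusters.append(current)
                current = []
            prev = None
            continue
        a = angle_min + i * angle_inc
        p = (r * math.cos(a), r * math.sin(a))
        if prev is not None and math.hypot(p[0] - prev[0], p[1] - prev[1]) > gap:
            clusters.append(current)
            current = []
        current.append(p)
        prev = p
    if current:
        clusters.append(current)
    return [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            for c in clusters]
```

Would something this simple be reliable enough, given that in ROS the ranges would come from a sensor_msgs/LaserScan message?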

Now, after I somehow get the coordinates, I am still not sure how to handle them. As I said, I am new to ROS, so I don't know exactly how the transformation between coordinate frames works. Are the coordinates of the map built with gmapping relative to the robot, or are they already expressed in some universal frame? In what frame does the laser publish its data? And the Kinect — if using PCL, in which frame are the points from the cloud?
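To check my understanding of the frame transformation (ignoring the laser's mounting offset from the robot base, and assuming tf would normally do this for me): a point detected in the robot's frame should map into the map frame by rotating by the robot's yaw and translating by its position.

```python
import math

def robot_to_map(point, robot_pose):
    """Transform a 2D point from the robot's frame into the map
    frame, given the robot's pose (x, y, yaw) in the map frame.
    This is the planar case of what tf computes along the
    map -> robot frame chain."""
    px, py = point
    rx, ry, yaw = robot_pose
    mx = rx + px * math.cos(yaw) - py * math.sin(yaw)
    my = ry + px * math.sin(yaw) + py * math.cos(yaw)
    return (mx, my)
```

So a point one metre ahead of a robot facing "up" the map at (2, 3) should land at (2, 4) in map coordinates. Is this what tf does under the hood, and is listening to tf the right way to do it instead of computing it by hand?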

Any help, please; I would be really grateful.


Closed for the following reason: "question is not relevant or outdated" by tfoote
close date 2015-03-03 01:39:29.556736