Best way to set up a Kinect?

asked 2019-11-13 02:51:07 -0600

dennieboy96

Dear ROS community,

My project group and I, from the Netherlands, are building a robot. It is built on a mobility scooter platform with a TV on it, and in the future we want it to drive through crowds of people, for example at a convention. At the moment we still have some issues with the robot. We are the sixth project group to work on it, and we only have 18 weeks to finish, so it is quite hard to completely understand ROS. We are using Ubuntu 16.04 with ROS Kinetic.

Our robot is quite tall (about 1.5 m), which brings us to one problem: tables. When we drive the robot around, it tries to go underneath tables because it sees the legs but not the tabletop. At the moment we have a lidar and multiple sonar sensors at the bottom of the base, but nothing to detect obstacles higher off the ground.

Now we had the idea to add a Kinect v1, which we already have, but we could not find the best way to integrate it. I have read about SLAM, gmapping, etc., but we don't really understand which approach would suit us best. I also saw a package that takes the point cloud from the Kinect and converts it to a laser scan.

What would your recommendation be? We would like to detect objects from just below the lidar up to the top of the robot.




You can use a Kinect sensor to obtain a depth image of the surroundings, adding the extra camera frames to the robot model. Once you have built a proper tf tree with the robot model plus sensors, you can use the Kinect driver on ROS to obtain the data on a ROS topic, and then use a conversion node such as depthimage_to_laserscan to produce a LaserScan that the ROS navigation stack can consume.
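As a rough sketch, a launch file wiring the Kinect v1 driver (freenect_launch) to depthimage_to_laserscan could look like the following; the topic names, frame id and range values are assumptions that depend on your setup:

```xml
<launch>
  <!-- Kinect v1 driver; publishes depth images under /camera/... -->
  <include file="$(find freenect_launch)/launch/freenect.launch"/>

  <!-- Convert the depth image to a LaserScan on /kinect_scan -->
  <node pkg="depthimage_to_laserscan" type="depthimage_to_laserscan"
        name="depthimage_to_laserscan">
    <remap from="image" to="/camera/depth/image_raw"/>
    <remap from="camera_info" to="/camera/depth/camera_info"/>
    <remap from="scan" to="/kinect_scan"/>
    <param name="output_frame_id" value="camera_depth_frame"/>
    <param name="range_max" value="4.0"/>
  </node>
</launch>
```

Note that this conversion only uses a single horizontal slice of the depth image, so on its own it will not see the full height of a table unless you tilt the camera or widen the scan_height parameter.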

With that LaserScan you can navigate on a previously generated map, using AMCL for localization and the map_server services.
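A minimal localization launch file for that setup might look like this; the package name `my_robot`, the map file and the frame ids are placeholders you would replace with your own:

```xml
<launch>
  <!-- Serve a previously recorded map -->
  <node pkg="map_server" type="map_server" name="map_server"
        args="$(find my_robot)/maps/my_map.yaml"/>

  <!-- Localize against that map using the laser scan -->
  <node pkg="amcl" type="amcl" name="amcl">
    <remap from="scan" to="/scan"/>
    <param name="odom_frame_id" value="odom"/>
    <param name="base_frame_id" value="base_link"/>
  </node>
</launch>
```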

Or generate a map with a SLAM algorithm such as gmapping. Another alternative is a package such as ORB-SLAM, which can perform SLAM from a camera image input.
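For the gmapping route, a bare-bones launch file could look like this (frame names assumed; gmapping also needs odometry on tf to work):

```xml
<launch>
  <!-- Build a 2D occupancy grid from the laser scan + odometry -->
  <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping">
    <remap from="scan" to="/scan"/>
    <param name="base_frame" value="base_link"/>
    <param name="odom_frame" value="odom"/>
    <param name="map_frame" value="map"/>
  </node>
</launch>
```

Once you are happy with the map, save it with `rosrun map_server map_saver -f my_map` and feed it to map_server/AMCL for normal operation.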

Weasfas  ( 2019-11-14 07:19:21 -0600 )

No matter which approach you use, you will have to calibrate the Kinect sensor and set up the intrinsic camera parameters.
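The result of that calibration is a camera info YAML file that the driver loads. As an illustration only (all numbers below are placeholder values; run the camera_calibration package with a checkerboard to obtain your own), such a file has this shape:

```yaml
image_width: 640
image_height: 480
camera_name: kinect_rgb
camera_matrix:
  rows: 3
  cols: 3
  data: [525.0, 0.0, 319.5, 0.0, 525.0, 239.5, 0.0, 0.0, 1.0]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0.0, 0.0, 0.0, 0.0, 0.0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
projection_matrix:
  rows: 3
  cols: 4
  data: [525.0, 0.0, 319.5, 0.0, 0.0, 525.0, 239.5, 0.0, 0.0, 0.0, 1.0, 0.0]
```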

Weasfas  ( 2019-11-14 07:21:59 -0600 )

This seems a much broader question than just setting up a Kinect. One thing you can do is tilt the lidar, but that is an implementation choice if you just want to move forward. I don't know exactly which sensors you have, but here are the steps.

  1. First and most important: get your odometry correct. How? Use an IMU + GPS, wheel encoders, lidar SLAM (gmapping, NDT) or camera SLAM (ORB-SLAM2).
  2. Then comes obstacle detection and avoidance. Both can be done using move_base. move_base takes a point cloud or laser scan to generate obstacles and plan your path. Good practice would be to filter your point cloud for noise, and maybe remove the ground, for better performance. Then you can choose and tweak the planners (both global and local) for your use case.
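For the table problem specifically, the key is to feed the Kinect point cloud into the move_base costmap alongside the lidar scan. A sketch of the obstacle-layer part of a costmap YAML config (topic and frame names are examples, not your actual ones) might be:

```yaml
obstacle_layer:
  observation_sources: laser_scan kinect_cloud
  laser_scan:
    sensor_frame: laser_link
    data_type: LaserScan
    topic: /scan
    marking: true
    clearing: true
  kinect_cloud:
    sensor_frame: camera_depth_frame
    data_type: PointCloud2
    topic: /camera/depth/points
    marking: true
    clearing: true
    min_obstacle_height: 0.1   # crude ground removal
    max_obstacle_height: 1.5   # mark obstacles up to the top of the robot
```

The `min_obstacle_height`/`max_obstacle_height` limits are what let the costmap mark a tabletop as an obstacle even though it is above the lidar plane.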

Hope this gives you a better picture of steps to follow or things to keep in mind.

Choco93  ( 2019-11-14 08:08:35 -0600 )