Using Kinect with Hokuyo lidar for navigation

Hi all,

I have a mobile base navigating around a room with a given map. I am using a Hokuyo laser for navigation, wheel encoders for odometry, robot_pose_ekf to fuse the IMU and wheel-encoder data, amcl for localization, and move_base for planning. Now I am planning to add a Kinect to detect obstacles that are not in the Hokuyo's view. I am using libfreenect and the freenect launch files to get data from the Kinect, and then depthimage_to_laserscan to convert the depth image to a laser scan. After this I feel stuck and am wondering how to merge this laser scan with the data coming from the Hokuyo laser. I don't want to do any navigation with the Kinect; I just want to use it for obstacle avoidance. Is there a standard way to do something like this? It would be great if someone could help me out here or point me in the right direction.
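(For context, the usual move_base approach is not to merge the two scans into a single topic at all, but to list both as observation sources in the costmap's obstacle layer. Below is a minimal sketch of such a costmap_common_params.yaml fragment; the topic and frame names are assumptions and would need to match the actual setup:)

```yaml
# Sketch only: both lasers feed the same obstacle layer.
# /scan, /kinect_scan, and the sensor_frame values are assumed names.
obstacle_layer:
  observation_sources: hokuyo_scan kinect_scan
  hokuyo_scan:
    topic: /scan
    sensor_frame: laser
    data_type: LaserScan
    marking: true
    clearing: true
  kinect_scan:
    topic: /kinect_scan
    sensor_frame: camera_depth_frame
    data_type: LaserScan
    marking: true
    clearing: true
```

With this, the costmap marks and clears obstacles from both sensors independently, so no explicit scan-merging node is needed.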

Update 1:
Thanks @Humpelstilzchen! Yes, I thought of that, but if I use an obstacle layer with the raw RGB-D point cloud to update the costmap, it is very computationally expensive, sometimes taking more than 5 seconds per update. When I instead feed the laser scan from depthimage_to_laserscan into an obstacle layer, performance is fine, but that scan only covers a single horizontal slice at the height of the Kinect (it acts like a laser sensor at that height) and ignores obstacles above or below it. What I really want is to project everything onto the Kinect's plane and update the costmap with that. Any help regarding this will be appreciated.
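(One possible direction, sketched here rather than a tested answer: a voxel layer can consume the full PointCloud2 and mark obstacles across a whole height range, which is effectively the "project everything down to the plane" behavior, and downsampling the cloud first, e.g. with a pcl/VoxelGrid filter, is the usual way to keep the update cheap. The topic and frame names below are assumptions:)

```yaml
# Sketch: voxel layer fed by a (preferably downsampled) Kinect cloud.
# Obstacles between min_obstacle_height and max_obstacle_height are
# projected down into the 2D costmap when marking.
voxel_layer:
  observation_sources: kinect_cloud
  z_resolution: 0.2          # coarse voxels keep updates cheap
  z_voxels: 10
  kinect_cloud:
    topic: /camera/depth/points_downsampled   # assumed topic name
    sensor_frame: camera_depth_frame
    data_type: PointCloud2
    marking: true
    clearing: true
    min_obstacle_height: 0.05
    max_obstacle_height: 1.5
```

The height limits control which part of the cloud gets flattened into the costmap, so anything between floor clutter and head height would block the planner regardless of where it sits relative to the Kinect.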

Thanks in advance.
Naman Kumar