Using the ZED camera with the navigation stack and mapping

asked 2016-08-31 07:07:23 -0500

Waseem Anjum

I have previously built a mobile robot. I used a lidar, an IMU, and optical wheel encoders for obstacle avoidance and for angular and linear odometry, respectively. I used gmapping for SLAM and the navigation stack (amcl, move_base, etc.) for autonomous motion.

Now I want to use the ZED camera with ROS navigation, and I expect it to replace the lidar, IMU, and wheel encoders. I have the following concerns regarding the ZED camera.

  • How can I perform mapping with the ZED camera, given that I have previously only used gmapping with a lidar?
  • Can the ZED camera replace the lidar for obstacle avoidance (the documentation says it provides point cloud data)?
  • Can it also replace the IMU and wheel encoders (it already publishes odometry on the /odom topic; a quick way to inspect this is sketched below)?

If anyone has already used the ZED camera with ROS, kindly advise whether it is the right choice for outdoor navigation and mapping. Thanks
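For reference, here is a minimal, untested sketch of how I would inspect the wrapper's odometry output before wiring it into move_base (the node name is arbitrary, and nav_msgs/Odometry is assumed as the message type):

    #!/usr/bin/env python
    # Minimal check of the visual odometry published by the ZED ROS wrapper.
    # The '/odom' topic name may differ depending on your zed-ros-wrapper launch file.
    import rospy
    from nav_msgs.msg import Odometry

    def odom_cb(msg):
        p = msg.pose.pose.position
        rospy.loginfo('frame=%s x=%.2f y=%.2f z=%.2f',
                      msg.header.frame_id, p.x, p.y, p.z)

    if __name__ == '__main__':
        rospy.init_node('zed_odom_check')
        rospy.Subscriber('/odom', Odometry, odom_cb)
        rospy.spin()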


Comments

You might want to look at some of the TurtleBot code. The TurtleBot code uses the Kinect and generates a LaserScan message, in much the same way as you could with the ZED.

Mark Rose  ( 2016-08-31 10:45:54 -0500 )
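For anyone landing here, a rough sketch of the kind of conversion described above: take the ZED point cloud (sensor_msgs/PointCloud2), keep a thin horizontal slice of points, and publish the closest point per angular bin as a sensor_msgs/LaserScan that gmapping and the costmaps can consume. The topic names, slice height, and frame conventions are assumptions to adapt to your zed-ros-wrapper setup; the pointcloud_to_laserscan and depthimage_to_laserscan packages do this more robustly.

    #!/usr/bin/env python
    # Rough sketch: project a PointCloud2 (e.g. from the ZED) onto a horizontal
    # slice and publish it as a LaserScan, similar to what the TurtleBot does
    # with the Kinect. Topic names and frame conventions are assumptions.
    import math
    import rospy
    from sensor_msgs.msg import PointCloud2, LaserScan
    from sensor_msgs import point_cloud2

    class CloudToScan(object):
        def __init__(self):
            self.scan_pub = rospy.Publisher('scan', LaserScan, queue_size=1)
            # '/zed/point_cloud/cloud_registered' is a guess at the ZED cloud topic.
            rospy.Subscriber('/zed/point_cloud/cloud_registered', PointCloud2,
                             self.cloud_cb, queue_size=1)

        def cloud_cb(self, cloud):
            scan = LaserScan()
            scan.header = cloud.header
            scan.angle_min = -math.pi / 4
            scan.angle_max = math.pi / 4
            scan.angle_increment = math.radians(0.5)
            scan.range_min = 0.5
            scan.range_max = 20.0
            n = int((scan.angle_max - scan.angle_min) / scan.angle_increment) + 1
            ranges = [float('inf')] * n

            # Keep only points in a thin horizontal slice and take the closest
            # point per angular bin.
            for x, y, z in point_cloud2.read_points(
                    cloud, field_names=('x', 'y', 'z'), skip_nans=True):
                if abs(z) > 0.1:   # assumes z is roughly "up" in the cloud frame
                    continue
                angle = math.atan2(y, x)
                if angle < scan.angle_min or angle > scan.angle_max:
                    continue
                r = math.hypot(x, y)
                i = int((angle - scan.angle_min) / scan.angle_increment)
                if scan.range_min < r < ranges[i]:
                    ranges[i] = r

            scan.ranges = ranges
            self.scan_pub.publish(scan)

    if __name__ == '__main__':
        rospy.init_node('zed_cloud_to_scan')
        CloudToScan()
        rospy.spin()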