I'm assuming that by "ROS stack running" you mean you have a base driver that takes in cmd_vel and spits out odom and tf?
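For reference, that interface looks roughly like the sketch below. It's a hypothetical stand-in, not your actual driver; topic and frame names (cmd_vel, odom, base_link) are just the usual conventions.

```python
#!/usr/bin/env python
# Hypothetical sketch of the base-driver interface described above, NOT a real
# driver: it just integrates cmd_vel into a pose and republishes it as odom + tf.
import rospy
import tf
from geometry_msgs.msg import Twist
from nav_msgs.msg import Odometry


class FakeBaseDriver(object):
    def __init__(self):
        self.x = self.y = self.th = 0.0
        self.cmd = Twist()
        self.odom_pub = rospy.Publisher('odom', Odometry, queue_size=10)
        self.tf_broadcaster = tf.TransformBroadcaster()
        rospy.Subscriber('cmd_vel', Twist, self.cmd_callback)

    def cmd_callback(self, msg):
        self.cmd = msg

    def update(self, dt):
        # A real driver would turn cmd_vel into wheel speeds and compute the
        # pose from encoders; here we just integrate the command directly.
        self.x += self.cmd.linear.x * dt
        self.y += self.cmd.linear.y * dt   # only non-zero on a holonomic base
        self.th += self.cmd.angular.z * dt
        now = rospy.Time.now()
        q = tf.transformations.quaternion_from_euler(0.0, 0.0, self.th)
        # tf: odom -> base_link
        self.tf_broadcaster.sendTransform((self.x, self.y, 0.0), q, now,
                                          'base_link', 'odom')
        # odom message carrying the same pose
        odom = Odometry()
        odom.header.stamp = now
        odom.header.frame_id = 'odom'
        odom.child_frame_id = 'base_link'
        odom.pose.pose.position.x = self.x
        odom.pose.pose.position.y = self.y
        odom.pose.pose.orientation.x, odom.pose.pose.orientation.y, \
            odom.pose.pose.orientation.z, odom.pose.pose.orientation.w = q
        odom.twist.twist = self.cmd
        self.odom_pub.publish(odom)


if __name__ == '__main__':
    rospy.init_node('fake_base_driver')
    driver = FakeBaseDriver()
    rate = rospy.Rate(20.0)
    while not rospy.is_shutdown():
        driver.update(0.05)
        rate.sleep()
```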

For converting Kinect data into laserscans for navigation, take a look at https://github.com/ros-perception/perception_pcl/blob/indigo-devel/pointcloud_to_laserscan/launch/sample_nodelet.launch. For Kinect, you'll have to use openni/freenect instead of openni2.
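Roughly what that launch does, adapted for a Kinect: this is a hedged sketch modeled on the linked sample_nodelet.launch, but using the standalone node for brevity, and the /camera/depth/points topic is just the openni_launch default, so adjust it to your camera driver.

```xml
<launch>
  <!-- Kinect driver (freenect_launch works the same way if you prefer it) -->
  <include file="$(find openni_launch)/launch/openni.launch"/>

  <!-- Flatten the depth cloud into a LaserScan the nav stack can consume -->
  <node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node"
        name="pointcloud_to_laserscan">
    <remap from="cloud_in" to="/camera/depth/points"/>
    <remap from="scan" to="/scan"/>
    <param name="target_frame" value="base_link"/>
    <param name="min_height" value="0.10"/>
    <param name="max_height" value="1.00"/>
    <param name="range_max" value="4.0"/>
  </node>
</launch>
```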

There's some sample navigation stuff for another omni-wheel robot here: https://github.com/paulbovbel/nav2_platform/tree/hydro-devel/nav2_navigation/launch

The important distinction is that your robot is holonomic, so you can give it an x, y, theta move command. Diff drive, on the other hand, would only take x, theta.
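In cmd_vel terms the message is a geometry_msgs/Twist either way; the difference is which fields the base can actually execute:

```python
from geometry_msgs.msg import Twist

cmd = Twist()
cmd.linear.x = 0.2    # forward, m/s            (both drive types)
cmd.linear.y = 0.1    # sideways/strafe, m/s    (holonomic only)
cmd.angular.z = 0.5   # rotation "theta", rad/s (both drive types)
# A diff-drive base can't execute linear.y, so its local planner should only
# ever command x and theta.
```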

If you're gonna reuse the configs there, make sure you tweak speed/acceleration limits, footprint, and anything else robot-specific (rough example below). There are some amcl and gmapping demo bringups in https://github.com/paulbovbel/nav2_platform/tree/hydro-devel/nav2_bringup as well.
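To give a rough picture of which knobs are robot-specific (placeholder numbers only, not tuned values; the parameter names are the standard base_local_planner / costmap_2d ones):

```yaml
# base_local_planner_params.yaml
TrajectoryPlannerROS:
  holonomic_robot: true     # lets the planner command y (strafing) velocities
  max_vel_x: 0.4            # m/s
  min_vel_x: 0.05
  max_vel_theta: 1.0        # rad/s
  acc_lim_x: 1.0            # m/s^2
  acc_lim_y: 1.0
  acc_lim_theta: 2.0        # rad/s^2

# costmap_common_params.yaml
footprint: [[0.25, 0.25], [0.25, -0.25], [-0.25, -0.25], [-0.25, 0.25]]  # meters
```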