
How to make SLAM with other programs and tools?

asked 2017-09-05 07:44:55 -0500

gerson_n

updated 2017-09-05 11:22:56 -0500

Hi everyone,

I'd like to know a way to perform Simultaneous Localization And Mapping (SLAM) autonomously. All the projects I've seen are driven by the teleop_twist_keyboard node or a joystick. For my application, a wheeled robot needs to autonomously map the planar environment it moves in, preferably indoors. I know the teleop node, the navigation stack and gmapping are what almost everyone in ROS uses for this task.

Edit

Thanks to @gvdhoorn, I now know about other ROS implementations such as the frontier_exploration package and turtlebot_exploration_3d.

Which other implementations exist for doing SLAM in the way I'm looking for?

Thanks


Comments

Is this related to #q269765 (your previous question)? If not, can you clarify what you actually want to do?

gvdhoorn  ( 2017-09-05 07:53:56 -0500 )

Oh, yes it is. I forgot I'd asked the same question a week ago. In that case, should I delete this question?

gerson_n  ( 2017-09-05 09:15:57 -0500 )

We can close it as a duplicate and then optionally delete it.

gvdhoorn  ( 2017-09-05 09:27:26 -0500 )

Thanks for reopening this one.

gerson_n  ( 2017-09-05 11:40:21 -0500 )

1 Answer


answered 2017-09-06 03:11:09 -0500

Sebastian Kasperski

The nav2d package provides both SLAM and navigation for indoor robots. It can load exploration strategies to make the robot explore its environment autonomously. There are also tutorials on how to set it up.
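For reference, here is a minimal sketch (not from the original answer) of how the autonomous exploration could be triggered programmatically with actionlib. The action server names ("GetFirstMap", "Explore") and the nav2d_navigator message types are assumptions drawn from the nav2d documentation and may differ between versions, so check them against the tutorials before relying on this.

```python
#!/usr/bin/env python
# Hypothetical sketch: triggering nav2d's autonomous exploration from Python.
# The action names ("GetFirstMap", "Explore") and message types are assumptions
# based on the nav2d_navigator documentation; verify them for your version.

import rospy
import actionlib
from nav2d_navigator.msg import GetFirstMapAction, GetFirstMapGoal
from nav2d_navigator.msg import ExploreAction, ExploreGoal


def main():
    rospy.init_node('exploration_client')

    # Ask the mapper for an initial map before exploring (assumed action name).
    get_map = actionlib.SimpleActionClient('GetFirstMap', GetFirstMapAction)
    get_map.wait_for_server()
    get_map.send_goal(GetFirstMapGoal())
    get_map.wait_for_result()

    # Start the autonomous exploration behaviour (assumed action name).
    explore = actionlib.SimpleActionClient('Explore', ExploreAction)
    explore.wait_for_server()
    explore.send_goal(ExploreGoal())
    explore.wait_for_result()
    rospy.loginfo('Exploration finished with state %s', explore.get_state())


if __name__ == '__main__':
    main()
```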


Comments

Thanks a lot, I'd like to use the package you've written. That's what I've been looking for. Just a few questions about the parameters: what about its performance with a Kinect? Is it appropriate? I haven't found anyone who has used that sensor with it. And how does it work with odometry?

gerson_n  ( 2017-09-06 14:55:13 -0500 )

I understand I need position (odometry) and a vision sensor (Kinect), which together are used as inputs to a filter such as a Kalman filter or a particle filter (in this case). The thing is, I didn't see how I can send the odometry info to this algorithm. Can you tell me how to achieve that? Thanks

gerson_n  ( 2017-09-06 15:04:59 -0500 )

Odometry is input via tf, the same as for the navigation stack. I have never used nav2d with a Kinect, which is a 3D sensor, but I think there are already ROS nodes that create virtual 2D scans from a Kinect.

Sebastian Kasperski  ( 2017-09-07 06:14:41 -0500 )
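To illustrate the comment above, here is a minimal sketch (not from the original thread) of a robot driver broadcasting the odom -> base_link transform with rospy and tf, which is the odometry input the navigation stack expects. The frame names and the placeholder pose are assumptions; in a real driver the pose would come from wheel encoders or another odometry source.

```python
#!/usr/bin/env python
# Minimal sketch: broadcasting the odom -> base_link transform in ROS 1 (rospy).
# x, y and theta are placeholders; replace them with values from your actual
# odometry source (e.g. wheel encoders).

import rospy
import tf
from tf.transformations import quaternion_from_euler


def main():
    rospy.init_node('odom_tf_broadcaster')
    broadcaster = tf.TransformBroadcaster()
    rate = rospy.Rate(20)  # 20 Hz is a typical odometry publishing rate

    x, y, theta = 0.0, 0.0, 0.0  # placeholder pose

    while not rospy.is_shutdown():
        broadcaster.sendTransform(
            (x, y, 0.0),                             # translation in the odom frame
            quaternion_from_euler(0.0, 0.0, theta),  # yaw-only rotation
            rospy.Time.now(),
            'base_link',                             # child frame
            'odom')                                  # parent frame
        rate.sleep()


if __name__ == '__main__':
    main()
```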

Thanks again man.

gerson_n  ( 2017-09-11 07:45:13 -0500 )


Stats

Asked: 2017-09-05 07:44:55 -0500

Seen: 923 times

Last updated: Sep 06 '17