
Navigation Stack without Odom/Lidar

asked 2019-08-15 13:09:24 -0500

enthusiast.australia

Hello. I have a few points of confusion. First, how can I provide a static map with static obstacles to the robot? I have an image file; how can I turn it into a map? I am not using any sensor to build the map; instead I made the image in a photo editor and want to use it as the map. Second, I want to use an external sensor instead of odometry to provide localization information. How can I use the sensor information to locate the robot and then give it a go-to-goal command? Assume the sensor gives x,y values. I am using a TurtleBot with Kinetic, Ubuntu 16.04, and Python. I'm not very good at ROS, so a somewhat detailed answer would be very helpful. Many thanks in advance.


1 Answer


answered 2019-08-15 16:51:19 -0500

ksb

From what I understood, you want to use an image instead of actual world coordinates for localization, and your sensor gives out x,y values. I am assuming the sensor output is in meters, in the world frame. As far as odometry goes, it is then a problem of mapping the image (pixel information) to the world (meter information in the actual world), e.g. 1 m of robot motion = 1 pixel in the image. For a photo, this mapping would depend on the position of the camera when the image was taken (the height, angle, and frame resolution). But since you are drawing the image yourself, it is up to you to draw it as a top view of the world and scale your pixels accordingly (for example, the 10 m x 10 m area of the world where the robot will move maps to 640x480 px). You then have a relation between image and world.

Now, to locate your TurtleBot: ideally you should start at (0,0) in the image, which will be (0,0) in the world. With the x,y information from the sensor you can map the robot's position into the image to view it there.

If you then want to set a goal, it is just a path-planning problem. You can use algorithms like A* to find waypoints; these points can be computed in the image, mapped back to the world, and given to the TurtleBot for actual motion.

If you can be a bit clearer with your question, I think I will be able to answer this better.
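The pixel-to-world relation described above can be sketched in a few lines of Python. The numbers are the hypothetical ones from the example (a 10 m x 10 m area drawn at 640x480 px); adjust them to your own map:

```python
# Hypothetical scale from the example above: a 10 m x 10 m area
# drawn as a 640 x 480 px top-view image.
WORLD_W, WORLD_H = 10.0, 10.0   # world size in meters
IMG_W, IMG_H = 640, 480         # image size in pixels

def world_to_pixel(x, y):
    """Map world coordinates (meters) to image coordinates (pixels)."""
    px = int(x / WORLD_W * IMG_W)
    py = int(y / WORLD_H * IMG_H)
    return px, py

def pixel_to_world(px, py):
    """Map image coordinates (pixels) back to world coordinates (meters)."""
    x = px * WORLD_W / IMG_W
    y = py * WORLD_H / IMG_H
    return x, y

print(world_to_pixel(5.0, 5.0))   # center of the room -> (320, 240)
print(pixel_to_world(320, 240))   # -> (5.0, 5.0)
```

With this in place, the x,y readings from your sensor can be drawn onto the image, and waypoints picked in the image can be converted back to meters for the robot.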


Comments

Let's start with the image. I will create a 2D image of a rectangle-shaped space representing my room, e.g. 5 meters square. It will include two rectangle-shaped sub-boxes, e.g. 0.3 meters square, which will act as static obstacles. Now I want to use this image as the reference for my map. How can I generate a map from it? I am not using any camera or sensor to make the image; I will create it in a photo editor, e.g. CorelDRAW.

The other part is: if I use the navigation stack, can I ignore odometry and use some other sensor for positioning? Because in general, the navigation stack uses odometry info to move the robot.

enthusiast.australia ( 2019-08-15 18:10:45 -0500 )

For the image, you can either try https://answers.ros.org/question/3229... (which I assume solves a problem similar to yours), or you can create a simulation environment in Gazebo, which allows you to build a model from images or blueprints, and then use gmapping (http://wiki.ros.org/slam_gmapping/Tut...) to generate the map data required by the navigation stack.
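If you serve the hand-drawn image directly with map_server (one common route for a static, pre-drawn map), the image is paired with a YAML descriptor. A minimal sketch, with hypothetical filenames and a resolution chosen for a 5 m room drawn at 500 px wide:

```yaml
# room_map.yaml -- hypothetical descriptor for map_server
image: room_map.png        # your hand-drawn map (obstacles dark, free space light)
resolution: 0.01           # meters per pixel (5 m / 500 px = 0.01 m/px)
origin: [0.0, 0.0, 0.0]    # pose of the lower-left pixel in the world [x, y, yaw]
occupied_thresh: 0.65      # pixels with occupancy above this are obstacles
free_thresh: 0.196         # pixels with occupancy below this are free space
negate: 0                  # 0 = lighter pixels mean free space
```

Running `rosrun map_server map_server room_map.yaml` then publishes the map on the `/map` topic, which is what the navigation stack's costmaps consume.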

ksb ( 2019-08-16 10:20:27 -0500 )

For the other part: the navigation stack requires odometry data with translation and rotation information. So if you can generate your own nav_msgs/Odometry message from the sensor data and publish it, that message will be used by the navigation stack as odometry information.
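Note that nav_msgs/Odometry carries an orientation quaternion as well as position, so a sensor that only gives x,y also needs a heading estimate. A minimal pure-Python sketch of that math (the actual message would be filled in and published with rospy, which is omitted here; `heading_from_positions` is a hypothetical helper that infers heading from two successive readings):

```python
import math

def yaw_to_quaternion(yaw):
    """Convert a planar heading (radians) to the (x, y, z, w) quaternion
    used in the orientation field of nav_msgs/Odometry."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

def heading_from_positions(x0, y0, x1, y1):
    """Estimate heading from two successive (x, y) sensor readings
    (only valid while the robot is actually moving forward)."""
    return math.atan2(y1 - y0, x1 - x0)

# Example: the sensor reported (0, 0) and then (1, 1),
# so the robot is heading at 45 degrees.
yaw = heading_from_positions(0.0, 0.0, 1.0, 1.0)
qx, qy, qz, qw = yaw_to_quaternion(yaw)
print(round(math.degrees(yaw)))   # 45
```

These values would go into the `pose.pose.position` and `pose.pose.orientation` fields of the Odometry message before publishing it on the topic the navigation stack is configured to read.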

ksb ( 2019-08-16 10:24:00 -0500 )

Stats

Asked: 2019-08-15 13:09:24 -0500

Seen: 730 times

Last updated: Aug 15 '19