Obtaining training data for a CNN with Gazebo
Hi Everyone,
I am currently working on a project to create a neural network that can perform object recognition on simple objects. My plan is to place a simple model in my Gazebo world (e.g. a bookshelf, wall, or box) and record a video while moving past it. I would then create a second, identical video in which the object is colored white and everything else is black. This second video would be my ground truth for training the neural network.
How would you go about doing this? Would creating two similar models with differing textures solve this, or is there a more elegant way? Are there any built-in Gazebo options that could help?
Any help would be appreciated! :)
Asked by andarm on 2020-12-19 09:04:59 UTC
Answers
First of all, this is quite a good approach for explicitly labeling an object. To start, you can build a simple world and place the tracked object in it, noting its position. Then create a similar object with a different texture and add it to a second world at exactly the same coordinates.
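A minimal sketch of the idea, assuming the standard gazebo_ros /gazebo/spawn_sdf_model service; the box model, pose, and colors are placeholders you would swap for your own object:

```python
#!/usr/bin/env python
# Sketch: spawn the same box geometry at an identical pose, once with a normal
# look (real world) and once as a pure-white "mask" (ground-truth world).
# Model name, size, colors, and pose are placeholders.
import rospy
from gazebo_msgs.srv import SpawnModel
from geometry_msgs.msg import Pose, Point, Quaternion

BOX_SDF = """
<sdf version="1.6">
  <model name="{name}">
    <static>true</static>
    <link name="link">
      <visual name="visual">
        <geometry><box><size>1 1 1</size></box></geometry>
        <material>
          <ambient>{rgba}</ambient>
          <diffuse>{rgba}</diffuse>
          <emissive>{emissive}</emissive>
        </material>
      </visual>
    </link>
  </model>
</sdf>
"""

def spawn(name, rgba, emissive, pose):
    rospy.wait_for_service('/gazebo/spawn_sdf_model')
    srv = rospy.ServiceProxy('/gazebo/spawn_sdf_model', SpawnModel)
    srv(model_name=name,
        model_xml=BOX_SDF.format(name=name, rgba=rgba, emissive=emissive),
        robot_namespace='',
        initial_pose=pose,
        reference_frame='world')

if __name__ == '__main__':
    rospy.init_node('spawn_labeled_box')
    # Note this pose and reuse it verbatim in the ground-truth world.
    pose = Pose(position=Point(2.0, 0.0, 0.5),
                orientation=Quaternion(0, 0, 0, 1))
    # Real world: a grey box.
    spawn('box_real', '0.5 0.5 0.5 1', '0 0 0 1', pose)
    # Ground-truth world: spawn this instead, fully white and emissive so
    # scene lighting does not introduce shading into the mask:
    # spawn('box_mask', '1 1 1 1', '1 1 1 1', pose)
```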
If you are driving a robot, record its motion with rosbag during the first run, then replay the bag in the second scenario so you get the same motion while capturing the marked object.
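For the replay, `rosbag play` on the command line is usually enough. If you want finer control, a sketch like the following replays a recorded /cmd_vel topic with its original timing (the bag name and topic are assumptions; adapt them to whatever you recorded):

```python
#!/usr/bin/env python
# Sketch: replay recorded /cmd_vel messages from a bag so the robot drives the
# same trajectory in the ground-truth world.
import rospy
import rosbag
from geometry_msgs.msg import Twist

def replay(bag_path, topic='/cmd_vel'):
    pub = rospy.Publisher(topic, Twist, queue_size=10)
    rospy.sleep(1.0)  # give the publisher time to connect
    bag = rosbag.Bag(bag_path)
    prev_t = None
    for _, msg, t in bag.read_messages(topics=[topic]):
        if prev_t is not None:
            rospy.sleep((t - prev_t).to_sec())  # preserve the original timing
        pub.publish(msg)
        prev_t = t
    bag.close()

if __name__ == '__main__':
    rospy.init_node('replay_drive')
    # Recorded earlier with: rosbag record -O drive.bag /cmd_vel
    replay('drive.bag')
```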
If you have multiple objects of interest, you might consider Gazebo's set_model_state and get_model_state services; with these you can change the objects' positions, moving them into and out of the operating zone. This way you can manage which objects appear in each recording.
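A short sketch of that pattern, using the standard gazebo_ros services; model names and coordinates here are placeholders:

```python
#!/usr/bin/env python
# Sketch: use /gazebo/get_model_state and /gazebo/set_model_state to move an
# object into the camera's operating zone, then park it far out of view.
import rospy
from gazebo_msgs.srv import GetModelState, SetModelState
from gazebo_msgs.msg import ModelState

def move_model(name, x, y, z):
    rospy.wait_for_service('/gazebo/set_model_state')
    set_state = rospy.ServiceProxy('/gazebo/set_model_state', SetModelState)
    state = ModelState()
    state.model_name = name
    state.pose.position.x = x
    state.pose.position.y = y
    state.pose.position.z = z
    state.pose.orientation.w = 1.0
    state.reference_frame = 'world'
    set_state(state)

if __name__ == '__main__':
    rospy.init_node('cycle_objects')
    rospy.wait_for_service('/gazebo/get_model_state')
    get_state = rospy.ServiceProxy('/gazebo/get_model_state', GetModelState)
    print(get_state('box_real', 'world').pose)  # where is the object now?
    move_model('box_real', 2.0, 0.0, 0.5)       # into the operating zone
    rospy.sleep(5.0)                            # record while it is visible
    move_model('box_real', 100.0, 100.0, 0.5)   # park it out of view
```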
You can write a simple bash script, or you might even record a bag file of the set_model_state calls and play it back in a loop (rosbag play --loop) to repeat the same sequence over and over again.
Answered by ulashmetalcrush on 2020-12-24 08:26:16 UTC
Comments
Thank you! I'll definitely try this out! :)
Commented by andarm on 2021-01-05 07:53:35 UTC
Comments