ROS Electric arm_navigation architecture questions
Hi all,
I've started to realize that I'm not entirely sure how I'm supposed to use arm_navigation from my own code on the PR2. Thanks to E. Gil Jones I got programmatic insertion of my own meshes as collision objects working, but I can't figure out how to visualize what's happening. I ran planning_scene_warehouse_viewer, which launches an environment_server that subscribes to the collision_object topic (and I've verified my node is publishing to it), yet the mesh I insert never shows up in rviz. If I insert the same mesh through the GUI instead, it does appear in rviz. This is a problem because I want to send goals to the planner to test it out, but without seeing exactly where my obstacles are I can't tell whether planning is working correctly.
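In case it helps, here's a stripped-down sketch of roughly what my insertion code does (the object id, frame, and single-triangle mesh below are just placeholders; the real vertices come from my fitted model):

```cpp
#include <ros/ros.h>
#include <arm_navigation_msgs/CollisionObject.h>
#include <arm_navigation_msgs/Shape.h>
#include <geometry_msgs/Pose.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "add_mesh_object");
  ros::NodeHandle nh;
  ros::Publisher pub =
      nh.advertise<arm_navigation_msgs::CollisionObject>("collision_object", 10);
  ros::Duration(1.0).sleep();  // give environment_server time to connect

  arm_navigation_msgs::CollisionObject obj;
  obj.id = "my_mesh";  // placeholder object id
  obj.operation.operation =
      arm_navigation_msgs::CollisionObjectOperation::ADD;
  obj.header.frame_id = "base_link";
  obj.header.stamp = ros::Time::now();

  // A single-triangle "mesh" just to illustrate the message layout.
  arm_navigation_msgs::Shape mesh;
  mesh.type = arm_navigation_msgs::Shape::MESH;
  geometry_msgs::Point p;
  p.x = 0.7; p.y = 0.0; p.z = 0.8;  mesh.vertices.push_back(p);
  p.x = 0.7; p.y = 0.1; p.z = 0.8;  mesh.vertices.push_back(p);
  p.x = 0.7; p.y = 0.0; p.z = 0.9;  mesh.vertices.push_back(p);
  mesh.triangles.push_back(0);
  mesh.triangles.push_back(1);
  mesh.triangles.push_back(2);

  geometry_msgs::Pose pose;
  pose.orientation.w = 1.0;  // identity; vertices already in frame_id

  obj.shapes.push_back(mesh);
  obj.poses.push_back(pose);

  pub.publish(obj);
  ros::spinOnce();
  return 0;
}
```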
More generally, I'm not sure which launch files I'm supposed to be launching and interacting with. If I'm just writing code, not using the warehouse viewer, should I be using the launch files in pr2_3dnav as in previous releases of move_arm? And what is the relationship between arm_navigation and move_arm?
At a high level again, what I want to do is take some sensor data, fit a mesh I have ahead of time to an object in the real world, and then plan around that object. I can do the first parts, but I'm not sure what I should be sending the motion planning request to, or what general setup should be running before my code starts. What is the "correct" or suggested way to do this in ROS Electric? Thanks very much!
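For concreteness, this is roughly how I've been trying to send goals, following the pose-goal pattern from the move_arm tutorials (the frame, link name, target pose, and tolerances here are placeholder values):

```cpp
#include <ros/ros.h>
#include <actionlib/client/simple_action_client.h>
#include <arm_navigation_msgs/MoveArmAction.h>
#include <arm_navigation_msgs/utils.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "test_move_arm_goal");
  ros::NodeHandle nh;

  // Action client for the PR2 right-arm move_arm action server.
  actionlib::SimpleActionClient<arm_navigation_msgs::MoveArmAction>
      move_arm("move_right_arm", true);
  move_arm.waitForServer();

  arm_navigation_msgs::MoveArmGoal goal;
  goal.motion_plan_request.group_name = "right_arm";
  goal.motion_plan_request.num_planning_attempts = 1;
  goal.motion_plan_request.planner_id = "";
  goal.planner_service_name = "ompl_planning/plan_kinematic_path";
  goal.motion_plan_request.allowed_planning_time = ros::Duration(5.0);

  // Placeholder pose goal for the right wrist.
  arm_navigation_msgs::SimplePoseConstraint desired_pose;
  desired_pose.header.frame_id = "torso_lift_link";
  desired_pose.link_name = "r_wrist_roll_link";
  desired_pose.pose.position.x = 0.6;
  desired_pose.pose.position.y = -0.2;
  desired_pose.pose.position.z = 0.0;
  desired_pose.pose.orientation.w = 1.0;
  desired_pose.absolute_position_tolerance.x = 0.02;
  desired_pose.absolute_position_tolerance.y = 0.02;
  desired_pose.absolute_position_tolerance.z = 0.02;
  desired_pose.absolute_roll_tolerance = 0.04;
  desired_pose.absolute_pitch_tolerance = 0.04;
  desired_pose.absolute_yaw_tolerance = 0.04;
  arm_navigation_msgs::addGoalConstraintToMoveArmGoal(desired_pose, goal);

  move_arm.sendGoal(goal);
  if (move_arm.waitForResult(ros::Duration(60.0)))
    ROS_INFO("move_arm finished: %s",
             move_arm.getState().toString().c_str());
  else
    ROS_WARN("move_arm timed out");
  return 0;
}
```

This works against the launch setup from the warehouse viewer, but I don't know if that's the intended stack to have running when sending goals programmatically.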