  1. viso2_ros has a kinetic branch on GitHub, so it supports ROS Kinetic.
  2. Bucketing parameters: if you extract features from an image, a feature-rich part of the scene usually produces most of the features and the other parts of the image end up underrepresented. To avoid this, the image is divided into a grid of buckets and at most n features are kept from each bucket. Once you understand this concept, ~max_features, ~bucket_width and ~bucket_height are self-explanatory (there is a small sketch of the idea after this list).
  3. Matcher parameters configure the feature matching algorithm. For example, ~nms_n and ~nms_tau are used in the non-maximum suppression step. It is a good idea to start with the default values if you don't have insight into what these parameters mean.
  4. Monocular visual odometry cannot recover the absolute scale. This is why you need to provide the camera height and camera pitch as parameters. You can read the pitch from your IMU and the height from your sonar (I believe you previously asked a question with that setup and I assume it hasn't changed). You can update the parameters at runtime using the C++ API, or alternatively the Python API (a Python sketch follows after this list).
  5. As for where these parameters need to be set: all static parameters can be set in the launch file using a parameters.yaml. This is a good tutorial. You can set the dynamic parameters using the C++ or Python API as mentioned in point 4 (a Python example follows below).
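
To make the bucketing idea in point 2 concrete, here is a minimal Python sketch (an illustration of the concept only, not libviso2's actual C++ implementation): features are assigned to grid cells of bucket_width x bucket_height pixels and only the strongest max_features per cell are kept, so a feature-rich corner of the image cannot crowd out the rest.

```python
from collections import defaultdict

def bucket_features(features, bucket_width, bucket_height, max_features):
    """Keep at most `max_features` of the strongest features per bucket.

    `features` is a list of (x, y, strength) tuples; the cell size and the
    per-bucket cap mirror the ~bucket_width, ~bucket_height and
    ~max_features parameters described above.
    """
    buckets = defaultdict(list)
    for x, y, strength in features:
        cell = (int(x) // bucket_width, int(y) // bucket_height)
        buckets[cell].append((x, y, strength))

    kept = []
    for cell_features in buckets.values():
        cell_features.sort(key=lambda f: f[2], reverse=True)  # strongest first
        kept.extend(cell_features[:max_features])
    return kept

# Three strong features crowd one corner of the image; only two survive.
features = [(10, 12, 0.9), (11, 14, 0.8), (12, 15, 0.7), (200, 180, 0.5)]
print(bucket_features(features, bucket_width=50, bucket_height=50, max_features=2))
```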
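
For point 4, this is a rough Python sketch of pushing the pitch (from the IMU) and the height (from the sonar) into the odometer's private parameters via the parameter server. The node and topic names are placeholders for your setup, and whether the odometer picks up new values at runtime depends on your viso2_ros version, so treat it as a sketch of the mechanism rather than a guaranteed recipe.

```python
#!/usr/bin/env python
# Sketch: feed camera pitch and height to the mono odometer at runtime.
# "/mono_odometer", "imu/data" and "sonar/range" are placeholder names.
import rospy
from sensor_msgs.msg import Imu, Range
from tf.transformations import euler_from_quaternion

ODOM_NODE = "/mono_odometer"

def imu_cb(msg):
    q = msg.orientation
    _, pitch, _ = euler_from_quaternion([q.x, q.y, q.z, q.w])
    rospy.set_param(ODOM_NODE + "/camera_pitch", pitch)

def sonar_cb(msg):
    rospy.set_param(ODOM_NODE + "/camera_height", msg.range)

if __name__ == "__main__":
    rospy.init_node("odometer_param_updater")
    rospy.Subscriber("imu/data", Imu, imu_cb)
    rospy.Subscriber("sonar/range", Range, sonar_cb)
    rospy.spin()
```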
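
For point 5, the usual launch-file route is a <rosparam file="$(find your_pkg)/config/parameters.yaml" command="load" /> tag nested inside the odometer's <node> element, which loads the YAML into the node's private namespace at startup. If you would rather do it from a script, here is a hedged Python sketch that uploads the same YAML by hand; the file path and node name are placeholders.

```python
#!/usr/bin/env python
# Sketch: upload a YAML file of static parameters into the odometer's
# private namespace. "parameters.yaml" and "/mono_odometer" are placeholders.
import yaml
import rospy

if __name__ == "__main__":
    rospy.init_node("upload_viso2_params")
    with open("parameters.yaml") as f:
        params = yaml.safe_load(f)  # e.g. {"max_features": 2, "bucket_width": 50.0}
    for name, value in params.items():
        rospy.set_param("/mono_odometer/" + name, value)
```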