Hi @jimc91

I am going to try to answer your question, although I am sure there are community members here who know more about this particular topic than I do. So here we go:

  1. The sensor_node_pkg refers to the package containing your sensor drivers, i.e., whatever you need to produce sensor readings. For your particular setup, a lidar and a camera.
  2. The sensor_node_type refers to the node executable within that package, typically the ROS wrapper that takes the raw sensor readings and converts them into usable ROS msgs.
  3. The sensor_node_name is self-explanatory and refers to a unique name identifying each sensor node.
  4. And finally, the sensor_param refers to all the parameters needed to configure your sensor, which you may find in the sensor's official repository, or in already-made ROS wrappers for those sensors. Since you have a camera, you will also need to determine its intrinsic and extrinsic calibration parameters. A filled-in example follows this list.
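To make this concrete, here is a minimal sketch of how those placeholders get filled in for a lidar driver. I am assuming the hokuyo_node package purely as an illustration; your actual package, type, name and parameters will depend on the sensors you use:

    <launch>
      <!-- sensor_node_pkg = hokuyo_node, sensor_node_type = hokuyo_node -->
      <node pkg="hokuyo_node" type="hokuyo_node" name="laser_driver" output="screen">
        <!-- sensor_param: the frame in which the scans are published -->
        <param name="frame_id" value="laser" />
      </node>
    </launch>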

Then you have the odom_node_pkg, referring to any package or utility able to provide some sort of odometry information. You have multiple approaches for this:

  1. You can produce your own odometry estimate from your robot's velocity.
  2. You can use probabilistic localization with AMCL if you have a map.
  3. You can use Kalman filters to produce an accurate odometry output with robot_localization.
  4. There are more ways to produce odometry, but I just mentioned the ones I am most familiar with. Of course, since you have a camera you can also take advantage of the Visual Odometry paradigm, but I cannot say too much about that topic. A sketch of option 3 follows this list.
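As an illustration of option 3, here is a minimal sketch of a robot_localization EKF node fusing wheel odometry. The topic name /wheel_odometry and the parameter values are assumptions for the example, not something specific to your setup:

    <launch>
      <node pkg="robot_localization" type="ekf_localization_node" name="ekf_odom">
        <param name="frequency" value="30" />
        <param name="two_d_mode" value="true" />
        <param name="world_frame" value="odom" />
        <!-- odom0 is an assumed topic name for your wheel odometry -->
        <param name="odom0" value="/wheel_odometry" />
        <!-- fuse x, y and yaw from the wheel odometry message -->
        <rosparam param="odom0_config">
          [true,  true,  false,
           false, false, true,
           false, false, false,
           false, false, false,
           false, false, false]
        </rosparam>
      </node>
    </launch>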

Finally, the transform_configuration_pkg refers to having a proper tf tree relating all the robot frames you may have. If you have any doubts about tfs and frames, you can check this. The usual (and standard) tf tree is needed to know the relation between points in space expressed in any of the frames in which you work. So, for a basic mobile robot you will have something like:

world --> map --> odom --> base_footprint --> base_link --> wheels, sensors, ...

And this is used to operate properly in the frame in which you are navigating with your robot. From base_footprint onward, the tfs are usually static, so they can be loaded from the robot_description; the rest are not static and are usually produced by your odometry/localization nodes. For instance, AMCL is able to produce the transformation between map and odom, allowing you to localize in the map frame; robot_localization can be configured to produce several transforms as well; and even your own odometry node can have a transform broadcaster producing the odom --> base_footprint transform. A static transform example follows below.
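For the static part of the tree, a minimal sketch could look like the following. The frame names and offsets (a lidar 10 cm in front of and 20 cm above base_link) are just assumptions for illustration; in a real robot these usually come from your URDF via robot_state_publisher:

    <launch>
      <!-- args: x y z yaw pitch roll parent_frame child_frame period_in_ms -->
      <node pkg="tf" type="static_transform_publisher" name="base_link_to_laser"
            args="0.1 0.0 0.2 0 0 0 base_link laser 100" />
    </launch>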

So, to sum up, you will need sensor drivers to produce proper readings, ROS wrappers to use those sensors with ROS utilities, a good source of odometry information, and a proper tf tree to be able to localize yourself in the environment.

Hope that helps you understand these things. If anyone is willing to add more information about this, I will be glad to discuss it here.

Regards.