
uwleahcim's profile - activity

2019-09-19 07:47:37 -0500 received badge  Nice Answer (source)
2018-09-19 14:23:30 -0500 marked best answer listener lookupTwist producing wrong velocity for turns.

I'm trying to extract linear velocity from an object's tf frame using lookupTwist.

I simulate an object moving around a 400 m track, using simple line and circle equations to describe its position at time t at a fixed velocity.

Looking at the tf in rviz, it performs correctly and moves in a smooth oval.

Now when I try to extract velocity using lookupTwist, the linear velocities are all messed up. On the straights the linear velocities are correct, but in the turns the values skyrocket. For example, if I set my simulated object to move at 10 m/s, the reported velocity instantly shoots up to 50 m/s at the start of a turn, drops back down halfway through the turn, and shoots back up to 50 m/s just before the straightaway.

This is the code I'm using to extract the velocity from the tf. Is there something I should be aware of, or is my environment bugged?

geometry_msgs::Twist objectTwist;
tf::TransformListener listener;

// Twist between /map and /sim_object, averaged over the last 0.1 s
listener.lookupTwist("/map", "/sim_object", ros::Time(0), ros::Duration(0.1), objectTwist);

// Speed in the xy plane
cout << sqrt(objectTwist.linear.x * objectTwist.linear.x + objectTwist.linear.y * objectTwist.linear.y) << endl;
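
For context, one way to sanity-check what lookupTwist reports is to finite-difference two lookupTransform calls over the same window. Below is a minimal sketch of that idea, reusing the /map and /sim_object frames and the 0.1 s window from above (the helper function and its error handling are illustrative, not from the original code):

#include <ros/ros.h>
#include <tf/transform_listener.h>

// Illustrative sketch: estimate the object's speed by differencing two poses
// looked up 0.1 s apart. Assumes /map -> /sim_object is published continuously.
double estimateSpeed(tf::TransformListener& listener)
{
  const ros::Duration window(0.1);
  tf::StampedTransform now_tf, past_tf;
  try
  {
    ros::Time now = ros::Time::now();
    listener.waitForTransform("/map", "/sim_object", now, ros::Duration(0.5));
    listener.lookupTransform("/map", "/sim_object", now, now_tf);
    listener.lookupTransform("/map", "/sim_object", now - window, past_tf);
  }
  catch (const tf::TransformException& ex)
  {
    ROS_WARN("%s", ex.what());
    return 0.0;
  }
  tf::Vector3 delta = now_tf.getOrigin() - past_tf.getOrigin();
  return delta.length() / window.toSec();  // translational speed in m/s
}

If this finite-difference estimate stays near 10 m/s through the turns while lookupTwist spikes, the problem is in how the twist is being extracted rather than in the simulated motion.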
2018-09-19 14:21:52 -0500 marked best answer ROS network issues

I have two computers running ROS and I'm trying to connect them with an Ethernet cable. Both machines are also connected to the Internet via a wireless connection.

An Intel NUC is the master at 10.0.0.2, and a Tegra K1 is the client at 10.0.0.3.

NUC (Master at 10.0.0.2)

export ROS_IP=10.0.0.2
export ROS_HOSTNAME=10.0.0.2
export ROS_MASTER_URI=http://10.0.0.2:11311
roscore

Tegra (Client at 10.0.0.3)

export ROS_IP=10.0.0.3
export ROS_HOSTNAME=10.0.0.3
export ROS_MASTER_URI=http://10.0.0.2:11311
roscore

roscore output from Master

... logging to /home/ubuntu/.ros/log/e8292188-4152-11e6-9eb6-00c2c6b4df39/roslaunch-nuc-16093.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://10.0.0.2:40088/
ros_comm version 1.12.2


SUMMARY
========

PARAMETERS
 * /rosdistro: kinetic
 * /rosversion: 1.12.2

NODES

auto-starting new master
process[master]: started with pid [16104]
ROS_MASTER_URI=http://10.0.0.2:11311/

setting /run_id to e8292188-4152-11e6-9eb6-00c2c6b4df39
process[rosout-1]: started with pid [16119]
started core service [/rosout]

roscore output from Client

... logging to /home/ubuntu/.ros/log/014001fe-4154-11e6-85b6-60601f199a6b/roslaunch-tegra-ubuntu-3583.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://10.0.0.3:35762/
ros_comm version 1.11.19


SUMMARY
========

PARAMETERS
 * /rosdistro: indigo
 * /rosversion: 1.11.19

NODES

WARNING: ROS_MASTER_URI [http://10.0.0.2:11311] host is not set to this machine
auto-starting new master
process[master]: started with pid [3594]
ROS_MASTER_URI=http://10.0.0.3:11311/

setting /run_id to 014001fe-4154-11e6-85b6-60601f199a6b
process[rosout-1]: started with pid [3607]
started core service [/rosout]

I have multiple network interfaces on both machines.

ifconfig output from Master

eno1      Link encap:Ethernet  HWaddr b8:ae:ed:e9:81:d9  
          inet addr:10.0.0.2  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::d43d:9fd2:6b7e:6161/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1070691 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2245 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1478861461 (1.4 GB)  TX bytes:234854 (234.8 KB)
          Interrupt:16 Memory:df200000-df220000 

enx00e08f007010 Link encap:Ethernet  HWaddr 00:e0:8f:00:70:10  
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:93462668 errors:0 dropped:0 overruns:0 frame:0
          TX packets:93462668 errors:0 dropped ...
(more)
2018-05-15 01:00:00 -0500 received badge  Self-Learner (source)
2017-11-28 12:55:09 -0500 marked best answer invert a coordinate axis with tf

The coordinate frame used by my robot and its bundled API is

x -> forward
y -> right
z -> down

I want to map it such that my base_link frame is the following

x -> forward
y -> right
z -> up

(I think this is the most common base_link setup)

tf::Transform transform;
//setup transform

tf_broadcaster.sendTransform(tf::StampedTransform(transform, time, "base_link", "my_robot"));

How do I set up tf::Transform to invert the z-axis? Do I use a 3x3 matrix to set up my transform to invert the axis? Is it possible to use the following 3x3 matrix for the transformation? (I tried using .setBasis but got a bunch of tf warnings.)

[[1,0,0],
 [0,1,0],
 [0,0,-1]]
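
For reference, here is a minimal sketch of the .setBasis attempt described above (the frame names and the matrix are the ones from this question). Note that this basis has determinant -1, i.e. it is a reflection rather than a proper rotation, which is a plausible source of the tf warnings:

#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

// Illustrative sketch of the attempt described in the question: set the
// transform's basis to a z-flipping matrix. Its determinant is -1 (a
// reflection, not a rotation), so tf's quaternion handling may complain.
void broadcastFlippedZ(tf::TransformBroadcaster& tf_broadcaster, const ros::Time& time)
{
  tf::Transform transform;
  transform.setOrigin(tf::Vector3(0.0, 0.0, 0.0));
  transform.setBasis(tf::Matrix3x3(1, 0, 0,
                                   0, 1, 0,
                                   0, 0, -1));  // flips the z axis
  tf_broadcaster.sendTransform(
      tf::StampedTransform(transform, time, "base_link", "my_robot"));
}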
2017-11-28 12:54:44 -0500 marked best answer tf broadcaster every frame?

I have a node that reads data from my robot, so I have the node also broadcast tf frames. Now my robot has multiple joints and thus multiple frames. Should I declare a separate tf::TransformBroadcaster in my code for each frame, or just reuse one tf::TransformBroadcaster? If both methods work, which one is the better programming practice?

Example

tf::TransformBroadcaster frame1_broadcaster;
tf::TransformBroadcaster frame2_broadcaster;
tf::TransformBroadcaster frame3_broadcaster;
...
frame1_broadcaster.sendTransform(tf::StampedTransform(transform, current_time, "base_link", "frame1"));
frame2_broadcaster.sendTransform(tf::StampedTransform(transform, current_time, "base_link", "frame2"));
frame3_broadcaster.sendTransform(tf::StampedTransform(transform, current_time, "base_link", "frame3"));

Or can I just reuse the same tf::TransformBroadcaster to publish transformations for multiple frames like this

tf::TransformBroadcaster frame_broadcaster;
...
frame_broadcaster.sendTransform(tf::StampedTransform(transform, current_time, "base_link", "frame1"));
frame_broadcaster.sendTransform(tf::StampedTransform(transform, current_time, "base_link", "frame2"));
frame_broadcaster.sendTransform(tf::StampedTransform(transform, current_time, "base_link", "frame3"));
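
For reference, a minimal sketch of the single-broadcaster variant, assuming tf's std::vector<StampedTransform> overload of sendTransform; the frame names mirror the snippets above:

#include <ros/ros.h>
#include <tf/transform_broadcaster.h>
#include <vector>

// Illustrative sketch: publish several child frames through one broadcaster.
// sendTransform also accepts a vector, so all frames can go out in one call.
void publishFrames(tf::TransformBroadcaster& frame_broadcaster,
                   const tf::Transform& transform, const ros::Time& current_time)
{
  std::vector<tf::StampedTransform> transforms;
  transforms.push_back(tf::StampedTransform(transform, current_time, "base_link", "frame1"));
  transforms.push_back(tf::StampedTransform(transform, current_time, "base_link", "frame2"));
  transforms.push_back(tf::StampedTransform(transform, current_time, "base_link", "frame3"));
  frame_broadcaster.sendTransform(transforms);
}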
2017-11-06 00:55:07 -0500 received badge  Favorite Question (source)
2017-09-13 15:32:04 -0500 marked best answer image_view stereo_view don't show image or disparity (ROS Kinetic)

I'm running ROS Kinetic Kame on Ubuntu 16.04. I have two 320x240 grayscale cameras, with camera_info generated by running camera_calibration.

I have the following topics

/camera/front/left/image_raw
/camera/front/right/image_raw
/camera/front/left/camera_info
/camera/front/right/camera_info

I then generate the disparity map using

ROS_NAMESPACE=/camera/front rosrun stereo_image_proc stereo_image_proc

This generates a bunch of topics

/camera/front/disparity
/camera/front/left/image_color
/camera/front/left/image_color/compressed
/camera/front/left/image_color/compressed/parameter_descriptions
/camera/front/left/image_color/compressed/parameter_updates
/camera/front/left/image_color/compressedDepth
/camera/front/left/image_color/compressedDepth/parameter_descriptions
/camera/front/left/image_color/compressedDepth/parameter_updates
/camera/front/left/image_color/theora
/camera/front/left/image_color/theora/parameter_descriptions
/camera/front/left/image_color/theora/parameter_updates
/camera/front/left/image_mono
/camera/front/left/image_mono/compressed
/camera/front/left/image_mono/compressed/parameter_descriptions
/camera/front/left/image_mono/compressed/parameter_updates
/camera/front/left/image_mono/compressedDepth
/camera/front/left/image_mono/compressedDepth/parameter_descriptions
/camera/front/left/image_mono/compressedDepth/parameter_updates
/camera/front/left/image_mono/theora
/camera/front/left/image_mono/theora/parameter_descriptions
/camera/front/left/image_mono/theora/parameter_updates
/camera/front/left/image_rect
/camera/front/left/image_rect/compressed
/camera/front/left/image_rect/compressed/parameter_descriptions
/camera/front/left/image_rect/compressed/parameter_updates
/camera/front/left/image_rect/compressedDepth
/camera/front/left/image_rect/compressedDepth/parameter_descriptions
/camera/front/left/image_rect/compressedDepth/parameter_updates
/camera/front/left/image_rect/theora
/camera/front/left/image_rect/theora/parameter_descriptions
/camera/front/left/image_rect/theora/parameter_updates
/camera/front/left/image_rect_color
/camera/front/left/image_rect_color/compressed
/camera/front/left/image_rect_color/compressed/parameter_descriptions
/camera/front/left/image_rect_color/compressed/parameter_updates
/camera/front/left/image_rect_color/compressedDepth
/camera/front/left/image_rect_color/compressedDepth/parameter_descriptions
/camera/front/left/image_rect_color/compressedDepth/parameter_updates
/camera/front/left/image_rect_color/theora
/camera/front/left/image_rect_color/theora/parameter_descriptions
/camera/front/left/image_rect_color/theora/parameter_updates
/camera/front/points2
/camera/front/right/image_color
/camera/front/right/image_color/compressed
/camera/front/right/image_color/compressed/parameter_descriptions
/camera/front/right/image_color/compressed/parameter_updates
/camera/front/right/image_color/compressedDepth
/camera/front/right/image_color/compressedDepth/parameter_descriptions
/camera/front/right/image_color/compressedDepth/parameter_updates
/camera/front/right/image_color/theora
/camera/front/right/image_color/theora/parameter_descriptions
/camera/front/right/image_color/theora/parameter_updates
/camera/front/right/image_mono
/camera/front/right/image_mono/compressed
/camera/front/right/image_mono/compressed/parameter_descriptions
/camera/front/right/image_mono/compressed/parameter_updates
/camera/front/right/image_mono/compressedDepth
/camera/front/right/image_mono/compressedDepth/parameter_descriptions
/camera/front/right/image_mono/compressedDepth/parameter_updates
/camera/front/right/image_mono/theora
/camera/front/right/image_mono/theora/parameter_descriptions
/camera/front/right/image_mono/theora/parameter_updates
/camera/front/right/image_rect
/camera/front/right/image_rect/compressed
/camera/front/right/image_rect/compressed/parameter_descriptions
/camera/front/right/image_rect/compressed/parameter_updates
/camera/front/right/image_rect/compressedDepth
/camera/front/right/image_rect/compressedDepth/parameter_descriptions
/camera/front/right/image_rect/compressedDepth/parameter_updates
/camera/front/right/image_rect/theora
/camera/front/right/image_rect/theora/parameter_descriptions
/camera/front/right/image_rect/theora/parameter_updates
/camera/front/right/image_rect_color
/camera/front/right/image_rect_color/compressed
/camera/front/right/image_rect_color/compressed/parameter_descriptions
/camera/front/right/image_rect_color/compressed/parameter_updates
/camera/front/right/image_rect_color/compressedDepth
/camera/front/right/image_rect_color/compressedDepth/parameter_descriptions
/camera/front/right/image_rect_color/compressedDepth/parameter_updates
/camera/front/right/image_rect_color/theora
/camera/front/right/image_rect_color ...
(more)
2017-08-11 17:39:13 -0500 marked best answer multiple pointcloud2 topics for Navigation Stack with teb_local_planner

I'm trying to set up move_base to work with a quadrotor that has four stereo camera sets under four different namespaces (/front, /right, /back, and /left).

I'm currently using stereo_image_proc, which generates four separate PointCloud2 topics:

/front/points2
/right/points2
/back/points2
/left/points2

I'd like to use all four topics for obstacle detection in my navigation stack with the teb_local_planner. Can the navigation stack handle multiple point clouds, or should I merge them into one topic?

Does the following costmap setup look correct?

costmap_common_parameters.yaml

robot_radius: 0.5

transform_tolerance: 0.2
map_type: costmap

obstacle_layer:
 enabled: true
 obstacle_range: 3.0
 raytrace_range: 4.0
 max_obstacle_height: 2.5 # I have it set just below door height
 min_obstacle_height: 1.5  # I have it set above my min flight height
 inflation_radius: 0.2
 track_unknown_space: true
 combination_method: 1

observation_sources: point1 point2 point3 point4 laser1 laser2

point1: {sensor_frame: front_camera, data_type: PointCloud, topic: /front/points2, marking: true, clearing: true}
point2: {sensor_frame: right_camera, data_type: PointCloud, topic: /right/points2, marking: true, clearing: true}
point3: {sensor_frame: back_camera,  data_type: PointCloud, topic: /back/points2,  marking: true, clearing: true}
point4: {sensor_frame: left_camera,  data_type: PointCloud, topic: /left/points2,  marking: true, clearing: true}
laser1: {sensor_frame: base_link,  data_type: LaserScan, topic: /scan1,  marking: true, clearing: true}
laser2: {sensor_frame: base_link,  data_type: LaserScan, topic: /scan2,  marking: true, clearing: true}

inflation_layer:
  enabled:              true
  cost_scaling_factor:  10.0  # exponential rate at which the obstacle cost drops off (default: 10)
  inflation_radius:     0.5  # max. distance from an obstacle at which costs are incurred for planning paths.

global_costmap_params.yaml

global_costmap:
  global_frame: /map
  robot_base_frame: base_link
  update_frequency: 1.0
  publish_frequency: 0.5
  static_map: true

  transform_tolerance: 0.5
  plugins:
    - {name: static_layer,            type: "costmap_2d::StaticLayer"}
    - {name: obstacle_layer,          type: "costmap_2d::VoxelLayer"}
    - {name: inflation_layer,         type: "costmap_2d::InflationLayer"}

local_costmap_params.yaml

local_costmap:
  global_frame: /map
  robot_base_frame: base_link
  update_frequency: 5.0
  publish_frequency: 2.0
  static_map: false
  rolling_window: true
  width: 5.5
  height: 5.5
  resolution: 0.1
  transform_tolerance: 0.5

  plugins:
   - {name: static_layer,        type: "costmap_2d::StaticLayer"}
   - {name: obstacle_layer,      type: "costmap_2d::ObstacleLayer"}

Since my robot is a quadrotor, it also has omni-directional movement like a holonomic robot, so I need a planner setup that allows y movements as well.

teb_local_planner_params.yaml

TebLocalPlannerROS:

 # Trajectory

 teb_autosize: True
 dt_ref: 0.3
 dt_hysteresis: 0.1
 global_plan_overwrite_orientation: True
 max_global_plan_lookahead_dist: 3.0
 feasibility_check_no_poses: 5

 # Robot

 max_vel_x: 3.0
 max_vel_x_backwards: 3.0
 max_vel_y: 3.0
 max_vel_theta: 1.5
 acc_lim_x: 3.0
 acc_lim_y: 3.0
 acc_lim_theta: 3.0
 min_turning_radius: 0.0 # omni-drive robot (can turn on place!)

 footprint_model:
   type: "point"

 # GoalTolerance

 xy_goal_tolerance: 0.2
 yaw_goal_tolerance: 0.1
 free_goal_vel: False

 # Obstacles

 min_obstacle_dist: 0.7 # This value must also include our robot radius, since footprint_model is set to "point".
 include_costmap_obstacles: True
 costmap_obstacles_behind_robot_dist: 1.0
 obstacle_poses_affected: 30
 costmap_converter_plugin: ""
 costmap_converter_spin_thread: True
 costmap_converter_rate: 5

 # Optimization

 no_inner_iterations: 5
 no_outer_iterations: 4
 optimization_activate: True
 optimization_verbose: False
 penalty_epsilon: 0.1
 weight_max_vel_x: 2
 weight_max_vel_y: 2
 weight_max_vel_theta: 1
 weight_acc_lim_x: 1
 weight_acc_lim_y: 1
 weight_acc_lim_theta: 1
 weight_kinematics_nh: 1 # WE HAVE A HOLONOMIC ROBOT, JUST ADD A SMALL PENALTY
 weight_kinematics_forward_drive: 1
 weight_kinematics_turning_radius: 1
 weight_optimaltime: 1
 weight_obstacle: 50

 # Homotopy Class Planner

 enable_homotopy_class_planning: True
 enable_multithreading: True
 simple_exploration: False
 max_number_classes: 4
 selection_cost_hysteresis: 1.0
 selection_obst_cost_scale: 1.0 ...
(more)
2017-06-22 17:59:06 -0500 received badge  Famous Question (source)
2017-03-24 11:06:56 -0500 received badge  Famous Question (source)
2017-03-07 11:32:55 -0500 received badge  Famous Question (source)
2016-12-21 16:35:03 -0500 received badge  Notable Question (source)
2016-12-21 16:35:03 -0500 received badge  Popular Question (source)
2016-12-01 16:40:34 -0500 received badge  Famous Question (source)
2016-11-10 11:12:44 -0500 received badge  Notable Question (source)
2016-11-10 11:12:44 -0500 received badge  Popular Question (source)
2016-10-05 04:58:29 -0500 received badge  Famous Question (source)
2016-09-09 05:44:21 -0500 received badge  Famous Question (source)
2016-08-17 11:28:07 -0500 received badge  Famous Question (source)
2016-08-11 01:20:38 -0500 received badge  Notable Question (source)
2016-08-04 14:36:08 -0500 received badge  Famous Question (source)
2016-08-02 15:00:30 -0500 commented answer image_view stereo_view don't show image or disparity (ROS Kinetic)

The disparity image generated by stereo_image_proc is a floating-point image following the ROS guidelines, so it is very simple and straightforward to convert into a depth map. The math is just

depth = baseline * focal_length / disparity

(don't have space to post full code).
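
For reference, a minimal sketch of that conversion, assuming a stereo_msgs::DisparityImage input (it carries the focal length f and baseline T alongside the 32FC1 disparity image) and cv_bridge for pixel access; the function name is illustrative:

#include <stereo_msgs/DisparityImage.h>
#include <sensor_msgs/image_encodings.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/core/core.hpp>

// Illustrative sketch: convert a DisparityImage into a depth map (meters)
// using depth = f * T / disparity, skipping non-positive disparities.
cv::Mat disparityToDepth(const stereo_msgs::DisparityImage& msg)
{
  cv_bridge::CvImageConstPtr cv_disp =
      cv_bridge::toCvCopy(msg.image, sensor_msgs::image_encodings::TYPE_32FC1);

  cv::Mat depth(cv_disp->image.rows, cv_disp->image.cols, CV_32FC1, cv::Scalar(0));
  for (int r = 0; r < cv_disp->image.rows; ++r)
  {
    for (int c = 0; c < cv_disp->image.cols; ++c)
    {
      float d = cv_disp->image.at<float>(r, c);
      if (d > 0.0f)
        depth.at<float>(r, c) = msg.f * msg.T / d;  // focal length * baseline / disparity
    }
  }
  return depth;
}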

2016-07-28 13:51:46 -0500 received badge  Teacher (source)
2016-07-28 12:11:09 -0500 received badge  Famous Question (source)
2016-07-21 07:43:16 -0500 commented answer image_view stereo_view don't show image or disparity (ROS Kinetic)

I haven't tried displaying the disparity topic with image_view stereo_view. I did convert the disparity topics into a depth map using the camera_info from the right camera (it has the baseline in the projection matrix). From there I was able to display the depth map via rqt_image_view.

2016-07-21 06:44:30 -0500 received badge  Notable Question (source)
2016-07-13 00:53:16 -0500 received badge  Notable Question (source)
2016-07-12 20:38:35 -0500 received badge  Popular Question (source)
2016-07-12 20:37:29 -0500 received badge  Associate Editor (source)
2016-07-12 20:36:07 -0500 answered a question Point clouds not generating obstacles in costmap

Someone finally pointed out that my observation_sources were not in the right namespace. The correct YAML format is the following; you have to make sure the observation_sources are under the obstacle_layer namespace, like below.

obstacle_layer:  # I was missing this line
  observation_sources: sensorA sensorB sensorC
  sensorA: {}
  sensorB: {}
  sensorC: {}
2016-07-12 19:51:21 -0500 commented answer multiple pointcloud2 topics for Navigation Stack with teb_local_planner

Yes, I'm currently planning on performing 2D navigation for now.

As for the namespace issue, how do I go about making the observation_sources part of the obstacle_layer param namespace? According to link two, I should put the observation sources under the plugins, correct? Update: adding "obstacle_layer:" works.

2016-07-11 05:37:50 -0500 received badge  Popular Question (source)
2016-07-11 04:32:01 -0500 commented question Point clouds not generating obstacles in costmap

Well, my sensors are positioned at -0.09 from the base_link frame on the z-axis, so they start out literally below the z=0 plane. I tried using negative numbers, but it didn't really fix it. I'm going to post my configuration files.

2016-07-10 19:15:10 -0500 asked a question Point clouds not generating obstacles in costmap

I'm having issues trying to generate obstacles for obstacle avoidance via the navigation stack. I'm using point cloud sensors.

In my costmap_common_params.yaml I have

observation_sources: point_cloud_sensor

point_cloud_sensor: 
  data_type: PointCloud2
  topic: /points2
  marking: true
  clearing: true
  min_obstacle_height: 0.0
  max_obstacle_height: 2.5
  sensor_frame: depth_left_optical

For local_costmap_params.yaml I have the obstacle plugins

  plugins:
    - {name: obstacle_layer,   type: "costmap_2d::ObstacleLayer"}
    - {name: inflation_layer,  type: "costmap_2d::InflationLayer"}

I once got something to generate in the local costmap, but ever since then I haven't been able to get any obstacles from my point cloud sources. Any ideas why?

costmap_common_params.yaml

#---standard pioneer footprint---
#---(in meters)---
robot_radius: 0.55

transform_tolerance: 0.2
map_type: costmap

obstacle_layer:
 enabled: true
 obstacle_range: 3.0
 raytrace_range: 5.0
 max_obstacle_height:    20.0            # 2.0 m set to higher than flight height
 min_obstacle_height:    -20.0            # set just below flight height
 inflation_radius: 0.7
 track_unknown_space: true
 combination_method: 1


observation_sources: laser_scan_sensor point_cloud_sensorA point_cloud_sensorB point_cloud_sensorC point_cloud_sensorD
laser_scan_sensor: {data_type: LaserScan, topic: '/scan', marking: true, clearing: true}
point_cloud_sensorA: {data_type: PointCloud2, topic: '/guidance/front/points2', marking: true, clearing: true}
point_cloud_sensorB: {data_type: PointCloud2, topic: '/guidance/right/points2', marking: true, clearing: true}
point_cloud_sensorC: {data_type: PointCloud2, topic: '/guidance/back/points2', marking: true, clearing: true}
point_cloud_sensorD: {data_type: PointCloud2, topic: '/guidance/left/points2', marking: true, clearing: true}

# observation_sources: laser_scan_sensor point_cloud_sensor
# laser_scan_sensor: {data_type: LaserScan, topic: scan, marking: true, clearing: true}


inflation_layer:
  enabled:              true
  cost_scaling_factor:  10.0  # exponential rate at which the obstacle cost drops off (default: 10)
  inflation_radius:     0.7  # max. distance from an obstacle at which costs are incurred for planning paths.

#static_layer:
#  enabled:              true
#  map_topic:            "/map"

global_costmap_params.yaml

global_costmap:
  global_frame: /map
  robot_base_frame: /base_link
  update_frequency: 5.0
  publish_frequency: 2.0
  static_map: true

  transform_tolerance: 0.5
  plugins:
    - {name: static_layer,            type: "costmap_2d::StaticLayer"}
    - {name: obstacle_layer,          type: "costmap_2d::ObstacleLayer"}
    - {name: inflation_layer,         type: "costmap_2d::InflationLayer"}

local_costmap_params.yaml

local_costmap:
  global_frame: /odom
  robot_base_frame: /base_link
  update_frequency: 10.0
  publish_frequency: 10.0
  static_map: false
  rolling_window: true
  width: 8.0
  height: 8.0
  resolution: 0.1
  transform_tolerance: 0.5

  plugins:
   - {name: static_layer,        type: "costmap_2d::StaticLayer"}
   - {name: obstacle_layer,      type: "costmap_2d::ObstacleLayer"}
   - {name: inflation_layer,     type: "costmap_2d::InflationLayer"}

base_local_planner_params.yaml

TebLocalPlannerROS:

 # Trajectory

 teb_autosize: True
 dt_ref: 0.3
 dt_hysteresis: 0.1
 global_plan_overwrite_orientation: True
 max_global_plan_lookahead_dist: 3.0
 feasibility_check_no_poses: 4

 # Robot

 max_vel_x: 10.0
 max_vel_x_backwards: 4.0
 max_vel_y: 10.0
 max_vel_theta: 1.0
 acc_lim_x: 4.0
 acc_lim_y: 4.0
 acc_lim_theta: 1.0
 min_turning_radius: 0.0
 footprint_model: "circular"   
# types: "point", "circular", "two_circles", "line", "polygon"
   radius: 0.6 # for type "circular"
#   line_start: [-0.3, 0.0] # for type "line"
#   line_end: [0.3, 0.0] # for type "line"
#   front_offset: 0.2 # for type "two_circles"
#   front_radius: 0.2 # for type "two_circles"
#   rear_offset: 0.2 # for type "two_circles"
#   rear_radius: 0.2 # for type "two_circles"
#   vertices: [ [0.25, -0.05], [0.18, -0.05], [0.18, -0.18], [-0.19, -0.18], [-0.25, 0], [-0.19, 0.18], [0.18, 0.18], [0.18, 0.05], [0.25, 0.05] ] # for type "polygon"

 # GoalTolerance

 xy_goal_tolerance: 0.2
 yaw_goal_tolerance: 0.1
 free_goal_vel: False

 # Obstacles

 min_obstacle_dist: 0.6
 include_costmap_obstacles: True
 costmap_obstacles_behind_robot_dist: 1.0
 obstacle_poses_affected: 30
# costmap_converter_plugin: ""
# costmap_converter_spin_thread: True
# costmap_converter_rate: 5

 # Optimization

 no_inner_iterations: 5
 no_outer_iterations: 4
 optimization_activate ...
(more)
2016-07-06 01:56:10 -0500 received badge  Notable Question (source)
2016-07-04 20:15:14 -0500 received badge  Notable Question (source)
2016-07-04 08:42:59 -0500 commented answer ROS network issues

I'm using a Tegra K1 and an Intel NUC. Both have 1 Gb Ethernet. Both have WiFi interfaces, which are connected to the Internet and operate on the 192.168.2.xx 255.255.255.0 network (all info is in the ifconfig output above). Anyway, I got the system to work by disabling WiFi, rebooting, and deleting and re-adding the connections.

2016-07-03 20:08:19 -0500 received badge  Popular Question (source)
2016-07-03 19:29:54 -0500 received badge  Supporter (source)
2016-07-03 19:29:35 -0500 commented answer ROS network issues

Now that I have the machines talking to each other, I've noticed that the performance of passing topics is asymmetrical. My client has no problem streaming topics from the master, but the other way around I constantly lose messages and images just stall out.