
Naman's profile - activity

2019-09-17 04:57:01 -0500 received badge  Favorite Question (source)
2019-09-13 16:56:49 -0500 marked best answer using laser_filters to convert inf readings to max_range + 1 from hokuyo

Hi all,

I have a Hokuyo laser sensor and I am using hokuyo_node to get the LaserScan. Then I am using a LaserScanRangeFilter to convert readings to max_range + 1 wherever hokuyo_node reports inf. The problem is that instead of max_range + 1, all the readings that were inf come out as NaN after running the filter. The launch file:

<launch>
    <node pkg="hokuyo_node" type="hokuyo_node" name="Laser_node" output="screen">
        <param name="port" value="/dev/ttyACM1"/>
        <param name="angle_min" value="-1.5707963"/>
        <param name="angle_max" value="1.5707963"/>
    </node>
    <node pkg="laser_filters" type="scan_to_scan_filter_chain" name="laser_filter">
            <rosparam command="load" file="$(find starter_node)/move_base_config/hokuyo_config.yaml" /> 
        <remap from="base_scan" to="scan" />
        </node>
</launch>

hokuyo_config.yaml

scan_filter_chain:
- name: range
  type: LaserScanRangeFilter
  params:
    lower_threshold: 0.02
    upper_threshold: 10

The output of rosparam list:

/Laser_node/allow_unsafe_settings
/Laser_node/angle_max
/Laser_node/angle_min
/Laser_node/calibrate_time
/Laser_node/cluster
/Laser_node/frame_id
/Laser_node/intensity
/Laser_node/max_ang
/Laser_node/max_ang_limit
/Laser_node/max_range
/Laser_node/min_ang
/Laser_node/min_ang_limit
/Laser_node/min_range
/Laser_node/port
/Laser_node/skip
/Laser_node/time_offset
/laser_filter/scan_filter_chain
/rosdistro
/roslaunch/uris/host_namankumar_inspiron_5521__45829
/rosversion
/run_id

I am not able to figure out what is causing this. Does anyone have any idea how to solve this issue?
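If the filter itself cannot be made to output max_range + 1, one workaround is a small relay node that clamps non-finite readings before passing the scan on. This is only a minimal sketch; the node name and the scan/scan_clamped topics are assumptions, not part of laser_filters:

#!/usr/bin/env python
# Republish the incoming LaserScan with inf/NaN readings replaced by
# range_max + 1. Topic names are assumptions.
import math
import rospy
from sensor_msgs.msg import LaserScan

def scan_cb(scan):
    scan.ranges = [r if not (math.isinf(r) or math.isnan(r)) else scan.range_max + 1.0
                   for r in scan.ranges]
    pub.publish(scan)

if __name__ == '__main__':
    rospy.init_node('inf_to_max_range')
    pub = rospy.Publisher('scan_clamped', LaserScan, queue_size=1)
    rospy.Subscriber('scan', LaserScan, scan_cb)
    rospy.spin()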

Thanks in advance.
Naman Kumar

2019-02-21 10:18:04 -0500 marked best answer following a boustrophedon path in a given room with obstacles

Hi all,

I have a mobile robot navigating around a room, and I already have the map of the room. I am using the ROS navigation_stack, rotary encoders for odometry, and robot_pose_ekf to fuse the encoder and IMU data. I am using amcl for localization and move_base for planning. Now I have to write a complete coverage path planning algorithm, and I am following this paper. What is the best way to generate a boustrophedon path (simple forward and backward motions) in a cell (rectangular, trapezoidal, etc.) with no obstacles? If someone can suggest how to implement it in ROS, that would be great.
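For the simple rectangular case, the sweep can be generated directly from the cell bounds; each waypoint could then be sent as a move_base goal. Below is a minimal, ROS-independent sketch (the cell corners and row spacing are made-up example values):

# Generate boustrophedon (back-and-forth) waypoints inside an
# axis-aligned rectangular cell.
def boustrophedon_waypoints(x_min, x_max, y_min, y_max, spacing):
    """Return (x, y) waypoints sweeping the rectangle row by row."""
    waypoints = []
    y = y_min
    left_to_right = True
    while y <= y_max:
        if left_to_right:
            waypoints.append((x_min, y))
            waypoints.append((x_max, y))
        else:
            waypoints.append((x_max, y))
            waypoints.append((x_min, y))
        left_to_right = not left_to_right
        y += spacing
    return waypoints

# Example: a 4 m x 3 m cell swept with rows 0.5 m apart.
print(boustrophedon_waypoints(0.0, 4.0, 0.0, 3.0, 0.5))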

Update:
In cases like the one shown here (taken from here):
image description

To come up with divisions in the 2nd or 3rd cell (center top or center bottom), I don't know whether knowing all the corner points is enough (I might be wrong) or whether we need all the boundary points (and if so, I am not sure how exactly to find them). Does anyone have any idea how to generate a boustrophedon path in a cell like this?

Please let me know if you need more information from me. Any help will be appreciated.

Thanks in advance.
Naman Kumar

2019-01-22 08:04:58 -0500 received badge  Good Question (source)
2018-12-03 02:16:53 -0500 received badge  Famous Question (source)
2018-12-01 14:54:02 -0500 received badge  Good Question (source)
2018-11-19 13:41:03 -0500 received badge  Favorite Question (source)
2018-09-19 14:24:48 -0500 marked best answer Wall following using hokuyo lidar and sharp IR sensors

Hi all,

I have a mobile robot and I would like it to follow the walls of a room; I already have a map of the room. I am using wheel encoders for odometry, and robot_pose_ekf for fusing the data from the wheel encoders and IMU and for getting the odom_combined -> base_footprint transformation. I have a Hokuyo lidar for localization and obstacle avoidance, and I am also using a Kinect to see obstacles which cannot be seen by the Hokuyo. I am using amcl for localization. I have a couple of Sharp sensors on the side for wall following. I am not planning to use the global or local costmap, because the localization of the robot is not perfect and the robot might think it is closer to (or further away from) the wall than it actually is, so wall following might fail. Instead, I plan to use only the data from the Hokuyo lidar and the Sharp sensors to do wall following and maintain a constant distance from the wall (say 10 cm). Now, I would like to know what the best technique is for doing wall following in this manner. Also, how can one deal with open gaps in the wall (like open doors) while doing wall following with this approach?

I know this is a very general question but any suggestions regarding it will be appreciated. Please let me know if you need more information from me.

Update:
I am just trying to do wall following in a given room (I have the vertices of the room in a global reference frame). For example, let's say I have a map of a room (shown below). I want the robot to follow the wall very closely (say 10 cm from the wall). Also, if there is an open space (on the bottom left), the robot should not go into the adjacent room but should keep following the wall of the given room (for this, I have the boundary limits of the room, which I can use to make sure the robot stays inside it). The approach I am thinking of is to come up with an initial global path (a set of points close to the wall) for wall following and then make the robot go from one point to the next while always maintaining a certain distance from the wall. If there is no wall, the robot can just follow the global path (assuming localization is good). I am not sure about its implementation complexity and whether there is a better algorithm or approach for something like this; a control sketch follows the image below.

image description
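As an illustration of the kind of controller this approach needs, here is a minimal proportional wall-following sketch driven by a single side-mounted range reading. The topic names (sharp_right, cmd_vel), the gain, the speed and the desired distance are all assumptions, not part of any existing package:

#!/usr/bin/env python
# Keep a fixed distance from a wall using one side-facing range sensor.
import rospy
from sensor_msgs.msg import Range
from geometry_msgs.msg import Twist

DESIRED_DIST = 0.10   # desired distance from the wall [m]
KP = 2.0              # proportional steering gain (assumption)
FORWARD_SPEED = 0.15  # constant forward speed [m/s] (assumption)

def range_cb(msg):
    cmd = Twist()
    cmd.linear.x = FORWARD_SPEED
    # Steer to reduce the distance error; the sign depends on which side
    # of the robot the wall is on.
    cmd.angular.z = -KP * (msg.range - DESIRED_DIST)
    cmd_pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('wall_follower')
    cmd_pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
    rospy.Subscriber('sharp_right', Range, range_cb)
    rospy.spin()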

Thanks in advance.
Naman Kumar

2018-07-19 10:04:07 -0500 marked best answer Using robot_localization with amcl

Hi all,

I am starting to set up the awesome robot_localization package. I am using rotary encoders (nav_msgs/Odometry, both pose and twist) and an IMU (sensor_msgs/Imu), which provide continuous data, plus an ultra-wideband (UWB) sensor, which provides global absolute position data as a nav_msgs/Odometry message (pose only: x, y and yaw). As suggested on the wiki page of the package, I can fuse the encoder and IMU data in the odom_frame, which gives the odom_frame -> base_link_frame transform. For the map_frame, I can fuse data from the rotary encoders, IMU and UWB sensor (it is similar to GPS in that it provides global absolute position data), which gives the map_frame -> odom_frame transform. But I am also using amcl to improve the pose estimate, and amcl also provides the map_frame -> odom_frame transformation. So how should I deal with the transformations (specifically map_frame -> odom_frame)? Should I just not use the map_frame -> odom_frame transformation from robot_localization and let amcl take care of it? If so, how can I disable that transformation? Or is there a better way to deal with it?

Update 1:
If I give the output of AMCL and the UWB to a second instance of ekf_localization_node, it will give me the required transformation (map_frame -> odom_frame) and will publish the output odometry message on the odometry/filtered topic. But I am using move_base, which expects the robot pose on the /amcl_pose topic. So I could remap the output on odometry/filtered to amcl_pose and change the message type from nav_msgs/Odometry to geometry_msgs/PoseWithCovarianceStamped; does that sound okay?
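A relay node for that type conversion could look like the sketch below. This is only an illustration (the robot_pose output topic is a made-up name, and the node is not part of robot_localization or move_base):

#!/usr/bin/env python
# Convert nav_msgs/Odometry from robot_localization into a
# geometry_msgs/PoseWithCovarianceStamped.
import rospy
from nav_msgs.msg import Odometry
from geometry_msgs.msg import PoseWithCovarianceStamped

def odom_cb(odom):
    pose = PoseWithCovarianceStamped()
    pose.header = odom.header
    pose.pose = odom.pose   # both fields are PoseWithCovariance
    pose_pub.publish(pose)

if __name__ == '__main__':
    rospy.init_node('odom_to_pose')
    pose_pub = rospy.Publisher('robot_pose', PoseWithCovarianceStamped, queue_size=10)
    rospy.Subscriber('odometry/filtered', Odometry, odom_cb)
    rospy.spin()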

Also, another approach I am thinking of is to run the first instance of ekf_localization_node (in the odom_frame) with the rotary encoders and IMU as input, which gives the odom_frame -> base_link_frame transformation, and then run a second instance of ekf_localization_node (in the map_frame) with the UWB, rotary encoders and IMU as input. I am slightly confused about how the second instance will work. Will it modify the odom_frame -> base_link_frame tf by considering (or changing) the UWB data in the odom_frame? If yes, then I can directly use the modified odom_frame -> base_link_frame tf (which includes UWB, encoders and IMU) in amcl, ignore the map_frame -> odom_frame tf from the second instance of ekf_localization_node, and everything should be fine, right?

Update 2:
Thanks @Tom Moore for the answer! So just to clarify, run the first instance of ekf_localization_node in odom_frame with wheel odometry and IMU data as input which will give us the odom_frame -> base_link_frame transformation. Then run the second instance of the ekf_localization_node in the map_frame with wheel odometry, IMU data, UWB data (it is in map_frame) and amcl_pose which will give us the map_frame -> odom_frame transformation using existing odom_frame -> base_link_frame tf.

Thanks in advance.
Naman Kumar

2018-07-04 08:01:23 -0500 received badge  Nice Question (source)
2018-06-18 11:08:51 -0500 marked best answer How to subscribe a new costmap2dROS object to the global_costmap/costmap

Hi all,

I am planning to write my own node in ROS and I need to access the global_costmap which is being published. For that, I have created a new Costmap2DROS object:

#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <costmap_2d/costmap_2d_ros.h>

int main(int argc, char** argv){

    ros::init(argc, argv, "publish_multiple_goals");
    ros::NodeHandle n;

    tf::TransformListener tf_(ros::Duration(10));

    // Costmap2DROS reads its parameters from the "my_costmap" namespace.
    costmap_2d::Costmap2DROS* costmap_ros_ = new costmap_2d::Costmap2DROS("my_costmap", tf_);
    costmap_ros_->start();

    costmap_2d::Costmap2D* costmap_ = costmap_ros_->getCostmap();

    return 0;
}

Now, I have a my_costmap_params.yaml file where I have specified map_topic: /move_base_node/global_costmap/costmap (this topic is published by move_base), so that the new Costmap2DROS object subscribes to it:

my_costmap:
  #Set the global and robot frames for the costmap
  global_frame: /map
  robot_base_frame: /base_link

  #Set the update and publish frequency of the costmap
  update_frequency: 2.0
  publish_frequency: 2.0

  #We'll use a map served by the map_server to initialize this costmap
  static_map: true
  rolling_window: false

  use_dijkstra: false

  plugins:
  - {name: static_layer, type: "costmap_2d::StaticLayer"}
  - {name: obstacle_layer, type: "costmap_2d::ObstacleLayer"}
  - {name: inflation_layer, type: "costmap_2d::InflationLayer"}

  static_layer:
    map_topic: /move_base_node/global_costmap/costmap

  #Configuration for the sensors that the costmap will use to update a map
  obstacle_layer:
    observation_sources: base_scan
    base_scan: {data_type: LaserScan, sensor_frame: /base_laser_link_1, topic: /base_scan, expected_update_rate: 0.4,
      observation_persistence: 0.0, marking: true, clearing: true, max_obstacle_height: 0.4, min_obstacle_height: 0.0}

I am loading the above parameters to parameter_server here:

publish_multiple.xml

<launch>
 <node pkg="publish_multiple_goals" type="publish_multiple_goals" respawn="false" name="publish_multiple_goals" output="screen">
    <rosparam file="$(find drsim_control)/move_base_config/my_costmap_params.yaml" command="load" />
  </node>
</launch>

Now the problem is that it still subscribes to the "map" topic published by the map_server and does not subscribe to the map_topic mentioned above (/move_base_node/global_costmap/costmap) with the Costmap2DROS object I created. I want to make sure that this Costmap2DROS object subscribes to that map_topic to get the global_costmap. Does anyone have any idea what is going wrong here, or what I can do to achieve this? Let me know if you need more information from me.

Update:
I think I am now able to subscribe to the global_costmap, but when I use functions like costmap_ros_->getRobotFootprint(), I get wrong values (when I use this function in the global planner plugin, it gives the correct values). Also, the cost values of the costmap are either 0 or 254. Shouldn't there be inflation according to http://wiki.ros.org/costmap_2d#Inflation ? Any help will be appreciated.

Thanks in advance.
Naman Kumar

2018-05-28 15:57:10 -0500 marked best answer sending a sequence of goals to move_base

Hi all,

I have a mobile robot which is navigating around a room, and it can go from start to goal without any issues. Now I want to send a sequence of goals to move_base so that once the robot reaches its first goal it automatically starts moving to the second goal, and so on. One way I can think of is to send the second goal when the robot reaches the first, but how can I know automatically that the robot has reached its first goal? Does it publish anything on any topic once it reaches the goal, or is any flag set or changed? Does anyone have an idea how this can be done, or is there a better way to send a sequence of goals to move_base so that the robot just keeps going from one goal to the next?
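For reference, here is a minimal sketch of the actionlib approach: a SimpleActionClient sends each goal and blocks on wait_for_result() before sending the next. The goal coordinates and the map frame below are made-up example values:

#!/usr/bin/env python
# Send a list of goals to move_base one after another.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def make_goal(x, y):
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0
    return goal

if __name__ == '__main__':
    rospy.init_node('goal_sequencer')
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()
    for x, y in [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]:
        client.send_goal(make_goal(x, y))
        client.wait_for_result()   # blocks until this goal succeeds or fails
        rospy.loginfo('Goal finished with state %d', client.get_state())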

Any help will be appreciated.

Thanks in advance.
Naman Kumar

2018-04-17 10:46:30 -0500 marked best answer setting covariances for robot_localization

Hi all,

I have a mobile robot and I am using robot_localization to fuse data from the wheel encoders, IMU and a Hokuyo lidar (via AMCL). In the first instance of robot_localization I fuse the wheel encoders and IMU in the odom_frame, and in the second instance I fuse the wheel encoders and the output of AMCL (amcl_pose, geometry_msgs/PoseWithCovarianceStamped) in the map_frame.

This is my launch file for the second instance of robot_localization :

<launch>

<node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization2" clear_params="true">

  <param name="frequency" value="10"/>

  <param name="sensor_timeout" value="0.5"/>

  <param name="two_d_mode" value="true"/>

  <param name="map_frame" value="map"/>
  <param name="odom_frame" value="odom_combined"/>
  <param name="base_link_frame" value="base_footprint"/>
  <param name="world_frame" value="map"/>

  <param name="transform_time_offset" value="0.0"/>

  <param name="odom0" value="odom"/>
  <param name="pose0" value="amcl_pose"/>

  <rosparam param="odom0_config">[false, false, false,
                                  false, false, false,
                                  true, true, false,
                                  false, false, true,
                                  false, false, false]</rosparam>

  <rosparam param="pose0_config">[true,  true,  false,
                                  false, false, true,
                                  false, false, false,
                                  false, false, false,
                                  false, false, false]</rosparam>

  <param name="odom0_differential" value="true"/>
  <param name="pose0_differential" value="false"/>

  <param name="print_diagnostics" value="true"/>

  <param name="debug"           value="false"/>
  <param name="debug_out_file"  value="debug_ekf_localization.txt"/>

  <rosparam param="process_noise_covariance">[0.05, 0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                                              0,    0.05, 0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                                              0,    0,    0.06, 0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                                              0,    0,    0,    0.03, 0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                                              0,    0,    0,    0,    0.03, 0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                                              0,    0,    0,    0,    0,    0.06, 0,     0,     0,    0,    0,    0,    0,    0,    0,
                                              0,    0,    0,    0,    0,    0,    0.025, 0,     0,    0,    0,    0,    0,    0,    0,
                                              0,    0,    0,    0,    0,    0,    0,     0.025, 0,    0,    0,    0,    0,    0,    0,
                                              0,    0,    0,    0,    0,    0,    0,     0,     0.04, 0,    0,    0,    0,    0,    0,
                                              0,    0,    0,    0,    0,    0,    0,     0,     0,    0.01, 0,    0,    0,    0,    0,
                                              0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0.01, 0,    0,    0,    0,
                                              0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0.02, 0,    0,    0,
                                              0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0.01, 0,    0,
                                              0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0.01, 0,
                                              0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0.015]</rosparam>

       <rosparam param="initial_estimate_covariance">[0.3, 0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                                      0,    0.3, 0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                                      0,    0,    1e-9, 0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                                      0,    0,    0,    1e-9, 0,    0,    0,    0,    0,    0,     0,     0 ...
2018-04-10 02:31:48 -0500 marked best answer Merge the output of Ultrasound sensor with hokuyo laser output for navigation

Hi all,

I have a mobile robot and I would like it to navigate around the building. I already have a 2D map of the building. I have rotary encoders for the odometry information and IMU/UWB for localization. I will be using a Hokuyo laser sensor and ultrasound sensors for navigation. The Hokuyo will be the main sensor for navigation, and the ultrasound sensors will be used to detect obstacles not seen by the Hokuyo (like overhangs, etc.). So my question is: how can I merge the output of an ultrasound sensor, which is of type sensor_msgs/Range, with the Hokuyo output, which is of type sensor_msgs/LaserScan?
I think one way would be to convert the Range message to a LaserScan message and then modify the original LaserScan message from the Hokuyo to incorporate it, producing a final merged LaserScan to use for navigation. But I don't know how hard that will be, especially merging two LaserScan messages into one. Is there any other (easier or better) way to accomplish this? Or, if this is the only way, what is the best way to go about it?
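To illustrate the kind of merging described above, here is a rough sketch that overwrites the laser beams falling inside the sonar cone whenever the sonar reports a closer range. It assumes the sonar and the laser share a frame and that the sonar axis angle is known; the topic names and SONAR_ANGLE are assumptions, and a real implementation would transform between frames with tf:

#!/usr/bin/env python
# Fold a single sonar Range reading into a LaserScan.
import rospy
from sensor_msgs.msg import LaserScan, Range

SONAR_ANGLE = 0.0   # sonar axis relative to the laser frame [rad] (assumption)
last_range = None

def range_cb(msg):
    global last_range
    last_range = msg

def scan_cb(scan):
    ranges = list(scan.ranges)
    if last_range is not None:
        half_fov = last_range.field_of_view / 2.0
        for i, r in enumerate(ranges):
            angle = scan.angle_min + i * scan.angle_increment
            if abs(angle - SONAR_ANGLE) <= half_fov and last_range.range < r:
                ranges[i] = last_range.range
    scan.ranges = ranges
    merged_pub.publish(scan)

if __name__ == '__main__':
    rospy.init_node('sonar_laser_merge')
    merged_pub = rospy.Publisher('scan_merged', LaserScan, queue_size=1)
    rospy.Subscriber('sonar', Range, range_cb)
    rospy.Subscriber('scan', LaserScan, scan_cb)
    rospy.spin()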

Thanks in advance.
Naman Kumar

2018-04-06 23:11:08 -0500 received badge  Nice Question (source)
2018-03-19 13:14:20 -0500 marked best answer Most recent transform in tf Tree

Hi all,

I want to ask why the value of "Most recent transform" in my transformation tree is so high. Does it mean that there is too much latency in generating the transformations? My tf tree:
image description

Thanks in advance.
Naman

2018-03-19 13:10:27 -0500 received badge  Famous Question (source)
2018-03-15 16:09:57 -0500 received badge  Nice Question (source)
2018-03-09 19:57:20 -0500 marked best answer Orocos-bfl not found while installing navigation stack ROS indigo + Ubuntu 14.04

Hey guys,

I have just installed Ubuntu 14.04 and ROS Indigo and am trying to install the navigation stack ( http://wiki.ros.org/navigation ). When I run catkin_make, I get the following error in the robot_pose_ekf package:

-- Boost version: 1.54.0
-- +++ processing catkin package: 'robot_pose_ekf'
-- ==> add_subdirectory(navigation/robot_pose_ekf)
checking for module 'orocos-bfl'
-- package 'orocos-bfl' not found
CMake Error at /usr/share/cmake-2.8/Modules/FindPkgConfig.cmake:283 (message):
A required package was not found
Call Stack (most recent call first):
/usr/share/cmake-2.8/Modules/FindPkgConfig.cmake:337 (_pkg_check_modules_internal)
navigation/robot_pose_ekf/CMakeLists.txt:6 (pkg_check_modules)

Then I tried to install BFL from http://www.orocos.org/bfl, which did not work, I guess because it is not up to date. Can anyone help me out and give some suggestions on how to solve this issue?

Thanks in advance.
Naman Kumar

2018-03-02 19:16:07 -0500 marked best answer CMake error while integrating catkin workspace with qtCreator

Hi all,

I am trying to open a ROS package in Qt Creator following http://wiki.ros.org/IDEs . In Qt Creator, I opened the CMakeLists.txt file from ~/catkin_ws/src, changed the build directory to ~/catkin_ws/build, and then ran cmake. I am following this answer.

Also, I set the cmake arguments for the devel and install directories (both inside catkin_ws) using these lines:

-DCMAKE_INSTALL_PREFIX=/home/sharath/catkin_ws/install
-DCATKIN_DEVEL_PREFIX=/home/sharath/catkin_ws/devel

But when I run cmake in qtCreator, I get the following error:

CMake Error at /opt/ros/indigo/share/catkin/cmake/safe_execute_process.cmake:11 (message):
execute_process(/usr/bin/python
"/home/sharath/catkin_ws/build/catkin_generated/generate_cached_setup.py")
returned error code 1
Call Stack (most recent call first):
/opt/ros/indigo/share/catkin/cmake/all.cmake:186 (safe_execute_process)
/opt/ros/indigo/share/catkin/cmake/catkinConfig.cmake:20 (include)
CMakeLists.txt:52 (find_package)
File "/home/sharath/catkin_ws/build/catkin_generated/generate_cached_setup.py", line 20, in <module>
from catkin.environment_cache import generate_environment_script
ImportError: No module named catkin.environment_cache
Traceback (most recent call last):
-- Using CATKIN_DEVEL_PREFIX: /home/sharath/catkin_ws/devel
-- Using CMAKE_PREFIX_PATH:
-- Using PYTHON_EXECUTABLE: /usr/bin/python
-- Using Debian Python package layout
-- Using empy: /usr/bin/empy
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: /home/sharath/catkin_ws/build/test_results
-- Found gtest sources under '/usr/src/gtest': gtests will be built
-- Using Python nosetests: /usr/bin/nosetests-2.7
-- catkin 0.6.14
-- Configuring incomplete, errors occurred!
See also "/home/sharath/catkin_ws/build/CMakeFiles/CMakeOutput.log".
See also "/home/sharath/catkin_ws/build/CMakeFiles/CMakeError.log".

Here are the problems I am facing,

1) First of all, the above error appears. I tried updating the Python modules with sudo pip install -U catkin_pkg, but that does not help.

2) Also, interestingly, after running cmake in Qt Creator, catkin_make no longer runs in ~/catkin_ws.

Now I cannot use ROS aliases like roscore even though I have already sourced catkin_ws/devel/setup.bash and /opt/ros/indigo/setup.bash in my .bashrc. I restarted the terminal, but catkin_make and the aliases still don't work. This gets fixed the moment I source /opt/ros/indigo/setup.bash again in the terminal; after that, all ROS commands work normally without sourcing, until I run cmake from Qt Creator again, at which point the ROS commands stop working and need to be sourced again. Is the setup.bash in catkin_ws/devel getting modified or something?

3) However, my main problem is how to solve the Qt Creator error above.

I tried setting up the workspace again but these problems recur in the same manner. There's something I am missing.

Update:
Thanks @Simon for the answer! I tried the "old way" mentioned in the answer and replaced the CMakeLists.txt in ~/catkin_ws/src/ with the file it links to. Then I opened that CMake file in Qt Creator, specified the build directory as ~/catkin_ws/build/, ran cmake, and got the same error as mentioned above. Now, when I run catkin_make ...

2018-02-06 09:22:01 -0500 marked best answer IMU initial orientation and drift problem

Hi all,

I am using a Phidgets Spatial 3/3/3 along with this launch file. I have a couple of doubts regarding it:
1. How is the initial orientation decided? Is it always the same (a default orientation), or is there a way to find the exact initial orientation?
2. When I use the magnetic orientation vector (<param name="use_mag" value="true"/>), I have noticed that there is a lot of drift when I rotate the IMU, which does not happen when use_mag is false. Can anyone tell me why this is happening, or am I misinterpreting something?

Thanks in advance.
Naman Kumar

2018-02-02 07:44:11 -0500 marked best answer Connecting SICK TIM 571 via Ethernet to Ubuntu 14.04

Hey,

We're working on our robot running Ubuntu 14.04 and ROS Indigo. Currently, we are trying to connect a SICK TIM 571 laser scanner via Ethernet, but Ubuntu 14.04 keeps failing to bring up the connection and never connects. It works perfectly in Windows using the SOPAS Engineering Tool. I also installed the ROS driver ( http://wiki.ros.org/sick_tim ), and when I run roslaunch sick_tim sick_tim571_2050101.launch, I keep getting the following error:

[FATAL] [1459886892.692926805]: Could not resolve host: ... (1)(Host not found (authoritative))
[FATAL] [1459886892.693107727]: Failed to init device: 1
[ERROR] [1459886892.693164271]: sendSOPASCommand: socket not open

SOPAS - Error stopping streaming scan data!
sick_tim driver exiting.

I have also tried (from http://answers.ros.org/question/20722... ):
"On edit connections, click add, go to ethernet tab, set device MAC address to the right ethernet port, got to IPv4 Settings tab, set method to Manual and enter these settings: Address: 192.168.0.1 Netmask: 255.255.255.0 Leave Gateway open and click save." but this also does not help in setting up the ethernet connection between SICK TIM 571 and Ubuntu 14.04.

I am not able to ping the sensor or do anything else, because it does not even connect to Ubuntu 14.04.
Does anyone know how I can set up the Ethernet connection between the SICK TIM 571 and Ubuntu 14.04?

Thanks in advance.
Naman Kumar

2017-11-07 04:00:44 -0500 marked best answer Using the estimated robot pose from robot_pose_ekf with amcl

Hi all,

I am using robot_pose_ekf to fuse data from the wheel encoders and IMU to estimate the 3D pose of the robot. Then I am using amcl to take care of odometry drift, and finally the move_base package does the navigation. I am generating an odom message of type nav_msgs/Odometry from the wheel encoders, which is used by the base_local_planner. I know robot_pose_ekf provides the odom_combined -> base_link transformation, which is used by amcl.
My question is: once robot_pose_ekf fuses the data from the wheel encoders and IMU and publishes the estimated 3D pose of the robot as a geometry_msgs/PoseWithCovarianceStamped on the topic robot_pose_ekf/odom_combined, how and where is this output used in the navigation stack as a whole (move_base or amcl)?

Thanks in advance.
Naman Kumar

2017-10-05 09:59:55 -0500 received badge  Stellar Question (source)
2017-08-31 07:51:27 -0500 marked best answer displaying the robot's actual trajectory in RViz

Hi all,

I have a mobile robot navigating around a room, and I already have the map of the room. I am using rotary encoders for odometry and fusing the encoder and IMU data with robot_pose_ekf. I am using amcl for localization and move_base for planning, with the SBPL lattice planner as the global planner and the trajectory planner as the local planner. When I run RViz I can visualize the global plan (the desired trajectory) generated by move_base, but I would also like to visualize the actual trajectory (based on the odometry data and amcl) followed by the robot, so that I can see how much the actual trajectory differs from the desired one. Is there any package in ROS which does this (I was not able to find one)? If not, what is the best way to visualize the robot's actual trajectory in RViz? Any help will be appreciated.
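One possible approach (a minimal sketch, not an existing package; topic names are assumptions) is to append every amcl pose to a nav_msgs/Path and add a Path display in RViz:

#!/usr/bin/env python
# Accumulate amcl poses into a Path so the traveled trajectory can be
# visualized in RViz.
import rospy
from nav_msgs.msg import Path
from geometry_msgs.msg import PoseStamped, PoseWithCovarianceStamped

path = Path()

def pose_cb(msg):
    pose = PoseStamped()
    pose.header = msg.header
    pose.pose = msg.pose.pose
    path.header = msg.header
    path.poses.append(pose)
    path_pub.publish(path)

if __name__ == '__main__':
    rospy.init_node('trajectory_publisher')
    path_pub = rospy.Publisher('actual_path', Path, queue_size=1)
    rospy.Subscriber('amcl_pose', PoseWithCovarianceStamped, pose_cb)
    rospy.spin()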

Thanks in advance.
Naman Kumar

2017-08-21 07:49:11 -0500 marked best answer obtain a copy of the global_costmap

Hi all,

I am planning to write my own node in ROS and I need to access the global_costmap which is being published. I have the following piece of code:

#include <iostream>
#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <costmap_2d/costmap_2d_ros.h>

int main(int argc, char** argv){

    ros::init(argc, argv, "publish_multiple_goals");
    ros::NodeHandle n;

    tf::TransformListener tf_(ros::Duration(10));

    // Costmap2DROS reads its parameters from the "global_costmap" namespace.
    costmap_2d::Costmap2DROS* costmap_ros_ = new costmap_2d::Costmap2DROS("global_costmap", tf_);
    costmap_ros_->start();

    costmap_2d::Costmap2D* costmap_ = costmap_ros_->getCostmap();

    unsigned int sizeX = costmap_->getSizeInCellsX();
    unsigned int sizeY = costmap_->getSizeInCellsY();

    std::cout << "SX: " << sizeX << " SY: " << sizeY << std::endl;

    return 0;
}

Now the problem is that I don't think I am getting the global_costmap, because the size printed is (200, 200) (the default, I guess). The correct size of the global_costmap is (280, 457). Can anyone tell me what is wrong here? Or, if there is another way to obtain a copy of the global_costmap, please let me know. Let me know if you need more information from my side.
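As a sketch of that "other way": the costmap move_base publishes can also be read directly as a nav_msgs/OccupancyGrid, which gives the grid size and cell data but not a Costmap2D object (topic name taken from the update below):

#!/usr/bin/env python
# Print the size of the global costmap published by move_base.
import rospy
from nav_msgs.msg import OccupancyGrid

def costmap_cb(grid):
    rospy.loginfo('costmap: %d x %d cells, %.3f m/cell',
                  grid.info.width, grid.info.height, grid.info.resolution)
    # grid.data is row-major; cell (x, y) is grid.data[y * grid.info.width + x]

if __name__ == '__main__':
    rospy.init_node('costmap_listener')
    rospy.Subscriber('/move_base_node/global_costmap/costmap', OccupancyGrid, costmap_cb)
    rospy.spin()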

Update:
Yeah, I was able to subscribe to /move_base_node/global_costmap/costmap and obtain the nav_msgs/OccupancyGrid message, but I was wondering how I can get a new Costmap2D object which is an exact copy of the global_costmap published by move_base, instead of just the OccupancyGrid data. How can I create a costmap object once I have this OccupancyGrid data? I would like to have this so that I can use all the costmap functions.

Thanks in advance.
Naman Kumar

2017-06-27 13:21:08 -0500 marked best answer Adding range_sensor_layer to layered costmap for global planning

Hi all,

I am working on mobile robot navigation with Hokuyo and ultrasonic sensors. The Hokuyo will be the main sensor for navigation, and the ultrasonic sensors will be used to detect obstacles not seen by the Hokuyo (like overhangs, etc.). I already have a map of the environment. I have rotary encoders for odometry information and IMU/UWB for localization.
I have added a range_sensor_layer ( http://wiki.ros.org/range_sensor_layer ) for the ultrasonic sensors to the costmap (local and global), and I am using the move_base ( http://wiki.ros.org/move_base ) package for navigation. The problem is that the global planner still produces a path which passes through the obstacles detected by the ultrasonic sensors; only the local planner avoids them. The global planner accounts for the obstacles detected by the Hokuyo and plans accordingly, but it ignores the obstacles detected by the ultrasonic sensors. So I want to ask: how can I correctly include the range_sensor_layer in the global costmap so that the global planner does not plan a path through the obstacles detected by the ultrasonic sensors?

The relevant parameters are shown:

local_costmap_params.yaml

plugins:
  - {name: obstacle_layer, type: "costmap_2d::VoxelLayer"}
  - {name: sonar_layer,   type: "range_sensor_layer::RangeSensorLayer"}
  - {name: inflation_layer, type: "costmap_2d::InflationLayer"}

  #Configuration for the sensors that the costmap will use to update a map   
  obstacle_layer:
    observation_sources: base_scan
    base_scan: {data_type: LaserScan, sensor_frame: /base_laser_link, topic: /base_scan, expected_update_rate: 0.4,
      observation_persistence: 0.0, marking: true, clearing: true, max_obstacle_height: 0.4, min_obstacle_height: 0.08}

  sonar_layer:
    topics: ["/sonar"]
    no_readings_timeout: 1.0

global_costmap_params.yaml

plugins:
  - {name: static_layer, type: "costmap_2d::StaticLayer"}
  - {name: obstacle_layer, type: "costmap_2d::VoxelLayer"}
  - {name: inflation_layer, type: "costmap_2d::InflationLayer"}
  - {name: sonar_layer,   type: "range_sensor_layer::RangeSensorLayer"}

  #Configuration for the sensors that the costmap will use to update a map
  obstacle_layer:
    observation_sources: base_scan
    base_scan: {data_type: LaserScan, sensor_frame: /base_laser_link, topic: /base_scan, expected_update_rate: 0.4,
      observation_persistence: 0.0, marking: true, clearing: true, max_obstacle_height: 0.4, min_obstacle_height: 0.08}

  sonar_layer:
    topics: ["/sonar"]
    no_readings_timeout: 1.0

Update 1:
Gazebo simulation
image description

Global costmap Rviz Visualization
image description

As you can see, the costmap visualization of the sonar obstacles in RViz does not look correct. The entire region in front of the sonar should be marked as an obstacle, which is not happening.

Update 2 (@Humpelstilzchen): I have pasted the rostopic echo of a single sonar message: image description

Update 3: The local costmap visualization of the sonar obstacles looks correct; it is only the global costmap that is wrong.

Local costmap Rviz visualization image description

It looks like I am missing something obvious, but I am not able to figure it out. Does anyone have any idea about it?

Thanks in advance.
Naman Kumar

2017-06-26 15:43:16 -0500 received badge  Good Question (source)
2017-06-17 09:14:47 -0500 marked best answer baud rate parameter in rosserial_python arduino

Hi all,

I am trying serial communication between an Arduino and ROS using rosserial ( http://wiki.ros.org/rosserial ). It works perfectly with the default baud rate parameter:

rosrun rosserial_python serial_node.py _port:=/dev/ttyACM0

but when I give a different baud rate using this:

rosrun rosserial_python serial_node.py _port:=/dev/ttyACM0 _baud:=9600

I get the following error:

[INFO] [WallTime: 1428602466.003588] ROS Serial Python Node
[INFO] [WallTime: 1428602466.016732] Connecting to /dev/ttyACM0 at 9600 baud
[ERROR] [WallTime: 1428602483.124382] Unable to sync with device; possible link problem or link software version mismatch such as hydro rosserial_python with groovy Arduino

I have rosserial (indigo-devel) installed and the ROS version is Indigo. The above error appears only when I specify a baud rate other than 57600 (the default).
I also looked into serial_node.py and changed the default baud rate there to 9600, but even then it only works at 57600 and gives the error if any other baud rate is specified.

Does anyone know what can be the problem?

Thanks in advance.
Naman Kumar

2017-05-22 10:54:57 -0500 marked best answer RVIZ crashes when I add Camera/ Image Type

Hi all,

I installed ROS Indigo on Ubuntu 14.04, and the RViz version is 1.11.15. When I run rviz and try to add an Image or Camera display type, RViz crashes with the following output:

Segmentation Fault (Core Dumped)

I am, however, able to add the PointCloud2 display type without any issues.
I have NVIDIA driver version 367.27.

Update - 1
I get the following output after building RViz from source and running it under GDB. The segmentation fault occurs when I try to add the Camera display type in RViz:

c-naman.kumar@namanviumar:~$ rosrun --prefix 'gdb --args' rviz rviz
GNU gdb (Ubuntu 7.7.1-0ubuntu5~14.04.2) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /home/likewise-open/FARADAYFUTURE/c-naman.kumar/catkin_ws/devel/lib/rviz/rviz...done.
(gdb) run
Starting program: /home/likewise-open/FARADAYFUTURE/c-naman.kumar/catkin_ws/devel/lib/rviz/rviz 
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7fffdbd6a700 (LWP 8788)]
[New Thread 0x7fffdb569700 (LWP 8789)]
[New Thread 0x7fffd3fff700 (LWP 8790)]
[New Thread 0x7fffd37fe700 (LWP 8791)]
[New Thread 0x7fffd2ffd700 (LWP 8792)]
[ INFO] [1479141998.084372668]: rviz version 1.11.15
[ INFO] [1479141998.084458500]: compiled against Qt version 4.8.6
[ INFO] [1479141998.084484005]: compiled against OGRE version 1.8.1 (Byatis)
[New Thread 0x7fffd25f1700 (LWP 8796)]
[New Thread 0x7fffd1df0700 (LWP 8797)]
[New Thread 0x7fffd15ef700 (LWP 8798)]
[New Thread 0x7fffd0dee700 (LWP 8803)]
[Thread 0x7fffd37fe700 (LWP 8791) exited]
[New Thread 0x7fffd37fe700 (LWP 8804)]
[New Thread 0x7fffadae8700 (LWP 8805)]
[New Thread 0x7fffad2e7700 (LWP 8806)]
[New Thread 0x7fffacae6700 (LWP 8807)]
[New Thread 0x7fffa7fff700 (LWP 8808)]
[New Thread 0x7fffa77fe700 (LWP 8809)]
[New Thread 0x7fffa6ffd700 (LWP 8810)]
[New Thread 0x7fffa67fc700 (LWP 8811)]
[New Thread 0x7fffa5ffb700 (LWP 8812)]
[New Thread 0x7fffa57fa700 (LWP 8813)]
[New Thread 0x7fffa4ff9700 (LWP 8814)]
[New Thread 0x7fff83fff700 (LWP 8815)]
[ INFO] [1479141998.795922717]: Stereo is NOT SUPPORTED
[ INFO] [1479141998.796146340]: OpenGl version: 4.5 (GLSL 4.5).
[New Thread 0x7fff837fe700 (LWP 8829)]
[New Thread 0x7fff82ffd700 (LWP 8830)]

Program received signal SIGSEGV, Segmentation fault.
__strcmp_sse2_unaligned () at ../sysdeps/x86_64/multiarch/strcmp-sse2-unaligned.S:29
29  ../sysdeps/x86_64/multiarch/strcmp-sse2-unaligned.S: No such file or directory.
(gdb)

Update - 2
Here is the backtrace:

(gdb) bt
#0  __strcmp_sse2_unaligned () at ../sysdeps/x86_64/multiarch/strcmp-sse2-unaligned.S:29
#1  0x00007fff67eae09e in QMetaType::registerNormalizedType(QByteArray const&, void (*)(void*), void* (*)(void const*), void (*)(void*), void* (*)(void*, void const*), int, QFlags<QMetaType::TypeFlag>, QMetaObject const*) () from /usr/lib/x86_64-linux-gnu/libQt5Core.so.5 ...
2017-05-22 10:54:57 -0500 received badge  Good Answer (source)
2017-05-19 10:57:48 -0500 marked best answer Filtering depth data of the form sensor_msgs/Image from Kinect 2

Hi all,

--See Update 1--

I am using iai_kinect2 to get a depth cloud in the form of a sensor_msgs/Image. Now I want to filter this depth cloud (e.g., remove points on the ground, outliers, etc.). I could use a pass-through filter, but it works only on sensor_msgs/PointCloud2, whereas I have a sensor_msgs/Image. My questions are:

1) I can convert the sensor_msgs/Image to sensor_msgs/PointCloud2 using depth_image_proc and do the filtering, but how can I then convert the filtered point cloud (sensor_msgs/PointCloud2) back to a depth image (sensor_msgs/Image) so I can use it with depthimage_to_laserscan?

2) Or, how can I directly filter the depth cloud in the form of a sensor_msgs/Image using pass-through filters, etc.?

3) I know I can use pointcloud_to_laserscan to convert the filtered point cloud to a laser scan, but that would be less efficient, which is why I want to know how to do the above conversion or filter the depth image directly.

Update 1 :
As suggested by @Akif, I used pcl::toROSMsg to convert the sensor_msgs/PointCloud2 to a sensor_msgs/Image. It compiled properly, but at run time, when I try to visualize it in RViz, I get the following:

MessageFilter [target=kinect2_link ]: Discarding message from [unknown_publisher] due to empty frame_id.  This message will only print once.

When I do rostopic echo /kinect2/depthcloud_filtered, I get

header: 
  seq: 2139
  stamp: 
    secs: 0
    nsecs: 0
  frame_id: ''
height: 1
width: 68227
encoding: bgr8
is_bigendian: 0
step: 204681
data: [0, 0, 181, 209, 216, 179, 207, 214, 179, 207, 214, 0, 0, 0, 175, 204, 211, 176, 205, 212, 176, 205, 210, 177, 206, 211, 0, 0, 0, 0, 0, 0, 179, 207, 214, 179, 208, 215, 181, 210, 217, 0, 0, 0, 178, 207, 214, 179, 208, 215, 180, 209, 216, 175, 204, 209, 175, 205, 210, 174, 203, 210, 0, 0, 0, 174, 203, 210, 175, 204, 211, 174, 203, 210, 176, 204, 211, 177, 205, 212, 0, 0, 0, 178, 207, 212, 180, 209, 214, 178, 206, 213, 179, 208, 215, 177, 209, 215, 0, 0, 0, 0, 0, 0, 0, 0, 0, 180, 209, 216, 174, 203, 208, 174, 203, 208, 176, 205, 210, 174, 203, 208, 175, 204, 209, 175, 205, 210, 175, 205, 210, 175, 204, 211, 175, 204, 211, 173, 205, 211, 172, 204, 210, 173, 205, 211, 173, 205, 211, 176, 205, 212, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 179, 208, 215, 171, 203, 209, ....... ]

It looks like there is some problem here: the message does not have a time stamp or a frame_id. Does anyone have any idea why this is happening? Any suggestions would be helpful.
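The empty frame_id and zero stamp suggest the output message's header is never filled in before publishing; the proper fix would probably be to copy the input cloud's header into the output before conversion. As a temporary workaround sketch (topic and frame names are assumptions), the filtered image could be republished with its header set so that RViz accepts it:

#!/usr/bin/env python
# Republish the filtered depth image with a valid stamp and frame_id.
import rospy
from sensor_msgs.msg import Image

def image_cb(img):
    img.header.stamp = rospy.Time.now()
    img.header.frame_id = 'kinect2_link'   # assumed camera frame
    pub.publish(img)

if __name__ == '__main__':
    rospy.init_node('restamp_depth_image')
    pub = rospy.Publisher('depthcloud_filtered_stamped', Image, queue_size=1)
    rospy.Subscriber('/kinect2/depthcloud_filtered', Image, image_cb)
    rospy.spin()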

Thanks in advance.
Naman Kumar

2017-05-15 04:12:29 -0500 marked best answer Complete coverage path planning ros

Hi all,

I have a mobile robot with a Hokuyo lidar, wheel encoders, an IMU and a couple of sonar sensors. I have started working on complete coverage path planning, and I am wondering if there is any package in ROS which can be used for this. I have searched and was not able to find anything. If there is no such package, what is the best way to proceed? I know this is a very general question, but I am just looking for some pointers or references that might help me get started on complete coverage path planning.

Thanks in advance.
Naman Kumar