ROS Resources: Documentation | Support | Discussion Forum | Index | Service Status | ros @ Robotics Stack Exchange

stfn's profile - activity

2021-12-17 10:27:21 -0500 received badge  Favorite Question (source)
2021-04-18 20:19:41 -0500 received badge  Favorite Question (source)
2020-12-03 07:52:47 -0500 commented question move_base in uneven terrain (slopes)

Nowadays you should be able to find a good starting point: https://clearpathrobotics.com/sit-advances-autonomous-mapping

2020-03-12 13:12:52 -0500 edited answer catkin config --install and --no-install using same build dir?

You can also use different profiles for install and devel, see https://catkin-tools.readthedocs.io/en/latest/verbs/catki

2020-03-12 13:11:38 -0500 answered a question catkin config --install and --no-install using same build dir?

You can also use different profiles for install and devel, see https://catkin-tools.readthedocs.io/en/latest/verbs/catki

2020-01-02 14:35:12 -0500 commented question ROS NEWS not updated

I've been wondering about this for some time now, too .. @tfoote?

2019-08-13 14:12:53 -0500 commented question rostopic info /topic/name and nodelets

Same frustration for 10 years now :) https://answers.ros.org/question/10013/what-topics-is-my-nodelet-listening-to-for-r

2019-08-13 13:53:21 -0500 commented question getPrivateNodeHandle() vs getNodeHandle ?

Yes, correct :) However, you don't need to use the NodeHandle constructor here. "What's the good practice in this case?"

2019-05-20 11:30:03 -0500 commented answer Weird behavior when instantiating SimpleActionClient right after initializing node in rospy

Turns out there already was a ticket for that: https://github.com/ros/actionlib/issues/19. I re-opened this issue in htt

2019-05-20 08:06:47 -0500 commented answer Weird behavior when instantiating SimpleActionClient right after initializing node in rospy

Turns out there already is a ticket for that: https://github.com/ros/actionlib/issues/19. I re-opened this issue in http

2019-05-20 07:58:27 -0500 commented answer Weird behavior when instantiating SimpleActionClient right after initializing node in rospy

Turns out there already is a ticket for that: https://github.com/ros/actionlib/issues/19

2019-04-09 07:21:25 -0500 commented answer rviz - video recording

I only get a huge video showing green pixels along with some noise.

2019-03-30 06:42:27 -0500 commented answer Weird behavior when instantiating SimpleActionClient right after initializing node in rospy

It seems that rospy.wait_for_service is robust against this time issue, while SimpleActionClient.wait_for_server is not.

2019-03-29 19:20:01 -0500 answered a question Weird behavior when instantiating SimpleActionClient right after initializing node in rospy

Well, almost a decade later, I also stumbled upon this. I tracked it down to time itself: wait_for_server () returns fal
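The workaround this thread converges on can be factored into a small, ROS-independent helper. This is a sketch, not the thread author's code: `rospy.Time.now` is only an example of the clock to poll (a zero `rospy.Time` is falsy), and the function name is illustrative.

```python
import time

def wait_until_time_flows(now_fn, sleep_fn=time.sleep, timeout_s=10.0, poll_s=0.05):
    """Poll now_fn() until it returns a nonzero time, or give up.

    Motivation: with /use_sim_time set, rospy.Time.now() stays at zero
    until the first /clock message arrives, and a SimpleActionClient
    created right after init_node() can then fail wait_for_server()
    spuriously. now_fn is e.g. rospy.Time.now; sleep_fn should be a
    wall-clock sleep, since rospy.sleep() itself blocks on /clock
    under simulated time.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if now_fn():          # truthy once time has started flowing
            return True
        sleep_fn(poll_s)
    return False
```

A typical call site would run `wait_until_time_flows(rospy.Time.now)` right after `rospy.init_node()` and before `SimpleActionClient(...).wait_for_server()`; if it returns `False`, time never started flowing and `wait_for_server()` cannot be trusted either.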

2019-03-19 06:08:40 -0500 commented answer Does roslaunch support topic remapping from command line?

https://github.com/ros/ros_comm/issues/1664

2019-02-22 11:22:53 -0500 received badge  Nice Question (source)
2018-11-20 20:55:18 -0500 received badge  Famous Question (source)
2018-10-26 07:45:39 -0500 received badge  Nice Question (source)
2018-05-17 07:02:20 -0500 commented answer Where to find info/changelogs of ROS package releases

done: https://discourse.ros.org/t/unified-changelog-report-for-synchronized-ros-package-releases/4810

2018-05-17 07:01:52 -0500 received badge  Notable Question (source)
2018-05-17 06:37:08 -0500 received badge  Popular Question (source)
2018-05-17 03:54:52 -0500 commented answer Where to find info/changelogs of ROS package releases

Thanks for the detailed answer. Is there a single source summarizing the package version bumps that were (or are going t

2018-05-17 03:52:15 -0500 marked best answer Where to find info/changelogs of ROS package releases

I'm on Ubuntu 16.04 and get updates of ROS packages from time to time via apt(-get). One can observe that they're released in "bursts", i.e. a group of packages is released together. Every time I update the packages, I wonder what the actual changes are and how they affect my projects, but I have no clue where to look them up. Of course, I could search GitHub for each package, find out the package version (not to be mixed up with the release version!) and look into the changelog, but this seems very cumbersome. Isn't there a unified changelog for those burst releases?

2018-05-17 03:27:10 -0500 asked a question Where to find info/changelogs of ROS package releases

Where to find info/changelogs of ROS package releases I'm on Ubuntu 16.04 and get updates of ros-packages from time to t

2018-05-09 15:49:14 -0500 received badge  Great Question (source)
2017-10-20 10:07:30 -0500 marked best answer rosdep install with metapackage

Hi,

I created a metapackage wrapping some of our own ROS packages by declaring a run_depend on them. When I then run rosdep check my_metapackage, everything is fine even though some system requirements of those run_depend packages are not met.

I know I could change the run_depend to build_depend, but then I get a warning for doing so. So ... I want to install the system dependencies of a whole 'stack' aka metapackage with one command like rosdep install my_metapackage, rather than running rosdep install for every single package ... is that possible?

2017-04-20 17:30:12 -0500 marked best answer using move_base recovery for initial pose estimation

Hi,

I want my robot to find its initial pose on its own rather than providing it via RViz's "2D Pose Estimate". For that I already figured out that you can call AMCL's /global_localization service, which distributes the particles over the whole map. Second, I'd need to rotate the robot. move_base already has a recovery behaviour for that, but how do I trigger it? I know there is the C++ API, but isn't there a service call for that? Something that smoothly integrates with move_base resp. the nav stack? Or can I use actionlib for that? Is there any trigger besides the C++ API?

I've been wondering for a long time why there isn't something out there for initial pose estimation with the widely used combination of amcl and move_base. Are you all using the RViz button? Every time you start demos?

Update 1

As I could not find an answer to this, I wrote a node triggering a recovery behaviour similar to the move_base.cpp implementation. However, those behaviours (rotate_recovery, clear_costmap_recovery) need the global and local costmaps as parameters. Just creating your own as suggested in the code snippets from the wiki doesn't seem to work as intended, since those maps are not the ones used by move_base: running clear_costmap_recovery then does not clear the costmap used by move_base. However, and this is strange, rotate_recovery sometimes is not carried out due to potential collision. How can there be collisions if the costmap is empty? Or IS there a connection between a costmap you create and name ('local_costmap', 'global_costmap') and the ones used by move_base? This is confusing ...

An alternative solution would be to use actionlib and send move_base_msgs/MoveBaseActionGoal, but how can I tell move_base to just use the local planner and ignore the global one?

2016-09-30 02:41:45 -0500 marked best answer conversion of depth image coordinates to world coordinates (u,v,d) to (x,y,z)

Hi, I implemented an object detection classifier working on RGB and depth data (from openni_launch) which provides object center coordinates in the image plane (u,v). Now I want to convert those image coordinates (+depth) to world coordinates (resp. the 3D camera frame).

I know I could just use the (u,v) coordinates and the registered cloud to get the point stored at (u,v), but sometimes there is a NaN at this point, and I don't want to deal with that, as it would require looking at the neighbours, which biases the object center. Another drawback is that after some PCL calculations (resampling, removing NaNs, calculating normals, removing major planes etc.) the cloud is not organized anymore.

Second, I could use the /cameraInfo topic to get the intrinsics of my depth sensor in order to calculate the conversion by hand using the lens equation, but this is some trouble if you want to do it the correct way (considering all intrinsics of the camera matrix).

So ... as openni_launch already does this for the whole pointcloud creation, is there a function or service offering this conversion in openni_launch, the OpenNI API, the PCL API, ros_perception etc.?
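For reference, the "by hand" conversion mentioned in the question is just the inverse pinhole projection. A minimal sketch, assuming a rectified image and depth registered to the camera frame; the function name is illustrative, and fx, fy, cx, cy are read from the sensor_msgs/CameraInfo K matrix (K[0], K[4], K[2], K[5]):

```python
def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth d into the 3D
    camera frame via the inverse pinhole model:
        x = (u - cx) * d / fx,  y = (v - cy) * d / fy,  z = d
    Assumes a rectified image; intrinsics come from CameraInfo.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# The principal point maps onto the optical axis:
# pixel_to_point(319.5, 239.5, 2.0, 525.0, 525.0, 319.5, 239.5) -> (0.0, 0.0, 2.0)
```

As for an existing API: image_geometry's PinholeCameraModel (fromCameraInfo() plus projectPixelTo3dRay()) covers the same job; scale the returned ray so its z component equals the measured depth to obtain the 3D point.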

2016-08-20 13:41:28 -0500 received badge  Famous Question (source)
2016-08-02 10:55:50 -0500 commented question move_base in uneven terrain (slopes)

@M@t What worked best is a ground filter on the input data along with the flat obstacle grid (not the voxel grid!) while setting min/max obstacle height to -large/+large. In the case of VoxelGrid, the whole problem could be overcome if the rolling window took the robot's z-coordinate into account.

2016-08-02 10:55:22 -0500 answered a question move_base in uneven terrain (slopes)

@M@t What worked best is a ground filter on the input data along with the flat obstacle grid (not the voxel grid!) while setting min/max obstacle height to -large/+large. In the case of VoxelGrid, the whole problem could be overcome if the rolling window took the robot's z-coordinate into account. But for that, move_base needs to be altered.

2016-07-07 01:37:19 -0500 marked best answer RoboEarth detection performance

Hej,

I just got RoboEarth up and running but observe a very bad detection rate. For testing I used a textured box and a Red Bull can. Both were scanned according to the tutorial. The box scan looks nice, but the detection rate is pretty poor. From some angles the box is not detected at all. Sometimes it is detected but not very stably, such that it switches very often between detected/not detected.

The second item is a Red Bull can. Its scan looks pretty distorted. I scanned it very often, but this is the best I got, probably due to Kinect issues (reflection, angle, noise, ...). This model is not detected at all, neither with the Kinect nor with the vision detector.

So the question is: is this as good as it gets with RoboEarth, or am I missing something here? Is there another working stack out there which performs better 3D object recognition? (I tried Willow Garage's ORC, but it wouldn't work at all.)

And some related questions: how does RoboEarth's Kinect detector work? It seems that it just brute-force matches every single model scan against the incoming pointcloud and tries to find a pose with small error.

Is there a scan postprocessing step (noise reduction, outlier removal, reduction of the number of points/complexity, meshing, PCA, low-pass filter, ...)? How does noise affect the detection? How many scans are necessary? What's the detection range?

2016-07-06 08:12:43 -0500 edited question move_base in uneven terrain (slopes)

Hi,

I'm confronted with the situation of a car-like robot driving in uneven terrain, i.e. the ground is not flat but there are (slight) hills and (slight) slopes. We are not operating in the Rocky Mountains, but it's not entirely flat either. Localization is done with GPS and odometry; obstacles come from lidar pointclouds or similar. There is no static map; instead, the global and local costmaps operate as rolling windows.

When using ROS' navigation stack resp. move_base for planning and control, I observe some issues:

  • When using GPS, one obtains a fixed global frame with its origin at the robot's start position (similar to /map). While driving over uneven terrain, the robot's z-position changes a lot, let's say within -10/+10 m relative to the starting position. The costmap, or rather the marking and clearing of obstacles, stops working, since it demands that the sensor origin (lidar) is within the operating height of the costmap (min/max_obstacle_height and some other parameters). The costmap's rolling window frame adapts to the robot's x and y position, but seems to completely ignore its z-position (and orientation!).

  • A slight ramp, although perfectly fine to drive on, is marked as an obstacle.

I played a lot with different parameters of move_base, used voxel grids and flat ones, but could not figure out a setting that works in uneven terrain. The only thing that solved most issues (but generated some others) was to override the /odom frame's z-position and set it to zero.

So I guess my general question is: can move_base be tweaked to work beyond the flat-world assumption? Or are there stacks that overcome those limitations?

Any hint is highly appreciated

2016-06-16 06:07:00 -0500 marked best answer steering axis of carlike robot with teb_local_planner

We have a car-like robot and use the new teb_local_planner for driving. What is missing in the documentation is the question of the steering axis (front or rear), resp. where /base_link sits on the robot. As a first step, I modified teb_local_planner_tutorials to reflect our robot model and world. In contrast to the original robot_carlike_in_stage launch example, where /base_link sits on the rear axis, our /base_link frame resides in the center of the front axis (as usual for this robot model). Accordingly, our (rotation) origin is at /base_link and therefore at the front and not at the back, which is also modeled in the Stage parameters of the robot model.

The result of running the Stage simulation is that the car seems to steer with the rear rather than the front axis. In any case, it looks wrong. Does the teb_local_planner only support car-like robots that steer with the rear axis? How do I have to model the robot to reflect steering with the front axis?

EDIT: The wheelbase parameter is described as

The distance between the drive shaft and steering axle. The value might be negative for back-wheeled robots.

So for our robot, the drive shaft is at the front, as are the steering axle and /base_link. So the wheelbase according to the description is 0, but the planner does not produce reasonable output with that.

2016-06-16 06:06:59 -0500 commented answer steering axis of carlike robot with teb_local_planner

Thanks! Just one detail: With base_link you mean the robot_frame_id tf param (global costmap), right? So from move_base/teb_local_planner it should work (despite Stage issues) by introducing a new frame on the rear axis and setting this one as robot_frame_id. Or does it have to be base_link itself