
Achim's profile - activity

2015-12-22 03:04:41 -0500 received badge  Nice Question (source)
2015-02-27 07:27:19 -0500 received badge  Nice Answer (source)
2014-04-29 07:49:03 -0500 received badge  Necromancer (source)
2013-08-29 01:38:57 -0500 answered a question How do I convert an ROS image into a numpy array?

A more general way (i.e. for all message types), which also avoids the detour over cv.Mat, is described here. It should work fine with OpenCV today, since OpenCV now uses numpy arrays in the new cv2 interface:

http://www.ros.org/wiki/rospy_tutorials/Tutorials/numpy

You basically tell ROS to use the serialize_numpy/deserialize_numpy methods by wrapping the message type in numpy_msg(). Your subscription should then look like:

from rospy.numpy_msg import numpy_msg
...
sub_vis = rospy.Subscriber('navbot/camera/image', numpy_msg(Image), vis_callback)

and then data.data should be a numpy array.
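
As a quick illustration of what you end up with (plain numpy, no ROS required; the image dimensions here are made up for illustration): the flat uint8 buffer you get as data.data can be reshaped using the message's height and width fields.

```python
import numpy as np

# Hypothetical stand-in for the fields of an Image message
# (these values are made up for illustration, not real ROS data):
height, width, channels = 4, 4, 3
flat = np.arange(height * width * channels, dtype=np.uint8)  # like data.data

# Reshape the flat buffer into an H x W x C array, ready for cv2-style use.
img = flat.reshape(height, width, channels)
print(img.shape)  # (4, 4, 3)
```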

2013-08-12 01:25:41 -0500 commented question Extrapolation error using hector_mapping + move_base

I tried move_base with gmapping last week and got a similar issue... the robot sometimes moved a few centimeters, but stopped with the same messages. Comparing the TFs, I get 40Hz from hector_slam and 50Hz from gmapping. So it seems the local planner is too strict with timestamps.

2013-08-07 03:38:46 -0500 commented question Unknown cell as a goal in Navigation stack with Gmapping

Was a bit strange... for me it worked after setting it (I did set that in multiple namespaces ;)). Often I would need to restart move_base in the beginning, though.

2013-08-07 03:35:35 -0500 received badge  Commentator
2013-08-07 03:35:35 -0500 commented question Extrapolation error using hector_mapping + move_base

I have the same problem. I looked at the code; the lookup fails in transformGlobalPlan: https://github.com/ros-planning/navigation/blob/groovy/base_local_planner/src/goal_functions.cpp Either this is too strict about the timestamps, or it has a bug which only surfaces in special cases.

2013-02-27 01:40:53 -0500 answered a question How to log a global path defined in "/map" frame.

One thing you could try is to wait a little for the transform:

listener.waitForTransform(destination_frame, original_frame, ros::Time(0), ros::Duration(10.0));

This would wait at most 10 seconds for the transform to show up.

Of course that won't help if there is some fundamental issue with your tf setup. But generally, in a newly started node you should call waitForTransform before you try to use transforms for the first time. Also be sure to keep the TransformListener alive over the whole runtime of your node. If you need the transform for a specific timestamp, adjust the third parameter of the waitForTransform call accordingly; ros::Time(0) means "the latest available transform" in the tf context.

2013-02-26 20:59:54 -0500 answered a question Synchronizing between a message and a tf.

Of course there are the message filters for that, but I don't think they solve your problem. I don't think you can replicate the "transmit only the necessary stuff" mechanic through topics, because AFAIK if you subscribe to a topic, ROS will transmit all the messages on that topic to your node (and if you can't process them fast enough and run out of buffer space, old ones are discarded).

I think you should look into the service mechanism that ROS offers. Of course time delays are a problem here... you could, for example, implement a service in the node that has the data (which should buffer some of it) that takes the timestamp of the TF you received as a parameter and returns the best-matching data available. Something like that. In any case, the service mechanism lets you transmit data only when you request it. That's what you need, right?
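
A plain-Python sketch of that buffering idea (hypothetical class and names, not a real ROS service; it assumes timestamps arrive in increasing order):

```python
import bisect

class SampleBuffer:
    """Keeps recent (timestamp, sample) pairs and returns the one
    closest to a requested timestamp -- the core of the service logic
    described above."""

    def __init__(self, max_len=100):
        self.stamps = []
        self.samples = []
        self.max_len = max_len

    def add(self, stamp, sample):
        self.stamps.append(stamp)
        self.samples.append(sample)
        if len(self.stamps) > self.max_len:  # drop the oldest entry
            self.stamps.pop(0)
            self.samples.pop(0)

    def closest(self, stamp):
        """Return the sample whose timestamp is nearest to `stamp`."""
        i = bisect.bisect_left(self.stamps, stamp)
        # Only the neighbors around the insertion point can be closest.
        candidates = self.stamps[max(i - 1, 0):i + 1]
        best = min(candidates, key=lambda t: abs(t - stamp))
        return self.samples[self.stamps.index(best)]

buf = SampleBuffer()
for t in (0.0, 0.1, 0.2, 0.3):
    buf.add(t, "sample@%.1f" % t)
print(buf.closest(0.17))  # sample@0.2
```

In a real node, the service callback would call closest() with the timestamp from the request and put the result into the response.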

2013-02-21 20:41:54 -0500 commented answer bumblebee2 or bumblebee1394, this is the question.

I fear that code is not open source. :( Also it has its own issues... it's using the triclops libraries, which tend to crash on 64-bit systems.

2013-02-21 18:31:50 -0500 received badge  Nice Answer (source)
2013-02-21 00:56:55 -0500 commented answer global localization and mapping (multiple robots)

I think that is still an active research topic. I'm pretty sure there is nothing ready to use in ROS for that use case... so you'd need to find an algorithm in the robotics literature and try to implement it yourself, I guess.

2013-02-20 07:52:33 -0500 received badge  Nice Answer (source)
2013-02-20 01:50:51 -0500 answered a question global localization and mapping (multiple robots)

This is a nontrivial topic... The first question would be: what global coordinate system do you have? GPS? Or none at all, and you want the robots to localize each other relatively?

That determines a bit in which direction of research you have to look... Anyway, I'm not confident that a finished solution for your problem exists in ROS. An approximate solution could be to just merge the maps into one using GPS... or try cross-correlation or something similar to adjust position errors.

2013-02-20 01:44:57 -0500 answered a question Multiple NAOs, one roscore

That is what namespaces are for. Either set the ROS_NAMESPACE environment variable before using rosrun, or make a launchfile that contains a <group ns="naoX"> tag for each robot. There you can define the parameter with the IP, and if the driver nodes work properly (i.e. they subscribe to "cmd_vel" and not "/cmd_vel"), the topics should now read like /naoX/cmd_vel and so on.

If you want to use tf, you should also set the tf_prefix param and be sure that all software you use (and especially the one you develop) uses the tf methods to resolve frame_ids with the tf_prefix. Look for tf_prefix in the ros wiki.
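
A minimal launchfile sketch of the namespace idea (the parameter name nao_ip and the IP addresses are placeholders, not taken from any real NAO driver):

```xml
<launch>
  <!-- One namespace per robot; nao_ip is a hypothetical parameter name. -->
  <group ns="nao1">
    <param name="nao_ip" value="192.168.0.11" />
    <param name="tf_prefix" value="nao1" />
    <!-- driver nodes for robot 1 go here -->
  </group>
  <group ns="nao2">
    <param name="nao_ip" value="192.168.0.12" />
    <param name="tf_prefix" value="nao2" />
    <!-- driver nodes for robot 2 go here -->
  </group>
</launch>
```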

2013-02-19 21:20:51 -0500 answered a question problem building my map using SLAM

I'd say you forgot to run some node in your second try, but it's impossible to diagnose from afar what that might have been... I'd have a look at the output of the mapping node and check whether data is flowing on all the topics it subscribed to.

Driving around a bit is also a good idea, because the map usually only gets updated when it changes, and most mapping algorithms won't update the map without robot movement... Depending on the launch order, the first map can sometimes go missing even though the topic is latched. Try driving a few meters in one direction if that happens.

2013-02-19 21:06:43 -0500 answered a question H.264 video server

There is the open-source codec x264. For lower-resolution images it might be fast enough to encode on the fly on modern computers: http://www.videolan.org/developers/x264.html You can try to use it to implement your own "video server". I'm not sure what you want to achieve, or whether the image transport can easily be extended. I guess the easiest thing would be to write your own server that receives the ROS image topic, converts it to H.264, transmits it to your own client (somehow), and then generates a ROS image topic again (if that is what you want).

You could also look into gstreamer (if you're working on Linux), which might also help with transmitting the video via RTSP. I'm not sure whether it can act as a server, but we're using it as a client for some IP cameras.

2013-02-19 20:46:34 -0500 commented question gmapping with user defined laser scan and position data

Yes, for that you have to run two gmapping instances. The frame_id details that shade explained are important here, too. The result of the two gmapping instances will be two maps, which will not be aligned, though...

2013-02-19 05:48:58 -0500 received badge  Nice Answer (source)
2013-02-19 03:22:47 -0500 received badge  Editor (source)
2013-02-19 03:19:10 -0500 answered a question gmapping with user defined laser scan and position data

I'm not sure what you're trying to achieve. Build one single map from two laser scanners? I don't think the gmapping node in ROS can handle that (look here, for example: http://answers.ros.org/question/28514/multiple-robots-with-slam_gmapping/ ). So you would need to write a new node with a different mapping approach anyway... in that node you can then easily subscribe to multiple laser scan topics. :)

Otherwise, if you just want to use the CPU power of a different computer to process the laser scans from the robot (which may only have a slow CPU), you can adjust ROS_MASTER_URI on one of the computers and run only one master. Then the setup is just the normal one, i.e. remap the laser topic the laser scanner driver publishes on so that gmapping listens to it, and so on...

As an alternative to the ROS_MASTER_URI approach, you can try http://www.ros.org/wiki/master_sync_fkie, best in combination with node_manager_fkie (there you only need to press one button to enable the sync).

2013-02-12 20:23:24 -0500 received badge  Autobiographer
2012-10-04 18:26:07 -0500 received badge  Famous Question (source)
2012-09-25 02:27:58 -0500 received badge  Student (source)
2012-09-24 05:03:53 -0500 received badge  Notable Question (source)
2012-09-23 15:12:12 -0500 received badge  Popular Question (source)
2012-09-22 05:31:17 -0500 commented answer Navigation costmap parameter unknown_cost_value

I have the same parameters now. It works, but usually I have to start move_base twice to get it working. On the first run, unknown cells are usually still obstacles.

2012-09-22 03:58:17 -0500 asked a question move_base sometimes fails to drive, how to find out why

Hi,

I'm trying to use move_base to drive my robot. The setup works quite well most of the time, using gmapping for the map->odom tf and the global planner working in the map frame. The local planner works entirely in the odom frame.

My issue is that sometimes move_base just won't drive, for no apparent reason.

I get this warning quite often:

Could not get robot pose, cancelling reconfiguration

Usually this does not seem to matter much, but I have no real idea where it comes from. Is there any way to investigate that further?

Then, before the robot stops moving, I sometimes get this error (but not always):

 Aborting because a valid control could not be found. Even after executing all recovery behaviors

At this point the global planner still publishes paths on move_base/NavfnROS/plan, but nothing is published on move_base/TrajectoryPlannerROS/local_plan.

So it seems that the local planner gave up. I watched the local obstacles in rviz, and this also happens when there are no local obstacles at all. How can I find out why the local planner gave up?

The robot actually starts moving again as soon as it gets a new goal, either from exploration or manually from rviz. Then it drives just fine, until it again just stops moving for no obvious reason.

2012-08-30 22:33:32 -0500 commented answer Navigation Stack with gmapping

@ParNurZeal: you misinterpret "static" here. It is a parameter to the costmap that means the map comes from outside and does not have to be built from the laser scanners. I.e. static is correct here for the planner, because it gets a finished map from gmapping and does not have to build its own.

2012-08-29 21:56:02 -0500 commented answer Navigation costmap parameter unknown_cost_value

Does that mean you managed to use navfn with gmapping, planning in unknown space? Or are you using your own stuff? I'm quite stuck here...

2012-08-29 21:52:25 -0500 commented question Unknown cell as a goal in Navigation stack with Gmapping

Is there nobody using this combination? Or am I doing something wrong somewhere?

2012-08-29 21:51:59 -0500 commented question Unknown cell as a goal in Navigation stack with Gmapping

I suffer from the same issue. I really don't get where the problem might be. I even looked at the code in gmapping that creates the map: it sets unknown areas to -1 (which is the same as 255 for unsigned 8-bit). navfn then checks for 255 and allow_unknown when it processes the map. It should work.
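
The -1 vs. 255 equivalence mentioned above can be checked in a couple of lines of plain Python (illustration only, unrelated to the actual gmapping/navfn code):

```python
# -1 stored in a signed 8-bit cell, reinterpreted as an unsigned 8-bit
# value via its two's-complement bit pattern:
unknown_signed = -1
unknown_unsigned = unknown_signed & 0xFF  # keep only the low 8 bits
print(unknown_unsigned)  # 255
```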

2012-07-24 01:26:01 -0500 received badge  Critic (source)
2012-03-06 10:38:46 -0500 received badge  Guru (source)
2012-03-06 10:38:45 -0500 received badge  Great Answer (source)
2012-02-29 04:11:31 -0500 received badge  Necromancer (source)
2012-01-05 22:51:18 -0500 commented question Is there a C++ API for a dynamic reconfigure client?
I'm looking for this, too, and so far have not found anything. :(
2011-12-11 10:19:23 -0500 received badge  Enlightened (source)
2011-12-08 05:01:12 -0500 received badge  Good Answer (source)
2011-12-04 11:18:56 -0500 received badge  Nice Answer (source)
2011-11-23 19:35:29 -0500 answered a question Rosbag cannot record multiple topic with same type

I don't know what the Kinect can or can't do. But there currently seems to be a bug in rosrecord that scrambles bag files... it's already in the tracker: https://code.ros.org/trac/ros/ticket/3755

I have the very same issue for a node I have written myself, which definitely publishes two images on two different topics at roughly the same intervals and which worked some weeks ago. Now the bag files only contain one image topic, but that topic seems to contain both images; at least, sometimes images from the other topic flicker through if I just watch the stream with image_view... It's kind of random which topic gets saved and what the content of that topic really is...

A workaround is to start a new rosrecord for every image topic you want to record. You can then pass all those bag files to rosrecord play on the command line and it will play them all simultaneously (and then the image topics work again ;)).

I hope this bug gets fixed soon.

2011-11-17 23:56:51 -0500 answered a question Disabling boost for a custom package

Just in case someone else stumbles upon this... I ran into a similar problem. For me, triclops always segfaulted in triclopsStereo. triclopsRectify did work (it uses the internal calibration of the Bumblebee, which is why I wanted to use it). So I get the rectified images from triclops and pass them to an OpenCV StereoBM block matcher.

// Get the rectified images from triclops (uses the Bumblebee's internal calibration)
triclopsGetImage( triclops, TriImg_RECTIFIED, TriCam_RIGHT, &image_right );
triclopsGetImage( triclops, TriImg_RECTIFIED, TriCam_LEFT, &image_left );

// Wrap the raw buffers in OpenCV matrix headers (no copy)
cv::Mat img_r( 480, 640, CV_8UC1, image_right.data );
cv::Mat img_l( 480, 640, CV_8UC1, image_left.data );

// Run OpenCV's block matcher instead of triclopsStereo
block_matcher_( img_l, img_r, disparity );

This gives me really nice disparity images, seemingly even faster than triclops, without any hassle of calibration or similar, and without crashes so far. :)