
Garrick's profile - activity

2020-07-08 10:59:55 -0500 received badge  Nice Answer (source)
2017-11-27 20:41:59 -0500 received badge  Self-Learner (source)
2017-04-20 16:35:51 -0500 marked best answer master slave and changing IP addresses

I'm on the university network and they can't offer static IP addresses. The main problem is driving the robot through different wifi networks, where it gets a new IP each time.

I have the robot set up as the master (or I could make it the slave if I have to), and I came up with a workaround sort of like DynDNS: once the robot gets a new IP, it uploads it to my Google Drive, where I can access it.

The problem is dynamically changing the ROS_MASTER_URI and ROS_IP variables. I'm pretty sure that as soon as you start a node they're locked in for that node. Is there no way to change them while the node is running?

Otherwise it's essentially required that the computers running ROS and communicating together have static IPs.
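For context, these are plain environment variables that each node reads once at startup, so the usual pattern is to export them before launching anything (the addresses below are placeholders):

```shell
# ROS reads these once, when a node starts; exporting them later
# has no effect on nodes that are already running.
export ROS_MASTER_URI=http://192.168.1.50:11311   # machine running roscore (placeholder address)
export ROS_IP=192.168.1.51                        # this machine's own address (placeholder)
echo "$ROS_MASTER_URI"
```

Using ROS_HOSTNAME with a name that DNS (or mDNS) keeps up to date, instead of ROS_IP, is one way to avoid baking a changing address into the environment.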

If I can't dynamically change the IP addresses of running nodes, I guess the best solution for a mobile computer running nodes is to have a 4G/LTE modem attached to it? Am I right in assuming that this should provide the same IP address throughout its lifetime?

Or maybe I'll have to make some "communication" nodes on both the robot and the controller.

Thank you for your time.

2015-11-20 15:43:32 -0500 received badge  Nice Question (source)
2015-10-06 18:40:11 -0500 received badge  Taxonomist
2015-09-09 09:10:14 -0500 received badge  Famous Question (source)
2015-03-24 22:02:18 -0500 received badge  Famous Question (source)
2015-03-16 11:02:05 -0500 received badge  Notable Question (source)
2015-03-16 11:02:05 -0500 received badge  Popular Question (source)
2014-10-27 01:24:22 -0500 received badge  Famous Question (source)
2014-10-16 07:57:18 -0500 asked a question point transform from rotation and translation in python

Hi all. The Python tf API has a function for transforming stamped points, but I can't find anything to transform a point with just a rotation and translation...

There is a function for returning a matrix from the translation and rotation... is this the only way of transforming such a point?
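For what it's worth, here's a minimal sketch of the matrix route in plain numpy (tf.transformations' translation_matrix and quaternion_matrix produce 4x4 matrices of exactly this form; the rotation here is just written out by hand for the example):

```python
import numpy as np

# Build a 4x4 homogeneous transform from a rotation and a translation.
theta = np.pi / 2  # 90 degrees about z
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 2.0, 0.0])

T = np.eye(4)
T[:3, :3] = R   # upper-left 3x3 block is the rotation
T[:3, 3] = t    # last column is the translation

# Transform a point by promoting it to homogeneous coordinates:
p = np.array([1.0, 0.0, 0.0, 1.0])
p_out = T.dot(p)[:3]  # rotate first, then translate -> [1.0, 3.0, 0.0]
```

As far as I know, composing the matrix and multiplying is indeed the intended route when you only have a raw rotation and translation rather than a stamped point.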

Cheers, Garrick.

2014-10-13 13:22:37 -0500 received badge  Famous Question (source)
2014-10-12 22:39:52 -0500 commented question publishing "0" points in a point cloud

I think I was using a voxel layer instead of an obstacle layer. Do you think this will make a difference with the clearing? The camera will ultimately generate the point cloud, but in the video I just put in a line of points. This is solved using the cloud-to-laser conversion, but I will now try with an obstacle layer.

2014-10-09 08:22:17 -0500 commented answer publishing "0" points in a point cloud

Thanks heaps, Paul. I'll test it tomorrow.

2014-10-08 08:18:35 -0500 received badge  Notable Question (source)
2014-10-08 05:24:07 -0500 commented answer publishing "0" points in a point cloud

ype boost::shared_ptr<T>::operator->() const [with T = const pcl::PointCloud<pcl::PointXYZ>; typename boost::detail::sp_member_access<T>::type = const pcl::PointCloud<pcl::PointXYZ>*]: Assertion `px != 0' failed.

2014-10-08 05:23:09 -0500 commented answer publishing "0" points in a point cloud

Paul, have you tested that code? The node starts up fine, but when I echo the laser scan topic it dies with this error:

process[pointcloud_to_laserscan-1]: started with pid [8313] pointcloud_to_laserscan: /usr/include/boost/smart_ptr/shared_ptr.hpp:653: typename boost::detail::sp_member_access<T>::t

2014-10-05 14:40:33 -0500 commented answer publishing "0" points in a point cloud

Ah, thanks Paul. That makes sense now.

2014-10-05 14:35:18 -0500 received badge  Popular Question (source)
2014-10-05 08:35:55 -0500 asked a question publishing "0" points in a point cloud

Hi all. Basically my problem is with clearing a costmap when using a point cloud to add obstacles to it.

http://youtu.be/VNzrfEc2JYk?t=29s

The video above pretty much shows what's going on. I just put a couple of points in a straight line (ultimately the point cloud will come from a camera, but this demonstrates the problem I'm having).

It's not clearing. I think I've set the clearing distance fine; I'll post my config once I boot up Ubuntu.

I'm thinking it may be because I need to publish "0" points to clear the line I'm making -- otherwise there's really nothing to clear the costmap with. I'm not too sure what value I should use for these "0" points: I tried 0, and those points actually came up as obstacles too when put into the costmap.
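For reference, a minimal obstacle-layer observation source for a point cloud might look like the following (the topic and frame names are placeholders; `clearing` has to be enabled for the source to raytrace out free space, which is the behavior in question here):

```yaml
obstacle_layer:
  observation_sources: line_cloud
  line_cloud:
    topic: /line_points          # placeholder topic name
    sensor_frame: camera_link    # placeholder frame
    data_type: PointCloud2
    marking: true                # add hits to the costmap
    clearing: true               # raytrace free space through the cloud
    obstacle_range: 2.5
    raytrace_range: 3.0
```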

If anyone can help with this I'll be extremely grateful...

Thank you for your time.

2014-09-02 03:27:12 -0500 received badge  Notable Question (source)
2014-09-01 22:22:47 -0500 received badge  Popular Question (source)
2014-09-01 21:50:15 -0500 received badge  Commentator
2014-09-01 21:50:15 -0500 commented answer attaching Costmap2DROS object to existing costmaps (i.e. from movebase)

I know you can initialize from a static map here: http://wiki.ros.org/costmap_2d/hydro/... But that would require being able to save the map generated from the individual static layer. Can you do this? I could maybe just create another costmap_2d with just the one static layer, then use that.

2014-09-01 21:44:08 -0500 commented answer attaching Costmap2DROS object to existing costmaps (i.e. from movebase)

Thanks for the reply, David. The thing is I'm going to be running 2 static layers, but I need to save and initialize only one of these. I can't save and initialize the superposition of them (i.e. the costmap topic). Is there a way to use map_saver to save an individual layer? If so, this is fine.

2014-09-01 04:24:21 -0500 asked a question attaching Costmap2DROS object to existing costmaps (i.e. from movebase)

Hi all. I think I'm finally on the right track to solving my problem. If you would like to see the original problem, I asked about it here http://answers.ros.org/question/19085... -- still no responses though :(

I should be able to set up individual layers using specific sensors similar to what is described here: http://answers.ros.org/question/83471...

Ultimately, I'm going to set up 2 static layers, and I need to save the map generated by one of these static layers and load it the next time the navigation stack runs.

I think the way of doing this is by accessing the Costmap2DROS C++ object detailed here: http://wiki.ros.org/costmap_2d

Looking at the API, it makes sense that I would be able to create a new one of these objects in a .cpp file, but I don't know how I would go about getting the Costmap2DROS objects that move_base creates when it is launched.

Does anyone have any idea as to how to obtain a C++ interface to the global and local costmaps that move_base instantiates when you run similar launch files to what is on the tutorials? Maybe you can create the object with a parameter such as the topic or node name or something.

Thanks if anyone can help.

Also, I'm on Indigo, so maybe the Costmap2DROS object is not called that anymore.

2014-08-26 23:24:46 -0500 received badge  Notable Question (source)
2014-08-24 23:30:40 -0500 commented answer Navigation stack with "different" sensors

Hi Paul. Thanks for all your help.

I've made a separate question here if you're interested and have any input.

http://answers.ros.org/question/19085...

2014-08-22 11:47:47 -0500 received badge  Popular Question (source)
2014-08-21 11:54:24 -0500 asked a question how to go about this engineering problem (using navigation stack)?

Hi all.

Any help with this would be greatly appreciated.

Basically I'm tasked with making a base go to GPS waypoints whilst avoiding "barrel" obstacles and "white lines" (painted on the ground).

The base has a long-range SICK LMS laser scanner and a video camera for detecting the white lines. Ultimately the camera will output either a point cloud or a laser scan indicating the white lines.

In the meantime I'm using a smaller Hokuyo laser scanner (positioned at a lower level than the SICK) to mimic what the video camera will eventually do (i.e. the Hokuyo can detect things that aren't high enough for the SICK to detect).

I plan to localize with the SICK LMS using the barrels that will be scattered around the course. So I run the navigation stack in the map_sick frame of reference (this is output by gmapping). I use this for the global and local costmaps (even though usually odom is used for local).

I've set static map to false and the thing runs -- avoiding obstacles that the Hokuyo detects and the SICK -- and I'm pretty sure neither of them is clearing the other.

This was pointed out to me by paulbovbel, who explained they run in separate "layers". This is all good, but there is an additional functionality required whereby the robot needs to save the white lines it detects, so when it goes through the course again, it already has quite a number of white lines and can plan its global path better.

I understand the navigation stack has a static layer and that this can be seeded/initialized (usually from map_server?). So by setting static map to false, is the navigation stack creating its own static map from the laser sources? If so, are there two static maps -- one for the Hokuyo and one for the SICK? Or is this not part of the navigation stack functionality?

If this is not the case, I could maybe set static map to true, and I think I read this can subscribe to the topic gmapping puts out and use that to create the static map (I think I read the static map keeps updating as gmapping does).

Then maybe the way to save and use the "white lines" is to create another static map layer (if you can) and make my own node that creates a map of the detected lines in the SICK gmapping frame of reference.

But then maybe the nav stack does this anyway -- it seems like it's subscribing to the Hokuyo/video camera topic and creating a map in the SICK gmapping frame.

Additionally, I really only need "static" layers (no one is going to walk in front of the base), so can I disable the obstacle layers for each scanner?

Also, do I really even need the local costmap then too? I'm not sure of its purpose (and why it's usually set to ... (more)
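As a rough sketch of what a two-static-layer setup might look like with the Hydro+ layered costmap plugins (the layer names and the /line_map topic are made up for illustration; whether a second StaticLayer behaves well when seeded from a custom map topic is exactly the thing to verify):

```yaml
global_costmap:
  plugins:
    - {name: static_map, type: "costmap_2d::StaticLayer"}     # seeded from /map (gmapping)
    - {name: line_map,   type: "costmap_2d::StaticLayer"}     # hypothetical second static layer
    - {name: inflation,  type: "costmap_2d::InflationLayer"}
  line_map:
    map_topic: /line_map   # StaticLayer's map_topic param, pointed at your own line-map publisher
```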

2014-08-21 08:54:40 -0500 received badge  Enthusiast
2014-08-20 04:32:34 -0500 received badge  Self-Learner (source)
2014-08-20 04:32:34 -0500 received badge  Teacher (source)
2014-08-20 04:32:22 -0500 received badge  Famous Question (source)
2014-08-19 22:25:11 -0500 answered a question Navigation stack with "different" sensors

Hey guys. Just a few more questions about the static layer of the navigation stack.

I had the nav stack going with two laser scanners. One of them was running gmapping, and it was this frame that I used for the navigation stack (both global and local costmaps, even though normally it's odom for local).

In the local and global costmap params I set the static map parameter to false... So does this mean the navigation stack is creating its own static map from both of the laser scanners? If so, can I eventually load this static layer at the start of a new run? What's to stop the longer-range laser scanner from overwriting the white lines that are only detected when the camera gets relatively close to them?...

When considering this, it seems like I have to set the static map parameter to true, then get the map generated by gmapping, then copy it and add on the detected lines, and this will be the map that the navigation stack subscribes to in order to generate the static map layer.

Further, since I'm mainly dealing with obstacles that are static, and I only really need a static map layer, can I disable the obstacle layers?

Also, maybe it's possible to create two static map layers, one for the laser scanner data and one for the camera? (This might be what the nav stack does anyway, when you specify 2 sensors.)

If you can have two static map layers, one for each sensor, and you load a map at the start of a run, will the navigation stack update the static map if something has slightly changed? Say if a barrel has moved a meter from where it originally was.

Any comments would be greatly appreciated.

I'm just trying to figure out the best way of going about this problem using ROS's navigation stack.

2014-08-17 08:28:24 -0500 commented answer Navigation stack with "different" sensors

Thanks for the response Paul.

Just one more quick question...

Should the individual layers for each sensor be able to initialize/seed from a saved map?

i.e. I'll save the static maps for each layer eventually, then initialize them the next time I run. Is there any functionality for this?

Thanks, Garrick.

2014-08-17 07:01:50 -0500 received badge  Notable Question (source)
2014-08-16 15:03:19 -0500 received badge  Popular Question (source)
2014-08-16 05:05:29 -0500 asked a question Navigation stack with "different" sensors

Hi all. If this question has been asked somewhere else, please direct me there instead of me repeating it =).

Essentially I'm tasked with making a p3-at base navigate a course such that it can't go over "white lines" marked on the ground or run into random barrels placed throughout the course.

The base has a SICK LMS100 laser scanner which gmapping will be running on to localize, and it will also be fed into the navigation stack to add the barrels to the costmap.

We're working on a camera that will ultimately output the "white lines" as laser scanner data. This, however, won't have much range, and I think this can be specified in the laser msg.

The problem I envisage occurring is that the LMS100 is going to clear a lot of what the camera is going to pick up.

I was thinking a way to counteract this would be to run costmap_2d individually for both scanners, then essentially superimpose these two maps, and this combined map will be the one that gets inflated and used to navigate by move_base.

Have other users encountered similar situations, and did you go about it this way or another way?

Essentially, it's two scanning sources that aren't detecting the same thing (i.e. barrels at a certain height for the LMS100 and lines on the ground for the camera).

Will superimposing the costmaps take too much time?

Maybe the navigation stack already incorporates this by tying the sensor source to each pixel of an obstacle, but I doubt it.

If I went with my plan, I could then seed each map with its own initial map, and it shouldn't get overwritten because of the individual sensor ranges?

If any guidance can be given I would be greatly appreciative.

2014-08-13 01:06:55 -0500 received badge  Famous Question (source)
2014-03-07 19:57:16 -0500 received badge  Notable Question (source)
2014-02-25 15:44:02 -0500 commented answer master slave and changing IP addresses

That's a good idea. If the DNS doesn't pull through I'll give it a try. I've also got a 3G modem now, and possibly 4G in the future, so I might not need to bother =). Thanks.

2014-02-25 12:30:12 -0500 commented answer master slave and changing IP addresses

to get names. ROS could be smart about this and, I guess, check the server for changes in IP addresses and change them accordingly. But I don't know; I'll try it out today. Sorry about posting in the wrong places. Thanks.

2014-02-25 12:29:09 -0500 commented answer master slave and changing IP addresses

Thanks for the reply. I'll try mucking around with hostnames. With the "table" thing, I was thinking that's how hostnames work: there is some server that stores (hostname, IP address) pairs, and if a host's IP address changes it updates the pair, and other computers look up this "table"...