
TBnor's profile - activity

2021-03-14 17:40:57 -0500 received badge  Taxonomist
2020-02-23 16:14:38 -0500 received badge  Famous Question (source)
2018-11-17 14:01:01 -0500 received badge  Student (source)
2018-07-19 07:14:59 -0500 received badge  Famous Question (source)
2018-05-01 03:42:52 -0500 received badge  Famous Question (source)
2018-05-01 03:42:52 -0500 received badge  Notable Question (source)
2018-04-26 22:16:53 -0500 received badge  Notable Question (source)
2017-09-23 06:02:15 -0500 received badge  Notable Question (source)
2017-09-23 06:02:15 -0500 received badge  Popular Question (source)
2017-09-15 15:57:02 -0500 received badge  Popular Question (source)
2017-09-15 02:05:36 -0500 edited question NetworkingSetup (#1 to #3 through #2)

NetworkingSetup (#1 to #3 through #2) The network I need to run my system has multiple constraints that would be easy to

2017-09-14 13:05:57 -0500 edited question NetworkingSetup (#1 to #3 through #2)

NetworkingSetup (#1 to #3 through #2) The network I need to run my system has multiple constraints that would be easy to

2017-09-14 13:05:38 -0500 edited question NetworkingSetup (#1 to #3 through #2)

NetworkingSetup (#1 to #3 through #2) The network I need to run my system has multiple constraints that would be easy to

2017-09-14 13:05:38 -0500 received badge  Editor (source)
2017-09-14 13:04:37 -0500 asked a question NetworkingSetup (#1 to #3 through #2)

NetworkingSetup (#1 to #3 through #2) The network I need to run my system has multiple constraints that would be easy to

2017-09-11 17:03:58 -0500 edited answer Complex nodes (mapping, nav) on embedded, resource constraint platforms (rpi, intel edison): feasibility / performance?

I've worked quite a bit with these issues, and I would say that you'd be surprised how extremely efficient ROS is. I mus

2017-09-11 16:57:25 -0500 answered a question Complex nodes (mapping, nav) on embedded, resource constraint platforms (rpi, intel edison): feasibility / performance?

I've worked quite a bit with these issues, and I would say that you'd be surprised how extremely efficient ROS is. I mus

2017-06-19 10:06:54 -0500 received badge  Enthusiast
2017-06-18 12:23:48 -0500 asked a question Hokuyo, Velodyne, up to date experiences?

Hokuyo, Velodyne, up to date experiences? Hello, I just messed up my rplidar. Kinda needed that, but the range is t

2017-04-13 04:47:44 -0500 received badge  Popular Question (source)
2017-03-16 04:18:30 -0500 commented question Stereo camera, preprocessing, transforms and frames = pointcloud?

Yes! I will do just that as soon as possible; feel free to close/delete this one. I've worked ~10 hours since then and talked to the distributor, which clarified a whole lot.

2017-03-15 05:16:51 -0500 asked a question Stereo camera, preprocessing, transforms and frames = pointcloud?

Hello,

I am trying to avoid the simple questions, so trust me; I've really tried to figure this out for myself. But I find this to be beyond my understanding.

Situation: I am acquiring a BGR-preprocessed stereo view, but this has raised some issues for me, while the camera software is aimed towards Indigo's (and Jade's?) "uvc-camera".

For concreteness, let's assume the simplest version of a "differential wheeled robot", with a base footprint, base link etc., as frequently addressed in tutorials.

My goal is to achieve perception of the environment utilizing as much as possible from generic packages.

The camera is published under the /stereo/ namespace.

1) How do you describe a stereo camera through xacro/urdf?
- Is the camera "one", with e.g. an "optical frame" as origin, or should the left and right cameras be described separately?
- Which frames/transforms am I advised to implement?
- Should I utilize a dedicated driver?

2) I am getting a recurring error message about the packages expecting a colored image. At the same time I am facing all these other challenges, making it impossible for me to deduce whether this is a cause or a consequence.
- I have implemented stereo_image_proc (see the command sketch at the end of this post), but without these issues addressed I find it natural that I don't get any disparity or pointcloud data.

3) If colored images are required, could it be an idea to process the captured BGR frames, use OpenCV to create a colored image intensifying parameters of interest, and then feed these to the stereo-vision algorithms at a lower framerate? (I have managed to utilize the BGR images for CV processing by using a "cv_bridge" node.)

4) Related to the above: how do I successfully propagate odometry/pose in a simple conceptual system where an EKF node provides filtered sensor data? I'm assuming I should remap a topic somewhere, but I'm struggling to catch the essential factor here.

5) In general, as with the camera, I'm struggling with the countless possibilities relating to frames, tfs and dedicated drivers. Should I use a kobuki driver or a differential-drive driver? What are the main pros and cons? (It should be mentioned that I won't receive my wheel encoders before the end of next week, so for now I will have to estimate odometry.)

I hope I'm not completely off track here, thanks in advance. Please notify me if I should change and/or elaborate on any of these issues.
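For reference, I currently start stereo_image_proc roughly like this (a minimal sketch; the /stereo namespace matches my setup above, and my exact remappings may differ):

    # run the stereo pipeline inside the /stereo namespace
    # (it expects left/image_raw, right/image_raw and the matching camera_info topics there)
    ROS_NAMESPACE=stereo rosrun stereo_image_proc stereo_image_proc

    # quick checks of the output
    rosrun image_view stereo_view stereo:=/stereo image:=image_rect_color
    rostopic echo /stereo/points2 --noarr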

2017-03-15 04:36:50 -0500 answered a question How to do SLAM with just RaspberryPi3 PiCam and IMU Sensor with ORBSLAM2 ?

I think you will find it challenging to implement without any perception of depth. You would have to "fabricate" the environment based on 2D computer vision, providing a pointcloud / laserscan to the stack.

The RPi probably lacks the computing power as well, but that is an assumption based on my experience working with depth perception, given the amount of data published by these devices.

Don't take my word for anything, I'm just a couple of months into ROS myself.

2017-01-14 23:45:22 -0500 received badge  Scholar (source)
2017-01-14 23:45:20 -0500 received badge  Supporter (source)
2017-01-14 23:45:07 -0500 commented answer Catkin workspaces, packages “built from scratch”, builds vs source-files, interoperability in builds.

Thanks, this was spot on :)

See you around, I guess.

TB

2017-01-14 23:43:34 -0500 received badge  Notable Question (source)
2017-01-14 07:10:50 -0500 received badge  Popular Question (source)
2017-01-12 15:02:41 -0500 asked a question Catkin workspaces, packages “built from scratch”, builds vs source-files, interoperability in builds.

Hello,

I picked up ROS around Christmas, and I'm struggling with some core concepts here. I've read all the tutorials, multiple times, so please don't just refer me to one of those…

When I follow a tutorial I will most likely end up initializing a workspace, adding a package or even "creating one", before making the catkin workspace.

Now I can roscore, rosrun and list nodes to my heart's content. Everything works and I'm having a good time. Then I see I can extend on the previous steps, and in a matter of minutes I am no longer having a good time.

Now I am making new workspaces, adding packages, deleting packages, by hand and by commandline, really messing everything up big time.

I've attempted to define the core of my problems (please forgive my noobishness; I've barely used cmake before and I wrote my first code in 2015):

When first initializing and making some package or set of packages (the exact commands I've been following are sketched at the end of this post):
1) What is the correct way of adding/removing packages from a workspace?
2) Should dependencies always be listed when creating a package (catkin_create_pkg package_name_pkg rospy rviz etc.), before adding the "src" content of the package?
3) Is there any real catch (for a newb) to overlaying the workspace by echoing the source line into ~/.bashrc?

When the initial workspace is up and running:
1) Can I just add, e.g., a launch file without doing anything more / building something?
2) If I did something wrong, maybe wrote xviz instead of rviz, how can I fix this? Do I have to re-initialize the whole package?

When I want to extend the existing workspace with an additional package (my main issue):
1) When downloading a package from the web, do you always have to make it with catkin before it can be used?
1.1) If not, how can I tell when I can just use the script after placing it in the folder, versus when I have to remake/update something?
2) Is there a difference between how packages work? For example, could I download a built package from one site, while another site provides me with the source files that have to be built?

In general, is it recommended to add more new packages, or to edit the one(s) already in place? (Not talking large-scale teamwork, but e.g. one dedicated and experienced individual creating something.)

Bonus Q:
1) When multiple devices are supposed to interact, would they all need the same catkin workspace? Would every device compile its own in full?

Sorry, I have a hard time really expressing what I am struggling with. I would deeply appreciate any kind of general (basic) discussion around all related topics, so please don't try to be short.
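For context, the workflow I have been following from the tutorials is roughly this (a minimal sketch; the paths and the package name are just placeholders):

    mkdir -p ~/catkin_ws/src
    cd ~/catkin_ws/src
    catkin_create_pkg my_package rospy std_msgs   # dependencies listed up front
    cd ~/catkin_ws
    catkin_make                                   # build the workspace
    source devel/setup.bash                       # overlay it on the environment (or add this line to ~/.bashrc)
    rosrun my_package my_node.py                  # my_node.py stands in for an executable script inside the package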