
TimboInSpace's profile - activity

2018-05-28 02:56:20 -0500 received badge  Good Answer (source)
2017-10-23 11:22:30 -0500 commented answer Raspberry Pi +Hydro + Openni/freenect

For anyone reading this: yeah, I got OpenNI working, but NOT OpenNI2. I'd highly recommend using a different depth camera…

2017-10-23 11:19:37 -0500 commented answer Mapping with Sonar Data?

Not sure about MarkyMark2012, but I used mine with gmapping on a fairly coarse map. It worked decently in the end. Good luck!

2017-07-09 14:14:18 -0500 received badge  Nice Answer (source)
2016-12-11 10:28:26 -0500 received badge  Taxonomist
2016-06-01 01:13:56 -0500 received badge  Necromancer (source)
2016-05-31 12:15:11 -0500 answered a question Mapping with Sonar Data?

Hi MarkyMark,

I have had success with this setup. I made a small differential-drive bot that mapped using sonar, wheel odometry, and inertial sensors. Some of the key parts that made it work:

- Stagger your sonar sensors' sampling phase, especially if the sensors are looking in the same direction (or opposite, 180 degrees apart). Use time division on the sampling. This greatly reduces noise.
- BUFFER YOUR SONAR DATA. This is extremely important! My implementation used two sonar sensors that swept side to side. To get it to work, I had to buffer for ~0.5 s, thus creating a "laserscan" of about 25 data points. It will not work very well if you publish single- or two-point "laserscans" (see the sketch after this list).
- Use a coarse occupancy grid map. I had good results with 2.5 cm pixels and larger. (Also, since I was doing this on an Arduino, it was a great way to drop the floating-point numbers and use raw bytes instead.)
- Your localization will suffer from the extra delay due to buffering the sonar. If your robot doesn't need to respond quickly, it's better to use a coordinate transform snapshotted during the middle of the buffering operation, not at the end.
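To make the buffering idea concrete, here's a minimal rospy sketch of it. This is illustrative, not my actual Arduino code; the frame name, range limits, and the way add_reading/flush get wired to your sonar driver are all placeholders:

    import rospy
    from sensor_msgs.msg import LaserScan

    class SonarBuffer(object):
        def __init__(self):
            self.readings = []  # (angle_rad, range_m) pairs from the sweep
            self.pub = rospy.Publisher('scan', LaserScan, queue_size=1)

        def add_reading(self, angle, rng):
            # Call this once per sonar ping as the servo sweeps.
            self.readings.append((angle, rng))

        def flush(self):
            # Package the ~0.5 s buffer as one multi-point "laserscan".
            if len(self.readings) < 2:
                return
            self.readings.sort(key=lambda ar: ar[0])
            scan = LaserScan()
            # Stamp at the middle of the buffering window, per the advice above.
            scan.header.stamp = rospy.Time.now() - rospy.Duration(0.25)
            scan.header.frame_id = 'sonar_frame'
            scan.angle_min = self.readings[0][0]
            scan.angle_max = self.readings[-1][0]
            scan.angle_increment = ((scan.angle_max - scan.angle_min)
                                    / (len(self.readings) - 1))
            scan.range_min = 0.03
            scan.range_max = 4.0
            scan.ranges = [r for _, r in self.readings]
            self.readings = []
            self.pub.publish(scan)

    if __name__ == '__main__':
        rospy.init_node('sonar_buffer')
        buf = SonarBuffer()
        # Hook add_reading() up to your sonar driver, then flush every ~0.5 s.
        rospy.Timer(rospy.Duration(0.5), lambda evt: buf.flush())
        rospy.spin()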

So YES, this was a success for my project. Slow localization wasn't an issue for the environment I was using the robot in. If you have an extra couple hundred dollars, though, I'd opt for a cheap laser scanner; the objective of my project was to do SLAM on as small a budget as possible.

2015-08-26 14:43:17 -0500 received badge  Famous Question (source)
2015-05-10 22:43:45 -0500 received badge  Notable Question (source)
2015-05-10 22:43:45 -0500 received badge  Popular Question (source)
2015-04-13 18:10:59 -0500 received badge  Critic (source)
2015-04-10 15:20:00 -0500 answered a question Raspberry Pi +Hydro + Openni/freenect

I battled with the Kinect and never gave up! I have a "working" implementation as a disk image (.img) and some code at https://github.com/TimboInSpace/Armin/ . It eventually outputs an 8-bit depth image as a 16-bit, millimetre-encoded ROS depth image. The intention was Kinect --> RasPi --> laptop.

As far as I can tell, it's one of the few working ROS projects using the Kinect with a Raspberry Pi 1 Model B+. My implementation uses the librekinect driver, but it has big limitations: a 6 fps frame rate, 640x40 resolution, and a connection that often drops for ~1 s periods.

Try it out. I just bought a Raspberry Pi 2, which can probably handle OpenNI, so I'll be trying that next, as my current implementation is sub-par.

2015-04-07 15:26:08 -0500 answered a question p2os driver with ser2net

I can confirm it works. My setup does serial A <-> computer A <-> WiFi <-> computer B <-> serial B, using ser2net and socat. Your Debian ser2net conf is correct:

60001:raw:0:/dev/ttyS0:9600 8DATABITS NONE 1STOPBIT

On ubuntu, install socat and run this:

sudo socat pty,link=/dev/ttyV0,echo=0,raw tcp:[debian hostname]:60001

Then, on your Ubuntu computer, you can access the Debian computer's /dev/ttyS0 through the local virtual serial port /dev/ttyV0.
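Once the tunnel is up, point your ROS driver at the virtual port instead of real hardware. A hedged example for p2os (the exact node and parameter names depend on your p2os version, so check your launch files; the "port" parameter here is an assumption):

    rosrun p2os_driver p2os_driver _port:=/dev/ttyV0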

2015-04-07 14:38:27 -0500 answered a question Roomba 560 "Could not connect to Roomba" Error in ROS (Ubuntu 12.04)

Try enabling hardware flow control. That should raise and lower the RTS pin for you from minicom. Let me know your results; I'm working on the same problem and have so far had no luck connecting to the Roomba...
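If you want to script the RTS handling instead of relying on minicom, here's a minimal pyserial (3.x) sketch. The port name, timing, and baud rate are assumptions I haven't tested on a 560:

    import time
    import serial

    ser = serial.Serial('/dev/ttyUSB0', baudrate=115200, timeout=1)
    # Pulse RTS to wake the Roomba before talking to it.
    ser.rts = False
    time.sleep(0.1)
    ser.rts = True
    time.sleep(2)
    ser.write(b'\x80')  # Open Interface START opcode (128)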

2015-04-06 10:13:14 -0500 received badge  Commentator
2015-04-06 09:59:16 -0500 answered a question p2os driver with ser2net

I'll verify later today, but there should be a conf file called ser2net in the ser2net examples directory. Try matching your conf to that example.

2015-03-24 14:48:05 -0500 received badge  Teacher (source)
2015-03-24 14:31:05 -0500 answered a question Laser scan to probabilistic map based on occupancy grid

It seems to me like gmapping would still work. Just point gmapping's ~map_frame parameter at some local map frame that you have defined, and also set a static transform between the robot base and this local frame. After that, you'll have to manage the global map <--> odometry frame transform by some other means, but it should accomplish what you need.
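A hedged sketch of that setup on the command line (frame names are placeholders; adjust the transform direction to fit your tf tree):

    rosrun gmapping slam_gmapping scan:=/scan _map_frame:=local_map

    rosrun tf static_transform_publisher 0 0 0 0 0 0 local_map base_link 20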

2015-03-23 12:57:57 -0500 answered a question How to create a map data for navigation stack using kinect

RGBDSLAM is the best choice if you have both the RGB and depth streams, but depthimage_to_laserscan is better if you only have the depth data (my project is like this). depthimage_to_laserscan requires two things: the depth image topic and the camera_info topic (see the monocular camera calibration tutorials). You'll also need to set up a coordinate frame to reference the Kinect to. For example, with my stationary Kinect 360, I use these:

rosrun tf static_transform_publisher 0 0 0 0 0 0 map camera_depth_frame 20

rosrun depthimage_to_laserscan depthimage_to_laserscan image:=/kinect/mono16/image_raw camera_info:=/kinect/mono16/camera_info _scan_height:=20 _scan_time:=0.167 _range_min:=0.75 _range_max:=2.5

You'd probably want to use a larger scan height if possible, but my slow little Raspberry Pi cannot handle much more than 640x40 at 6 fps, so I'm limited in height.

After that, you'll see a new /scan topic. Use that in place of a laser scanner as described in the navigation stack tutorials: http://wiki.ros.org/navigation/Tutori...

2014-12-18 21:30:54 -0500 asked a question Depthimage_to_laserscan scan height too large (at 1 pixel)

I'm having an issue using depthimage_to_laserscan. After I subscribe to the topic (it uses lazy subscription), it throws an error:

Could not convert depth image to laserscan: scan_height ( 1 pixels) is too large for the image height.

This seems strange to me. Checking the headers of both the image topic and camera_info using rostopic echo, I find they both report 640x40 pixels (not a typo), the expected size.
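For reference, the check looks something like this (topic names are from my setup):

    rostopic echo -n 1 /kinect/mono16/camera_info | grep -E 'height|width'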

Any ideas what could be causing this?

It should be noted that I am running this off a Raspberry Pi, so I'm using librekinect and OpenCV to create the images, with a camera_info_manager to hand out the camera_info. Only the interface to depthimage_to_laserscan causes issues: other uses of the image work fine. This post outlines a possible solution, but it does not apply here as I'm not using OpenNI.

Thanks

2014-12-18 13:24:13 -0500 commented answer camera_info override

You don't need to fake it. One option is: on the non-Android device, set up a camera_info_manager publishing on a different camera name than your current one. Then set up an image_transport republisher on the same name as your new camera info manager. Hope it goes well.
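A sketch of the republish step, with placeholder topic names:

    rosrun image_transport republish raw in:=/camera/image raw out:=/calibrated_camera/image_raw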

2014-12-15 15:14:10 -0500 answered a question camera_info override

You need to run cameracalibrator.py (from the camera_calibration package), obtain a calibration .yaml file from that script, then pass the URL of that calibration file to your image publisher as a parameter.
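For example, the standard monocular calibration invocation looks like this (checkerboard size, square size, and topic names are from the tutorial; adjust them to your setup):

    rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.108 image:=/camera/image_raw camera:=/camera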

This page gives details on the calibration process.

What are you using to publish these images? Does it have a camera_info_url parameter, or anything like that?

2014-12-15 12:23:34 -0500 commented answer how to publish sensor_msgs::CameraInfo messages ?

To add to Thomas D's comment, another way to do this would be to use message_filters::TimeSynchronizer, set up to subscribe to each camera.
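A minimal Python sketch of that idea, with placeholder topic names:

    import rospy
    import message_filters
    from sensor_msgs.msg import Image

    def callback(left_image, right_image):
        # Both images are guaranteed to carry identical timestamps.
        pass

    rospy.init_node('camera_sync')
    left_sub = message_filters.Subscriber('left/image_raw', Image)
    right_sub = message_filters.Subscriber('right/image_raw', Image)
    ts = message_filters.TimeSynchronizer([left_sub, right_sub], 10)
    ts.registerCallback(callback)
    rospy.spin()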

2014-12-02 16:09:55 -0500 received badge  Supporter (source)
2014-11-16 23:41:52 -0500 commented answer sensor_msgs/Image encoding conversion

Were you able to get any other conversions to work? I'm trying the same thing with RGB8 -> mono16, with no luck: encoding specified as mono16, but image has incompatible type 8UC3

Edit: My stream was only coming in as RGB8. That was the default in OpenCV; changing it fixed the issue.
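For anyone hitting the same error: cv_bridge won't convert 8UC3 straight to mono16, so a workaround is to go via mono8 and widen manually. A hedged sketch (the scale factor is an arbitrary assumption):

    import numpy as np
    from cv_bridge import CvBridge

    bridge = CvBridge()

    def rgb8_to_mono16(msg):
        # Let cv_bridge handle the colour-to-grey conversion first...
        mono8 = bridge.imgmsg_to_cv2(msg, desired_encoding='mono8')
        # ...then widen to 16-bit with an explicit, hypothetical scale.
        mono16 = mono8.astype(np.uint16) * 256
        out = bridge.cv2_to_imgmsg(mono16, encoding='mono16')
        out.header = msg.header
        return out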

2014-11-16 19:04:41 -0500 commented question kinect calibration problem

This looks like the same issue: http://answers.ros.org/question/9550/... Are you using OpenNI?

2014-11-02 13:46:38 -0500 commented answer alternatives to 3d sensors

Now, I said it's possible, but the disclaimer is that it will never be as good as a laser. The main gmapping tweaks:

- Set the map resolution to match your sensors. For my SRF-04s, I used 4 cm.
- Increase the particle count, to account for the relatively low sample rate.
- Buffer the ultrasonic data and publish it as sets of points (see the example below)!
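A hedged version of those tweaks on the command line (the particle count here is illustrative; gmapping's default is lower):

    rosrun gmapping slam_gmapping scan:=/scan _delta:=0.04 _particles:=80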

2014-10-24 20:21:18 -0500 answered a question alternatives to 3d sensors

It's totally possible. I have a little robot that does some basic SLAM using just two ultrasonic rangefinders. However, I found that gmapping took some heavy tweaking to work properly with such noisy, low-spatial-resolution sensors. The whole project took a couple of XBees, some servos, an Arduino, the rangefinders, and my laptop.

2014-09-25 15:51:20 -0500 commented question XBee Network Error

I had a (mostly) working XBee network going with ROS Hydro, but when I upgraded to Indigo last night, I got this error immediately. I haven't found any solution yet. Edit: I just removed all the ros-indigo-rosserial-* packages and copied my old rosserial from Hydro to Indigo. Now it's working fine.

2014-08-04 03:38:09 -0500 received badge  Famous Question (source)
2014-05-27 21:07:15 -0500 received badge  Notable Question (source)
2014-04-20 16:22:02 -0500 received badge  Popular Question (source)
2014-04-19 14:10:24 -0500 commented answer update the map_server

You're right. I changed the Mapper node to publish its own OccupancyGrid, and this was a lot simpler. It's working quite well now.
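For anyone curious, publishing your own OccupancyGrid is roughly this simple (frame name, dimensions, and origin are placeholders):

    import rospy
    from nav_msgs.msg import OccupancyGrid

    rospy.init_node('mapper')
    pub = rospy.Publisher('map', OccupancyGrid, queue_size=1, latch=True)

    grid = OccupancyGrid()
    grid.header.frame_id = 'map'
    grid.info.resolution = 0.04          # metres per cell
    grid.info.width = 100
    grid.info.height = 100
    grid.info.origin.position.x = -2.0   # bottom-left corner of the map
    grid.info.origin.position.y = -2.0
    # Cell values: -1 = unknown, 0 = free, 100 = occupied.
    grid.data = [-1] * (grid.info.width * grid.info.height)

    grid.header.stamp = rospy.Time.now()
    pub.publish(grid)
    rospy.spin()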

2014-04-19 14:09:13 -0500 received badge  Scholar (source)
2014-04-19 06:25:40 -0500 asked a question update the map_server

Does anyone know a way to update the map that map_server uses? AFAIK, the map is set to the contents of an image file as soon as the node is launched, then remains static for the life of the node. My situation is as follows: I have two nodes doing this job, mapper and map_server. Mapper records ultrasonic range measurements into a PNG file, and has a service for saving updated versions of the map. How can I push updates of the map to my map_server node? Will I have to relaunch the map_server node every time? Thanks

2014-04-18 13:05:08 -0500 commented answer Mapping with Sonar Data?

Would you be able to post your homebrew solution somewhere? I'm attempting a very similar project and am about to go down the ultrasonic-to-laserscan conversion route. Also, did you eventually have any success with this approach?

2014-02-13 10:36:36 -0500 received badge  Enthusiast
2014-01-31 07:47:13 -0500 commented answer rosrun rosserial_python serial_node.py

I've had issues with this in the past when launching the serial node right after rviz. It was causing rviz to crash immediately, but it works fine when the two are launched separately. Not sure why.

2014-01-24 13:47:11 -0500 commented question Can't run or compile anything !!!! -> symbol lookup error: /opt/ros/hydro/lib/libroscpp.so: undefined symbol: _ZN3ros6HeaderC1Ev

I was having the exact same problem when I found this thread. My issue seems to have been caused by OS updates running into errors while updating the ROS packages.