
ee.cornwell's profile - activity

2018-07-23 10:01:52 -0500 received badge  Notable Question (source)
2015-10-30 17:58:42 -0500 received badge  Notable Question (source)
2015-10-06 04:46:59 -0500 received badge  Nice Answer (source)
2014-09-24 13:31:20 -0500 received badge  Popular Question (source)
2014-02-04 22:49:33 -0500 received badge  Famous Question (source)
2014-01-28 17:29:42 -0500 marked best answer robot_pose_ekf timing error

I am getting the error:

ERROR: filter time older than vo message buffer

when I run only the Kinect through openni_launch -> fovis -> robot_pose_ekf

I am aware that this issue occurs when "two sensor inputs have timestamps that are not synchronized", as described at http://ros.org/wiki/robot_pose_ekf/Troubleshooting

My questions are:

1.) I am only using the vo message topic from fovis. Could the errors be caused by the depth and RGB cameras being out of sync?

2.) How can I correct this issue? I am aware of approximate and exact time message filters and fovis implements both.
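For reference, the ApproximateTime policy pairs messages whose stamps are merely close rather than identical. The sketch below is a toy, non-ROS illustration of that matching idea (the function name and `slop` parameter are hypothetical; the real message_filters implementation is more sophisticated):

```python
# Toy illustration of an ApproximateTime-style policy: each stamp from
# stream A is matched to the closest unused stamp from stream B, and the
# pair is kept only if the difference is within a "slop" window.
def approximate_pairs(stamps_a, stamps_b, slop=0.02):
    """Return (a, b) stamp pairs whose difference is at most `slop` seconds."""
    pairs = []
    used = set()
    for a in stamps_a:
        candidates = [b for b in stamps_b if b not in used]
        if not candidates:
            break
        b = min(candidates, key=lambda b: abs(b - a))
        if abs(b - a) <= slop:
            pairs.append((a, b))
            used.add(b)
    return pairs

# 0.00 s pairs with 0.005 s (within slop); 0.10 s finds no partner.
print(approximate_pairs([0.00, 0.10], [0.005, 0.30]))  # [(0.0, 0.005)]
```

A message arriving well outside the slop window simply never forms a pair, which is why badly skewed sensor clocks starve the synchronized callback.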

2014-01-28 17:29:24 -0500 marked best answer viso2_ros package make error

I am trying to rosmake viso2_ros but I am getting this error:

/home/eric/fuerte_workspace/sandbox/viso2/viso2_ros/src/stereo_odometer.cpp: In member function ‘virtual void viso2_ros::StereoOdometer::imageCallback(const ImageConstPtr&, const ImageConstPtr&, const CameraInfoConstPtr&, const CameraInfoConstPtr&)’:
  /home/eric/fuerte_workspace/sandbox/viso2/viso2_ros/src/stereo_odometer.cpp:112:47: error: conversion from ‘ros::WallTime’ to non-scalar type ‘ros::Time’ requested
  /home/eric/fuerte_workspace/sandbox/viso2/viso2_ros/src/stereo_odometer.cpp:222:61: error: no match for ‘operator-’ in ‘ros::WallTime::now()() - start_time’

Can anyone tell me why I am getting this make error? I am running Fuerte on Ubuntu Precise. Thanks

2014-01-28 17:29:22 -0500 marked best answer Kinect frame synchronization: does ros openni_launch provide message filters?

I am curious whether the openni_launch package already provides message filters such as an ApproximateTime message filter.

I am getting a warning when running the Kinect such as:

[WARN] [1361652036.274076110]: [image_transport] Topics '/camera/depth_registered/image_rect' and '/camera/depth_registered/camera_info' do not appear to be synchronized. In the last 10s:
    Image messages received:      100
    CameraInfo messages received: 196
    Synchronized pairs:           14

Ubuntu, Precise, Fuerte

2014-01-28 17:29:20 -0500 marked best answer pcl coordinate system orientation

I am aware that ROS uses the convention x=forward, y=left, and z=up, but can anyone tell me what convention PCL data uses?
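For context, point clouds coming off a camera are usually expressed in the optical convention (x=right, y=down, z=forward). A minimal sketch of re-expressing such a point in the ROS body convention (the helper name is hypothetical):

```python
# A point in the camera optical convention (x=right, y=down, z=forward)
# re-expressed in the ROS body convention (x=forward, y=left, z=up).
def optical_to_ros(x, y, z):
    # forward = optical z, left = -optical x (right), up = -optical y (down)
    return (z, -x, -y)

# A point 2 m straight ahead of the lens is 2 m forward in the body frame.
print(optical_to_ros(0.0, 0.0, 2.0))  # (2.0, -0.0, -0.0)
```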

2014-01-28 17:29:20 -0500 marked best answer How to convert pointcloud2 to separate depth and rgb image messages

I want to convert a filtered PointCloud2 message into separate depth and RGB image messages. RGBDSLAM requires an RGB image topic as well as a depth image topic as input. I have tried just passing the filtered PointCloud2 message in as "config/topic_points", but no frames are captured. Any ideas?

System: Ubuntu, Fuerte, Precise
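For context, each point in an organized cloud from openni_launch carries x, y, z plus an rgb field stored as a packed float32, so splitting the cloud back into images is mostly reshaping plus unpacking that field. A self-contained sketch of the packing trick (helper names are hypothetical; the field layout assumed here is the common pcl packing):

```python
import struct

def unpack_rgb(rgb_float):
    """Reinterpret the packed float32 rgb field as (r, g, b) bytes."""
    packed = struct.unpack('I', struct.pack('f', rgb_float))[0]
    return (packed >> 16) & 0xFF, (packed >> 8) & 0xFF, packed & 0xFF

def pack_rgb(r, g, b):
    """Inverse: pack (r, g, b) into the float32 bit pattern pcl uses."""
    return struct.unpack('f', struct.pack('I', (r << 16) | (g << 8) | b))[0]

# Round trip: pack a color into the float field, then recover it.
print(unpack_rgb(pack_rgb(10, 20, 30)))  # (10, 20, 30)
```

The depth image is simpler still: for an organized HxW cloud, the z values reshaped to (H, W) are the depth image.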

2014-01-28 17:29:01 -0500 marked best answer image_pipeline svn no longer exists on server

I'm trying to svn checkout image_pipeline using the link the ros wiki suggests, but the server is not responding. I am using electric and need a svn source for image_pipeline. Can anyone tell me why image_pipeline is no longer on the svn server?

2013-09-20 04:32:07 -0500 commented answer problem calibrating kinect ROS Fuerte

It sounds like the .ini file is either not in your current directory or you didn't change the file extension from .txt to .ini. That's the only thing I can think of...I've never come across this error. Make sure your terminal is sourced and that you cd to the directory where the .ini file is saved.

2013-09-13 05:32:02 -0500 received badge  Famous Question (source)
2013-08-08 00:58:09 -0500 marked best answer converting optical frame to base frame tf

I'm trying to transform an optical frame (x=right, y=down, z=forward) to a base frame with the orientation x=forward, y=left, z=up.

I attempted the transformation in a tf broadcaster, but was unsuccessful. Any ideas?

System: Ubuntu, Fuerte, Precise
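For reference, the optical-to-base rotation is the fixed Euler combination yaw = -90°, pitch = 0, roll = -90° (in the "x y z yaw pitch roll parent child period" argument order, these are the angles you would hand to tf's static_transform_publisher). A plain-numpy sanity check of that rotation, under the frame conventions described above:

```python
import numpy as np

def rot_z(a):
    """Rotation matrix about z by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_x(a):
    """Rotation matrix about x by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

# R_base_optical: columns are the optical axes expressed in the base frame.
R = rot_z(-np.pi / 2) @ rot_x(-np.pi / 2)

# optical z (forward, out of the lens) lines up with base x (forward)
assert np.allclose(R @ [0, 0, 1], [1, 0, 0])
# optical y (down) lines up with base -z
assert np.allclose(R @ [0, 1, 0], [0, 0, -1])
```

If the check holds for your frame definitions, a static broadcaster with those two -90° angles is usually all that's needed.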

2013-06-18 05:49:46 -0500 received badge  Popular Question (source)
2013-06-18 05:49:46 -0500 received badge  Notable Question (source)
2013-06-03 09:43:23 -0500 received badge  Famous Question (source)
2013-05-27 15:05:30 -0500 received badge  Famous Question (source)
2013-05-13 05:23:49 -0500 received badge  Notable Question (source)
2013-05-12 22:46:33 -0500 received badge  Famous Question (source)
2013-05-01 11:08:45 -0500 received badge  Famous Question (source)
2013-04-19 07:57:25 -0500 commented question Static map unmatched origin

Ok, this makes sense now...I need to use amcl in place of slam_gmapping once I have a static map, right? This will allow me to compare local and global maps...I'll check to see if that solves the problem, thanks!

2013-04-19 07:12:53 -0500 commented question Static map unmatched origin

Isn't that what slam_gmapping does? I placed the robot in the same initial position as when I created the static map. Am I missing something?

2013-04-19 06:40:36 -0500 asked a question Static map unmatched origin

I'm having a problem matching up the origins of a static map and a dynamic map. I'm using only a Kinect sensor, CCNYRGBD, gmapping, and map_server. I run gmapping with CCNYRGBD to get a complete map of the room, then run map_server's map_saver to save the map. Then I launch CCNYRGBD, gmapping, and map_server displaying the static map. Both maps show up in RViz, but the current dynamic map sits in the center while the static map is offset to the bottom right. I've tried correcting this in the map.yaml file by setting the origin to different coordinates, but the map stays in the same spot.

Should the static map be in its own frame and have a different topic than the current map? I've tried launching amcl, costmap_2d, and move_base with the global costmap's static map enabled and setting the local origin to different values to see if it corrects the problem, but no luck. Any ideas?
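For what it's worth, the origin that map_server honors lives in the map YAML saved by map_saver; a typical file looks like the sketch below (file names and numbers are only illustrative), where origin is the [x, y, yaw] pose of the image's lower-left pixel in the map frame:

```yaml
# map.yaml as written by map_server's map_saver (illustrative values)
image: room.pgm
resolution: 0.050000
origin: [-12.200000, -15.400000, 0.000000]
negate: 0
occupied_thresh: 0.65
free_thresh: 0.196
```

Editing origin shifts where map_server anchors the static map, but it cannot align a second, live gmapping map, since gmapping publishes its own map with its own origin.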

2013-04-17 18:52:29 -0500 received badge  Famous Question (source)
2013-04-17 18:51:30 -0500 marked best answer What is the preferred Visual Odometry package: viso2_ros or fovis_ros?

I've been experimenting with fovis and viso2 in order to acquire visual odometry using the Kinect and Phidgets IMU. I noticed that the parameters are pretty much the same for both packages. Can anyone tell me which one they prefer and why?

2013-04-17 18:51:24 -0500 commented answer What is the preferred Visual Odometry package: viso2_ros or fovis_ros?

I actually started using ccny_rgbd and it has proved to be a nice complement to wheel encoder odometry. It is very lightweight which is desirable for my system and doesn't throw a lot of warnings and errors.

2013-04-17 14:27:57 -0500 received badge  Teacher (source)
2013-04-17 09:42:22 -0500 answered a question problem calibrating kinect ROS Fuerte

Here's a procedure I always use...make sure you calibrate the RGB and IR cameras separately

Kinect Calibration: RGB and IR Cameras

Packages Needed:

git clone https://github.com/ros-perception/image_pipeline.git

git clone https://github.com/ros-perception/image_common.git

Install dependencies: rosdep install ______

Make packages: rosmake ______

Procedure:

For RGB Camera

1.) roscore

2.) rosrun openni_camera openni_node

3.) rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.059 image:=/rgb/image_raw rgb:=/rgb

Note: Size is the number of interior corners, i.e. (rows of squares − 1) x (columns of squares − 1).

Square is the side length of one square in meters.

4.) Make sure Kinect is stationary. Move the checkerboard around slowly, changing the 6 degrees of freedom: x, y, z, roll, pitch, yaw.

5.) Keep changing the position of the checkerboard until the X, Y, Size, and Skew green bars on the right side of the screen are at a maximum.

6.) When the “Calibrate” button is lit, click it.

7.) When the “Save” button is lit, click it.

8.) This saves a .tar file in your system’s ‘tmp’ folder. Find it and extract the files somewhere handy. Inside the extracted folder will be pictures and an ost.txt file. Rename ost.txt to ost.ini and copy it to your sandbox/workspace.

9.) rosrun camera_calibration_parsers convert ost.ini cal.yml

10.) This saves the cal.yml in your sandbox/workspace. Change the extension to .yaml. Also, rename the file to match what openni_node referenced in the terminal when it was trying to find the calibration file. It’s usually something like “RGB_A00366812242050A”. Once the file name is changed, open the file in a text editor and change the default camera_name to “RGB_A00366812242050A”, or whatever name openni_node referenced.

11.) Now, copy this .yaml file to the ~/.ros/camera_info folder

Note: The .ros folder might be a hidden folder
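For reference, the file that ends up in ~/.ros/camera_info should look roughly like the sketch below (the intrinsic values here are only illustrative placeholders, not a real calibration; 525 is the commonly quoted stock Kinect focal length):

```yaml
# ~/.ros/camera_info/RGB_A00366812242050A.yaml (illustrative values)
image_width: 640
image_height: 480
camera_name: RGB_A00366812242050A
camera_matrix:
  rows: 3
  cols: 3
  data: [525.0, 0.0, 319.5, 0.0, 525.0, 239.5, 0.0, 0.0, 1.0]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0.0, 0.0, 0.0, 0.0, 0.0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
projection_matrix:
  rows: 3
  cols: 4
  data: [525.0, 0.0, 319.5, 0.0, 0.0, 525.0, 239.5, 0.0, 0.0, 0.0, 1.0, 0.0]
```

The camera_name inside the file must match the file name, which in turn must match the serial-derived name openni_node prints.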

For IR Camera

1.) Procedure is the same except:

a.) Cover the IR projector with tape/paper

b.) Make sure you are in a low-light environment with incandescent or halogen lighting. Fluorescent lighting interferes with the IR calibration. Also, make sure there is minimal movement…the IR camera is very finicky.

c.) rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.059 image:=/ir/image_raw ir:=/ir

2013-04-17 03:34:54 -0500 received badge  Notable Question (source)
2013-04-10 01:31:23 -0500 received badge  Famous Question (source)