ebeowulf's profile - activity

2019-01-16 09:15:43 -0500 received badge  Enthusiast
2018-11-02 13:01:38 -0500 asked a question Configure Kinova Arm for MoveGroup

Configure Kinova Arm for MoveGroup I have a Kinova Mico2 arm. The kinova_ros package works just fine... MoveGroup is lau

2018-08-02 04:35:47 -0500 received badge  Popular Question (source)
2018-08-02 04:35:47 -0500 received badge  Notable Question (source)
2017-12-06 07:57:40 -0500 received badge  Famous Question (source)
2017-12-06 07:57:40 -0500 received badge  Notable Question (source)
2017-08-30 18:54:04 -0500 received badge  Student (source)
2017-01-05 13:20:25 -0500 received badge  Notable Question (source)
2016-04-28 20:34:48 -0500 received badge  Popular Question (source)
2016-03-14 02:23:14 -0500 received badge  Famous Question (source)
2015-08-27 02:32:53 -0500 received badge  Teacher (source)
2015-07-14 09:59:49 -0500 received badge  Popular Question (source)
2015-05-31 11:16:40 -0500 received badge  Notable Question (source)
2015-05-22 14:01:01 -0500 commented answer move_base without base_local_planner

I didn't think about the service call. Although the node mentioned by David may work as well (I haven't tried it yet), the service call also solves one of my other problems, where move_base fails in the middle of executing a plan.

2015-05-22 03:12:19 -0500 received badge  Popular Question (source)
2015-05-20 17:43:44 -0500 asked a question move_base without base_local_planner

I have an alternative navigation package that generates cmd_vel messages from the plans published on /move_base/NavfnROS/plan. I have already redirected the topics and everything runs well... but there is a lot of wasted processing overhead in move_base. What is the best method for stripping move_base of its local planner?

A couple more specifics: 1) we are using the default global planner ("navfn/NavfnROS"), and 2) we want to use both the global and local costmaps to produce a plan.

Should I: A) strip the base_local_planner from the source code, B) write a separate planning node from scratch, C) set a parameter (if one exists), or D) something else?
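One thing worth checking before patching the source: move_base loads its local planner as a pluginlib plugin named by the ~base_local_planner parameter, so some of the overhead can be reduced by configuration alone. As far as I know there is no parameter that disables the local planner outright, so this is only a sketch, not a verified solution:

```xml
<node pkg="move_base" type="move_base" name="move_base">
  <!-- keep the default global planner, as in the question -->
  <param name="base_global_planner" value="navfn/NavfnROS"/>
  <!-- reduce how often the (unused) local planner control loop runs -->
  <param name="controller_frequency" value="1.0"/>
</node>
```

If that is still too heavy, calling the global planner directly via move_base's make_plan service from a separate node avoids the control loop entirely.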

2015-05-08 11:20:24 -0500 asked a question python: reading messages very slow

I'm new to using Python, and have hit a giant speed bump trying to integrate a Python library into ROS Indigo. I created a node in C++ that processes a point cloud and transmits segments using the following two messages:

SegmentedBlobArray.msg

Header header
int16 full_image_rows
int16 full_image_cols
SegmentedBlob[] blobs

SegmentedBlob.msg

int64  id
int16[]  bbox
std_msgs/Int16MultiArray pt_list

On the python side, I have to rebuild each segment within the image:

for pp in range(0,data.blobs[Bid].pt_list.layout.dim[0].size):
    row = data.blobs[Bid].pt_list.data[3*pp+1]-ymin;
    col = data.blobs[Bid].pt_list.data[3*pp]-xmin;
    cv_image[row,col] = data.blobs[Bid].pt_list.data[3*pp+2]

The process works, but the transfer is extremely slow. The for loop itself is not the issue: adding random numbers to cv_image the same number of times is still quick. Only once I access the ROS message does it get slow (a ~4 second delay). The same process in C++ takes <10 ms.

Are there Python tricks for accessing these types of messages? Or maybe there are easy ways of transmitting multiple segments in an image?
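One generic NumPy trick that often helps here, sketched below with synthetic data standing in for the real pt_list.data payload: copy the message's data field into an ndarray once, then index in bulk. Per-element access to rospy message fields inside a Python loop is typically the bottleneck.

```python
import numpy as np

# Fake stand-in for data.blobs[Bid].pt_list.data: flat (col, row, value)
# triples, matching the 3*pp / 3*pp+1 / 3*pp+2 indexing in the question.
rng = np.random.default_rng(0)
h, w, n_pts = 120, 160, 5000
triples = np.column_stack([
    rng.integers(0, w, n_pts),    # col (x)
    rng.integers(0, h, n_pts),    # row (y)
    rng.integers(1, 256, n_pts),  # value
])
data = triples.ravel().tolist()   # flat list, like the message field

xmin = ymin = 0
cv_image = np.zeros((h, w), dtype=np.uint8)

# Convert the message payload to an ndarray ONCE, then assign in bulk;
# the per-pixel work then happens inside NumPy, not the interpreter.
pts = np.asarray(data, dtype=np.int64).reshape(-1, 3)
cv_image[pts[:, 1] - ymin, pts[:, 0] - xmin] = pts[:, 2]
```

The same reshape-then-fancy-index pattern applies to the real message: build pts from data.blobs[Bid].pt_list.data and drop the Python loop.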

2015-04-27 15:43:15 -0500 asked a question openni: depth_ir_offset_x

What exactly does changing the depth_ir_offset_x parameter used by the openni driver do? In fuerte, this parameter (or something similar) appeared to select the bbox of the published 640x480 images by shifting the entire image left|right (and offset_y shifted up|down). In indigo, however, the offset_x and offset_y do not appear to affect the published image... only the published point cloud.

1) What units are these parameters in? (maybe pixels, but the change in the point cloud seems too large for pixels)

2) What image topics are they used to align? (maybe /camera/rgb/image_rect_color + /camera/depth/image_rect_raw)

3) Where do I find the source code that uses them?

2015-04-23 11:38:58 -0500 marked best answer Access nodelet_manager when playing bagfile

I have a program that depends on the pointcloud_to_laserscan nodelet code. For normal access with real data, I use the following launch code:

<node pkg="nodelet" type="nodelet" name="kinect_laser_0"
      args="load pointcloud_to_laserscan/CloudToScan robot_camera2_nodelet_manager"
      respawn="true">
  <remap from="cloud" to="/robot/camera2/depth_registered/points"/>
  <remap from="scan" to="/legs"/>
</node>

I have some bagfiles for which I recorded the transformation frame, and the /robot/camera2/depth_registered/points topic. The actual rosbag command was:

rosbag record /robot/camera2/rgb/image_rect /robot/camera2/rgb/image_rect_color /robot/camera2/depth_registered/points /tf /robot_tf/camera2_rgb_optical_frame /robot_tf/camera2_rgb_frame /robot_tf/camera2_depth_frame /robot_tf/camera2/depth_optical_frame /robot_tf/base_link

But while I can play and view all of the recorded data and topics in rviz with no problems, running the CloudToScan routine doesn't seem to do anything. It doesn't generate any topics, but it doesn't throw any errors either... at least not that I can tell. I've also checked (rosrun tf view_frames), and the specified transformation space exists (/robot_tf/camera2_depth_frame). So I don't know what's going on.

My best guess is that it has something to do with the nodelet manager mentioned in the CloudToScan launch code (robot_camera2_nodelet_manager). Do I need to record the nodelet manager when recording the bagfile? Is that possible? Obviously, I could record the CloudToScan output topic, but I'm hoping to figure out the right parameters for the system rather than having to re-record all the time.

Thanks in advance
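A note on the manager question: a nodelet manager is a process, not a topic, so it cannot be recorded into a bagfile; it has to be started alongside rosbag play. A minimal sketch, reusing the manager name from the launch snippet in the question and assuming everything runs in the same namespace:

```xml
<!-- start the manager that the CloudToScan nodelet expects -->
<node pkg="nodelet" type="nodelet" name="robot_camera2_nodelet_manager"
      args="manager" output="screen"/>

<!-- load the nodelet into it, as in the original launch file -->
<node pkg="nodelet" type="nodelet" name="kinect_laser_0"
      args="load pointcloud_to_laserscan/CloudToScan robot_camera2_nodelet_manager">
  <remap from="cloud" to="/robot/camera2/depth_registered/points"/>
  <remap from="scan" to="/legs"/>
</node>

<!-- use the bag's clock so tf lookups match the recorded stamps -->
<param name="/use_sim_time" value="true"/>
```

With /use_sim_time set, play the bag with rosbag play --clock so there is a clock source.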

2015-04-23 11:38:49 -0500 received badge  Famous Question (source)
2015-04-23 11:38:37 -0500 marked best answer mountain lion install broken?

I've been following the instructions at http://www.ros.org/wiki/fuerte/Installation/OSX/Homebrew/Source to install ROS Fuerte on a MacBook (OS X 10.8). I get to the line:

brew install ros/fuerte/swig-wx

which generates the following error:


bash-3.2$ brew install ros/fuerte/swig-wx
==> Cloning git://github.com/wg-debs/swig-wx.git
Cloning into '/Library/Caches/Homebrew/swig-wx--git'...
fatal: remote error: Repository not found.
Error: Failure while executing: git clone --no-checkout --depth 1 --branch upstream/1.3.29 git://github.com/wg-debs/swig-wx.git /Library/Caches/Homebrew/swig-wx--git


In the troubleshooting section of the page, it says that the git repository has been removed, and provides some code:


require 'formula'

class SwigWx < Formula
  url 'git://github.com/ros/swig-wx.git', {:using => :git, :tag => '1.3.29_fuerte'}
  homepage 'http://www.swig.org'
  version '1.3.29'

  def install
    ENV.universal_binary
    system "./configure --prefix=#{prefix}"
    system "make"
    system "make install"
  end
end


Can anybody tell me what I am supposed to do with this code? Or maybe this is a red herring?

Thank You!!!

2015-04-23 11:37:16 -0500 received badge  Famous Question (source)
2015-04-23 11:36:34 -0500 asked a question Search for object in known map

Are there any existing ROS packages for searching a known environment? I have a visual detection system already implemented and want to evaluate it. I'm aware of the exploration package and its frontier-based exploration method, but it looks inappropriate for searching when the map is known a priori.

2015-03-04 18:02:33 -0500 answered a question Debugging with QtCreator

It seems to be a QtCreator problem ( https://bugs.launchpad.net/ubuntu/+so... ). Removing the CMakeLists.txt.user file fixes the issue temporarily; upgrading to QtCreator 3.1.0 may solve it for good.

2015-02-05 14:59:44 -0500 received badge  Famous Question (source)
2015-02-05 14:59:44 -0500 received badge  Notable Question (source)
2015-02-05 14:59:44 -0500 received badge  Popular Question (source)
2014-02-09 21:49:20 -0500 received badge  Notable Question (source)
2014-01-28 17:29:28 -0500 marked best answer Time synchronization errors with multiple computers

I am experiencing a string of time synchronization errors that I am struggling with.

I have two different computers, each of which is streaming data from a different sensor. I have set the time update on each computer to look at a common time server (sudo ntpdate {ip.address}).

  • Computer A runs a Hokuyo, the roscore, a robot urdf, amcl, and a processing node (A_p) for the hokuyo data.
  • Computer B runs a Kinect, two processing nodes accessing the kinect (B_p), a node (C_p) fusing data from the kinect and the processing node (A_p), and another node fusing information from all of the nodes (A_p, B_p, and C_p) on computer A and B, and transforming their locations into map space (generated by amcl).

This configuration generates synchronization errors, even though everything uses ros::Time::now() to retrieve time. When fusing data in node B_p, I get time synchronization errors:


terminate called after throwing an instance of 'tf::ExtrapolationException'
what(): Lookup would require extrapolation into the past. Requested time 1362173375.685855810 but the earliest data is at time 1362173375.907669363, when looking up transform from frame [/robot_tf/back_laser] to frame [/robot_tf/base_link]


and when transforming data into /map space, I get transformation errors that I am assuming are also time synchronization errors:


terminate called after throwing an instance of 'tf::LookupException'
what(): Frame id /robot_tf/base_link does not exist! Frames (6): Frame /robot_tf/camera2_depth_optical_frame exists with parent /robot_tf/camera2_depth_frame.


All of these processes work when on the same computer, but spread across 2 computers, they eventually fail... and eventually isn't very long. Can I get rid of these errors entirely? Barring that, how do I ignore these errors? Do I need to write my own exception handler, or is there something easy in ros already?

Thank you,

2013-12-13 05:05:00 -0500 received badge  Popular Question (source)
2013-12-10 08:39:39 -0500 asked a question Rviz crashing to login screen

When I run the command "rosrun rviz rviz", the screen flashes and I get dumped to the ubuntu login screen. I am running fuerte on ubuntu 12.04.

Some more details:

1) I have used ROS and rviz extensively before on this same computer.
2) The only thing I upgraded recently was the PrimeSense drivers.
3) I've already tried restarting the computer, and deleting the .rviz directory to reset the system.
4) The .ros/log files don't appear very informative. The last message I see in rosout.log is "Loading general config from [/home/{username}/.rviz/config]". Otherwise, master.log has a message "keyboard interrupt, will exit" near the end.
5) OpenCV graphics appear to work fine. I can run programs that display color and/or grayscale images using native OpenCV without any problems.
6) I also tried reinstalling the ros-fuerte-visualization package, without any effect.
7) I also tried the flag options (the LIBGL and OGRE_RTT export commands shown at http://wiki.ros.org/rviz/Troubleshooting), but get the same behavior.

What I'm hoping for is suggestions on where else to look in order to debug this. I'd rather not reinstall ros from scratch, but that's the next step... followed by a reinstall of the graphics drivers, and then ubuntu itself. Anything else I can do before having to commit to drastic reinstalls?

Thank you

2013-06-20 22:12:45 -0500 received badge  Notable Question (source)
2013-06-10 09:10:36 -0500 asked a question Convert bagfile to .oni sequence

I'm trying to write the contents of a bagfile to an OpenNI .oni sequence file. What I can't tell, however, is whether the underlying openni_wrapper modifies the data at all in order to get it into OpenCV, or whether the data can be copied directly back into the OpenNI data structures. Does anyone know the answer? In general, any pointers on creating .oni sequences would be welcome.

2013-06-10 08:42:47 -0500 received badge  Famous Question (source)
2013-05-30 13:04:24 -0500 marked best answer Subscribe to tf (e.g. tf callback)

I'm playing around with the openni_skeletal tracker, which publishes all of its position data only to /tf. Now I want to access that tf data like I would access any other sensor stream: I want to write a callback function that is activated only when there is new skeletal data to process. Is this possible?

I know that the traditional route for accessing tf is to write a listener that regularly polls tf, but I'd rather not have to write the synchronization code myself when the tf messages I'm interested in are published more or less at the same time.

2013-05-30 11:58:58 -0500 received badge  Notable Question (source)