
K_Yousif's profile - activity

2021-04-01 14:42:59 -0500 received badge  Favorite Question (source)
2020-08-24 20:44:02 -0500 received badge  Famous Question (source)
2018-07-24 19:29:03 -0500 received badge  Great Question (source)
2017-06-08 04:26:14 -0500 received badge  Good Question (source)
2017-06-07 20:14:41 -0500 marked best answer Concatenating relative transformations to obtain a global transformation

Hi guys,

I am trying to implement a visual odometry system that matches features between sequential frames and computes the rigid-body transformation between the corresponding points using RANSAC. I am successfully obtaining the RANSAC transformation that aligns the points from the two frames. The relative transformation is of type Eigen::Matrix4f relative_transformation. I would like to concatenate (chain together) all the relative transformations in order to obtain a global transformation. I don't think I am doing it correctly:

//class attributes
Eigen::Matrix4f global_transformation = Eigen::Matrix4f::Identity();
Eigen::Matrix4f relative_transformation;
tf::Transform transform;

void process(){
relative_transformation=calculate_ransac_relative_transformation();
global_transformation = global_transformation*relative_transformation;

Eigen::Matrix4d global_transformation_(global_transformation.cast<double>());
Eigen::Affine3d affine(global_transformation_);
tf::transformEigenToTF(affine, transform);
static tf::TransformBroadcaster br;
br.sendTransform(tf::StampedTransform(transform, ros::Time::now(), "/odom",  "Global_transformation"));
}

When viewing the transformation in RVIZ, it keeps jumping around even if the camera is not moving. Any idea what I am doing wrong?
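For reference, here is a minimal sketch of the accumulate-and-broadcast step I have in mind, with the rotation block re-orthonormalized each frame. The re-orthonormalization and the helper name publish_global_pose are my own additions for illustration, not part of the code above:

#include <ros/ros.h>
#include <Eigen/Geometry>
#include <tf/transform_broadcaster.h>
#include <tf_conversions/tf_eigen.h>

// Accumulate T_global = T_global * T_relative and broadcast the result on /tf.
void publish_global_pose(Eigen::Matrix4f& global_transformation,
                         const Eigen::Matrix4f& relative_transformation)
{
  global_transformation = global_transformation * relative_transformation;

  // Re-orthonormalize the rotation block to limit numerical drift over many frames.
  Eigen::Matrix3f R = global_transformation.block<3,3>(0,0);
  Eigen::Quaternionf q(R);
  global_transformation.block<3,3>(0,0) = q.normalized().toRotationMatrix();

  // Convert to tf and broadcast (note the lowercase t in transformEigenToTF).
  Eigen::Affine3d affine(global_transformation.cast<double>());
  tf::Transform transform;
  tf::transformEigenToTF(affine, transform);

  static tf::TransformBroadcaster br;
  br.sendTransform(tf::StampedTransform(transform, ros::Time::now(),
                                        "/odom", "Global_transformation"));
}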

Any help would be much appreciated. Thanks,

Khalid

2017-04-12 22:15:43 -0500 received badge  Nice Answer (source)
2017-03-13 08:37:08 -0500 received badge  Nice Answer (source)
2016-08-08 01:35:53 -0500 marked best answer How to use perception_odu as a dependency?

Hi guys, I have created a package for 3D registration using a Kinect, and I would like to use the normal distributions transform that is available in the perception_oru package. I have downloaded and compiled this package successfully in my fuerte_workspace and added it to the ROS_PACKAGE_PATH. However, when I add this line to the manifest.xml: <depend package="perception_oru"/> I get the following error: Error: package/stack firefly_mv depends on non-existent package perception_oru.

Any help on how to use this package as a dependency of my registration package would be much appreciated.

thanks, Khalid

2016-03-25 22:05:10 -0500 received badge  Good Question (source)
2016-03-03 06:14:58 -0500 received badge  Great Answer (source)
2016-02-14 01:36:33 -0500 received badge  Nice Question (source)
2015-03-10 15:20:40 -0500 received badge  Notable Question (source)
2015-03-10 15:20:40 -0500 received badge  Popular Question (source)
2014-11-25 17:29:51 -0500 received badge  Nice Question (source)
2014-11-04 09:25:21 -0500 received badge  Nice Answer (source)
2014-07-09 20:54:06 -0500 commented question Concatenating relative transformations to obtain a global transformation

Hi Hamed, Try transformEigenToTF (with the small t instead of capital T).

2014-04-20 13:32:39 -0500 marked best answer Compile is successful, but no executable generated?

Hi guys,

I have created my own package in which I do some image processing with OpenCV and PCL. After compiling, I get no errors and the build is successful; however, no executable is generated in the /bin folder. Note: this is not my first package, and I have created other nodes with no problem.

In the CMakeLists.txt, I have added the following lines:

rosbuild_add_executable(myNode src/myNode.cpp)
rosbuild_add_library(myNode src/myNode.cpp)

Here is the output of the compilation:

[ rosmake ] Results:                                                            
[ rosmake ] Built 31 packages with 0 failures.                                  
[ rosmake ] Summary output to directory                                         
[ rosmake ] /home/samme/.ros/rosmake/rosmake_output-20130322-160141

When I run the command $ rosrun firefly_mv myNode I get: [rosrun] Couldn't find executable named myNode below /home/samme/fuerte_workspace/firefly_mv.

I am not sure why the executable is not being generated here; I have created many other nodes and never had this problem. Any help would be greatly appreciated.

Thanks and regards, Khalid

2014-04-20 13:29:22 -0500 marked best answer Kinect RGB + Depth to .pcd format

I have tried the rgbd_registration package, in which a source cloud (the first frame) and a target cloud (another frame) are passed to the algorithm as .pcd files. The algorithm restores the RGB images from the .pcd files, detects and matches features between the two frames, and then aligns the reconstructed 3D points between the two frames.

What I want to do is construct my own .pcd files that contain the depth and RGB images from a Kinect, so I can pass them to this algorithm for alignment instead of using their sample scenes.

Is there any tutorial or example on how I would be able to store both depth and RGB information into a pcd file?
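For what it's worth, here is a minimal sketch of the kind of thing I am trying to do, assuming the depth image is already registered to the RGB image and stored as a CV_32FC1 cv::Mat in metres, and that fx, fy, cx, cy are the camera intrinsics (these assumptions, and the function name, are just for illustration):

#include <cmath>
#include <limits>
#include <opencv2/core/core.hpp>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/io/pcd_io.h>

// Build an organized coloured cloud from aligned depth (metres) and BGR images.
pcl::PointCloud<pcl::PointXYZRGB> depthRgbToCloud(const cv::Mat& depth, const cv::Mat& bgr,
                                                  float fx, float fy, float cx, float cy)
{
  pcl::PointCloud<pcl::PointXYZRGB> cloud;
  cloud.width    = depth.cols;
  cloud.height   = depth.rows;
  cloud.is_dense = false;
  cloud.points.resize(depth.cols * depth.rows);

  for (int v = 0; v < depth.rows; ++v)
    for (int u = 0; u < depth.cols; ++u)
    {
      pcl::PointXYZRGB& p = cloud(u, v);
      float z = depth.at<float>(v, u);
      if (z <= 0.0f || std::isnan(z))
      {
        p.x = p.y = p.z = std::numeric_limits<float>::quiet_NaN();
        continue;
      }
      // Back-project the pixel with the pinhole model.
      p.z = z;
      p.x = (u - cx) * z / fx;
      p.y = (v - cy) * z / fy;
      cv::Vec3b c = bgr.at<cv::Vec3b>(v, u);
      p.r = c[2]; p.g = c[1]; p.b = c[0];
    }
  return cloud;
}

// Usage: pcl::io::savePCDFileBinary("frame_0000.pcd", depthRgbToCloud(depth, bgr, fx, fy, cx, cy));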

Thanks for your help

Khalid

2014-04-20 13:26:15 -0500 marked best answer ccny_rgbd with vo+mapping slow, map disappears

Hello Dryanovski, and those who have used ccny_rgbd,

I am using ROS Fuerte and Ubuntu 12.04 on an i5 Dell laptop. I have followed the instructions given to launch your method, and visual_odometry is quite fast. However, when I launch vo+mapping.launch, it is extremely slow and the alignment is really bad. Also, the parts of the map that are not currently in view disappear after a very short time. I am not sure if I am doing something wrong or if this is how it is supposed to behave. I was hoping to build a map similar to the video you posted, with similar performance.

Thanks for your help,

Regards,

Khalid

2014-04-20 13:23:09 -0500 marked best answer Processing an image outside the Callback function

Hi guys,

The answer to this question is probably trivial, but I have looked around and cannot find the answer I am looking for. I have written a simple Harris corner detector in which I receive an image from the camera/image_raw topic; inside the callback function the image is converted to an OpenCV image (I call it cv_ptr) using cv_bridge, and the corner detection is all done inside the callback function. It works, but it is very slow (lag and a black screen for a couple of seconds) when the camera movement is fast. I am assuming this is because I have my processing in the callback function?

I have read in other posts that doing processing inside the callback function is not the correct way, and that processing should be done in the main loop. Which leads to my question: how can I access cv_ptr (or better, the sensor_msgs::ImageConstPtr& msg passed to the callback function) in the main process, since callbacks are void functions and do not return anything? Note: I have tried making cv_ptr a global variable and then accessing it outside the callback function; although it compiles with no errors, the process dies immediately. I am sure there is a better way to access messages passed to the callback function.
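For context, this is the kind of pattern I imagine (only a sketch: the class name, the mutex, and the member names are my own illustration, not the node posted below):

#include <ros/ros.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <boost/thread/mutex.hpp>
#include <opencv2/core/core.hpp>

// The callback only stores the latest frame; the heavy work happens in the main loop.
class LatestImageHolder
{
public:
  void imageCb(const sensor_msgs::ImageConstPtr& msg)
  {
    boost::mutex::scoped_lock lock(mutex_);
    latest_ = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8)->image.clone();
  }

  bool getLatest(cv::Mat& out)
  {
    boost::mutex::scoped_lock lock(mutex_);
    if (latest_.empty()) return false;
    out = latest_.clone();
    return true;
  }

private:
  boost::mutex mutex_;
  cv::Mat latest_;
};

int main(int argc, char** argv)
{
  ros::init(argc, argv, "harris_outside_callback");
  ros::NodeHandle nh;
  LatestImageHolder holder;
  ros::Subscriber sub = nh.subscribe("camera/image_raw", 1, &LatestImageHolder::imageCb, &holder);

  ros::Rate rate(30);
  cv::Mat frame;
  while (ros::ok())
  {
    ros::spinOnce();                 // let the callback copy the newest image
    if (holder.getLatest(frame))
    {
      // ... run the Harris detector on 'frame' here, outside the callback ...
    }
    rate.sleep();
  }
  return 0;
}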

What am I missing? I have posted my code for clarification. Thanks a lot for your help.

#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <stdio.h>
#include <stdlib.h>

using namespace cv;
using namespace std;
namespace enc = sensor_msgs::image_encodings;

static const char WINDOW[] = "Image window";
static const char WINDOWDST[] = "DST window";

/// Global variables
Mat src, src_gray, src_gray_corner;
int thresh = 200;
int max_thresh = 255;


char* source_window = "Source image";
char* corners_window = "Corners detected";

/// Function header
void cornerHarris_demo( int, void* );

/** @function cornerHarris_demo */


class ImageConverter
{
 ros::NodeHandle nh_;
 image_transport::ImageTransport it_;
 image_transport::Subscriber image_sub_;
 image_transport::Publisher image_pub_;

public:
  ImageConverter()
   : it_(nh_)
 {
  image_pub_ = it_.advertise("out", 1);
  image_sub_ = it_.subscribe("camera/image_raw", 1, &ImageConverter::imageCb,this);

  cv::namedWindow(WINDOW);
  }

 ~ImageConverter()
 {
  cv::destroyWindow(WINDOW);
 }

 void imageCb(const sensor_msgs::ImageConstPtr& msg)
{



cv_bridge::CvImagePtr cv_ptr;
cv_bridge::CvImagePtr cv_ptr2;


/// Detector parameters
int blockSize = 2;
int apertureSize = 3;
double k = .04;
try
{
  cv_ptr = cv_bridge::toCvCopy(msg, enc::BGR8);
  cv_ptr2 = cv_bridge::toCvCopy(msg, enc::BGR8);
}
catch (cv_bridge::Exception& e)
{
  ROS_ERROR("cv_bridge exception: %s", e.what());
  return;
}


Mat dst, dst_norm, dst_norm_scaled;


cv::imshow(WINDOW, cv_ptr->image);

cvtColor( cv_ptr->image, src_gray, CV_BGR2GRAY );

dst = Mat::zeros( cv_ptr->image.size(), CV_32FC1);

 /// Detecting corners
 cornerHarris( src_gray, dst, blockSize, apertureSize, k, BORDER_DEFAULT );
 cv::imshow(WINDOWDST, dst);


 // Normalizing
 normalize( dst,dst_norm, 0, 255, NORM_MINMAX, CV_32FC1, Mat() );
 convertScaleAbs( dst_norm, dst_norm_scaled );
 cv::imshow(WINDOWDST, dst_norm_scaled );
 cv::waitKey(1);

 /// Drawing a circle around corners
 for( int j = 0; j < dst_norm.rows ; j++ )
  { for( int i = 0; i < dst_norm.cols; i++ )
      {
        if( (int) dst_norm.at<float>(j,i) > thresh )
          {
           circle( dst_norm_scaled, Point( i, j ), 5,  Scalar ...
(more)
2014-04-20 13:22:58 -0500 marked best answer Corner detection using opencv and ROS

Hi guys, I have implemented an algorithm that uses the OpenCV Harris corner detector. I have a few questions, but first let me explain my system: I have copied the OpenCV Harris corner detector and placed it in a callback function to which messages of type sensor_msgs::ImageConstPtr& msg are passed. I then instantiate a cv_bridge::CvImagePtr object called cv_ptr, and the message (msg) is copied into cv_ptr using cv_bridge. I then declare Mat dst, dst_norm, dst_norm_scaled; a number of operations are performed, and the final image (with the corners detected) is stored in dst_norm_scaled. When I rosmake and then roslaunch, it works and I am able to view dst_norm_scaled using cv::imshow. However, I have a number of concerns:

1- If I want to transfer this back to a ROS image, image_pub_.publish(cv_ptr->toImageMsg()); is used; however, my corner-detected image is now stored in dst_norm_scaled, so how do I convert and publish this? (See the sketch after these points.)

2- I tried to use a second cv_bridge::CvImagePtr object, which I called cv_ptr2, to hold the result instead of dst_norm_scaled, and then publish it using image_pub_.publish(cv_ptr2->toImageMsg()); although rosmake was successful, the process died and threw an exception. I guess only one cv_ptr is allowed to be used?

3- In my program, although the corners are detected, if I move the camera too quickly there is a lag, and sometimes the image shown goes black for a couple of seconds. I am currently using ros::spin() in my main function.
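Regarding point 1, here is a minimal sketch of what I mean by converting the cv::Mat back to a ROS message via cv_bridge::CvImage. The helper name publishMono8 and the reuse of the incoming header are my own illustration; dst_norm_scaled is CV_8UC1 after convertScaleAbs, hence the mono8 encoding:

#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <image_transport/image_transport.h>
#include <std_msgs/Header.h>
#include <opencv2/core/core.hpp>

// Publish a single-channel 8-bit cv::Mat (e.g. dst_norm_scaled) as a ROS image.
void publishMono8(image_transport::Publisher& pub,
                  const std_msgs::Header& header,   // reuse the header of the incoming message
                  const cv::Mat& mono8_image)
{
  cv_bridge::CvImage out_msg;
  out_msg.header   = header;
  out_msg.encoding = sensor_msgs::image_encodings::MONO8;
  out_msg.image    = mono8_image;
  pub.publish(out_msg.toImageMsg());
}

// Inside the callback this would be called as:
//   publishMono8(image_pub_, cv_ptr->header, dst_norm_scaled);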

If there is any additional information you would like me to explain, please let me know. I really appreciate your help.

Regards,

Khalid

2014-04-20 13:22:50 -0500 marked best answer VISO2 pitch angle

Hello,

I have managed to get the ros_viso2 package to work. I am just wondering: where is the pitch angle measured from? For example, I would like to point the camera upwards, looking at the ceiling; what should my pitch angle be, and is it in degrees or radians?

I am viewing the outcome of the visual odometry in rviz, where I set the fixed frame to /odom and the target frame to base_link. Is this the best way to visualize the camera/robot path?

Thanks a lot for your help.

Khalid

2014-04-20 13:22:46 -0500 marked best answer Monocular camera calibration

Hi guys,

I have followed the monocular calibration tutorial and have calibrated my Firefly_mv (which uses the DC1394 drivers) image. The calibration seems to be successful, since /camera/camera_info is publishing the correct information:

header: 
  seq: 152254
  stamp: 
    secs: 8
    nsecs: 21200000
  frame_id: /camera
height: 480
width: 640
distortion_model: plumb_bob
D: [-0.4055357733215121, 0.2322957248510526, -0.000988042635329932, 0.00818916859481644, 0.0]
K: [701.8606085716559, 0.0, 317.3954985453047, 0.0, 704.0761053327817, 233.51394559637478, 0.0, 0.0, 1.0]
R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
P: [639.8907470703125, 0.0, 321.50737796160684, 0.0, 0.0, 668.643798828125, 232.067902760391, 0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 0
binning_y: 0
roi: 
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False
---

However, when I type rostopic echo /image_rect, nothing seems to be published. Where can I find the rectified/calibrated image?

edit: $ rostopic list:

/camera/camera_info
/camera/image_color
/camera/image_color/compressed
/camera/image_color/compressed/parameter_descriptions
/camera/image_color/compressed/parameter_updates
/camera/image_color/compressedDepth
/camera/image_color/compressedDepth/parameter_descriptions
/camera/image_color/compressedDepth/parameter_updates
/camera/image_color/theora
/camera/image_color/theora/parameter_descriptions
/camera/image_color/theora/parameter_updates
/camera/image_raw
/camera/image_raw/compressed
/camera/image_raw/compressed/parameter_descriptions
/camera/image_raw/compressed/parameter_updates
/camera/image_raw/compressedDepth
/camera/image_raw/compressedDepth/parameter_descriptions
/camera/image_raw/compressedDepth/parameter_updates
/camera/image_raw/theora
/camera/image_raw/theora/parameter_descriptions
/camera/image_raw/theora/parameter_updates
/camera1394_nodelet/parameter_descriptions
/camera1394_nodelet/parameter_updates
/camera_nodelet_manager/bond
/diagnostics
/image_mono
/image_mono/compressed
/image_mono/compressed/parameter_descriptions
/image_mono/compressed/parameter_updates
/image_mono/compressedDepth
/image_mono/compressedDepth/parameter_descriptions
/image_mono/compressedDepth/parameter_updates
/image_mono/theora
/image_mono/theora/parameter_descriptions
/image_mono/theora/parameter_updates
/image_proc_debayer/parameter_descriptions
/image_proc_debayer/parameter_updates
/image_proc_rectify_color/parameter_descriptions
/image_proc_rectify_color/parameter_updates
/image_proc_rectify_mono/parameter_descriptions
/image_proc_rectify_mono/parameter_updates
/image_rect
/image_rect/compressed
/image_rect/compressed/parameter_descriptions
/image_rect/compressed/parameter_updates
/image_rect/compressedDepth
/image_rect/compressedDepth/parameter_descriptions
/image_rect/compressedDepth/parameter_updates
/image_rect/theora
/image_rect/theora/parameter_descriptions
/image_rect/theora/parameter_updates
/image_rect_color
/image_rect_color/compressed
/image_rect_color/compressed/parameter_descriptions
/image_rect_color/compressed/parameter_updates
/image_rect_color/compressedDepth
/image_rect_color/compressedDepth/parameter_descriptions
/image_rect_color/compressedDepth/parameter_updates
/image_rect_color/theora
/image_rect_color/theora/parameter_descriptions  
/image_rect_color/theora/parameter_updates
/out
/out/compressed
/out/compressed/parameter_descriptions
/out/compressed/parameter_updates
/out/compressedDepth
/out/compressedDepth/parameter_descriptions
/out/compressedDepth/parameter_updates
/out/theora
/out/theora/parameter_descriptions
/out/theora/parameter_updates
/rosout
/rosout_agg

Thanks,

Khalid

2014-04-20 13:22:31 -0500 marked best answer Viso2 mono_odometer error

Hi guys,

I am trying to use the viso2 package in order to obtain the monocular visual odometry using a Firefly mv camera. The camera works fine and I am able to view the images. However, when I run the command:

rosrun viso2_ros mono_odometer image:=/camera/image_raw

I get the following warnings and error:


Using default: 1.000000

Using default: 0.000000

Segmentation fault (core dumped)


Any idea what is causing this segmentation fault?

Thanks and best regards,

Khalid

2014-04-20 13:22:26 -0500 marked best answer learning_tf turtlesim tutorial: second turtle twirling

Hi guys,

I have followed the tf tutorials and implemented all the instructions to get the second turtle to follow the first turtle, which in turn is controlled via the keyboard. However, when I launch start_demo.launch, the second turtle starts twirling continuously, even without moving the first turtle.
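For reference, the core of the listener node from that tutorial looks roughly like this; I am paraphrasing from memory, so the details (frame names, rate, the velocity-command part) may differ slightly from the actual tutorial code:

#include <ros/ros.h>
#include <tf/transform_listener.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "turtle_tf_listener_sketch");
  ros::NodeHandle node;
  tf::TransformListener listener;

  ros::Rate rate(10.0);
  while (node.ok())
  {
    tf::StampedTransform transform;
    try
    {
      // ros::Time(0) asks for the latest available transform; the frame order
      // (target frame first, source frame second) determines the sign of the result.
      listener.lookupTransform("/turtle2", "/turtle1", ros::Time(0), transform);
    }
    catch (tf::TransformException& ex)
    {
      ROS_ERROR("%s", ex.what());
      ros::Duration(1.0).sleep();
      continue;
    }

    // The tutorial then turns transform.getOrigin() into velocity commands
    // (atan2 of y over x for the angular part, the distance for the linear part).

    rate.sleep();
  }
  return 0;
}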

When I move the first turtle, I can see that the second turtle changes its movement slightly while still rotating. I am not sure what the problem is. I have attached an image showing the PDF file created by rosrun tf view_frames and the turtlesim window. Here's what I get:

$ rosrun tf view_frames
Listening to /tf for 5.000000 seconds
Done Listening
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
dot - graphviz version 2.26.3 (20100126.1600)

Detected dot version 2.26.3
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
frames.pdf generated

Any help would be much appreciated.

regards,

Khalid

(attached image: the frames.pdf output of view_frames and the turtlesim window)

2014-04-20 13:20:26 -0500 marked best answer Robotino make error

Hi guys,

I am using ROS Fuerte and Ubuntu 12.04, and I am trying to use the Robotino stacks. I have followed the instructions from http://wiki.openrobotino.org/index.php?title=ROS and downloaded the api2 packages on my laptop. The CF card has an image of version 2.4 (I think api1). I know I need to update my CF card to api2, but I still expected the build to succeed (even if the connection to the Robotino will fail until I update the CF card to api2). I have added robotino-ros-pkg to my ROS_PACKAGE_PATH. When I run the command: $ rosmake --pre-clean robotino

I get the following error at the end:

Linking CXX executable ../bin/robotino_camera_node
/usr/local/robotino/api2/lib/librec_robotino_api2.so: could not read symbols: File in wrong format
collect2: ld returned 1 exit status
make[3]: *** [../bin/robotino_camera_node] Error 1
make[3]: Leaving directory `/home/samme/robotino-ros-pkg/trunk/robotino/robotino_node/build'
make[2]: *** [CMakeFiles/robotino_camera_node.dir/all] Error 2
make[2]: Leaving directory `/home/samme/robotino-ros-pkg/trunk/robotino/robotino_node/build'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/samme/robotino-ros-pkg/trunk/robotino/robotino_node/build'
-------------------------------------------------------------------------------}
[ rosmake ] Output from build of package robotino_node written to:
[ rosmake ]   /home/samme/.ros/rosmake/rosmake_output-20121231-130653/robotino_node/build_output.log
[rosmake-0] Finished <<< robotino_node [FAIL] [ 7.27 seconds ]
[ rosmake ] Halting due to failure in package robotino_node.
[ rosmake ] Waiting for other threads to complete.
[ rosmake ] Results:
[ rosmake ] Cleaned 51 packages.
[ rosmake ] Built 47 packages with 1 failures.
[ rosmake ] Summary output to directory
[ rosmake ] /home/samme/.ros/rosmake/rosmake_output-20121231-130653

Any ideas why I am getting this error? /usr/local/robotino/api2/lib/librec_robotino_api2.so: could not read symbols: File in wrong format

Thanks and best regards,

Khalid

2014-04-20 13:20:07 -0500 marked best answer remapping in launch files

Hi guys,

I have created a launch file that launches the camera1394 driver and displays the color image using the image_proc and image_view packages.

I am also trying to launch the dynamic_reconfigure node and connect it to the camera1394 node, however I am having a problem with the remapping, here is what I am trying to do in my launch file:

<node pkg="dynamic_reconfigure" type="testserver" name="dynamic_reconfigure">
  <remap from="XXX" to="camera1394_nodelet/parameter_updates" />
</node>

I am not sure where to get the XXX name from. Could someone please tell me where I can find the names of the inputs/outputs of a node (in general)?

Thanks and regards, Khalid

2014-04-20 13:17:46 -0500 marked best answer RGBDSLAM error: Trust certificate g2o problem

Hey guys,

I am using Ubuntu 12.04 and ROS Fuerte, and I am trying to run rgbdslam.

I followed the instructions given in this post: http://answers.ros.org/question/50585/rgbdslam-error-in-fuerte-and-electric/ and I got the following error:

svn --non-interactive --config-dir /home/samme/trunk/g2o/svnconf co https://svn.openslam.org/data/svn/g2o/trunk build/g2o
svn: OPTIONS of 'https://svn.openslam.org/data/svn/g2o/trunk': Server certificate verification failed: issuer is not trusted (https://svn.openslam.org)

Note: I had previously downloaded g2o from the svn: https://openslam.informatik.uni-freiburg.de/data/svn/g2o/trunk/. I then moved the file trunk to the rubbish bin and downloaded g2o again from the svn : https://code.ros.org/svn/ros-pkg/stacks/vslam/trunk/

I have been told to add the line: SVN_CMDLINE = svn--trust-server-cert--non-interactive --config-dir $(TOP_DIR)/svnconf to the makefile in the g2o folder.

However I am now getting this error:

svn--trust-server-cert--non-interactive --config-dir /home/samme/trunk/g2o/svnconf co https://svn.openslam.org/data/svn/g2o/trunk build/g2o
make: svn--trust-server-cert--non-interactive: Command not found

Am I doing the command correctly? I really appreciate your help.

Regards,

Khalid

2014-04-20 13:17:43 -0500 marked best answer Kinect not working

Hello,

I am trying to get the Kinect to work on ROS Fuerte and Ubuntu 12.04. I have installed the OpenNI drivers by typing the command:

sudo apt-get install ros-fuerte-openni-launch

and then in a new terminal: roslaunch openni_launch openni.launch

I get the following errors:


... logging to /home/samme/.ros/log/ccef1f80-4728-11e2-bace-60672021cf54/roslaunch-samme-Latitude-E6320-14241.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://samme-Latitude-E6320:49615/

SUMMARY

PARAMETERS
 * /camera/depth/rectify_depth/interpolation
 * /camera/depth_registered/rectify_depth/interpolation
 * /camera/disparity_depth/max_range
 * /camera/disparity_depth/min_range
 * /camera/disparity_depth_registered/max_range
 * /camera/disparity_depth_registered/min_range
 * /camera/driver/depth_camera_info_url
 * /camera/driver/depth_frame_id
 * /camera/driver/depth_registration
 * /camera/driver/device_id
 * /camera/driver/rgb_camera_info_url
 * /camera/driver/rgb_frame_id
 * /rosdistro
 * /rosversion

NODES
  /camera/depth/
    metric (nodelet/nodelet)
    metric_rect (nodelet/nodelet)
    points (nodelet/nodelet)
    rectify_depth (nodelet/nodelet)
  /camera/rgb/
    debayer (nodelet/nodelet)
    rectify_color (nodelet/nodelet)
    rectify_mono (nodelet/nodelet)
  /
    camera_base_link (tf/static_transform_publisher)
    camera_base_link1 (tf/static_transform_publisher)
    camera_base_link2 (tf/static_transform_publisher)
    camera_base_link3 (tf/static_transform_publisher)
    camera_nodelet_manager (nodelet/nodelet)
  /camera/
    disparity_depth (nodelet/nodelet)
    disparity_depth_registered (nodelet/nodelet)
    driver (nodelet/nodelet)
    points_xyzrgb_depth_rgb (nodelet/nodelet)
    register_depth_rgb (nodelet/nodelet)
  /camera/ir/
    rectify_ir (nodelet/nodelet)
  /camera/depth_registered/
    metric (nodelet/nodelet)
    metric_rect (nodelet/nodelet)
    rectify_depth (nodelet/nodelet)

ROS_MASTER_URI=http://localhost:11311

core service [/rosout] found
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[camera_nodelet_manager-1]: started with pid [14261]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[camera/driver-2]: started with pid [14262]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[camera/rgb/debayer-3]: started with pid [14293]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[camera/rgb/rectify_mono-4]: started with pid [14321]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[camera/rgb/rectify_color-5]: started with pid [14343]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[camera/ir/rectify_ir-6]: started with pid [14361]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[camera/depth/rectify_depth-7]: started with pid [14383]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[camera/depth/metric_rect-8]: started with pid [14410]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[camera/depth/metric-9]: started with pid [14462]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[camera/depth/points-10]: started with pid [14532]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2 ...

(more)
2014-04-20 13:17:39 -0500 marked best answer RGBD-SLAM error

Hello,

I am trying to run the rgbdslam algorithm by following the instructions described in http://www.ros.org/wiki/rgbdslam. This is what I get when I type in the command rosmake rgbdslam_freiburg

[ rosmake ] rosmake starting...
[ rosmake ] Packages requested are: ['rgbdslam_freiburg']
[ rosmake ] Logging to directory /home/samme/.ros/rosmake/rosmake_output-20121216-143500
[ rosmake ] Expanded args ['rgbdslam_freiburg'] to: ['rgbdslam']
[rosmake-0] Starting >>> roslang [ make ]
[rosmake-1] Starting >>> geometry_msgs [ make ]
[rosmake-2] Starting >>> opencv2 [ make ]
[rosmake-0] Finished <<< roslang No Makefile in package roslang
[rosmake-2] Finished <<< opencv2 ROS_NOBUILD in package opencv2
[rosmake-3] Starting >>> bullet [ make ]
[rosmake-1] Finished <<< geometry_msgs No Makefile in package geometry_msgs
[rosmake-2] Starting >>> rospy [ make ]
[rosmake-0] Starting >>> roscpp [ make ]
[rosmake-1] Starting >>> sensor_msgs [ make ]
[rosmake-3] Finished <<< bullet ROS_NOBUILD in package bullet
[rosmake-0] Finished <<< roscpp No Makefile in package roscpp
[rosmake-3] Starting >>> rosconsole [ make ]
[rosmake-1] Finished <<< sensor_msgs No Makefile in package sensor_msgs
[rosmake-0] Starting >>> angles [ make ]
[rosmake-0] Finished <<< angles ROS_NOBUILD in package angles
[rosmake-1] Starting >>> rostest [ make ]
[rosmake-0] Starting >>> roswtf [ make ]
[rosmake-2] Finished <<< rospy No Makefile in package rospy
[rosmake-2] Starting >>> image_geometry [ make ]
[rosmake-3] Finished <<< rosconsole No Makefile in package rosconsole
[rosmake-3] Starting >>> message_filters [ make ]
[rosmake-0] Finished <<< roswtf No Makefile in package roswtf
[rosmake-2] Finished <<< image_geometry ROS_NOBUILD in package image_geometry
[rosmake-2] Starting >>> std_msgs [ make ]
[rosmake-0] Starting >>> rosbag [ make ]
[rosmake-1] Finished <<< rostest No Makefile in package rostest
[rosmake-3] Finished <<< message_filters No Makefile in package message_filters
[rosmake-1] Starting >>> rosbuild [ make ]
[rosmake-3] Starting >>> tf [ make ]
[rosmake-2] Finished <<< std_msgs No Makefile in package std_msgs
[rosmake-1] Finished <<< rosbuild No Makefile in package rosbuild
[rosmake-2] Starting >>> pcl [ make ]
[rosmake-0] Finished <<< rosbag No Makefile in package rosbag
[rosmake-3] Finished <<< tf ROS_NOBUILD in package tf
[rosmake-0] Starting >>> roslib [ make ]
[rosmake-1] Starting >>> smclib [ make ]
[rosmake-3] Starting >>> rosservice [ make ]
[rosmake-1] Finished <<< smclib ROS_NOBUILD in package smclib
[rosmake-1] Starting >>> common_rosdeps [ make ]
[rosmake-0] Finished <<< roslib No Makefile in package roslib
[rosmake-2] Finished <<< pcl ROS_NOBUILD in package pcl  No Makefile in package pcl
[rosmake-2] Starting >>> cv_bridge [ make ]
[rosmake-3] Finished <<< rosservice No Makefile in package rosservice
[rosmake-2] Finished <<< cv_bridge ROS_NOBUILD in package cv_bridge
[rosmake-3] Starting >>> dynamic_reconfigure [ make ]
[rosmake-0] Starting >>> pluginlib [ make ]
[rosmake-2] Starting >>> bond [ make ]
[rosmake-1] Finished <<< common_rosdeps ROS_NOBUILD in package common_rosdeps
[rosmake-1] Starting >>> visualization_msgs [ make ]
[rosmake-0] Finished <<< pluginlib ROS_NOBUILD in package pluginlib
[rosmake-2] Finished <<< bond ROS_NOBUILD in package bond
[rosmake-3] Finished <<< dynamic_reconfigure ROS_NOBUILD in package dynamic_reconfigure
[rosmake-0] Starting >>> mk [ make ]
[rosmake-0] Finished <<< mk No Makefile in package mk
[rosmake-2] Starting >>> bondcpp [ make ]
[rosmake-3] Starting >>> orocos_kdl [ make ]
[rosmake-0] Starting >>> opencv_tests [ make ]
[rosmake-0] Finished <<< opencv_tests ROS_NOBUILD in package opencv_tests
[rosmake-2] Finished <<< bondcpp ROS_NOBUILD in package bondcpp
[rosmake-3] Finished <<< orocos_kdl ROS_NOBUILD in package orocos_kdl
[rosmake-3] Starting >>> python_orocos_kdl [ make ]
[rosmake-1] Finished <<< visualization_msgs No Makefile in package visualization_msgs
[rosmake-1] Starting >>> nodelet [ make ]
[rosmake-1] Finished <<< nodelet ROS_NOBUILD in package nodelet
[rosmake-3] Finished <<< python_orocos_kdl ROS_NOBUILD in package python_orocos_kdl
[rosmake-3] Starting >>> kdl [ make ]
[rosmake-1] Starting >>> nodelet_topic_tools [ make ]
[rosmake-3] Finished <<< kdl ROS_NOBUILD in package kdl  No Makefile in package kdl
[rosmake-1] Finished <<< nodelet_topic_tools ROS_NOBUILD in package nodelet_topic_tools
[rosmake-1] Starting >>> pcl_ros [ make ]
[rosmake-2] Starting >>> tf_conversions [ make ]
[rosmake-3] Starting >>> eigen_conversions [ make ]
[rosmake-1] Finished <<< pcl_ros ROS_NOBUILD in package pcl_ros
[rosmake-1] Starting >>> rgbdslam [ make ]
[rosmake-2] Finished <<< tf_conversions ROS_NOBUILD in package tf_conversions
[rosmake-3] Finished ...

(more)

2014-04-20 13:17:20 -0500 marked best answer rqt_console and rqt_logger_level

Hello,

I am very new to ROS; I am using Ubuntu 12.04.1 LTS and ROS Fuerte. I have been going through the tutorials and I came across a problem. When I type the command:

rosrun rqt_console rqt_console

I get:

[rosrun] Couldn't find executable named rqt_console below /opt/ros/fuerte/stacks/rqt/rqt_console
[rosrun] Found the following, but they're either not files,
[rosrun] or not executable:
[rosrun]   /opt/ros/fuerte/stacks/rqt/rqt_console
[rosrun]   /opt/ros/fuerte/stacks/rqt/rqt_console/src/rqt_console

and similarly with rqt_logger_level, although rqt_gui/rqt_graph works fine. Is there a solution to this problem?

Thanks and best regards,

Khalid Yousif

2014-04-20 06:56:12 -0500 marked best answer Concatenating pointclouds gets very slow

Hi guys, I have implemented an algorithm that uses RANSAC to align point clouds obtained from consecutive scenes and adds them to a point cloud representing the map, called total_cloud, using the PCL concatenation function. So total_cloud starts empty and then gradually grows as more point clouds are aligned and added.

My question is: why does it get slower and slower as the map gets bigger? I am not using total_cloud in any other computation; I am only adding the newly aligned point cloud to it. Is this normal, or am I doing something wrong? From my understanding, total_cloud is just a chunk of memory that grows dynamically and is not part of any computation.
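To make the step concrete, here is a minimal sketch of the accumulation as I understand it; the reserve() call is only something I am wondering about (repeated reallocation as the vector grows), not something I currently do, and the function name is illustrative:

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>

typedef pcl::PointCloud<pcl::PointXYZRGB> Cloud;

// Append each aligned frame to the growing map cloud.
void appendToMap(Cloud& total_cloud, const Cloud& aligned_frame)
{
  // Grow capacity ahead of time so each append does not reallocate and copy the whole map.
  if (total_cloud.points.capacity() < total_cloud.size() + aligned_frame.size())
    total_cloud.points.reserve(2 * (total_cloud.size() + aligned_frame.size()));

  total_cloud += aligned_frame;   // pcl::PointCloud concatenation
}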

Any help or advice would be much appreciated.

Best regards, Khalid

2014-04-20 06:55:58 -0500 marked best answer Fuerte VS Hydro: 3D mapping algorithm runs very slow after Hydro migration?

Hello,

I have recently installed ROS Hydro, mainly because of PCL 1.7. I had a 3D registration algorithm that I developed on ROS Fuerte and ported to ROS Hydro. I am using the same laptop and everything is identical. However, the algorithm runs extremely slowly on ROS Hydro (less than a frame per second), while it was running at over 10 fps on ROS Fuerte. I haven't changed anything in the code except for the parts that are deprecated and needed to be updated for PCL 1.7. Any idea what might be causing this extreme slowdown?

Update: I have observed that the following function runs much slower now on Hydro, although it is identical and I did not change anything at all. This function is part of a RANSAC-style transformation estimation algorithm.

void Transformation::computeInliersAndError (
    const std::vector<cv::DMatch>& matches,
    const Eigen::Matrix4f& transformation,
    const std::vector<Eigen::Vector4f, Eigen::aligned_allocator<Eigen::Vector4f> >& origins,
    const std::vector<Eigen::Vector4f, Eigen::aligned_allocator<Eigen::Vector4f> >& earlier,
    std::vector<cv::DMatch>& inliers,                 // output var
    double& mean_error, std::vector<double>& errors)  // output vars
{
  inliers.clear ();
  errors.clear ();

  std::vector<std::pair<float, int> > dists;
  std::vector<cv::DMatch> inliers_temp;

  assert(matches.size() > 0);
  mean_error = 0.0;
  for (unsigned int j = 0; j < matches.size (); j++)
  { //compute new error and inliers
    unsigned int this_id = matches[j].queryIdx;
    unsigned int earlier_id = matches[j].trainIdx;

    Eigen::Vector4f vec = (transformation * origins[this_id]) - earlier[earlier_id];

    double error = vec.dot (vec);
    error = sqrt (error);

    dists.push_back (std::pair<float, int> (error, j));
    inliers_temp.push_back (matches[j]); //include inlier

    mean_error += error;
    errors.push_back (error);
  }
}
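
In case it helps anyone reproduce the comparison, this is roughly how I am timing the call on both distros (wall-clock only; estimator and the argument names stand in for the objects in the surrounding RANSAC loop, so this fragment is illustrative rather than complete):

#include <ros/ros.h>

// Wall-clock timing around the suspect call, to compare Fuerte vs. Hydro.
ros::WallTime t0 = ros::WallTime::now();
estimator.computeInliersAndError(matches, transformation, origins, earlier,
                                 inliers, mean_error, errors);
double ms = (ros::WallTime::now() - t0).toSec() * 1000.0;
ROS_INFO("computeInliersAndError: %.3f ms for %zu matches", ms, matches.size());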

I really appreciate your help.

Regards, Khalid