2021-04-01 14:42:59 -0500 | received badge | ● Favorite Question (source) |
2020-08-24 20:44:02 -0500 | received badge | ● Famous Question (source) |
2018-07-24 19:29:03 -0500 | received badge | ● Great Question (source) |
2017-06-08 04:26:14 -0500 | received badge | ● Good Question (source) |
2017-06-07 20:14:41 -0500 | marked best answer | Concatenating relative transformations to obtain a global transformation Hi guys, I am trying to implement a visual odometry system that matches features between sequential frames and calculates the rigid-body transformation between the corresponding points using RANSAC. I am successfully obtaining the RANSAC transformation that aligns the points from the two frames. The relative transformation is of type When viewing the transformation in RViz, it keeps jumping around even if the camera is not moving. Any idea what I am doing wrong? Any help would be much appreciated. Thanks, Khalid |
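The accumulation step the question describes — composing each frame-to-frame RANSAC result into a global pose — can be sketched with plain NumPy (the 4x4 matrices and the example values here are hypothetical, not from the original code):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(theta):
    """Rotation about z by theta radians (used for illustration)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Relative transforms: T_01 is the pose of frame 1 in frame 0, T_12 the pose
# of frame 2 in frame 1 (e.g. each one a RANSAC alignment result).
T_01 = make_T(rot_z(0.1), [1.0, 0.0, 0.0])
T_12 = make_T(rot_z(0.2), [1.0, 0.0, 0.0])

# Global pose of frame 2 in frame 0: right-multiply in acquisition order.
T_02 = T_01 @ T_12
```

A pose that jumps around in RViz while the camera is still is often a sign that the raw frame-to-frame transform is being published each cycle instead of the accumulated product, or that noisy near-identity estimates are being chained without filtering.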
2017-04-12 22:15:43 -0500 | received badge | ● Nice Answer (source) |
2017-03-13 08:37:08 -0500 | received badge | ● Nice Answer (source) |
2016-08-08 01:35:53 -0500 | marked best answer | How to use perception_odu as a dependency? Hi guys, I have created a package for 3D registration using a Kinect, and I would like to use the normal distribution transform that is available in the perception_odu package. I have downloaded and compiled this package successfully in my fuerte_workspace and added it to the ROS_PACKAGE_PATH. However, when I add this line to the manifest.xml: Any help on how to use this package in my registration package would be much appreciated. Thanks, Khalid |
2016-03-25 22:05:10 -0500 | received badge | ● Good Question (source) |
2016-03-03 06:14:58 -0500 | received badge | ● Great Answer (source) |
2016-02-14 01:36:33 -0500 | received badge | ● Nice Question (source) |
2015-03-10 15:20:40 -0500 | received badge | ● Notable Question (source) |
2015-03-10 15:20:40 -0500 | received badge | ● Popular Question (source) |
2014-11-25 17:29:51 -0500 | received badge | ● Nice Question (source) |
2014-11-04 09:25:21 -0500 | received badge | ● Nice Answer (source) |
2014-07-09 20:54:06 -0500 | commented question | Concatenating relative transformations to obtain a global transformation Hi Hamed, Try transformEigenToTF (with the small t instead of capital T). |
2014-04-20 13:32:39 -0500 | marked best answer | Compile is successful, but no executable generated? Hi guys, I have created my own package in which I do some image processing with OpenCV and PCL. After compiling, I get no errors and the build is successful; however, no executable is generated in the /bin folder. Note: this is not my first package; I have created other nodes with no problem. In the CMakeLists.txt, I have added the following lines Here is the output of the compilation: When I run the command I am not sure why the executable is not being generated here, as I have created many other nodes and never had this problem. Any help would be greatly appreciated. Thanks and regards, Khalid |
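For reference, a minimal rosbuild-era CMakeLists.txt sketch that does produce a binary under bin/; the target and source names are placeholders, not the ones from the original package:

```cmake
cmake_minimum_required(VERSION 2.4.6)
include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake)
rosbuild_init()

# rosbuild places executables in <package>/bin by default; a build that
# "succeeds" with no binary usually means no rosbuild_add_executable() was
# declared (or the target name differs from the one being looked for).
rosbuild_add_executable(my_node src/my_node.cpp)
target_link_libraries(my_node ${OpenCV_LIBS})
```

It is worth checking whether the compilation output actually contains a "Linking CXX executable" line for the node; if only libraries are linked, the executable target is missing.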
2014-04-20 13:29:22 -0500 | marked best answer | Kinect RGB + Depth to .pcd format I have tried the rgbd_registration package, in which a source cloud (first frame) and a target cloud (another frame) in .pcd format are passed to the algorithm; the algorithm restores the RGB images from the .pcd files, detects and matches features between the two frames, and then aligns the reconstructed 3D points. What I want to do is construct my own .pcd files containing depth and RGB images from a Kinect so I can pass them to this algorithm for alignment, instead of using their scenes. Is there any tutorial or example on how to store both depth and RGB information in a .pcd file? Thanks for your help Khalid |
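On the storage side, a PointXYZRGB-style .pcd keeps x, y, z plus an rgb field whose 24-bit colour is bit-packed into the byte pattern of a float, which is how PCL lays it out. A small ASCII-PCD writer sketch (the function name and formatting choices are mine, not from rgbd_registration):

```python
import struct

def write_rgb_pcd(path, points):
    """Write an ASCII .pcd file with x y z and a packed rgb field.

    points: iterable of (x, y, z, r, g, b) with r, g, b in 0..255.
    """
    lines = []
    for x, y, z, r, g, b in points:
        # PCL packs r,g,b into the low 24 bits of an unsigned int, then
        # stores that bit pattern reinterpreted as a float.
        rgb_int = (int(r) << 16) | (int(g) << 8) | int(b)
        rgb_float = struct.unpack('f', struct.pack('I', rgb_int))[0]
        lines.append(f"{x} {y} {z} {rgb_float:.9e}")
    n = len(lines)
    header = (
        "# .PCD v0.7 - Point Cloud Data file format\n"
        "VERSION 0.7\n"
        "FIELDS x y z rgb\n"
        "SIZE 4 4 4 4\n"
        "TYPE F F F F\n"
        "COUNT 1 1 1 1\n"
        f"WIDTH {n}\n"
        "HEIGHT 1\n"
        "VIEWPOINT 0 0 0 1 0 0 0\n"
        f"POINTS {n}\n"
        "DATA ascii\n"
    )
    with open(path, 'w') as f:
        f.write(header + "\n".join(lines) + "\n")
```

The depth image supplies z (and, with the camera intrinsics, x and y), while the colour image supplies r, g, b per pixel; each valid pixel becomes one point.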
2014-04-20 13:26:15 -0500 | marked best answer | ccny_rgbd with vo+mapping slow, map disappears Hello Dryanovski and those who have used ccny_rgbd, I am using ROS Fuerte and Ubuntu 12.04 on an i5 Dell laptop. I have followed the instructions given to launch your method; the visual_odometry is quite fast. However, when I launch vo+mapping.launch, it is extremely slow and the alignment is really bad. Also, the parts of the map that are not in view disappear after a very short time. I am not sure if I am doing something wrong or if this is how it is. I was hoping to build a map similar to the video you posted, with similar performance. Thanks for your help, Regards, Khalid |
2014-04-20 13:23:09 -0500 | marked best answer | Processing an image outside the Callback function Hi guys, The answer to this question is probably trivial, but I have looked around and cannot find the answer I am looking for. I have written a simple Harris corner detector algorithm in which I receive an image from the I have read in other posts that doing processing inside the callback function is not the correct way and that processing should be in the main loop. Which leads to my question: how can I access What am I missing? I have posted my code for clarification. Thanks a lot for your help. (more) |
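Independent of ROS, the pattern those posts recommend looks like this: the callback only stores the newest message, and the main loop does the heavy processing. All class and function names here are illustrative, not from the original post:

```python
class LatestFrameHolder:
    """Shared state between the subscriber callback and the main loop."""

    def __init__(self):
        self.latest = None

    def callback(self, msg):
        # Keep the callback cheap: just stash the newest frame.
        self.latest = msg

def main_loop(holder, process, iterations):
    """Process the most recent frame on each loop iteration."""
    results = []
    for _ in range(iterations):
        if holder.latest is not None:
            results.append(process(holder.latest))
    return results

holder = LatestFrameHolder()
holder.callback("frame-1")          # stands in for the subscriber firing
out = main_loop(holder, str.upper, 2)
```

In roscpp the loop body would also call ros::spinOnce() and sleep on a ros::Rate so that callbacks keep being serviced between processing steps.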
2014-04-20 13:22:58 -0500 | marked best answer | Corner detection using opencv and ROS Hi guys, I have implemented an algorithm that uses the opencv2 Harris corner detector. I have a few questions, but first let me explain my system: I have copied the opencv2 Harris corner detector and placed it in a callback function in which messages of type 1- If I want to transfer this back to a ROS type image, 2- I tried to use a second 3- In my program, although the corners are detected, if I move the camera too quickly there is a lag and sometimes the image shown goes black for a couple of seconds. I am currently using If there is any additional information you would like me to explain, please let me know. I really appreciate your help. Regards, Khalid |
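For reference, the response that the OpenCV Harris detector computes is R = det(M) - k·trace(M)² over a windowed structure tensor M. A plain-NumPy sketch of that computation (the window size, toy image, and k value are my choices for illustration):

```python
import numpy as np

def box_sum(a, r=1):
    """Sum each value over a (2r+1)x(2r+1) neighbourhood (zero-padded)."""
    p = np.pad(a, r)
    out = np.zeros_like(a, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

def harris_response(img, k=0.04):
    """Harris response R = det(M) - k*trace(M)^2 of the windowed
    structure tensor M built from image gradients."""
    iy, ix = np.gradient(img.astype(float))
    sxx = box_sum(ix * ix)
    syy = box_sum(iy * iy)
    sxy = box_sum(ix * iy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# Toy image: a bright quadrant whose corner sits at pixel (5, 5).
img = np.zeros((11, 11))
img[5:, 5:] = 1.0
R = harris_response(img)
```

A true corner (two strong gradient directions in the window) yields a positive response; a point on a straight edge (one dominant direction) yields a response at or below zero, which is what makes thresholding R work.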
2014-04-20 13:22:50 -0500 | marked best answer | VISO2 pitch angle Hello, I have managed to get the ros_viso2 package to work; I am just wondering where the pitch angle is measured from. For example, I would like to point the camera upwards, looking at the ceiling: what should my pitch angle be, and is it in degrees or radians? I am viewing the outcome of the visual odometry in rviz, where I set the fixed frame to /odom and the target frame to base_link. Is this the best way to visualize the camera/robot path? Thanks a lot for your help. Khalid |
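On units: ROS follows REP 103, which uses radians throughout, so 90° is π/2 ≈ 1.5708 rad. Whether "looking up" is +π/2 or -π/2 depends on the frame convention; the sketch below assumes a z-up, x-forward body frame with right-handed pitch about the y-axis (a common but not universal choice — the sign viso2 expects should be checked against its documentation):

```python
import math
import numpy as np

def pitch_matrix(pitch):
    """Right-handed rotation about the y-axis by `pitch` radians."""
    c, s = math.cos(pitch), math.sin(pitch)
    return np.array([[c,   0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s,  0.0, c]])

# Under this convention a positive pitch tips the nose down, so pointing
# the camera at the ceiling (+z) is pitch = -pi/2.
R = pitch_matrix(-math.pi / 2)
up_gaze = R @ np.array([1.0, 0.0, 0.0])   # forward axis after rotation
```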
2014-04-20 13:22:46 -0500 | marked best answer | Monocular camera calibration Hi guys, I have followed the monocular calibration tutorial and have calibrated my Firefly_mv (which uses the DC1394 drivers) image. The calibration seems to be successful, since /camera/camera_info is publishing the correct information. However, when I type rostopic echo /image_rect, nothing seems to be published. Where can I find the rectified/calibrated image? edit: $ rostopic list: Thanks, Khalid |
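One possibility worth checking: image_proc publishes the rectified image inside the camera's namespace, so the topic would be /camera/image_rect rather than /image_rect, and the image_proc node has to actually be running for anything to appear there. A launch sketch (the "camera" namespace matches the topics quoted above, but is an assumption about this setup):

```xml
<launch>
  <!-- Run image_proc inside the camera namespace so it subscribes to
       /camera/image_raw and /camera/camera_info and publishes
       /camera/image_rect and /camera/image_rect_color. -->
  <group ns="camera">
    <node pkg="image_proc" type="image_proc" name="image_proc"/>
  </group>
</launch>
```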
2014-04-20 13:22:31 -0500 | marked best answer | Viso2 mono_odometer error Hi guys, I am trying to use the viso2 package to obtain monocular visual odometry with a Firefly mv camera. The camera works fine and I am able to view the images. However, when I run the command: rosrun viso2_ros mono_odometer image:=/camera/image_raw I get the following warnings and error: Using default: 1.000000 Using default: 0.000000 Segmentation fault (core dumped) Any idea what is causing this segmentation fault? Thanks and best regards, Khalid |
2014-04-20 13:22:26 -0500 | marked best answer | learning_tf turtlesim tutorial: second turtle twirling Hi guys, I have followed the tf tutorials and implemented all the instructions to get the second turtle to follow the first turtle, which in turn is controlled via the keyboard. However, when I launch start_demo.launch, the second turtle starts twirling continuously even without moving the first turtle. When I move the first turtle, I can see that the second turtle changes its movement slightly while still rotating. I am not sure what the problem is. I have attached an image showing the PDF file created by rosrun tf view_frames and the turtlesim. Here's what I get: $ rosrun tf view_frames Listening to /tf for 5.000000 seconds Done Listening Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored dot - graphviz version 2.26.3 (20100126.1600) Detected dot version 2.26.3 Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored frames.pdf generated Any help would be much appreciated. Regards, Khalid |
2014-04-20 13:20:26 -0500 | marked best answer | Robotino make error Hi guys, I am using ROS Fuerte and Ubuntu 12.04 and am trying to use the robotino stacks. I have followed the instructions from http://wiki.openrobotino.org/index.php?title=ROS and have downloaded the api2 packages on my laptop running Ubuntu 12.04. The CF card has an image of version 2.4 (I think api1). I know I need to update my CF card to api2, but I still expected the make to succeed (even if the connection to Robotino fails until I update the CF card to api2). I have added robotino-ros-pkg to my ROS_PACKAGE_PATH. When I run the command: $ rosmake --pre-clean robotino I get the following error at the end: Linking CXX executable ../bin/robotino_camera_node
/usr/local/robotino/api2/lib/librec_robotino_api2.so: could not read symbols: File in wrong format
collect2: ld returned 1 exit status
make[3]: *** [../bin/robotino_camera_node] Error 1
make[3]: Leaving directory Any ideas why I am getting this error? Thanks and best regards, Khalid |
2014-04-20 13:20:07 -0500 | marked best answer | remapping in launch files Hi guys, I have created a launch file that launches the camera1394 driver and displays the color image using the image_proc and image_view packages. I am also trying to launch the dynamic_reconfigure node and connect it to the camera1394 node, but I am having a problem with the remapping. Here is what I am trying to do in my launch file: I am not sure where to get the XXX name from. Could someone please tell me where I can find the names of the inputs/outputs of a node (in general)? Thanks and regards, Khalid |
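The general remap syntax looks like this; the topic names below are placeholders for illustration, except that image_view really does subscribe to a topic it calls "image" internally. "from" is the name the node uses internally, "to" is the actual topic you want it connected to:

```xml
<launch>
  <!-- Connect image_view's internal "image" subscription to the
       camera's colour topic (the /camera/image_color name is an
       assumption about this particular setup). -->
  <node pkg="image_view" type="image_view" name="viewer">
    <remap from="image" to="/camera/image_color"/>
  </node>
</launch>
```

The internal input/output names a node expects are listed on its wiki page, and can also be inspected at runtime with rosnode info &lt;node&gt; or rostopic list once the node is running.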
2014-04-20 13:17:46 -0500 | marked best answer | RGBDSLAM error: Trust certificate g2o problem Hey guys, I am using Ubuntu 12.04 and ROS Fuerte and am trying to run rgbdslam. I followed the instructions given in this post: http://answers.ros.org/question/50585/rgbdslam-error-in-fuerte-and-electric/ and got the following error: svn --non-interactive --config-dir /home/samme/trunk/g2o/svnconf co https://svn.openslam.org/data/svn/g2o/trunk build/g2o svn: OPTIONS of 'https://svn.openslam.org/data/svn/g2o/trunk': Server certificate verification failed: issuer is not trusted (https://svn.openslam.org) Note: I had previously downloaded g2o from the svn: https://openslam.informatik.uni-freiburg.de/data/svn/g2o/trunk/. I then moved the trunk folder to the rubbish bin and downloaded g2o again from the svn: https://code.ros.org/svn/ros-pkg/stacks/vslam/trunk/ I have been told to add the line: SVN_CMDLINE = svn--trust-server-cert--non-interactive --config-dir $(TOP_DIR)/svnconf to the makefile in the g2o folder. However I am now getting this error: svn--trust-server-cert--non-interactive --config-dir /home/samme/trunk/g2o/svnconf co https://svn.openslam.org/data/svn/g2o/trunk build/g2o make: svn--trust-server-cert--non-interactive: Command not found Am I doing the command correctly? I really appreciate your help. Regards, Khalid |
2014-04-20 13:17:43 -0500 | marked best answer | Kinect not working Hello, I am trying to get the Kinect to work on ROS Fuerte and Ubuntu 12.04. I installed the OpenNI drivers by typing the command: sudo apt-get install ros-fuerte-openni-launch and then, in a new terminal: roslaunch openni_launch openni.launch I get the following errors:
... logging to /home/samme/.ros/log/ccef1f80-4728-11e2-bace-60672021cf54/roslaunch-samme-Latitude-E6320-14241.log
Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://samme-Latitude-E6320:49615/
SUMMARY
PARAMETERS: /camera/depth/rectify_depth/interpolation, /camera/depth_registered/rectify_depth/interpolation, /camera/disparity_depth/max_range, /camera/disparity_depth/min_range, /camera/disparity_depth_registered/max_range, /camera/disparity_depth_registered/min_range, /camera/driver/depth_camera_info_url, /camera/driver/depth_frame_id, /camera/driver/depth_registration, /camera/driver/device_id, /camera/driver/rgb_camera_info_url, /camera/driver/rgb_frame_id, /rosdistro, /rosversion
NODES: /camera/depth/ metric, metric_rect, points, rectify_depth (all nodelet/nodelet); /camera/rgb/ debayer, rectify_color, rectify_mono (nodelet/nodelet); / camera_base_link, camera_base_link1, camera_base_link2, camera_base_link3 (tf/static_transform_publisher), camera_nodelet_manager (nodelet/nodelet); /camera/ disparity_depth, disparity_depth_registered, driver, points_xyzrgb_depth_rgb, register_depth_rgb (nodelet/nodelet); /camera/ir/ rectify_ir (nodelet/nodelet); /camera/depth_registered/ metric, metric_rect, rectify_depth (nodelet/nodelet)
ROS_MASTER_URI=http://localhost:11311
core service [/rosout] found
Each process start is preceded by the line: Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[camera_nodelet_manager-1]: started with pid [14261]
process[camera/driver-2]: started with pid [14262]
process[camera/rgb/debayer-3]: started with pid [14293]
process[camera/rgb/rectify_mono-4]: started with pid [14321]
process[camera/rgb/rectify_color-5]: started with pid [14343]
process[camera/ir/rectify_ir-6]: started with pid [14361]
process[camera/depth/rectify_depth-7]: started with pid [14383]
process[camera/depth/metric_rect-8]: started with pid [14410]
process[camera/depth/metric-9]: started with pid [14462]
process[camera/depth/points-10]: started with pid [14532] ... (more) |
2014-04-20 13:17:39 -0500 | marked best answer | RGBD-SLAM error Hello, I am trying to run the rgbdslam algorithm by following the instructions described in http://www.ros.org/wiki/rgbdslam. This is what I get when I type in the command rosmake rgbdslam_freiburg [ rosmake ] rosmake starting... |
2014-04-20 13:17:20 -0500 | marked best answer | rqt_console and rqt_logger_level Hello, I am very new to ROS; I am using Ubuntu 12.04.1 LTS and ROS Fuerte. I have been going through the tutorials and came across a problem: when I type the command rosrun rqt_console rqt_console I get: [rosrun] Couldn't find executable named rqt_console below /opt/ros/fuerte/stacks/rqt/rqt_console [rosrun] Found the following, but they're either not files, [rosrun] or not executable: [rosrun] /opt/ros/fuerte/stacks/rqt/rqt_console [rosrun] /opt/ros/fuerte/stacks/rqt/rqt_console/src/rqt_console The same happens with rqt_logger_level, although rqt_gui/rqt_graph works fine. Is there a solution to this problem? Thanks and best regards, Khalid Yousif |
2014-04-20 06:56:12 -0500 | marked best answer | Concatenating pointclouds gets very slow Hi guys, I have implemented an algorithm that uses RANSAC to align point clouds obtained from consecutive scenes and adds them, using the PCL concatenate function, to a point cloud representing the map called total_cloud. So total_cloud starts empty and then gradually grows as more point clouds are aligned and added. My question is: why does it get slower and slower as the map gets bigger? I am not using total_cloud in any other computation; I am only adding each newly aligned point cloud to it. Is this normal, or am I doing something wrong? From my understanding, total_cloud is just a chunk of memory that grows dynamically and is not part of any computation. Any help or advice would be much appreciated. Best regards, Khalid |
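One way appending can get slower as the map grows: if each concatenation rebuilds or copies the whole accumulated cloud, total work is O(n²), whereas appending in place is amortized O(n). A pure-Python stand-in for the two patterns (no PCL involved; the chunk sizes are arbitrary):

```python
def grow_by_copy(chunks):
    """Concatenate into a fresh container each time: O(n^2) total work."""
    total, cost = [], 0
    for chunk in chunks:
        total = total + chunk      # copies everything accumulated so far
        cost += len(total)
    return total, cost

def grow_in_place(chunks):
    """Extend the existing container: amortized O(n) total work."""
    total, cost = [], 0
    for chunk in chunks:
        total.extend(chunk)        # only the new chunk is written
        cost += len(chunk)
    return total, cost

chunks = [[0] * 100 for _ in range(50)]
_, copy_cost = grow_by_copy(chunks)
_, extend_cost = grow_in_place(chunks)
```

In PCL terms, appending into an existing cloud is cheap; a likely cause of per-frame slowdown is anything that touches the entire total_cloud each iteration, such as republishing or re-rendering the full map in a visualizer after every alignment.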
2014-04-20 06:55:58 -0500 | marked best answer | Fuerte VS Hydro: 3D mapping algorithm runs very slow after Hydro migration? Hello, I have recently installed ROS Hydro, mainly for PCL 1.7. I had a 3D registration algorithm that I developed on ROS Fuerte and ported to ROS Hydro. I am using the same laptop and everything is identical; however, the algorithm runs extremely slowly on ROS Hydro (less than a frame per second), while it was running at over 10 fps on ROS Fuerte. I haven't changed anything in the code except for the parts that are deprecated and needed to be updated for PCL 1.7. Any idea what might be causing this extreme slowdown? Update: I have observed that the following function runs much slower on Hydro, although it is identical and I did not change anything at all. This function is part of a RANSAC-style transformation estimation algorithm. I really appreciate your help. Regards, Khalid |