2014-08-17 04:16:19 -0500 | received badge | ● Enlightened (source) |
2014-07-03 18:23:58 -0500 | received badge | ● Nice Answer (source) |
2013-11-03 07:56:17 -0500 | received badge | ● Self-Learner (source) |
2013-10-10 22:12:52 -0500 | received badge | ● Famous Question (source) |
2013-10-10 22:12:52 -0500 | received badge | ● Notable Question (source) |
2013-09-05 16:52:57 -0500 | received badge | ● Popular Question (source) |
2013-09-04 12:38:20 -0500 | answered a question | Custom OpenCV with ROS Groovy The workaround I found was to call link_directories() BEFORE rosbuild_init() in my CMakeLists.txt. I found this out by first looking at my gcc link line: at the end is -rpath,/opt/ros/groovy/lib, which seems to be set by the CMake macro LINK_DIRECTORIES, and this appeared to be the cause of the problem. /opt/ros/groovy/lib is added to LINK_DIRECTORIES by rosbuild_init(), so calling LINK_DIRECTORIES before rosbuild_init() puts my library path in front of it. This is not an ideal solution; if anyone has a better idea I would like to hear it.
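A minimal sketch of the workaround described above, assuming the custom OpenCV was installed under ~/svslocal (the package name, target name, and source file are placeholders, not from the original post):

```cmake
cmake_minimum_required(VERSION 2.8)
include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake)

# Workaround: register the custom OpenCV library path BEFORE
# rosbuild_init(), so it precedes /opt/ros/groovy/lib in the
# directories passed to the linker and the CUDA-enabled build wins.
link_directories($ENV{HOME}/svslocal/lib)

rosbuild_init()

find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})

# Hypothetical target for illustration only.
rosbuild_add_executable(opencv_test src/main.cpp)
target_link_libraries(opencv_test ${OpenCV_LIBS})
```

The ordering is the whole trick: rosbuild_init() appends /opt/ros/groovy/lib to the link directories, and the linker resolves libraries in directory order.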
2013-09-04 08:56:03 -0500 | asked a question | Custom OpenCV with Groovy I have a project which requires OpenCV with CUDA support, so I have a local version of OpenCV compiled with CUDA support, and I want to use this in ROS. In previous versions of ROS I achieved this by overlaying opencv2, which I pointed at my custom OpenCV version. Now I am trying to achieve this with CMake, with limited success. I made a simple test package (using rosbuild) depending only on roscpp (no opencv, no cv_bridge). The code looks like this: My CMakeLists.txt looks like this: I verified that ${OpenCV_INCLUDE_DIRS} and ${OpenCV_LIBS} point to the correct OpenCV (in my case ~/svslocal/include and ~/svslocal/lib). I then compile my code and run it with no problems. However, when I run ldd on the executable, it is using the ROS opencv_core instead of my local one. This matters because when I try it on my project with CUDA I get a runtime error telling me that the OpenCV library has been compiled without CUDA. My LD_LIBRARY_PATH: I also tried adding /home/kir1pal/svslocal/lib to the front of LD_LIBRARY_PATH, which did not change anything. Any idea what is going wrong here?
2013-09-04 08:54:05 -0500 | asked a question | Custom OpenCV with ROS Groovy I have a project which requires OpenCV with CUDA support, so I have a local version of OpenCV compiled with CUDA support, and I want to use this in ROS. In previous versions of ROS I achieved this by overlaying opencv2, which I pointed at my custom OpenCV version. Now I am trying to achieve this with CMake, with limited success. I made a simple test package (using rosbuild) depending only on roscpp (no opencv, no cv_bridge). The code looks like this: My CMakeLists.txt looks like this: I verified that ${OpenCV_INCLUDE_DIRS} and ${OpenCV_LIBS} point to the correct OpenCV (in my case ~/svslocal/include and ~/svslocal/lib). I then compile my code and run it with no problems. However, when I run ldd on the executable, it is using the ROS opencv_core instead of my local one. This matters because when I try it on my project with CUDA I get a runtime error telling me that the OpenCV library has been compiled without CUDA. My LD_LIBRARY_PATH: I also tried adding /home/kir1pal/svslocal/lib to the front of LD_LIBRARY_PATH, which did not change anything. Any idea what is going wrong here?
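The CMakeLists.txt referenced in the question was lost from this page. A hypothetical reconstruction matching the description (rosbuild, roscpp only, custom OpenCV located via find_package) might look like the following; the OpenCV_DIR path and target name are assumptions, not the asker's actual file:

```cmake
cmake_minimum_required(VERSION 2.8)
include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake)
rosbuild_init()

# Point find_package at the custom CUDA-enabled build; the
# share/OpenCV location of OpenCVConfig.cmake is an assumption.
set(OpenCV_DIR $ENV{HOME}/svslocal/share/OpenCV)
find_package(OpenCV REQUIRED)

include_directories(${OpenCV_INCLUDE_DIRS})
rosbuild_add_executable(opencv_test src/main.cpp)
target_link_libraries(opencv_test ${OpenCV_LIBS})
```

Even with the correct find_package results, the link step can still resolve against /opt/ros/groovy/lib first, which is consistent with the ldd output the asker observed.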
2012-10-15 15:26:32 -0500 | received badge | ● Good Answer (source) |
2012-10-02 03:04:10 -0500 | received badge | ● Nice Answer (source) |
2012-09-26 09:47:26 -0500 | commented question | How to test rgbdslam without a kinect? I answered this question comprehensively but it didn't seem to show up :( Short answer is that these parameters are set correctly for the bag files from that dataset, you shouldn't have to do anything. |
2012-09-26 09:30:04 -0500 | answered a question | How to test rgbdslam without a kinect? Short answer is you do not have to do anything in this case. Long answer: He is referring to the ROS parameters for RGBDSLAM. These may be set in a launch file or by editing parameter_server.cpp. The relevant parameters in this case are: "topic_image_mono" - topic for the camera image (/camera/rgb/image_color); "camera_info_topic" - topic for the camera information (camera/rgb/camera_info); "topic_image_depth" - topic for the depth image (camera/depth/image); "topic_points" - topic for the point clouds (camera/rgb/points), leave this empty, read further. These can be set by making a copy of the launch file rgbdslam_sample_config.launch and editing the relevant fields. In this case the defaults for these topics are the same as the topics in the bag files of this data set, so it should just work. If "topic_points" is empty, RGBDSLAM will reconstruct the point clouds from the color and depth images. Here it is necessary to do so, as the bag files do not come with this topic; the point cloud topic is far less space efficient than separate color and depth image topics.
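A sketch of how those parameters might be set in a copy of rgbdslam_sample_config.launch; the topic values come from the answer above, while the node attributes and the "config/" parameter prefix are assumptions about the RGBDSLAM package layout:

```xml
<launch>
  <node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" output="screen">
    <!-- Topics matching the Freiburg benchmark bag files -->
    <param name="config/topic_image_mono"  value="/camera/rgb/image_color"/>
    <param name="config/camera_info_topic" value="/camera/rgb/camera_info"/>
    <param name="config/topic_image_depth" value="/camera/depth/image"/>
    <!-- Left empty: point clouds are reconstructed from color + depth -->
    <param name="config/topic_points"      value=""/>
  </node>
</launch>
```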
2012-09-20 12:26:24 -0500 | answered a question | find transformation rgbdslam After you have finished processing some Kinect data, you can select "Save node wide" under the Graph menu to save all the point clouds making up the model as separate pcd files. You can then obtain the transformations between the point clouds by selecting "Save Trajectory Estimate" under Processing. This will save a text file containing the transformations between camera poses.
2012-09-17 12:16:38 -0500 | received badge | ● Supporter (source) |
2012-09-16 03:30:15 -0500 | received badge | ● Teacher (source) |
2012-09-16 01:08:09 -0500 | answered a question | How to run rgbdslam on ROS fuerte? This is answered here |
2012-09-16 00:28:06 -0500 | answered a question | How to solve 'No such file or directory: u'/opt/ros/fuerte/stacks/openni_camera/launch/openni_node.launch'? The openni drivers have changed in Fuerte, and the kinect+rgbdslam.launch launch file is still looking for the Electric openni_camera launch file. You can fix this by editing kinect+rgbdslam.launch to include the Fuerte launch file instead.
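The concrete before/after lines were lost from this answer. Based on the Fuerte driver reorganization (openni_camera's launch files moved into the openni_launch package), the edit likely amounts to the following; the exact file names are an assumption:

```xml
<!-- Electric-era include (the file no longer exists in Fuerte): -->
<!-- <include file="$(find openni_camera)/launch/openni_node.launch"/> -->

<!-- Fuerte replacement: -->
<include file="$(find openni_launch)/launch/openni.launch"/>
```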
2012-09-15 11:57:01 -0500 | received badge | ● Editor (source) |
2012-09-12 06:54:38 -0500 | answered a question | What is the API to generate a registered point cloud from raw kinect streams There is a script provided with the RGBDSLAM benchmark data set from Freiburg that adds a point cloud topic to a bag file given these topics. Take a look at the script 'add_pointclouds_to_bagfile.py' here: http://vision.in.tum.de/data/datasets/rgbd-dataset/tools#adding_point_clouds_to_ros_bag_files You can either use it on your bag file or read the code. Note that if you want to use it you will need to change the topic names in the script, as it was written for the Electric openni drivers. EDIT: Sorry, I didn't see that you had recorded the raw images. I don't know how to rectify these images; however, recording /camera/depth_registered/image and /camera/rgb/image_rect_color instead of the raw images would solve this.