2017-10-27 03:05:11 -0500 | received badge | ● Famous Question (source) |
2017-08-03 05:49:31 -0500 | received badge | ● Famous Question (source) |
2017-06-02 07:59:17 -0500 | received badge | ● Notable Question (source) |
2017-05-11 06:25:26 -0500 | received badge | ● Notable Question (source) |
2017-03-07 05:40:31 -0500 | commented question | Improved checkerboard detector for camera calibration But the problem for me is not speed/performance. I don't care if the images lag a bit and the view doesn't look real-time while calibrating; I just care about getting the best chessboard detection and calibration result. |
2017-03-07 02:09:10 -0500 | received badge | ● Popular Question (source) |
2017-03-06 12:15:02 -0500 | received badge | ● Student (source) |
2017-03-06 09:55:04 -0500 | asked a question | Improved checkerboard detector for camera calibration Hello! I was wondering whether there is a better checkerboard detector that could replace the one used in the camera_calibrator.py node from the ROS image_pipeline. Calibration is currently impossible to carry out at distances beyond 5 meters: the rospy node simply stops detecting the checkerboard, even though I am using a checkerboard with a fairly large 108 mm square size. With Matlab the checkerboard is detected at much larger distances, so I am confident that a better corner-detection algorithm would help. What would be the best strategy to detect the checkerboard at large distances as well? Is anyone already working on this? Thanks in advance for your help! --------------------------- EDIT --------------------------------- https://github.com/ros-perception/ima... There is a hard-coded threshold in the is_good_sample method that might be rejecting too many good checkerboard detections. That is one thing that could be improved or made configurable. Another problem is the downsample-and-detect approach, in which images are scaled down to VGA resolution for detection and the detected corners are then upsampled back to the original size. I think that really makes good detection of distant targets impossible, and it should be avoided for high-quality camera calibration. Has anyone tried removing the downsample-and-detect step? Is the checkerboard detected better without it? Does it become too slow? |
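To make the objection to downsample-and-detect concrete, here is a minimal numpy sketch (not the actual camera_calibrator.py code; the resolutions, corner position, and the ~0.3 px detection error are placeholder assumptions) of why detecting at VGA scale and scaling corners back up magnifies the detection error by the scale factor:

```python
import numpy as np

# Assumed full-resolution width vs the VGA width used for detection.
full_width = 1920.0
vga_width = 640.0
scale = full_width / vga_width  # 3.0 in this example

true_corner = np.array([1234.6, 801.2])     # corner position in the full image
detected_small = true_corner / scale + 0.3  # ~0.3 px detection error at VGA scale
upscaled = detected_small * scale           # scaling back up multiplies the error

error = np.linalg.norm(upscaled - true_corner)
print(round(error, 2))  # ~1.27 px: the 0.3 px error grew by the scale factor
```

For a distant, small checkerboard the corners may also simply fall below the detector's minimum size at VGA resolution, which would explain the detections disappearing entirely beyond a few meters.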
2017-03-03 08:55:14 -0500 | answered a question | unable to extract images from bag file Isn't there already an implementation somewhere that extracts left and right images with identical timestamps from a rosbag? This seems like quite a basic feature that should be part of ROS, given how widely stereo cameras are used in robotics nowadays. If someone has such a node, could it please be shared? I could write my own, I guess, but a good existing implementation would make life easier for me and many other ROS users. -------------------------------------- EDIT ------------------------------------------------- Method 1) Use the rospy node that comes with image_view from the image_pipeline; just modify the topic names and the rospy node according to your needs. Method 2) Use this package http://wiki.ros.org/bag_tools?distro=... I hope this helps other ROS users! Cheers! |
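For readers who do want to roll their own, the core of the task is the stamp-matching step. Here is a pure-Python sketch (the function name and toy data are hypothetical; in a real node the `(stamp, msg)` tuples would come from iterating a rosbag over the left and right image topics):

```python
def pair_by_stamp(left, right, tol=0.0):
    """Pair (stamp, msg) tuples from two stamp-sorted streams whose stamps
    match within tol seconds; unmatched messages are dropped."""
    pairs, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        dt = left[i][0] - right[j][0]
        if abs(dt) <= tol:
            pairs.append((left[i][1], right[j][1]))
            i += 1
            j += 1
        elif dt < 0:
            i += 1  # left message is older, skip it
        else:
            j += 1  # right message is older, skip it
    return pairs

# Toy stamps standing in for real sensor_msgs/Image messages:
left = [(0.00, "L0"), (0.10, "L1"), (0.20, "L2")]
right = [(0.00, "R0"), (0.15, "R1"), (0.20, "R2")]
print(pair_by_stamp(left, right))  # [('L0', 'R0'), ('L2', 'R2')]
```

With hardware-triggered stereo cameras `tol=0.0` (exact match) works; for free-running cameras a small tolerance such as half the frame period is the usual compromise.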
2017-02-08 03:37:45 -0500 | received badge | ● Enthusiast |
2017-02-02 10:13:52 -0500 | received badge | ● Popular Question (source) |
2017-02-02 07:34:00 -0500 | answered a question | Stereo Calibration - Export from Matlab to ROS I have re-run the calibration under better conditions and updated the values. I am using the same images for both Matlab and ROS, so why are the results so different? And could someone please help me translate the values from Matlab into the ROS calibration files? I am sure the Matlab calibration gives well-rectified images, while the rect_color images from the stereo_image_proc node in ROS are always poorly rectified. I do not understand why the rectification matrices look the way they do in ROS. Shouldn't the rotation matrix be the identity for the left camera, with the right camera's matrix expressing the rotation with respect to the left? Is there ROS documentation explaining exactly how the distortion, rectification and projection matrices are computed? |
2017-02-02 07:24:10 -0500 | received badge | ● Editor (source) |
2017-02-01 08:19:36 -0500 | asked a question | Stereo Calibration - Export from Matlab to ROS I am trying to calibrate a pair of ueye cameras, model CP USB 3.0. If I use the camera calibrator node, the quality of the calibration is really bad. I followed this guide step by step: http://wiki.ros.org/camera_calibratio... After calibration I clicked commit and ended up with two files in my ueye driver node. I can then start the driver, and the camera_info topic shows that the calibration file has been loaded. But when I run the stereo_image_proc node to rectify the images, I get a very bad result: the image_rect_color and image_color topics are almost identical, meaning the calibration is doing almost nothing and the images are not well rectified. I then tried performing the calibration again with the stereo calibration app from the Matlab computer vision toolbox: https://nl.mathworks.com/help/vision/... With this I get a much better calibration and the images are well rectified, but now I don't know how to export the results from Matlab to the .ini calibration files used by the ueye driver. I am attaching the result files for both ROS and Matlab. Could you please help me translate the results from Matlab into ROS? I understand the camera matrix, but I don't know how to correctly map the Matlab results onto the Distortion, Rectification and Projection matrices of the .ini calibration files. MATLAB RESULTS |
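As a starting point for the Matlab-to-ROS translation, here is a hedged numpy sketch of the intrinsics part. All numbers are placeholders, not real calibration values. The two convention differences it handles: Matlab's IntrinsicMatrix is the transpose of the ROS/OpenCV camera matrix K, and Matlab indexes pixels from 1 so the principal point shifts by 1. The rectification and projection matrices would still need something like cv2.stereoRectify; that step is not shown here.

```python
import numpy as np

# Placeholder Matlab IntrinsicMatrix (row-vector convention: the last ROW
# holds the principal point and the transpose of ROS K).
matlab_K = np.array([[1200.0,    0.0, 0.0],
                     [   0.0, 1200.0, 0.0],
                     [ 640.5,  360.5, 1.0]])

K = matlab_K.T.copy()  # transpose into the ROS/OpenCV row-major K
K[0, 2] -= 1.0         # cx: Matlab 1-based pixels -> ROS 0-based
K[1, 2] -= 1.0         # cy: likewise

# ROS plumb_bob distortion is D = [k1, k2, p1, p2, k3]; Matlab stores
# RadialDistortion [k1, k2] (optionally k3) and TangentialDistortion
# [p1, p2] in separate fields. Placeholder coefficients:
radial = [-0.30, 0.12]
tangential = [0.001, -0.002]
D = [radial[0], radial[1], tangential[0], tangential[1], 0.0]

print(K)
print(D)
```

With K and D mapped this way, the remaining Rectification and Projection entries of the .ini file come from the rectification computation, which is why copying Matlab's stereo rotation/translation directly into those fields does not work.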