
Faizan A.'s profile - activity

2016-04-12 08:42:12 -0500 received badge  Good Answer (source)
2016-04-12 08:42:12 -0500 received badge  Enlightened (source)
2016-04-12 08:38:58 -0500 received badge  Nice Question (source)
2014-01-07 19:24:30 -0500 commented answer Multiple Kinects: Global Options parameter?

Example for the static transform publisher: define a transform from /world to kinect1_link, define another static transform from /world to kinect2_link, then use /world as the fixed frame under Global Options in RViz and you will see both camera images. The transforms can be arbitrary if you don't need the exact camera poses.
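
Roughly what that could look like with tf's static_transform_publisher (the translation/rotation values below are just placeholders; substitute your actual camera poses if they matter):

    # args: x y z yaw pitch roll parent_frame child_frame period_in_ms
    rosrun tf static_transform_publisher 0 0 0 0 0 0 /world /kinect1_link 100
    rosrun tf static_transform_publisher 1 0 0 0 0 0 /world /kinect2_link 100

With both publishers running, set the Fixed Frame under Global Options in RViz to /world.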

2014-01-07 19:20:45 -0500 received badge  Commentator
2014-01-07 19:20:45 -0500 commented answer Multiple Kinects: Global Options parameter?

If you want to find the orientation of one camera w.r.t. the other, i.e. if you want the point clouds from both cameras to be aligned properly, use the ROS package camera_pose_calibration. In case you only want to see images from both cameras at the same time, use the static transform publisher.

2013-12-28 07:45:18 -0500 received badge  Famous Question (source)
2013-12-09 05:46:00 -0500 received badge  Favorite Question (source)
2013-12-09 05:45:45 -0500 received badge  Nice Answer (source)
2013-10-08 04:16:08 -0500 commented question Time Synchronizer with more than 9 incoming channels

Hi, I was just wondering if it is a good idea to divide the problem into two steps, i.e. first synchronize 8 images, combine them into a new custom message, let's say "Image8", and publish it. That way the set of 32 images is divided into 4 sets, and then we can synchronize the 4 "Image8" topics afterwards.
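
A rough rospy sketch of what I mean, assuming a hypothetical Image8 message with a std_msgs/Header plus eight sensor_msgs/Image fields img0..img7 (the message, package, and topic names are placeholders, not an existing API):

    import rospy
    import message_filters
    from sensor_msgs.msg import Image
    # Hypothetical custom message: Header header + sensor_msgs/Image img0..img7.
    from my_sync_msgs.msg import Image8

    def make_group_cb(pub):
        # Fires once all eight images of this group carry the same timestamp.
        def cb(*imgs):
            out = Image8()
            out.header = imgs[0].header   # stamp used by the second stage
            (out.img0, out.img1, out.img2, out.img3,
             out.img4, out.img5, out.img6, out.img7) = imgs
            pub.publish(out)
        return cb

    rospy.init_node('image_group_sync')
    keep_alive = []  # hold references so nothing is garbage collected

    # Stage 1: synchronize eight raw image topics per group, republish as one Image8.
    for g in range(4):
        subs = [message_filters.Subscriber('/cam%d/image_raw' % (g * 8 + i), Image)
                for i in range(8)]
        pub = rospy.Publisher('/image8_group%d' % g, Image8, queue_size=10)
        sync = message_filters.TimeSynchronizer(subs, 10)
        sync.registerCallback(make_group_cb(pub))
        keep_alive.append((subs, sync, pub))

    # Stage 2: synchronize the four Image8 topics to get all 32 images together.
    group_subs = [message_filters.Subscriber('/image8_group%d' % g, Image8)
                  for g in range(4)]

    def all_cb(g0, g1, g2, g3):
        # g0..g3 each contain eight images sharing one timestamp.
        pass

    sync_all = message_filters.TimeSynchronizer(group_subs, 10)
    sync_all.registerCallback(all_cb)
    rospy.spin()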

2013-07-16 22:37:04 -0500 commented answer Extracting kinect depth_registered image

That did the trick for me. Thanks a lot Stephane :)

2013-07-16 07:11:58 -0500 commented answer Extracting kinect depth_registered image

Hi, thanks for the answer. I tried your code but I still get black JPEG images. What I would like to have are images similar to /depth_registered/image so I can extract depth information to generate point clouds. Can you tell me how to achieve that? Thanks,
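
In case it helps others who land here: the black JPEGs are usually because the registered depth image is 16-bit millimetres (or 32-bit float metres), which looks black once squashed into an 8-bit JPEG. A minimal cv_bridge sketch, assuming the usual /camera/depth_registered/image_raw topic (topic name and encoding may differ on your setup):

    import rospy
    import cv2
    import numpy as np
    from sensor_msgs.msg import Image
    from cv_bridge import CvBridge

    bridge = CvBridge()

    def depth_cb(msg):
        # Keep the native encoding (16UC1 in millimetres or 32FC1 in metres).
        depth = bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')
        if depth.dtype != np.uint16:
            # 32FC1 in metres: convert to 16-bit millimetres, NaNs become 0.
            depth = np.nan_to_num(depth * 1000.0).astype(np.uint16)
        # A 16-bit PNG keeps the real depth values; an 8-bit JPEG cannot,
        # which is why the saved JPEGs look black.
        cv2.imwrite('depth_raw.png', depth)
        # A separately scaled copy only for quick viewing, not for point clouds.
        vis = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        cv2.imwrite('depth_vis.jpg', vis)

    rospy.init_node('depth_saver')
    rospy.Subscriber('/camera/depth_registered/image_raw', Image, depth_cb)
    rospy.spin()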

2013-06-05 01:49:18 -0500 commented question openni don't like xtion pro live

I faced a similar problem. The newer Xtion Pro Live sensors work with OpenNI2, whereas openni_launch is based on OpenNI. You can use https://github.com/ros-drivers/openni2_launch.git with catkin.
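
Roughly what I mean by "with catkin" (the workspace path is just an example, and rosdep should pull in whatever other dependencies are missing):

    cd ~/catkin_ws/src
    git clone https://github.com/ros-drivers/openni2_launch.git
    git clone https://github.com/ros-drivers/openni2_camera.git   # driver used by the launch files
    cd ~/catkin_ws
    rosdep install --from-paths src --ignore-src -y
    catkin_make
    source devel/setup.bash
    roslaunch openni2_launch openni2.launch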

2013-06-05 01:43:53 -0500 received badge  Notable Question (source)
2013-06-05 00:26:44 -0500 received badge  Scholar (source)
2013-06-05 00:26:00 -0500 commented answer Ambiguity between Checkboard pose

Actually, I am keeping the checkerboard in the upright position, but the corners are detected inverted, i.e. the upper right corner of the checkerboard is detected in the lower right of the image instead of the top left of the image. I just added a simple check, which did the trick. Thanks.

2013-06-04 13:30:52 -0500 received badge  Popular Question (source)
2013-06-03 21:31:50 -0500 answered a question Ambiguity between Checkboard pose

I just added a simple check in image_cb_detector to remove the ambiguity, although now the checkerboard can only be detected in the upright position.
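
The idea was essentially the following, shown here as a standalone OpenCV sketch rather than the actual image_cb_detector patch (the pattern size and the ordering check are only illustrative):

    import cv2

    def find_upright_corners(gray, pattern_size=(8, 6)):
        # Detect chessboard corners and force a consistent (upright) ordering.
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if not found:
            return None
        corners = corners.reshape(-1, 2)
        # If the first detected corner sits below the last one, the board was
        # detected rotated by 180 degrees: reverse the corner order so the
        # pose estimate stays consistent with the upright board.
        if corners[0][1] > corners[-1][1]:
            corners = corners[::-1].copy()
        return corners.reshape(-1, 1, 2)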

2013-05-22 05:37:24 -0500 received badge  Student (source)
2013-05-22 03:18:18 -0500 received badge  Famous Question (source)
2013-05-22 03:08:23 -0500 received badge  Teacher (source)
2013-05-22 03:08:23 -0500 received badge  Self-Learner (source)
2013-05-21 23:54:38 -0500 answered a question Align Point clouds from multiple kinects

So I found a workaround and am posting it here in case it is useful to others.

First, calculate the camera poses using a temporary namespace for the cameras; e.g. I used kinect1_temp and kinect2_temp. Once the calibration file is saved, you will have transforms from /world to /kinect1_temp_rgb_optical_frame and /kinect2_temp_rgb_optical_frame. Now launch the tf publisher and launch the cameras using the namespaces you will use later, e.g. kinect1 and kinect2. Calculate the transform from kinect1_rgb_optical_frame to kinect1_link, and use this transform to link kinect1_temp_rgb_optical_frame and kinect1_link. The corresponding tf tree can be seen here.
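
As a rough sketch of that linking step (the zero values are placeholders; fill in the transform you actually read out, e.g. with tf_echo, and note that static_transform_publisher expects yaw pitch roll while tf_echo prints roll pitch yaw):

    # read the transform from kinect1_rgb_optical_frame to kinect1_link
    rosrun tf tf_echo kinect1_rgb_optical_frame kinect1_link

    # republish the same transform with the temporary optical frame as the parent,
    # so the calibration tree and the live camera tree become one tree
    rosrun tf static_transform_publisher 0 0 0 0 0 0 kinect1_temp_rgb_optical_frame kinect1_link 100
    rosrun tf static_transform_publisher 0 0 0 0 0 0 kinect2_temp_rgb_optical_frame kinect2_link 100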

Since the tf tree does not support cycles, this is a workaround using dummy namespaces, assuming the camera positions remain constant.

Hope it helps someone.

2013-05-21 23:25:04 -0500 asked a question Ambiguity between Checkboard pose

Hi, there is an ambiguity between the checkerboard pose in the upright position and the checkerboard pose rotated 180 degrees vertically (upside down). I am using camera_pose_calibration, which relies on image_cb_detector for checkerboard detection, to calibrate multiple cameras. Even though I hold the checkerboard upright, one of my cameras assumes it is upside down during calibration. Any suggestions on how to rectify this?

Thanks,

2013-05-12 21:37:51 -0500 commented question Align Point clouds from multiple kinects

So do you have a workaround? How can I align multiple point clouds using camera_pose_calibration? As I understand it, you are also trying to achieve similar results. Were you able to achieve that using multiple AR markers to estimate the camera poses relative to each other? Any help would be appreciated.

2013-05-12 21:33:33 -0500 commented question Align Point clouds from multiple kinects

That's true, and it can also be verified. The /world frame is arbitrarily chosen, and sometimes during configuration it was aligned perfectly with one of the /kinect frames, i.e. /world and /kinect1 had the same position and orientation. At that point the point clouds were perfectly aligned.

2013-05-09 15:17:32 -0500 received badge  Notable Question (source)
2013-05-06 05:49:20 -0500 received badge  Popular Question (source)
2013-05-06 00:16:22 -0500 commented question Align Point clouds from multiple kinects

Yes. I would upload the result of tf view_frames here, but I don't have enough karma, so I uploaded it to Dropbox: https://www.dropbox.com/s/cvn1nilsn089xyo/calibration_2Kinects.pdf

2013-05-04 03:52:00 -0500 asked a question Align Point clouds from multiple kinects

Hi, I am using two Kinects to capture an object. In this scenario the cameras are rigidly attached to a surface and their poses are fixed. I have obtained the relative camera poses using the camera_pose_calibration package, calibrated on the RGB images. I want to visualize the point clouds from both Kinects in RViz, but the point clouds I obtain are not aligned. In some cases one of the point clouds is completely inverted with respect to the other when viewed with /world as the fixed frame in RViz; in other cases the alignment is simply not satisfactory.

How can I achieve reasonable alignment between the point clouds from multiple Kinects and visualize the result in RViz?

Additional info: I am using ROS Fuerte on Ubuntu 12.04. In RViz I am displaying PointCloud2, and depth registration is turned on, so the topics are /kinect1/depth_registered/points and /kinect2/depth_registered/points respectively.

2013-04-02 05:42:40 -0500 commented question Multiple Kinects: Global Options parameter?

Hi Andrius, please post an update if you were able to fix the issue. I am stuck in a similar situation. Thanks,

2013-04-02 05:19:57 -0500 received badge  Supporter (source)