
bumblebee xb3 driver

asked 2013-07-01 16:10:01 -0600

aswin

updated 2013-07-01 16:10:42 -0600

Hi all, I am new to the Bumblebee camera. I tried using the Triclops and pgrstereo libraries to grab images but was not successful in getting the center image. Is there a driver available? Or any example that can grab all three images via Triclops?


7 Answers


answered 2013-07-03 22:33:34 -0600

joanpau

updated 2013-07-04 04:56:24 -0600

I am Joan Pau, author of the camera1394stereo package, now maintained by a colleague. This package was created to handle binocular cameras, primarily the Bumblebee2, but it should work with any other IIDC camera supported by libdc1394 that produces binocular stereo images in the usual way.

From the system's point of view, a firewire IIDC stereo camera is seen as a single device (not two or three): it has a unique GUID (device identifier on the firewire bus) and a single set of parameters (frame rate, video mode, ...). Stereo cameras usually reuse some video modes 'smartly' to transmit 2 or 3 images at a time (one per camera).

  • For 2 images they use a video mode with 16 bit depth (2 images x 8 bits/pixel) or with 32 bit depth (2 images x 16 bits/pixel).
  • For 3 images they use a video mode with 24 bit depth (3 images x 8 bits/pixel).

Depending on the camera, each individual image is either mono or carries color information encoded in a Bayer pattern indicated by the manufacturer. For the Bumblebee XB3 this is described in the section Stereo Image Formats here:

The camera1394stereo driver is just a copy of the camera1394 one, with added functionality to deinterleave the stereo image into two separate images and publish them on two different topics (left and right), each with the corresponding camera info topic. So it won't work in your case, since the XB3 transmits 3 images.

However, you can use the following approach:

  • Use a camera1394 nodelet to grab and publish the 24-bit images from the Bumblebee XB3 as if it were a mono camera. Here you can set the frame rate and the other camera parameters.
  • Write a simple nodelet that subscribes to the image topic, deinterleaves the input images, and publishes 3 output images on the corresponding output topics (left, center and right), each with its own camera info.

This should be quite easy to implement. It adds some overhead, but I would not worry about that when using nodelets, because images are not copied between them. For the camera info part, CameraInfoManager already does the job; you just have to use three instances (one for each output topic).
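The deinterleaving step itself is simple byte shuffling. As a rough sketch in plain C++ (no ROS dependencies; `deinterleave3` is my own illustrative name, assuming the pixel-interleaved 24-bit format described above, and the byte-to-camera mapping should be checked against the Point Grey documentation):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Split a pixel-interleaved 3x8-bit buffer into three mono8 buffers.
// Every third byte belongs to the same camera; which of the three
// positions is left/center/right depends on the camera's stereo mode.
void deinterleave3(const std::vector<uint8_t>& interleaved,
                   std::vector<uint8_t>& cam0,
                   std::vector<uint8_t>& cam1,
                   std::vector<uint8_t>& cam2) {
  cam0.clear(); cam1.clear(); cam2.clear();
  for (size_t i = 0; i + 2 < interleaved.size(); i += 3) {
    cam0.push_back(interleaved[i]);
    cam1.push_back(interleaved[i + 1]);
    cam2.push_back(interleaved[i + 2]);
  }
}
```

In a nodelet this would run once per incoming frame, with the three output vectors written into three sensor_msgs::Image messages.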

Actually, it might even be useful to have this included in the image_proc chain: a stereo_split or stereo_decompose nodelet, which subscribes to an input image topic and decomposes it according to the following parameters:

  • number of output images,
  • their pixel bit depth,
  • the stereo mode (how pixels from each camera are merged in the stereo image)

This same nodelet should provide the camera info topic for each of the output images; the camera info of the original stereo image should simply be ignored.
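The core of such a nodelet, parameterized as listed above, could look roughly like this (plain C++ sketch; `stereo_split` is a hypothetical name, and the assumption that each camera's bytes for a given pixel are contiguous should be verified against the actual stereo mode):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Decompose a pixel-interleaved multi-camera buffer into `num_images`
// separate buffers, with `bytes_per_pixel` bytes per camera per pixel.
// Hypothetical helper for the proposed stereo_split nodelet.
std::vector<std::vector<uint8_t>> stereo_split(
    const std::vector<uint8_t>& interleaved,
    size_t num_images, size_t bytes_per_pixel) {
  std::vector<std::vector<uint8_t>> out(num_images);
  const size_t stride = num_images * bytes_per_pixel;  // bytes per stereo pixel
  for (size_t i = 0; i + stride <= interleaved.size(); i += stride)
    for (size_t k = 0; k < num_images; ++k)
      for (size_t b = 0; b < bytes_per_pixel; ++b)
        out[k].push_back(interleaved[i + k * bytes_per_pixel + b]);
  return out;
}
```

With num_images=3 and bytes_per_pixel=1 this reduces to the XB3 case; num_images=2 with bytes_per_pixel=1 or 2 would cover the 16-bit and 32-bit binocular modes.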

Sorry for the long response.



Hi, thank you for the detailed response. When using the pgr_examples, I had problems exactly with deinterleaving the input image given by libdc. I will try your suggestion next week and will keep this thread updated.

aswin ( 2013-07-04 19:50:10 -0600 )

Thanks for explaining, @joanpau; your suggestion is a good one. I agree that the overhead of an extra nodelet would not be prohibitive. Once it is working, we could add it to image_common or to camera1394, if that seems appropriate. Bumblebee questions come up regularly.

joq ( 2013-07-05 02:33:50 -0600 )

Thanks guys, grabbing the images was easier than I thought. You just need to publish in format7_mode3 with color coding rgb8 and ISO speed 800.

aswin ( 2013-07-15 23:10:09 -0600 )

answered 2016-01-27 12:23:57 -0600

Marcus Barnet

I would like to use my XB3 on ROS. Are there any updates about this?

Or is the best way to use the Linux SDK to implement a new ROS node for the XB3?


answered 2016-01-27 13:21:07 -0600

Marcus Barnet

Since I only need two cameras, is it possible to use the Bumblebee XB2 ROS node with my trinocular system? I just need the outer cameras, not the central one.

Thank you


answered 2013-07-03 16:04:18 -0600

joq

People have written several ROS drivers for various Bumblebee models. I don't know which, if any, are actively maintained. Here are two I was able to find:

I recommend trying them out to see what works for you. There are probably others, as well.



Thanks joq. I have not tried them, but I doubt they will help me obtain all 3 images from the XB3 sensor. It's just sad that Point Grey has no support for this.

aswin ( 2013-07-03 16:08:35 -0600 )

answered 2014-03-17 02:29:09 -0600

Yashodhan

Hi Ashwin,

Please could you share the parameters/launch file you used? I'm having some issues getting the images out with either the camera1394 or camera1394stereo packages.





Hi @aswin! I was wondering which version of Linux/libdc you are using? On my Ubuntu 13.04 install with ROS Groovy, I managed to get some images out (I had to use an ISO speed of 400), but I had 'loop' artifacts in the image stream. Not sure if this has to do with the libdc version or the hardware. Yasho

Yashodhan ( 2014-03-18 23:42:23 -0600 )

launch file added in answer above

aswin ( 2014-03-19 03:52:13 -0600 )

Thanks @aswin! I think I've traced the root of the problem... The hardware I have is 1394a compatible, allowing an ISO speed of 400. This means the images are transferred more slowly than the DMA buffer expects, and I get older images which have not yet been overwritten. I'm working on a method using the Point Grey examples to retrieve single-shot images after flushing the libdc buffers - this should allow me to get images from all three cameras. The next step would then be to use the Triclops library to do the stereo conversion.

Yashodhan ( 2014-03-19 04:45:30 -0600 )

Hello, this is exactly what I was looking for. I have a question: since Point Grey do their own calibration, how did you extract this calibration and put it into your yaml files? Did you use the camera_calibration ROS package, or just tune your own calibration in the yaml file? Thanks in advance

pmarinplaza ( 2014-09-22 10:31:45 -0600 )

I did not use the Point Grey calibration. Yes, I used camera_calibration.

aswin ( 2015-01-07 21:13:58 -0600 )

answered 2013-07-16 07:58:04 -0600

joanpau

updated 2013-07-16 08:10:22 -0600

Thanks for sharing the code. That was the idea. Here is some extra advice:

  1. You could use the original image timestamp img->header.stamp instead of ros::Time::now().
  2. Publish images with the proper camera info using an image_transport::CameraPublisher and a camera_info_manager::CameraInfoManager.


answered 2013-07-15 23:15:58 -0600

aswin

updated 2014-03-19 03:51:30 -0600

Just for reference, the following is the callback code I used for deinterleaving the 24-bit image published by camera1394.

#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <camera_info_manager/camera_info_manager.h>
#include <sensor_msgs/Image.h>

class xb3ImageDeinterlacer {
    ros::NodeHandle leftNh, centerNh, rightNh;
    sensor_msgs::CameraInfo leftInfo, centerInfo, rightInfo;
    image_transport::CameraPublisher pLeftImage, pCenterImage, pRightImage;

public:
    xb3ImageDeinterlacer(ros::NodeHandle n1, ros::NodeHandle n2, ros::NodeHandle n3)
      : leftNh(n1), centerNh(n2), rightNh(n3) {
        image_transport::ImageTransport itLeft(leftNh);
        pLeftImage = itLeft.advertiseCamera("image", 1000);
        camera_info_manager::CameraInfoManager leftInfoMgr(leftNh);
        leftInfo = leftInfoMgr.getCameraInfo();
        leftInfo.header.frame_id = "/camera_frame";

        image_transport::ImageTransport itCenter(centerNh);
        pCenterImage = itCenter.advertiseCamera("image", 1000);
        camera_info_manager::CameraInfoManager centerInfoMgr(centerNh);
        centerInfo = centerInfoMgr.getCameraInfo();
        centerInfo.header.frame_id = "/camera_frame";

        image_transport::ImageTransport itRight(rightNh);
        pRightImage = itRight.advertiseCamera("image", 1000);
        camera_info_manager::CameraInfoManager rightInfoMgr(rightNh);
        rightInfo = rightInfoMgr.getCameraInfo();
        rightInfo.header.frame_id = "/camera_frame";
    }

    void imageCallback(const sensor_msgs::ImageConstPtr& img) {
        sensor_msgs::Image left, center, right;
        // The XB3 interleaves the three 8-bit images pixel by pixel,
        // so every third byte belongs to the same camera.
        for (unsigned int i = 0; i + 2 < img->data.size(); i += 3) {
            left.data.push_back(img->data[i]);
            center.data.push_back(img->data[i + 1]);
            right.data.push_back(img->data[i + 2]);
        }

        left.header.stamp = img->header.stamp;
        left.header.frame_id = "/camera_frame";
        left.height = leftInfo.height; left.width = leftInfo.width;
        left.step = left.width; left.encoding = "mono8";

        center.header.stamp = img->header.stamp;
        center.header.frame_id = "/camera_frame";
        center.height = centerInfo.height; center.width = centerInfo.width;
        center.step = center.width; center.encoding = "mono8";

        right.header.stamp = img->header.stamp;
        right.header.frame_id = "/camera_frame";
        right.height = rightInfo.height; right.width = rightInfo.width;
        right.step = right.width; right.encoding = "mono8";

        pLeftImage.publish(left, leftInfo, img->header.stamp);
        pCenterImage.publish(center, centerInfo, img->header.stamp);
        pRightImage.publish(right, rightInfo, img->header.stamp);
    }
};

int main(int argc, char **argv) {
    ros::init(argc, argv, "xb3_deinterlacer");
    ros::NodeHandle n;
    ros::NodeHandle npLeft("xb3_left"), npCenter("xb3_center"), npRight("xb3_right");
    xb3ImageDeinterlacer imgDI(npLeft, npCenter, npRight);
    ros::Subscriber sub1 = n.subscribe("/camera/image_raw", 1000,
                                       &xb3ImageDeinterlacer::imageCallback, &imgDI);
    ros::spin();
    return 0;
}

camera1394 launch file:

<launch>
  <node pkg="nodelet" type="nodelet" name="camera_nodelet_manager" args="manager" output="screen"/>
  <node pkg="nodelet" type="nodelet" name="camera1394_nodelet" args="load camera1394/driver camera_nodelet_manager" output="screen">
    <param name="video_mode" value="format7_mode3" />
    <param name="frame_id" value="camera_frame" />
    <param name="frame_rate" value="15.0" />
    <param name="iso_speed" value="800" />
    <param name="format7_color_coding" value="rgb8" />
  </node>
</launch>


Updated code. Not sure if this is the correct way. I generated the yaml files using the camera_calibration package for each camera. I can now see the images in rviz. However, the images are shaky, both in rviz and image_view. This only happens when rviz is subscribing; standalone image_view looks fine.

aswin ( 2013-07-16 21:17:34 -0600 )

This looks like a promising approach. If you all want to create and document a package containing a nodelet that demultiplexes data from camera1394 and similar drivers, and are willing to maintain it, I would favor adding it to ros-drivers. I can help, but lack access to a Bumblebee camera.

joq ( 2013-07-17 08:11:48 -0600 )

@aswin probably the frame_ids should be different for each camera; a private parameter of each camera node may be used for that. I do not know if this has something to do with your problem with rviz.

joanpau ( 2013-07-25 03:27:49 -0600 )

@aswin Also, if calibration services are required, the info managers should be class members (using shared_ptr for deferred initialization); look at the constructor and publish methods in camera1394stereo for an example:

joanpau ( 2013-07-25 03:35:57 -0600 )


Seen: 4,048 times

Last updated: Jan 27 '16