dinamex's profile - activity

2019-08-01 14:22:07 -0500 received badge  Good Question (source)
2018-04-19 02:59:07 -0500 received badge  Good Question (source)
2017-07-01 10:13:08 -0500 received badge  Nice Question (source)
2017-06-06 05:46:22 -0500 marked best answer ApproximateTime Policy of message filter doesn't filter right

Hi all,

I would like to synchronize some of my sensor data (a Kinect and a webcam) in time. My webcam seems to buffer everything in internal memory before transmitting it, which gives me a time difference of about 1 second to the actually recorded scene. I tried to adjust the timestamp in the header of the webcam message before publishing, and then use the message_filters package to synchronize it as closely as possible with my Kinect images (openni_launch).

Here is my function which grabs an image from the webcam, fills in the header and publishes the data (it's the source code of usb_cam with some minor tweaks):

bool take_and_send_image(){
  // grab a frame from the webcam and fill the ROS image message
  usb_cam_camera_grab_image(camera_image_);
  fillImage(img_, "rgb8", camera_image_->height, camera_image_->width,
            3 * camera_image_->width, camera_image_->image);

  img_.header.stamp = ros::Time::now();
  ROS_INFO("Time now = %i.%i", img_.header.stamp.sec, img_.header.stamp.nsec);

  // back-date the stamp to compensate for the webcam's internal buffering
  ros::Duration lag(5.0);
  ros::Time header_time = ros::Time::now() - lag;
  img_.header.stamp = header_time;
  info_.header.stamp = img_.header.stamp;

  ROS_INFO("Time lag = %i.%i", img_.header.stamp.sec, img_.header.stamp.nsec);

  image_pub_.publish(img_, info_);
  return true;
}

The output of this code indicates that the subtraction does the right thing:

[ INFO] [1357672295.986516662]: Time now = 1357672295.986394591
[ INFO] [1357672295.986791322]: Time lag = 1357672290.986699769

This is the subscriber, which is pretty much the same as in the message_filters tutorial: it subscribes and calls the callback every time two messages have approximately the same timestamp.

void sync_callback(const ImageConstPtr& webcam_image, const ImageConstPtr& rgb_image){

  // save both images to disk

  ROS_INFO("syncro callback");
  ROS_INFO("Kinect Timestamp= %i.%i", rgb_image->header.stamp.sec, rgb_image->header.stamp.nsec);
  ROS_INFO("Webcam Timestamp= %i.%i", webcam_image->header.stamp.sec, webcam_image->header.stamp.nsec);
}



int main(int argc, char **argv){

  ros::init(argc, argv, "listener");
  ros::NodeHandle nh;
  image_transport::ImageTransport it(nh);

  image_transport::SubscriberFilter webcam_sub(it, "/panda_cam/image_raw", 20);
  image_transport::SubscriberFilter rgb_sub(it, "/camera/rgb/image", 20);
  Synchronizer<MySyncPolicy> sync(MySyncPolicy(1000), webcam_sub, rgb_sub);

  ROS_INFO("Ready to receive messages");
  sync.registerCallback(boost::bind(&sync_callback, _1, _2));

  ros::spin();
  return 0;
}

The timestamp output indeed indicates that the messages are 5 seconds behind the actual time:

[ INFO] [1357672295.859623746]: Kinect Timestamp= 1357672290.292021533
[ INFO] [1357672295.859715299]: Webcam Timestamp= 1357672290.301270495

But when observing the images (of a stopwatch), there is no real difference, even when I add 10 or more seconds. The Kinect header is still the original one, pretty much at the original time, and can be seen as my reference.

Would be great if someone could point me to the mistake.
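
For reference, MySyncPolicy is not defined in the snippets above; a minimal set of declarations, consistent with the typedef that appears in a later question on this page, would be:

// Assumed declarations (not shown in the original snippet).
#include <ros/ros.h>
#include <image_transport/subscriber_filter.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <sensor_msgs/Image.h>

using namespace sensor_msgs;
using namespace message_filters;
typedef sync_policies::ApproximateTime<sensor_msgs::Image, sensor_msgs::Image> MySyncPolicy;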

2016-09-12 05:32:33 -0500 received badge  Good Question (source)
2015-11-16 01:31:33 -0500 marked best answer Ubuntu 12.04 and Pandaboard -> fails to open ASUS xtion/ Driver issues for ARM

I'm using a Pandaboard running Ubuntu 12.04 and I try to use openni_launch, so I need to install the OpenNI drivers. I used the tutorial "Kinect Installation on BeagleBoard-xM and PandaBoard running Ubuntu 12.04", which is pretty good and helped me (and probably a lot of other developers) save some time. Unfortunately, I'm using an ASUS Xtion (because of the lack of USB ports and its lower energy consumption). The compiling seems to work well, but when I try to run the samples I get the following error message: "Open failed: Failed to open the USB device!" When executing the samples as sudo I get: "Open failed: Device Protocol: Bad Parameter sent!"

I know that this is probably the wrong forum to ask this question, but I know the robotics community (especially Raspberry Pi owners) is interested in solving this error.

Would be great to get your thoughts on this issue. Best regards
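
"Failed to open the USB device!" without sudo is often just a permissions problem; a minimal udev rule sketch (the file name is hypothetical; 1d27 is the PrimeSense vendor ID, which also shows up in the openni logs further down this page):

# /etc/udev/rules.d/55-primesense.rules (hypothetical file name)
# Allow non-root access to PrimeSense-based sensors such as the Xtion.
SUBSYSTEM=="usb", ATTR{idVendor}=="1d27", MODE="0666"

After adding the rule, replug the sensor (or reload udev) so it takes effect.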

2015-10-14 16:24:48 -0500 marked best answer RGBDslam: negative Mahalanobis distance

Hi all,

I try to use rgbdslam for my dataset consisting of depth and RGB images saved as PNG.

To make my life easy I tried to write a bag file from these images and then use this bag file for rgbdslam. But when I start the reconstruction I get an error about a negative Mahalanobis distance and absolutely nothing gets reconstructed.

What could be the cause of that? How can a negative Mahalanobis distance even emerge?

here is a sample output:

[ERROR] [1359413304.934807938]: Non-Positive Mahalanobis Distance for vectors 2.68921e+22
1.42928e+22
6.56501e+22
1
and
1.08665e-31
7.1891e-32
2.5584e-31
1 with covariances
6.56972e+17           0           0
          0 7.03062e+17           0
          0           0 3.23245e+43
and
2.56023e-36           0           0
          0 2.73984e-36           0
          0           0 4.90904e-64
[ERROR] [1359413304.935060057]: Non-Positive Mahalanobis Distance for vectors -4.749e-39
-3.48146e-39
-1.29024e-38
1
and
7.44725e-30
6.38305e-30
1.98261e-29
1 with covariances
-1.29116e-43            0            0
           0 -1.38174e-43            0
           0            0  1.24853e-78
and
1.98404e-34           0           0
          0 2.12323e-34           0
          0           0 2.94807e-60
[ERROR] [1359413304.935230189]: Non-Positive Mahalanobis Distance for vectors -2.44741e+27
1.18307e+27
4.70933e+27
1
and
-6.61237e-31
3.8003e-31
1.30072e-30
1 with covariances
4.71271e+22           0           0
          0 5.04332e+22           0
          0           0 1.66333e+53
and
1.30165e-35           0           0
          0 1.39297e-35           0
          0           0 1.26891e-62

EDIT 1:

I think the problem lies in saving the depth map and re-converting it to a sensor_msgs/Image. My self-written bag file is displayed totally differently from the original bag file. I uploaded some images to imgur.com here to explain the problem a little better.

There are three screenshots. Every screenshot shows the RGBDSLAM GUI plus the image_view window displaying the same depth map. When I play back the original reference bag file, everything works and the image_view and RGBDSLAM displays of the depth map are identical. When I play back my own bag file, the two are significantly different.
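
A minimal sketch of how a 16-bit depth PNG can be written back into a bag with its encoding preserved (file, topic and frame names are placeholders; the encoding must match what the consumer expects, e.g. 16UC1 for millimeter depth):

#include <ros/ros.h>
#include <rosbag/bag.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/highgui/highgui.hpp>
#include <sensor_msgs/Image.h>

int main(int argc, char **argv){
    ros::Time::init();

    // CV_LOAD_IMAGE_ANYDEPTH keeps the full 16 bits; a plain imread()
    // would silently convert to 8 bits and destroy the depth values.
    cv::Mat depth = cv::imread("depth_0001.png", CV_LOAD_IMAGE_ANYDEPTH);

    cv_bridge::CvImage cvi;
    cvi.header.stamp = ros::Time::now();  // use the recorded timestamps in practice
    cvi.header.frame_id = "camera_depth_optical_frame";
    cvi.encoding = "16UC1";
    cvi.image = depth;

    rosbag::Bag bag;
    bag.open("my_images.bag", rosbag::bagmode::Write);
    bag.write("/camera/depth/image", cvi.header.stamp, cvi.toImageMsg());
    bag.close();
    return 0;
}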

2015-09-28 01:13:49 -0500 marked best answer camera_pose_calibration not working in fuerte

Hi there,

I would like to extrinsically calibrate a webcam and an ASUS Xtion Pro Live sensor. For that I would like to use camera_pose_calibration. Everything is intrinsically calibrated and I get the two images, but the checkerboard detector is not working.

During launch the following message pops up:

can't locate node [image_cb_detector_action_old] in package [image_cb_detector]

My attempt to just copy image_cb_detector_action to image_cb_detector_action_old resulted in a tracked checkerboard but no working calibration.

[capture_exec-14] process has died [pid 9092, exit code 1, cmd /opt/ros/fuerte/stacks/camera_pose/camera_pose_calibration/src/camera_pose_calibration/multicam_capture_exec.py panda_cam camera/rgb request_interval:=interval_filtered __name:=capture_exec __log:=/home/pclinux/.ros/log/b9d2f746-5150-11e2-a6a7-b8ac6f40a936/capture_exec-14.log].
log file: /home/pclinux/.ros/log/b9d2f746-5150-11e2-a6a7-b8ac6f40a936/capture_exec-14*.log

Is there another working solution for extrinsic calibration of two sensors? (It doesn't need to be ROS.) I would appreciate any suggestion.

2015-09-28 01:08:48 -0500 marked best answer usb_cam package as nodelet

Hi there,

At the moment I use the usb_cam package from the bosch_drivers stack to acquire data from a webcam on an ARM board (I also tried the uvc_cam package, but the colors are weird). Because the ARM board has limited computational resources, I would like to ask whether someone has already implemented the usb_cam package as a nodelet.

I saw the tutorials about how to convert a node to a nodelet, but my question is whether somebody already did that. Would be great to get your help.

EDIT: I tried to port usb_cam to the nodelet structure, but without success (caused by a lack of programming skills). Is there a better/more detailed tutorial than "porting node to nodelet"? A sketch of the basic structure follows below.
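
For reference, the basic skeleton such a port would follow (a minimal sketch only, with hypothetical names; the grabbing loop from usb_cam would run from a timer or thread started in onInit()):

#include <nodelet/nodelet.h>
#include <pluginlib/class_list_macros.h>
#include <image_transport/image_transport.h>

namespace usb_cam_nodelet  // hypothetical package/namespace
{
class UsbCamNodelet : public nodelet::Nodelet
{
public:
  virtual void onInit()
  {
    // onInit() must not block; set up the publisher and a periodic timer here.
    ros::NodeHandle& nh = getNodeHandle();
    image_transport::ImageTransport it(nh);
    pub_ = it.advertiseCamera("image_raw", 1);
    timer_ = nh.createTimer(ros::Duration(1.0 / 30.0),
                            &UsbCamNodelet::grabFrame, this);
  }

private:
  void grabFrame(const ros::TimerEvent&)
  {
    // the usb_cam grab/fill/publish code would run here
  }

  image_transport::CameraPublisher pub_;
  ros::Timer timer_;
};
}  // namespace usb_cam_nodelet

PLUGINLIB_DECLARE_CLASS(usb_cam_nodelet, UsbCamNodelet,
                        usb_cam_nodelet::UsbCamNodelet, nodelet::Nodelet)

Such a nodelet also needs a plugin description XML and a matching export entry in the manifest so the nodelet manager can find the class.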

2015-08-10 06:11:36 -0500 marked best answer load calibration file

Hi there,

I would like to calibrate (extrinsically) a webcam to my Kinect. For that I need the rectified image of the webcam (usb_cam/image_rect). To get it, I calibrated the camera following this tutorial and got an ost.txt. I converted this txt file to YAML and saved it. I use the following commands to get the rectified image:

$ roslaunch usb_cam high.launch

In another tab:

$ ROS_NAMESPACE=usb_cam rosrun image_proc image_proc

And then to receive the rectified image

$ rosrun image_view image_view image:=/usb_cam/image_rect

But when I subscribe to the rectified image I just get

[ERROR] [1356567364.021643621]: Rectified topic '/usb_cam/image_rect' requested but camera publishing '/usb_cam/camera_info' is uncalibrated

My launchfile looks like this:

<launch>
    <node name="usb_cam" pkg="usb_cam" type="usb_cam_node" output="screen">
       <param name="video_device" value="/dev/video0" />
        <param name="image_width" value="1920" /> #max: 2304
        <param name="image_height" value="1080" /> #max: 1536
        <param name="pixel_format" value="mjpeg" />
        <param name="camera_frame_id" value="webcam" />
        <param name="camera_info_url" type="string" value="file://$(find usb_cam)/usb_cam.yaml" />

    </node>
</launch>

My calibration file like this:

image_width: 1920
image_height: 1080
camera_name: usb_cam
camera_matrix:
  rows: 3
  cols: 3
  data: [1367.163672, 0, 960.12046, 0, 1387.606605, 519.231255, 0, 0, 1]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0.094896, -0.140698, -0.000646, 0.002101, 0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1, 0, 0, 0, 1, 0, 0, 0, 1]
projection_matrix:
  rows: 3
  cols: 4
  data: [1384.273193, 0, 963.928625, 0, 0, 1410.329346, 518.011368, 0, 0, 0, 1, 0]

EDIT 1:

I saw that the usb_cam package doesn't use camera_info_manager. Could that be the reason the calibration file has no effect? I have now tried to insert the calibration parameters directly in the launch file, like this:

<param name="D" type="string" value="-0.0834232 0.120545 -0.0023918 0.0175383 0 "/>
<param name="K" type="string" value="578.252 0 350.204 0 575.115 207.606 0 0 1 "/>
<param name="R" type="string" value="1 0 0 0 1 0 0 0 1 "/>
<param name="P" type="string" value="578.252 0 350.204 0 0 575.115 207.606 0 0 0 1 0 "/>

But still without a change. This is the output of $ rostopic echo /usb_cam/camera_info:

 header: 
  seq: 204
  stamp: 
    secs: 1356617114
    nsecs: 959397246
  frame_id: usb_cam
height: 240
width: 320
distortion_model: ''
D: []
K: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
R: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
P: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
binning_x: 0
binning_y: 0
roi: 
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False
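
For reference, a driver that does use camera_info_manager loads and publishes the calibration roughly like this (a sketch with illustrative names, not the actual usb_cam source):

#include <ros/ros.h>
#include <camera_info_manager/camera_info_manager.h>
#include <image_transport/image_transport.h>
#include <sensor_msgs/CameraInfo.h>

int main(int argc, char **argv){
    ros::init(argc, argv, "usb_cam");
    ros::NodeHandle nh("~");

    std::string url;
    nh.param<std::string>("camera_info_url", url, "");

    // Loads D/K/R/P from the YAML file named by camera_info_url.
    camera_info_manager::CameraInfoManager cinfo(nh, "usb_cam", url);

    image_transport::ImageTransport it(nh);
    image_transport::CameraPublisher pub = it.advertiseCamera("image_raw", 1);

    ros::Rate rate(30);
    while (ros::ok()){
        sensor_msgs::Image img;  // filled from the camera in a real driver
        sensor_msgs::CameraInfo ci = cinfo.getCameraInfo();
        img.header.stamp = ci.header.stamp = ros::Time::now();
        pub.publish(img, ci);    // camera_info is now calibrated, so image_proc can rectify
        rate.sleep();
    }
    return 0;
}
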
2015-05-10 21:41:50 -0500 received badge  Nice Question (source)
2014-09-01 10:32:21 -0500 received badge  Famous Question (source)
2014-04-20 13:38:01 -0500 marked best answer why is it just one kinect per USB host?

By definition, a USB 2.0 host offers a maximum bandwidth of 36 to 40 MB/s.

A Kinect (source: OpenKinect): there are 242 packets per frame for the depth camera, including the header packets. All data packets are 1760 bytes in size. That results in 12,672,000 bytes/sec at 30 frames per second. The RGB camera needs 162 packets per frame -> 9,216,000 bytes/sec at 30 fps.

Which sums up to about 21 MB/s for the RGB and IR streams.

Am I missing something, or why is it not possible to use another device besides the Kinect (at 640x480, and not even at 320x240)?

Hope someone can show me what I'm missing.
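
For reference, those payload figures match the Kinect's packed 11-bit depth and 8-bit Bayer color formats exactly (a reconstruction of the arithmetic, assuming those bit depths):

640 x 480 x 11/8 bytes x 30 fps = 12,672,000 bytes/s   (depth, 11 bits per pixel)
640 x 480 x 1 byte   x 30 fps   =  9,216,000 bytes/s   (color, 8 bits per pixel)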

2014-04-20 13:31:23 -0500 marked best answer Kinect calibration "Unknown OpenCV format yuv422"

Hi all,

I would like to intrinsically calibrate my ASUS sensor. For that I used the "openni_launch/Tutorials/IntrinsicCalibration" page. I calibrated the IR camera without problems. Now I want to calibrate the RGB sensor, but I get the following error message:

Unknown OpenCV format yuv422

I use the following command (nothing special, so I don't think I'm using it incorrectly):

$rosrun camera_calibration cameracalibrator.py image:=/camera/rgb/image_raw camera:=/camera/rgb --size 8x6 --square 0.025

My machine is a fresh install of Ubuntu 12.04 with ROS Fuerte.

2014-04-20 13:29:08 -0500 marked best answer how can I register a Callback when using ros::spinOnce ?

Hi all, I would like to control whether I subscribe or unsubscribe to a topic via a key press. I can see that the subscription works, but the registered callback is never called when I use ros::spinOnce(). When I change to ros::spin() the callback works, but not the key control, because ros::spin() blocks the main loop and doesn't come back until the node is shut down. Is there a way to control the subscriber status?

Here is my code:

using namespace sensor_msgs;
using namespace message_filters;
typedef sync_policies::ApproximateTime<sensor_msgs::Image, sensor_msgs::Image> MySyncPolicy;

// forward declaration so subscribe() can reference the callback below
void sync_callback(const ImageConstPtr& rgb_image, const ImageConstPtr& webcam_image);

image_transport::SubscriberFilter *webcam_sub = NULL;
image_transport::SubscriberFilter *rgb_sub = NULL;
bool subscribtion;

void subscribe(ros::NodeHandle nh, image_transport::ImageTransport it){

      webcam_sub = new image_transport::SubscriberFilter(it, "/panda_cam/image_raw", 200);
      rgb_sub = new image_transport::SubscriberFilter(it, "/camera/rgb/image_color", 200);
      Synchronizer<MySyncPolicy> sync(MySyncPolicy(1000), *webcam_sub, *rgb_sub);
      sync.registerCallback(boost::bind(&sync_callback, _1, _2));

}

void unsubscribe(){

      webcam_sub->unsubscribe();
      rgb_sub->unsubscribe();

}

void sync_callback(const ImageConstPtr& rgb_image, const ImageConstPtr& webcam_image){

      ROS_INFO("CALLBACK");

}

int main(int argc, char **argv){

      ros::init(argc, argv, "subscriber");
      ros::NodeHandle nh;
      image_transport::ImageTransport it(nh);
      ros::Rate rate(100);

      while(ros::ok()){
         rate.sleep();
         int key = kbhit();
         if (key == 122 && subscribtion == false){
            subscribtion = true;
            subscribe(nh, it);
            ROS_INFO("Ready to receive messages");
         }
         else if (key == 122 && subscribtion == true){
            subscribtion = false;
            unsubscribe();
            ROS_INFO("unsubscribed from topics");
         }

         ros::spinOnce();
      }
      return 0;
}
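
Note that the Synchronizer in subscribe() above is a stack variable and is destroyed as soon as the function returns, so no callback can fire afterwards. For comparison, a sketch in which it outlives the call (same topics and policy assumed; a sketch only, not a drop-in fix):

#include <ros/ros.h>
#include <image_transport/subscriber_filter.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <sensor_msgs/Image.h>
#include <boost/shared_ptr.hpp>

using namespace sensor_msgs;
using namespace message_filters;
typedef sync_policies::ApproximateTime<Image, Image> MySyncPolicy;

// File-scope smart pointers keep the filters and the synchronizer
// alive after subscribe() returns.
boost::shared_ptr<image_transport::SubscriberFilter> webcam_sub, rgb_sub;
boost::shared_ptr<Synchronizer<MySyncPolicy> > sync_;

void sync_callback(const ImageConstPtr& rgb_image, const ImageConstPtr& webcam_image){
      ROS_INFO("CALLBACK");
}

void subscribe(image_transport::ImageTransport& it){
      webcam_sub.reset(new image_transport::SubscriberFilter(it, "/panda_cam/image_raw", 200));
      rgb_sub.reset(new image_transport::SubscriberFilter(it, "/camera/rgb/image_color", 200));
      sync_.reset(new Synchronizer<MySyncPolicy>(MySyncPolicy(1000), *webcam_sub, *rgb_sub));
      sync_->registerCallback(boost::bind(&sync_callback, _1, _2));
}

void unsubscribe(){
      sync_.reset();
      webcam_sub.reset();
      rgb_sub.reset();
}

int main(int argc, char **argv){
      ros::init(argc, argv, "subscriber");
      ros::NodeHandle nh;
      image_transport::ImageTransport it(nh);
      subscribe(it);
      ros::spin();  // a spinOnce() loop with key handling works the same way
      return 0;
}
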
2014-04-20 13:26:00 -0500 marked best answer how to save depth image losslessly to disk

Hi all,

I would like to record a scene and save the depth image + RGB image of the Kinect sensor. Therefore I use cv_bridge and save the images in PNG format. After that I want to use rgbdslam with my recorded data, but it seems I have a lot of data loss.

The color images are stored as 640×480 8-bit RGB images in PNG format.
The depth maps are stored as 640×480 16-bit monochrome images in PNG format.

But when I convert the images back to sensor_msgs::Image and display them with image_view, the depth image looks totally different (much darker) from how it looks when I display the depth image directly from the sensor.

Would be great to get your thoughts.
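
A sketch of a lossless save path under these assumptions: the openni depth topic is 32FC1 with depth in meters, converted to 16-bit millimeters before writing the PNG. Note that image_view rendering a 16-bit image "much darker" is expected, since it scales the full 16-bit range rather than the few thousand millimeters actually used:

#include <ros/ros.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/highgui/highgui.hpp>
#include <sensor_msgs/Image.h>

void depth_callback(const sensor_msgs::ImageConstPtr& msg){
    // Share the 32FC1 buffer without conversion.
    cv_bridge::CvImageConstPtr cv = cv_bridge::toCvShare(msg);

    // meters (float) -> millimeters (16-bit); NaN readings saturate to 0.
    cv::Mat depth_mm;
    cv->image.convertTo(depth_mm, CV_16UC1, 1000.0);

    cv::imwrite("depth.png", depth_mm);  // PNG stores all 16 bits losslessly
}

int main(int argc, char **argv){
    ros::init(argc, argv, "depth_saver");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/camera/depth/image", 1, depth_callback);
    ros::spin();
    return 0;
}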

2014-04-20 13:21:46 -0500 marked best answer autostart a launchfile after/while boot

Hi,

I'm looking for a solution to just turn on my system (Ubuntu 11.04 server/headless) and have a launch file started automatically.

I tried the solution posted here with a boot startup script but it didn't work. Would be great to get your thoughts.

Here is my script in /etc/init.d/:

start_ros () {
#Get ROS bash commands
  source /opt/ros/electric/setup.sh
#Get custom packages (this script will launch with 'sudo', so it'll be the root user
  export ROS_PACKAGE_PATH=/home/panda:$ROS_PACKAGE_PATH
#Home directory of your user for logging
  export HOME="/home/panda"
#Home directory for your user for logging
  export ROS_HOME="/home/panda/.ros"

#I run ROS on a headless machine, so I want to be able to connect to it remotely
  export ROS_IP="192.168.9.123"
#I like to log things, so this creates a new log for this script
  LOG="/var/log/panda.log"

#Time/Date stamp the log
  echo -e "\n$(date +%Y:%m:%d-%T) - Starting ROS daemon at system startup" >> $LOG
  echo "This launch will export ROS's IP as $ip" >> $LOG

#For bootup time calculations
  START=$(date +%s.%N)

#This is important. You must wait until the IP address of the machine is actually configured by all of the Ubuntu process. Otherwise, you will get an error and launch will fail. This loop just loops until the IP comes up. 
  while true; do
        IP="`ifconfig  | grep 'inet addr:'192.168.9.123''| cut -d: -f2 | awk '{ print $1}'`"

        if [ "$IP" ] ; then
                echo "Found"
                break
        fi
  done
#For bootup time calculations
  END=$(date +%s.%N)

  echo "It took $(echo "$END - $START"|bc ) seconds for an IP to come up" >> $LOG

  echo "Launching default_package default_launch into a screen with name 'ros'." >> $LOG
  screen -dmS ros roslaunch panda_cam high.launch

}

case "$1" in
  start)
        start_ros
esac

exit 0

After that I ran the following and rebooted:

$sudo update-rc.d panda_cam_startup defaults 99

During bootup I get the following output:

/etc/rc2.d/S99panda_cam_startup: 45: source: not found 
Found
/etc/rc2.d/S99panda_cam_startup: 45: bc: not found
/etc/rc2.d/S99panda_cam_startup: 45: screen: not found

But the source path is definitely right. No idea what's wrong.
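
For what it's worth, "source: not found" is the behavior of /bin/sh (dash on Ubuntu), which runs init.d scripts and has no source builtin; "bc: not found" and "screen: not found" suggest those tools are missing from the PATH (or not installed) at that point in boot. A sketch of the two common fixes for the source line:

#!/bin/bash
# With an explicit bash shebang, 'source' works as expected:
source /opt/ros/electric/setup.sh

# Equivalent POSIX form that also works under plain /bin/sh (dash):
. /opt/ros/electric/setup.sh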

2014-04-20 13:21:12 -0500 marked best answer Time difference between displayed and actual webcam image

Hi all,

I'm using a Pandaboard ES (ARM) and would like to synchronize a webcam (Logitech C920) and a Kinect. I want to grab single frames on a keypress and save PCD and JPG files to disk. Unfortunately there is a big lag in the webcam image which prevents a good result. Is there a way to avoid the 2-5 second lag of my webcam image (high resolution, 2304x1536 / YUYV format)? I tried usb_cam, uvc_camera (which is not working properly, see this topic) and the gscam package, but unfortunately all of them have a big lag between the displayed and the actual image. I know the frame rate will be low, which is not a big problem; it's more that the data is wrong when I want to save it.

Edit 1: I looked for an unwanted queue, as tfoote mentioned. The usb_cam package seems to me to have a queue size of 1, and my subscriber uses a queue size of 1, too. Could it be something else, or am I wrong about usb_cam?

Edit 2: I realized that my camera offers MJPEG compression directly on the webcam. This works together with the usb_cam package and I get a much higher frame rate (around 10 fps without parallel Kinect capturing and around 5 with it). The lag is now around 1 second, but the resolution is also lower at 1920x1080. My first guess was that the camera has an internal buffer which fills up when frames are not processed in time, so I lowered the frame rate directly in the v4l2 driver to 5 fps. This didn't bring better results either.

Would be great to get your thoughts on this problem.

Side note: I get the following warning when using MJPEG compression:

[swscaler @ 0x1c57aa0] No accelerated colorspace conversion found from yuv422p to rgb24.

which is probably an ffmpeg-related issue and, according to their forum, shouldn't cause big problems. I tried to solve it by installing various versions of ffmpeg, without success.

2014-04-20 13:17:36 -0500 marked best answer codec type or id mismatches when using usb_cam

Hi all,

I would like to use the usb_cam package to integrate my Logitech C920 into my project. Unfortunately, I get the following output when I use mjpeg as the pixel format:

[swscaler @ 0x4453c7a0] No accelerated colorspace conversion found from yuv422p to rgb24.

Due to the high-resolution pictures this compression is necessary, otherwise my system would be too slow. I have a lag of 3-4 seconds with yuyv as the pixel format.

I followed this guide to reinstall ffmpeg from source but it didn't change anything... I would very much appreciate any ideas to solve this issue. Thanks

EDIT1:

I found this post where they mention that version 6.2 should work. I installed it, but the error message is now:

[mjpeg @ 0x4461b040] codec type or id mismatches
I could find a decoder
Could not open MJPEG Decoder
2014-04-20 13:12:54 -0500 marked best answer callback for two topics at the same time

Hi everyone,

I would like to save an image from a depth camera and a webcam at the same time, on a mouse click. So I subscribed to the depth image, and in this callback I used

 webcam_image = ros::topic::waitForMessage<Image>("/logitech_usb_webcam/image_raw");

but it seems that waitForMessage skips the first message and uses the second one instead...

So I went on and tried another solution using the registerCallback function from the message_filters package, but it doesn't behave as I want: the callback never gets triggered (both topics are being published).

My code looks like this

#include <ros/ros.h>
#include <std_msgs/String.h>
#include <message_filters/subscriber.h>
#include <message_filters/time_synchronizer.h>
#include <sensor_msgs/Image.h>


using namespace sensor_msgs;
using namespace message_filters;


void sync_callback(const ImageConstPtr& depth_image, const ImageConstPtr&  webcam_image){

        ROS_INFO("syncro callback");
}



int main(int argc, char **argv){

        ros::init(argc, argv, "listener");
        ros::NodeHandle nh;

        message_filters::Subscriber<Image> depth_sub(nh, "/camera/depth/image", 1);
        message_filters::Subscriber<Image> webcam_sub(nh, "/logitech_usb_webcam/image_raw", 1);
        TimeSynchronizer<Image, Image> sync(depth_sub, webcam_sub, 50);
        sync.registerCallback(boost::bind(&sync_callback, _1, _2));



        ros::spin();
        return 0;
}

Would be great to get your help to get this working.
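
For reference, TimeSynchronizer uses the ExactTime policy, which only fires when both messages carry identical timestamps; a depth camera and a webcam will practically never produce those. A sketch of the same node with an ApproximateTime policy, which pairs messages whose stamps are merely close (same topics assumed):

#include <ros/ros.h>
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <sensor_msgs/Image.h>

using namespace sensor_msgs;
using namespace message_filters;

typedef sync_policies::ApproximateTime<Image, Image> MySyncPolicy;

void sync_callback(const ImageConstPtr& depth_image, const ImageConstPtr& webcam_image){
        ROS_INFO("syncro callback");
}

int main(int argc, char **argv){
        ros::init(argc, argv, "listener");
        ros::NodeHandle nh;

        message_filters::Subscriber<Image> depth_sub(nh, "/camera/depth/image", 1);
        message_filters::Subscriber<Image> webcam_sub(nh, "/logitech_usb_webcam/image_raw", 1);

        // 50 = how many messages per topic are buffered while looking for a match
        Synchronizer<MySyncPolicy> sync(MySyncPolicy(50), depth_sub, webcam_sub);
        sync.registerCallback(boost::bind(&sync_callback, _1, _2));

        ros::spin();
        return 0;
}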

2014-04-20 13:10:43 -0500 marked best answer can just subscribe once to openni topics

Hi there,

I'm running ROS Electric on an ARM board (running Ubuntu 11.10). When I use openni_launch, I can only subscribe once to the published topics. After releasing the subscription I can't subscribe a second time to any of the topics (though $rostopic list still shows all of them). Could it be a USB-specific issue?
The initialization looks kind of weird as well (see "creating image generator failed. Reason: Failed to set USB interface!").

Here is the output from launching the openni node:

[ INFO] [1349729971.865694298]: Number devices connected: 1
[ INFO] [1349729971.868807096]: 1. device on bus 001:04 is a PrimeSense Device (600) from PrimeSense (1d27) with serial id ''
[ INFO] [1349729971.880312242]: Searching for device with index = 1
[ INFO] [1349729972.140444513]: No matching device found.... waiting for devices. Reason: openni_wrapper::OpenNIDevice::OpenNIDevice(xn::Context&, const xn::NodeInfo&, const xn::NodeInfo&, const xn::NodeInfo&, const xn::NodeInfo&) @ /home/panda/ros_workspace/perception_pcl/pcl/build/pcl/io/src/openni_camera/openni_device.cpp @ 98 : creating image generator failed. Reason: Failed to set USB interface!
[ INFO] [1349729975.296821860]: Number devices connected: 1
[ INFO] [1349729975.297066001]: 1. device on bus 001:04 is a PrimeSense Device (600) from PrimeSense (1d27) with serial id 'Device1'
[ INFO] [1349729975.308174419]: Searching for device with index = 1
[ INFO] [1349729975.316627802]: Opened 'PrimeSense Device' on bus 1:4 with serial number 'Device1'
[ INFO] [1349729975.437843824]: rgb_frame_id = '/camera_rgb_optical_frame' 
[ INFO] [1349729975.437965894]: depth_frame_id = '/camera_depth_optical_frame' 
[ WARN] [1349729975.455727155]: Camera calibration file /home/panda/.ros/camera_info/rgb_Device1.yaml not found.
[ WARN] [1349729975.461220328]: Camera calibration file /home/panda/.ros/camera_info/depth_Device1.yaml not found.
[ WARN] [1349729975.466744019]: Using default parameters for RGB camera calibration.
[ WARN] [1349729975.466957642]: Using default parameters for IR camera calibration.
Warning: USB events thread - failed to set priority. This might cause loss of data...

Would be great to get your ideas...


Edit: I looked into XnUSBLinux.cpp in the OpenNI driver (where the failure message was coming from) and got a hint from someone in the libusb community that I need to detach the kernel driver with libusb_detach_kernel_driver(handle, 0) before claiming the device. Since editing this file I don't get any error messages anymore, but unfortunately I don't get any messages either (topics are listed but not published). The projector is on, too... So I think the device is initialized properly, but there is no callback for the incoming messages.

2014-04-20 13:05:42 -0500 marked best answer ROS Fuerte and PCL from source on ARM / Hard-float support

Hi guys,

Unfortunately, I can't use the apt-get install ros-fuerte-pcl command and need an installation from source (I'm using an ARM board and Ubuntu 12.04). I'm asking because it seems that in the new ROS Fuerte setup the PCL library is not included and is treated as an external system dependency.

When I use $roslocate info pcl, the output points to a website that doesn't exist. But there seems to be a special PCL version for ROS (see the Willow Garage repository [https://github.com/wg-debs/pcl-release]); I saw that at answers.ros.org (link).

So I tried to compile the code from WG with the USE_ROS option set to ON, and it fails to compile with the following error:

[  1%] Built target gtest 
[  1%] Built target gtest_main 
[  8%] Built target pcl_common 
[  9%] Built target pcl_kdtree 
[ 10%] Built target pcl_octree 
[ 12%] Built target pcl_search 
[ 23%] Built target pcl_sample_consensus 
[ 23%] Built target pcl_geometry 
[ 26%] Built target pcl_segmentation 
[ 26%] Building CXX object io/CMakeFiles/pcl_io_ply.dir/src/ply/ply_parser.cpp.o 
In file included from /home/panda/ros/pcl-release/io/include/pcl/io/ply/ply.h:45:0, 
                 from /home/panda/ros/pcl-release/io/include/pcl/io/ply/ply_parser.h:61, 
                 from /home/panda/ros/pcl-release/io/src/ply/ply_parser.cpp:41: 
/home/panda/ros/pcl-release/io/include/pcl/io/ply/byte_order.h:65:4: error: #error 
In file included from /home/panda/ros/pcl-release/io/include/pcl/io/ply/ply.h:45:0, 
                 from /home/panda/ros/pcl-release/io/include/pcl/io/ply/ply_parser.h:61, 
                 from /home/panda/ros/pcl-release/io/src/ply/ply_parser.cpp:41: 
/home/panda/ros/pcl-release/io/include/pcl/io/ply/byte_order.h:77:4: error: #error 
In file included from /home/panda/ros/pcl-release/io/src/ply/ply_parser.cpp:41:0: 
/home/panda/ros/pcl-release/io/include/pcl/io/ply/ply_parser.h: In member function ‘bool pcl::io::ply::ply_parser::parse_scalar_property(pcl::io::ply::format_type, std::istream&, const typename pcl::io::ply::ply_parser::scalar_property_callback_type<ScalarType>::type&)’: 
/home/panda/ros/pcl-release/io/include/pcl/io/ply/ply_parser.h:577:51: error: ‘host_byte_order’ was not declared in this scope 
/home/panda/ros/pcl-release/io/include/pcl/io/ply/ply_parser.h: In member function ‘bool pcl::io::ply::ply_parser::parse_list_property(pcl::io::ply::format_type, std::istream&, const typename pcl::io::ply::ply_parser::list_property_begin_callback_type<SizeType, ScalarType>::type&, const typename pcl::io::ply::ply_parser::list_property_element_callback_type<SizeType, ScalarType>::type&, const typename pcl::io::ply::ply_parser::list_property_end_callback_type<SizeType, ScalarType>::type&)’: 
/home/panda/ros/pcl-release/io/include/pcl/io/ply/ply_parser.h:648:51: error: ‘host_byte_order’ was not declared in this scope 
/home/panda/ros/pcl-release/io/include/pcl/io/ply/ply_parser.h:674:53: error: ‘host_byte_order’ was not declared in this scope 
make[2]: *** [io/CMakeFiles/pcl_io_ply.dir/src/ply/ply_parser.cpp.o] Error 1 
make[1]: *** [io/CMakeFiles/pcl_io_ply.dir/all] Error 2 
make: *** [all] Error 2

Would be great to get your thoughts about ... (more)

2014-04-20 13:04:47 -0500 marked best answer Problem starting roscore on ARM + Ubuntu 12.04

Hi there,

I installed ROS Fuerte from source on a fresh installation of Ubuntu 12.04. When I want to run roscore I get the following error message:

 ... logging to /home/panda/.ros/log/4b069fea-efbf-11e1-b011-2e607c264e01/roslaunch-panda-1813.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

Traceback (most recent call last):
  File "/opt/ros/fuerte/bin/rosversion", line 5, in <module>
    pkg_resources.run_script('rospkg==1.0.6', 'rosversion')
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 499, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 1229, in run_script
    raise ResolutionError("No script named %r" % script_name)
pkg_resources.ResolutionError: No script named 'rosversion'
Invalid <param> tag: Cannot load command parameter [rosversion]: command [rosversion ros] returned with code [1].

Param xml is <param command="rosversion ros" name="rosversion"/>

I saw that there was already a question about this kind of failure, so I deleted all rospkg-related folders under /usr/local/lib/python2.7/dist-packages/ and used

 $sudo easy_install -U rospkg

Unfortunately, I then got a different failure message:

 ... logging to /home/panda/.ros/log/e6c9fbac-efbf-11e1-8192-2e607c264e01/roslaunch-panda-1830.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

Unable to contact my own server at [http://panda:60413/].
This usually means that the network is not configured properly.

A common cause is that the machine cannot ping itself.  Please check
for errors by running:

        ping panda

For more tips, please see

        http://www.ros.org/wiki/ROS/NetworkSetup

Which is really weird, because I can ping myself without a problem. I deleted all rospkg folders again and used the following to get back to the old setup:

$sudo pip install -U rospkg

There was an interesting output while installing:

Installing collected packages: rospkg
  Running setup.py install for rospkg
    changing mode of build/scripts-2.7/rosversion from 644 to 755

So a ROS version must be set somewhere.

The output from $roswtf is:

No package or stack in context
================================================================================
Static checks summary:

Found 1 error(s).

ERROR Not all paths in ROS_PACKAGE_PATH [/opt/ros/fuerte/share:/opt/ros/fuerte/stacks] point to an existing directory:
 * /opt/ros/fuerte/stacks

================================================================================

ROS Master does not appear to be running.
Online graph checks will not be run.
ROS_MASTER_URI is [http://localhost:11311]

I appreciate any kind of suggestions.


Edit:

Lorenz's answer is the solution to the problem. I used $ sudo easy_install rospkg and then added

export ROS_HOSTNAME=localhost  
export ROS_MASTER_URI=http://localhost:11311

to my .bashrc

There is now a failure message about

Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored

but this shouldn't be an issue. (reference)

2014-04-20 13:02:27 -0500 marked best answer Single shot from multiple Kinects / synchronize Kinects

Hi there,

I'm trying to build a portable device with a Pandaboard and two Kinects on it. Because the Pandaboard provides just one USB bus, I would like to know if it's possible to synchronize the sensors so that they interlace (pictures alternating), or whether I could trigger the sensors to take a picture (first one sensor, then the other, with minimal latency). I just need single shots (but from both sensors at the same time) after pressing a button; the full video stream is not needed. Is there a way to do that with just one USB bus?

EDIT: Because I'm using an ARM platform (Pandaboard ES), my computational resources are very limited. I will probably use two boards and synchronize the boards' system time with chrony. Afterwards I would be able to use message_filters, as Mac mentioned. Unfortunately my publishing rate is pretty bad (1-2 point clouds per second), so I doubt this will give me a sufficient solution. Is there a way to make the publishing times of the sensors converge?

I would be glad about any ideas or recommendations.
Cheers

2014-04-20 13:02:17 -0500 marked best answer Will this Project be possible? / ROS Time Syncronisation

Hi Guys,

I would like to discuss the feasibility of a project. I want to run a couple of ARM boards (Pandaboard ES) connected to each other through an Ethernet switch. The boards already run ROS Electric on Ubuntu 11.10. Every board hosts a sensor, like a Kinect or a webcam. Now I want to trigger an event, and every sensor should publish data from the exact same timestamp (synchronization of the data should be in the millisecond range).

Will that be possible by exporting the master to a single board which provides the ROS time for all other nodes? Did I get it right that the timestamps in the sensor_msgs depend on the ROS time, so it should be possible to obtain data from the exact same timestamp, or will I encounter a summation of time delays? Or does the timestamp in the sensor_msgs header depend on the system time? Is there a way to use the clock from just a single board?

It would be great to hear your expertise/opinion.

Cheers

2014-04-20 12:57:38 -0500 marked best answer add system dependencies installed from source to rosdep check

Hi there,

I would like to know how to tell the rosdep check about a package that is already installed from source.

Specifically, my problem is that I use an ARM platform and could not install yaml-cpp0.2.6-dev from binaries (it is needed by the uvc_camera package and many more), so I installed it from source without any problems. When I now use the following command:

$rosmake --rosdep-install uvc_camera

it still complains about the missing yaml dependency, but when I use:

$rosmake --no-rosdep uvc_camera

it compiles and works. Now I want to know how I can declare that yaml is already installed on the machine.

I would be glad to get a hint here...
Cheers

2014-04-20 12:57:17 -0500 marked best answer can't set any settings for uvc_camera

Hi guys,

I want to use a webcam with the uvc_camera package. I'm using Ubuntu 11.10 server and compiled the package from source (because I use an ARM processor). It compiled without any problems, but when I try to run the launch file (camera_node.launch) I get the following output. I can see the video, but it's really slow, and all settings (color, saturation, white balance, etc.) are wrong; the picture looks almost cross-processed.

camera_node.launch:

PARAMETERS
 * /uvc_camera/device
 * /rosdistro
 * /uvc_camera/fps
 * /uvc_camera/height
 * /uvc_camera/camera_info_url
 * /uvc_camera/frame
 * /rosversion
 * /uvc_camera/width

NODES
  /
    uvc_camera (uvc_camera/camera_node)

auto-starting new master
process[master]: started with pid [3736]
ROS_MASTER_URI=http://localhost:11311

setting /run_id to c57a2f6c-b5ae-11e1-bd3e-2e607c264e01
process[rosout-1]: started with pid [3751]
started core service [/rosout]
process[uvc_camera-2]: started with pid [3754]
[ INFO] [1339629806.595916290]: using default calibration URL
[ INFO] [1339629806.597411651]: camera calibration URL: file:///home/panda/.ros/camera_info/camera.yaml
[ERROR] [1339629806.598754424]: Unable to open camera calibration file [/home/panda/.ros/camera_info/camera.yaml]
[ WARN] [1339629806.599486846]: Camera calibration file /home/panda/.ros/camera_info/camera.yaml not found.
[ INFO] [1339629806.612029571]: camera calibration URL: file:///home/panda/ros_workspace/camera_umd/uvc_camera/example.yaml
[ERROR] [1339629806.612853545]: Unable to open camera calibration file [/home/panda/ros_workspace/camera_umd/uvc_camera/example.yaml]
[ WARN] [1339629806.613372344]: Camera calibration file /home/panda/ros_workspace/camera_umd/uvc_camera/example.yaml not found.
opening /dev/video2
pixfmt 0 = 'YUYV' desc = 'YUV 4:2:2 (YUYV)'
  discrete: 640x480:   1/30 1/24 1/20 1/15 1/10 2/15 1/5
  discrete: 160x90:   1/30 1/24 1/20 1/15 1/10 2/15 1/5
  discrete: 160x120:   1/30 1/24 1/20 1/15 1/10 2/15 1/5
  discrete: 176x144:   1/30 1/24 1/20 1/15 1/10 2/15 1/5
  discrete: 320x180:   1/30 1/24 1/20 1/15 1/10 2/15 1/5
  discrete: 320x240:   1/30 1/24 1/20 1/15 1/10 2/15 1/5
  discrete: 352x288:   1/30 1/24 1/20 1/15 1/10 2/15 1/5
  discrete: 432x240:   1/30 1/24 1/20 1/15 1/10 2/15 1/5
  discrete: 640x360:   1/30 1/24 1/20 1/15 1/10 2/15 1/5
  discrete: 800x448:   1/30 1/24 1/20 1/15 1/10 2/15 1/5
  discrete: 800x600:   1/24 1/20 1/15 1/10 2/15 1/5
  discrete: 864x480:   1/24 1/20 1/15 1/10 2/15 1/5
  discrete: 960x720:   1/15 1/10 2/15 1/5
  discrete: 1024x576:   1/15 1/10 2/15 1/5
  discrete: 1280x720:   1/10 2/15 1/5
  discrete: 1600x896:   2/15 1/5
  discrete: 1920x1080:   1/5
  discrete: 2304x1296:   1/2
  discrete: 2304x1536:   1/2
pixfmt 1 = '    ' desc = '34363248-0000-0010-8000-00aa003'
  discrete: 640x480:   1/30 1/24 1/20 1/15 1/10 2/15 1/5
  discrete: 160x90:   1/30 1/24 1/20 1/15 ...
(more)
2014-04-20 12:57:17 -0500 marked best answer single shot / photo with a webcam

Hi everybody,

I want to use a webcam as a photo camera which can be triggered to take single pictures (the sensor is 3.1 MP). I'm pretty new to the imaging part and would like to know the easiest way to take a single photo. I already use the usb_cam package, but that is just for streaming. Can I modify the source code to trigger a single shot? Because there they only use v4l2.
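
One low-effort pattern, instead of modifying the driver, is to keep the stream running and grab a single frame on demand (a sketch; the topic name is usb_cam's usual default and may differ in your setup):

#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/highgui/highgui.hpp>

int main(int argc, char **argv){
    ros::init(argc, argv, "snapshot");
    ros::NodeHandle nh;

    // Block until the next frame arrives on the stream, then save just that one.
    sensor_msgs::ImageConstPtr img =
        ros::topic::waitForMessage<sensor_msgs::Image>("/usb_cam/image_raw", nh);
    if (img)
        cv::imwrite("photo.jpg", cv_bridge::toCvShare(img, "bgr8")->image);
    return 0;
}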

2014-04-20 12:56:51 -0500 marked best answer manual focus with usb_cam

Hi all,

I would like to know how I can manually set the focus. I know that I can set the autofocus option to false in the usb_cam launch file, but where or how can I set the focus then?

Because now everything is just blurry and I don't know where to start looking. Thank you.

EDIT: I found that there is a difference between the binary (which doesn't have the option) and the installation from source (which offers the option in the launch file). I looked at the libusb_cam source code and found the following at the end, but I have no idea how to enable/disable autofocus or set the focus manually.

// enables/disables auto focus
void usb_cam_camera_set_auto_focus(int value)
{
  struct v4l2_queryctrl queryctrl;
  struct v4l2_ext_control control;

  memset(&queryctrl, 0, sizeof(queryctrl));
  queryctrl.id = V4L2_CID_FOCUS_AUTO;

  if (-1 == xioctl(fd, VIDIOC_QUERYCTRL, &queryctrl)) {
    if (errno != EINVAL) {
      perror("VIDIOC_QUERYCTRL");
      return;
    } else {
      printf("V4L2_CID_FOCUS_AUTO is not supported\n");
      return;
    }
  } else if (queryctrl.flags & V4L2_CTRL_FLAG_DISABLED) {
    printf("V4L2_CID_FOCUS_AUTO is not supported\n");
    return;
  } else {
   memset(&control, 0, sizeof(control));
    control.id = V4L2_CID_FOCUS_AUTO;
    control.value = value;

    if (-1 == xioctl(fd, VIDIOC_S_CTRL, &control)) {
      perror("VIDIOC_S_CTRL");
      return;
    }
  }
}
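
Following the same pattern, manual focus would go through the V4L2_CID_FOCUS_ABSOLUTE control (a sketch, not part of usb_cam; the camera must expose this control, and autofocus has to be disabled first via the function above):

// sets the focus to a fixed, device-specific value (fd and xioctl as above)
void usb_cam_camera_set_focus(int value)
{
  struct v4l2_control control;

  memset(&control, 0, sizeof(control));
  control.id = V4L2_CID_FOCUS_ABSOLUTE;
  control.value = value;  // query the valid range with VIDIOC_QUERYCTRL

  if (-1 == xioctl(fd, VIDIOC_S_CTRL, &control)) {
    perror("VIDIOC_S_CTRL");
  }
}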

2014-04-20 12:54:35 -0500 marked best answer rosinstall behind Proxy

Hi,

I'm currently trying to install ROS on the Pandaboard ES behind a proxy. The board runs Ubuntu 11.10 (server prebuilt) and is working so far, but I'm running into failures while following the installation guide for Fuerte from source: http://www.ros.org/wiki/fuerte/Installation/Ubuntu/Source

I'm doing everything exactly as in the tutorial, but when it comes to installing the second layer: rosinstall ~/ros "http://packages.ros.org/cgi-bin/gen_rosinstall.py?rosdistro=fuerte&variant=desktop-full&overlay=no"

I get the following failure, and not a single package gets installed:
Error processing 'NAME OF XY PACKAGE' : local variable 'tempdir' referenced before assignment

I set the environment as follows:
http_proxy=http://user:pass@proxy.xxxx.com:8080/
ftp_proxy=ftp://user:pass@proxy.xxxx.com:8080/
https_proxy=https://user:pass@proxy.xxxx.com:8080/

and everything like wget, git and apt-get is functioning, but I think the proxy is still blocking the rosinstall procedure.

Would be great to get some help here. Cheers

2014-04-20 12:54:25 -0500 marked best answer How to install yaml-cpp0.2.6-dev / How to install from source correctly

Hi Guys,

I'm not a total newbie to ROS and Ubuntu, but I have a problem installing the yaml-cpp0.2.6-dev package.

I tried to make the uvc_camera package and it complains about a missing dependency 'yaml-cpp', so I tried

$ sudo apt-get install yaml-cpp

but it is unable to locate the package, probably because of my armel architecture. Therefore I tried to compile it from source:

$ hg clone http://code.google.com/p/yaml-cpp/
$ mkdir build
$ cd build
$ cmake -DBUILD_SHARED_LIBS=ON ..
$ make
$ sudo make install

Everything went through without a failure. So I tried to make the uvc_camera package again, but it's still complaining about the missing yaml-cpp.

EDIT #1: When using rosmake uvc_camera --no-rosdep it complains that camera_calibration_parsers is not installed (which is caused by the "missing" yaml-cpp0.2.6-dev).

EDIT #2: This is the output of rosmake camera_calibration_parsers --no-rosdep, as joq suggested:

[ rosmake ] Packages requested are: ['camera_calibration_parsers']
[ rosmake ] Logging to directory/home/panda/.ros/rosmake/rosmake_output-20120613-113412
...
[ rosmake ] Expanded args ['camera_calibration_parsers'] to:
['camera_calibration_parsers']
[rosmake-1] Starting >>> camera_calibration_parsers [ make ]
[ rosmake ] Last 40 lines camera_calibration_parsers... [ 1 Active 28/29 Complete ]
{-------------------------------------------------------------------------------
  /home/panda/ros_workspace/image_common/camera_calibration_parsers/src/parse_yml.cpp:50:6: note: void camera_calibration_parsers::operator>>(const YAML::Node&, camera_calibration_parsers::SimpleMatrix&)
  /home/panda/ros_workspace/image_common/camera_calibration_parsers/src/parse_yml.cpp:50:6: note:   no known conversion for argument 2 from ‘uint32_t {aka unsigned int}’ to ‘camera_calibration_parsers::SimpleMatrix&’
  /home/panda/ros_workspace/image_common/camera_calibration_parsers/src/parse_yml.cpp:138:44: error: ‘class YAML::Node’ has no member named ‘FindValue’
  /home/panda/ros_workspace/image_common/camera_calibration_parsers/src/parse_yml.cpp:139:31: error: no match for ‘operator>>’ in ‘* model_node >> cam_info.sensor_msgs::CameraInfo_<std::allocator<void> >::distortion_model’
  /home/panda/ros_workspace/image_common/camera_calibration_parsers/src/parse_yml.cpp:139:31: note: candidates are:
  /home/panda/ros_workspace/image_common/camera_calibration_parsers/src/parse_yml.cpp:50:6: note: void camera_calibration_parsers::operator>>(const YAML::Node&, camera_calibration_parsers::SimpleMatrix&)
  /home/panda/ros_workspace/image_common/camera_calibration_parsers/src/parse_yml.cpp:50:6: note:   no known conversion for argument 2 from ‘std::basic_string<char>’ to ‘camera_calibration_parsers::SimpleMatrix&’
  /usr/include/c++/4.6/complex:488:5: note: template<class _Tp, class _CharT, class _Traits> std::basic_istream<_CharT, _Traits>& std::operator>>(std::basic_istream<_CharT, _Traits>&, std::complex<_Tp>&)
  /usr/include/c++/4.6/iomanip:229:5: note: template<class _CharT, class _Traits> std::basic_istream<_CharT, _Traits>& std::operator>>(std::basic_istream<_CharT, _Traits>&, std::_Setw)
  /usr/include/c++/4.6/iomanip:199:5: note: template<class _CharT, class _Traits> std::basic_istream<_CharT, _Traits>& std::operator>>(std::basic_istream<_CharT, _Traits>&, std::_Setprecision)
  /usr/include/c++/4.6/iomanip:169:5: note: template<class _CharT, class _Traits> std::basic_istream<_CharT, _Traits>& std::operator>>(std::basic_istream<_CharT, _Traits>&, std::_Setfill<_CharT>)
  /usr/include/c++/4.6/iomanip:131:5: note: template<class _CharT, class _Traits> std::basic_istream<_CharT, _Traits>& std::operator>>(std::basic_istream<_CharT, _Traits>&, std::_Setbase)
  /usr/include/c++/4.6/iomanip:100:5: note: template<class _CharT, class _Traits> std::basic_istream<_CharT, _Traits>& std::operator>>(std::basic_istream<_CharT, _Traits>&, std::_Setiosflags)
  /usr/include/c++/4.6/iomanip:70:5: note: template<class _CharT, class _Traits> std::basic_istream<_CharT, _Traits>& std::operator>>(std::basic_istream ...
(more)
2014-04-20 12:54:18 -0500 marked best answer usb_cam on pandaboard / fatal error: libavcodec/avcodec.h

Hi there,

I'm trying to install the usb_cam package on my Pandaboard (armel architecture, Ubuntu Oneiric 11.10 server). I used:

$ sudo apt-get install ros-electric-bosch-drivers

which probably fails because of the armel architecture. Therefore I chose the installation from source:

$ svn co https://bosch-ros-pkg.svn.sourceforge.net/svnroot/bosch-ros-pkg/branches/electric/stacks/bosch_drivers/
$ roscd usb_cam
$ rosmake --rosdep-install

I get the following failure message:

E: Unable to locate package yaml-cpp0.2.6-dev
E: Couldn't find any package by regex 'yaml-cpp0.2.6-dev'
Failed to install yaml-cpp!
[ rosmake ] rosdep install failed: Rosdep install failed

I installed yaml-cpp from source with:

$ hg clone http://code.google.com/p/yaml-cpp/
$ mkdir build
$ cd build
$ cmake -DBUILD_SHARED_LIBS=ON ..
$ make
$ sudo make install

without any problems. But when I try to make the usb_cam package with:

$ rosmake usb_cam --rosdep-install

it still produces the same failure (missing yaml-cpp).


By compiling just the usb_cam package:

$ rosmake usb_cam --no-rosdep

it produces the following failure:

[rosmake-0] Starting >>> usb_cam [ make ]
[ rosmake ] Last 40 lines usb_cam: 20.6 sec ] [ 1 Active 33/34 Complete ]
{-------------------------------------------------------------------------------
  make[3]: Leaving directory `/home/panda/ros_workspace/bosch_drivers/usb_cam/build'
  [  0%] Built target rosbuild_premsgsrvgen
  make[3]: Entering directory `/home/panda/ros_workspace/bosch_drivers/usb_cam/build'
  Scanning dependencies of target ROSBUILD_gensrv_cpp
  make[3]: Leaving directory `/home/panda/ros_workspace/bosch_drivers/usb_cam/build'
  [  0%] Built target ROSBUILD_gensrv_cpp
  make[3]: Entering directory `/home/panda/ros_workspace/bosch_drivers/usb_cam/build'
  Scanning dependencies of target ROSBUILD_gensrv_lisp
  make[3]: Leaving directory `/home/panda/ros_workspace/bosch_drivers/usb_cam/build'
  [  0%] Built target ROSBUILD_gensrv_lisp
  make[3]: Entering directory `/home/panda/ros_workspace/bosch_drivers/usb_cam/build'
  Scanning dependencies of target rospack_gensrv
  make[3]: Leaving directory `/home/panda/ros_workspace/bosch_drivers/usb_cam/build'
  [  0%] Built target rospack_gensrv
  make[3]: Entering directory `/home/panda/ros_workspace/bosch_drivers/usb_cam/build'
  Scanning dependencies of target rospack_gensrv_all
  make[3]: Leaving directory `/home/panda/ros_workspace/bosch_drivers/usb_cam/build'
  [  0%] Built target rospack_gensrv_all
  make[3]: Entering directory `/home/panda/ros_workspace/bosch_drivers/usb_cam/build'
  Scanning dependencies of target rospack_genmsg_libexe
  make[3]: Leaving directory `/home/panda/ros_workspace/bosch_drivers/usb_cam/build'
  [  0%] Built target rospack_genmsg_libexe
  make[3]: Entering directory `/home/panda/ros_workspace/bosch_drivers/usb_cam/build'
  Scanning dependencies of target rosbuild_precompile
  make[3]: Leaving directory `/home/panda/ros_workspace/bosch_drivers/usb_cam/build'
  [  0%] Built target rosbuild_precompile
  make[3]: Entering directory `/home/panda/ros_workspace/bosch_drivers/usb_cam/build'
  Scanning dependencies of target usb_cam
  make[3]: Leaving directory `/home/panda/ros_workspace/bosch_drivers/usb_cam/build'
  make[3]: Entering directory `/home/panda/ros_workspace/bosch_drivers/usb_cam/build'
  [ 50%] Building CXX object src/libusb_cam/CMakeFiles/usb_cam.dir/usb_cam.o
  /home/panda/ros_workspace/bosch_drivers/usb_cam/src/libusb_cam/usb_cam.cpp:55:32: fatal error: libavcodec/avcodec.h: No such file or directory
  compilation terminated.
  make[3]: *** [src/libusb_cam/CMakeFiles/usb_cam.dir/usb_cam.o] Error 1
  make[3]: Leaving directory `/home/panda/ros_workspace/bosch_drivers/usb_cam/build'
  make[2]: *** [src/libusb_cam/CMakeFiles/usb_cam.dir/all] Error 2
  make[2]: Leaving directory `/home/panda/ros_workspace/bosch_drivers/usb_cam/build'
  make[1]: *** [all] Error 2
  make[1]: Leaving directory `/home/panda/ros_workspace/bosch_drivers/usb_cam/build'
-------------------------------------------------------------------------------}
[ rosmake ] Output from build of package usb_cam written to:
[ rosmake ]    /home/panda ...
(more)
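
The fatal error above points at missing ffmpeg development headers rather than anything ROS-specific; on Ubuntu the usual fix is installing them before rebuilding (a hedged suggestion; these are the stock package names):

$ sudo apt-get install libavcodec-dev libswscale-dev
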
2014-04-20 12:54:09 -0500 marked best answer can't launch openni on Pandaboard

Hi there,

I installed ROS Electric (base + perception) from source on an armel platform and am trying to run a Kinect and a webcam on it.

I used this command for the perception-based installation, which went smoothly and without failures:

rosinstall ~/ros "http://packages.ros.org/cgi-bin/gen_rosinstall.py?rosdistro=electric&variant=perception&overlay=no"

But when I try to run the included packages:

$roslaunch openni_camera camera_node.launch

or

$roslaunch openni_launch openni.launch

or for the Webcam (usb_cam & uvc_camera packages from source):

$roslaunch uvc_camera camera_node.launch

or

$roslaunch usb_cam 'MY OWN LAUNCH FILE

I always get the error: cannot launch node of type [the specific package]: cannot locate node of type [specific node] in package [package]

Is there something wrong with my ROS installation, or did I install all four packages wrong?

The output of $ echo $ROS_PACKAGE_PATH is

/home/panda/ros_workspace:/opt/ros/electric

which is correct

2014-04-20 12:49:24 -0500 marked best answer ROS offline installation

Hi there,

I know this question has already been asked, but that thread didn't get me to the solution. I'm trying to install the ROS base version on an AscTec Pelican. Unfortunately, there is no way to connect the Pelican to the internet. I can check out the https://code.ros.org/svn/ros/stacks/ros/tags/ros-1.6.8 repository, but how do I get the other packages onto the machine? Would be great to get some help.

Thanks & Cheers

2014-04-20 12:49:09 -0500 marked best answer video stream between multiple machines

Hi There,

I'm working on a UAV which is equipped with a webcam. The webcam runs with the usb_cam package and is working fine on the UAV machine. Now I'm trying to send the image data wirelessly to the base station, but I can't get the messages. I exported the master to the UAV and can even see which topics are published there, but when I try rostopic hz /logitech_usb_webcam/image_raw I don't receive any messages.

Could that be a result of two different ROS versions (UAV = Diamondback / base = Electric), or is the failure somewhere else? Would be great to get some help.

Cheers
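
For reference, the usual two-machine setup requires both sides to resolve each other's addresses; a sketch of the environment involved (IPs are placeholders). 'rostopic list' working while 'rostopic hz' stays silent is the classic symptom of the subscriber being unable to reach the publisher's advertised address:

# On the UAV (runs the master):
export ROS_MASTER_URI=http://192.168.1.10:11311
export ROS_IP=192.168.1.10

# On the base station:
export ROS_MASTER_URI=http://192.168.1.10:11311
export ROS_IP=192.168.1.20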