
Hi Sebastian,

Here's a similar bit of Python code that works for me with my Kinect. Like you, I am using the latest version of Diamondback and the latest Ni stack. The steps I am using are:

$ roslaunch openni_camera openni_node.launch

$ rosrun pi_head_tracking_tutorial ros_to_cv.py

where of course the second command will be different for you, since your script will live in your own package. If you run it as a node, you'll also have to change the package name in the roslib.load_manifest('pi_head_tracking_tutorial') line.
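
If you don't already have a package to put the script in, one quick way to create one with the dependencies this code needs is roscreate-pkg (the package name my_vision_pkg below is just a placeholder, use whatever you like):

$ roscreate-pkg my_vision_pkg rospy std_msgs sensor_msgs cv_bridge opencv2

That command writes a manifest.xml listing those dependencies, so roslib.load_manifest('my_vision_pkg') will pull in cv_bridge and the message packages for you. Here is the ros_to_cv.py code: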

#!/usr/bin/env python

# Load this package's manifest so the ROS imports below resolve; change the
# package name if you drop this script into your own package
import roslib
roslib.load_manifest('pi_head_tracking_tutorial')
import sys
import rospy
import cv
from std_msgs.msg import String
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError

class vision_node:
    def __init__(self):
        rospy.init_node('vision_node')

        # Window used to display the incoming video stream
        self.cv_window_name = "OpenCV Image"
        cv.NamedWindow(self.cv_window_name, 1)

        # CvBridge converts ROS sensor_msgs/Image messages to OpenCV images
        self.bridge = CvBridge()

        # Subscribe to the RGB image stream published by openni_camera
        self.image_sub = rospy.Subscriber("/camera/rgb/image_color", Image, self.callback)

    def callback(self, data):
        # Convert the ROS image message to an OpenCV image in BGR8 order
        try:
            cv_image = self.bridge.imgmsg_to_cv(data, "bgr8")
        except CvBridgeError, e:
            print e
            return

        cv.ShowImage(self.cv_window_name, cv_image)
        cv.WaitKey(3)  # give HighGUI a moment to refresh the window

def main(args):
    vn = vision_node()
    try:
        rospy.spin()
    except KeyboardInterrupt:
        print "Shutting down vision node."
        cv.DestroyAllWindows()

if __name__ == '__main__':
    main(sys.argv)
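
One last thing: make the script executable before running it with rosrun (run chmod from wherever you saved ros_to_cv.py inside your package), and if no image window comes up, it is worth checking that the driver is actually publishing the RGB topic the node subscribes to. These are the commands I would use:

$ chmod +x ros_to_cv.py

$ rostopic hz /camera/rgb/image_color

Once both the driver and the node are running, you should see the "OpenCV Image" window showing the Kinect's color stream.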