
rse101's profile - activity

2016-09-20 14:22:58 -0600 received badge  Student (source)
2016-08-14 01:13:33 -0600 marked best answer Issue with multithreading and standard output

I have a multithreaded piece of Python code that moves the arms of my robot. The basic premise is that there is one thread for each arm, and the threads block until each arm has finished its movements (i.e., given two sets of joint positions, one for each arm, the program splits into two threads that each loop until their respective arm has reached the specified joint positions). Here is the code that does that:

def __moveToJointPos(self, instance, index):
    if instance >= self.numInstances():
        raise Exception("Called with instance " + str(instance) + " but only " + str(self.numInstances()) + " instances exist")

    t = None
    if type(index) is tuple:
        #If this method is being called with 2 indices rather than one, that means we want to move both arms at the same time.
        #To do this, we spin off one of the indices to another thread, then execute as normal on the remaining one.
        #Then we wait until the other thread finishes before returning.
        if len(index) > 2:
            raise Exception("Can only call with a tuple of size 2, got a tuple of size " + str(len(index)))
        elif self.poss[instance][1][index[0]][0] == self.poss[instance][1][index[1]][0]:
            raise Exception("When calling with a tuple, must specify indices with different arms.")
        elif len(index) == 2:
            i2 = index[1]
            t = threading.Thread(target=self._Behavior__moveToJointPos, args = (instance, i2))
            t.daemon = True
            t.start()
        mov = self.poss[instance][1][index[0]]
    else:
        mov = self.poss[instance][1][index]

    rate = rospy.Rate(10)

    if mov[0] == Behavior.RIGHT_ARM:
        arm = self.right_arm
    else:
        arm = self.left_arm
    done = False
    dd = [9999.0, 9999.0, 9999.0, 9999.0, 9999.0]
    while not done:
        arm.set_joint_positions(mov[1])
        dist = 0.0
        for j in arm.joint_names():
            dist = dist + (mov[1][j] - arm.joint_angle(j))*(mov[1][j] - arm.joint_angle(j))
        dist = math.sqrt(dist)
        ss = 0.0
        for d in dd:
            ss = ss + math.fabs(d - dist)
        if ss < 0.01:
            done = True
        elif rospy.is_shutdown():
            print("ROS shutdown, cancelled movement")
            done = True
        if baxter_external_devices.getch() in ['\x1b', '\x03']:
            print("stopped")
            done = True
        else:
            dd.append(dist)
            del dd[0]
            rate.sleep()

    if t is not None:
        t.join()

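The loop above decides the arm has arrived by watching whether the distance-to-target stops changing over the last five samples. That check can be pulled out and exercised on its own; a minimal sketch (the tolerance and window size are taken from the code above, and `has_settled` is just an illustrative helper name):

```python
import math

def has_settled(history, dist, tol=0.01):
    """True when `dist` differs from the recent readings by less than
    `tol` in total, i.e. the joint distance has stopped changing."""
    return sum(math.fabs(d - dist) for d in history) < tol

# Rolling five-sample window, seeded with sentinel values as in the code above
dd = [9999.0] * 5
for dist in [5.0, 2.0, 1.0, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]:
    if has_settled(dd, dist):
        break
    dd.append(dist)
    del dd[0]
```

Note that in the original loop the window is only updated when the keyboard check falls through to the `else` branch; the sketch just shows the convergence arithmetic.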
I'll admit the code is rather messy, but this is just something I'm quickly hacking together to test out the robot. Here is the code that calls that function:

print("a")
self._Behavior__moveToJointPos(0, (ToTable.lHOME, ToTable.rHOME))
print("b")
self._Behavior__moveToJointPos(0, (ToTable.lMID, ToTable.rMID))
print("c")
self._Behavior__moveToJointPos(0, (ToTable.lTABLE, ToTable.rTABLE))
print("d")

(the prints are added so that the effect on standard output is clearer). Here is what my terminal looks like:

Here is the link to the image: http://i.imgur.com/1hdCq7f.png

(you may need to open it in a separate tab to see it). There are a couple more calls to __moveToJointPos, which is why there is more text output, but the error starts with the set of commands above. As you can see ... (more)

2015-08-10 04:41:12 -0600 received badge  Famous Question (source)
2015-02-03 09:33:31 -0600 received badge  Popular Question (source)
2015-02-03 09:33:31 -0600 received badge  Notable Question (source)
2014-09-30 18:05:30 -0600 asked a question Support for Asus Xtion Microphones

I've got an Asus Xtion Pro Live hooked up to my dev machine and I can get the RGBD data just fine using the ROS version of openni. Is there any way for me to get the audio data from the built-in microphones using ROS? I'm running Groovy for compatibility with some other libraries that I need.

2014-07-24 01:10:40 -0600 received badge  Famous Question (source)
2014-04-18 06:58:43 -0600 received badge  Notable Question (source)
2014-03-02 08:42:18 -0600 commented answer What is the best way to record RGB and Depth data from a Kinect using Openni?

Sorry about that. I originally posted my answer a few days ago, but ROS Answers wouldn't accept it, and it wasn't until today that I figured out this was because of the links in the answer. I managed to get it posted now, so hopefully that answers your question.

2014-03-02 08:40:52 -0600 received badge  Editor (source)
2014-03-02 08:38:34 -0600 answered a question What is the best way to record RGB and Depth data from a Kinect using Openni?

Ok, so the solution I found is a little specific to my project. I determined that OpenNI .oni files will work as video file output for my purposes. If anyone reading this wants to go all the way to .avi files, you should be able to get there by applying this conversion on top of my solution:

http://kirilllykov[dot]github[dot]io/blog/2013/03/19/convert-openni-star-dot-oni-files-into-avi/

(replace '[dot]' with '.')

To get the .oni files from the RGB and depth data, I first downloaded OpenNI (okay, so I already had OpenNI from before, but the ROS version isn't the same as the release version, so I downloaded the release version and put it in its own folder) from here:

http://www[dot]openni[dot]org/openni-sdk/

(replace '[dot]' with '.') (Note, I'm running Linux, so this may work slightly differently for you if you are running something else). I extracted the folder and just placed it in my home directory (so it didn't interfere with ROS's OpenNI). Next I downloaded Primesense:

https://pypi[dot]python[dot]org/pypi/primesense/2.2.0.30-3

(replace '[dot]' with '.') which adds Python bindings for OpenNI. I ran

python setup.py build

and

python setup.py install

from the Primesense folder to install the necessary libraries. I then ran the following python code to record the .oni files:

import roslib
import rospy
from primesense import openni2

openni2.initialize(OPENNI_REDIST_DIR)    

dev = openni2.Device.open_any()

depth_stream = dev.create_depth_stream()
color_stream = dev.create_color_stream()
depth_stream.start()
color_stream.start()
rec = openni2.Recorder("test.oni")
rec.attach(depth_stream)
rec.attach(color_stream)
print rec.start()

#Do stuff here

rec.stop()
depth_stream.stop()
color_stream.stop()

openni2.unload()

where OPENNI_REDIST_DIR is the path to the redist folder in the OpenNI library. You can play these back just to double-check that they worked by running 'NiViewer' (in the Tools folder of the OpenNI library) with the path to the .oni file.

2014-02-24 15:09:04 -0600 received badge  Famous Question (source)
2014-02-24 14:31:41 -0600 commented answer What is the best way to record RGB and Depth data from a Kinect using Openni?

Thanks for your code. I ran it and it works great for the RGB data, but I get the same problem with depth data as in my original post. I get an error thrown at line 40 of bag2video.py: img = np.asarray(bridge.imgmsg_to_cv(msg, 'bgr8')) that OpenCV can't convert from 32FC1 to bgr8. I found another solution which I'll write up, but thanks anyways.

2014-02-23 21:12:19 -0600 received badge  Notable Question (source)
2014-02-23 09:33:22 -0600 received badge  Popular Question (source)
2014-02-23 07:22:59 -0600 commented answer What is the best way to record RGB and Depth data from a Kinect using Openni?

I've looked at using rosbag, but it seems like it only pushes the problem down the line. For example, this (http://wiki.ros.org/rosbag/Tutorials/Exporting%20image%20and%20video%20data) tutorial explains how to extract video from a rosbag, but why can't I just record straight from the topic rather than storing in a rosbag first? Actually, that tutorial is the reason I made the calls to 'extract_images' and 'mencoder' in the first place, but I didn't see any reason to store in a rosbag and then play back. Dan, thanks for the utility. When I get back into the lab I will see if I can get that to work. If I can, I think I might go with this route.

2014-02-22 16:04:10 -0600 asked a question What is the best way to record RGB and Depth data from a Kinect using Openni?

I'm attempting to record short (~10s) video streams from a Kinect. Ideally I'd like to output two video files after recording is finished: one file for RGB and the other file for depth (I can save a series of images rather than a video file if that's not possible, but that tends to be space intensive, so I'd prefer not to do that). This is so I can process the data later. I have been searching around the internet for the past few days, and while I have found many resources and posts that are close to what I want to do, none of them are quite exactly what I'm looking for. I'm writing in Python, which is relevant since a lot of the posts I've found refer to C++ code, and I can never seem to find the equivalent function calls for Python.

Here's what I've got at the moment, but it is very hacky and doesn't really work. For recording the RGB data, I use image_view extract_images. I make this function call:

subprocess.Popen(["rosrun","image_view","extract_images","_sec_per_frame:=0.01","image:=camera/rgb/image_color"], cwd="temp/rgb/", stdout=DEVNULL)

To record the depth data, I wrote a class that subscribes to the "/camera/depth/image_rect" topic and then writes out the images to files:

class ImageConverter:

    NUM_THREADS = 1

    def __init__(self):
        self.count = 0
        self.todo = Queue.Queue()
        self.stop = False
        self.sub = None
        self.threads = None
        self.countLock = threading.Lock()

    def __getCounter(self):
        self.countLock.acquire()
        c = self.count
        self.count = self.count + 1
        self.countLock.release()
        ret = str(c)
        while len(ret) < 4:
            ret = "0" + ret
        return ret

    def __callback(self, data):
        self.todo.put(data)

    def __processImage(self, data):
        bridge = CvBridge()
        try:
            source = bridge.imgmsg_to_cv(data, "32FC1")
        except CvBridgeError, e:
            print e
            return

        ret = cv.CreateImage((data.width, data.height), 8, 3)

        mx = 0.0
        for i in range(data.width):
            for j in range(data.height):
                d = source[j,i]
                if d > mx:
                    mx = d

        for i in range(source.width):
            for j in range(source.height):
                v = source[j,i]/mx*255
                ret[j,i] = (v,v,v)

        return ret

    def startRecording(self, topic):
        self.stop = False

        def processor():
            rate = rospy.Rate(100.0)
            while (not self.stop) or (not self.todo.empty()):
                if not self.todo.empty():
                    data = self.todo.get()
                    img = self.__processImage(data)
                    c = self.__getCounter()
                    cv.SaveImage("depth/frame" + c + ".jpg", img)
                    print("[ImageConverter]: image saved " + c + "/" + str(self.todo.qsize()))
                rate.sleep()

        self.threads = []
        for i in range(ImageConverter.NUM_THREADS):
            t = threading.Thread(target=processor, args=())
            t.daemon = True
            t.start()
            self.threads.append(t)

        self.sub = rospy.Subscriber(topic, Image, self.__callback)

    def stopRecording(self):
        self.sub.unregister()
        self.stop = True
        for t in self.threads:
            t.join()

As you can see in the __processImage method, I had to manually convert the image from 32FC1 to bgr8, since every time I try to call the cv_bridge method imgmsg_to_cv with ... (more)
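The two nested per-pixel loops in __processImage (find the maximum depth, then scale every pixel by it) can be expressed in a few NumPy operations instead. A hedged sketch of the same max-normalization, assuming the 32FC1 image is available as a float array (`depth_to_bgr8` is a hypothetical helper name, not part of cv_bridge):

```python
import numpy as np

def depth_to_bgr8(depth):
    """Scale a 32FC1 depth image to 0-255 and replicate it into 3 channels,
    mirroring the manual loops in __processImage."""
    depth = np.asarray(depth, dtype=np.float32)
    mx = depth.max()
    if mx <= 0.0:
        mx = 1.0  # avoid dividing by zero on an all-zero frame
    gray = (depth / mx * 255.0).astype(np.uint8)
    return np.dstack([gray, gray, gray])

img = depth_to_bgr8([[0.0, 1.0], [2.0, 4.0]])
```

This removes the Python-level loops entirely, which matters at 100 Hz frame processing.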

2014-02-22 13:04:08 -0600 received badge  Enthusiast
2014-02-11 07:25:35 -0600 commented answer Issue with multithreading and standard output

Thanks for the help. stty sane worked beautifully. I don't really have time at the moment to parse through the output and figure out exactly what is happening, so for the moment I just hacked it and added 'call(["stty", "sane"])' in the last if block right after 't.join()'. It works for now, I'll fix it later. Thanks again!
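For reference, the workaround described here is just a subprocess call; a minimal sketch (`restore_terminal` is a hypothetical wrapper name, and it assumes a POSIX system with stty on the PATH):

```python
import subprocess
import sys

def restore_terminal():
    # getch() puts the terminal into raw mode; if the program exits before
    # switching it back, the shell is left garbled. 'stty sane' resets the
    # terminal line discipline. Guard on isatty so this is a no-op when
    # stdin is redirected.
    if sys.stdin.isatty():
        subprocess.call(["stty", "sane"])
```

Calling something like this right after `t.join()`, as described above, restores normal echo and line buffering.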

2014-02-11 07:24:51 -0600 received badge  Popular Question (source)