Ask Your Question

JoeRomano's profile - activity

2016-02-26 10:27:10 -0600 received badge  Great Question (source)
2014-01-28 17:22:50 -0600 marked best answer problems running hand_detector on PR2

Has anyone gotten hand_detector.launch (the MIT hand detection Kinect library) to run on a PR2 robot?

I've worked through some minor problems with our PR2 running Diamonback:

  1. The solution from this post helps solve one problem.

  2. The file main.cpp inside of skeletal_tracker (which must be run with this package) has a bunch of calls to OpenGL/GLUT code. You can't run it locally on the PR2 (no monitor), and I've found you can't pipe it over ssh -X either, for some technical reasons with GLUT (at least that's what I've been able to gather through Google). I fixed this situation by modifying main.cpp as follows:

//glInit(&argc, argv);
ros::spin();
As you can see, I just replaced the OpenGL callbacks with a ros::spin(). I'm not really sure what the graphical window in this package is for; I'm guessing it's important.

So now I can run the code on our PR2 robot without it crashing or complaining, but no messages ever get published to the topics /hands, /hand0_fullcloud, or /hand1_fullcloud. I'm not really sure where to go from here. Suggestions?

2013-11-27 10:52:14 -0600 answered a question This robot has a joint named "foo" which is not in the gazebo model.

Same problem here, except the child link had no physical properties and they had to be filled in. Too bad no related error is thrown.

Thanks for posting the hint.

2013-04-23 23:04:28 -0600 received badge  Good Question (source)
2013-03-15 13:21:25 -0600 received badge  Notable Question (source)
2013-03-15 13:21:25 -0600 received badge  Popular Question (source)
2013-03-15 13:21:25 -0600 received badge  Famous Question (source)
2012-12-17 10:02:27 -0600 received badge  Nice Question (source)
2012-10-24 10:44:46 -0600 received badge  Famous Question (source)
2012-10-24 10:44:46 -0600 received badge  Notable Question (source)
2012-08-19 15:27:35 -0600 received badge  Famous Question (source)
2012-04-19 02:45:01 -0600 received badge  Notable Question (source)
2012-04-10 12:23:40 -0600 received badge  Popular Question (source)
2011-12-13 03:32:52 -0600 received badge  Student (source)
2011-11-24 07:12:27 -0600 received badge  Popular Question (source)
2011-10-24 09:05:53 -0600 answered a question acceleration measurement error

You could start by high-pass filtering your acceleration signal to get rid of the zero-offset. If your robot isn't moving but you still have a nonzero acceleration signal, that is definitely going to be an issue when you integrate.

But like Dimitri said, integrating an accelerometer really isn't a good way to track distance anyway.

2011-10-06 16:56:10 -0600 marked best answer problems running hand_detector on PR2

Hi there!

It has been almost 4 months since this question was posted. Recently I had the same problem as the one that Joe reports in his question. I don't own a PR2 (yet! ;P) but I was having trouble executing MIT Kinect Demo's hand_detector in a no-monitor environment. So, I would like to share the solution just in case anybody else has the same problem.

As Joe said, the file main.cpp inside skeletal_tracker has a bunch of calls to OpenGL/GLUT code. The problem is that if you do this:

//glInit(&argc, argv);

then the function glutDisplay will never be called. And inside that function is where they get all the data from the Kinect, process it, and publish the skeleton topics.

So the solution could be: create a new function to do the same as glutDisplay but without all the GUI stuff. The new function would look like this:

void publishSkeletons(void)
{
  xn::SceneMetaData sceneMD;
  xn::DepthMetaData depthMD;

  //Read next available data
  ros::Time tstamp = ros::Time::now();

  //Process the data
  g_UserGenerator.GetUserPixels(0, sceneMD);
  std::vector<mapping_msgs::PolygonalMap> pmaps;
  body_msgs::Skeletons skels;
  ROS_DEBUG("pmaps size %zu\n", pmaps.size());

  skels.header.seq = depthMD.FrameID();
  pmaps.front().header.seq = depthMD.FrameID();

  // ... the rest of glutDisplay's processing and publishing code goes here ...
}

Then replace the unwanted OpenGL/GLUT call

glInit(&argc, argv);

with a plain ROS loop that drives the new function:

ros::Rate loop_rate(30);
while (ros::ok())
{
  publishSkeletons();
  ros::spinOnce();
  loop_rate.sleep();
}

It works perfectly for me; I hope somebody else finds it useful.

2011-08-29 19:58:23 -0600 received badge  Necromancer (source)
2011-08-29 19:58:23 -0600 received badge  Self-Learner (source)
2011-08-08 07:51:41 -0600 commented question OpenNI: "Save calibration to file failed: This operation is invalid!"
In main.cpp of NiUserTracker you'll notice several callback functions that occur when a new user is found (line 87), or pose detected (line 106). If you insert the call to LoadCalibration into either of these functions it should work. I would recommend replacing line 110 from PoseDetected with this
2011-08-08 07:04:29 -0600 commented question OpenNI: "Save calibration to file failed: This operation is invalid!"
g_UserGenerator.GetSkeletonCap().SaveCalibrationDataToFile(aUserIDs[i], XN_CALIBRATION_FILE_NAME);
2011-08-08 06:55:10 -0600 answered a question OpenNI: "Save calibration to file failed: This operation is invalid!"

Here is an example of the function you should find inside the recent unstable NiUserTracker demo code. It creates the UserCalibration.bin file:

#define XN_CALIBRATION_FILE_NAME "UserCalibration.bin"

void SaveCalibration()
{
    XnUserID aUserIDs[20] = {0};
    XnUInt16 nUsers = 20;
    g_UserGenerator.GetUsers(aUserIDs, nUsers);
    for (int i = 0; i < nUsers; ++i)
    {
        // Find a user who is already calibrated
        if (g_UserGenerator.GetSkeletonCap().IsCalibrated(aUserIDs[i]))
        {
            // Save user's calibration to file
            g_UserGenerator.GetSkeletonCap().SaveCalibrationDataToFile(aUserIDs[i], XN_CALIBRATION_FILE_NAME);
            printf("saved data\n");
        }
    }
}
2011-08-07 03:11:17 -0600 answered a question openni_tracker moving kinect loses user

I had some success fixing this problem by removing the ROS OpenNI packages and installing the latest "unstable" version of OpenNI from source, as described in this post:

It no longer segfaults (at least not yet) with the newest version of OpenNI, so that's a significant improvement. But performance is still not so great while the Kinect is moving: it tends to lose tracking and find new "phantom" users a lot.

2011-08-06 14:38:46 -0600 received badge  Self-Learner (source)
2011-07-19 15:28:27 -0600 commented answer OpenNi Skeleton Tracking - Save calibration data
They do not work for me either. I didn't try to debug much. I'd be interested if there is a fix.
2011-07-18 08:54:14 -0600 marked best answer openni_tracker moving kinect loses user

Hi Joe,

From a general point of view, the OpenNI skeleton tracker is not meant to work when both the human and the Kinect are moving. Movement, especially rotation, can cause the tracker to fail and lose tracking. However, some people have managed to keep tracking while doing slow translations. You can try retrieving the raw RGB and depth images and look at whether your motion introduces much noise.

Hope this helps,


2011-07-17 04:18:00 -0600 commented answer pausing inside .launch file?
Right, that is the current approach I am using. I was just wondering, as indicated by the title of the post, if there was a way to pause inside of a launch script. But, seeing as how you really can't guarantee things are executed serially, that is probably a moot point.
2011-07-17 04:11:14 -0600 received badge  Teacher (source)
2011-07-17 04:11:14 -0600 received badge  Self-Learner (source)
2011-07-17 04:09:21 -0600 answered a question pausing inside .launch file?

Thanks for the help guys.

I've noticed that execution does happen serially, but I still don't think these solutions will solve the problem I am seeing. Allow me to add some additional detail:

Step 1: A user logs on to the system (in this case the pr2) and launches the basic robot communication nodes (in this case the l/r_arm_controller nodes). I definitely don't want to modify these scripts.

Step 2: A user decides to run my code by launching my launch file. I need to change some of the settings for the l/r_arm_controller nodes, and the only way I've found to do this is to use pr2_controller_manager to kill the nodes, reload the param server params with new values, and then use pr2_controller_manager to respawn them. I am pretty sure the l/r_arm_controller only reads the param server when it is launched, so I need to actually kill it, change the params, and respawn it in this fashion.

The problem: Adding the commands to kill and spawn the controllers in the same launch file does not result in the controllers getting spawned properly. It looks like the kill and spawn are executed in sequence so quickly that pr2_controller_manager somehow doesn't handle it properly. If I could add a pause between the kill and the spawn in my launch file, I believe this wouldn't be an issue.

2011-07-16 03:56:26 -0600 answered a question pausing inside .launch file?

Unfortunately I don't think it is possible to easily change the parameters of the running controllers. They launch with the robot, and changing that behavior would be more trouble than just stopping and restarting them. I've broken it up into two launch files, one to kill the existing controllers, and one to set the params and restart them. Not ideal, but it seems like the simplest solution if I can't do it in one .launch.

2011-07-15 15:53:09 -0600 asked a question pausing inside .launch file?

Is there any way to add a sleep (delay or pause) to a .launch file between statements?

In my current code I need to kill several default PR2 controllers, change some rosparams, then reload these same controllers. However, this all happens so fast in my launch file that it seems to cause problems.

2011-06-22 01:13:16 -0600 received badge  Supporter (source)
2011-06-21 04:20:30 -0600 answered a question openni_tracker moving kinect loses user

Thanks guys. It is great to get a little extra info/verification on this. I suspected the tracker wasn't written with this in mind.

For the record, in my experience the openni_tracker doesn't actually lose the original person it was tracking; it keeps tracking them just fine. However, it does register multiple new users while it is moving. I've noticed that the openni_tracker tends to segfault when too many people enter/leave the scene. So perhaps if the OpenNI Linux code gets more stable and can deal with all that chaos of adding and removing users, you will be able to move the Kinect around.

2011-06-20 16:22:19 -0600 asked a question openni_tracker moving kinect loses user

I am using a kinect on the head of our PR2 robot, and I've noticed that anytime the head moves the openni_tracker loses track of the user/skeleton and thinks it has spotted a new user. I realize that the openni_tracker package is just calling the openni library functions so there is not a whole lot that can probably be done about it, but has anyone else experienced similar results? I tried some simple things like setting the head motion to very slow velocities, but it still seems to lose tracking. Too bad.