How to save skeleton tracking data from OpenNI?

asked 2013-04-13 02:28:22 -0500 by Dylan8slb

updated 2016-10-24 09:03:31 -0500 by ngrennan

I am doing a school project: a Kinect-controlled Robosapien that mimics my movements in front of the Kinect (google "quick cheap robosapien kinect"). I can already track a skeleton with my Kinect (for Xbox), using the OpenNI SDK rather than the Microsoft Kinect for Windows SDK. How can I save the skeleton data, and then translate that data into commands for the Robosapien? I am using the following code in the Processing IDE:

import SimpleOpenNI.*;

SimpleOpenNI  context;

void setup()
{
  // instantiate a new context
  context = new SimpleOpenNI(this);

  // enable depthMap generation 
  context.enableDepth();

  // enable skeleton generation for all joints
  context.enableUser(SimpleOpenNI.SKEL_PROFILE_ALL);

  background(200,0,0);
  stroke(0,0,255);
  strokeWeight(3);
  smooth();

  // create a window the size of the depth information
  size(context.depthWidth(), context.depthHeight()); 
}

void draw()
{
  // update the camera
  context.update();

  // draw depth image
  image(context.depthImage(),0,0); 

  // for all users from 1 to 10
  for (int i = 1; i <= 10; i++)
  {
    // check if the skeleton is being tracked
    if (context.isTrackingSkeleton(i))
    {
      drawSkeleton(i);  // draw the skeleton
    }
  }
}

// draw the skeleton with the selected joints
void drawSkeleton(int userId)
{  
  context.drawLimb(userId, SimpleOpenNI.SKEL_HEAD, SimpleOpenNI.SKEL_NECK);

  context.drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_LEFT_SHOULDER);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, SimpleOpenNI.SKEL_LEFT_HAND);

  context.drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_RIGHT_SHOULDER);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_ELBOW);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, SimpleOpenNI.SKEL_RIGHT_HAND);

  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_TORSO);

  context.drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_LEFT_HIP);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_HIP, SimpleOpenNI.SKEL_LEFT_KNEE);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_KNEE, SimpleOpenNI.SKEL_LEFT_FOOT);

  context.drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_RIGHT_HIP);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_HIP, SimpleOpenNI.SKEL_RIGHT_KNEE);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_KNEE, SimpleOpenNI.SKEL_RIGHT_FOOT);  
}

// Event-based Methods

// when a person ('user') enters the field of view
void onNewUser(int userId)
{
  println("New User Detected - userId: " + userId);

// start pose detection
  context.startPoseDetection("Psi",userId);
}

// when a person ('user') leaves the field of view 
void onLostUser(int userId)
{
  println("User Lost - userId: " + userId);
}

// when a user begins a pose
void onStartPose(String pose,int userId)
{
  println("Start of Pose Detected - userId: " + userId + ", pose: " + pose);

  // stop pose detection
  context.stopPoseDetection(userId); 

  // start attempting to calibrate the skeleton
  context.requestCalibrationSkeleton(userId, true); 
}

// when calibration begins
void onStartCalibration(int userId)
{
  println("Beginning Calibration - userId: " + userId);
}

// when calibration ends - successfully or unsuccessfully
void onEndCalibration(int userId, boolean successful)
{
  println("Calibration of userId: " + userId + ", successful: " + successful);

  if (successful)
  {
    println(" User calibrated !!!");

    // begin skeleton tracking
    context.startTrackingSkeleton(userId); 
  } 
  else 
  { 
    println(" Failed to calibrate user !!!");

    // Start pose detection
    context.startPoseDetection("Psi",userId);
  }
}
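One way to actually save the data (a sketch, not tested against SimpleOpenNI itself): inside draw(), for each tracked user, read joint positions with context.getJointPositionSkeleton(userId, jointId, pos) and append a timestamped row per joint to a CSV file. The row-formatting logic can be isolated in plain Java so it works outside the sketch; the class and method names below are my own invention:

```java
import java.util.Locale;

// Formats one skeleton sample as a CSV row. In the sketch, the joint
// name and coordinates would come from SimpleOpenNI's
// getJointPositionSkeleton(userId, jointId, PVector); here they are
// plain parameters so the logic is testable on its own.
public class SkeletonCsvLogger {
    public static final String HEADER = "millis,userId,joint,x,y,z";

    // Build one CSV row: timestamp, user id, joint name, position (mm).
    public static String row(long millis, int userId, String joint,
                             float x, float y, float z) {
        return String.format(Locale.US, "%d,%d,%s,%.1f,%.1f,%.1f",
                             millis, userId, joint, x, y, z);
    }

    public static void main(String[] args) {
        // Example with made-up coordinates:
        System.out.println(HEADER);
        System.out.println(row(1234L, 1, "SKEL_LEFT_HAND", 120.5f, -33.2f, 900.0f));
    }
}
```

In the sketch you would open a java.io.PrintWriter in setup(), write HEADER once, call row(...) for each joint every frame, and close the writer on exit.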

1 Answer


answered 2013-04-14 13:33:47 -0500 by bit-pirate

I wonder if you are actually using ROS. If yes, please state the distribution (e.g. Groovy) and system (e.g. Linux 64-bit) you are using. If not, start using it! :-) My suggestion: start with the tutorials.

In any case, I recommend having a look at the OpenNI tracker. It uses the information from the Kinect and outputs transforms/poses for the various body parts (torso, head, hands, etc.). If you want to store this data, you can use rosbag to do so.

PS: In case you have no interest in ROS at all, I recommend you use other sites for help, e.g. Stack Overflow. You will probably get better help there, since this Q&A site is targeted at ROS-specific issues.

