Well, I have an old program that can do part of this, but you will have to do a little work to complete it. First, assume that only the other objects are moving and your robot is stationary. Whenever the robot is standing still (i.e. not moving), it first needs to save the current 3D image, which can be done as below. This function simply saves the current 3D image into a ROS bag called image.bag, which is used later (or it can be combined into the same program).
// Save the current 3D image (an h-by-w grid of points P) into image.bag.
// x, y and z are arrays of n = h*w std_msgs::Float64 messages.
int ix = -1, iy = -1, iz = -1;
rosbag::Bag bag("image.bag", rosbag::bagmode::Write);

// Flatten each coordinate of the point grid into its own array.
for (int k = 0; k < h && ros::ok(); k++) {
    for (int l = 0; l < w; l++) {
        ix = ix + 1;
        x[ix].data = P[k][l].x;
    }
}
for (int k = 0; k < h; k++) {
    for (int l = 0; l < w; l++) {
        iy = iy + 1;
        y[iy].data = P[k][l].y;
    }
}
for (int k = 0; k < h; k++) {
    for (int l = 0; l < w; l++) {
        iz = iz + 1;
        z[iz].data = P[k][l].z;
    }
}

// Write all x values, then all y values, then all z values
// to the same topic in the bag.
for (int i = 0; i < n; i++)
    bag.write("numbers", ros::Time::now(), x[i]);
for (int i = 0; i < n; i++)
    bag.write("numbers", ros::Time::now(), y[i]);
for (int i = 0; i < n; i++)
    bag.write("numbers", ros::Time::now(), z[i]);
bag.close();
After saving what the 3D scene looks like while the robot is not moving, the next part is that the robot analyses the current 3D image (the video stream subscribed from the Kinect); if any object is moving it detects that and saves its point cloud, and any two moving regions farther than 2 cm apart are considered different ...
Well, I have used minimal distances in OpenCV in my people-counter problem. I don't know the PCL library in detail, but you could create a tuple and append all the Euclidean distances between the 2D objects, take the smallest one to track, and use the COM (centre of mass) of each object for calculating the Euclidean distance.