Minimal distance between multiple moving obstacles using PCL

asked 2017-03-24 20:38:03 -0500

pipo

Hello

I would like to know the minimal distances between the robot (a base-fixed KUKA iiwa) and several moving obstacles (humans and other objects). All of these may be moving while the collaborative task is executing. I have a laser scanner and a Kinect v2. I think the PCL library is a good starting point, as I don't need any transform of the depth images. But how can I get there? Any help?

Thanks,


Comments

Well, I have used minimal distances in OpenCV for a people-counting problem. I don't know exactly how to do it in the PCL library, but you could create a container, append all the Euclidean distances between 2D objects, and take the smallest one for tracking. Use the centre of mass (COM) of each object when calculating the Euclidean distance.

kartikmadhira1 (2017-03-25 22:11:38 -0500)

1 Answer


answered 2017-03-26 05:25:05 -0500

dinesh

Well, I have an old program which can do part of that, but you will have to do a little work to complete it. First, assume that only the other objects are moving and your robot is stationary. Whenever the robot is standing still (i.e. not moving), it first needs to save the current 3D image, which can be done as below. This function simply saves the current 3D image into a ROS bag called image.bag, which is used later; it could also be combined into the same program.

    int ix = -1, iy = -1, iz = -1;
    rosbag::Bag bag("image.bag", rosbag::bagmode::Write);

    // Copy the x coordinate of every point in the h x w organized cloud P.
    for (int k = 0; k < h && ros::ok(); k++) {
        for (int l = 0; l < w; l++) {
            ix = ix + 1;
            x[ix].data = P[k][l].x;
        }
    }

    // Same for the y coordinates.
    for (int k = 0; k < h; k++) {
        for (int l = 0; l < w; l++) {
            iy = iy + 1;
            y[iy].data = P[k][l].y;
        }
    }

    // And the z coordinates.
    for (int k = 0; k < h; k++) {
        for (int l = 0; l < w; l++) {
            iz = iz + 1;
            z[iz].data = P[k][l].z;
        }
    }

    // Write all n = h * w values of each coordinate to the bag.
    for (int i = 0; i < n; i++) {
        bag.write("numbers", ros::Time::now(), x[i]);
    }
    for (int i = 0; i < n; i++) {
        bag.write("numbers", ros::Time::now(), y[i]);
    }
    for (int i = 0; i < n; i++) {
        bag.write("numbers", ros::Time::now(), z[i]);
    }

    bag.close();

That saves what the 3D image looks like while the robot is not moving. In the next part, the robot analyses the current 3D image (the video stream subscribed from the Kinect); if any object is moving it detects that and saves its point cloud, and any separate objects farther than 2 cm apart are considered different ...


Stats
Seen: 204 times

Last updated: Mar 26 '17