Not sending different marker colours to RViz properly
When I send differently coloured markers, RViz does not show the exact object from which the colour was extracted, as I've shown below:
1. real object
2. point cloud rendered in RViz
![image description](/upfiles/14819498042350993.png)
3. object rendered with its colour and coordinates:
I just want to check whether the colour data saved in the 3D model of the object is real or not. The background colour of the 2nd image is black. This code is just for demonstration purposes; I'm using the same concept in another program.
```cpp
int h = 479, w = 639, n = 306081;
for(int raw = 0; raw < h; raw++)
{
    for(int col = 0; col < w; col++)
    {
        P[raw][col] = output.at(col, raw);
    }
}

rosbag::Bag bag("image.bag");
int iv = -1, ix = -1, iy = -1, iz = -1, icx = -1, icy = -1, icz = -1;
rosbag::View view(bag, rosbag::TopicQuery("numbers"));
BOOST_FOREACH(rosbag::MessageInstance const m, view)
{
    std_msgs::Float32::ConstPtr i = m.instantiate<std_msgs::Float32>();
    iv = iv + 1;
    if(iv >= 0        && iv < 306081)   { ix = ix + 1;   xv[ix] = i->data; }
    if(iv >= 306081   && iv < 306081*2) { iy = iy + 1;   yv[iy] = i->data; }
    if(iv >= 306081*2 && iv < 306081*3) { iz = iz + 1;   zv[iz] = i->data; }
    if(iv >= 306081*3 && iv < 306081*4) { icx = icx + 1; cx[icx] = i->data; }
    if(iv >= 306081*4 && iv < 306081*5) { icy = icy + 1; cy[icy] = i->data; }
    if(iv >= 306081*5 && iv < 306081*6) { icz = icz + 1; cz[icz] = i->data; }
}
bag.close();

// ** converting already read saved points to 2D array
int ip = -1;
for(int i = 0; i < h; i++)
{
    for(int j = 0; j < w; j++)
    {
        ip = ip + 1;
        // (xp,yp,zp) is the present image, (x,y,z) is the coordinate of the previous image
        x[i][j] = xv[ip]; xp[i][j] = P[i][j].x; xc[i][j] = P[i][j].r;
        y[i][j] = yv[ip]; yp[i][j] = P[i][j].y; yc[i][j] = P[i][j].g;
        z[i][j] = zv[ip]; zp[i][j] = P[i][j].z; zc[i][j] = P[i][j].b;
    }
}

// ** comparing previous and present point cloud and finding moved objects
int im = -1;
for(int k = 0; k < h; k++)
{
    for(int l = 0; l < w; l++)
    {
        if( z[k][l] != 0 && zp[k][l] != 0 &&
            !isnan(x[k][l]) && !isnan(xp[k][l]) &&
            !isnan(y[k][l]) && !isnan(yp[k][l]) &&
            !isnan(z[k][l]) && !isnan(zp[k][l]) )
        {
            float dx = abs(float(x[k][l] - xp[k][l]) * 100);
            float dy = abs(float(y[k][l] - yp[k][l]) * 100);
            float dz = abs(float(z[k][l] - zp[k][l]) * 100);
            float d = sqrt(dx*dx + dy*dy + dz*dz);
            if(d > 2)
            {
                im = im + 1;
                a[im] = (int) xc[k][l]; // (a,b,c) has colour of present points
                b[im] = (int) yc[k][l];
                c[im] = (int) zc[k][l];
                ac1[im] = xp[k][l];     // (ac1,bc1,cc1) has 3D coordinates
                bc1[im] = yp[k][l];
                cc1[im] = zp[k][l];
                xa[im] = l ...
```
`int h = 479, w = 639, n = 306081;` — this is extremely bad coding style. If you use a camera with a different resolution, or try to run your algorithm on a downsampled point cloud, this will fail. Simply read the width and height from the point cloud and use those values.