
Subhasis's profile - activity

2015-09-18 18:52:37 -0500 marked best answer Regarding Publishing Kinect Depth Image

Hi All,

I have a Kinect depth image as mentioned below. I am trying to publish it from my node, but I am having a problem converting my image to the buffer needed to send it as a message. Please find my code below. I would be glad if someone can point out where I am making a mistake.


    image_transport::Publisher publisher_depth = it.advertise("kinectcamera/depth", 1);

    kinectDepthImage = cvCreateImage(cvSize(640, 480), IPL_DEPTH_16U, 1);
    // ... here I store the image from the Kinect ...
    cvShowImage("Depth", kinectDepthImage); // This displays the image fine; no issue here.

    // Now I try to publish it in either of the two ways below, and this is where it fails:
    cv::WImageBuffer_16u depthBufer(kinectDepthImage);
    cv::WImageBuffer1_16u depthBufer(kinectDepthImage);

    sensor_msgs::ImagePtr depth_msg = sensor_msgs::CvBridge::cvToImgMsg(depthBufer.Ipl(), "mono16");


My error is:

    /opt/ros/fuerte/include/opencv2/core/wimage.hpp:243: cv::WImage<T>::WImage(IplImage*) [with T = short unsigned int, IplImage = _IplImage]: Assertion `!img || img->depth == Depth()' failed.
    pass1Aborted (core dumped)

Please help me with this.
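For reference, a sketch of how the same publish could look with the cv_bridge API that superseded the deprecated `sensor_msgs::CvBridge`. The topic name comes from the question; the function name, the publisher, and the `CV_16UC1` image are assumptions about the surrounding setup:

```cpp
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/core/core.hpp>

// Publishes a 16-bit single-channel depth image; depth16u must be CV_16UC1
// so that it matches the "mono16" encoding (the assertion in the question
// fires when the IplImage depth and the wrapper's expected depth disagree).
void publishDepth(image_transport::Publisher& pub, const cv::Mat& depth16u)
{
    cv_bridge::CvImage bridge;
    bridge.header.stamp = ros::Time::now();
    bridge.encoding = sensor_msgs::image_encodings::MONO16;
    bridge.image = depth16u;
    pub.publish(bridge.toImageMsg());
}
```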

2014-06-09 22:30:22 -0500 received badge  Famous Question (source)
2014-02-17 13:50:41 -0500 received badge  Great Question (source)
2014-02-11 22:59:59 -0500 received badge  Famous Question (source)
2014-01-28 17:31:03 -0500 marked best answer Accessing float/double list using rosparam

Hi All,

I am having a problem accessing a float array using rosparam.

I have my parameters inside a launch file as given below.

    <launch>
      <node name="target_follow" pkg="target_follow" type="target_follow" respawn="false" output="screen" required="true">
        <rosparam>
          max_height: 1
          min_height: 0.4
          debug_windows: false
          debug_messages: false
          camera_param: [543.848868, 0.0, 297.599352, 297.599352, 0.0, 543.745831, 240.206525, 0.0, 0.0, 1.0]
        </rosparam>
      </node>
    </launch>

I am able to access all my other parameters in the following way.

    ros::param::getCached("/target_follow/max_height", maxHeight);
    ros::param::getCached("/target_follow/min_height", minHeight);
    ros::param::getCached("/target_follow/debug_windows", showDebugWindows);
    ros::param::getCached("/target_follow/debug_messages", showDebugMessages);

Can someone suggest how I can access the list without a NodeHandle object?
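One possible sketch, using the node and parameter names from the launch file above: more recent roscpp versions provide `std::vector<double>` overloads of `ros::param::get`/`getCached` (on older versions you would fetch an `XmlRpc::XmlRpcValue` and index it instead), so no NodeHandle is needed:

```cpp
#include <ros/ros.h>
#include <vector>

// Hypothetical helper: reads the camera_param list set via <rosparam>.
void loadCameraParam()
{
    std::vector<double> cameraParam;
    if (ros::param::getCached("/target_follow/camera_param", cameraParam))
    {
        // e.g. fx = cameraParam[0], cx = cameraParam[2]
        ROS_INFO("fx = %f", cameraParam[0]);
    }
}
```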

Regards, Subhasis

2014-01-28 17:31:02 -0500 marked best answer Transform point coordinates from Kinect frame to robot frame

Hi All,

I am trying to learn tf so that, given data (x, y, z) in the Kinect frame, I can get the corresponding coordinates with respect to the robot frame. From the turtlesim tutorial I could not understand how to use it in my case. As I see it, in turtlesim there are two child frames (turtle1 and turtle2) and one parent frame, the world. turtle1 publishes its relative transform with respect to the world, and turtle2 simply follows turtle1 using this. But, as I understand it, the relative transform between turtle1 and turtle2 is already known there.

I would really appreciate it if someone could suggest how to do this. I am really new to tf. Is it possible for me to transform between base_link and the optical_depth_frame of the Kinect and use that for my task?
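A possible sketch with `tf::TransformListener`, which looks up whatever transform the robot's URDF/state publisher provides between the two frames. The frame names `"base_link"` and `"camera_depth_optical_frame"` are assumptions; the actual names depend on your setup:

```cpp
#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <geometry_msgs/PointStamped.h>

// Transforms a point stamped in the Kinect's optical frame into base_link.
void kinectPointToBase(const tf::TransformListener& listener,
                       const geometry_msgs::PointStamped& pt_in_kinect)
{
    geometry_msgs::PointStamped pt_in_base;
    try
    {
        // Wait until the transform is available, then apply it.
        listener.waitForTransform("base_link", pt_in_kinect.header.frame_id,
                                  pt_in_kinect.header.stamp, ros::Duration(0.5));
        listener.transformPoint("base_link", pt_in_kinect, pt_in_base);
        ROS_INFO("Point in base_link: %f %f %f",
                 pt_in_base.point.x, pt_in_base.point.y, pt_in_base.point.z);
    }
    catch (tf::TransformException& ex)
    {
        ROS_WARN("%s", ex.what());
    }
}
```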

Regards, Subhasis

2014-01-28 17:27:51 -0500 marked best answer Need Help with Accessing the kinect Depth Image using OpenCV.

Hi All,

I tried to subscribe to the Kinect depth and color image topics from openni_launch (/camera/rgb/image_color and /camera/depth/image).

I converted them to cv::Mat format using the code below.

    cv_rgb   = cv_bridge::toCvCopy(color_image, enc::BGR8);
    cv_depth = cv_bridge::toCvCopy(depth_image, enc::BGR8); // (I am not sure about this line)

Now I want to access the pixel values of both the color and depth images. I am able to access the rgb image with the code below.

    for (int i = 0; i < cv_rgb->image.rows; i++) {
        for (int j = 0; j < cv_rgb->image.cols; j++) {
            cv_rgb->image.at<cv::Vec3b>(i, j)[0] = 0;
            cv_rgb->image.at<cv::Vec3b>(i, j)[1] = 0;
            cv_rgb->image.at<cv::Vec3b>(i, j)[2] = 0;
        }
    }
I am unable to access the Kinect depth image. It tells me the type is 21, and it is a single-channel image. Can anybody suggest a way to access each pixel value of the Kinect depth image, and how these pixel values can be translated into real distance in mm?
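The /camera/depth/image topic from openni_launch publishes one float per pixel with the depth in meters, so with cv_bridge the conversion would use `"32FC1"` rather than BGR8, and access would be `cv_depth->image.at<float>(i, j)`. As a framework-free sketch of just the indexing and the meters-to-millimeters conversion (the buffer layout mirrors how a single-channel cv::Mat stores its data row-major):

```cpp
#include <vector>
#include <cstddef>

// Returns the depth at (row, col) in millimeters, given a row-major buffer
// of float depths in meters and the image width in columns.
float depthAtMm(const std::vector<float>& depth_m, std::size_t cols,
                std::size_t row, std::size_t col)
{
    return depth_m[row * cols + col] * 1000.0f; // meters -> millimeters
}
```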

2014-01-28 17:27:46 -0500 marked best answer Reg: Subscribing to Depth and rgb image at the same time


I am new to ROS. I wanted to get both the depth and rgb images of an object at the same time and use them in the callback function. If I use it as mentioned below, it works for me.

    void imageCallback(const sensor_msgs::ImageConstPtr& kinect_image)

    image_transport::Subscriber sub = it.subscribe("camera/rgb/image_color", 1, imageCallback);
    // or
    image_transport::Subscriber sub = it.subscribe("camera/depth/image", 1, imageCallback);

I don't know how I can get both the rgb and depth images in the callback function. The way below doesn't work.

    void imageCallback(const sensor_msgs::ImageConstPtr& kinect_image, const sensor_msgs::ImageConstPtr& depth_image)

    image_transport::Subscriber sub = it.subscribe("camera/rgb/image_color", "camera/depth/image", 1, imageCallback);

I know, may be the question sounds very naive. But, I would really be grateful if someone can guide me.
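A sketch of the usual approach with `message_filters`, using an ApproximateTime policy (exact-time sync tends to drop pairs because the two streams are not stamped identically). The node name `"rgbd_listener"` and the queue sizes are assumptions:

```cpp
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <boost/bind.hpp>

typedef message_filters::sync_policies::ApproximateTime<
    sensor_msgs::Image, sensor_msgs::Image> SyncPolicy;

// Both images delivered here belong to (approximately) the same instant.
void imageCallback(const sensor_msgs::ImageConstPtr& rgb,
                   const sensor_msgs::ImageConstPtr& depth)
{
    // process the rgb/depth pair
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "rgbd_listener");
    ros::NodeHandle nh;
    message_filters::Subscriber<sensor_msgs::Image> rgb_sub(nh, "camera/rgb/image_color", 1);
    message_filters::Subscriber<sensor_msgs::Image> depth_sub(nh, "camera/depth/image", 1);
    message_filters::Synchronizer<SyncPolicy> sync(SyncPolicy(10), rgb_sub, depth_sub);
    sync.registerCallback(boost::bind(&imageCallback, _1, _2));
    ros::spin();
    return 0;
}
```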

2013-11-09 06:24:35 -0500 received badge  Famous Question (source)
2013-08-29 10:46:14 -0500 marked best answer Problem with using opencv_GPU inside ROS

Hi All,

I am very new to ROS. I wanted to use the GPU support from OpenCV inside the ROS platform, but I got an error saying the ROS OpenCV is built without CUDA support. I then tried to build OpenCV myself (after downloading it from the OpenCV website) with CUDA support into /usr/local and wanted to test it with a simple piece of code. But even though the version I downloaded and built has CUDA support, my project somehow links against the OpenCV from ROS Fuerte and gives me the error below.

    OpenCV Error: No GPU support (The library is compiled without CUDA support) in mallocPitch, file /tmp/buildd/ros-fuerte-opencv2-2.4.2-0precise-20120908-1632/modules/core/src/gpumat.cpp, line 749
    terminate called after throwing an instance of 'cv::Exception'
      what(): /tmp/buildd/ros-fuerte-opencv2-2.4.2-0precise-20120908-1632/modules/core/src/gpumat.cpp:749: error: (-216) The library is compiled without CUDA support in function mallocPitch

I have two questions. I am sure many of you will find this a very simple problem, but I am unable to work out how to solve it:

1) How can I use my local copy of OpenCV, which I built with CUDA support?
2) Is it possible to use it in ROS, ignoring the prebuilt OpenCV in ROS that lacks CUDA support?
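One possible sketch for a package CMakeLists.txt: setting `OpenCV_DIR` before `find_package` steers CMake toward the local build. The path `/usr/local/share/OpenCV` is an assumption about where an install to /usr/local places `OpenCVConfig.cmake`, and `my_node` is a placeholder target name:

```cmake
# Assumption: OpenCV was installed to /usr/local, so OpenCVConfig.cmake
# lands in /usr/local/share/OpenCV. Setting OpenCV_DIR before find_package
# makes CMake pick this build instead of the ROS one.
set(OpenCV_DIR /usr/local/share/OpenCV)
find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
target_link_libraries(my_node ${OpenCV_LIBS})
```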

Regards, Subhasis

2013-08-29 08:35:56 -0500 received badge  Notable Question (source)
2013-08-05 04:03:51 -0500 received badge  Notable Question (source)
2013-07-23 04:10:11 -0500 received badge  Famous Question (source)
2013-07-17 03:46:54 -0500 received badge  Popular Question (source)
2013-07-16 01:26:24 -0500 received badge  Notable Question (source)
2013-07-15 05:44:31 -0500 received badge  Popular Question (source)
2013-07-15 02:51:24 -0500 asked a question Converting Kinect depth image to Real world coordinate.

Hi All,

I need a little help with converting the Kinect depth image to the real-world coordinates of each depth pixel.

I used the formulae below, found in a paper. But I assume the Z value is not the correct real-world z; rather, it gives the distance of the pixel from the camera centre. Here, (x, y) on the right-hand side are the pixel coordinates and z is the corresponding depth in meters. inv_fx, inv_fy, ox, oy come from the camera matrix.

const float inv_fx = 1.f/cameraMatrix(0,0);
const float inv_fy = 1.f/cameraMatrix(1,1);
const float ox = cameraMatrix(0,2);
const float oy = cameraMatrix(1,2);

realworld.x = z; //The depth value from the Kinect in meters
realworld.y = -(x - ox) * z * inv_fx; 
realworld.z = -(y - oy) * z * inv_fy;

If the formulae above are wrong, please correct them for calculating the real-world coordinates of each pixel in the depth image.
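For comparison, the standard pinhole back-projection into the camera's optical frame (X right, Y down, Z forward) is sketched below; the axis remapping in the question (x forward, y left, z up) is a separate convention choice layered on top of this. The `Point3f` struct and function name are illustrative:

```cpp
struct Point3f { float x, y, z; };

// Standard pinhole back-projection into the optical frame.
// (u, v) is the pixel coordinate, z the depth in meters,
// (fx, fy) the focal lengths and (cx, cy) the principal point.
Point3f backProject(float u, float v, float z,
                    float fx, float fy, float cx, float cy)
{
    Point3f p;
    p.x = (u - cx) * z / fx;
    p.y = (v - cy) * z / fy;
    p.z = z; // the depth along the optical axis is used unchanged
    return p;
}
```

Note that z here is already the distance along the optical axis, not the Euclidean distance from the camera centre.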

Regards, Subhasis

2013-07-15 02:41:23 -0500 received badge  Popular Question (source)
2013-07-14 03:27:30 -0500 received badge  Notable Question (source)
2013-07-05 02:34:11 -0500 asked a question Laser_filter plugin

Hi All,

I currently want to use the laser_filters and laser pipeline concepts in ROS. I tried to go through the tutorial but couldn't find a way to integrate a custom filter. As I see it, under the filters node in ROS there are some predefined filters like median, mean, etc. I need to create my own laser filter and make it part of a filter_chain. I would be really grateful if someone could suggest how to go about it; a small piece of code would also be helpful. If I am misunderstanding the concept, please correct me, as this is the first time I am going through laser_filters.

How can I instantiate a laser filter into a filter_chain in C++? In ROS they have given an example, but I don't see what filtering is happening in this chain.
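A custom filter is a plugin class deriving from `filters::FilterBase<sensor_msgs::LaserScan>`; once exported, it can be listed in the filter-chain YAML like the built-in median/mean filters. The package and class names below ("my_laser_filters", "RangeClampFilter") and the `max_range` parameter are made up for illustration, and the export macro name varies by ROS release (older ones use `PLUGINLIB_DECLARE_CLASS`):

```cpp
#include <filters/filter_base.h>
#include <sensor_msgs/LaserScan.h>
#include <pluginlib/class_list_macros.h>

namespace my_laser_filters
{
class RangeClampFilter : public filters::FilterBase<sensor_msgs::LaserScan>
{
public:
    // Called once when the chain is configured; reads parameters
    // declared for this filter in the filter-chain YAML.
    bool configure()
    {
        return getParam("max_range", max_range_);
    }

    // Called for every scan passing through the chain.
    bool update(const sensor_msgs::LaserScan& input, sensor_msgs::LaserScan& output)
    {
        output = input;
        for (size_t i = 0; i < output.ranges.size(); ++i)
            if (output.ranges[i] > max_range_)
                output.ranges[i] = max_range_;
        return true;
    }

private:
    double max_range_;
};
} // namespace my_laser_filters

PLUGINLIB_EXPORT_CLASS(my_laser_filters::RangeClampFilter,
                       filters::FilterBase<sensor_msgs::LaserScan>)
```

The class additionally needs a plugin description XML and an `export` entry in the package manifest so that the filter chain can find it by name.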

Regards, Subhasis