
Engharat's profile - activity

2012-10-16 01:14:53 -0500 received badge  Famous Question (source)
2012-09-21 11:11:02 -0500 received badge  Popular Question (source)
2012-09-21 11:11:02 -0500 received badge  Notable Question (source)
2012-08-20 23:04:11 -0500 received badge  Notable Question (source)
2012-08-20 23:04:11 -0500 received badge  Popular Question (source)
2012-08-20 23:04:11 -0500 received badge  Famous Question (source)
2012-08-07 06:10:09 -0500 received badge  Nice Question (source)
2012-07-03 06:29:01 -0500 received badge  Student (source)
2012-06-15 01:11:54 -0500 asked a question Cannot visualize depth image from turtlebot simulator

Hi, I'm digging deeper into my simulated Kinect problem. The problem is: I cannot display the depth image data from the turtlebot simulated Kinect. The topic is published correctly; I can see the messages flowing. The Gazebo camera view shows a plane with a cube on it, but when I try to visualize the depth image with cv::imshow, I see skewed gray lines with the rest of the window filled with black instead of the cube. The output from the Gazebo turtlebot simulator is of type mono8 (I can choose between mono8 and RGB8). The code I'm using is:

cv_bridge::CvImagePtr bridge;
try
{
  bridge = cv_bridge::toCvCopy(image, "");
  //cv::imshow("Kinect RGB image", bridge->image);
}
catch (cv_bridge::Exception& e)
{
  ROS_ERROR("Failed to transform depth image.");
  return;
}

// convert to something visible
cv::Mat img(bridge->image.rows, bridge->image.cols, CV_8UC1);
for(int i = 0; i < bridge->image.rows; i++)
{
    char* Di = bridge->image.ptr<char>(i);
    char* Ii = img.ptr<char>(i);
    for(int j = 0; j < bridge->image.cols; j++)
    {   
      Ii[j] = Di[j];
    }   
} 

// display
cv::imshow(WINDOW_NAME, img);
cv::waitKey(3);

}

That code is taken from depth_viewer. I changed it because depth_viewer expects a 32-bit float image, while I already have mono8 data. If I try to use image_view instead, it gives:

OpenCV Error: Image step is wrong () in cvInitMatHeader, file /tmp/buildd/libopencv-2.3.1+svn6514+branch23/modules/core/src/array.cpp, line 162
terminate called after throwing an instance of 'cv::Exception'
  what(): /tmp/buildd/libopencv-2.3.1+svn6514+branch23/modules/core/src/array.cpp:162: error: (-13) in function cvInitMatHeader

Maybe this is because the mono8 depth image topic is not in the right OpenCV format. (I'm using ROS Electric.)
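For comparison, this is the kind of conversion I would expect to need if I switched the simulator to publish a 32FC1 depth image instead. This is just a minimal sketch, assuming depth in meters with a maximum range of about 5 m; the function name and window title are only illustrative:

#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/highgui/highgui.hpp>

// Minimal sketch: turn a 32FC1 depth image (meters) into something cv::imshow can display.
void showFloatDepth(const sensor_msgs::ImageConstPtr& image)
{
  cv_bridge::CvImagePtr bridge;
  try
  {
    bridge = cv_bridge::toCvCopy(image, "32FC1");
  }
  catch (cv_bridge::Exception& e)
  {
    ROS_ERROR("Failed to convert depth image: %s", e.what());
    return;
  }

  // Scale [0 m, 5 m] into [0, 255] so it shows up as a grayscale image.
  cv::Mat img;
  bridge->image.convertTo(img, CV_8UC1, 255.0 / 5.0);

  cv::imshow("depth (scaled)", img);
  cv::waitKey(3);
}

cv::Mat::convertTo with a scale factor is just one way to squeeze the float range into 8 bits for display. With mono8 data there should be nothing to scale, which is what confuses me about the skewed gray lines.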

2012-06-14 10:11:35 -0500 asked a question depth_image_proc not working!

Hi all, I started using ROS just a few days ago. I've read the tutorials and played with the examples, but I'm still quite a ROS newbie :) What I have to do is: get the depth image from a simulated Kinect, transform it into a point cloud, and do some other things with it. I started by launching turtlebot_simulator, which has a simulated Kinect camera. I know it already publishes a point cloud, but I need to start from the depth image. What I have done so far:

1) In the turtlebot's gazebo.urdf.xacro I added this line: <depthImageTopicName>/camera/depth/image_raw</depthImageTopicName> (does anyone have hints on where to find all the tags of the libgazebo_ros_openni_kinect.so plugin?)

2) With that change I get the /camera/depth/image_raw topic, and if I echo it I can see the messages flowing.

3) To make sure this topic really carries the depth image, I tried to visualize it with the depth_viewer package, using rosrun depth_viewer depth_viewer /camera/depth/image:=/camera/depth/image_raw. I got an error, and soon realized that the image_raw published by turtlebot is in uint16 [mm] format, while depth_viewer needs float [m]. So I found the depth_image_proc nodelet and used it with the recommended launch file: https://gist.github.com/2400165

Well, the launch file runs fine and the nodelet creates a bunch of topics, BUT when I echo them none of them produces any messages: nothing on the converted /camera/depth/image, nothing on the registered point cloud /camera/depth_registered/points. That is my problem in the end: understanding how to get depth_image_proc working so that I end up with a visualizable depth image and a freshly made point cloud.
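In the meantime, as a sanity check, I'm thinking of doing the conversion myself in a small node instead of relying on the nodelet. This is only a rough sketch of what I understand the convert_metric step of depth_image_proc to be doing (assuming 16UC1 input in millimeters and ignoring the handling of invalid/zero readings; the node and topic names are just what I would pick):

#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <cv_bridge/cv_bridge.h>

ros::Publisher pub;

// Convert a uint16 depth image in millimeters to a 32-bit float image in meters.
void rawDepthCallback(const sensor_msgs::ImageConstPtr& raw)
{
  cv_bridge::CvImagePtr bridge;
  try
  {
    bridge = cv_bridge::toCvCopy(raw, "16UC1");
  }
  catch (cv_bridge::Exception& e)
  {
    ROS_ERROR("Failed to convert raw depth image: %s", e.what());
    return;
  }

  cv::Mat meters;
  bridge->image.convertTo(meters, CV_32FC1, 0.001);  // 1 mm = 0.001 m

  cv_bridge::CvImage out(raw->header, "32FC1", meters);
  pub.publish(out.toImageMsg());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "depth_mm_to_m");
  ros::NodeHandle nh;
  pub = nh.advertise<sensor_msgs::Image>("/camera/depth/image", 1);
  ros::Subscriber sub = nh.subscribe("/camera/depth/image_raw", 1, rawDepthCallback);
  ros::spin();
  return 0;
}

If this manual version produces a sensible /camera/depth/image while the nodelet's topics stay silent, at least I will know the raw data coming out of the simulator is fine.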