SM's profile - activity

2017-02-20 17:30:44 -0500 received badge  Student (source)
2014-10-25 22:53:14 -0500 received badge  Popular Question (source)
2014-10-25 03:40:34 -0500 received badge  Famous Question (source)
2014-07-16 14:24:13 -0500 received badge  Famous Question (source)
2014-05-20 09:20:33 -0500 received badge  Notable Question (source)
2014-05-20 09:20:33 -0500 received badge  Popular Question (source)
2014-05-20 09:20:33 -0500 received badge  Famous Question (source)
2014-02-11 20:36:08 -0500 received badge  Famous Question (source)
2014-02-04 18:24:49 -0500 received badge  Notable Question (source)
2014-01-24 12:35:37 -0500 commented question Classification using 3D recognition

In the link that I gave, it just mentions from line 292 that it is computing the keypoints. I am sorry, but I do not know where and how they are stored so that I can use them as raw input to a classifier. I thought that, in the same way data can be extracted from PCD files, we could extract numeric values from the keypoint structure?

2014-01-16 05:56:15 -0500 commented question Classification using 3D recognition

My question is from the structure "keypoints", how can I access the values of the features?

2014-01-16 02:36:47 -0500 received badge  Popular Question (source)
2014-01-15 06:13:57 -0500 asked a question Classification using 3D recognition

I am using the 3D recognition tutorial http://pointclouds.org/documentation/tutorials/correspondence_grouping.php#correspondence-grouping with a Kinect. I want to recognize different objects, say cans and bottles, and classify them into two classes. For this, I want to use a neural network. I cannot understand how to get the features. How do I extract the features from this code, and what would those features be, so that I can feed them into a neural network for classification? Thank you in advance.
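
One way to get numeric feature values out of that tutorial's pipeline is sketched below (illustrative only, not part of the tutorial). In the correspondence-grouping code the local descriptors are SHOT descriptors stored in clouds such as model_descriptors and scene_descriptors of type pcl::PointCloud<pcl::SHOT352>::Ptr (variable names taken from the tutorial), and each pcl::SHOT352 holds a plain float descriptor[352] array that can be copied into an ordinary feature vector:

    // Sketch: flatten SHOT descriptors into per-keypoint feature vectors
    // that can be fed to a classifier / neural network.
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <vector>

    std::vector<std::vector<float> >
    descriptorsToFeatures (const pcl::PointCloud<pcl::SHOT352>::Ptr& descriptors)
    {
      std::vector<std::vector<float> > features;
      for (size_t i = 0; i < descriptors->points.size (); ++i)
      {
        // pcl::SHOT352 stores its values in a float descriptor[352] array
        std::vector<float> f (descriptors->points[i].descriptor,
                              descriptors->points[i].descriptor + 352);
        features.push_back (f);   // one 352-dimensional sample per keypoint
      }
      return features;
    }

Each row is one 352-dimensional sample; how the per-keypoint vectors are pooled into a single per-object vector (averaging, a bag-of-words histogram, etc.) is a separate design choice for the classifier input.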

2014-01-07 21:50:43 -0500 received badge  Famous Question (source)
2014-01-07 21:50:43 -0500 received badge  Notable Question (source)
2014-01-07 20:03:30 -0500 received badge  Notable Question (source)
2014-01-02 19:46:29 -0500 received badge  Notable Question (source)
2013-11-19 20:32:42 -0500 received badge  Popular Question (source)
2013-11-18 17:06:26 -0500 received badge  Popular Question (source)
2013-11-13 14:30:31 -0500 asked a question PCL error : input file has more points than advertised

I am running the code given in SIFT with keypoints. However, it returns an error. I do not know what this means, and there is no output either. Please help in resolving it.

[pcl::PCDReader::read] input file apple_1_1_1.pcd has more points than advertised (3161)!
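
The message means the header of apple_1_1_1.pcd advertises 3161 points (WIDTH x HEIGHT) while the file actually contains more data lines than that. A minimal workaround is sketched below, assuming the file is in ASCII format: count the data lines and rewrite the WIDTH/HEIGHT/POINTS fields so the header matches the data, then load the repaired copy with PCL as usual.

    // Sketch: rewrite an ASCII PCD header so WIDTH/POINTS match the real data count.
    #include <fstream>
    #include <iostream>
    #include <string>
    #include <vector>

    int main (int argc, char** argv)
    {
      if (argc < 2)
      {
        std::cerr << "usage: fix_pcd <file.pcd>" << std::endl;
        return 1;
      }
      std::ifstream in (argv[1]);
      std::vector<std::string> header, data;
      std::string line;
      bool in_data = false;
      while (std::getline (in, line))
      {
        if (in_data)
        {
          if (!line.empty ())
            data.push_back (line);      // one point per non-empty line
        }
        else
        {
          header.push_back (line);
          if (line.compare (0, 4, "DATA") == 0)
            in_data = true;             // everything after "DATA ascii" is point data
        }
      }
      std::ofstream out (std::string (argv[1]) + ".fixed.pcd");
      for (size_t i = 0; i < header.size (); ++i)
      {
        if (header[i].compare (0, 5, "WIDTH") == 0)
          out << "WIDTH " << data.size () << "\n";
        else if (header[i].compare (0, 6, "POINTS") == 0)
          out << "POINTS " << data.size () << "\n";
        else if (header[i].compare (0, 6, "HEIGHT") == 0)
          out << "HEIGHT 1\n";          // treat the repaired cloud as unorganized
        else
          out << header[i] << "\n";
      }
      for (size_t i = 0; i < data.size (); ++i)
        out << data[i] << "\n";
      std::cout << "Wrote " << data.size () << " points" << std::endl;
      return 0;
    }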

2013-11-08 10:56:43 -0500 asked a question Issues in working with pointcloud and 2D feature detector

I would like to know how I can combine point cloud RGB data with SURF features. For the point cloud I am subscribing to "camera/depth_registered/points", and for SURF to /camera/rgb/image_rect. The objective is to do object recognition using the color and SURF features. I had two approaches in mind. (Approach 1) Get a clean 3D segmented portion using the point cloud and then work with it and the 2D SURF features, so that based on the color information and the texture information obtained from the point cloud and SURF I can do object recognition. (Approach 2) After the template matching is done and the bounding box has been constructed, go back to the template and extract the color information.

Any suggestions on how to go about this would be greatly appreciated. I am having a tough time integrating the point cloud with SURF. Can somebody please let me know how to combine them?

// Globals used by this snippet (declared elsewhere in the original node)
pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZRGB>);
ros::Publisher pub;

void
cloud_cb (const sensor_msgs::PointCloud2ConstPtr& input)
{

    pcl::fromROSMsg (*input, *cloud);   
    // ... do data processing


     std::cout << "PointCloud before filtering has: " << cloud->points.size () << " data points." << std::endl; //*

      // Create the filtering object: downsample the dataset using a leaf size of 1cm
      pcl::VoxelGrid<pcl::PointXYZRGB> vg;
      pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud_filtered (new pcl::PointCloud<pcl::PointXYZRGB>);
  vg.setInputCloud (cloud);
  vg.setLeafSize (0.01f, 0.01f, 0.01f);
  vg.filter (*cloud_filtered);
  std::cout << "PointCloud after filtering has: " << cloud_filtered->points.size ()  << " data points." << std::endl; //*

  // Create the segmentation object for the planar model and set all the parameters

}



int
main (int argc, char** argv)
{
  // Initialize ROS
  ros::init (argc, argv, "euclideanclusters");
  ros::NodeHandle nh;

  // Create a ROS subscriber for the input point cloud
  ros::Subscriber sub = nh.subscribe ("camera/depth_registered/points", 1, cloud_cb);

  // Create a ROS publisher for the output point cloud
  pub = nh.advertise<sensor_msgs::PointCloud2> ("output", 1);

  // Spin
  ros::spin ();
}


SURF

public:
  ImageConverter()
    : it_(nh_)
  {
    // Subscribe to input video feed and publish output video feed
    image_sub_ = it_.subscribe("/camera/rgb/image_rect", 1, 
      &ImageConverter::imageCb, this);
    image_pub_ = it_.advertise("/image_converter/output_video", 1);

   cv::namedWindow(OPENCV_WINDOW1);

  }

  ~ImageConverter()
  {
    cv::destroyWindow(OPENCV_WINDOW1);

  }

  void imageCb(const sensor_msgs::ImageConstPtr& msg)
  {

 cv_bridge::CvImage cv_img;   // training image


 cv_bridge::CvImagePtr cv_ptr_frames;  // kinect frames

 cv::Mat objectMat = imread( "juice2575.png" );

 //  imshow( WINDOW1, object );

 cv_img.header.stamp = ros::Time::now();
 cv_img.header.frame_id=msg->header.frame_id;
 cv_img.encoding = "rgb8";
 cv_img.encoding = sensor_msgs::image_encodings::RGB8;
 cv_img.image = object;


  sensor_msgs::Image im;
  cv_img.toImageMsg(im);  //Conversion of Input image to ROS image



  try
  {
     cv_ptr_frames = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);
   }
    catch (cv_bridge::Exception& e)
    {
     ROS_ERROR("cv_bridge exception: %s", e.what());
     return;
  }
    cv::Mat sceneMat = cv_ptr_frames->image;

   // SURF Object detection

        bool objectFound = false;
            float nndrRatio = 0.7f;
        //vector of keypoints  
        vector< cv::KeyPoint > keypointsO;
        vector< cv::KeyPoint > keypointsS;   

        Mat descriptors_object, descriptors_scene;     

       //-- Step 1: Extract keypoints
        int hessianValue = 400;            // threshold value assumed; not defined in the posted snippet
        SurfFeatureDetector surf(hessianValue);
        surf.detect(sceneMat,keypointsS);
        surf.detect(objectMat,keypointsO);


        //-- Step 2: Calculate descriptors (feature vectors)
      SurfDescriptorExtractor extractor;
      extractor.compute( sceneMat, keypointsS, descriptors_scene );
      extractor.compute( objectMat, keypointsO, descriptors_object );

      //-- Step 3: Matching descriptor vectors using FLANN matcher
      FlannBasedMatcher matcher; 
      // descriptors_scene.size(), keypointsO.size(), keypointsS.size());
      std::vector< vector< DMatch ...
(more)
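
One way to realize Approach 1 is sketched below. It assumes the cloud from camera/depth_registered/points is organized (e.g. 640x480) and registered to the RGB image, so that pixel (u, v) in the SURF image and element (u, v) in the cloud refer to the same scene point; the helper name keypointsTo3D is purely illustrative. Looking up each SURF keypoint in the organized cloud gives the colored 3D point behind it, which can then be combined with the segmented cluster.

    // Sketch: map 2D SURF keypoints to their 3D (XYZRGB) counterparts in an
    // organized, depth-registered point cloud.
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <opencv2/features2d/features2d.hpp>
    #include <cmath>
    #include <vector>

    std::vector<pcl::PointXYZRGB>
    keypointsTo3D (const pcl::PointCloud<pcl::PointXYZRGB>::ConstPtr& cloud,
                   const std::vector<cv::KeyPoint>& keypoints)
    {
      std::vector<pcl::PointXYZRGB> points3d;
      for (size_t i = 0; i < keypoints.size (); ++i)
      {
        int u = static_cast<int> (keypoints[i].pt.x);   // column
        int v = static_cast<int> (keypoints[i].pt.y);   // row
        if (u < 0 || v < 0 ||
            u >= static_cast<int> (cloud->width) ||
            v >= static_cast<int> (cloud->height))
          continue;
        const pcl::PointXYZRGB& p = cloud->at (u, v);   // at(column, row) for organized clouds
        if (!std::isfinite (p.z))                       // skip pixels with no depth
          continue;
        points3d.push_back (p);                         // carries x, y, z and packed RGB
      }
      return points3d;
    }
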
2013-11-07 14:26:13 -0500 asked a question How to work with kinect frames in RGB color format using cvbridge

I am trying to process color Kinect frames following the tutorial from the link working with cv_bridge. I have a training image, juice_color.png, which I can visualize as RGB. I specified the encoding as cv_img.encoding = "rgb8";

Going by the same logic, I used rgb8 for the incoming Kinect frames, but the display window imshow( WINDOW2, cv_ptr_frames->image ); shows the Kinect frames in grayscale. How do I work with RGB Kinect frames? How do I prevent the color format from changing to grayscale? Thank you.

static const std::string OPENCV_WINDOW1 = "Good Matches";
static const std::string WINDOW1 = "Training Image ";
static const std::string WINDOW2 = "Test Frames ";

class ImageConverter
{
  ros::NodeHandle nh_;
  image_transport::ImageTransport it_;
  image_transport::Subscriber image_sub_;
  image_transport::Publisher image_pub_;



public:
  ImageConverter()
    : it_(nh_)
  {
    // Subscribe to input video feed and publish output video feed
    image_sub_ = it_.subscribe("/camera/rgb/image_rect", 1, 
      &ImageConverter::imageCb, this);
    image_pub_ = it_.advertise("/image_converter/output_video", 1);

   cv::namedWindow(OPENCV_WINDOW1);

  }

  ~ImageConverter()
  {
    cv::destroyWindow(OPENCV_WINDOW1);

  }

  void imageCb(const sensor_msgs::ImageConstPtr& msg)
  {

 cv_bridge::CvImage cv_img;   // training image
 cv_bridge::CvImage cv_img_proc;  

 cv_bridge::CvImagePtr cv_ptr_frames; // kinect frames

 cv::Mat object = imread( "juice_color.png" );

   imshow( WINDOW1, object );  //Displayed in RGB color

 cv_img.header.stamp = ros::Time::now();
 cv_img.header.frame_id=msg->header.frame_id;
 cv_img.encoding = "rgb8";
 cv_img.encoding = sensor_msgs::image_encodings::RGB8;
 cv_img.image = object;


  sensor_msgs::Image im;
  cv_img.toImageMsg(im);  



  try
  {
     cv_ptr_frames = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::RGB8);
   }
    catch (cv_bridge::Exception& e)
    {
     ROS_ERROR("cv_bridge exception: %s", e.what());
     return;
  }




    imshow( WINDOW2, cv_ptr_frames->image );
    cv::waitKey(3);   // give HighGUI a chance to actually draw the frame
}
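
One likely explanation, with a minimal sketch below: with openni_launch, /camera/rgb/image_rect is the rectified monochrome image, while the rectified color image is published on /camera/rgb/image_rect_color (topic names assumed from the standard openni_launch layout, so check rostopic list). Subscribing to the color topic and converting with a color encoding keeps the Kinect frames in color:

    // Sketch: minimal node that displays the Kinect RGB stream in color.
    #include <ros/ros.h>
    #include <image_transport/image_transport.h>
    #include <cv_bridge/cv_bridge.h>
    #include <sensor_msgs/image_encodings.h>
    #include <opencv2/highgui/highgui.hpp>

    void imageCb (const sensor_msgs::ImageConstPtr& msg)
    {
      cv_bridge::CvImagePtr cv_ptr;
      try
      {
        // BGR8 is the channel order cv::imshow expects for correct colors
        cv_ptr = cv_bridge::toCvCopy (msg, sensor_msgs::image_encodings::BGR8);
      }
      catch (cv_bridge::Exception& e)
      {
        ROS_ERROR ("cv_bridge exception: %s", e.what ());
        return;
      }
      cv::imshow ("Test Frames", cv_ptr->image);
      cv::waitKey (3);
    }

    int main (int argc, char** argv)
    {
      ros::init (argc, argv, "color_frame_viewer");
      ros::NodeHandle nh;
      image_transport::ImageTransport it (nh);
      image_transport::Subscriber sub =
          it.subscribe ("/camera/rgb/image_rect_color", 1, imageCb);
      ros::spin ();
      return 0;
    }
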
2013-11-07 06:05:24 -0500 received badge  Scholar (source)
2013-11-06 08:32:16 -0500 asked a question Issues in cv_bridge namespace declaration out of scope error

The following code is used to do some image processing using OpenCV and ROS, but it throws an error saying that cv_bridge has not been declared. We have done the declaration, and we also used the conversion between a Mat image and a ROS image (and vice versa) from //wiki.ros.org/cv_bridge/Tutorials/UsingCvBridgeToConvertBetweenROSImagesAndOpenCVImages#cv_bridge.2BAC8-Tutorials.2BAC8-UsingCvBridgeCppDiamondback.Converting_ROS_image_messages_to_OpenCV_images.

The objective is to read an image which is in Mat format, convert it to a ROS type, do some processing, and convert it back to Mat format. Can somebody please help in resolving the error and explain how to go about it? Thank you for your help.

    #include <ros/ros.h>
    #include <image_transport/image_transport.h>
    #include <cv_bridge/CvBridge.h>
    #include <sensor_msgs/image_encodings.h>
    #include <sensor_msgs/Image.h>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/nonfree/features2d.hpp>
    #include <opencv2/calib3d/calib3d.hpp>
    #include <opencv2/core/core.hpp>


    using namespace cv;

    class ImageConverter
    {
      ros::NodeHandle nh_;
      image_transport::ImageTransport it_;
      image_transport::Subscriber image_sub_;
      image_transport::Publisher image_pub_;
      sensor_msgs::CvBridge bridge_;

    public:
      ImageConverter()
        : it_(nh_)
      {
        // Subscribe to input video feed and publish output video feed
        image_sub_ = it_.subscribe("/camera/rgb/image_rect", 1, 
          &ImageConverter::imageCb, this);
        image_pub_ = it_.advertise("/image_converter/output_video", 1);

        cv::namedWindow(OPENCV_WINDOW);
      }

      ~ImageConverter()
      {
        cv::destroyWindow(OPENCV_WINDOW);
      }

      void imageCb(const sensor_msgs::ImageConstPtr& msg)
      {

     cv_bridge::CvImage cv_img;
     cv_bridge::CvImage cv_img_proc;

     cv_img.header.stamp = ros::Time::now();
     cv_img.header.frame_id=msg->header.frame_id;

     cv_img = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);


//// do some OpenCV processing


        try
      {
        image_pub_.publish(bridge_.cvToImgMsg(cv_img_proc, "bgr8"));
      }
      catch (sensor_msgs::CvBridgeException error)
      {
        ROS_ERROR("error");
      }

     }  

    };

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "image_converter");
      ImageConverter ic;
      ros::spin();
      return 0;
    }

/home/robot/ros_workspace/tutorialROSOpenCV/src/main.cpp: In member function ‘void ImageConverter::imageCb(const ImageConstPtr&)’:
/home/robot/ros_workspace/tutorialROSOpenCV/src/main.cpp:52:2: error: ‘cv_bridge’ has not been declared
/home/robot/ros_workspace/tutorialROSOpenCV/src/main.cpp:52:21: error: expected ‘;’ before ‘cv_img’
/home/robot/ros_workspace/tutorialROSOpenCV/src/main.cpp:53:2: error: ‘cv_bridge’ has not been declared
/home/robot/ros_workspace/tutorialROSOpenCV/src/main.cpp:53:21: error: expected ‘;’ before ‘cv_img_proc’
/home/robot/ros_workspace/tutorialROSOpenCV/src/main.cpp:55:2: error: ‘cv_img’ was not declared in this scope
/home/robot/ros_workspace/tutorialROSOpenCV/src/main.cpp:58:11: error: ‘cv_bridge’ has not been declared
/home/robot/ros_workspace/tutorialROSOpenCV/src/main.cpp:137:51: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
/home/robot/ros_workspace/tutorialROSOpenCV/src/main.cpp:156:41: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
make[3]: *** [CMakeFiles/tutorialROSOpenCV.dir/src/main.o] Error 1
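
A possible fix is sketched below (not from the original thread). The file includes the old C-style header <cv_bridge/CvBridge.h>, which only declares sensor_msgs::CvBridge; the cv_bridge:: namespace used in imageCb lives in <cv_bridge/cv_bridge.h>. Replacing the include and using cv_bridge::CvImagePtr consistently (toCvCopy returns a pointer, and toImageMsg() replaces the old cvToImgMsg()) should remove the "has not been declared" errors. The snippet assumes the image_pub_ member from the posted class:

    #include <cv_bridge/cv_bridge.h>        // instead of <cv_bridge/CvBridge.h>
    #include <sensor_msgs/image_encodings.h>

    void imageCb (const sensor_msgs::ImageConstPtr& msg)
    {
      cv_bridge::CvImagePtr cv_ptr;         // toCvCopy returns a CvImagePtr, not a CvImage
      try
      {
        cv_ptr = cv_bridge::toCvCopy (msg, sensor_msgs::image_encodings::BGR8);
      }
      catch (cv_bridge::Exception& e)
      {
        ROS_ERROR ("cv_bridge exception: %s", e.what ());
        return;
      }

      // ... do some OpenCV processing on cv_ptr->image ...

      // Republish the (possibly modified) image
      image_pub_.publish (cv_ptr->toImageMsg ());
    }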

2013-11-06 06:27:32 -0500 commented answer Issue in implementing SURF feature extraction algorithm

Could you please put up a code snippet showing how I should change the specific lines? For Q1, how do I set the frame ID to the camera? Is it the name of the topic, which is image_sub_? Sorry, I could not follow.

2013-11-05 12:42:24 -0500 asked a question Issue in implementing SURF feature extraction algorithm

I am trying to implement SURF. From the comment mentioned in the answer (sorry for not being able to publish links, as I am not allowed): //answers.ros.org/question/61462/how-to-publish-opencv-mat-image-in-ros-fuerte/, I have a problem understanding the line cv_img.header.frame_id = "your_camera_frame";

My questions are: (Q1) What should I use for cv_img.header.frame_id?
(Q2) What is the substitute for VideoCapture in ROS so that I can capture the video frames and work with this code? Thanks in advance.

#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/nonfree/features2d.hpp>
#include "opencv2/calib3d/calib3d.hpp"

static const std::string OPENCV_WINDOW = "Image window";

class ImageConverter
{
  ros::NodeHandle nh_;
  image_transport::ImageTransport it_;
  image_transport::Subscriber image_sub_;
  image_transport::Publisher image_pub_;

public:
  ImageConverter()
    : it_(nh_)
  {
    // Subscribe to input video feed and publish output video feed
    image_sub_ = it_.subscribe("/camera/rgb/image_rect", 1,
      &ImageConverter::imageCb, this);
    image_pub_ = it_.advertise("/image_converter/output_video", 1);

    cv::namedWindow(OPENCV_WINDOW);
  }

  ~ImageConverter()
  {
    cv::destroyWindow(OPENCV_WINDOW);
  }

  void imageCb(const sensor_msgs::ImageConstPtr& msg)
  {
    Mat object = imread( "book.png", CV_LOAD_IMAGE_GRAYSCALE );

    // (the highlighted lines from the linked answer)
    cv_bridge::CvImage cv_img;
    cv_img.header.stamp = ros::Time::now();
    cv_img.header.frame_id =

    cv_img.image = img_topub;

    img_pub_.publish(cv_img.toImageMsg());

    // Detect the keypoints using SURF Detector
    int minHessian = 500;

    SurfFeatureDetector detector( minHessian );
    std::vector<KeyPoint> kp_object;

    detector.detect( object, kp_object );

    // Calculate descriptors (feature vectors)
    SurfDescriptorExtractor extractor;
    Mat des_object;

    extractor.compute( object, kp_object, des_object );

    FlannBasedMatcher matcher;

    VideoCapture cap(0);

    namedWindow("Good Matches");

    std::vector<Point2f> obj_corners(4);

    // Get the corners from the object
    obj_corners[0] = cvPoint(0,0);
    obj_corners[1] = cvPoint( object.cols, 0 );
    obj_corners[2] = cvPoint( object.cols, object.rows );
    obj_corners[3] = cvPoint( 0, object.rows );

    int framecount = 0;
    while (key != 27)
    {
      Mat frame;
      cap >> frame;

      Mat des_image, img_matches;
      std::vector<KeyPoint> kp_image;
      std::vector<vector<DMatch > > matches;
      std::vector<DMatch > good_matches;
      std::vector<Point2f> obj;
      std::vector<Point2f> scene;
      std::vector<Point2f> scene_corners(4);
      Mat H;

      detector.detect( image, kp_image );
      extractor.compute( image, kp_image, des_image );
      matcher.knnMatch(des_object, des_image, matches, 2);

      for(int i = 0; i < min(des_image.rows-1,(int) matches.size()); i++) // THIS LOOP IS SENSITIVE TO SEGFAULTS
      {
        if((matches[i][0].distance < 0.6*(matches[i][1].distance)) && ((int) matches[i].size()<=2 && (int) matches[i].size()>0))
        {
          good_matches.push_back(matches[i][0]);
        }
      }

      // Draw only "good" matches
      drawMatches( object, kp_object, image, kp_image, good_matches, img_matches,
                   Scalar::all(-1), Scalar::all(-1), vector<char>(),
                   DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

      if (good_matches.size() >= 4)
      {
        for( int i = 0; i < good_matches.size(); i++ )
        {
          // Get the keypoints from the good matches
          obj.push_back( kp_object[ good_matches[i].queryIdx ].pt );
          scene.push_back( kp_image[ good_matches[i].trainIdx ].pt );
        }

        H = findHomography( obj, scene, CV_RANSAC );

        perspectiveTransform( obj_corners, scene_corners, H);

        // Draw lines between the corners (the mapped object in the scene image)
        line( img_matches, scene_corners[0] + Point2f( object.cols, 0), scene_corners[1] + Point2f( object.cols, 0), Scalar(0, 255, 0), 4 );
        line( img_matches, scene_corners[1] + Point2f( object.cols, 0), scene_corners[2] + Point2f( object.cols, 0), Scalar( 0, 255, 0), 4 );
        line( img_matches, scene_corners ...
(more)
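
One way to address both questions is sketched below; it assumes the ImageConverter class from the post (with its image_pub_ publisher) and that the camera is started with openni_launch, in which case the incoming image's frame is typically camera_rgb_optical_frame. Inside imageCb there is no need for VideoCapture: every incoming sensor_msgs::Image already is one video frame.

    void imageCb (const sensor_msgs::ImageConstPtr& msg)
    {
      // (Q2) The ROS substitute for VideoCapture: convert the subscribed message.
      cv_bridge::CvImagePtr cv_ptr;
      try
      {
        cv_ptr = cv_bridge::toCvCopy (msg, sensor_msgs::image_encodings::BGR8);
      }
      catch (cv_bridge::Exception& e)
      {
        ROS_ERROR ("cv_bridge exception: %s", e.what ());
        return;
      }
      cv::Mat frame = cv_ptr->image;   // use this wherever the original code used "cap >> frame"

      // (Q1) frame_id: the simplest choice is to reuse the frame of the incoming
      // image, which with openni_launch is typically "camera_rgb_optical_frame".
      cv_bridge::CvImage cv_img;
      cv_img.header.stamp = ros::Time::now ();
      cv_img.header.frame_id = msg->header.frame_id;
      cv_img.encoding = sensor_msgs::image_encodings::BGR8;
      cv_img.image = frame;            // or any processed cv::Mat
      image_pub_.publish (cv_img.toImageMsg ());
    }
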
2013-11-01 09:14:32 -0500 received badge  Popular Question (source)
2013-10-31 07:28:46 -0500 asked a question Issues about quaternion and conversion

I am using openni_tracker.cpp and the skeleton tracking. My questions are: (A) How do I obtain the quaternion values of each joint? (B) How do I convert each joint's rotation matrix (say, for 3 joints) to a half-space quaternion with respect to the torso in skeleton tracking? Thank you in advance.
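
One way to obtain per-joint quaternions relative to the torso is sketched below. It assumes openni_tracker is publishing TF frames named like torso_1 and right_hand_1 for user 1 (the exact frame names depend on the tracker version and user ID, so check rosrun tf tf_monitor); lookupTransform then gives each joint's pose expressed in the torso frame, and getRotation() returns that orientation as a quaternion.

    // Sketch: print the right hand's orientation as a quaternion in the torso frame.
    #include <ros/ros.h>
    #include <tf/transform_listener.h>

    int main (int argc, char** argv)
    {
      ros::init (argc, argv, "joint_quaternions");
      ros::NodeHandle nh;
      tf::TransformListener listener;
      ros::Rate rate (30);

      while (ros::ok ())
      {
        tf::StampedTransform transform;
        try
        {
          // Pose of the right hand expressed in the torso frame
          listener.lookupTransform ("/torso_1", "/right_hand_1",
                                    ros::Time (0), transform);
          tf::Quaternion q = transform.getRotation ();
          ROS_INFO ("right_hand quaternion (x y z w): %f %f %f %f",
                    q.x (), q.y (), q.z (), q.w ());
        }
        catch (tf::TransformException& ex)
        {
          ROS_WARN ("%s", ex.what ());
        }
        rate.sleep ();
      }
      return 0;
    }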

2013-10-31 07:21:26 -0500 asked a question Doubts regarding transformation and application in gesture recognition

I want to do gesture recognition using the Kinect, and for that I recorded the 3D joint positions from skeleton tracking, where each joint is represented by a point cloud. I am using the skeleton tracker program and openni_tracker.cpp. According to my understanding, (x, y) are the pixel coordinates and the z coordinate is the depth value in mm. Pardon my ignorance; as I am a beginner, these questions may sound trivial. Initially, I am trying to get familiar with skeleton tracking and run some machine learning algorithm for gesture recognition. So, my feature set is: the 3D joint positions of the right arm, the 3D joint positions with respect to the torso, and the mean and quaternion angles of the right arm. Do I need any transformation when (A) working only with a Kinect mounted on a high table, or (B) with the Kinect mounted on a robot? What do I need to do for gesture recognition?

2013-10-23 05:03:40 -0500 received badge  Popular Question (source)
2013-09-27 07:37:52 -0500 received badge  Editor (source)
2013-09-26 13:29:14 -0500 asked a question Issue with skeleton markers

I have been trying to get skeleton tracking with markers working using the wiki.ros.org/skeleton_markers tutorial. The tracking with openni and RViz works fine, but I cannot get the markers to show up. In the markers_from_tf.launch file, we changed the line <include file="$(find openni_camera)/launch/openni_node.launch" /> to <include file="$(find openni_launch)/launch/openni.launch" />, since skeleton_markers was made for the old version. There seems to be a mismatch between the frames used for communication, namely camera_depth_frame and openni_depth_frame.

How do we make the markers appear? Thank you.