
espee's profile - activity

2016-08-18 15:31:59 -0500 received badge  Famous Question (source)
2016-08-18 15:31:59 -0500 received badge  Notable Question (source)
2016-08-08 00:39:28 -0500 marked best answer Isn't the hector_quadrotor stack catkinized?

Although the stack (and all of its packages) seems to be catkinized, I am unable to install the packages. Whenever I run the catkin_make command (after arranging the folder structure for catkin), the devel and install folders are not created.

The terminal output when I run catkin_make is:

Base path: /home/saroj/ros/ros_ws
Source space: /home/saroj/ros/ros_ws/src
Build space: /home/saroj/ros/ros_ws/build
Devel space: /home/saroj/ros/ros_ws/devel
Install space: /home/saroj/ros/ros_ws/install
####
#### Running command: "make cmake_check_build_system" in "/home/saroj/ros/ros_ws/build"
####
####
#### Running command: "make -j2 -l2" in "/home/saroj/ros/ros_ws/build"
####
saroj@saroj:~/ros/ros_ws$

(I have just started using ROS, and even Ubuntu, so I apologize if the question sounds stupid.)

2015-07-02 08:06:18 -0500 received badge  Student (source)
2014-07-14 02:49:54 -0500 received badge  Famous Question (source)
2014-04-02 16:10:44 -0500 received badge  Notable Question (source)
2014-02-26 13:15:54 -0500 received badge  Popular Question (source)
2014-02-25 20:42:40 -0500 asked a question opencv 3.0 and cv-bridge

I am currently using OpenCV 3.0, and now I would like to use it with ROS (as ROS still ships OpenCV 2.4). I need to use cv_bridge in my work. However, cv_bridge is linked against the OpenCV 2.4 libraries, which I must hide from the ROS system in order to use OpenCV 3.0. I tried to compile cv_bridge against the 3.0 libraries, but without any success: a lot changed in the transition from 2.4 to 3.0, including many identifier names.

Has anyone tried to do this with any success? I would appreciate any information on how to proceed.
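One commonly suggested route is to rebuild cv_bridge from source inside the catkin workspace, pointing CMake at the OpenCV 3 install instead of the ROS-shipped 2.4. A sketch only; the install prefix /usr/local is an assumption, and the cloned branch must match your ROS distro:

```shell
# Clone the vision_opencv repo (which contains cv_bridge) into the workspace
cd ~/ros/ros_ws/src
git clone https://github.com/ros-perception/vision_opencv.git

# Build, pointing CMake at the OpenCV 3 config files
# (OpenCV 3 installs OpenCVConfig.cmake under <prefix>/share/OpenCV)
cd ~/ros/ros_ws
catkin_make -DOpenCV_DIR=/usr/local/share/OpenCV
```

Whether the 2.4-era cv_bridge source then compiles against the renamed 3.0 APIs is a separate question; this only controls which OpenCV the build links.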

2014-02-24 01:05:38 -0500 received badge  Famous Question (source)
2014-02-10 01:19:30 -0500 received badge  Famous Question (source)
2014-02-01 05:37:41 -0500 received badge  Nice Answer (source)
2014-01-20 23:11:01 -0500 received badge  Notable Question (source)
2014-01-14 22:39:19 -0500 received badge  Popular Question (source)
2014-01-14 17:33:23 -0500 commented question 6D pose estimation problem. How to estimate the rotation matrix?

I guess cvHomography is for feature points associated with a plane in the image. In my case the feature points are not necessarily associated with a single plane. In fact, the image sequences are of rough terrain, and the camera undergoes both translation and rotation.

2014-01-13 21:36:26 -0500 asked a question 6D pose estimation problem. How to estimate the rotation matrix?

I have been trying to estimate the 6D pose of a moving camera using the OpenCV library. My steps are:

  1. Use goodFeaturesToTrack to get features.
  2. Calculate optical flow.
  3. Track features.
  4. Get the fundamental matrix.
  5. Get the essential matrix.
  6. Check the combinations of R and t to determine the true R & t using triangulation (there may be a better way to do this; I had to combine R & t and then extract them again later).
  7. Use the height information (from an altitude sensor) to get the scale factor and scale the translation.
  8. Update the transformation.
  9. Save the new feature set and image for the next loop.

But the results are far from expectations. I verified that R is orthogonal, but I can't figure out where I went wrong. My code is as follows:


void imageCb(const sensor_msgs::ImageConstPtr& image)
{
    try
    {
        if (start_flag_)
        {
            cv_ptr_prev_ = cv_bridge::toCvCopy(image, enc::BGR8);
        }
        cv_ptr_new_ = cv_bridge::toCvCopy(image, enc::BGR8);
    }
    catch (cv_bridge::Exception& e)
    {
        ROS_ERROR("Conversion error: %s", e.what());
        return;
    }

    if (start_flag_)
    {
        image_prev_.create(cv_ptr_prev_->image.rows, cv_ptr_prev_->image.cols, CV_8UC1);
        image_new_.create(cv_ptr_new_->image.rows, cv_ptr_new_->image.cols, CV_8UC1);
    }

    cvtColor(cv_ptr_new_->image, image_new_, CV_BGR2GRAY);
    cvtColor(cv_ptr_prev_->image, image_prev_, CV_BGR2GRAY);

    Mat F, E;                // fundamental and essential matrices
    vector<Point2f> p1;      // common feature points in first image
    vector<Point2f> p2;      // common feature points in second image
    double f = 320;          // camera parameter: focal length
    double Cx = 320.5;       // principal point
    double Cy = 240.5;
    Mat K = (Mat_<double>(3,3) << f, 0, -Cx, 0, f, -Cy, 0, 0, 1);  // camera intrinsic matrix

    Mat W = (Mat_<double>(3,3) << 0, -1, 0, 1, 0, 0, 0, 0, 1);     // for SVD
    Mat Z = (Mat_<double>(3,3) << 0, 1, 0, -1, 0, 0, 0, 0, 1);     // for SVD

    Mat w, u, vt;                 // SVD result of essential matrix
    Mat tx, Ra, Rb;               // temporary candidates for R and t
    Mat t_unscaled, t_scaled, R;  // true R and t for the current step

    // Lucas-Kanade optical flow
    vector<uchar> features_found;
    if (!start_flag_)
    {
        calcOpticalFlowPyrLK(image_prev_, image_new_, features_prev_, features_new_,
                             features_found, noArray(),
                             Size(win_size_*4+1, win_size_*4+1), 5,
                             TermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.3));
        for (int i = 0; i < (int)features_new_.size(); i++)
        {
            circle(cv_ptr_prev_->image, features_new_[i], 1, CV_RGB(255,255,255));
            circle(cv_ptr_prev_->image, features_prev_[i], 1, CV_RGB(0,0,0));
            if (!features_found[i])
                continue;
            line(cv_ptr_prev_->image, features_prev_[i], features_new_[i], Scalar(255,0,0), 1, CV_AA);
            p1.push_back(Point2f(features_prev_[i].x - Cx, features_prev_[i].y - Cy));
            p2.push_back(Point2f(features_new_[i].x - Cx, features_new_[i].y - Cy));
        }
        image_pub_.publish(cv_ptr_prev_->toImageMsg());  // publish image with features

        // Estimation
        vector<uchar> inliers;

        if (p1.size() == p2.size())
        {
            F = findFundamentalMat(p1, p2, CV_FM_RANSAC, 1.0, 0.999, inliers);
            // cout << "Number of inliers is: " << sum(inliers) << endl;
            E = K.t() * F * K;
            SVD::compute(E, w, u, vt);

            tx = u * Z * u.t();
            Ra = u * W * vt;
            Rb = u * W.t() * vt;

            if (determinant(Ra) <= 0)  Ra = -Ra;
            if (determinant(Rb) <= 0)  Rb = -Rb;

            t_unscaled = (Mat_<double>(1,3) << tx.at<double>(2,1), tx.at<double>(0,2), tx.at<double>(1,0));

            Mat tminus = -(t_unscaled.clone());

            // Check which Ra or Rb and which sign of t

            vector<Mat> R_vec ...
(more)
2013-12-18 23:42:24 -0500 received badge  Famous Question (source)
2013-11-06 04:07:59 -0500 received badge  Famous Question (source)
2013-11-04 03:46:28 -0500 received badge  Notable Question (source)
2013-10-28 08:43:30 -0500 received badge  Famous Question (source)
2013-10-21 17:45:05 -0500 commented answer harris detector in callback

Thanks for the link...

2013-10-21 17:44:50 -0500 received badge  Notable Question (source)
2013-10-14 21:08:45 -0500 commented answer harris detector in callback

Thanks. Actually I got it solved by changing <float> to <uchar>. Could you please write a snippet of code for the wrapper? I tried without success.

2013-10-14 19:11:44 -0500 received badge  Popular Question (source)
2013-10-14 04:37:46 -0500 received badge  Popular Question (source)
2013-10-14 00:47:30 -0500 commented answer harris detector in callback

I double-checked that, both ways. It should be (j, i) as in the tutorial, since that gives some sense to the corners detected.

2013-10-14 00:47:30 -0500 received badge  Commentator
2013-10-14 00:09:57 -0500 asked a question harris detector in callback

I have been trying to implement a Harris corner detector, but the results are pretty skewed. There is always some symmetry in the detected corners, as shown in the image below, which definitely should not be there. (The right image is the grayscale image. Why does it happen like this? It must be the three different channels joined together.)

harris detection on terrain

My callback portion looks like:

void imageCb(const sensor_msgs::ImageConstPtr& image)
{
    cv_bridge::CvImagePtr cv_ptr;

    try                         
    {
        cv_ptr = cv_bridge::toCvCopy(image, enc::BGR8); 

    }
    catch (cv_bridge::Exception& e)
    {
        ROS_ERROR("Conversion error: %s", e.what());
        return;
    }

    cv::Mat image_gray, image_corner, image_norm, image_scaled;

    cv::cvtColor(cv_ptr->image, image_gray, CV_BGR2GRAY); 
    cv::cornerHarris(image_gray, image_corner,blockSize, apertureSize, k, cv::BORDER_DEFAULT);    


    cv::normalize(image_corner, image_norm, 0, 255, cv::NORM_MINMAX, CV_8UC1, cv::Mat());   
    cv::convertScaleAbs (image_norm, image_scaled); //scaled



    //Drawing circles around corner

    for(int j=0; j<cv_ptr->image.rows ; j++) //norm
    {
        for(int i = 0; i<cv_ptr->image.cols; i++)
        {
            if((int) image_scaled.at<float>(j,i) > thresh)
            {
                cv::circle(cv_ptr->image, cv::Point(i, j),2,CV_RGB(255,0,0)); //scaled
            }
        }
    }

    pub_.publish(cv_ptr->toImageMsg());     

}

I think it is basically due to the RGB channels, but there are four similar clusters of corners, which confuses me. Most of the code is taken directly from the OpenCV Harris detector example. Any help is deeply appreciated. Thanks.

2013-10-12 23:14:14 -0500 received badge  Notable Question (source)
2013-10-12 17:52:40 -0500 commented answer processing image outside callback

I think the pointers in the object are initialized to NULL by the default constructor. Any way to solve this issue?

2013-10-12 17:50:34 -0500 commented answer processing image outside callback

That doesn't solve it either.

2013-10-12 17:26:09 -0500 commented answer How to change the yaw angle in ROS

If you just want a takeoff, rotation and landing, your twist sequence should look something like [0 0 1 0 0 0] (1 m/s in the z-direction), [0 0 0 0 0 1] (1 rad/s about the z-axis), [0 0 -1 0 0 0] (-1 m/s in the z-direction) and finally [0 0 0 0 0 0].

2013-10-12 16:23:26 -0500 received badge  Popular Question (source)
2013-10-12 05:00:02 -0500 answered a question How to change the yaw angle in ROS

Try resetting the linear velocities to zeros when you want to hover?

2013-10-12 04:52:46 -0500 commented answer processing image outside callback

I did the following, where imageIn_ is an ImageConstPtr: image_in = cv_bridge::toCvCopy(imageIn_, enc::BGR8); Actually I followed the same tutorial.

2013-10-12 04:29:24 -0500 commented answer processing image outside callback

Thanks, I made the correction, but no solution yet.

2013-10-12 04:04:40 -0500 edited question processing image outside callback

Hello,

I am trying to process images outside the callback. It's a Harris detector using OpenCV, but I am running into a segmentation fault.

My code is as follows:

namespace enc = sensor_msgs::image_encodings;
class harris_node
{
protected:
    // image_transport::ImageTransport it_;
    image_transport::Subscriber sub_;
    image_transport::Publisher pub_;
    ros::NodeHandle nh_;
    ros::NodeHandle nhPrivate_;

    sensor_msgs::ImageConstPtr imageIn_;
    cv_bridge::CvImagePtr image_in;   // pointer to input image
    cv_bridge::CvImagePtr image_out;  // for ROS msg conversion
    cv::Mat cvImage_in;               // for processing, input at each step
    cv::Mat cvImage_out;              // for processing, output at each step
    int thresh;
    int max_thresh;
    int blockSize;
    int apertureSize;
    double k;

public:
    harris_node(ros::NodeHandle& nh, ros::NodeHandle& nhPrivate)
        : nh_(nh), nhPrivate_(nhPrivate), thresh(200), max_thresh(255),
          blockSize(2), apertureSize(3), k(0.04)
    {
        image_transport::ImageTransport it_(nh_);
        sub_ = it_.subscribe("front_cam/camera/image", 100, &harris_node::imageCb, this);
        pub_ = it_.advertise("front_cam/camera/image_processed", 1);
    }

 void imageCb(const sensor_msgs::ImageConstPtr& image)
 {  

    try                         
    {
        imageIn_ = image;        
    }
    catch (cv_bridge::Exception& e)
    {
        ROS_ERROR("Conversion error: %s", e.what());
        return;
    }
 }

 void harris_detector()
 {
    image_in = cv_bridge::toCvCopy(imageIn_, enc::BGR8);
    cvImage_in = image_in->image;       
    cvtColor(cvImage_in, cvImage_out, CV_8UC1); 
    cvImage_in = cvImage_out;
    cornerHarris(cvImage_in, cvImage_out, blockSize, apertureSize, k, cv::BORDER_DEFAULT);    //Harris detector
    cvImage_in = cvImage_out;
    normalize(cvImage_in, cvImage_out, 0, 255, cv::NORM_MINMAX, CV_32FC1, cv::Mat());       //normalize
    cvImage_in = cvImage_out;
    convertScaleAbs (cvImage_in, cvImage_out); //scaled


    //Drawing circles around corner

    for(int j=0; j<cvImage_out.rows ; j++)
    {
        for(int i = 0; i<cvImage_out.cols; i++)
        {
            if((int) cvImage_out.at<float>(j,i) > thresh)
            {
                circle(cvImage_out, cv::Point(j,i),5,cv::Scalar(0),2,8,0);
            }
        }

    }
 }


 void spin()
 {
    ros::Rate rate(30);
    while (ros::ok())
    {
        spinOnce();
        rate.sleep();
    }
 }

 virtual void spinOnce()
 {
    harris_detector();
    spin();
 }
 };

class harris_node_with_gui : public harris_node
{
public:
    harris_node_with_gui(ros::NodeHandle& nh, ros::NodeHandle& nhPrivate)
        : harris_node(nh, nhPrivate) {}

    void spinOnce()
    {
        harris_node::spinOnce();
        cv::imshow("inputImage", cvImage_in);
        cv::imshow("Corners", cvImage_out);
    }
};




int main(int argc, char** argv)
{
    ros::init(argc, argv, "image_processor");
    ros::NodeHandle nh;
    ros::NodeHandle nhPrivate("~");

    harris_node* node = new harris_node_with_gui(nh, nhPrivate);
    node->spinOnce();
}

gdb debugging shows: Program received signal SIGSEGV, Segmentation fault.

On backtrace I get:

#0  0xb7fdd424 in __kernel_vsyscall ()
#1  0xb74b81df in raise () from /lib/i386-linux-gnu/libc.so.6
#2  0xb74bb825 in abort () from /lib/i386-linux-gnu/libc.so.6
#3  0xb74b1085 in ?? () from /lib/i386-linux-gnu/libc.so.6
#4  0xb74b1137 in __assert_fail () from /lib/i386-linux-gnu/libc.so.6
#5  0xb7f74555 in    cv_bridge::toCvCopy(boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&, std::string const&) ()
 from /opt/ros/hydro/lib/libcv_bridge.so
#6  0x080532bb in harris_node::harris_detector() ()
#7  0x0805372f in harris_node::spinOnce() ()
#8  0x08053778 in harris_node_with_gui::spinOnce() ()
#9  0x080536e1 in harris_node::spin() ()
#10 0x08050f2a in main ()

Any help is appreciated..


Edited

After I swap i and j, the backtrace output is still the same, but I get another error:

image_processing: /usr/include/boost/smart_ptr/shared_ptr.hpp:412:    boost::shared_ptr<T>::reference boost::shared_ptr<T>::operator*() const [with T = const    sensor_msgs::Image_<std::allocator<void> >, boost::shared_ptr<T>::reference = const   sensor_msgs::Image_<std::allocator<void> >&]: Assertion `px != 0' failed.

So I think I have initialized the pointers in the class to NULL value and this has ... (more)