harris detector in callback

asked 2013-10-14 00:09:57 -0500 by espee

updated 2013-10-14 00:57:17 -0500

I have been trying to implement a Harris corner detector, but the results are pretty skewed. There is always some symmetry in the detected corners, as shown in the image below, which definitely should not be there. (The right image is the grayscaled image.) Why does this happen? It looks as if three different channels were joined together.

[images: harris detection on terrain]

My callback portion looks like:

void imageCb(const sensor_msgs::ImageConstPtr& image)
{
    cv_bridge::CvImagePtr cv_ptr;

    try                         
    {
        cv_ptr = cv_bridge::toCvCopy(image, enc::BGR8); 

    }
    catch (cv_bridge::Exception& e)
    {
        ROS_ERROR("Conversion error: %s", e.what());
        return;
    }

    cv::Mat image_gray, image_corner, image_norm, image_scaled;

    cv::cvtColor(cv_ptr->image, image_gray, CV_BGR2GRAY); 
    cv::cornerHarris(image_gray, image_corner, blockSize, apertureSize, k, cv::BORDER_DEFAULT);

    cv::normalize(image_corner, image_norm, 0, 255, cv::NORM_MINMAX, CV_8UC1, cv::Mat());
    cv::convertScaleAbs(image_norm, image_scaled); //scaled

    //Drawing circles around corners

    for(int j=0; j<cv_ptr->image.rows ; j++) //norm
    {
        for(int i = 0; i<cv_ptr->image.cols; i++)
        {
            if((int) image_scaled.at<float>(j,i) > thresh)
            {
                cv::circle(cv_ptr->image, cv::Point(i, j),2,CV_RGB(255,0,0)); //scaled
            }
        }
    }

    pub_.publish(cv_ptr->toImageMsg());     

}

I think it is basically due to the RGB channels, but there are four similar clusters of corners, which confuses me. Most of the code is taken directly from the OpenCV Harris detector example. Any help is deeply appreciated. Thanks.


1 Answer


answered 2013-10-14 01:47:13 -0500

You use CV_8UC1 in normalize(). Because of this, image_norm is CV_8UC1, and image_scaled is therefore CV_8UC1 as well. But you access it with image_scaled.at<float>!

A good practice is to use the wrapper cv::Mat_<uchar> for image_scaled, because:

  1. Element access is shorter (image_scaled(j, i)).
  2. The type can be checked at compile time.
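
For reference, here is a minimal sketch of that fix, reusing the question's variable names (image_scaled, thresh, cv_ptr). Since normalize() is asked to produce CV_8UC1 data, the elements have to be read as uchar (a Mat_<uchar> version is sketched in the comments below):

// Corrected threshold loop: image_scaled is 8-bit single-channel,
// so its elements must be read as uchar, not float.
for (int j = 0; j < image_scaled.rows; j++)
{
    for (int i = 0; i < image_scaled.cols; i++)
    {
        if (image_scaled.at<uchar>(j, i) > thresh)
        {
            cv::circle(cv_ptr->image, cv::Point(i, j), 2, CV_RGB(255, 0, 0));
        }
    }
}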

Comments

Thanks. Actually I got it solved by changing <float> to <uchar>. Could you please write a snippet of code for the wrapper? I tried without success.

espee (2013-10-14 21:08:45 -0500)

http://docs.opencv.org/modules/core/doc/basic_structures.html#id7 this should help

MichaelKorn (2013-10-16 08:30:03 -0500)
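
For completeness, here is a short, illustrative sketch of the cv::Mat_<uchar> wrapper approach recommended in the answer and described on the linked page, again reusing the thread's variable names (image_norm, thresh, cv_ptr); it is not code from the original posts:

// Mat_<uchar> is a thin typed wrapper around cv::Mat: elements are accessed
// with operator() and the element type is fixed at compile time.
cv::Mat_<uchar> scaled;
cv::convertScaleAbs(image_norm, scaled); // convertScaleAbs always writes 8-bit data

for (int j = 0; j < scaled.rows; j++)
{
    for (int i = 0; i < scaled.cols; i++)
    {
        if (scaled(j, i) > thresh) // shorter than scaled.at<uchar>(j, i)
        {
            cv::circle(cv_ptr->image, cv::Point(i, j), 2, CV_RGB(255, 0, 0));
        }
    }
}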

Thanks for the link...

espee (2013-10-21 17:45:05 -0500)
