
SR's profile - activity

2018-12-13 05:28:05 -0600 received badge  Great Answer (source)
2017-10-17 13:47:54 -0600 received badge  Nice Answer (source)
2016-11-06 16:14:56 -0600 commented question pyrDown, pyrUp vs resize (blur operations)

For a fair comparison you should use cv::resize with the INTER_AREA interpolation mode, as that is the mode recommended for downsampling.

2016-10-24 07:06:58 -0600 received badge  Nice Answer (source)
2016-03-21 05:39:42 -0600 received badge  Good Answer (source)
2016-01-27 18:15:52 -0600 answered a question Clear path detection using edge detection

In general, if any kind of edge detection is desired, I would avoid blurring, as it only makes the edge detection harder. If filtering is needed, use a median or a bilateral filter, as these are edge-preserving.

2016-01-27 18:12:55 -0600 answered a question edge corners are not sharp with canny filter

I think this is a bug. Canny should not "modify" the edge; it should only find the ridge.

2015-12-10 11:38:29 -0600 received badge  Nice Answer (source)
2015-12-08 10:29:07 -0600 received badge  Nice Answer (source)
2015-09-29 16:10:22 -0600 commented answer How to initialize a FeatureDetector with OpenCV 3?

Yep, that was my stupid mistake. Thanks for the hint! But: Why does cv::Ptr<cv::ORB> allow implicit conversion to cv::ORB*? That behaviour is dangerous and misleading. The conversion should be made explicit via a get() function.

2015-09-28 13:08:19 -0600 asked a question How to initialize a FeatureDetector with OpenCV 3?

With OpenCV 3 I get a segfault when running the detect() function of either the ORB or the AKAZE detector, although the pointer is not null. I assume that something is not initialized, but my compiler is unable to find the previously required cv::initModule_features2d();. What do I need to do to initialize the feature detectors properly?

The code is roughly like the following:

   cv::FeatureDetector* detector = cv::ORB::create();
   CV_Assert( detector != NULL );
   vector<cv::KeyPoint> kpts1;
   detector->detect(img, kpts1, cv::Mat());

I tried both CV_8UC3 and CV_8UC1 images, and both ORB and AKAZE; it always segfaulted. Tested on OS X 10.10.

2015-07-20 07:25:51 -0600 received badge  Nice Answer (source)
2015-05-21 17:20:59 -0600 commented answer How does Flann match descriptors?
2015-05-19 01:03:24 -0600 commented answer How does Flann match descriptors?

The descriptors are vectors, but their directions are given in "feature space", not in 2D. Consider them as arrays of values describing image patches.

2015-05-18 16:04:01 -0600 answered a question How does Flann match descriptors?

FLANN allows matching descriptors by computing the approximate Euclidean distance between the descriptor vectors.

2015-05-11 13:34:23 -0600 answered a question re-order my code into a function

Before you can refer to a variable with global, it must exist. Therefore, assign a value to it right after the imports.
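A minimal sketch of what that looks like (the names are made up):

```python
# Module-level assignment right after the imports, so the name exists
# before any function refers to it with `global`.
counter = 0

def increment():
    global counter  # rebind the module-level name, not a local one
    counter += 1

increment()
increment()
print(counter)  # 2
```

Without the module-level assignment, reading the variable inside the function before any write would raise a NameError.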

2015-05-06 08:42:26 -0600 commented answer [C++] Efficient element-by-element pixel access?

Yes, that makes it explicit. The value is never rounded but truncated to [0, 255], though. I do not know if that's correct for your application.

2015-05-05 16:45:49 -0600 answered a question Viola Jones Algorithm:Detector Scaling

You should not scale the classification window; instead, do a multi-scale search on the input image. That is, you incrementally downsize the input image while keeping the classification-window size fixed.

2015-05-05 16:43:36 -0600 answered a question [C++] Efficient element-by-element pixel access?

You are mixing types. You have a float* centroidPtr but assign its dereferenced value to one element of destPix[j] in destPix[j][0] = *centroidPtr++;. However, the elements of destPix[j] are uchar, so the float values are truncated and meaningless.
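The effect is easy to see with numpy, which performs the same narrowing a C++ uchar assignment does for in-range values:

```python
import numpy as np

vals = np.array([3.9, -1.0, 300.0], dtype=np.float32)

# Plain cast: the fraction is simply dropped (and out-of-range values
# are not clamped), which is why the result looks meaningless.
truncated = vals.astype(np.uint8)

# What cv::saturate_cast<uchar> does instead: round, then clamp to [0, 255].
saturated = np.clip(np.rint(vals), 0, 255).astype(np.uint8)

print(truncated[0], saturated.tolist())  # 3 [4, 0, 255]
```

If the centroid values are meant to end up as pixel intensities, use an explicit saturating conversion rather than an implicit cast.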

2015-05-05 16:41:29 -0600 edited question [C++] Efficient element-by-element pixel access?

Hello,

I know this has been asked many times and I've gone over the docs here but cannot seem to find an answer to my specific issue. I am trying to map the results from k-means clustering back into an image with reduced colors.

My old code works and does this using the Mat.at accessor, but it is very slow. I am trying to re-write this using pointers to improve performance, but right now the clustered image has many diagonal lines and the colors/pixels are not in the right place.

This works but is slow

for( int y = 0; y < resized.rows; y++ ){
    for( int x = 0; x < resized.cols; x++ ){
        pixelPos = y + x*resized.rows;
        cluster_idx = labels.at<int>(pixelPos,0);
        clusteredImg.at<Vec3b>(y,x)[0] = centers.at<float>(cluster_idx, 0);
        clusteredImg.at<Vec3b>(y,x)[1] = centers.at<float>(cluster_idx, 1);
        clusteredImg.at<Vec3b>(y,x)[2] = centers.at<float>(cluster_idx, 2);
    }
}

EDIT: The code below is working. I had made a silly mistake: I forgot that labels is (M*N)x1 rather than MxNx1.

If there is any room for improvement please let me know!

for (int i = 0; i < resized.rows; ++i)
{
    cv::Vec3b* destPix = clusteredImg.ptr<cv::Vec3b>(i);
    for (int j = 0; j < resized.cols; ++j)
    {
        pixelPos = i + j*resized.rows;
        cluster_idx = *labels.ptr<int>(pixelPos,0);
        float * centroidPtr = centers.ptr<float>(cluster_idx);

        destPix[j][0] = *centroidPtr++;
        destPix[j][1] = *centroidPtr++;
        destPix[j][2] = *centroidPtr++;
    }
}

Sorry if this is a stupid question, I am very new to C++.

I'm sure I have something wrong with how the destination data is assigned. I've checked the source data by printing debug statements and that seems to be correct.

2015-05-05 16:36:50 -0600 commented answer Derivation of Epipolar Line

Thanks for your comment as it preserves some knowledge. The original slides are not available anymore...

2015-04-21 16:10:08 -0600 received badge  Enthusiast
2015-03-25 14:17:48 -0600 received badge  Nice Answer (source)
2015-03-25 04:36:19 -0600 received badge  Good Answer (source)
2014-12-30 07:06:07 -0600 commented question I need details of FAST Algorithm

That's true. Now it looks correct, but it did not show up in the first place.

2014-12-30 04:39:05 -0600 commented answer [ANNOUNCEMENT] Welcome back to the OpenCV Q&A forum 2.0!

When a question is edited, the editor becomes the new "original poster". The timestamp of the edit is also wrong.

2014-12-30 04:39:05 -0600 commented question I need details of FAST Algorithm

The forum software is wrong here: I did not ask this question, I only edited it.

2014-12-30 04:39:04 -0600 edited question I need details of FAST Algorithm

How can I find the operating logic, fundamentals, application areas, and MATLAB code of the FAST algorithm?

2014-12-30 04:39:04 -0600 answered a question I need details of FAST Algorithm
2014-12-09 13:47:45 -0600 marked best answer Is it safe to use cv::Ptr<> within STL containers?

AFAIK std::auto_ptr<> is not suitable for STL containers.

Is it safe to use cv::Ptr<> within STL containers e.g. as std::vector< cv::Ptr<MyObject> >?

2014-11-18 00:36:28 -0600 received badge  Nice Answer (source)
2014-11-09 06:55:28 -0600 answered a question Use allocated buffer for Mat Data Pointer

If you want to "fill the container with your frame", there is no way around allocating a matrix and copying your data into it. However, if you just want to wrap your data pointer in a cv::Mat object (provided that this data layout makes sense), you can simply call the wrapper constructor:

//! constructor for matrix headers pointing to user-allocated data
Mat(int rows, int cols, int type, void* data, size_t step=AUTO_STEP);

You can wrap your frame as simply as

Mat wrapped(MatrixRows, MatrixCols, MatrixType, MyDataBuffer);

Note that the destruction of wrapped does not free your buffer! You need to manage the underlying memory yourself.

If you then want to obtain a real copy it is probably the easiest to do

Mat copy = wrapped.clone();

See also http://docs.opencv.org/modules/core/doc/basic_structures.html#mat-mat
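For comparison, the same idea in Python, where a cv::Mat is a numpy array: np.frombuffer builds a view over user-owned memory without copying, and .copy() plays the role of clone():

```python
import numpy as np

buf = bytearray(2 * 3)  # "user-allocated" memory for a 2x3 single-channel image

# Header over existing memory -- no copy, like the Mat wrapper constructor.
wrapped = np.frombuffer(buf, dtype=np.uint8).reshape(2, 3)

buf[0] = 42
print(wrapped[0, 0])   # 42 -- the view shares the buffer

deep = wrapped.copy()  # like wrapped.clone() in the C++ answer
buf[1] = 7
print(deep[0, 1])      # 0 -- the clone has its own memory
```

As in the C++ case, the wrapping object does not own the buffer, so the buffer must stay alive as long as the view is used.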

2014-08-11 14:05:49 -0600 received badge  Nice Answer (source)
2014-07-31 08:26:32 -0600 received badge  Nice Answer (source)
2014-07-03 09:50:27 -0600 received badge  Good Answer (source)
2014-03-02 16:02:11 -0600 received badge  Nice Answer (source)
2014-03-01 17:40:55 -0600 received badge  Good Answer (source)
2014-02-25 06:10:54 -0600 commented answer How to change contrast of image?

You may simplify the inner part of the first loop to buf[i] = cv::saturate_cast<unsigned char>((((i - midBright) * contrast) / 256) + midBright + bright);

2014-02-10 11:46:20 -0600 edited answer How to change contrast of image?

A quick and dirty contrast enhancement via matrix expressions and a linear transformation:

 float alpha = ..., beta = ...;
 Mat image = imread( argv[1] );
 image += beta;
 image *= alpha;

For instance, to stretch the intensity range to [0, 255], set beta to -(minimum value) and alpha to 255 / (maximum value - minimum value).