Ask Your Question

mstankie's profile - activity

2020-09-24 02:55:48 -0600 received badge  Popular Question (source)
2017-02-13 03:36:27 -0600 commented answer how to scan lines vertically?

I do not think y+=10 can be the reason vertical scanning is not working in your case. Try:

cv::Mat frame_gray_rot;
cv::transpose(frame_gray, frame_gray_rot);
ScanImageAndDoYourStuff(frame_gray_rot);

Does that work?

2017-02-10 08:23:28 -0600 commented answer how to scan lines vertically?

It's actually not a typo. It's just to match the example code in the question: for( int y=0; y< frame_gray.rows; y+=10) {...}

2017-02-10 04:35:19 -0600 answered a question how to scan lines vertically?

The 'efficient' way of scanning an image is to iterate over rows, as described here:

#include <opencv2/core.hpp>
#include <iostream>
using namespace cv;
using namespace std;

void ScanImageAndDoYourStuff(Mat& I)
{
    // accept only 1-channel char type matrices
    CV_Assert(I.type() == CV_8UC1);

    int nRows = I.rows;
    int nCols = I.cols;

    int i,j;
    uchar* p;
    for( i = 0; i < nRows; i+=10)
    {
        p = I.ptr<uchar>(i);
        for ( j = 0; j < nCols; ++j)
        {
            // do your stuff, e.g.
            if (p[j] == 0) cout << "binary 0" << endl;
            else cout << "binary 1" << endl;
            // p[j] is now the same as I.at<uchar>(i,j), but sequential access is much faster
        }
    }
}

If you need to scan columns, just transpose your image with cv::transpose (an efficient way of rotating in your case) and then your columns will become rows. Does that make sense?

PS. As described in the document mentioned above, using cv::Mat::at<T>(u,v) is not efficient, especially for scanning the whole image.
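The transpose trick can be sketched in plain Python (a stand-in for cv::transpose, just to illustrate the idea; the nested lists play the role of a cv::Mat):

```python
# Pure-Python stand-in for cv::transpose: after transposing,
# scanning the rows of the result visits the columns of the
# original image.
def transpose(img):
    return [list(row) for row in zip(*img)]

img = [
    [1, 2, 3],
    [4, 5, 6],
]

rot = transpose(img)
# each row of `rot` is a column of `img`, so a row scan of `rot`
# is a column scan of `img`
```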

2017-02-10 04:20:57 -0600 commented question Finding nearest non-zero pixel

Okay, this will show me how far I am from the closest non-zero point, and the local gradient will let me estimate the direction of the closest point, right? Seems like a good idea...

2017-02-07 07:56:46 -0600 asked a question Finding nearest non-zero pixel

I've got a binary image noObjectMask (CV_8UC1) and a given point objectCenter (cv::Point). If objectCenter is a zero-value pixel, I need to find the nearest non-zero pixel starting from the given point.

The number of non-zero points in the whole image can be large (even up to 50%), so calculating distances for each point returned from cv::findNonZero seems suboptimal. Since the pixel is most likely in the close neighbourhood, I currently use:

# my prototype script in Python, but the final version will be implemented in C++
if noObjectMask[objectCenter[1], objectCenter[0]] == 0:
    # if objectCenter is a zero-value pixel, extract its neighbourhood ROIs
    # of increasing size (r) until the ROI contains at least one non-zero pixel
    for r in range(noObjectMask.shape[1] // 2):
        rectT = objectCenter[1]-r-1
        rectB = objectCenter[1]+r
        rectL = objectCenter[0]-r-1
        rectR = objectCenter[0]+r
        # Pythonic way of extracting a ROI: noObjectMask(cv::Rect(...))
        rect = noObjectMask[rectT:rectB, rectL:rectR]
        if cv2.countNonZero(rect) > 0: break
    nonZeroNeighbours = cv2.findNonZero(rect)
    # calculating the distances between objectCenter and each of nonZeroNeighbours
    # (note their coordinates are relative to rect) and choosing the closest one

This works okay, as in my images the non-zero pixels are typically in the closest neighbourhood (r<=10px), but the processing time increases dramatically with the distance to the closest pixel. Each call to countNonZero recounts the previously checked pixels. This could be improved by incrementing the radius r by more than one, but it still looks a bit clumsy to me.

How can I improve the procedure? Any ideas? Thanks!
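For reference, the expanding search described above can be arranged so that each radius checks only the newly added ring of pixels, avoiding the recounting. A hypothetical pure-Python sketch (the mask is a list of lists, not a cv::Mat; names are illustrative only; note the ring at Chebyshev radius r may not hold the exact Euclidean-nearest pixel, so checking one extra ring would make it exact):

```python
# Expand a square ring outwards from (cx, cy); at each radius r,
# look only at pixels whose Chebyshev distance equals r, so no
# pixel is ever examined twice.
def nearest_nonzero(mask, cx, cy):
    h, w = len(mask), len(mask[0])
    if mask[cy][cx]:
        return (cx, cy)
    for r in range(1, max(h, w)):
        best = None
        for y in range(cy - r, cy + r + 1):
            for x in range(cx - r, cx + r + 1):
                # skip interior pixels: they were checked at a smaller r
                if max(abs(x - cx), abs(y - cy)) != r:
                    continue
                if 0 <= x < w and 0 <= y < h and mask[y][x]:
                    d = (x - cx) ** 2 + (y - cy) ** 2
                    if best is None or d < best[0]:
                        best = (d, x, y)
        if best:
            return (best[1], best[2])
    return None
```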

2017-02-04 13:17:33 -0600 received badge  Enthusiast
2017-02-02 07:14:16 -0600 asked a question Fastest way of finding max/min in each row

I need to find the maximum intensity value in each row of a 640x480 32FC1 image. Is there a faster way than iterating over each row separately using parallel_for (cv::ParallelLoopBody)? Can I use the GPU (via UMat) for that?
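For what it's worth, OpenCV's cv::reduce with REDUCE_MAX and dim=1 collapses each row to a single value in one call, which may be worth benchmarking against a hand-rolled loop. A pure-Python stand-in for the per-row maximum (illustrative only, not OpenCV code):

```python
# Per-row maximum: the pure-Python equivalent of
# cv::reduce(src, dst, 1, REDUCE_MAX).
def row_max(img):
    return [max(row) for row in img]

img = [
    [0.1, 0.9, 0.4],
    [0.7, 0.2, 0.3],
]
```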

2017-01-24 06:14:30 -0600 commented answer Recognize and track "closed line" objects

I think you can solve the problem shown in the blog post with my approach, and I think this will be the fastest solution. Once you've got the result of the flood fill that covers the whole exterior of the coin, try to subtract the new (filled) image from the original one. I imagine the result will be just the external negative circle. All the coin shapes (its edge, the face and letters) will be removed by the subtraction. You can then negate the binary image to get the true (white) mask over the coin area. Does this make sense to you?
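A toy illustration of the subtract-then-negate idea on tiny binary masks (pure Python; here I assume the flood fill paints the exterior white, so the difference of the two masks isolates the exterior; the 5x5 values are made up):

```python
# 1 = white, 0 = black; `original` is a thresholded coin edge,
# `filled` is the same mask with the exterior flood-filled white.
original = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
filled = [
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
    [1, 1, 0, 1, 1],
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
]
# subtraction removes the coin shapes, leaving only the exterior
exterior = [[f - o for f, o in zip(fr, orow)]
            for fr, orow in zip(filled, original)]
# negating the exterior yields a white mask over the coin area
coin_mask = [[1 - v for v in row] for row in exterior]
```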

2017-01-18 13:31:29 -0600 commented answer Recognize and track "closed line" objects

Is the comment related to my question or to Pedro's?

2017-01-18 09:56:37 -0600 received badge  Teacher (source)
2017-01-18 05:08:08 -0600 answered a question Recognize and track "closed line" objects

The function cv::findContours (if called with the cv2.RETR_TREE parameter) returns a so-called hierarchy, which lets you find out whether a contour has inner contours (children).

Assuming there are no small internal holes left inside the lines after thresholding (these can be removed using some basic morphology), none of the objects on the right-hand side of your image has any internal object. The one on the left has an internal object, which is the inside of the line (and another one, which is the small shape in the centre). It'll be more visible if you 'invert' the colours in cv::threshold().

Learn more about the hierarchy here: http://docs.opencv.org/trunk/d9/d8b/t...

I think that using this feature you will be able to find the closed circles.
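The child check can be sketched like this (the hierarchy layout is the one cv2.findContours returns with cv2.RETR_TREE: each entry is [next, previous, first_child, parent], and first_child == -1 means the contour has no inner contour; the values below are made up for illustration, and the real return value is a nested numpy array rather than a plain list):

```python
# Simplified hierarchy: one [next, previous, first_child, parent]
# row per contour, as produced by cv2.RETR_TREE.
hierarchy = [
    [1, -1, 2, -1],   # contour 0: has a child (index 2) -> closed line
    [-1, 0, -1, -1],  # contour 1: no child -> open shape
    [-1, -1, -1, 0],  # contour 2: the inner contour of contour 0
]
# contours with at least one child are the closed ones
closed = [i for i, (nxt, prv, child, parent) in enumerate(hierarchy)
          if child != -1]
```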

2017-01-17 09:02:29 -0600 received badge  Editor (source)
2017-01-17 09:01:49 -0600 answered a question Closing contours with approxPolyDP or convexHull

Maybe I do not fully understand the problem, but using contours in this case seems like overkill to me. If it's always the last column, and there is always only one segment missing, then maybe:

  • rotate the image 90deg counterclockwise with cv::transpose() (in order to get a pointer to the last two columns and add the column faster)
  • add the missing column with cv::Mat::push_back() (append a new single-row image to your image)
  • go along the (n-1)th column (which after rotation becomes the second row) and
    • starting from the first non-white pixel in the (n-1)th column
      • set all n-th column pixels to red
    • until you reach the next non-white pixel in the (n-1)th column.

For more information on how to iterate over the image see: http://docs.opencv.org/2.4/doc/tutori...

If there is more than one segment missing, this will still be doable.
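The row-append-and-fill part of the recipe can be sketched in pure Python (lists stand in for cv::Mat rows, 0 = white, 1 = drawn; the values are illustrative only):

```python
# One existing row of boundary pixels; after the rotation described
# above, the column to be repaired becomes a row.
img = [
    [0, 1, 1, 1, 0],
]
# cv::Mat::push_back equivalent: append a new, all-white row
img.append([0] * len(img[0]))

# fill the new row between the first and last non-white pixel above it
marks = [x for x, v in enumerate(img[0]) if v != 0]
if marks:
    for x in range(marks[0], marks[-1] + 1):
        img[1][x] = 1   # 'set to red' in the original recipe
```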

2017-01-17 04:57:34 -0600 received badge  Scholar (source)
2017-01-17 04:57:31 -0600 commented answer LineIterator for non-8U images

Okay, that means the class does not provide any pixel-access optimization. I just wanted to make sure, as the constructor has a third argument (connectivity) and there is the ptr member, which suggested to me that the class has its own pixel access methods. Thanks!

2017-01-17 04:51:46 -0600 received badge  Supporter (source)
2017-01-16 11:09:32 -0600 asked a question LineIterator for non-8U images

Hi there!

I have a CV_32FC2 image and need to iterate over a line on that image (C++). The LineIterator constructor does not report any issue, but from the pointer I can only get Vec2b, not Vec2f as I need.

  • Is it possible to use the class for non-8U images?
  • Does the class do more than iterate over _coordinates_? Is it somehow optimized for pixel value access?
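For context, iterating over a line's coordinates is essentially Bresenham's algorithm, and once you have the coordinates the pixel type no longer matters, because you index the image yourself. A minimal pure-Python sketch of the coordinate walk (illustrative only, not the LineIterator implementation):

```python
# Bresenham-style walk from (x0, y0) to (x1, y1), returning the
# integer coordinates visited; pixel values would then be read
# by indexing the image at each (x, y).
def line_points(x0, y0, x1, y1):
    pts = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        pts.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return pts
```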
2016-02-19 16:29:55 -0600 received badge  Necromancer (source)
2014-11-12 06:32:24 -0600 answered a question How do I load an OpenCV generated yaml file in python?

It's probably too late for you to use my answer, but for future reference... For me, this works:

import cv2
import numpy

yaml_data = numpy.asarray(cv2.cv.Load("my_file.yaml"))

my_file.yaml is generated by some OpenCV application written in C++, and contains a cv::Mat 2D matrix.