
[Solved] Find all the cards that are on top and their centers

asked 2015-01-18 07:25:10 -0600

gfx

updated 2016-10-27 03:51:12 -0600

Here is my situation. This is only an example: I do not work with static images, I work with camera captures, but the strategy used to get the result is very similar.

Input:

(input image: playing cards on a table)

I want to find all the cards that are on top and their centers. At present, to increase the speed of calculation, I first reduce the image to 16 colors and then filter only the red color zones.

  1. Transform the image into a single-channel black/white image.
  2. Find all contours and their center points (saved in a vector), keeping only contours with an area as large as the rectangles (see the brown circles in the second image).
  3. Iterate over the vector of center points and compute the distance between all found points.
  4. If the distances satisfy the construction of a rectangle, the points are good (green lines in the second image); otherwise the line is not saved (yellow lines). The rectangles found are stored in a vector.
  5. Iterate over the vector of rectangles and find the center point of each found rectangle.

To explain my current strategy:

(diagram of the strategy: brown circles mark contour centers, green lines mark accepted rectangle sides, yellow lines mark rejected ones)
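A rough sketch of how steps 1-3 above might look in OpenCV; the file name, threshold and area bounds are placeholders, not values from this post:

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

int main()
{
    // step 1: single-channel black/white image
    cv::Mat src = cv::imread("cards.png");                // placeholder file name
    if (src.empty()) return -1;
    cv::Mat gray, bw;
    cv::cvtColor(src, gray, CV_BGR2GRAY);
    cv::threshold(gray, bw, 128, 255, CV_THRESH_BINARY);  // placeholder threshold

    // step 2: contours and their center points, filtered by area
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(bw, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    std::vector<cv::Point2f> centers;
    for (size_t i = 0; i < contours.size(); i++)
    {
        double area = cv::contourArea(contours[i]);
        if (area < 100 || area > 1000) continue;           // placeholder area bounds
        cv::Moments m = cv::moments(contours[i]);
        if (m.m00 == 0) continue;
        centers.push_back(cv::Point2f((float)(m.m10 / m.m00), (float)(m.m01 / m.m00)));
    }

    // step 3: pairwise distances between all found center points
    for (size_t i = 0; i < centers.size(); i++)
        for (size_t j = i + 1; j < centers.size(); j++)
        {
            double dx = centers[i].x - centers[j].x;
            double dy = centers[i].y - centers[j].y;
            double d  = std::sqrt(dx * dx + dy * dy);
            // step 4 would check here whether d fits a side or diagonal of a card rectangle
        }

    return 0;
}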

This strategy works, but it often produces false positives, especially if there are a lot of cards. In fact, green rectangles can also be found between two different cards, which is not good.

Can someone with more experience suggest a more precise strategy, one that also uses few resources? In the examples I showed static images, but the program should run in real time on images captured by the camera. I have already tried ORB, for example, but I have not been able to get accurate and fast results with multiple items (I did not know how to use ORB precisely).

Thanks a lot for all suggestions.


Comments


When you say "all the cards that are on top", do you mean every card that is neither fully nor partially occluded (so, in the example picture, 3 cards in total), or all the cards in total?

theodore ( 2015-01-19 07:50:03 -0600 )

Excuse the long delay; I could not answer before today. Only 3 cards in total.

gfx ( 2015-01-21 03:08:09 -0600 )

3 answers


answered 2015-01-19 17:05:12 -0600

theodore

Well gfx, I had some free time and worked a bit on your problem. I assumed that you wanted to detect all the individual cards on the table, whether fully/partially occluded or not. Here are my results:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Load your image
    cv::Mat src = cv::imread("cards1.png");

    // Check if everything was fine
    if (!src.data)
        return -1;

    // Show source image
    cv::imshow("Source Image", src);

(source image)

// Change the background from white to black, since that will help later to extract
// better results during the use of Distance Transform
for( int x = 0; x < src.rows; x++ ) {
  for( int y = 0; y < src.cols; y++ ) {
      if ( src.at<cv::Vec3b>(x, y) == cv::Vec3b(255,255,255) ) {
        src.at<cv::Vec3b>(x, y)[0] = 0;
        src.at<cv::Vec3b>(x, y)[1] = 0;
        src.at<cv::Vec3b>(x, y)[2] = 0;
      }
    }
}

// Show output image
cv::imshow("Black Background Image", src);

(black background image)
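As a side note, the same white-to-black replacement could presumably also be written without an explicit pixel loop, for example:

cv::Mat whiteMask;
cv::inRange(src, cv::Scalar(255, 255, 255), cv::Scalar(255, 255, 255), whiteMask); // mask of pure-white pixels
src.setTo(cv::Scalar(0, 0, 0), whiteMask);                                          // turn them black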

// Create a kernel that we will use to sharpen our image
cv::Mat kernel = (cv::Mat_<float>(3,3) <<
        1,  1, 1,
        1, -8, 1,
        1,  1, 1); // an approximation of second derivative, quite strong

// do the laplacian filtering as it is
// well, we need to convert everything to something deeper than CV_8U
// because the kernel has some negative values,
// and we can expect in general to have a Laplacian image with negative values
// BUT an 8-bit unsigned int (the one we are working with) can only hold values from 0 to 255
// so the possible negative numbers would be truncated
cv::Mat imgLaplacian;
cv::Mat sharp = src;
cv::filter2D(sharp, imgLaplacian, CV_32F, kernel);
src.convertTo(sharp, CV_32F);
cv::Mat imgResult = sharp - imgLaplacian;

// convert back to 8bits gray scale
imgResult.convertTo(imgResult, CV_8UC3);
imgLaplacian.convertTo(imgLaplacian, CV_8UC3);

// imshow( "laplacian", imgLaplacian );
imshow( "New Sharped Image", imgResult );

(sharpened image)

// Create binary image from source image
cv::Mat bw;
src = imgResult;
cv::cvtColor(src, bw, CV_BGR2GRAY);
cv::threshold(bw, bw, 40, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
cv::imshow("Binary Image", bw);

(binary image)

// Perform the distance transform algorithm
cv::Mat dist;
cv::distanceTransform(bw, dist, CV_DIST_L2, 3);

// Normalize the distance image for range = {0.0, 1.0}
// so we can visualize and threshold it
cv::normalize(dist, dist, 0, 1., cv::NORM_MINMAX);
cv::imshow("Distance Transform Image", dist);

(distance transform image)

// Threshold to obtain the peaks
// This will be the markers for the foreground objects
cv::threshold(dist, dist, .4, 1., CV_THRESH_BINARY);

// Dilate a bit
cv::Mat kernel1 = cv::Mat::ones(3, 3, CV_8UC1);
cv::dilate(dist, dist, kernel1);
cv::imshow("Peaks", dist);

(peaks image)

// Create the CV_8U version of the distance image
// It is needed for cv::findContours()
cv::Mat dist_8u;
dist.convertTo(dist_8u, CV_8U);

// Find total markers
std::vector<std::vector<cv::Point> > contours;
cv::findContours(dist_8u, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
int ncomp = contours.size();

// Create the marker image for the watershed algorithm
cv::Mat markers = cv::Mat::zeros(dist.size(), CV_32SC1);

// Draw the foreground markers
for (int i = 0; i < ncomp; i++)
    cv::drawContours(markers, contours, i, cv::Scalar::all(i+1), -1 ...
(more)
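The post is cut off here by the forum. A rough sketch of how the remaining steps could look, following OpenCV's standard marker-based watershed workflow (background marker, cv::watershed, then colouring the labelled regions); this is not necessarily the original continuation:

// (the drawContours loop above fills each foreground marker with label i+1)

// Draw the background marker
cv::circle(markers, cv::Point(5, 5), 3, cv::Scalar::all(255), -1);

// Perform the watershed algorithm
cv::watershed(src, markers);

// Generate a random colour for each found component
std::vector<cv::Vec3b> colors;
for (int i = 0; i < ncomp; i++)
{
    int b = cv::theRNG().uniform(0, 255);
    int g = cv::theRNG().uniform(0, 255);
    int r = cv::theRNG().uniform(0, 255);
    colors.push_back(cv::Vec3b((uchar)b, (uchar)g, (uchar)r));
}

// Fill each labelled region with its colour
cv::Mat dst = cv::Mat::zeros(markers.size(), CV_8UC3);
for (int i = 0; i < markers.rows; i++)
    for (int j = 0; j < markers.cols; j++)
    {
        int index = markers.at<int>(i, j);
        if (index > 0 && index <= ncomp)
            dst.at<cv::Vec3b>(i, j) = colors[index - 1];
    }

cv::imshow("Final Result", dst);
cv::waitKey(0);
return 0;
}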

Comments


+1 Seriously one of the best answers I have seen passing by in the last few weeks! Could you maybe pour this one into a tutorial and add it to the tutorial section of the documentation?

StevenPuttemans ( 2015-01-20 02:45:12 -0600 )

@StevenPuttemans thanks for the good words. I would be glad to transform this one into a tutorial. However, I think I will need some assistance with that, since I have not done it before. I found this guide about writing tutorials; is it the one I should follow? If yes, then I have some questions about it. How can I contact someone in order to clear up these questions? Thanks.

theodore ( 2015-01-20 06:33:21 -0600 )

@theodore, yeah that is the tutorial to follow. If you get stuck somewhere or have questions, you can always contact me; I will be glad to help out where I can. You can put them here or open a new question for it.

StevenPuttemans ( 2015-01-20 06:41:56 -0600 )

perfect!!!

theodore ( 2015-01-20 07:09:46 -0600 )

I am moved by the completeness of the response. Your strategy (I have not yet tried it) seems "lighter" than the one I used myself, and probably more accurate; it is certainly more professional.

I respect you.

gfx ( 2015-01-21 02:49:38 -0600 )

answered 2015-01-22 08:41:44 -0600

gfx

updated 2016-10-27 03:47:59 -0600

OK, OK... I am adding a comment to a nice post. I have an idea:

  1. HoughLines (link to a tutorial)
  2. findContours with contourArea as a filter
  3. The result becomes a mask
  4. Then see the code below

absdiff(previousframe, nextframe, n);
absdiff(currentframe, nextframe, l);  // second difference stored in l, so the AND below has two inputs
bitwise_and(n, l, result);

  5. matchShapes or the contour hierarchy to find the orientation and angle of the cards (this assumes that the cards are, for example, sugar packets).
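A minimal matchShapes sketch for step 5; refContour and cardContour are hypothetical names for contours extracted elsewhere with findContours, and the tolerance is arbitrary:

// assumes: using namespace cv; using namespace std;
vector<Point> refContour, cardContour;   // reference card shape and candidate contour
double score = matchShapes(refContour, cardContour, CV_CONTOURS_MATCH_I1, 0);
if (score < 0.1)                         // lower score = more similar shapes; threshold for illustration
{
    // a rotated bounding box then gives the card's orientation/angle
    RotatedRect box = minAreaRect(cardContour);
    double angle = box.angle;
}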

What do you think?

I am resuming the request on this old post of mine (because only one reply is possible for each one, and because I recently addressed the issue again). The best answer for me now is this strategy (in OpenCV 3 it performs better than in OpenCV 2.4.x):

cvtColor(dest_image, imgHSV, CV_RGB2HSV); // convert to the HSV color space
inRange(imgHSV, Scalar(a,b,c), Scalar(a1,b1,c1), imgThresh);
equalizeHist(imgThresh, imgThresh);
imgThresh.copyTo(imgThresh1);
if(imgThresh1.empty()) break;
blur(imgThresh1, imgThresh1, Size( d2, d3 ), Point(-1,-1));

Next step: find the contours, and the centroid and area of each red rhomboid figure of the playing cards. Use blur/erode/dilate so that the numbers become invisible, and also filter by the size of the contours found.
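A rough sketch of this step, assuming imgThresh1 from the snippet above and placeholder area bounds:

// assumes: using namespace cv; using namespace std;
vector<vector<Point> > contours;
findContours(imgThresh1.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

double minArea = 50, maxArea = 500;                    // placeholder bounds, to be tuned
vector<Point2f> centroids;
for (size_t i = 0; i < contours.size(); i++)
{
    double area = contourArea(contours[i]);
    if (area < minArea || area > maxArea) continue;    // filter by contour size
    Moments m = moments(contours[i]);
    if (m.m00 == 0) continue;
    centroids.push_back(Point2f((float)(m.m10 / m.m00), (float)(m.m01 / m.m00)));
}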

Next step: create a class or function that calculates whether a centroid is part of a rectangle (the green rectangles of my previous drawing) and store it in a vector of 5-point rectangles: 4 points for the rectangle corners plus 1 point for the center of the rectangle.
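One possible way to represent this; the struct and function names are just illustrations:

// assumes: using namespace cv; using namespace std;
struct Rect5                     // hypothetical "5-point rectangle": 4 corners + center
{
    Point2f corners[4];
    Point2f center;
};

// true if a centroid lies inside (or on the edge of) the rectangle
bool centroidInRect(const Rect5& r, const Point2f& c)
{
    vector<Point2f> poly(r.corners, r.corners + 4);
    return pointPolygonTest(poly, c, false) >= 0;
}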

Next step: using LineSegmentDetector::detect, find all card contours (the second use of contours) and store only the complete contours (centroid and area) in a vector.

Next step: compare, vector to vector, the centroids of the 5-point rectangles (the last point) and the centroids of the card contours found, with some tolerance.
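A rough sketch of the matching step; rects (the 5-point rectangles from above) and cardCentroids (centroids of the complete card contours) are hypothetical names, and the tolerance is arbitrary:

// assumes: using namespace cv; using namespace std; vector<Rect5> rects; vector<Point2f> cardCentroids;
float tol = 10.0f;                                       // pixel tolerance, to be tuned
for (size_t i = 0; i < rects.size(); i++)
    for (size_t j = 0; j < cardCentroids.size(); j++)
    {
        float dx = rects[i].center.x - cardCentroids[j].x;
        float dy = rects[i].center.y - cardCentroids[j].y;
        if (std::sqrt(dx * dx + dy * dy) < tol)
        {
            // rects[i] corresponds to a fully visible card
        }
    }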

In this way you can find all the complete cards, i.e. the ones seen on top of all the others.

I think a program with only 4 loops is enough to do this.

Regards Giorgio


Comments

gfx, if that works better for you then go that way. You know better what the requirements of your project are. My goal was to give you an idea and some motivation, and from your replies I can see that I managed to do that :).

theodore ( 2015-01-22 15:47:36 -0600 )

answered 2015-01-21 02:56:18 -0600

gfx

updated 2015-01-21 03:06:03 -0600

I thank you again, but I want you to notice something; look at the image below.

(image: the three correctly detected cards marked in red; a card likely to be detected as a false positive marked in yellow)

As you can see, in red I marked the three cards you correctly found; in yellow, a card that will probably also be detected, since its contour area will be only a few units smaller than the others.

The differences are too small to be decisive. I have the same problem with my code. Your code, however, is better and gives at least 5 times fewer false positives than mine.


Comments

Today I tested your work; unfortunately the result is very similar to what I get with my method. There is still a good amount of false positives.

gfx ( 2015-01-21 12:07:14 -0600 )

The way that the cards occlude each other is quite tough. For that reason, you cannot avoid having false positives; the point is to eliminate them as much as possible. Furthermore, if you noticed, there are 15 cards on the table but with the method I propose you detect 14. One way you could eliminate the false positives, and might be able to detect all the cards, would be to combine different approaches and then merge the outputs from each of them. However, this might be time/resource consuming since it will increase the processing time. Furthermore, in order to achieve the best possible result, you will need to perform thorough validation tests, which will help you to understand and see the different cases where your algorithm fails and then think about how to improve it.

theodore ( 2015-01-21 14:37:07 -0600 )

OK, now I noticed that you want to detect only the cards on top (i.e. only 3). If you want to eliminate the detected blobs so that only the "good" cards remain, what you can do is increase the threshold value when you are thresholding the distance image. In the example I am using .4; if you increase it to .8 or .9, you can then try to discard the blobs with a big area (apply findContours on the final image).

Plus, since you are not interested in the other cards, you can drop the sharpening and morphological operation procedures. Something else: I do not know if the dimensions of the cards across the images are going to be stable. But if they are, then I think it is a feature that you can use in combination with what I am proposing in the first paragraph for ...(more)

theodore ( 2015-01-21 15:08:15 -0600 )

Thanks a lot. I think like you; unfortunately the images are not stable enough to rely only on findContours (area, Hu moments, etc.). I would need a good ring light to stabilize the images.

To make up for the bad lighting conditions, I thought of using a mix of your latest proposal and matchShapes. I have never used it, and I do not know how quickly it can find the shape that I am going to define.

If you have other ideas... well... (no need to write more code here, you've done a great job).

gfx ( 2015-01-22 08:16:42 -0600 )
