Lory's profile - activity

2016-02-24 12:42:01 -0600 asked a question Where to find many uniform backgrounds to train a classifier?

I need a classifier to detect and recognize coins, but all the tutorials I have read up to now suggest this archive as negative/background images: "http://tutorial-haartraining.googlecode.com/svn/trunk/data/negatives/"... Anyway, the classifier I got does not detect coins when they lie on tables or, in general, on surfaces...

So, I would need an archive of tables and surfaces to train a new classifier. Do you know of any?

2016-02-04 09:07:15 -0600 commented question Is OpenCV supposed to train on positives or samples images?

@Eduardo: may I write to you by email? Only if it doesn't bother you...

2016-02-04 08:02:36 -0600 commented question Why does OpenCV recognize the object only in training images?

@StevenPuttemans: forgive me, I did not notice you were the author of the book...I'm sorry

2016-02-04 07:52:16 -0600 commented question Why does OpenCV recognize the object only in training images?

@StevenPuttemans: I've been trying to do it for days... I would like to ask you a favour, even if I may sound stupid: could you write out exactly what I must do (step by step, with the commands)? Because there's always some mistake I make...

2016-02-04 06:01:37 -0600 commented question Why does OpenCV recognize the object only in training images?

@StevenPuttemans thank you for your kind and useful reply. Anyway, I've just discovered that it won't solve the problem: I've just tried to detect an upright coin (not rotated like the one in my question) and it's not detected... so the problem is upstream

2016-02-04 05:05:46 -0600 commented question Is OpenCV supposed to train on positives or samples images?

Yes, yesterday night I launched

opencv_createsamples -vec i.vec -w 48 -h 48 -num 210 -img ./positives/i.jpg -maxidev 100 -maxxangle 0 -maxyangle 0 -maxzangle 0.9 -bgcolor 0 -bgthresh 0 (for i from 0 to 60)

So, as you can see, I did not generate any warped images, and I launched opencv_traincascade in LBP mode. But this does not work either... nothing is detected
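Spelled out as a shell loop (a sketch only; the 0.jpg...60.jpg and 0.vec...60.vec naming is my reading of the "(for i from 0 to 60)" note), the command I launched looks like this, echoed as a dry run:

```shell
# Dry-run expansion of "(for i from 0 to 60)"; drop the `echo` to really run it.
# Assumes the positives are named 0.jpg ... 60.jpg, as in the command above.
for i in $(seq 0 60); do
  echo opencv_createsamples -vec $i.vec -w 48 -h 48 -num 210 \
    -img ./positives/$i.jpg -maxidev 100 -maxxangle 0 -maxyangle 0 \
    -maxzangle 0.9 -bgcolor 0 -bgthresh 0
done
```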

2016-02-04 04:05:44 -0600 commented question why may detectMultiScale() give too many points out of the interested object?

@StevenPuttemans I don't know exactly what you mean, but it's impossible to collect real test images for every background condition in which a classifier will have to work... people could lay the coin anywhere...

2016-02-04 03:35:33 -0600 asked a question Is OpenCV supposed to train on positives or samples images?

In one of my questions, discussed here, there's the problem that OpenCV recognizes the object to detect only in training images. Up to today I have heard conflicting points of view: how does the .vec file have to be created? Does it need to contain only positive images (cropped images showing ONLY the object of interest) or sample images (the object of interest on a random background)? If it needs to contain only the cropped object of interest, on which images does opencv_traincascade have to train? I have read this tutorial over and over but I still don't understand the correct way to proceed... would anyone explain it to me?

2016-02-03 07:18:56 -0600 received badge  Enthusiast
2016-02-02 12:51:09 -0600 commented question Why does OpenCV recognize the object only in training images?

I'm using OpenCV 3, so I downloaded opencv_contrib for my version and tried to compile the xfeatures2d module, but I get a CMake error in the CMakeLists file... moreover, xfeatures2d is not even present inside the OpenCV framework for Xcode, so even if I made it work on my PC I could not use it on iOS anyway...

2016-02-02 07:20:31 -0600 commented question Why does OpenCV recognize the object only in training images?

Ok. So please would you tell me the best way to proceed? Because I keep making mistakes and I don't know how to improve...

2016-02-02 07:16:57 -0600 commented answer why may detectMultiScale() give too many points out of the interested object?

Many tutorials say to do what Eduardo did... what would be the correct way in your opinion?

2016-02-02 07:03:17 -0600 asked a question Why does OpenCV recognize the object only in training images?

In order to make my iOS app recognize 1€, 2€ and 0.50€ coins, I have been trying to use opencv_createsamples and opencv_traincascade to create my own classifier.xml. So, I cropped 60 images of 2€ coins from a short video, like the following:

image description

Then, I combined them with random backgrounds using opencv_createsamples. I obtained 12000 images similar to this:

image description

and I ran the following commands:

opencv_createsamples -img positives/i.jpg -bg negatives.txt -info i.txt -num 210 -maxidev 100 -maxxangle 0.0 -maxyangle 0.0 -maxzangle 0.9 -bgcolor 0 -bgthresh 0 -w 48 -h 48 (for i from 0 to 60)

cat *.txt > positives.txt

opencv_createsamples -info positives.txt -bg negatives.txt -vec 2.vec -num 12600 -w 48 -h 48

opencv_traincascade -data final -vec 2.vec -bg negatives.txt -numPos 12000 -numNeg 3000 -numStages 20 -featureType LBP -precalcValBufSize 2048 -precalcIdxBufSize 2048 -minHitRate 0.999 -maxFalseAlarmRate 0.5 -w 48 -h 48
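Put together, the steps above amount to the following script (shown as a dry run that only echoes each command; the 0.jpg...60.jpg file naming is my reading of the "(for i from 0 to 60)" note):

```shell
# Dry run of the whole training pipeline; drop the `echo`s to really run it.
# 1) Warp each cropped coin over the negative backgrounds.
for i in $(seq 0 60); do
  echo opencv_createsamples -img positives/$i.jpg -bg negatives.txt -info $i.txt \
    -num 210 -maxidev 100 -maxxangle 0.0 -maxyangle 0.0 -maxzangle 0.9 \
    -bgcolor 0 -bgthresh 0 -w 48 -h 48
done
# 2) Merge the per-image annotation files.
echo 'cat *.txt > positives.txt'
# 3) Pack the annotated samples into one .vec file.
echo opencv_createsamples -info positives.txt -bg negatives.txt -vec 2.vec \
  -num 12600 -w 48 -h 48
# 4) Train the LBP cascade on the .vec samples and the negatives.
echo opencv_traincascade -data final -vec 2.vec -bg negatives.txt \
  -numPos 12000 -numNeg 3000 -numStages 20 -featureType LBP \
  -precalcValBufSize 2048 -precalcIdxBufSize 2048 \
  -minHitRate 0.999 -maxFalseAlarmRate 0.5 -w 48 -h 48
```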

Training stopped at the 13th stage. Once I got a cascade.xml, I tried it at once (with detectMultiScale()) on a simple image taken with my smartphone, but nothing is detected:

image description

while if I give as input one of the images used for training, then it works very well:

image description

I can't really understand why this is happening, and it's driving me insane, most of all because I have been trying to make it work for weeks... would you please tell me where I am making a mistake?

The short program I wrote is here:

#include "opencv2/opencv.hpp"

using namespace cv;

int main(int, char**) {

    // Load the test image and fail early if it cannot be read
    Mat src = imread("2b.jpg");
    if (src.empty()) {
        printf("--(!)Error loading image\n");
        return -1;
    }

    Mat src_gray;
    std::vector<cv::Rect> money;
    CascadeClassifier euro2_cascade;

    // Grayscale + histogram equalization, since the cascade was trained on grayscale patches
    cvtColor(src, src_gray, COLOR_BGR2GRAY);
    equalizeHist(src_gray, src_gray);

    if (!euro2_cascade.load("cascade.xml")) {
        printf("--(!)Error loading cascade\n");
        return -1;
    }

    euro2_cascade.detectMultiScale(src_gray, money, 1.1, 3,
        0 | CASCADE_SCALE_IMAGE, cv::Size(10, 10), cv::Size(2000, 2000));
    printf("%d\n", int(money.size()));

    // Draw a magenta ellipse around each detection
    for (size_t i = 0; i < money.size(); i++) {
        cv::Point center(money[i].x + money[i].width * 0.5, money[i].y + money[i].height * 0.5);
        ellipse(src, center, cv::Size(money[i].width * 0.5, money[i].height * 0.5),
                0, 0, 360, Scalar(255, 0, 255), 4, 8, 0);
    }

    imwrite("result.jpg", src);
    return 0;
}

2016-02-01 05:20:09 -0600 commented question why may detectMultiScale() give too many points out of the interested object?

@Eduardo thank you very much for your comment, I will try another training at once. Anyway, I had already tried an LBP training. Would you like to have a look at my updated question? Because if I use an image which had been used for training, then the detection is quite good... this does not happen with an arbitrary image :(

2016-01-29 10:52:56 -0600 commented question why may detectMultiScale() give too many points out of the interested object?

@Eduardo all the commands I ran were the ones discussed here

http://www.memememememememe.me/traini...

This is the only "useful" tutorial I found, which I followed strictly. Now I'm trying with the -LBP flag, but I don't know if it will improve things. Anyway, yes, as you said, I had 100 photos showing only a 2€ coin, which were then combined with random backgrounds by executing opencv_createsamples

Let me know if there's a way I can achieve my aim... it's for my thesis.

2016-01-29 10:06:41 -0600 commented question why may detectMultiScale() give too many points out of the interested object?

@Eduardo yes, all my positive images look like the one I posted in my question... there's the coin and the background. Yes, opencv_createsamples should provide the coordinates, as you said, for each image containing a coin... as explained here (http://www.memememememememe.me/traini...) and in many other tutorials...

2016-01-29 08:54:31 -0600 commented answer why may detectMultiScale() give too many points out of the interested object?

Thank you for your answer, but as I said in my question, even if I increase the number of neighbors the effect is the same... there are just far fewer points, but their distribution is the same: very few on the coin, many more on the background...

2016-01-29 08:53:30 -0600 received badge  Supporter (source)
2016-01-29 08:03:55 -0600 received badge  Editor (source)
2016-01-29 07:52:34 -0600 asked a question why may detectMultiScale() give too many points out of the interested object?

I trained a classifier on my PC with opencv_traincascade for a whole day to detect 2€ coins, using more than 6000 positive images similar to the following:

image description

Now, I have just run a simple OpenCV program to see the results and to check the cascade.xml file. The final result is very disappointing:

image description

There are many points on the coin, but there are also many other points on the background. Could it be a problem with the positive images I used for training? Or maybe am I using detectMultiScale() with the wrong parameters?

Here's my code:

#include "opencv2/opencv.hpp"

using namespace cv;

int main(int, char**) {

  // Load the test image (IMREAD_COLOR is the OpenCV 3 name of CV_LOAD_IMAGE_COLOR)
  Mat src = imread("2c.jpg", IMREAD_COLOR);
  if (src.empty()) {
     printf("--(!)Error loading image\n");
     return -1;
  }

  Mat src_gray;
  std::vector<cv::Rect> money;
  CascadeClassifier euro2_cascade;

  cvtColor(src, src_gray, COLOR_BGR2GRAY);
  equalizeHist(src_gray, src_gray);

  if ( !euro2_cascade.load( "/Users/lory/Desktop/cascade.xml" ) ) {
     printf("--(!)Error loading cascade\n");
     return -1;
  }

  // minNeighbors is 0 here, so every candidate window is returned without any grouping
  euro2_cascade.detectMultiScale( src_gray, money, 1.1, 0, 0,
      cv::Size(10, 10), cv::Size(2000, 2000) );

  // Draw a magenta ellipse around each detection
  for( size_t i = 0; i < money.size(); i++ ) {
     cv::Point center( money[i].x + money[i].width*0.5, money[i].y + money[i].height*0.5 );
     ellipse( src, center, cv::Size( money[i].width*0.5, money[i].height*0.5),
              0, 0, 360, Scalar( 255, 0, 255 ), 4, 8, 0 );
  }

  //namedWindow( "Display window", WINDOW_AUTOSIZE );
  imwrite("result.jpg", src);
  return 0;
}

I have also tried to reduce the number of neighbours, but the effect is the same, just with far fewer points... Could the problem be those 4 background corners around the coin in the positive images? I generated the png images with Gimp from a video I shot of the coin, so I don't know why opencv_createsamples puts those 4 corners there.

UPDATE I also tried to create an LBP cascade.xml, but this is quite strange: if I use, in the above OpenCV program, an image that was used for training, then the detection is good: image description

Instead, if I use another image (for example, one taken with my smartphone), nothing is detected. What does this mean? Have I made some error during training?

image description

2016-01-25 10:20:54 -0600 asked a question OpenCV/iOS: SimpleBlobDetector detects 0 points

In my iOS app, developed in Swift, for the moment I am just trying to detect the center of an elliptical object (a 2€ coin) in a photo. It is the first time I have approached OpenCV 3.1, so, following some documentation and answers to OpenCV questions, this is the code I have written in my class OpenCVWrapper.mm:

#include "OpenCVWrapper.hpp"

#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

@implementation OpenCVWrapper : NSObject

+ (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
  CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
  CGFloat cols = image.size.width;
  CGFloat rows = image.size.height;

  cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (color channels + alpha)

  CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,                 // Pointer to  data
                                                cols,                       // Width of bitmap
                                                rows,                       // Height of bitmap
                                                8,                          // Bits per component
                                                cvMat.step[0],              // Bytes per row
                                                colorSpace,                 // Colorspace
                                                kCGImageAlphaNoneSkipLast |
                                                kCGBitmapByteOrderDefault); // Bitmap info flags

  CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
  CGContextRelease(contextRef);

  return cvMat;
}

+ (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat 
{
  NSData *data = [NSData dataWithBytes:cvMat.data  length:cvMat.elemSize()*cvMat.total()];
  CGColorSpaceRef colorSpace;

  if (cvMat.elemSize() == 1) {
      colorSpace = CGColorSpaceCreateDeviceGray();
  } else {
      colorSpace = CGColorSpaceCreateDeviceRGB();
  }

  CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

  // Creating CGImage from cv::Mat
  CGImageRef imageRef = CGImageCreate(cvMat.cols,        //width
                                    cvMat.rows,                                 //height
                                    8,                                          //bits per component
                                    8 * cvMat.elemSize(),                       //bits per pixel
                                    cvMat.step[0],                            //bytesPerRow
                                    colorSpace,                                 //colorspace
                                    kCGImageAlphaNone|kCGBitmapByteOrderDefault,// bitmap info
                                    provider,                                   //CGDataProviderRef
                                    NULL,                                       //decode
                                    false,                                      //should interpolate
                                    kCGRenderingIntentDefault                   //intent
                                    );


  // Getting UIImage from CGImage
  UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
  CGImageRelease(imageRef);
  CGDataProviderRelease(provider);
  CGColorSpaceRelease(colorSpace);

  return finalImage;
}

+ (UIImage *)processImageWithOpenCV:(UIImage*)inputImage {

  // cvMatFromUIImage (defined above) returns a 4-channel RGBA cv::Mat
  Mat src = [self cvMatFromUIImage:inputImage];
  Mat src_gray;

  cvtColor(src, src_gray, COLOR_RGBA2GRAY);

  cv::Ptr<cv::SimpleBlobDetector> blobsDetector = cv::SimpleBlobDetector::create();
  vector<KeyPoint> keypoints;
  blobsDetector->detect(src_gray, keypoints);
  printf("%lu",keypoints.size());
  for (size_t i = 0; i < keypoints.size(); ++i)
    circle(src, keypoints[i].pt, 4, Scalar(255, 0, 255), -1);

  return [self UIImageFromCVMat:src];
}

@end

and in my ViewController.swift I have:

override func viewDidLoad() {
  super.viewDidLoad()
  ...
  self.imageView.image = OpenCVWrapper.processImageWithOpenCV(UIImage(named: "euro.jpg"))
}

where euro.jpg is a simple photo of a 2€ coin on a table. The error I get is the following:

As you can expect, there's nothing drawn on the original image. Moreover, after detecting the center, I would need to detect the width and height of the ellipse-coin. How shall I do that?

2016-01-20 11:30:12 -0600 asked a question iOS: how to detect a ellipse-like shape coin in UIImage with OpenCV

Part of my iOS app is based on taking a photo of an object with a euro coin near it, and I would like the app itself to recognize the coin (in particular 2€, 1€, 0.50€) and put a sort of UIView upon it. To give an idea, the following screenshot might be a possible final result:

image description

Obviously, it's not necessary for the app to be extremely precise because, as you can see, I give the user further ways to adjust the UIView as much as possible.

To reach this aim, I am trying to understand how OpenCV works and to integrate it into my app. It's not a problem for me to write C++/Objective-C code, even though my app is developed in Swift, but I need your help to get a sort of skeleton code. My teacher told me that OpenCV algorithms can be trained not only to look for particular shapes/forms but also to recognize a specific pattern image, such as euro coins, by giving them some other images as reference.

Would you please write some code for me to start from, and tell me which OpenCV routines I need the most?