
Tonystark124's profile - activity

2020-10-07 13:01:27 -0600 received badge  Popular Question (source)
2020-01-18 07:59:46 -0600 received badge  Popular Question (source)
2020-01-18 07:23:13 -0600 received badge  Organizer (source)
2020-01-18 07:21:38 -0600 asked a question Salt and Pepper impulse Denoising opencv

Salt and Pepper impulse Denoising opencv I am creating a generic method to work on salt and pepper noise and variants. T
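
For reference, a common first tool for salt-and-pepper noise is a median filter; a minimal sketch, with file names and kernel size as placeholders:

    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/highgui/highgui.hpp>

    using namespace cv;

    int main()
    {
        // File names are placeholders.
        Mat noisy = imread("noisy.png", 0);   // load as grayscale

        // A small median filter removes isolated salt-and-pepper pixels
        // while preserving edges better than a box or Gaussian blur.
        Mat denoised;
        medianBlur(noisy, denoised, 3);       // 3x3 kernel

        imwrite("denoised.png", denoised);
        return 0;
    }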

2016-03-01 04:34:19 -0600 commented answer How to work with OPENCV4ANDROID- in android studio

@theoknock You should add the dependencies in the module-level build.gradle, not the project-level one.

2015-04-29 02:50:00 -0600 received badge  Nice Question (source)
2015-04-28 08:36:05 -0600 commented question Which adaboost to use and when?

@thdrksdfthmn, deal. I am already on it.

2015-04-28 08:32:40 -0600 received badge  Enthusiast
2015-04-26 13:53:48 -0600 received badge  Student (source)
2015-04-26 13:31:10 -0600 asked a question Which adaboost to use and when?

If anybody has good prior knowledge about the AdaBoost types offered by opencv_traincascade (DAB, RAB, LB and GAB), can you please explain which AdaBoost variant to choose and when? I have started reading about them, but if anybody could summarize it, that would be of great help, for me and everyone who references this question. Thanks!

2015-04-26 13:27:02 -0600 commented question matchShapes using grayscale images opencv

Thank you, @berak

2015-04-26 00:48:00 -0600 commented question matchShapes using grayscale images opencv

@berak and can you please tell me how accurate it is? And are there functions in 3.0 that make these tasks easier and better compared to 2.4.9? Any suggestions would help.

2015-04-25 12:10:07 -0600 received badge  Editor (source)
2015-04-25 12:09:07 -0600 commented question matchShapes using grayscale images opencv

@berak please check edit.

2015-04-24 14:54:40 -0600 commented question LBP based getMultiScale object detection.

@StevenPuttemans will the details in the cascade's XML file be taken into account, given that the function is called as "cascade_classifier.detectMultiScale()"?

2015-04-24 14:51:23 -0600 commented question LBP based getMultiScale object detection.

@StevenPuttemans if the detectMultiScale function is used for both Haar and LBP, how do I tell the function to use LBP features? I am sorry about my earlier comment; I knew how to set the acceptance ratio. While detecting, I get that the image is scanned and LBP features have to be computed for comparison, but if no argument specifies this, how does the function differentiate?

2015-04-24 13:59:54 -0600 asked a question matchShapes using grayscale images opencv

According to the matchShapes documentation, the input can be either grayscale images or contours. But when I tried two grayscale images, I got an assertion failed error. Upon further exploration, I found from here that the Mat object has to be a 1D vector of type CV_32FC2 or CV_32SC2.

Using this answer, I converted the images to a vector of floats after converting them to CV_32FC2. I still get an assertion error.

Can anyone tell me how I can compare two grayscale images using the matchShapes function?

UPDATE

As asked in the comments, I tried with two grayscale images, and this is the error I got:

    OpenCV Error: Assertion failed (contour1.checkVector(2) >= 0 && contour2.checkVector(2) >= 0 && (contour1.depth() == CV_32F || contour1.depth() == CV_32S) && contour1.depth() == contour2.depth()) in matchShapes, file /home/tonystark/Opencv/modules/imgproc/src/contours.cpp, line 1936
    terminate called after throwing an instance of 'cv::Exception'
      what(): /home/tonystark/Opencv/modules/imgproc/src/contours.cpp:1936: error: (-215) contour1.checkVector(2) >= 0 && contour2.checkVector(2) >= 0 && (contour1.depth() == CV_32F || contour1.depth() == CV_32S) && contour1.depth() == contour2.depth() in function matchShapes

Just curious, could this have anything to do with my OpenCV version? It is 2.4.9.
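
As a fallback, unless there is a direct way, I am considering extracting contours from each image first and comparing those; a rough sketch of what I mean (the Otsu threshold and picking the largest contour are just placeholders, not part of my actual code):

    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <iostream>
    #include <vector>

    using namespace cv;
    using namespace std;

    // Grab the largest external contour of a grayscale image.
    static vector<Point> largestContour(const Mat &gray)
    {
        Mat bin;
        threshold(gray, bin, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);

        vector< vector<Point> > contours;
        findContours(bin, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

        int best = -1;
        double bestArea = 0;
        for (size_t i = 0; i < contours.size(); i++)
        {
            double area = contourArea(contours[i]);
            if (area > bestArea) { bestArea = area; best = (int)i; }
        }
        return best >= 0 ? contours[best] : vector<Point>();
    }

    int main()
    {
        Mat a = imread("shape_a.png", 0);   // file names are placeholders
        Mat b = imread("shape_b.png", 0);

        double score = matchShapes(largestContour(a), largestContour(b),
                                   CV_CONTOURS_MATCH_I1, 0);
        cout << "dissimilarity: " << score << endl;   // lower means more similar
        return 0;
    }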

2015-04-24 13:20:09 -0600 received badge  Scholar (source)
2015-04-24 04:39:37 -0600 asked a question get images from opencv_createsamples, annotations, png files

I followed the documentation from here and created samples using opencv_createsamples for positive images. I am trying to get these samples as image files rather than storing them in .vec files.

From the documentation, there's a way to get them as PNG or JPG files, but I am not able to understand a few things:

1. It asks for the background or negative images information. Is it going to use that to rotate and create samples of the positive images? How exactly is the negative images' information going to help here? For example, let's say I want to detect images of cars. Can the background images be those of a ship, or should they literally be the images that appear in the car's background?

2. For creating **.vec** files and samples, the background image information was not necessarily required. Does that mean that, even here, for PNG or JPG output, it won't be used to manipulate the positive samples?

3. I have included the background images as objects that can appear in the scene, but not around the car or as its background. Am I right, or does "background" here just mean the foreground/background of an image, in image-processing terms?

4. Should annotations.lst be created by me, or will it be created by the executable?

Please help me. I am really confused here.

2015-04-23 09:44:02 -0600 received badge  Supporter (source)
2015-04-23 08:11:10 -0600 commented question LBP based getMultiScale object detection.

@StevenPuttemans Thank you for your input. Could you please explain what you meant by a factor of 10e^-5? Did you mean the acceptance ratio?

2015-04-23 07:08:55 -0600 asked a question LBP based getMultiScale object detection.

I trained my system using opencv's traincascade using LBP features for faster training. The result of training is shown below. I used 14 stages.

    cascadeDirName: data_cascade/
    vecFileName: opencv_createsamples/positive_samples.vec
    bgFileName: Doors/negatives.txt
    numPos: 4350
    numNeg: 2400
    numStages: 14
    precalcValBufSize[Mb] : 4000
    precalcIdxBufSize[Mb] : 5000
    stageType: BOOST
    featureType: LBP
    sampleWidth: 30
    sampleHeight: 50
    boostType: GAB
    minHitRate: 0.999
    maxFalseAlarmRate: 0.5
    weightTrimRate: 0.95
    maxDepth: 1
    maxWeakCount: 100

    ===== TRAINING 13-stage =====
    <BEGIN
    POS count : consumed   4350 : 4370
    NEG count : acceptanceRatio    2400 : 0.000223963
    Precalculation time: 72
    +----+---------+---------+
    |  N |    HR   |    FA   |
    +----+---------+---------+
    |   1|        1|        1|
    +----+---------+---------+
    |   2|        1|        1|
    +----+---------+---------+
    |   3|  0.99977| 0.559583|
    +----+---------+---------+
    |   4|  0.99931| 0.240417|
    +----+---------+---------+
    END>
    Training until now has taken 0 days 1 hours 57 minutes 53 seconds.

I have shown just the output of the final stage to indicate the acceptance ratio. Now, in order to detect using this classifier, I modified an existing OpenCV example that uses detectMultiScale(). The code is shown below:

    #include "opencv2/objdetect/objdetect.hpp"
    #include "opencv2/highgui/highgui.hpp"
    #include "opencv2/imgproc/imgproc.hpp"

    #include <cctype>
    #include <iostream>
    #include <iterator>
    #include <stdio.h>

    using namespace cv;
    using namespace std;

    void detectAndDraw(Mat &img, CascadeClassifier &cascade, double scale)
    {
            vector<Rect> double_doors;
            Mat gray;
            Mat smallImg( cvRound(img.rows/scale), cvRound(img.cols/scale), CV_8UC1 );

            Scalar color = Scalar(0,0,255);

            cvtColor( img, gray, CV_BGR2GRAY );
            resize( gray, smallImg, smallImg.size(), 0, 0, INTER_LINEAR );
            equalizeHist( smallImg, smallImg );
            cout << "equalization done" << endl;
            cascade.detectMultiScale( smallImg, double_doors, scale, 1, 0, Size(10,10), Size(100,140) );

            cout << "multiscale detect complete" << endl;
            for(int i = 0; i < double_doors.size(); i++)
            {
                    rectangle(img, double_doors[i], color, 1, 8, 0);
            }

            cout << "rectangles drawn" << endl;
            imwrite("/home/tonystark/Project_awesome/Dataset/Training_set/Results/Double_Doors/Door_Detect1.jpg", img);

            imshow("Output Image", img);
    }

    int main(int argc, char **argv)
    {
            CascadeClassifier cascade, nestedCascade;
            cascade.load(argv[1]);
            double scale = atof(argv[2]);
            Mat img = imread(argv[3],1);

            cout<<"inputs read"<<endl;

            namedWindow("Input Image",WINDOW_NORMAL);
            namedWindow("Output Image",WINDOW_NORMAL);
            detectAndDraw(img,cascade,scale);

            cout<<"detection complete"<<endl;
            imwrite("/home/tonystark/Project_awesome/Dataset/Training_set/Results/Double_Doors/Door_Detect1.jpg",img);
            imshow("Input Image",img);
            waitKey(0);
            return 0;
    }

The input image for detection is 3000x2250 and I started the detection. It has been about one and a half hours since, and I am not sure what exactly is happening. From reading the documentation here, it sounds more or less like it supports only Haar. But from this here, I gather that it can be used for both. I can see a Haar-specific parameter in the code from that link, in the function call:

    face_cascade.detectMultiScale( frame_gray, faces, 1.1, 2, 0|CV_HAAR_SCALE_IMAGE, Size(30, 30) );

In my code, this is how I call the function:

    cascade.detectMultiScale( smallImg, double_doors,scale,1,0,Size(10,10),Size(100,140));
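
If I read the documentation right, the parameter order is (image, objects, scaleFactor, minNeighbors, flags, minSize, maxSize), and the feature type (Haar or LBP) is read from the cascade XML itself rather than passed as an argument. So a typical call would look something like the sketch below (the values are just placeholders, not my actual settings):

    cascade.detectMultiScale( smallImg, double_doors,
                              1.1,              // scaleFactor: image pyramid step
                              3,                // minNeighbors: overlapping detections required
                              0,                // flags: ignored by new-style cascades
                              Size(10,10),      // minSize
                              Size(100,140) );  // maxSize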

Am I missing something here? Could you please point me in the right direction?