
lezan's profile - activity

2018-05-29 14:51:12 -0600 commented answer DNN opencv with SSD resnet return wrong face dimension

Now it is working like a charm. Thanks as always, berak, your help is precious.

2018-05-29 04:03:24 -0600 commented answer Unspecified error (Can't create layer "data" of type "Input") in getLayerInstance

@dkurt that's a good point. I will try it.

2018-05-29 03:39:02 -0600 commented answer DNN opencv with SSD resnet return wrong face dimension

Now it is working like a charm. Thanks as always, berak, your help is precious.

2018-05-29 03:31:59 -0600 commented answer DNN opencv with SSD resnet return wrong face dimension

Is it not enough to check the last layer of the network? Perfect. Makes sense. I thought it could be a problem, but it was not. Ok.

2018-05-29 03:27:59 -0600 marked best answer DNN opencv with SSD resnet return wrong face dimension

Hello, I am playing with face detection and DNN but I cannot figure out how to solve an issue.

I am processing 256x256 images, using deploy.prototxt and res10_300x300_ssd_iter_140000.caffemodel (the same ones in the dnn directory).

Some code.

cv::Mat faceROI;
cv::Mat image;

image = cv::imread(imagePath[imageId], CV_LOAD_IMAGE_COLOR);
cv::Mat imageDNNBlob = cv::dnn::blobFromImage(image, 1.0, cv::Size(300, 300), 
    Scalar(104.0, 177.0, 123.0), false, false);
netOpenCVDNN.setInput(imageDNNBlob, "data");
cv::Mat detection = netOpenCVDNN.forward("detection_out");
cv::Mat faces(detection.size[2], detection.size[3], CV_32F, detection.ptr<float>());
for (int i = 0; i < faces.rows; i++)
{
    float confidence = faces.at<float>(i, 2);
    if (confidence > 0.99)
    {
        int xLeftBottom = static_cast<int>(faces.at<float>(i, 3) * image.cols);
        int yLeftBottom = static_cast<int>(faces.at<float>(i, 4) * image.rows);
        int xRightTop = static_cast<int>(faces.at<float>(i, 5) * image.cols);
        int yRightTop = static_cast<int>(faces.at<float>(i, 6) * image.rows);

        cv::Rect faceRect((int)xLeftBottom, (int)yLeftBottom,
                          (int)(xRightTop - xLeftBottom), (int)(yRightTop - yLeftBottom));
        faceROI = cv::Mat(image, faceRect);
    }
}

Nothing too exotic, I just wrote down what I found in resnet_ssd_face.cpp. When I try to extract the ROI from the image with faceROI = cv::Mat(image, faceRect) I get an error about wrong dimensions for faceRect; in fact (with one particular image) I get 257 as the height. faces.at<float>(i, 6) returns a float > 1.

What am I missing? Can someone help me figure it out?
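For what it is worth, here is a minimal sketch (my own addition, not part of the original code) of one way to guard against boxes that fall outside the image, assuming the network can return relative coordinates slightly outside the [0, 1] range:

cv::Rect faceRect(xLeftBottom, yLeftBottom,
                  xRightTop - xLeftBottom, yRightTop - yLeftBottom);
faceRect &= cv::Rect(0, 0, image.cols, image.rows); // clip the box to the image bounds
if (faceRect.area() > 0)                            // skip degenerate boxes
    faceROI = image(faceRect);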

I have also some questions about this example:

  1. netOpenCVDNN.forward returns a Mat, where size[2] is the number of objects found and size[3] is the number of properties of each object? Am I right? Where can I find more info about what forward returns? (I already checked here and here. I think it is related to the "detection_out" layer of the prototxt, but I cannot get it. See also the sketch after this list.)
  2. Mat faces is a matrix with all the faces found, right? Each row is a detected face and each row (face) has some properties (cols), right? So faces.at<float>(i, 2) is the confidence of the i-th face and positions 3 to 6 are the coordinates of the face. What do positions 0 and 1 contain?
  3. Why does cv::Mat imageDNNBlob have rows and cols equal to -1?
  4. Last one: I am using images of 256x256. The input layer of the DNN uses 300x300. What is the right solution? Resize the image? Change the input layer? Is cv::Size(300, 300) right in blobFromImage?
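For reference, a short sketch (an interpretation added here, not from the original post) of how the detection_out blob is commonly read: it is a 1x1xNx7 blob where each of the N rows holds [batchId, classId, confidence, left, top, right, bottom], and the last four values are relative to the input image.

cv::Mat det(detection.size[2], detection.size[3], CV_32F, detection.ptr<float>());
for (int i = 0; i < det.rows; ++i)
{
    float batchId    = det.at<float>(i, 0); // index of the image inside the batch
    float classId    = det.at<float>(i, 1); // detected class (faces for this model)
    float confidence = det.at<float>(i, 2); // detection score
    // columns 3..6: relative [left, top, right, bottom] coordinates of the box
}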

Thanks in advance.

2018-05-28 19:47:25 -0600 edited question DNN opencv with SSD resnet return wrong face dimension

DNN opencv with SSD resnet return wrong face dimension Hello, I am playing with face detection and DNN but I cannot figure out how to

2018-05-28 19:45:21 -0600 asked a question DNN opencv with SSD resnet return wrong face dimension

DNN opencv with SSD resnet return wrong face dimension Hello, I am playing with face detection and DNN but I cannot figure out how to

2018-05-28 19:12:32 -0600 commented question Compile OpenCV 3.4 and Cuda 9 with MS VS15 2017

As reported here, Visual Studio 15.6 is the last one supported by CUDA 9.2. If you want to run (compile, etc.) CUDA with y

2018-05-28 16:16:31 -0600 commented answer Unspecified error (Can't create layer "data" of type "Input") in getLayerInstance

It would not make much sense for it to be wrong, would it?

2018-05-28 06:57:42 -0600 commented answer Unspecified error (Can't create layer "data" of type "Input") in getLayerInstance

@berak Is there no way to use a grayscale image? Do you suggest using color images just because of dim=3 channels in the prototxt? Sw

2018-05-27 06:07:37 -0600 commented question Compile OpenCV 3.4 and Cuda 9 with MS VS15 2017

What version of Visual Studio are you using? 15.6 is the last one supported by CUDA 9.2. P.S.: next time do not put you

2018-05-27 06:07:24 -0600 commented question Compile OpenCV 3.4 and Cuda 9 with MS VS15 2017

What version of Visual Studio are you using? 15.6 is the last one supported by CUDA 9.2. P.S.: next time do not put you

2018-05-27 06:06:06 -0600 commented question Compile OpenCV 3.4 and Cuda 9 with MS VS15 2017

What version of Visual Studio are you using? 15.6 is the last one supported by CUDA 9.2.

2018-05-07 15:43:36 -0600 commented question Grouping images by a person appearing on them

Is there something that can discriminate between them? For example, image name. However this is not a question related t

2018-05-07 10:38:07 -0600 commented question Grouping images by a person appearing on them

Is there something that can discriminate between them? For example, image name. However this is a question related to Op

2018-05-07 10:37:09 -0600 commented question Grouping images by a person appearing on them

Is there something that can discriminate between them? For example, image name.

2018-05-07 05:35:33 -0600 received badge  Critic (source)
2018-05-07 05:35:32 -0600 commented question how can i upgrade the detection of cv.findContours

Add some code and an example image.

2018-05-03 09:07:25 -0600 edited answer Edge detection

You can try it this way: Apply thresholding because you need a binary image. Apply a morphological operator: first Open,

2018-05-03 09:06:16 -0600 answered a question Edge detection

You can try it this way: Apply thresholding because you need a binary image. Apply a morphological operator: first Open,
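A minimal sketch (my own assumption of the steps named in the truncated preview above, with hypothetical parameter values) of thresholding followed by a morphological opening:

cv::Mat gray, binary, cleaned;
cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);
cv::threshold(gray, binary, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU); // binary image
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
cv::morphologyEx(binary, cleaned, cv::MORPH_OPEN, kernel); // opening removes small blobs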

2018-05-03 06:39:46 -0600 commented question I am not able to build my opencv project in eclipse

I get it, but how did you install OpenCV on your machine? Did you compile it or get the binaries from the site?

2018-05-03 04:26:16 -0600 received badge  Citizen Patrol (source)
2018-05-03 03:33:21 -0600 commented question I am not able to build my opencv project in eclipse

How did you compile OpenCV? With VC? MinGW?

2018-05-03 03:17:40 -0600 edited answer What is the BGR to YUV and BGR to LAB conversion formula used by OpenCV

Have you checked the documentation here: https://docs.opencv.org/3.4.1/de/d25/imgproc_color_conversions.html?

2018-05-02 14:21:30 -0600 received badge  Nice Answer (source)
2018-05-02 12:54:22 -0600 received badge  Teacher (source)
2018-05-02 09:44:38 -0600 commented question What is the BGR to YUV and BGR to LAB conversion formula used by OpenCV

https://docs.opencv.org/3.4.1/de/d25/imgproc_color_conversions.html checked?

2018-05-02 09:33:59 -0600 commented question OpenCV DNN module slower in C++ than in python

Well, without any code it is difficult to answer, but I guess that OpenCV is not correctly installed, at least in release. H

2018-04-30 09:41:48 -0600 commented question OpenCV DNN module slower in C++ than in python

Have you tried switching from Debug mode to Release mode? Did you add AVX, AVX2 or something else?
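As an aside, a small sketch (my own addition, not from the original comment) of how one might check at runtime what the current OpenCV build and CPU actually support:

#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    std::cout << cv::getBuildInformation() << std::endl;            // compiler, Debug/Release, CPU baseline
    std::cout << cv::checkHardwareSupport(CV_CPU_AVX2) << std::endl; // 1 if the CPU supports AVX2
    return 0;
}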

2018-04-09 14:54:58 -0600 marked best answer BOWKMeansTrainer and features extraction

After some months, I am starting again to do some optimization on my code, but I have forgotten something, I think.

I need to perform Bag of Words to cluster features extracted from images. Let's see some code.

Extract features and put them into a vector.

cv::Mat featuresVector;
for (int i = 0; i < numberImages; ++i) // <- first features extraction
{
    cv::Mat featuresExtracted = runExtractFeature(face, featuresExtractionAlgorithm);
    featuresVector.push_back(featuresExtracted);
}

Then, I want to cluster them with BOWKMeansTrainer.

cv::BOWKMeansTrainer bowTrainer(dictionarySize, termCriteria, retries, centersFlags);
bowTrainer.add(featuresVector);
cv::Mat dictionary = bowTrainer.cluster();

Then, prepare for bag of words in this way:

cv::Ptr<cv::DescriptorMatcher> matcher = cv::FlannBasedMatcher::create();
cv::Ptr<cv::DescriptorExtractor> extractor = cv::xfeatures2d::SiftDescriptorExtractor::create(); // <-
cv::BOWImgDescriptorExtractor bowDE(extractor, matcher);
bowDE.setVocabulary(dictionary);

Now I can start bag of words

for (int i = 0; i < numberImages; ++i)
{
    cv::Mat face = faceMatVector[i]; // <- contains image read with imread.
    std::vector<cv::KeyPoint> keypoints;
    detector->detect(face, keypoints); // <- second features extraction
    cv::Mat bowDescriptors;
    bowDE.compute(face, keypoints, bowDescriptors);
}

As you can see, in this way I perform feature extraction from each image twice: once in the first loop and again in the last loop (see the comments in the code), because I need descriptors to cluster with BOWKMeansTrainer, but I need keypoints to compute bowDescriptors with BOWImgDescriptorExtractor (matching and so on).

My question is: is this necessary or can I avoid it? Am I missing something? Can I take the keypoints from somewhere in the last loop without re-detecting them? Can I just save the keypoints detected in the first loop and then re-use them in the last loop to compute the BOWImgDescriptorExtractor descriptors?
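A minimal sketch (my own assumption, not part of the original post) of detecting keypoints and descriptors once, keeping the keypoints, and re-using them later with BOWImgDescriptorExtractor; it reuses the extractor, bowDE, faceMatVector and numberImages from the code above:

std::vector<std::vector<cv::KeyPoint>> allKeypoints(numberImages);
cv::Mat featuresVector;
for (int i = 0; i < numberImages; ++i)
{
    cv::Mat descriptors;
    // detect + describe in one pass; the keypoints are stored for later re-use
    extractor->detectAndCompute(faceMatVector[i], cv::noArray(), allKeypoints[i], descriptors);
    featuresVector.push_back(descriptors);
}
// ... cluster featuresVector with BOWKMeansTrainer and set the vocabulary as above ...
for (int i = 0; i < numberImages; ++i)
{
    cv::Mat bowDescriptors;
    // re-use the stored keypoints instead of detecting them a second time
    bowDE.compute(faceMatVector[i], allKeypoints[i], bowDescriptors);
}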

Thanks for your answer.

2018-04-09 12:37:54 -0600 commented answer BOWKMeansTrainer and features extraction

So can I use the descriptors saved into featuresVector? Seems awesome, I did not see this overload.

2018-04-07 06:40:47 -0600 asked a question BOWKMeansTrainer and features extraction

BOWKMeansTrainer and features extraction After some months, I am starting again to do some optimization on my code, but I fo

2017-11-13 04:52:47 -0600 received badge  Enthusiast
2017-11-12 13:51:26 -0600 commented question SVM predict on OpenCV: how can I extract the same number of features

I tried with LINEAR, but the result is not good (around 40%). About the last question: I did not try, but - as a layman - se

2017-11-12 11:21:45 -0600 commented question SVM predict on OpenCV: how can I extract the same number of features

Nope, I only tried to go with my own solution. I thought you were spurring me in this direction :D. By the way, now just f

2017-11-12 11:07:08 -0600 commented question SVM predict on OpenCV: how can I extract the same number of features

Hey @berak, have you seen the last edit of the question? I tried to follow your suggestion and for now I am working with dat

2017-11-12 10:52:25 -0600 edited question SVM predict on OpenCV: how can I extract the same number of features

SVM predict on OpenCV: how can I extract the same number of features I am playing with OpenCV and SVM to make a classifier

2017-11-12 10:50:01 -0600 edited question SVM predict on OpenCV: how can I extract the same number of features

SVM predict on OpenCV: how can I extract the same number of features I am playing with OpenCV and SVM to make a classifier

2017-11-12 10:35:21 -0600 commented question SVM predict on OpenCV: how can I extract the same number of features

Back from work, I have two questions, if you can answer: I need to re-use centers for the unseen images, right? I do n

2017-11-10 10:32:31 -0600 commented question SVM predict on OpenCV: how can I extract the same number of features

Back from work, I have two questions, if you can answer: I need to re-use centers for the unseen images, right? I do n

2017-11-02 13:25:55 -0600 commented question SVM predict on OpenCV: how can I extract the same number of features

I missed the last comment and I have been waiting here for 6 hours, ouch. Anyway, I cannot understand why my approach is wrong

2017-11-02 07:25:28 -0600 received badge  Student (source)
2017-11-02 06:18:07 -0600 commented question SVM predict on OpenCV: how can I extract the same number of features

I read it 10 times but I am a bit confused. In particular, I am referring to the third point. Okay, I get it. Got that too.