bjorn89's profile - activity

2018-12-04 04:14:09 -0500 received badge  Notable Question (source)
2018-06-05 13:03:41 -0500 received badge  Famous Question (source)
2017-11-09 02:46:14 -0500 received badge  Popular Question (source)
2017-10-26 14:01:32 -0500 received badge  Self-Learner (source)
2017-10-26 08:57:12 -0500 answered a question Error with custom trained SSD

As @dkurt says, min_size: 21.0 must be changed to min_size: 21. I tried this change and it works!

2017-10-26 06:39:24 -0500 edited question Error with custom trained SSD

Error with custom trained SSD Hi all, I've trained a SSD network with custom dataset. The input is a 300x169 image (sim

2017-10-26 06:23:44 -0500 commented question Error with custom trained SSD

@dkurt Hi. I compiled the 3.3.1 version, and now I get the error C4996 'cv::dnn::experimental_dnn_v2::createCaffeImporte

2017-10-22 04:37:30 -0500 asked a question Error with custom trained SSD

Error with custom trained SSD Hi all, I've trained a SSD network with custom dataset. The input is a 300x169 image (sim

2017-09-18 08:58:29 -0500 received badge  Famous Question (source)
2017-05-23 01:38:24 -0500 commented question OpenCV help on positions

Once you have detected the coloured shape (e.g., the green or the blue ones) you can take its center of mass. Since the positions are constant, you already know the center of mass of each position (i.e., 1, 2, etc.), so you can compare the x and y coordinates of the centers of mass. I think this is the simplest thing you can do!

2017-04-21 04:41:15 -0500 received badge  Notable Question (source)
2017-02-09 01:42:37 -0500 answered a question How to extract frames at 10 fps?

You can use a counter variable, for example int counter2 = 0;. Inside the while loop, read every frame but keep only one out of ten:

cap >> frame;
if (frame.empty()) break;

if(counter2 % 10 == 0)
{
    // keep this frame (e.g. save or process it)
}
counter2++;

Note that the capture has to be read on every iteration, otherwise the video never advances; the counter only decides which frames to keep. Also notice that this extracts one frame out of every 10, it does not convert your video to 10 fps.

Hope this helps.

2017-02-08 11:27:55 -0500 received badge  Popular Question (source)
2016-12-11 10:53:21 -0500 received badge  Nice Answer (source)
2016-12-08 13:14:08 -0500 received badge  Notable Question (source)
2016-07-26 11:25:46 -0500 commented question Mat and imread memory management

have you tried to declare Mat img outside the for loop?

2016-06-14 08:39:49 -0500 marked best answer Percentage of overlap

Hi all, how can I calculate the percentage of overlap between 2 images? If the result is 100 the 2 images are completely overlapped; if 0, they're completely disjoint.

PS: Since I'm stitching images, I'm already computing the ORB features; I don't know if that helps!

2016-04-26 05:43:52 -0500 received badge  Popular Question (source)
2016-03-18 08:16:35 -0500 commented answer Copy histogram of an image to another

Hi! I've tried your solution and it works, but it doesn't match the histogram perfectly. Is that normal? I've seen that the MATLAB function matches it perfectly!

2016-03-18 03:35:39 -0500 commented answer Copy histogram of an image to another

Hi! I found the same website by myself, but I can't make it work with OpenCV 3.1. In particular, it gives me an error on vector (which I corrected by using vector<int>), on double* _src_cdf = src_cdf.ptr(); (which I can't resolve), on LUT(chns[i], lut, chns[i]); (it underlines chns[i], also unresolved) and on every h[c] (it underlines the slash). How can I proceed?

2016-03-17 04:03:32 -0500 asked a question Copy histogram of an image to another

Hi all! Here's my problem: I have two images of the same scene acquired under different illumination conditions. I found that it is possible to copy the histogram of a reference image to a destination image; this is called histogram specification or histogram matching. As shown here https://studentathome.wordpress.com/2..., in MATLAB it's pretty simple. Is there a way to do the same thing in OpenCV?

EDIT The code linked on the page link text gives me error on the lines:

do1ChnHist(chns[i], src_mask, src_hist, src_cdf);
do1ChnHist(chns1[i], dst_mask, dst_hist, dst_cdf);

it says that I can't pass a matrix (src_hist etc.) because the function accepts double*. How can I make it work with OpenCV 3.1?

EDIT 2 Now it compiles and runs, but I obtain a black image as the matched result. Since I need to match the histogram of the whole image, I created a Mat of ones as the mask. Am I doing it right? Any suggestions?

2016-01-23 07:56:01 -0500 answered a question Compiling error with -lippicv

Hi! I had the same error. I resolved it simply by copying the library from the OpenCV source folder to /usr/local/lib:

sudo cp 3rdparty/ippicv/unpack/ippicv/lib/intel64/libippicv.a /usr/local/lib/
2015-11-05 22:48:16 -0500 marked best answer Panorama mosaic from Aerial Images

I'm writing a program that creates a panorama mosaic in real time from a video. The steps that I've done are:

  1. Find features between the n-th frame and the (n-1)-th mosaic.
  2. Calculate the homography.
  3. Use the homography with warpPerspective to stitch the images.

I'm using this code to stitch the images together:

warpPerspective(vImg[0], rImg, H, Size(vImg[0].cols, vImg[0].rows), INTER_NEAREST);

 Mat final_img(Size(rImg.cols, rImg.rows), CV_8UC3);
 Mat roi1(final_img, Rect(0, 0, vImg[1].cols, vImg[1].rows));
 Mat roi2(final_img, Rect(0, 0, rImg.cols, rImg.rows));
 rImg.copyTo(roi2);
 vImg[1].copyTo(roi1);

As you can see, from second 0.33 it starts to lose part of the mosaic. I'm pretty sure that depends on the ROI I've defined. My program should work like this: https://www.youtube.co/watch?v=59RJeL....

What can I do?

EDIT 2

Here's my code; I hope someone can help me see the light at the end of the tunnel!

// I create the final image and copy the first frame in the middle of it
Mat final_img(Size(img.cols * 3, img.rows * 3), CV_8UC3);
Mat f_roi(final_img,Rect(img.cols,img.rows,img.cols,img.rows));
img.copyTo(f_roi);


//I take only a part of the complete final image
Rect current_frame_roi(img.cols, img.rows, final_img.cols - img.cols, final_img.rows - img.rows);

while (true)
{

    //take the new frame
    cap >> img_loop;
    if (img_loop.empty()) break;

    //take a part of the final image
    current_frame = final_img(current_frame_roi);


    //convert to grayscale (VideoCapture frames are BGR)
    cvtColor(current_frame, gray_image1, CV_BGR2GRAY);
    cvtColor(img_loop, gray_image2, CV_BGR2GRAY);


    //First step: feature extraction with ORB
    static int nFeatures = 400;   // ORB's first parameter is the number of features
    OrbFeatureDetector detector(nFeatures);



    vector< KeyPoint > keypoints_object, keypoints_scene;

    detector.detect(gray_image1, keypoints_object);
    detector.detect(gray_image2, keypoints_scene);



    //Second step: descriptor extraction
    OrbDescriptorExtractor extractor;

    Mat descriptors_object, descriptors_scene;

    extractor.compute(gray_image1, keypoints_object, descriptors_object);
    extractor.compute(gray_image2, keypoints_scene, descriptors_scene);



    //Third step: match with BFMatcher
    BFMatcher matcher(NORM_HAMMING,false);
    vector< DMatch > matches;
    matcher.match(descriptors_object, descriptors_scene, matches);

    double max_dist = 0; double min_dist = 100;



    //distance between keypoints
    //with ORB it works better without it
    /*for (int i = 0; i < descriptors_object.rows; i++)
    {
        double dist = matches[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }
    */




    //take just the good points
    //with orb it works better without it
    vector< DMatch > good_matches;

    good_matches = matches;

    /*for (int i = 0; i < descriptors_object.rows; i++)
    {
        if (matches[i].distance <= 3 * min_dist)
        {
            good_matches.push_back(matches[i]);
        }
    }*/
    vector< Point2f > obj;
    vector< Point2f > scene;


    //take the keypoints
    for (int i = 0; i < good_matches.size(); i++)
    {
        obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
        scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
    }

    //static Mat mat_match;
    //drawMatches(img_loop, keypoints_object, current_frame, keypoints_scene,good_matches, mat_match, Scalar::all(-1), Scalar::all(-1),vector<char>(), 0);


    // homography with RANSAC
    if (obj.size() >= 4)
    {

        Mat H = findHomography(obj, scene, CV_RANSAC,5);


        //take the x_offset and y_offset
        /*the offset matrix is of the type

        |1 0 x_offset |
        |0 1 y_offset |
        |0 0 1           |
        */
        offset.at<double>(0, 2) = H.at<double>(0, 2);
        offset.at ...
(more)
2015-07-24 04:17:38 -0500 asked a question Speed up mosaicing using only a part

Hi all! I'm writing a program that makes a mosaic from aerial videos. The obvious problem is that as the mosaic becomes larger, it takes more time to compute features. I let the program run for 3 hours, but at a certain point an error came out (something about IDxONE and INTMAX). So I thought of taking just a part of the mosaic to do the calculations, both to speed it up and to avoid memory-related errors. I can extract the ROI with current_frame = Mat(mosaic, Rect(x, y, w, h)) and do the calculation, but the question is: how can I remap the points into the global mosaic coordinates?

Thanks!

I'm using visual studio 2013 and c++

EDIT I'm using this code http://answers.opencv.org/question/60...

2015-07-23 15:23:30 -0500 commented answer Extract common part of images

I'll add it as soon as possible!

2015-07-15 05:40:00 -0500 answered a question Detect object in noisy image

I resolved it by manually thresholding the images and applying a Gaussian blur, like @thdrksdfthmn said.

2015-07-13 11:04:22 -0500 commented question Detect object in noisy image

Hi! I've done all the steps you wrote. If I apply the threshold I get a black image. I'm going to edit the post so you can see what I get!

2015-07-13 10:24:45 -0500 asked a question Detect object in noisy image

Hi all, as the title says, I need to detect objects in noisy images. Here's how my program should work:

  1. Align 2 images
  2. use absdiff to find the differences
  3. bound the differences (in my case the objects) with a rectangle.

The problem is that the two images could have only a part in common, so the difference image will be noisy. Here's an example

As you can see, there's a wheel. I need to bound that wheel, but if I use findContours it gets all the contours except the one of the wheel. Can you help me?

EDIT I've converted to grayscale and applied a Gaussian blur on both images before subtraction. I get this:

How can I proceed now?

EDIT 2 Here's the original images

EDIT 3 I had to remove images

2015-07-13 09:25:25 -0500 answered a question Extract common part of images

I've resolved it by using the homography to map the common part from one image to the other, so I can crop it out!

2015-07-06 16:27:36 -0500 asked a question Extract common part of images

Hi, I'm writing a program that finds differences between images. For now, I'm finding features with AKAZE, so I have the common points of the 2 images. The problem is that these 2 images have only a part in common. How can I extract the common part from both images? For a better explanation: I need to extract the common part from the first image and then from the second, so I can use absdiff to find the differences.

Thanks to all!

2015-07-06 04:34:35 -0500 asked a question Highlights images differences with a rectangle

Hi all, I need to find differences between images and bound them with a rectangle. Here are the steps I'm doing:

  1. Align the two images. For now, I'm finding features between them and using findHomography to warp one onto the other.
  2. Make a simple difference between them with absdiff

and what I obtain is this:

Now I need to draw a rectangle that bounds that part. Ideally it should bound just the wheel and not the region around it. I tried this code:

Mat thresh, gray;
cvtColor(diff, gray, CV_BGR2GRAY);
threshold(gray, thresh, 1, 255, THRESH_BINARY);

vector<vector<Point> > contours;
findContours(thresh, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
Rect bound;
bound = boundingRect(contours[0]);

//print the rectangle
rectangle(frame, bound, Scalar(0, 0, 255));

It worked with the images taken from here http://stackoverflow.com/questions/27... but it does not work with this one.

I also tried background subtraction but it works poorly.

How can I resolve this?

PS: I'm programming in C++

2015-07-06 04:08:07 -0500 commented question Image registration in opencv

I think you need to find the common points between the two images with a feature extractor, and then use findHomography to map these points from one image to the other.

2015-06-17 05:34:37 -0500 commented answer Image stitching of translating images

Reading the documentation, it seems that I need DEM information, which I don't have. What a mess XD

2015-06-17 02:33:43 -0500 commented answer Image stitching of translating images

@LBerger I've seen the web page you linked above, and that's exactly what happens to me! I had found Orfeo before but I didn't know it was the solution. I'll give it a try and let you know. Thanks a lot man!

2015-06-16 10:54:22 -0500 commented answer Image stitching of translating images

I've found that estimateRigidTransform gives a matrix with rotation and translation. It works pretty well, but here's the problem: I need to orthorectify the images. How can that be done?

2015-06-15 14:27:29 -0500 commented answer Image stitching of translating images

Hi! It does not work; I have only a part of the image and the other half is black. I see that I need rotation too. I can't use the homography because it ruins my stitch/mosaic with strange warping. How can I add rotation?

2015-06-14 16:16:12 -0500 asked a question Image stitching of translating images

Hi all, I'm writing a program that stitches images taken by a flying drone. The problem is that the images are translating, as if the drone were acting like a "scanner". So, when I calculate feature points and then the homography, it messes up my whole mosaic. Is there a way in OpenCV (or with OpenCV together with another library) to stitch together images that differ by a translation instead of a rotation?

2015-06-11 10:19:14 -0500 commented answer Stitcher module for non-linear stitching

I've obtained this with Hugin https://mega.co.nz/#!H0ghUYKL!JmsrOPE... , but I used the program itself (not calling it from my C++ program). Could the OpenCV stitching module achieve these results?

2015-06-11 02:06:11 -0500 commented answer Stitcher module for non-linear stitching

The fact is that I'm already using these parameters to correct lens distortion, so I can't really figure out why it's not working (I'm sorry xD). The big problem is that I have no time to rewrite the pipeline (I'm already way behind with this work). Do you know any tool/library that does this through a CLI, so I can call it from my C++ program?

2015-06-10 14:31:25 -0500 commented answer Stitcher module for non-linear stitching

So how can I stitch together planar images? I have both the intrinsic parameters of the camera and the distortion coefficients (both calculated with the OpenCV example).