motiur's profile - activity

2014-01-31 05:48:50 -0600 commented answer Algorithm used in Descriptor Matcher trainer in OpenCV

So what happens with the other matchers?

2014-01-29 12:20:07 -0600 asked a question Algorithm used in Descriptor Matcher trainer in OpenCV

The code snippet below shows the basics of training a descriptor matcher used in object recognition.

detector = cv.FeatureDetector('ORB');
extractor = cv.DescriptorExtractor('ORB');
matcher = cv.DescriptorMatcher('BruteForce-Hamming');

for (i = 1 to N)
{
   keypoints(i) = detector.detect(image(i));
   descriptors(i) = extractor.compute(image(i), keypoints(i));
   matcher.add(descriptors(i));
}
matcher.train();

The code is only pseudocode, not syntactically correct; what I want to know is how the matcher's train() function works here.

2013-12-01 18:31:33 -0600 asked a question Error in calculating perspective transform for opencv in Matlab

I am trying to recode feature matching and homography using mexopencv. Mexopencv ports the OpenCV vision toolbox into Matlab.

My code in Matlab using OpenCV toolbox:

function hello
    disp('Feature matching demo. Press any key when done.');

    % Set up camera
    camera = cv.VideoCapture;
    pause(3); % Necessary in some environments.

    % Set up display window
    window = figure('KeyPressFcn',@(obj,evt)setappdata(obj,'flag',true));
    setappdata(window,'flag',false);

    object = imread('D:/match.jpg');

    %Conversion from color to gray
    object = cv.cvtColor(object,'RGB2GRAY');

    %Declaring detector and extractor
    detector = cv.FeatureDetector('SURF');
    extractor = cv.DescriptorExtractor('SURF');

    %Calculating object keypoints
    objKeypoints = detector.detect(object);

    %Calculating object descriptors
    objDescriptors = extractor.compute(object,objKeypoints);

    % Start main loop
    while true
        % Grab and preprocess an image
        im = camera.read;
        %im = cv.resize(im,1);
        scene = cv.cvtColor(im,'RGB2GRAY');

        sceneKeypoints = detector.detect(scene);

        if length(sceneKeypoints) < 2 
            continue
        end;

        sceneDescriptors = extractor.compute(scene,sceneKeypoints);

        matcher = cv.DescriptorMatcher('BruteForce');
        matches = matcher.match(objDescriptors,sceneDescriptors);

        objDescriptRow = size(objDescriptors,1); % one descriptor (and one match) per row
        dist_arr = zeros(1,objDescriptRow);

        for i=1:objDescriptRow
            dist_arr(i) = matches(i).distance;
        end;

        min_dist = min(dist_arr);

        N = 10000;    
        good_matches = repmat(struct('distance',0,'imgIdx',0,'queryIdx',0,'trainIdx',0), N, 1 );

        goodmatchesSize = 0;

        for i=1:objDescriptRow
            if matches(i).distance < 3 * min_dist
                good_matches(i).distance = matches(i).distance;
                good_matches(i).imgIdx = matches(i).imgIdx;
                good_matches(i).queryIdx = matches(i).queryIdx;
                good_matches(i).trainIdx = matches(i).trainIdx;
                goodmatchesSize = goodmatchesSize +1;
            end
        end

        im_matches = cv.drawMatches(object, objKeypoints, scene, sceneKeypoints,good_matches);

        objPoints = [];
        scnPoints = [];

        for i=1:goodmatchesSize

            qryIdx = good_matches(i).queryIdx;
            trnIdx = good_matches(i).trainIdx;
            if qryIdx == 0 
                continue 
            end;
            if trnIdx == 0
                continue
            end;

            first_point = objKeypoints(qryIdx).pt;
            second_point = sceneKeypoints(trnIdx).pt;

            objPoints(i,:)= (first_point);

            scnPoints(i,:) = (second_point);

        end

        H = cv.findHomography(objPoints,scnPoints);

        objectCorners = zeros(4,2);
        sceneCorners = zeros(4,2);

        objectCorners(1,:) = [0, 0];
        objectCorners(2,:) = [size(object,2), 0];
        objectCorners(3,:) = [size(object,2), size(object,1)];
        objectCorners(4,:) = [0, size(object,1)];

        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

        sceneCorners = cv.perspectiveTransform(objectCorners,H); % This is where the problem occurs

       %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

        imshow(im_matches);
        % Terminate if any user input
        flag = getappdata(window,'flag');
        if isempty(flag)||flag, break; end
        pause(0.000000001);
    end

% Close

    close(window);

end

The error:

Error using cv.perspectiveTransform
Unexpected Standard exception from MEX file.
What() is: C:\slave\builds\WinInstallerMegaPack\src\opencv\modules\core\src\matmul.cpp:1926:
error: (-215) scn + 1 == m.cols && (depth == CV_32F || depth == CV_64F)

Error in hello (line 148)
     cv.perspectiveTransform(objectCorners,H);

Now, I have come across this post, which replicates my error. The problem faced there is similar to mine, and I fixed the issue regarding the setting of the object coordinates the same way. How did I do that? I have working code that takes a live feed from a webcam and matches images by checking for a homography in pure OpenCV, i.e. without Matlab involved. The problem starts from the call to perspectiveTransform(objCorners, H). Now, there is some ... (more)

2013-11-03 18:41:21 -0600 asked a question Displaying meta-data of matched image to user after successful image matching via surf

I managed to match two images using SURF; now I want to tell the user, via a text string, that the matching succeeded. The rectangle that shows the match is visually appealing, but is there a way to store meta-data within the Mat struct, so that after a successful match it can be shown to the user?

2013-11-03 18:23:05 -0600 commented answer Sample image as training image

Would you add a few more lines?

2013-11-01 07:02:14 -0600 asked a question Sample image as training image

So, I was following this code sample from OpenCV about SURF and homography, and I was interested in the training sample required for such an experiment. I downloaded the two images at the bottom, box.png and box_in_scene.png, to validate the correctness of the code, and everything was all right. Then I went to test the code with my own images: on the left an image of a flash drive, and on the right an image of a scissor with a USB drive. I failed to get any rectangular box on the test image (the scissor and USB drive). Usb and scissor. However, I know the code works when I take a different training sample, for example this one with a paper box on the left and the paper box mixed with a bed sheet on the right. Box and bed sheet. Now my question is: what sort of training images should I rely on to get a good response, or does it have something to do with the scenery that I choose as my test sample? Also, had I chosen a video sample as my test case, would I receive a more responsive result? Thanks.

2013-10-08 06:35:12 -0600 asked a question Native OpenCV C++ for Android

Hi, I was interested in porting link text to Android. I have got this link text so far. Is this good enough? I am particularly interested in porting native OpenCV C++ to Android, since many of the APIs are more easily obtainable in C++ than in Java. I am also looking for the Bag of Words feature extractor, for which I have found a lot of support in the C++ API.