OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright [OpenCV foundation](http://www.opencv.org), 2012-2018. Fri, 05 Jul 2019 07:18:27 -0500

Octave Wrapper/Mex Compilers
http://answers.opencv.org/question/215210/octave-wrappermex-compilers/

I tried to build the Matlab wrappers for OpenCV, even though I don't have Matlab installed, with no luck:
```
CMake Warning at /home/user/git/opencv_contrib/modules/matlab/cmake/init.cmake:13 (message):
  Matlab or compiler (mex) was not found. Disabling Matlab bindings...
Call Stack (most recent call first):
  cmake/OpenCVModule.cmake:313 (include)
  cmake/OpenCVModule.cmake:379 (_add_modules_1)
  modules/CMakeLists.txt:7 (ocv_glob_modules)
```
presumably because the Matlab mex compiler wasn't found.
However this [link at Mathworks](https://uk.mathworks.com/support/requirements/supported-compilers.html) shows that there are non-Matlab compilers available for Linux, Mac, and Windows.
Would it be possible to migrate the existing Matlab mex build process to the compilers listed above, to give us Octave users access to the latest version of OpenCV? (mexOpenCV has fallen behind.)

InformationEntropy, Fri, 05 Jul 2019 07:18:27 -0500
http://answers.opencv.org/question/215210/

Recall Precision Curve - OpenCV
http://answers.opencv.org/question/195048/recall-precision-curve-opencv/

After reading the description of the following OpenCV function:
`recallPrecisionCurve = cv.computeRecallPrecisionCurve(matches1to2, correctMatches1to2Mask)`
I have two questions.

1) How can we get **correctMatches1to2Mask** (an input to the above function)? If it is to be found using outlier rejection through RANSAC, then what about the **False Negatives** (the feature points that were detected in both images and could have been matched, but were missed by the feature descriptor under test)?
False Negatives are used in the formula of Recall: Recall = True Positives / (False Negatives + True Positives)
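To make the formula concrete, here is a small pure-Python sketch (an illustration only, not OpenCV's actual `computeRecallPrecisionCurve` implementation) that sweeps a distance threshold over the matches, from best to worst; `total_correct` is a hypothetical count of all ground-truth correspondences (TP + FN):

```python
def recall_precision_curve(distances, correct_mask, total_correct):
    """Sort matches by distance (best first) and, for each prefix of
    accepted matches, record a (recall, precision) pair.

    distances     -- one match distance per putative match
    correct_mask  -- True where the match is a ground-truth correct match
    total_correct -- number of ground-truth correspondences (TP + FN)
    """
    order = sorted(range(len(distances)), key=lambda i: distances[i])
    curve = []
    tp = 0
    for n, i in enumerate(order, start=1):
        if correct_mask[i]:
            tp += 1
        recall = tp / total_correct   # TP / (TP + FN)
        precision = tp / n            # TP / (TP + FP)
        curve.append((recall, precision))
    return curve

# Toy data: 4 putative matches, 3 of them correct,
# 5 ground-truth correspondences in total.
curve = recall_precision_curve([0.1, 0.4, 0.2, 0.9],
                               [True, False, True, True], 5)
print(curve[-1])  # after accepting all matches: (0.6, 0.75)
```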
2) The output of this function is an Nx2 matrix. Does the first column of this matrix give the Recall values or the Precision values?

SAKhan, Wed, 04 Jul 2018 14:14:02 -0500
http://answers.opencv.org/question/195048/

Essential Matrix computation with Ransac 3000 iterations
http://answers.opencv.org/question/188923/essential-matrix-computation-with-ransac-3000-iterations/

The following function often returns an inaccurate Essential Matrix:
`[E, mask] = cv.findEssentialMat(ptsObj,ptsScene,'CameraMatrix',K,'Method','Ransac','Threshold',1);`
Is there any way to compute the **Essential Matrix** using the above function with 3000+ RANSAC trials? Can we use loops in some way to obtain a more robust and accurate solution?
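For context on the trial count: the usual RANSAC analysis says that to find at least one all-inlier sample with probability p, given inlier ratio w and sample size s (5 points for an essential matrix), you need about N = log(1-p) / log(1-w^s) iterations. A minimal sketch of that arithmetic (an illustration only, not mexopencv's API):

```python
import math

def ransac_trials(confidence, inlier_ratio, sample_size):
    """Number of RANSAC iterations N such that, with probability
    `confidence`, at least one sample of `sample_size` points is
    all inliers: N = log(1 - p) / log(1 - w**s)."""
    good_sample = inlier_ratio ** sample_size
    return math.ceil(math.log(1.0 - confidence) /
                     math.log(1.0 - good_sample))

# Five-point essential-matrix estimation with 50% inliers and 99.9%
# confidence needs a couple of hundred trials; a 30% inlier ratio
# already pushes the count close to 3000.
print(ransac_trials(0.999, 0.5, 5))
print(ransac_trials(0.999, 0.3, 5))
```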
SAKhan, Sun, 08 Apr 2018 17:33:49 -0500
http://answers.opencv.org/question/188923/

Any OpenCV Function Similar to the helperFind3Dto2DCorrespondences() function of MATLAB?
http://answers.opencv.org/question/188920/any-opencv-function-similar-to-helperfind3dto2dcorrespondences-function-of-matlab/

Is there any OpenCV function that could generate the same output as the following function in MATLAB?
[worldPoints, imagePoints] = helperFind3Dto2DCorrespondences(vSet,...
    cameraParams, indexPairs, currPoints);

SAKhan, Sun, 08 Apr 2018 16:55:13 -0500
http://answers.opencv.org/question/188920/

Why are Pose Estimation Results of MEXOPENCV and MATLAB Functions so diverse?
http://answers.opencv.org/question/188753/why-pose-estimation-results-of-mexopencv-and-matlab-functions-so-diverse/

Hello.
I was testing mexopencv functions for estimating the pose of an image pair from MATLAB's Visual Odometry example. However, the relative location computed by the mexopencv functions differs from the ground truth (opposite sign), while the results computed with MATLAB's functions match the ground truth.
I have observed that this kind of discrepancy appears when the camera moves forward (shown in the images).
Why is that? Is there an underlying bug in the code? Both codes are provided below.
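One possible explanation, offered here as an assumption rather than a confirmed diagnosis: OpenCV's `recoverPose` returns an [R|t] that maps points from the first camera frame into the second (x2 = R*x1 + t), whereas MATLAB's helpers report camera 2's orientation and location expressed in camera 1's frame, so the location works out to -R'*t and a sign flip shows up for forward motion. A minimal numpy sketch of that conversion:

```python
import numpy as np

def opencv_to_matlab_pose(R, t):
    """Convert OpenCV's frame-to-frame transform (x2 = R @ x1 + t)
    into a MATLAB-style relative orientation/location of camera 2
    expressed in camera 1's coordinates.

    Camera 2's center C satisfies 0 = R @ C + t, so C = -R.T @ t.
    """
    orientation = R.T        # camera-2 axes expressed in frame 1
    location = -R.T @ t      # camera-2 center in frame-1 coordinates
    return orientation, location

# Pure forward motion, no rotation: the camera moves 1 unit along +z,
# yet OpenCV's t points along -z, which is exactly where a stray
# sign can appear if the conventions are mixed up.
R = np.eye(3)
t = np.array([0.0, 0.0, -1.0])   # OpenCV translation (frame-2 coords)
orient, loc = opencv_to_matlab_pose(R, t)
print(loc)                       # camera-2 center at +z in frame 1
```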
MATLAB's Code:
--------------
close all, clear all, clc
K = [615 0 320; 0 615 240; 0 0 1];
cameraParams = cameraParameters('IntrinsicMatrix', K);
images = imageDatastore(fullfile(toolboxdir('vision'),'visiondata','NewTsukuba'));
% Load ground truth camera poses.
load(fullfile(toolboxdir('vision'),'visiondata','visualOdometryGroundTruth.mat'));
viewId = 15; % Number of 2nd Image, in the image pair that is to be matched
Irgb = readimage(images, viewId-1); % Read image number 1
% Convert to gray scale and undistort.
prevI = undistortImage(rgb2gray(Irgb), cameraParams);
prevPoints = detectSURFFeatures(prevI, 'MetricThreshold', 500); % Detect features.
% Select a subset of features, uniformly distributed throughout the image.
numPoints = 150;
prevPoints = selectUniform(prevPoints, numPoints, size(prevI));
% Extract features. Using 'Upright' features improves matching quality if
% the camera motion involves little or no in-plane rotation.
prevFeatures = extractFeatures(prevI, prevPoints, 'Upright', true);
% Read image 2.
Irgb = readimage(images, viewId);
% Convert to gray scale and undistort.
I = undistortImage(rgb2gray(Irgb), cameraParams);
% Match features between the previous and the current image.
[currPoints, currFeatures, indexPairs] = helperDetectAndMatchFeatures(prevFeatures, I);
format long
% Estimate the pose of the current view relative to the previous view.
[relative_orient, relative_location, inlierIdx] = helperEstimateRelativePose(...
prevPoints(indexPairs(:,1)), currPoints(indexPairs(:,2)), cameraParams);
relative_location_normalized = relative_location * norm(groundTruthPoses.Location{viewId});
groundTruth_Loc = groundTruthPoses.Location{viewId};
display(relative_location_normalized);
display(groundTruth_Loc);
MEXOPENCV's Code:
-----------------
close all, clear all, clc
images = imageDatastore(fullfile(toolboxdir('vision'),'visiondata','NewTsukuba'));
viewId = 15; % Number of 2nd Image, in the image pair that is to be matched
im1 = readimage(images, viewId-1);
im2 = readimage(images, viewId);
load(fullfile(toolboxdir('vision'),'visiondata','visualOdometryGroundTruth.mat'));
detector = cv.SIFT('ContrastThreshold',0.04);
matcher = cv.DescriptorMatcher('BruteForce-L1');
[k1,d1] = detector.detectAndCompute(im1);
[k2,d2] = detector.detectAndCompute(im2);
matches = matcher.knnMatch(d1, d2, 2);
idx = cellfun(@(matches) matches(1).distance < 0.7 * matches(2).distance, matches);
matches = cellfun(@(matches) matches(1), matches(idx));
ptsObj = cat(1, k1([matches.queryIdx]+1).pt); % queryIdx: index of query descriptors in image 1
ptsScene = cat(1, k2([matches.trainIdx]+1).pt); % trainIdx: index of train descriptors in image 2
cameraMatrix_VOdometry = [615 0 320; 0 615 240; 0 0 1]; % INTRINSIC MATRIX for NewTsukuba VO Dataset
format long
[E, mask] = cv.findEssentialMat(ptsObj,ptsScene,'CameraMatrix',cameraMatrix_VOdometry,'Method','LMedS');
%display(E);
%Recover relative camera rotation and translation from an estimated essential matrix
[relativeOrient, relativeLoc, good, inlierIdx] = cv.recoverPose(E, ptsObj, ptsScene,'Mask',mask); % points1 and points2 are the matched feature points (inliers)
groundTruth_Orient = groundTruthPoses.Orientation{viewId};
%display(groundTruth_Orient);
%display(relativeOrient);
relativeLoc = transpose(relativeLoc);
relativeLoc_normalized = relativeLoc * norm(groundTruthPoses.Location{viewId});
groundTruth_Loc = groundTruthPoses.Location{viewId};
display(groundTruth_Loc);
display(relativeLoc_normalized);

SAKhan, Thu, 05 Apr 2018 14:06:49 -0500
http://answers.opencv.org/question/188753/

How to set other parameters in StereoBM
http://answers.opencv.org/question/70169/how-to-set-other-parameters-in-stereobm/

I noticed that in mexopencv the StereoBM object lacks many parameters that can be set in OpenCV. I find that odd, because in the related StereoSGBM you can set all the parameters.
Links for comparison:
BM: https://kyamagu.github.io/mexopencv/matlab/StereoBM.init.html
SGBM: https://kyamagu.github.io/mexopencv/matlab/StereoSGBM.StereoSGBM.html
I also checked the source code of StereoBM.m (link: https://github.com/kyamagu/mexopencv/blob/master/%2Bcv/StereoBM.m) and parameters like "SpeckleRange" and "SpeckleWindowSize" are listed there, but I don't know how to use them.
So what I'm asking is whether there is any way I can set parameters (like "UniquenessRatio", "SpeckleRange", "SpeckleWindowSize", and so on) of a StereoBM object using mexopencv?

Kristian Zarn, Fri, 04 Sep 2015 19:16:39 -0500
http://answers.opencv.org/question/70169/