OpenCV Q&A Forum - RSS feed (http://answers.opencv.org/questions/), OpenCV answers, en. Copyright [OpenCV foundation](http://www.opencv.org), 2012-2018. Fri, 18 Dec 2020 07:38:58 -0600
Problem with undistortPoints() function in pose estimation of imagehttp://answers.opencv.org/question/239443/problem-with-undistortpoints-function-in-pose-estimation-of-image/ I have written about my task [here](https://answers.opencv.org/question/238792/problem-with-building-pose-mat-from-rotation-and-translation-matrices/). I have
a set of images with known pose which were used for scene reconstruction, and a query image from the same space without a pose. I need to calculate the pose of the query image. I solved this problem using the essential matrix. Here is the code:
Mat E = findEssentialMat(pts1, pts2, focal, pp, FM_RANSAC, F_DIST, F_CONF, mask);
// Read pose for view image
Mat R, t; //, mask;
recoverPose(E, pts1, pts2, R, t, focal, pp, mask);
The only problem is that the OpenCV documentation [states](https://docs.opencv.org/3.4/d9/d0c/group__calib3d.html#ga13f7e34de8fa516a686a56af1196247f) that the findEssentialMat function assumes that points1 and points2 are feature points from cameras with the same camera intrinsic matrix. That's not the case for us: the scene images and the query image can be captured by cameras with different intrinsics.
I suppose I should use the undistortPoints() function. According to the [documentation](https://docs.opencv.org/3.4/da/d54/group__imgproc__transform.html#ga55c716492470bfe86b0ee9bf3a1f0f7e), undistortPoints() takes two important parameters: distCoeffs and cameraMatrix.
Both the scene images and the query image have associated calibration parameters (fx, fy, cx, cy).
I obtain cameraMatrix parameter this way:
Mat K_v = (Mat_<double>(3, 3) <<
    fx, 0, cx,
    0, fy, cy,
    0, 0, 1); // no CV_64F in the value list: Mat_<double> already fixes the type
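For comparison, here is how the same intrinsic matrix looks assembled in NumPy (a minimal sketch; the fx, fy, cx, cy values below are placeholders, not the asker's calibration):

```python
import numpy as np

def build_intrinsic_matrix(fx, fy, cx, cy):
    """Assemble a 3x3 pinhole camera matrix K from focal lengths and principal point."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]], dtype=np.float64)

K = build_intrinsic_matrix(1000.0, 1000.0, 640.0, 360.0)
```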
Is this correct? Moreover, I need to get distCoeffs from somewhere. How can I obtain the distortion coefficients for the scene images and the query image? Or should I solve this another way?sigmoid90Fri, 18 Dec 2020 07:38:58 -0600http://answers.opencv.org/question/239443/tvec and coordinate systems qhttp://answers.opencv.org/question/238833/tvec-and-coordinate-systems-q/The output of solvePnP is tvec and rvec: is tvec in world coordinates? I am guessing it is, because the chessboard calibration is in mm.
Second, what is tvec measured from and to?
coldheatMon, 07 Dec 2020 12:20:58 -0600http://answers.opencv.org/question/238833/Problem with building pose Mat from rotation and translation matriceshttp://answers.opencv.org/question/238792/problem-with-building-pose-mat-from-rotation-and-translation-matrices/I have two images captured in the same space (scene), one with a known pose. I need to calculate the pose of the second (query) image. I have obtained the relative camera pose using the essential matrix. Now I am calculating the camera pose through matrix multiplication ([here](https://answers.opencv.org/question/31421/opencv-3-essentialmatrix-and-recoverpose/) is the formula).
I am trying to build the 4x4 pose Mat from the rotation and translation matrices. My code follows:
Pose bestPose = poses[best_view_index];
Mat cameraMotionMat = bestPose.buildPoseMat();
cout << "cameraMotionMat: " << cameraMotionMat.rows << ", " << cameraMotionMat.cols << endl;
float row_a[4] = {0.0, 0.0, 0.0, 1.0};
Mat row = Mat::zeros(1, 4, CV_64F);
cout << row.type() << endl;
cameraMotionMat.push_back(row);
// cameraMotionMat.at<float>(3, 3) = 1.0;
Earlier in the code, for each view image:
Mat E = findEssentialMat(pts1, pts2, focal, pp, FM_RANSAC, F_DIST, F_CONF, mask);
// Read pose for view image
Mat R, t; //, mask;
recoverPose(E, pts1, pts2, R, t, focal, pp, mask);
Pose pose (R, t);
poses.push_back(pose);
Initially, bestPose.buildPoseMat() returns a Mat of size (3, 4). I need to extend the Mat to size (4, 4) with the row [0.0, 0.0, 0.0, 1.0] (a zero vector with 1 in the last position). Strangely, I get the following output when I print the resulting matrix:
> [0.9107258520121255,
> 0.4129580377861768, 0.006639390377046724, 0.9039011699443721;
> 0.4129661348384583, -0.9107463665340377, 0.0001652925667582038, -0.4277340727282191;
> 0.006115059555925467, 0.002591307168000504, -0.9999779453436902, 0.002497598952195387;
> 0, 0.0078125, 0, 0]
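For comparison, a NumPy sketch of extending a 3x4 [R|t] pose to a 4x4 homogeneous matrix with an explicit [0, 0, 0, 1] bottom row of matching type (R and t below are placeholders; in the C++ above, a type mismatch between the pushed row and the pose Mat would produce exactly this kind of garbage last row):

```python
import numpy as np

R = np.eye(3)                          # placeholder rotation from recoverPose
t = np.array([[0.5], [-0.4], [0.0]])   # placeholder translation

pose_3x4 = np.hstack([R, t])               # 3x4 [R|t]
bottom = np.array([[0.0, 0.0, 0.0, 1.0]])  # same dtype as pose_3x4
pose_4x4 = np.vstack([pose_3x4, bottom])   # 4x4 homogeneous pose
```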
The last row does not look like it should: [0, 0.0078125, 0, 0] rather than [0.0, 0.0, 0.0, 1.0]. Is this implementation correct? What could be the problem with this matrix?sigmoid90Sun, 06 Dec 2020 04:51:36 -0600http://answers.opencv.org/question/238792/Pixel-wise matrix multiplicationhttp://answers.opencv.org/question/238652/pixel-wise-matrix-multiplication/I want to multiply every pixel in an image by a 3x3 matrix, treating every pixel as a 3D vector of colors.
In mathematical terms the operation would be something like:
**u** = **M** **v**
where **u** is the resulting pixel value, **M** the (3x3) matrix, and **v** the original pixel value.
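As a sketch of the idea (NumPy rather than OpenCV; the matrix and image below are placeholders), the per-pixel product **u** = **M** **v** can be done with one reshape and one matrix multiply:

```python
import numpy as np

img = np.random.rand(4, 5, 3)   # placeholder H x W x 3 image
M = np.array([[0.2, 0.5, 0.3],
              [0.1, 0.7, 0.2],
              [0.3, 0.3, 0.4]])

# Treat each pixel as a 3-vector v and compute u = M v:
# flatten to (H*W, 3), multiply by M^T on the right, reshape back.
out = (img.reshape(-1, 3) @ M.T).reshape(img.shape)
```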
I have searched the documentation but have not been able to find a way to do this in OpenCV. Does anybody have any suggestions?fadeto404Thu, 03 Dec 2020 05:45:30 -0600http://answers.opencv.org/question/238652/C++ allocate and copy matrix take too much timehttp://answers.opencv.org/question/237444/c-allocate-and-copy-matrix-take-too-much-time/Good morning,
When I copy a part of a cv::Mat of size 5000x46357 (type CV_32F) to a cv::Mat of size 100x46357 (type CV_32F), it takes 3 ms. To reduce this time, I initialize the 100x46357 cv::Mat to ones, but this takes 7 ms and doesn't reduce the copy time. Is it normal that cv::Mat::ones takes this long?
I also have a problem with this matrix when I pass it to a function after the copy: if I don't initialize it with ones, the function takes 9.5 ms, but if I initialize it with ones, the function takes 2.5 ms. Is it normal that initializing the matrix outside the function affects the function's timing?
My final problem is that I do a matrix operation that takes 0.014 ms, but when I assign the result to a matrix, it takes 4.7 ms. The result matrix has size 2x46357 and type CV_32F.
So, is it normal that allocating and filling a matrix takes this much time?
I work on Windows 10 Pro 1909 x64, an AMD Ryzen Threadripper 1900X 8-core CPU, and an NVIDIA Titan V GPU, with OpenCV x64 version 4.1.1; I work in Visual Studio 2017 and compile in Release x64 mode.
Here a part of code:
Copy of matrix
![image description](/upfiles/16045107564650775.png)
Matrix operation and assignment
![image description](/upfiles/16045107914527976.png)
Thank you for your response.vdhersWed, 04 Nov 2020 10:45:54 -0600http://answers.opencv.org/question/237444/how do I add missing module cv::drawFrameAxes without breaking installationhttp://answers.opencv.org/question/235996/how-do-i-add-missing-module-cvdrawframeaxes-without-breaking-installation/I haven't had any problem with my current OpenCV until I tried to use cv::drawFrameAxes(... for the first time.
Compiler VS2017
Error C3861 'drawFrameAxes': identifier not found solutionTransformObjPoints
Error C2039 'drawFrameAxes': is not a member of 'cv' solutionTransformObjPoints c:\users\clang\desktop\working folder\solutiontransformobjpoints\solutiontransformobjpoints\solutiontransformobjpoints.cpp 263
Can anyone help me fix this? I don't want to break this installation over one module.
General configuration for OpenCV 3.4.1 =====================================
Version control: 3.4.1
Platform:
Timestamp: 2018-02-23T13:47:28Z
Host: Windows 10.0.16299 AMD64
CMake: 3.9.3
CMake generator: Visual Studio 15 2017 Win64
CMake build tool: C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional/MSBuild/15.0/Bin/MSBu
superflyFri, 02 Oct 2020 15:57:55 -0500http://answers.opencv.org/question/235996/Cx, Cy not equal to specs?http://answers.opencv.org/question/231261/cx-cy-not-equal-to-specs/My K (intrinsic parameters) from OpenCV calibration is cx = 378.322 and cy = 222.498. The imager resolution is 752 (H) x 480 (V),
divided by 2 = 376 and 240. I'm not saying it shouldn't be that, but I am wondering because I have seen instances in which folks just set cx and cy to half the imager width and half the imager height. Which should I do?superflyMon, 15 Jun 2020 12:14:18 -0500http://answers.opencv.org/question/231261/Strange stretching effects in photo due to transformation by matrixhttp://answers.opencv.org/question/229362/strange-streching-effects-in-photo-due-to-transformation-by-matrix/ I am working on a stitching algorithm, and sometimes there is strange stretching of the photo. It is caused by a low number of keypoints, BUT I am interested in what causes it in this matrix:
![matrix](/upfiles/15876576686657783.png)
and the effect of this matrix is this: https://imgur.com/a/dwzm9Af (sorry, I cannot upload the image here; it seems too big)
Can you suggest how to reduce this effect, or what causes it?
RockStar1337Thu, 23 Apr 2020 11:06:48 -0500http://answers.opencv.org/question/229362/Understanding the camera matrixhttp://answers.opencv.org/question/89786/understanding-the-camera-matrix/ Hello all,
I used a chessboard calibration procedure to obtain a camera matrix using OpenCV and python based on this tutorial: http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_calib3d/py_calibration/py_calibration.html
I ran through the sample code on that page and was able to reproduce their results with the chessboard pictures in the OpenCV folder to get a camera matrix.
I then tried the same procedure with my own checkerboard grid and camera, and I obtained the following matrix:
mtx = [1535 0 638
0 1536 204
0 0 1]
I am trying to better understand these results, based on the camera sensor and lens I am using.
Based on: http://ksimek.github.io/2013/08/13/intrinsic/
Fx = fx * W/w
Fx = focal length in mm
W = sensor width in mm
w = image width in pixels
fx = focal length in pixels
The size of my images: 1264 x 512 (width x height)
I am using the following lens: http://www.edmundoptics.com/imaging/imaging-lenses/edmund-optics-designed-lenses/megapixel-finite-conjugate-video-imaging-lenses/58203/
This has focal length 8 mm.
I am using a FL3-U3-13Y3 camera from PtGrey (https://www.ptgrey.com/flea3-13-mp-mono-usb3-vision-vita-1300-camera), which has an image width of 12 mm, according to this picture:
![image description](/upfiles/1457570354722283.png)
From the camera matrix, fx is the element in the first row, first column. So above, fx = 1535. In short:
fx = 1535 pixels (from camera matrix I obtained)
w = 1264 pixels (image size I set)
W = 12 mm (from datasheet)
Fx = 8 mm (from datasheet)
Using: Fx = fx * W/w, we would expect
Fx = 1535 * 12 / 1264 = 14.57 mm
But the actual lens is 8 mm. Why the discrepancy?
I would think that the actual size of a chessboard square would have to be known, but I did not see any mention of setting that in the tutorial link I provided. I basically had to scale down the chessboard grid so that it would work with my camera setup.
I would appreciate any help or insight on this.
Thanks in advance
**EDIT:**
Actually to be more specific, the lens has a maximum camera sensor format of 1/3", while the camera sensor format is 1/2". I found an article on this: http://www.cambridgeincolour.com/tutorials/digital-camera-sensor-size.htm
Focal length multiplier = (1/2) / (1/3) = 1.5
Focal length of lens as listed on datasheet = 8 mm
Equivalent focal length of lens= 1.5 * 8 mm = 12 mm
Still, 12 mm is off from 14.57 mm. Am I not factoring something else in my calculation? Could this be happening from bad images that still happen to find the chessboard corners?
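The arithmetic in the question can be checked directly (a sketch using only the numbers quoted above):

```python
fx_pixels = 1535.0     # fx from the calibrated camera matrix
image_width = 1264.0   # image width in pixels
sensor_width = 12.0    # sensor width in mm, from the datasheet

# Fx = fx * W / w, the formula from the ksimek article cited above
estimated_focal_mm = fx_pixels * sensor_width / image_width   # ~14.57 mm

# Focal length multiplier between 1/2" camera format and 1/3" lens format
crop_factor = (1/2) / (1/3)               # 1.5
equivalent_focal_mm = crop_factor * 8.0   # 12 mm
```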
Below is an example image:
![image description](/upfiles/146194871634683.png)solarflareWed, 09 Mar 2016 18:45:05 -0600http://answers.opencv.org/question/89786/convert cv::Mat to vector<vector<uchar>> img;http://answers.opencv.org/question/224578/convert-cvmat-to-vectorvectoruchar-img/How do I efficiently convert cv::Mat to vector<vector<uchar>> img? Right now I am trying the code below:
vector<vector<Vec3b>> inputImg(OrigImg.cols,vector<Vec3b>(OrigImg.rows));
for(int i=0;i<OrigImg.cols;i++) {
for(int j=0;j<OrigImg.rows;j++) {
inputImg[i][j] = backgroundSubImg.at<cv::Vec3b>(j,i);
}
}
The reason I'm trying to do this is that accessing each pixel of a cv::Mat (scanning the whole image) with backgroundSubImg.at<cv::Vec3b>(j,i) seems a little slow; I want to access a pixel's color values directly by its position.
Right now my algorithm is accessing every pixel 640*460*8 times in each cycle.dineshlamaWed, 08 Jan 2020 02:21:34 -0600http://answers.opencv.org/question/224578/How to hash - Opencv matrix, lbph histogram?http://answers.opencv.org/question/215525/how-to-hash-opencv-matrix-lbph-histogram/Hi, I want to try to create a hash code from ***.yml. For example, I have an existing yml file with an OpenCV matrix and LBPH histogram:
%YAML:1.0
opencv_lbphfaces:
threshold: 1.7976931348623157e+308
radius: 1
neighbors: 8
grid_x: 8
grid_y: 8
histograms:
- !!opencv-matrix
rows: 1
cols: 16384
dt: f
data: [ 2.49739867e-02,
and so on....
Please give me some suggestions, methods, or existing source code for how to convert it into a hash code or another useful form. And, if I understood this yml file correctly, are the main face identity features stored in histograms:data?
I want to take this hashcode and put it into another system/request...
chinesestudFri, 12 Jul 2019 00:56:40 -0500http://answers.opencv.org/question/215525/Bus error (core dumped) while declaring cv::Mathttp://answers.opencv.org/question/215084/bus-error-core-dumped-while-declaring-cvmat/Hi,
I have been dealing with very simple matrices. The code snippet is below
cv::Size sz, sz1, sz2;
sz2 = cv::Size(4,1);
cv::Mat M(sz2, CV_8U);
int color_array_size = (1/3) * (point_cloud->width * point_cloud->height);
uint32_t arr [point_cloud->width * point_cloud->height] = {0};
uint8_t red_array[color_array_size] ={}, blue_array[color_array_size] ={} , green_array[color_array_size] ={};
sz= cv::Size(point_cloud->width, point_cloud->height);
sz1= cv::Size( point_cloud->width, point_cloud->height);
cv::Mat_ <cv::Vec3f> image(sz, CV_32FC3);
cv::Mat image_color(sz, CV_32F, arr);
The trouble is caused by `cv::Mat M(sz2, CV_8U);`; building the code is fine, but while running the executable a `Bus error (core dumped)` occurs. Some other types of matrices also cause this annoying trouble. Can anyone suggest something to get rid of it?Ani_CvTue, 02 Jul 2019 07:55:40 -0500http://answers.opencv.org/question/215084/sortIdx matrix in both directions gone wronghttp://answers.opencv.org/question/215036/sortidx-matrix-in-both-directions-gone-wrong/What I wish to do is first sort the rows of the matrix and then sort the columns, so I get this result:
Input: randommatrix
[ 6, 197, 39, 29;
97, 110, 86, 193;
76, 129, 151, 138]
Output:
correctly sorted matrix
[ 6, 29, 39, 151;
76, 97, 110, 193;
86, 129, 138, 197]
Now I must use sortIdx instead of sort because I need to know the original column and row of each number. The sort method works fine.
So what I did was:
sortIdx(matrix, matrixRowIndices, SORT_EVERY_ROW + SORT_ASCENDING);
I reconstruct the sorted matrix in two for loops based on those indices, giving Mat sortedRowmatrix.
This prints correctly:
sorted row matrix =
[ 6, 29, 39, 197;
86, 97, 110, 193;
76, 129, 138, 151]
Now, I want to sort the columns. I use:
sortIdx(sortedRowmatrix, matrixColIndices, SORT_EVERY_COLUMN + SORT_ASCENDING);
I get the correct matrixColIndices (explanation below):
sorted col indices=
[0, 0, 0, 2;
2, 1, 1, 1;
1, 2, 2, 0]
since:
Per-column analysis of "sorted row matrix":
- 0th column: nr 0 (6) is the smallest, after that nr 2 (76), after that nr 1 (86)
- 1st column: nr 0 (29) is the smallest, nr 1 (97) after that, nr 2 (129) after that.
- 2nd column: nr 0 (39) is the smallest, nr 1 (110) after that, nr 2 (138) after that.
- 3rd column: nr 2 (151) is the smallest, nr 1 (193) after that and nr 0 (197) after that
Then, I write out my final matrix based on original indices and I expect to get the sorted result.
Instead I get:
wrongly sorted matrix =
[ 6, 29, 39, 129;
151, 97, 110, 193;
97, 129, 138, 39]
Notice it also duplicates some numbers (97).
I cannot get my head around why my reconstruction apparently does not work.
I sort horizontally: this gives me the correct column indices.
I sort vertically: the horizontal ordering should stay the same, and this should give me the correct row indices.
The normal sort method works (but of course does not give the original positions I need). I also tried sorting columns first, then rows. The first step sorting columns works. The second step again doesn't.
What goes wrong?
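A NumPy sketch of the two-pass index sort (using the example matrix from the question) suggests one possible source of the error: after the column sort, the original position of an element must be recovered by composing the two index maps, not by applying each map to the original matrix independently:

```python
import numpy as np

m = np.array([[  6, 197,  39,  29],
              [ 97, 110,  86, 193],
              [ 76, 129, 151, 138]])

row_idx = np.argsort(m, axis=1)                       # like SORT_EVERY_ROW
row_sorted = np.take_along_axis(m, row_idx, axis=1)

col_idx = np.argsort(row_sorted, axis=0)              # like SORT_EVERY_COLUMN
fully_sorted = np.take_along_axis(row_sorted, col_idx, axis=0)

# Original position of fully_sorted[i, j]:
# the column sort says which row of row_sorted it came from,
# and row_idx of THAT row says which original column it came from.
orig_row = col_idx
orig_col = np.take_along_axis(row_idx, col_idx, axis=0)
```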
My code is at:
https://pastebin.com/0Li2j3xwAmberElferinkMon, 01 Jul 2019 11:01:47 -0500http://answers.opencv.org/question/215036/Matrix, invert() memory leakhttp://answers.opencv.org/question/209033/matrix-invert-memory-leak/- OpenCV => 3.4.5
- Operating System / Platform => Windows 10 64 bit
- Compiler => Visual Studio 2015
// C++ code example
int m = (int)xData.size();
cv::Mat J_k(m, 4, CV_64FC1);
cv::Mat invJ;
cv::invert(J_k, invJ, cv::DECOMP_SVD); // memory leak
A memory leak occurs when I use the invert function.
Please check this code and HELP me if you know why this leak occurs.
Thx..gaebalsaebalTue, 19 Feb 2019 03:08:03 -0600http://answers.opencv.org/question/209033/SolvePNP. Compensate for head rotationhttp://answers.opencv.org/question/208438/solvepnp-compensate-for-head-rotation/ Hi all!
I need to orient the detected face points to be parallel with the camera view.
I use the following code:
public void CalculateEulerAngles(int faceIndex)
{
var objectPoints = new List<Point3f>
{
...
};
var marks = Faces[faceIndex].Marks;
var imagePoints = new List<Point2f>
{
...
};
var focalLength = Image.Cols;
var center = new Point2f(Image.Cols / 2f, Image.Rows / 2f);
var cameraMatrix = new double[,] {{focalLength, 0, center.X}, {0, focalLength, center.Y}, {0, 0, 1}};
double[] rvec, tvec;
Cv2.SolvePnP(objectPoints, imagePoints, cameraMatrix, null, out rvec, out tvec);
double[,] rvecRodrigues;
Cv2.Rodrigues(rvec, out rvecRodrigues);
double[] eulerAngles;
double[,] camMatrix;
GetEulerAngles(rvecRodrigues, out eulerAngles, out camMatrix);
//
Point2f[] points;
double[,] jacobian;
Cv2.ProjectPoints(objectPoints, rvec, tvec, cameraMatrix, null, out points, out jacobian);
for (int i = 0; i < points.Length; i++)
Cv2.Circle(Image, (int)points[i].X, (int)points[i].Y, 5, Scalar.Aqua);
//
}
void GetEulerAngles(double[,] rvec, out double[] euler, out double[,] camMatrix)
{
double[] transVect, eulerAngles;
double[,] cameraMatrix,rotMatrix,rotMatrixX,rotMatrixY,rotMatrixZ;
double[,] projMatrix =
{
{rvec[0, 0], rvec[0, 1], rvec[0, 2], 0},
{rvec[1, 0], rvec[1, 1], rvec[1, 2], 0},
{rvec[2, 0], rvec[2, 1], rvec[2, 2], 0}
};
Cv2.DecomposeProjectionMatrix(projMatrix, out cameraMatrix, out rotMatrix, out transVect,
out rotMatrixX, out rotMatrixY, out rotMatrixZ, out eulerAngles);
euler = eulerAngles;
camMatrix = cameraMatrix;
}
But I don't know how to rotate the points using the calculated Euler angles so that they are oriented parallel with the camera view, so the points are always "flattened" toward the camera. Thanks guysmachinecoreTue, 05 Feb 2019 11:26:01 -0600http://answers.opencv.org/question/208438/error: no matching member function for call to 'at'http://answers.opencv.org/question/207219/error-no-matching-member-function-for-call-to-at/ I have this ridiculously simple example ... but it doesn't compile ...
I must be making a stupid mistake.
I am new to C++ OpenCV ... any clue is much appreciated.
#include <QCoreApplication>
#include <opencv2/opencv.hpp>
int main(int argc, char *argv[])
{
cv::Mat objp = cv::Mat::zeros( 10, 3 , CV_32FC1);
// objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)
for(int i = 0; i < objp.size[0]; i++){
objp.at<CV_32FC1>(0, 0) = 0;
}
std::cout << "I am done";
}
/home/mike/Downloads/TestOpecv/main.cpp:9: error: no matching member function for call to 'at'
mikeitexpertThu, 17 Jan 2019 03:53:57 -0600http://answers.opencv.org/question/207219/How to flatten 2D matrix and then copy it to a specific column in another matrixhttp://answers.opencv.org/question/203989/how-to-flatten-2d-matrix-and-then-copy-it-to-a-specific-column-in-another-matrix/For example, I have a 2D Matrix with 3-channels:
const auto img = imread("path.jpg", IMREAD_COLOR);
const auto ROWS = img.rows;
const auto COLS = img.cols;
And then i split it to vector to get 2D Matrix with 1-channel:
std::vector<Mat> channels{};
split(img, channels);
I want to store the channels as a matrix of size `[ROWS*COLS, 3]`, so I tried the following. I thought it was supposed to flatten each matrix in **channels** above to a matrix of size `[ROWS*COLS, 1]` and then copy it to a specific column of **result**, but it was wrong:
Mat result(ROWS*COLS, 3, CV_32F);
auto i = 0;
for(const auto& channel : channels) {
channel.reshape(0, ROWS*COLS).copyTo(result.col(i));
++i;
}
I used a naive way which gives the correct **result**:
for (const auto& channel : channels) {
for (auto y = 0; y < rows; ++y)
for (auto x = 0; x < cols; ++x) {
result.at<float>(y + x * rows, i) = channel.at<uchar>(y, x);
}
++i;
}
What did I do wrong with the first solution?longlpSat, 24 Nov 2018 09:26:00 -0600http://answers.opencv.org/question/203989/Finding largest rectangles in matrixhttp://answers.opencv.org/question/203701/finding-largest-rectangles-in-matrix/Hello :) <br>
I'm new to OpenCV. Given a 2D occupancy matrix (each box's status is occupied, free, or unknown), is there an algorithm that covers the whole matrix with the largest possible same-status rectangles?
<br> Thank you for your help
Example:
- white box = free
- red box = occupied
- green boxes are expected answers (green boxes should touch, but I draw it this way for clarity) e.g. coordinates of down left and top right corner
<br>
![image description](/upfiles/1542721506791390.png)ejeczmionekTue, 20 Nov 2018 07:20:57 -0600http://answers.opencv.org/question/203701/How to extract RANSAC's inlier object points from the matrix produced by solvePNPRansac() or findHomography()?http://answers.opencv.org/question/198851/how-to-extract-ransacs-inlier-object-points-from-the-matrix-produced-by-solvepnpransac-or-findhomography/Hi everyone,
I understand that both solvePNPRansac() and findHomography() produce a matrix correlated to the inlier and outlier keypoints.
Once I get the Mask from the findHomography(), is there a trivial way to correlate the position of the rows in the Mask to the list of matches and the keypoints (like a simple for loop shown below)?
for (int nextPosition = 0; nextPosition < Mask.height(); ++nextPosition) {
if(Mask.get(nextPosition,0)[0] > 0.0){
inlierList_good_matches.add(List_good_matches.get(nextPosition));
inlierList_KeyPoints1.add(List_KeyPoints1.get(nextPosition));
inlierList_KeyPoints2.add(List_KeyPoints2.get(nextPosition));
}
}
Thanks in advance!hayleyThu, 06 Sep 2018 13:20:16 -0500http://answers.opencv.org/question/198851/How can I get the FFT matrix?http://answers.opencv.org/question/197069/how-can-i-get-the-fft-matrix/ I have an image with index data [-(N-1)/2, ..., 0, ..., (N-1)/2] x [-(N-1)/2, ..., 0, ..., (N-1)/2], and for a special calculation I need the DFT (rather than the FFT) matrix F, such that X = Fx, where X is the frequency-domain sequence and x the spatial-domain sequence.
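The matrix described above can be written out directly (a NumPy sketch, using the standard 0..N-1 index convention rather than the centered one):

```python
import numpy as np

def dft_matrix(N):
    """N x N DFT matrix F with F[n, k] = exp(-2*pi*i*n*k/N), so X = F @ x."""
    n, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(-2j * np.pi * n * k / N)

x = np.random.rand(8)
X = dft_matrix(8) @ x   # same result as the FFT of x
```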
Is there any routine that I can use that calculates the DFT matrix?baleaWed, 08 Aug 2018 03:49:01 -0500http://answers.opencv.org/question/197069/Convert pixel position to world direction?http://answers.opencv.org/question/66047/convert-pixel-position-to-world-direction/I have calibrated my camera with a checkerboard and obtained the distortion parameters and intrinsic matrix of my camera.
Using these, I have estimated the camera position and orientation using solvePnP against a known set of reference points.
Now I want to find the world direction, from the camera toward a blob I am detecting inside my image. So I want to convert a pixel position to a 3D vector in that direction.
I want the direction so I can determine where in the world a ball is located. I have two cameras. If both of them can see the blob, I will find the point nearest both lines; if only one can observe the ball, I will just use the direction's intersection with the ground plane.
I am using blob detection in HSV colorspace to find the ball.
Any ideas on how to continue?
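One common way to continue (a hedged sketch, not tested against this setup): undistort the pixel, back-project it through the inverse intrinsics, and rotate the result into world coordinates, giving a direction proportional to R^T K^-1 [u, v, 1]^T with the camera center at C = -R^T t:

```python
import numpy as np

def pixel_to_world_ray(u, v, K, R, t):
    """Return (camera center, unit direction) of the world ray through pixel (u, v).

    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation
    (the rvec/tvec convention of solvePnP, with rvec already converted to R).
    Distortion is assumed to have been removed from (u, v) beforehand.
    """
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera coordinates
    d_world = R.T @ d_cam                             # rotate into the world frame
    center = -R.T @ t                                 # camera center in world coords
    return center, d_world / np.linalg.norm(d_world)

# Placeholder values: at the principal point with an identity pose,
# the ray should point straight down the optical axis.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
C, d = pixel_to_world_ray(320, 240, K, np.eye(3), np.zeros(3))
```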
Kind regards
Jesper
TAXfromDKSat, 11 Jul 2015 10:25:21 -0500http://answers.opencv.org/question/66047/Converting a point into another reference frame (Aruco)http://answers.opencv.org/question/192815/converting-a-point-into-another-reference-frame-aruco/ I'm trying to convert the position of [a standalone/loose ArUco marker](https://docs.opencv.org/3.4.1/d5/dae/tutorial_aruco_detection.html) into the reference frame of [an ArUco board](https://docs.opencv.org/3.4.0/db/da9/tutorial_aruco_board_detection.html).
I'm using Python, so my matrices are created via NumPy and the ArUco methods are accessed via [cv2.aruco](https://longervision.github.io/2017/03/13/opencv-python-aruco/). My camera has been properly calibrated with the checkerboard pattern and I have the parameters.
I have the pose of the ArUco board via cv2.aruco.estimatePoseBoard. This returns a 1x3 rvec (rotation compared to the camera in [the Rodrigues format](https://docs.opencv.org/3.4/d9/d0c/group__calib3d.html#ga61585db663d9da06b68e70cfbf6a1eac) which I convert to a more standard 3x3 rotation matrix with cv2.Rodrigues(rvec) ) and a 1x3 tvec (position compared to the camera in meters, specifically one of the corners). I also have the pose of the loose ArUco marker via cv2.aruco.estimatePoseSingleMarkers. This returns an array of rvecs and tvecs corresponding to all visible ArUco markers (including the ones on the board) from which I pull the loose marker's position (specifically, the very center of the marker).
I'm trying to create a 4x4 transformation matrix that I can multiply the marker's position by to get the position of the marker in the board's reference frame. I only need rotation and translation as there is no need to skew (the board is a rectangle) or scale (both positions are in meters). I am currently taking the transpose of the board rotation matrix and combining it with the negative board position matrix to get the transformation matrix. This is then multiplied by the loose marker's position matrix (4x4 on the left, 4x1 vertical on the right) to get what should be a position in board space.
At the moment, it mostly works. I'm getting the correct position on the x axis, but y and z are completely wrong. Any chance I could get some help figuring out what I'm doing wrong?
camToBoard = np.identity(4)
camToBoard[:3,:3] = cv2.Rodrigues(boardRotation)[0]
camToBoard.transpose()
camToBoard[0][3] = -boardPosition[0]
camToBoard[1][3] = -boardPosition[1]
camToBoard[2][3] = -boardPosition[2]
markerPos4x1 = np.matrix([[markerPos[0]],[markerPos[1]],[markerPos[2]],[1]])
markerPosBoardFrame = np.matmul(camToBoard, markerPos4x1 )
print('Marker Pos: {}'.format(markerPosBoardFrame))AurekSkyclimberSat, 02 Jun 2018 00:42:39 -0500http://answers.opencv.org/question/192815/How do I properly add two Matrices?http://answers.opencv.org/question/190446/how-do-i-properly-add-two-matrices/I am having a problem making addition of two images, both are 255X255, and both are of type 8UC3 , but still I am getting this error:
OpenCV Error: Sizes of input arguments do not match (The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array') in arithm_op, file.
**Update**:
One strange thing is that `LoadedImage.cols == LoadedImage2.cols` returns `0` (false), but they both print the same number (255); how C++ does not return true for `255==255` is mind-blowing to me at this point.
This is my code:
std::string type2str(int type) {
std::string r;
uchar depth = type & CV_MAT_DEPTH_MASK;
uchar chans = 1 + (type >> CV_CN_SHIFT);
switch ( depth ) {
case CV_8U: r = "8U"; break;
case CV_8S: r = "8S"; break;
case CV_16U: r = "16U"; break;
case CV_16S: r = "16S"; break;
case CV_32S: r = "32S"; break;
case CV_32F: r = "32F"; break;
case CV_64F: r = "64F"; break;
default: r = "User"; break;
}
r += "C";
r += (chans+'0');
return r;
}
int main(int argc, const char** argv)
{
//
// Load the image from file
//
Mat LoadedImage,LoadedImage2;
LoadedImage = imread(argv[1], IMREAD_COLOR);
LoadedImage2 = imread(argv[2], IMREAD_COLOR);
Mat add= LoadedImage +LoadedImage2; // this is the runtime error
std::cout <<LoadedImage.size << "and 2 = "<< LoadedImage2.size; //outputs 255 x255 and 2= 255 x 255
std::cout << type2str( LoadedImage.type())<<"and 2= "<<type2str( LoadedImage2.type()); //outputs 8UC3 and 2= 8UC3
}TeamARSat, 28 Apr 2018 18:25:55 -0500http://answers.opencv.org/question/190446/Row and Col problem when Mat represents 3d point cloudhttp://answers.opencv.org/question/189357/row-and-col-problem-when-mat-represents-3d-point-cloud/A Mat representing a 3D point cloud in OpenCV is N x 3: N is the number of points in the cloud and 3 is the x, y, z coordinates. For example, when I use loadPLYSimple to load data from a PLY file I get an N x 3 Mat, and when I use the FLANN KDTREE I need to pass an N x 3 Mat as the parameter... But the problem is that when I try to perform a transformation on the data, such as a rotation with a 3x3 rotation Mat R on the N x 3 points Mat pc, the common way is just R * pc. However, pc is N x 3, so we need to do some extra transpose work. I'm not familiar with OpenCV; I just want to know if there is a better way to do this instead of transposing each time. Or maybe there is something hidden behind it that I do not understand? Thanks.TabFri, 13 Apr 2018 16:28:56 -0500http://answers.opencv.org/question/189357/Best way to apply a function to each element of Mat (Android)http://answers.opencv.org/question/185944/best-way-to-apply-a-function-to-each-element-of-mat-android/Hello,
I've seen the thread [here](http://answers.opencv.org/question/22115/best-way-to-apply-a-function-to-each-element-of-mat/) with the same request but the two main answers were to use pointers and to use parallel methods. Neither are really available on the Android implementation of OpenCV.
What would be the fastest way to apply a function like sin(pi/k * element^2) on Android OpenCV and then insert that value into another matrix?
edit: related question: is it faster to collect elements in a Java array first and then insert all of them at once as a row instead of individually with `.put()`oralbSat, 03 Mar 2018 13:58:37 -0600http://answers.opencv.org/question/185944/In Python, is there a method or operator to determine if a frame is empty or an all zero-matrix?http://answers.opencv.org/question/185629/in-python-is-there-a-method-or-operator-to-determine-if-a-frame-is-empty-or-an-all-zero-matrix/ For example, I would like to finish this code:
import cv2
import numpy as np
cap = cv2.VideoCapture(0)
while True:
ret, frame = cap.read()
if # frame is empty or all zeros #:
print("Empty Frame")
else:
cv2.imshow('frame',frame)
if cv2.waitKey(33) & 0xFF == ord('q'):
break
cv2.destroyAllWindows()
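One way to fill in that condition (a sketch; `frame is None` covers a failed read and `not frame.any()` covers an all-zero frame; checking the `ret` flag from `cap.read()` first is also worthwhile):

```python
import numpy as np

def is_empty_frame(frame):
    """True if the frame is missing, has no pixels, or is an all-zero matrix."""
    return frame is None or frame.size == 0 or not frame.any()

# Example arrays standing in for cap.read() output:
black = np.zeros((480, 640, 3), dtype=np.uint8)
lit = black.copy()
lit[0, 0] = 255
```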
Regarding the line 'if # frame is empty or all zeros #:': is there an operator I can use to determine if a frame is empty?masterenolWed, 28 Feb 2018 02:12:26 -0600http://answers.opencv.org/question/185629/Matrix Multiplication Type Issuehttp://answers.opencv.org/question/183554/matrix-multiplication-type-issue/ I'm trying to do color conversion from RGB into LMS and multiply the matrix using reshape.
void test(const Mat &ori, Mat &output, Mat &pic, int rows, int cols)
{
Mat lms(3, 3, CV_32FC3);
Mat rgb(3, 3, CV_32FC3);
lms = (Mat_<float>(3, 3) << 1.4671, 0.1843, 0.0030,
3.8671, 27.1554, 3.4557,
4.1194, 45.5161 , 17.884 );
/* switch the order of the matrix according to the BGR order of color on OpenCV */
Mat transpose = lms.t(); // this will do a transpose of matrix lms
pic = ori.reshape(1, rows*cols);
rgb = pic*transpose;
output = rgb.reshape(3, cols);
}
About reshape: because my original Mat object comes from an image with 3 channels (RGB), and I need to multiply it with a 1-channel matrix, it seems like I have an issue with the matrix type.
This is how I define my original Mat:
Mat ori = imread("colorcubes.png", CV_LOAD_IMAGE_COLOR);
int rows = ori.rows;
int cols = ori.cols;
Mat pic(rows, cols, CV_32FC3);
Mat output(rows, cols, CV_32FC3);
The error is: `OpenCV Error: Assertion failed (type == B.type() && (type == CV_32FC1 || type == CV_64FC1 || type == CV_32FC2 || type == CV_64FC2))`, so it is a type issue.
I tried changing all the types to either 32FC3 or 32FC1, but that doesn't seem to work. Any suggestions?
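The assertion lists the operand types that Mat multiplication accepts, so the image data has to be single-channel float before the product. The reshape-then-multiply pattern can be sketched in NumPy to show the intended shapes (the 3x3 matrix is the lms matrix from the code above; the tiny synthetic image is a stand-in):

```python
import numpy as np

lms = np.array([[1.4671,  0.1843,  0.0030],
                [3.8671, 27.1554,  3.4557],
                [4.1194, 45.5161, 17.884]], dtype=np.float32)

rows, cols = 2, 3
ori = np.arange(rows * cols * 3, dtype=np.uint8).reshape(rows, cols, 3)  # stand-in image

pic = ori.reshape(rows * cols, 3).astype(np.float32)  # (rows*cols) x 3, float
rgb = pic @ lms.T                                     # one converted row per pixel
output = rgb.reshape(rows, cols, 3)                   # back to 3-channel layout
```

In OpenCV terms, the equivalent of the astype() call would be a convertTo() to CV_32F before the reshape and the multiplication.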
raisa_Tue, 30 Jan 2018 12:00:08 -0600http://answers.opencv.org/question/183554/Aruco Detection - OpenCV Error: Incorrect size of input arrayhttp://answers.opencv.org/question/183056/aruco-detection-opencv-error-incorrect-size-of-input-array/ Hi There,
I am trying to detect my ArUco markers with the code posted below (only part of it) and get, on the last line, the OpenCV Error: Incorrect size of input array ... in cvRodrigues2, file C:\...
It works if there is only one marker in the picture, so I think multiple markers are the problem. But how do I solve it? (As a beginner I am not sure what to look for; I have searched the whole day but did not find anything :-( )
while (true)
{
    if (playVideo == true)
    {
        if (!vid.read(frame))
            break;

        aruco::detectMarkers(frame, markerDictionary, markerCorners, markerIds);
        aruco::estimatePoseSingleMarkers(markerCorners, arucoSquareDimension, cameraMatrix, distortionCoefficients, rotationVectors, translationVectors);

        for (int i = 0; i < markerIds.size(); i++)
        {
            aruco::drawDetectedMarkers(frame, markerCorners, markerIds); // Draws the outline and the ID onto the markers
            aruco::drawAxis(frame, cameraMatrix, distortionCoefficients, rotationVectors[i], translationVectors[i], arucoSquareDimension*0.5f); // Draws the coordinate systems; the number gives the length of the vectors in metres

            // Print the marker IDs to the console
            std::cout << "Die ID des Markers lautet:";
            for (std::vector<int>::iterator it = markerIds.begin(); it != markerIds.end(); ++it)
                std::cout << ' ' << *it;
            std::cout << '\n';

            // Print the rotation to the console, converted to degrees via *180/3.14159265359
            std::cout << "Die Rotation des Markers ist:";
            for (std::vector<Vec3d>::iterator it = rotationVectors.begin(); it != rotationVectors.end(); ++it)
                std::cout << ' ' << (*it * 180) / (3.14159265359) << " grad";
            std::cout << '\n';

            // Rotation matrix
            Mat R;
            cv::Rodrigues(rotationVectors, R);
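For reference, cv::Rodrigues converts one 3-element rotation vector at a time, so it would be fed a single element such as rotationVectors[i] rather than the whole std::vector. The conversion itself is the Rodrigues formula, sketched here in NumPy with a hypothetical rotation vector:

```python
import numpy as np

def rodrigues(rvec):
    # Rodrigues formula: rotation vector (axis * angle) -> 3x3 rotation matrix.
    rvec = np.asarray(rvec, dtype=float)
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta                      # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])    # cross-product matrix of k
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

R = rodrigues([0.0, 0.0, np.pi / 2])      # 90 degree rotation about the z axis
```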
Thank you!
Sarahsarah1802Tue, 23 Jan 2018 08:53:19 -0600http://answers.opencv.org/question/183056/Do I need an OpenCV function to read cursor position?http://answers.opencv.org/question/179892/do-i-need-an-opencv-function-to-read-cursor-position/ Hi to the forum.
35 years ago, when I was working in the Image Processing industry (yes, I am that old) we made our own cursors from lookup tables. Now, I am working with OpenCV and I have seen how I can draw lines and circles (probably more complex shapes as well) using OpenCV functions (or do you call them modules?).
I am going to generate my own cursor using OpenCV on an OpenCV-modified image (matrix), one whose position on the bounded image I can change using a joystick or mouse. All that I need now is a way to read the cursor's X/Y coordinates.
Now, there are a number of MS Windows functions that can be used to get cursor coordinates and display them. I see two problems with using these MS C++ functions:
1. I don't necessarily wish to use the MS operating system long term for the final version.
2. I can see that if I have a camera derived image/matrix from OpenCV, that the XY coordinates received by the MS function might not (probably will not) be aligned with the XY matrix generated by OpenCV.
The above reasons are why I am looking for an OpenCV function to give me cursor coordinates derived from an OpenCV image.
Thank You
Tomminer_tomWed, 06 Dec 2017 12:41:31 -0600http://answers.opencv.org/question/179892/OpenGL 4x4 camera matrix, OpenCV ?x? camera matrixhttp://answers.opencv.org/question/178674/opengl-4x4-camera-matrix-opencv-x-camera-matrix/In OpenGL, the camera matrix is a 4x4 matrix. Is the camera matrix in OpenCV a 4x4 matrix as well?
The following is the code needed to make a very simple camera matrix. Compatible with OpenGL ES 2.0 and higher. See the camera matrix applied in the full code https://github.com/sjhalayka/blind_poker:
float projection_modelview_mat[16];

init_perspective_camera(y_fov_degrees,
    static_cast<float>(screen_width)/static_cast<float>(screen_height),
    0.01f, 2.0f, // Z near, far distances.
    0, 0, 1,     // Camera position.
    0, 0, 0,     // Look at position.
    0, 1, 0,     // Up direction vector.
    projection_modelview_mat);
... where ...
void get_perspective_matrix(float fovy, float aspect, float znear, float zfar, float (&mat)[16])
{
    // https://www.opengl.org/sdk/docs/man2/xhtml/gluPerspective.xml
    const float pi = 4.0f*atanf(1.0);

    // Convert fovy to radians, then divide by 2
    float f = 1.0f / tan(fovy/360.0*pi);

    mat[0] = f/aspect; mat[4] = 0; mat[8] = 0;   mat[12] = 0;
    mat[1] = 0;        mat[5] = f; mat[9] = 0;   mat[13] = 0;
    mat[2] = 0;        mat[6] = 0; mat[10] = (zfar + znear)/(znear - zfar); mat[14] = (2.0f*zfar*znear)/(znear - zfar);
    mat[3] = 0;        mat[7] = 0; mat[11] = -1; mat[15] = 0;
}
void get_look_at_matrix(float eyex, float eyey, float eyez,
                        float centrex, float centrey, float centrez,
                        float upx, float upy, float upz,
                        float (&mat)[16])
{
    // https://www.opengl.org/sdk/docs/man2/xhtml/gluLookAt.xml
    vertex_3 f, up, s, u;

    f.x = centrex - eyex;
    f.y = centrey - eyey;
    f.z = centrez - eyez;
    f.normalize();

    up.x = upx;
    up.y = upy;
    up.z = upz;
    up.normalize();

    s = f.cross(up);
    s.normalize();

    u = s.cross(f);
    u.normalize();

    mat[0] = s.x;  mat[4] = s.y;  mat[8] = s.z;   mat[12] = 0;
    mat[1] = u.x;  mat[5] = u.y;  mat[9] = u.z;   mat[13] = 0;
    mat[2] = -f.x; mat[6] = -f.y; mat[10] = -f.z; mat[14] = 0;
    mat[3] = 0;    mat[7] = 0;    mat[11] = 0;    mat[15] = 1;

    float translate[16];
    translate[0] = 1; translate[4] = 0; translate[8] = 0;  translate[12] = -eyex;
    translate[1] = 0; translate[5] = 1; translate[9] = 0;  translate[13] = -eyey;
    translate[2] = 0; translate[6] = 0; translate[10] = 1; translate[14] = -eyez;
    translate[3] = 0; translate[7] = 0; translate[11] = 0; translate[15] = 1;

    float temp[16];
    multiply_4x4_matrices(mat, translate, temp);

    for(size_t i = 0; i < 16; i++)
        mat[i] = temp[i];
}
void multiply_4x4_matrices(float (&in_a)[16], float (&in_b)[16], float (&out)[16])
{
    /*
        matrix layout:

        [0 4 8  12]
        [1 5 9  13]
        [2 6 10 14]
        [3 7 11 15]
    */
    out[0]  = in_a[0] * in_b[0]  + in_a[4] * in_b[1]  + in_a[8]  * in_b[2]  + in_a[12] * in_b[3];
    out[1]  = in_a[1] * in_b[0]  + in_a[5] * in_b[1]  + in_a[9]  * in_b[2]  + in_a[13] * in_b[3];
    out[2]  = in_a[2] * in_b[0]  + in_a[6] * in_b[1]  + in_a[10] * in_b[2]  + in_a[14] * in_b[3];
    out[3]  = in_a[3] * in_b[0]  + in_a[7] * in_b[1]  + in_a[11] * in_b[2]  + in_a[15] * in_b[3];
    out[4]  = in_a[0] * in_b[4]  + in_a[4] * in_b[5]  + in_a[8]  * in_b[6]  + in_a[12] * in_b[7];
    out[5]  = in_a[1] * in_b[4]  + in_a[5] * in_b[5]  + in_a[9]  * in_b[6]  + in_a[13] * in_b[7];
    out[6]  = in_a[2] * in_b[4]  + in_a[6] * in_b[5]  + in_a[10] * in_b[6]  + in_a[14] * in_b[7];
    out[7]  = in_a[3] * in_b[4]  + in_a[7] * in_b[5]  + in_a[11] * in_b[6]  + in_a[15] * in_b[7];
    out[8]  = in_a[0] * in_b[8]  + in_a[4] * in_b[9]  + in_a[8]  * in_b[10] + in_a[12] * in_b[11];
    out[9]  = in_a[1] * in_b[8]  + in_a[5] * in_b[9]  + in_a[9]  * in_b[10] + in_a[13] * in_b[11];
    out[10] = in_a[2] * in_b[8]  + in_a[6] * in_b[9]  + in_a[10] * in_b[10] + in_a[14] * in_b[11];
    out[11] = in_a[3] * in_b[8]  + in_a[7] * in_b[9]  + in_a[11] * in_b[10] + in_a[15] * in_b[11];
    out[12] = in_a[0] * in_b[12] + in_a[4] * in_b[13] + in_a[8]  * in_b[14] + in_a[12] * in_b[15];
    out[13] = in_a[1] * in_b[12] + in_a[5] * in_b[13] + in_a[9]  * in_b[14] + in_a[13] * in_b[15];
    out[14] = in_a[2] * in_b[12] + in_a[6] * in_b[13] + in_a[10] * in_b[14] + in_a[14] * in_b[15];
    out[15] = in_a[3] * in_b[12] + in_a[7] * in_b[13] + in_a[11] * in_b[14] + in_a[15] * in_b[15];
}
/*
void multiply_4x4_matrices(float (&in_a)[16], float (&in_b)[16], float (&out)[16])
{
    for(int i = 0; i < 4; i++)
    {
        for(int j = 0; j < 4; j++)
        {
            out[4*i + j] = 0;

            for (int k = 0; k < 4; k++)
                out[4*i + j] += in_a[4*k + j] * in_b[4*i + k];
        }
    }
}
*/
void init_perspective_camera(float fovy, float aspect, float znear, float zfar,
    float eyex, float eyey, float eyez, float centrex, float centrey,
    float centrez, float upx, float upy, float upz,
    float (&projection_modelview_mat)[16])
{
    float projection_mat[16];
    get_perspective_matrix(fovy, aspect, znear, zfar, projection_mat);

    float modelview_mat[16];
    get_look_at_matrix(eyex, eyey, eyez,   // Eye position.
        centrex, centrey, centrez,         // Look at position (not direction).
        upx, upy, upz,                     // Up direction vector.
        modelview_mat);

    multiply_4x4_matrices(projection_mat, modelview_mat, projection_modelview_mat);
}
... and the vertex and fragment shaders are:
// vertex shader
attribute vec3 position;
attribute vec2 tex_coord;

uniform mat4 mvp_matrix;

varying vec2 frag_tex_coord;

void main()
{
    frag_tex_coord = tex_coord;
    gl_Position = mvp_matrix*vec4(position, 1);
}

// fragment shader
uniform sampler2D tex;

varying mediump vec2 frag_tex_coord;

void main()
{
    gl_FragColor = texture2D(tex, frag_tex_coord);
}sjhalaykaSat, 18 Nov 2017 14:40:28 -0600http://answers.opencv.org/question/178674/