OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers (en). Copyright OpenCV foundation (http://www.opencv.org), 2012-2018. Sat, 03 Jun 2017 16:17:37 -0500

What is best practice to solve least square problem AX = B
http://answers.opencv.org/question/156394/what-is-best-practice-to-solve-least-square-problem-ax-b/
Hi, I have a system of linear equations AX = B, where A is 76800x6, B is 76800x1, and we have to find X, which is 6x1. I was using X = invert(AT * A) * AT * B, where AT is the transpose of A. But I want to increase the speed of the operation, and it also does not protect my calculation under singularity conditions. Is there any faster way to do this?
UsmanArif, Sat, 03 Jun 2017 16:17:37 -0500
http://answers.opencv.org/question/156394/

High `cv::solve` error if trained on single line
http://answers.opencv.org/question/86783/high-cvsolve-error-if-trained-on-single-line/
I use `cv::solve` to solve a two-dimensional linear regression. If the training data happen to lie on a single line (that is, `y` is equal for all training samples), then the extrapolation produces a large error.
Say, the training data are:
<!-- language: lang-none -->
X Y Z
---- ---- -----
4 7 458
5 7 554
7 7 735
8 7 826
The calculated coefficients are (notice the last two are very large numbers):
{92.8825, 3.74394e+007, -2.62076e+008}
If I use these to extrapolate the original values, large error is produced:
X Y Z'
---- ---- -----
4 7 427.53
5 7 520.412
7 7 706.177
8 7 799.06
All predicted values are smaller by about 26-30. This seems to be an edge case. In my use case, if I have values all on a single line (horizontal or vertical), I will predict values only for that line, turning it effectively into a one-dimensional linear regression. But the error is unacceptable.
Here is the code:
#include <opencv2/core/core.hpp>
#include <iostream>

using namespace cv;
using namespace std;

static void print(float a, float b, float c, int x, int y) {
    cout << "x=" << x << ", y=" << y << ", z=" << (a*x + b*y + c) << endl;
}

int main() {
    Mat matX(4, 3, CV_32F);
    Mat matZ(4, 1, CV_32F);
    int idx = 0;

    matX.at<float>(idx, 0) = 4;
    matX.at<float>(idx, 1) = 7;
    matX.at<float>(idx, 2) = 1;
    matZ.at<float>(idx++, 0) = 458;

    matX.at<float>(idx, 0) = 5;
    matX.at<float>(idx, 1) = 7;
    matX.at<float>(idx, 2) = 1;
    matZ.at<float>(idx++, 0) = 554;

    matX.at<float>(idx, 0) = 7;
    matX.at<float>(idx, 1) = 7;
    matX.at<float>(idx, 2) = 1;
    matZ.at<float>(idx++, 0) = 734;

    matX.at<float>(idx, 0) = 8;
    matX.at<float>(idx, 1) = 7;
    matX.at<float>(idx, 2) = 1;
    matZ.at<float>(idx++, 0) = 826;

    Mat res(3, 1, CV_32F);
    cv::solve(matX, matZ, res, DECOMP_QR);

    float a = res.at<float>(0);
    float b = res.at<float>(1);
    float c = res.at<float>(2);
    cout << "a=" << a << ", b=" << b << ", c=" << c << endl;

    print(a, b, c, 4, 7);
    print(a, b, c, 5, 7);
    print(a, b, c, 6, 7);
    print(a, b, c, 7, 7);
    print(a, b, c, 8, 7);
    return 0;
}
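When `y` is constant, the `y` column of the design matrix is an exact multiple of the constant column, so the system is rank-deficient and QR factorization produces the huge, unstable coefficients shown above. A minimum-norm least-squares solve (what `cv::solve` with `DECOMP_SVD` computes) stays well-behaved. A minimal sketch of the same data in NumPy, which stands in here for OpenCV since the underlying math is identical:

```python
import numpy as np

# Same design matrix as in the post: columns are x, y (constant 7), and 1.
X = np.array([[4, 7, 1],
              [5, 7, 1],
              [7, 7, 1],
              [8, 7, 1]], dtype=np.float64)
z = np.array([458.0, 554.0, 734.0, 826.0])

# The y column equals 7 * the constant column, so rank(X) == 2, not 3.
assert np.linalg.matrix_rank(X) == 2

# lstsq returns the minimum-norm least-squares solution (SVD-based),
# analogous to cv::solve(..., DECOMP_SVD).
coeffs, *_ = np.linalg.lstsq(X, z, rcond=None)
pred = X @ coeffs
print(coeffs)            # small, stable coefficients
print(np.abs(pred - z))  # residuals of a couple of units, not 26-30
```

Switching the flag in the original code to `DECOMP_SVD` (and, given the magnitudes involved, preferably `CV_64F`) should behave the same way.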
Oliv, Fri, 05 Feb 2016 16:55:55 -0600
http://answers.opencv.org/question/86783/

StereoRectifyUncalibrated "cannot solve under-determined linear system"
http://answers.opencv.org/question/65816/stereorectifyuncalibrated-cannot-solve-under-determined-linear-system/
Hi all,
I'm using cv2.stereoRectifyUncalibrated to try and calculate the appropriate rectification transformation between two sets of artificial correspondences:
import cv2
import cv2.cv as cv
import numpy as np
pts1 = [[423, 191], # top_l
[840, 217], # top_r
[422, 352], # bot_l
[838, 377], # bot_r
[325, 437], # front_l
[744, 464], # front_r
[288, 344], # wide_l
[974, 388]] # wide_r
pts2 = [[423, 192], # top_l
[841, 166], # top_r
[422, 358], # bottom_l
[839, 330], # bottom_r
[518, 440], # front_l
[934, 417], # front_r
[287, 363], # wide_l
[973, 320]] # wide_r
pts1 = np.array(pts1, dtype='f4')
pts2 = np.array(pts2, dtype='f4')
f, mask = cv2.findFundamentalMat(pts1, pts2, cv2.cv.CV_FM_8POINT)
pts1_r = pts1.reshape((pts1.shape[0] * 2, 1))
pts2_r = pts2.reshape((pts2.shape[0] * 2, 1))
ret, H1, H2 = cv2.stereoRectifyUncalibrated(pts1_r, pts2_r, f, (1280, 720))
print ret
I've included the data initialisation just to illustrate the array structure. You'll see I've avoided the assertion error mentioned in [this](http://opencv-users.1802565.n2.nabble.com/StereoRectifyUncalibrated-not-accepting-same-array-as-FindFundamentalMat-td5149185.html) discussion using reshape.
However, I now get the following error:
> OpenCV Error: Bad argument (The function can not solve under-determined linear systems) in solve, file /tmp/opencv20150527-4924-hjrvz/opencv-2.4.11/modules/core/src/lapack.cpp, line 1350
Out of context, the offending snippet looks like this:
int m = src.rows, m_ = m, n = src.cols, nb = _src2.cols;
...
if( m < n )
CV_Error(CV_StsBadArg, "The function can not solve under-determined linear systems" );
Here, m and n are the number of rows and columns of the input array supplied to cv::solve (via cvSolve), which is created somewhere inside stereoRectifyUncalibrated.
My question is simply: what is going on here? I'm struggling to see how my artificial data could make the system under-determined.
slow, Wed, 08 Jul 2015 11:15:28 -0500
http://answers.opencv.org/question/65816/

Get object transform instead of camera pose
http://answers.opencv.org/question/38819/get-object-transform-instead-of-camera-pose/
I've successfully gotten an animated camera in 3D space using:
retval, rvec, tvec = cv2.solvePnP(objp, corners, K, dist_coef)
r_mat, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
camera_pose = (-np.matrix(r_mat).T * np.matrix(tvec)).reshape(-1, 3)
In my original shot the camera was still and the object was moving.
How do I go from an animated camera and a still object to an animated object and a still camera?
I'm using four points (corners) for the solve, and I would like to get the 3D coordinates of those points.
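`solvePnP` returns the transform that maps object coordinates into camera coordinates; the camera pose is its inverse, which is what the `-R.T @ tvec` line above computes for the translation part. To animate the object against a still camera, apply the forward transform to the object instead of the inverse to the camera. A sketch of both directions as 4x4 matrices in NumPy; the rotation `R` stands for `cv2.Rodrigues(rvec)[0]`, and the concrete numbers are made up purely for illustration:

```python
import numpy as np

# Assume R, t came from solvePnP via cv2.Rodrigues(rvec)[0] and tvec.
# Made-up example: 90-degree rotation about Z plus a translation.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([[1.0], [2.0], [3.0]])

# Object-to-camera transform as a 4x4 homogeneous matrix.
T = np.eye(4)
T[:3, :3] = R
T[:3, 3:] = t

# Camera pose in object space is the inverse: [R.T | -R.T @ t].
T_inv = np.eye(4)
T_inv[:3, :3] = R.T
T_inv[:3, 3:] = -R.T @ t

# 3D coordinates of a tracked corner in camera space: apply T to the
# object-space point (here the object-space origin, in homogeneous form).
objp = np.array([[0.0, 0.0, 0.0, 1.0]]).T
print((T @ objp)[:3].ravel())        # that corner in camera coordinates

# Sanity check: T_inv really inverts T.
print(np.allclose(T @ T_inv, np.eye(4)))  # True
```

So per frame: keep the camera at the identity and transform the object (and its four corner points) by that frame's `T`.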
Thanks.
terrachild, Wed, 06 Aug 2014 05:14:19 -0500
http://answers.opencv.org/question/38819/

Beginner question about camera solving
http://answers.opencv.org/question/35800/beginner-question-about-camera-solving/
I just found OpenCV and I don't know anything about how to use it.
If I want to feed it a set of x,y coordinates from tracked points and get a camera solve in 3D space, what do I need to do?
Can I do this from python?
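Yes, OpenCV's Python bindings expose this: given the 3D positions of the tracked points, `cv2.solvePnP` recovers the camera pose from the 2D tracks (or `cv2.calibrateCamera` if the intrinsics are unknown too). What it inverts is the pinhole projection; a NumPy sketch of that projection, with made-up intrinsics, pose, and points purely to show the data shapes involved:

```python
import numpy as np

# Made-up intrinsics: focal length 800 px, principal point (640, 360).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

# Made-up camera pose: identity rotation, scene 5 units in front.
R = np.eye(3)
t = np.array([[0.0], [0.0], [5.0]])

# 3D points in the scene (N x 3), e.g. a square facing the camera.
pts3d = np.array([[-1.0, -1.0, 0.0],
                  [ 1.0, -1.0, 0.0],
                  [ 1.0,  1.0, 0.0],
                  [-1.0,  1.0, 0.0]])

# Project: x = K [R|t] X, then divide by the third coordinate.
proj = (K @ (R @ pts3d.T + t)).T    # N x 3 homogeneous pixels
pts2d = proj[:, :2] / proj[:, 2:3]  # N x 2 pixel coordinates
print(pts2d)

# cv2.solvePnP(pts3d, pts2d, K, None) would recover rvec/tvec from
# these correspondences (call shown for orientation; not executed here).
```

If you only have 2D tracks and no known 3D geometry, that is a structure-from-motion problem, which needs more machinery than a single solvePnP call.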
Thanks.
terrachild, Fri, 27 Jun 2014 02:21:22 -0500
http://answers.opencv.org/question/35800/

opencv solve linear equation system too slow
http://answers.opencv.org/question/34125/opencv-solve-linear-equation-system-too-slow/
I used the `solve` function to solve an `Ax=b` problem. A is large (about 40000x7000).
But it is very slow, even slower than MATLAB.
By the way, I used the `SVD` method.
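SVD is the most expensive of the decompositions `solve` offers. For a tall, well-conditioned A, `DECOMP_QR` or the normal equations (`DECOMP_NORMAL` combined with `DECOMP_CHOLESKY`) are much faster, at the price of robustness when A is ill-conditioned. A NumPy sketch of the normal-equations shortcut versus the SVD solution, with small random sizes purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 50))  # tall, full-rank system
b = rng.standard_normal(2000)

# SVD-based least squares (what DECOMP_SVD does): robust but slow.
x_svd, *_ = np.linalg.lstsq(A, b, rcond=None)

# Normal equations (what DECOMP_NORMAL sets up): solve the small
# 50x50 system A^T A x = A^T b instead. Much cheaper, but it squares
# the condition number, so only safe when A is well-conditioned.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

print(np.allclose(x_svd, x_ne, atol=1e-8))  # True for this system
```

In OpenCV the corresponding call would be `cv::solve(A, b, x, DECOMP_NORMAL | DECOMP_CHOLESKY)`, with QR as a middle ground between speed and robustness.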
Any suggestions?
tidy, Mon, 26 May 2014 22:52:38 -0500
http://answers.opencv.org/question/34125/

Assertion Failed Core.solve in Java
http://answers.opencv.org/question/32887/assertion-failed-coresolve-in-java/
The app I am working on, using OpenCV, is crashing with the following exception:
> 05-06 15:24:50.877:
> E/org.opencv.core(28997):
> core::solve_10() caught cv::Exception:
> /home/reports/ci/slave_desktop/50-SDK/opencv/modules/core/src/lapack.cpp:1197:
> error: (-215) type == _src2.type() &&
> (type == CV_32F || type == CV_64F) in
> function bool
> cv::solve(cv::InputArray,
> cv::InputArray, cv::OutputArray, int)
Here is the code I am using:
A = new Mat(4,3,CvType.CV_32F);
double[] A_values = {
u.x * p1.get(2, 0)[0]-p1.get(0, 0)[0], u.x * p1.get(2, 1)[0]-p1.get(0, 1)[0], u.x * p1.get(2, 2)[0]-p1.get(0, 2)[0],
u.y * p1.get(2, 0)[0]-p1.get(1, 0)[0], u.y * p1.get(2, 1)[0]-p1.get(1, 1)[0], u.y * p1.get(2, 2)[0]-p1.get(1, 2)[0],
v.x * p2.get(2, 0)[0]-p2.get(0, 0)[0], v.x * p2.get(2, 1)[0]-p2.get(0, 1)[0], v.x * p2.get(2, 2)[0]-p2.get(0, 2)[0],
v.y * p2.get(2, 0)[0]-p2.get(1, 0)[0], v.y * p2.get(2, 1)[0]-p2.get(1, 1)[0], v.y * p2.get(2, 2)[0]-p2.get(1, 2)[0]
};
A.put(0, 0, A_values);
B = new Mat(4,1,A.type());
double[] B_values = {
-(u.x * p1.get(2, 3)[0] - p1.get(0, 3)[0]),
-(u.y * p1.get(2, 3)[0] - p1.get(1, 3)[0]),
-(v.x * p2.get(2, 3)[0] - p2.get(0, 3)[0]),
-(v.y * p2.get(2, 3)[0] - p2.get(1, 3)[0]),
};
B.put(0, 0, B_values);
Mat X = new Mat(3,1, A.type());
Core.solve(A, B, X, Core.DECOMP_SVD);
Can someone please tell me why it crashes?
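The `-215` assertion says A and B must share the same depth, either `CV_32F` or `CV_64F`, so one of the inputs apparently ends up with a different type than expected (e.g. `CV_64F` versus `CV_32F`); that is worth checking with `A.type()` and `B.type()` right before the call. Separately, the system being built here is the standard two-view DLT triangulation, and the construction itself is sound. A NumPy sketch of the same construction with synthetic projection matrices p1, p2 (all values made up), showing the round trip works when the types line up:

```python
import numpy as np

# Two made-up 3x4 projection matrices for a stereo pair.
p1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at origin
p2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # shifted right

# A known 3D point, projected into both views to get u and v.
Xw = np.array([0.2, -0.1, 4.0, 1.0])
u = (p1 @ Xw)[:2] / (p1 @ Xw)[2]
v = (p2 @ Xw)[:2] / (p2 @ Xw)[2]

# Build A (4x3) and B (4x1) exactly as in the Java code above.
rows, rhs = [], []
for P, (px, py) in ((p1, u), (p2, v)):
    rows.append(px * P[2, :3] - P[0, :3])
    rhs.append(-(px * P[2, 3] - P[0, 3]))
    rows.append(py * P[2, :3] - P[1, :3])
    rhs.append(-(py * P[2, 3] - P[1, 3]))
A = np.array(rows)
B = np.array(rhs)

# Least-squares solve (the analogue of Core.solve with DECOMP_SVD).
X, *_ = np.linalg.lstsq(A, B, rcond=None)
print(X)  # recovers [0.2, -0.1, 4.0]
```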
glethien, Tue, 06 May 2014 08:30:25 -0500
http://answers.opencv.org/question/32887/

solve (cvSolve) became twice slower after updating to OpenCV 2.4.6
http://answers.opencv.org/question/18660/solve-cvsolve-became-twice-slower-after-updating-to-opencv-246/
I have been using OpenCV 2.2.0 and I haven't had any problems with performance. After updating the library to the latest version (2.4.6), the cvSolve function started taking about twice the time it used to. I haven't changed the algorithm at all; I have only updated the library. I am using pre-built libraries on Windows with VS2010.
What could be the reason for the performance drop?
Dmitry, Mon, 12 Aug 2013 10:52:37 -0500
http://answers.opencv.org/question/18660/