OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright OpenCV foundation (http://www.opencv.org), 2012-2018. Last updated: Tue, 25 Aug 2020 13:16:56 -0500

----------

result coordinate type
http://answers.opencv.org/question/234253/result-coordinate-type/

I have been making intersection-point calculations between parametric line equations and planes in OpenCV. I am using raw image point data for the math calculations below. What units of measurement are my results in? I haven't applied an OpenCV Rodrigues transformation or any other function to this data set yet, otherwise the question would be settled.
I've also tried "converting" to U,V coordinates but am unsure if it's correct.
double numer = plane.get_numer(d, n, Ro); // origin point - p - see diagram
double denom = plane.get_denom(n, Rd);
double t = numer / denom;
Vec3d IP = plane.get_IP(Ro, Rd, t); // IP = 'intersection point'
double U = CV_FX * IP[0] + CV_CX;
double V = CV_FY * IP[1] + CV_CY;
std::cout << "U,V = " << U << "," << V << std::endl;

asked by superfly, Tue, 25 Aug 2020 13:16:56 -0500

----------

3d construction from 2d plane
http://answers.opencv.org/question/233519/3d-construction-from-2d-plane/

Hey all,
I need your advice on the work I'm doing now. I need to construct a 3D model from a 2D plane. On the plane, I have identified the (x,y) coordinates of each corner of the 2D plane. Say I have a fixed value for the z coordinate, so that all (x,y) coordinates share the same z; I'm looking for a way to extrude this 2D plane into a 3D model.
My first approach is to first figure out how the (x,y) coordinates are connected to each other so they form the border of the 3D model. However, I'm not sure how to do that yet. After that, I would extrude along the z coordinate and connect all the coordinates to form the 3D model.
Or is there a feature that lets me build it directly? A sample of my 2D plane with (x,y) coordinates can be seen in ![image description](/upfiles/15972221846933486.png)
I'm quite new to OpenCV and I write my code in Python, so please forgive me if the question is trivial for some of you.
Thanks a lot for your advice.

asked by qb, Tue, 11 Aug 2020 04:11:16 -0500

----------

sample v for inclusion in parametric equation
http://answers.opencv.org/question/229150/sample-v-for-inclusion-in-parametric-equation/

I am working through the following equation for the plane-line intersection point, but my question is narrower than that.
I am confused about v, the direction vector:
t = - (dot(n, po) + d) / dot(n, v)
I know one contributing equation is:
p = po + t * v
I know that v represents the direction vector. What do I calculate v from:
- 2 different points on the plane?
- position vectors on the planes?
- other?

asked by superfly, Fri, 17 Apr 2020 14:06:41 -0500

----------

What is a good alternative to RANSAC
http://answers.opencv.org/question/215316/what-is-a-good-alternative-to-ransac/

I have a cloud of points to which I am fitting a plane of best fit. This cloud of points represents the ground in front of my vehicle. When I am sitting still, there is too much randomness and the plane of best fit jumps around too much. I am currently using RANSAC to find my plane of best fit. I need a more consistent or deterministic method that is still efficient. I understand that I will not be able to maintain the same efficiency, but any ideas would be great. Essentially, when I sit still I need the same (or almost the same) plane to be generated.

asked by jfenz, Mon, 08 Jul 2019 08:04:19 -0500

----------

How can I use Hough Transforms to fit a Plane
http://answers.opencv.org/question/214641/how-can-i-use-hough-transforms-to-fit-a-plane/

I want to find the plane of best fit for a set of data. I think the Hough Transform will be the most fruitful approach. Overall, I want to be able to detect the slope of a hill seen in an image. I would like help finding any good documentation or examples of fitting a plane to a set of 3D data using C++ and OpenCV. Eventually I will implement RANSAC or some other method to make sure the process is optimized, but any thoughts are appreciated.

asked by jfenz, Sun, 23 Jun 2019 03:51:44 -0500

----------

How to compute the 3D location of a 2D point on the ground?
http://answers.opencv.org/question/177170/how-to-compute-the-3d-location-of-a-2d-point-on-the-ground/

I know the focal length f and the principal point P(x,y) of a camera, and I can assume the ground plane is orthogonal to the image plane.
I have a 2D point in the picture that I know is on the ground; how do I get its 3D position in camera coordinates?
However, I am not sure how to approach this. Any advice is appreciated.

asked by danoc93, Sun, 29 Oct 2017 11:41:51 -0500

----------

How to extract red color plane in OpenCV C++
http://answers.opencv.org/question/13575/how-to-extract-red-color-plane-in-opencv-c/

How do I extract the red color plane in OpenCV C++?

asked by Ali Tillawi, Sat, 18 May 2013 11:22:41 -0500

----------

Building reconstruction using plane sweep
http://answers.opencv.org/question/11236/building-reconstruction-using-plane-sweep/

Hello everyone!
I'm new to OpenCV (I only started this semester), although I already had some notions, mainly of stereo vision.
I'm currently working on a project that reconstructs buildings from a set of images. However, I'm having some problems and I have some questions I hope someone can answer :D
What I'm doing is this:
1- Calibrate the camera using the tutorial's example with a chessboard pattern. This gives me the intrinsic parameter matrix K and the distortion coefficients.
2- From a set of input images, I extract Canny edges and Hough lines. The Hough lines are the points where I expect to obtain better stereo matching among images.
3- Next, I try to obtain the extrinsic parameters for each image. For this, I iterate over all the images, find feature points and correspondences between image i and i-1 using a SurfFeatureDetector and a BruteForceMatcher, find the fundamental matrix F, and compute the essential matrix E with E = K^T * F * K. From E I can calculate the position and rotation of each image.
To this point I have all images positioned in space with the first image being at the origin and looking along positive Z.
4- Execute the Collins plane sweep using the positioned images and the Hough lines (interesting points). First, all interesting points from all images are projected onto a canonical plane (Z = z0) with the planar homography:
Hi = K[r1 r2 z0*r3+t]
K is the intrinsic parameter matrix, r1, r2, r3 are the columns of the image's rotation matrix, and t is the translation vector. Then, for each sweep plane along the z axis (Z = zi), I reproject the points on the canonical plane onto this plane using:
xi = delta * x0 + (1 - delta) * Cx
yi = delta * y0 + (1 - delta) * Cy
where delta = (zi - Cz) / (z0 - Cz), (xi, yi) is the reprojected point on the plane Z = zi, and [Cx Cy Cz] is the camera position. Then I increment by one all plane cells within a radius of such points.
In the end, all cells from each plane that have more than T votes are considered to contain a valid point in space, and as such I create a vertex there.
So my questions are:
I'm using a Nikon SLR camera and I'm not sure whether this is correct, since it has autofocus. However, in the image tags, all images had a focal length of 18mm. I'm not sure whether that is the same focal length as the one in the intrinsic parameters.
In the matrix output by the calibration process (1) I get values on the order of 10^3. In what units are these values?
What is the purpose of the distortion coefficients and should I use them in my project? Where?
Am I calculating the Essential matrix and extrinsic parameters correctly? How can I be sure the results are good?
The building in the scene is about 20m away from the cameras; however, I need a square plane with a side of 2*10^8, where each cell is 1*10^5, for the plane sweep algorithm. This is extremely large and yields a huge number of points, ranging from
min=[-9.995e+07, -9.995e+07, 0]
to
max=[9.995e+07, 9.995e+07, 36500]
The z values in these vectors are the sweep planes, and those values are correct (they are generated by me). However, I'm not so sure about the x and y values.
The Collins plane sweep also has a statistical model of clutter to reduce the number of points to only the interesting ones. However, I don't understand that part yet :S
Is there any plane sweep implementation available I can take a look at?
Sorry for the long post, but I'm not sure if I'm doing things right.
Thanks in advance
asked by diego, Wed, 10 Apr 2013 08:03:33 -0500

----------

how can I fill plane of connected line at the OpenCV ?
http://answers.opencv.org/question/9675/how-can-i-fill-plane-of-connected-line-at-the-opencv/

I have drawn lines between points, so the lines form a plane (a closed polygon).
I want to fill that plane with some color. **But I couldn't get the fill to work.**
For example, if I make a triangle from points, I want to fill the triangle with color.
----------
**How can I fill** a plane of connected lines in OpenCV?

asked by dennis, Tue, 19 Mar 2013 22:21:53 -0500