OpenCV Q&A Forum, RSS feed (http://answers.opencv.org/questions/). Copyright OpenCV foundation, 2012-2018. Fri, 26 Apr 2019 07:48:56 -0500.

**Pose Estimation Tutorial not compiling** (http://answers.opencv.org/question/212156/pose-estimation-tutorial-not-compiling/)

I am trying to follow this tutorial: [https://docs.opencv.org/3.4.4/dc/d2c/tutorial_real_time_pose.html](https://docs.opencv.org/3.4.4/dc/d2c/tutorial_real_time_pose.html)
I created a directory `build` in

> opencv/samples/cpp/tutorial_code/calib3d/real_time_pose_estimation

and ran `cmake ..`. I get this:
    CMake Error at CMakeLists.txt:15 (ocv_include_modules_recurse):
      Unknown CMake command "ocv_include_modules_recurse".

    CMake Warning (dev) in CMakeLists.txt:
      No cmake_minimum_required command is present. A line of code such as

        cmake_minimum_required(VERSION 3.13)
I have OpenCV 2.4 compiled with `opencv_contrib` and 3.4 without it. Usually, when I compile a `.cpp` program that uses OpenCV, I run:
    g++ $(pkg-config --cflags --libs /usr/local/Cellar/opencv@3/3.4.5/lib/pkgconfig/opencv.pc) -std=c++11 main.cpp
since I am on macOS. Should I ignore `cmake` and compile it like that? How do I fix the error?

trttrt, Fri, 26 Apr 2019 07:48:56 -0500, http://answers.opencv.org/question/212156/

**Estimate object pose using multiple cameras** (http://answers.opencv.org/question/208716/estimate-object-pose-using-multiple-cameras/)

Hello!
***TL;DR: I need a function similar to solvePnP(), but one that can estimate the pose of a model using information from multiple cameras instead of only one camera.***
I am trying to find the pose (rotation and translation) of a simple object covered with markers, using n cameras placed around the object.
The pose of each camera is known: I already have a matrix Ci for each camera i such that, for a point X = (x, y, z, 1) in real-world coordinates, Ci*X gives me the coordinates of that point in the camera's coordinate system.
The object I am trying to estimate is composed of m points, and I know the position of each of them in the object's coordinate system.
I am already able to find the coordinates of the object's points in the image plane of each camera.
So if I put all this together, for each point j seen in a camera i I get this:
**sij * Pij = Ci * A * Xj**
where:
**sij** is an unknown scalar that multiplies the projection of point j on camera i (it is there because we do not know how far from the camera the detected point lies) *(unknown)*
**Pij** is the coordinates of the point j projected on the camera i: (x',y',1)T *(known)*
**Ci** is the matrix that describes the rotation and translation of the camera i *(known)*
**A** is the matrix I'm looking for, it describes the transformation between the object's coordinate system and real world coordinates *(unknown)*
**Xj** is the point j in the object's coordinate system: (x,y,z,1)T *(known)*
I will typically see 4 different points on 3 different cameras (the 12 observed points are all different), which gives me a set of 12 such linear systems.

How do I find the matrix A that best satisfies this set of linear systems?
This problem looks like something that could be solved using DLT (https://en.wikipedia.org/wiki/Direct_linear_transformation), but I am not able to transform my systems to fit the form shown on that Wikipedia page.
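One possible way to cast this as a DLT problem (my own sketch, with hypothetical names: `C_list` holds the known 3x4 camera matrices, `obs` the per-camera normalized image points, `pts_obj` the object-frame points): stack the cross-product constraints p_ij x (Ci * A * Xj) = 0 from all cameras into one linear system in the 12 entries of A's top 3x4 block, solve it by least squares, then project the 3x3 part back onto a rotation.

```python
# A sketch (my own, with hypothetical names) of multi-camera DLT for
#     s_ij * p_ij = C_i * A * X_j
# solving for the top 3x4 block of A by stacking cross-product constraints.
import numpy as np

def skew(v):
    """3x3 cross-product matrix: skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_pose_multicam(C_list, obs, pts_obj):
    """C_list: known 3x4 matrices C_i (world -> normalized camera coords).
    obs: list of (camera index i, point index j, (x', y')) observations.
    pts_obj: m x 3 array of points X_j in the object's coordinate frame.
    Returns the 3x4 [R|t] block of A (object frame -> world frame)."""
    rows, rhs = [], []
    for i, j, p in obs:
        C = C_list[i]
        S = skew(np.array([p[0], p[1], 1.0]))   # p x (C A X) = 0
        Xh = np.append(pts_obj[j], 1.0)         # homogeneous object point
        # S @ (C[:, :3] @ A3 @ Xh + C[:, 3]) = 0 is linear in vec(A3):
        B = np.kron(Xh, S @ C[:, :3])           # 3 x 12 coefficient block
        rows.append(B[:2])                      # only 2 rows are independent
        rhs.append(-(S @ C[:, 3])[:2])
    a, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    A3 = a.reshape(3, 4, order='F')             # columns of A3 were stacked
    U, _, Vt = np.linalg.svd(A3[:, :3])         # project onto a true rotation
    R = U @ Vt
    if np.linalg.det(R) < 0:
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return np.hstack([R, A3[:, 3:4]])
```

With noiseless synthetic data this recovers A exactly (4 non-coplanar points seen by 3 cameras give 24 equations for 12 unknowns); with real data, the SVD step enforces a valid rotation and the result can seed a nonlinear refinement.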
*My question is similar to this one: http://answers.opencv.org/question/131660/multi-view-solvepnp-routine/, but the answer there does not solve my problem because it requires that the points used to recover the pose of the model be seen by multiple cameras.*

Nick_, Mon, 11 Feb 2019 15:46:00 -0600, http://answers.opencv.org/question/208716/

**How to use the translation matrix from recoverPose** (http://answers.opencv.org/question/194910/how-use-translation-matrix-in-revoverpose/)

I want to find the pose change between two frames. For example, I have a base image and the same image scaled 1.5 times.
To do this, I performed the following steps:
<ol>
<li>Detect feature points in the two images</li>
<li>Match the feature points between the two images</li>
<li>Compute the essential matrix from the matched points</li>
<li>Run recoverPose on the matched points and the essential matrix</li>
</ol>
This is my code:

    Mat E = findEssentialMat(leftPointMatches, rightPointMatches, cameraCalib);
    Mat R, T;
    recoverPose(E, leftPointMatches, rightPointMatches, cameraCalib, R, T);
    cout << T << endl;
But my problem is that T is weird:

    [0.5773502691896257;
     -0.5773502691896257;
     0.5773502691896256]
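For what it is worth, that T is a unit vector (0.5773... = 1/sqrt(3)): recoverPose can only return the direction of the translation, because the essential matrix it decomposes carries no scale. A minimal NumPy sketch (my own, not taken from the post) of that ambiguity:

```python
# Why recoverPose returns a unit-length translation (my own sketch): the
# essential matrix E = [t]x * R only constrains the direction of t, never
# its magnitude, so any scalar multiple of t gives the same epipolar geometry.
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

R = np.eye(3)                       # say the motion is a pure translation
t = np.array([2.0, -2.0, 2.0])      # the "true" translation, arbitrary scale
E_full = skew(t) @ R
E_half = skew(0.5 * t) @ R          # half the translation ...
assert np.allclose(E_half, 0.5 * E_full)   # ... is just a scaled copy of E
# Both satisfy the same epipolar constraint x2^T E x1 = 0, so only the
# direction t / ||t|| is recoverable:
print(t / np.linalg.norm(t))        # components of magnitude 1/sqrt(3) = 0.5773...
```

So a T of (0.577, -0.577, 0.577) is the expected unit direction; the metric scale has to come from elsewhere (a known baseline, an object of known size, etc.).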
How should I use recoverPose?
Is my expectation of recoverPose wrong?
Or are my parameters wrong?
OpenCV version is 3.4.1.
Feature point detection: AKAZE with default parameters.
Feature point matching: KnnMatcher with k = 1.

Nasser, Mon, 02 Jul 2018 09:08:42 -0500, http://answers.opencv.org/question/194910/

**Issue in OpenCV sample tutorial for real time pose estimation** (http://answers.opencv.org/question/189414/issue-in-opencv-sample-tutorial-for-real-time-pose-estimation/)

Hi,
I was trying out the real-time pose estimation tutorial from OpenCV ([this tutorial](https://docs.opencv.org/3.3.0/dc/d2c/tutorial_real_time_pose.html)), and while looking into the source code I may have found a possible inconsistency in the functions for converting between a rotation matrix and Euler angles (please see [euler2rot](https://github.com/opencv/opencv/blob/master/samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/src/Utils.cpp#L229) and [rot2euler](https://github.com/opencv/opencv/blob/master/samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/src/Utils.cpp#L189) in the C++ source code for the tutorial).
I am aware that there are **six types** of [Tait-Bryan](https://en.wikipedia.org/wiki/Euler_angles) angles (colloquially referred to as **Euler angles**).
**My issue:**
- The above source-code functions **do not** adhere to any of the six types. I checked a few sources to verify this: the [rotation-matrix section on the Wikipedia page](https://en.wikipedia.org/wiki/Euler_angles#Rotation_matrix), and I also wrote a simple Matlab script using symbolic variables (please see below, at the end of the question).
- **The source code in the functions seems to correspond to the Y-Z-X Tait-Bryan angles, but with the pitch and yaw interchanged.** Why is this the case? (Does this have something to do with the fact that the camera's coordinate axes have z facing forward, y facing downward and x facing right?)
- And finally, since Z-Y-X Tait-Bryan angles are, I think, the industry standard in robotics, is there any particular reason for using Y-Z-X (with the pitch and yaw interchanged, as noticed above)?
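The Y-Z-X composition can also be cross-checked numerically (my own sketch, mirroring the Matlab script below; the angle-recovery formulas are read off the entries of the symbolic output matrix):

```python
# Numeric cross-check of the Y-Z-X composition (my own sketch): the angle
# recovery below uses entries read off the symbolic R_Y_Z_X matrix, e.g.
# R[1][0] = sin(yaw) and R[2][0] = -cos(yaw)*sin(pitch).
import numpy as np
from math import sin, cos, asin, atan2

def Rx(a): return np.array([[1, 0, 0], [0, cos(a), -sin(a)], [0, sin(a), cos(a)]])
def Ry(a): return np.array([[cos(a), 0, sin(a)], [0, 1, 0], [-sin(a), 0, cos(a)]])
def Rz(a): return np.array([[cos(a), -sin(a), 0], [sin(a), cos(a), 0], [0, 0, 1]])

roll, pitch, yaw = 0.3, -0.7, 0.2     # arbitrary test angles, in radians
R = Ry(pitch) @ Rz(yaw) @ Rx(roll)    # the Y-Z-X composition
yaw_r = asin(R[1, 0])                 # since R[1][0] = sin(yaw)
roll_r = atan2(-R[1, 2], R[1, 1])     # -(-cos(yaw)sin(roll)) over cos(roll)cos(yaw)
pitch_r = atan2(-R[2, 0], R[0, 0])    # cos(yaw)sin(pitch) over cos(pitch)cos(yaw)
assert np.allclose([roll_r, pitch_r, yaw_r], [roll, pitch, yaw])
```

(The recovery is valid away from the gimbal-lock case cos(yaw) = 0.)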
**================= Matlab script below for reference =================**

    syms roll
    syms pitch
    syms yaw
    Rx = [1 0 0; 0 cos(roll) -sin(roll); 0 sin(roll) cos(roll)];
    Ry = [cos(pitch) 0 sin(pitch); 0 1 0; -sin(pitch) 0 cos(pitch)];
    Rz = [cos(yaw) -sin(yaw) 0; sin(yaw) cos(yaw) 0; 0 0 1];
    R_Y_Z_X = Ry * Rz * Rx
**Output:**

    R_Y_Z_X =
    [ cos(pitch)*cos(yaw), sin(pitch)*sin(roll) - cos(pitch)*cos(roll)*sin(yaw), cos(roll)*sin(pitch) + cos(pitch)*sin(roll)*sin(yaw)]
    [ sin(yaw), cos(roll)*cos(yaw), -cos(yaw)*sin(roll)]
    [ -cos(yaw)*sin(pitch), cos(pitch)*sin(roll) + cos(roll)*sin(pitch)*sin(yaw), cos(pitch)*cos(roll) - sin(pitch)*sin(roll)*sin(yaw)]

malharjajoo, Sun, 15 Apr 2018 19:31:39 -0500, http://answers.opencv.org/question/189414/

**How to verify the accuracy of solvePnP return values?** (http://answers.opencv.org/question/149759/how-to-verify-the-accuracy-of-solvepnp-return-values/)

Hello, I have used `solvePnP` to find the pose of an object, and I am getting some results for `rvec` and `tvec`. Now I want to know how accurate they are.
How do I compute the accuracy of the values returned by solvePnP?
One method I found is the re-projection error.
But is there any way to **generate test cases**?
**Data needed to generate as test cases:**

(image points, object points)
(expected rvec, expected tvec)

**Result:**

(computed rvec, computed tvec): the values returned by solvePnP for each generated test case.

We then compare `(expected rvec, expected tvec)` against `(computed rvec, computed tvec)` to measure the accuracy of the different flags available for solvePnP.
**Are there any ways/software/tools that help me generate accurate test cases** (varieties of test cases may include a noisy environment, varying distance of the object from the camera, planar object points, non-coplanar object points, etc.)?
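One way to generate exact test cases is synthetic: choose a ground-truth (rvec, tvec), project known object points through a pinhole model, and hand the resulting correspondences to solvePnP. A NumPy sketch (my own; the intrinsics, object points and pose below are made-up numbers, and `rodrigues` is a minimal stand-in for `cv2.Rodrigues`):

```python
# Synthetic test-case generation for a PnP solver (my own sketch; K, the
# object points and the ground-truth pose are made up, and rodrigues() is
# a minimal stand-in for cv2.Rodrigues).
import numpy as np

def rodrigues(rvec):
    """Axis-angle vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def project(obj_pts, rvec, tvec, K):
    """Pinhole projection of Nx3 object points to Nx2 pixel coordinates."""
    Xc = obj_pts @ rodrigues(rvec).T + tvec   # object frame -> camera frame
    uv = (Xc / Xc[:, 2:3]) @ K.T              # perspective divide + intrinsics
    return uv[:, :2]

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
obj_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0.5, 0.5, 1.0]])
rvec_gt = np.array([0.1, -0.2, 0.05])        # expected rvec (ground truth)
tvec_gt = np.array([0.2, -0.1, 5.0])         # expected tvec (ground truth)
img_pts = project(obj_pts, rvec_gt, tvec_gt, K)
# (obj_pts, img_pts, K) is now one exact test case: run solvePnP on it and
# compare the computed (rvec, tvec) against (rvec_gt, tvec_gt); the ground
# truth itself reprojects with ~zero error:
err = np.linalg.norm(project(obj_pts, rvec_gt, tvec_gt, K) - img_pts)
```

Noise, distance, and planarity can then be varied by perturbing `img_pts`, scaling `tvec_gt`, or flattening `obj_pts` onto a plane, which covers the test-case varieties listed above.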
djkpA, Fri, 19 May 2017 01:34:36 -0500, http://answers.opencv.org/question/149759/

**Camera calibration and pose estimation (OpenGL + OpenCV)** (http://answers.opencv.org/question/130792/camera-calibration-and-pose-estimation-opengl-opencv/)

Hello, I'm fairly new to OpenCV. I'm trying to estimate the 3D pose of the camera in order to draw a 3D teapot using OpenGL. I have been testing for about a week and I partially understand the theory. I tried to replicate an example, but I cannot get it to appear correctly. I get the keypoints using SIFT and they look right.
I use this function to obtain the intrinsic and extrinsic parameters:

    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, im_size, None, None)
When I have these parameters, I create the loop to draw the teapot:

    def setup():
        pygame.init()
        pygame.display.set_mode((im_size[0], im_size[1]), pygame.OPENGL | pygame.DOUBLEBUF)
        pygame.display.set_caption('OpenGL AR demo')

    setup()
    S = 1  # Selected Image
    while True:
        event = pygame.event.poll()
        if event.type in (pygame.QUIT, pygame.KEYDOWN):
            break
        draw_background(I[S - 1])
        set_projection_from_camera(K)
        set_modelview_from_camera(rvecs[S - 1], tvecs[S - 1])
        draw_teapot(100)
        pygame.display.flip()
        pygame.time.wait(50)
I estimate the projection using this function:

    def set_projection_from_camera(K):
        glMatrixMode(GL_PROJECTION)
        glLoadIdentity()
        fx = K[0, 0]
        fy = K[1, 1]
        fovy = 2 * arctan(0.5 * im_size[1] / fy) * 180 / pi
        aspect = (im_size[0] * fy) / (im_size[1] * fx)
        # define the near and far clipping planes
        near = 0.1
        far = 500000.0
        # set perspective
        gluPerspective(fovy, aspect, near, far)
        glViewport(0, 0, im_size[0], im_size[1])
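As a sanity check on the fovy/aspect conversion above (my own sketch; fx, fy and the image size are made-up numbers), a frustum built this way reproduces the pinhole projection up to the linear map from pixels to normalized device coordinates:

```python
# Check (my own sketch, made-up numbers): fovy/aspect derived from K give
# the same projection as the pinhole model, after rescaling pixels to NDC.
import numpy as np

w, h = 640, 480
fx, fy = 800.0, 820.0
fovy = 2 * np.arctan(0.5 * h / fy)      # radians here (gluPerspective wants degrees)
aspect = (w * fy) / (h * fx)
f = 1.0 / np.tan(fovy / 2)              # gluPerspective's focal term
X = np.array([0.3, -0.2, 2.0])          # some point in camera coordinates
x_ndc = f / aspect * X[0] / X[2]        # OpenGL normalized device coordinates
y_ndc = f * X[1] / X[2]
u = fx * X[0] / X[2]                    # pinhole coordinates relative to the
v = fy * X[1] / X[2]                    # principal point, in pixels
assert np.isclose(x_ndc, u / (w / 2))   # NDC = pixels / half image size
assert np.isclose(y_ndc, v / (h / 2))
```

So if the teapot's size looks wrong, the problem is more likely in the modelview or units than in this function.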
I estimate the modelview using this function:

    def set_modelview_from_camera(rvec, tvec):
        glMatrixMode(GL_MODELVIEW)
        glLoadIdentity()
        Rx = array([[1, 0, 0], [0, 0, -1], [0, 1, 0]])  # rotate the teapot
        M = eye(4)
        M[:3, :3] = dot(Rx, cv2.Rodrigues(rvec)[0])
        M[:3, 3] = tvec.T
        cv2GlMat = array([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])  # OpenCV -> OpenGL matrix
        M = dot(cv2GlMat, M)
        m = M.T.flatten()
        glLoadMatrixf(m)
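A short note on `cv2GlMat` (my own sketch): OpenCV's camera looks down +z with y pointing down, while OpenGL's looks down -z with y pointing up, so negating y and z converts one convention into the other, and the flip is its own inverse:

```python
# Why the OpenCV -> OpenGL flip works (my own sketch): negating y and z
# moves a point from in front of the OpenCV camera (z > 0) to in front of
# the OpenGL camera (z < 0), and applying the flip twice is the identity.
import numpy as np

cv2GlMat = np.array([[1, 0, 0, 0],
                     [0, -1, 0, 0],
                     [0, 0, -1, 0],
                     [0, 0, 0, 1]], dtype=float)
p_cv = np.array([0.1, 0.2, 3.0, 1.0])   # point in front of the OpenCV camera
p_gl = cv2GlMat @ p_cv
assert p_gl[2] < 0                       # in front of the OpenGL camera
assert np.allclose(cv2GlMat @ cv2GlMat, np.eye(4))   # flip is an involution
```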
The function to draw the background (the real image):

    def draw_background(I):
        bg_image = Image.fromarray(I)
        bg_data = bg_image.tobytes('raw', 'RGBX', 0, -1)
        glMatrixMode(GL_MODELVIEW)
        glLoadIdentity()
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        # bind the texture
        glEnable(GL_TEXTURE_2D)
        glBindTexture(GL_TEXTURE_2D, glGenTextures(1))
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bg_image.size[0], bg_image.size[1], 0, GL_RGBA, GL_UNSIGNED_BYTE, bg_data)
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
        # create quad to fill the whole window
        glBegin(GL_QUADS)
        glTexCoord2f(0.0, 0.0); glVertex3f(-1, -1, -1.0)
        glTexCoord2f(1.0, 0.0); glVertex3f(1, -1, -1.0)
        glTexCoord2f(1.0, 1.0); glVertex3f(1, 1, -1.0)
        glTexCoord2f(0.0, 1.0); glVertex3f(-1, 1, -1.0)
        glEnd()
        # clear the texture
        glDeleteTextures(1)
I understand the concepts of OpenCV calibration, but I cannot put them together in OpenGL and show the object correctly on the screen.

totolia, Wed, 01 Mar 2017 07:33:45 -0600, http://answers.opencv.org/question/130792/

**Any plans to implement the face detection algorithm proposed by [Zhu and Ramanan, 2012]?** (http://answers.opencv.org/question/103215/any-plans-to-implement-the-face-detection-algorithm-proposed-by-zhu-and-ramanan-2012/)

It's considered the state-of-the-art algorithm for face detection.
Link to the paper: http://www.ics.uci.edu/~xzhu/paper/face-cvpr12.pdf

Oxydron, Wed, 28 Sep 2016 16:16:35 -0500, http://answers.opencv.org/question/103215/