2020-05-19 03:59:24 -0600 | received badge | ● Notable Question (source) |
2020-02-03 08:26:20 -0600 | received badge | ● Notable Question (source) |
2019-11-12 02:33:38 -0600 | marked best answer | Use sift and surf in latest release (4.1.1) on Ubuntu 18.04? Context While importing
Question Is there a way to use these algorithms again without compiling opencv? |
2019-11-11 07:27:51 -0600 | asked a question | Use sift and surf in latest release (4.1.1) on Ubuntu 18.04? Use sift and surf in latest release (4.1.1) on Ubuntu 18.04? Context While importing cv2 which I installed using pip, |
2019-07-30 04:33:51 -0600 | received badge | ● Popular Question (source) |
2019-07-22 21:03:13 -0600 | received badge | ● Popular Question (source) |
2019-04-16 10:05:48 -0600 | received badge | ● Famous Question (source) |
2019-02-11 06:27:22 -0600 | commented question | Build opencv with opencv_contrib and latest protobuf (3.6.1) In fact, it doesn't work either. I had the same issues with caffe and dnn. (caffe was compiled using protobuf 3.6.1 on my mach |
2019-02-11 03:56:06 -0600 | edited question | Build opencv with opencv_contrib and latest protobuf (3.6.1) Build opencv with opencv_contrib and latest protobuf (3.6.1) Hi. I'm trying to build opencv from sources (https://githu |
2019-01-31 09:55:16 -0600 | edited question | Build opencv with opencv_contrib and latest protobuf (3.6.1) Build opencv with opencv_contrib and latest protobuf (3.6.1) Hi. I'm trying to build opencv from sources (https://githu |
2019-01-31 09:35:28 -0600 | asked a question | Build opencv with opencv_contrib and latest protobuf (3.6.1) Build opencv with opencv_contrib and latest protobuf (3.6.1) Hi. I'm trying to build opencv from sources (https://githu |
2018-12-06 09:08:57 -0600 | edited question | latest opencv bindings for python3 go to an unknown python directory on Ubuntu latest opencv bindings for python3 go to an unknown python directory on Ubuntu Issue description: When using cmake-gui |
2018-12-06 09:08:15 -0600 | asked a question | latest opencv bindings for python3 go to an unknown python directory on Ubuntu latest opencv bindings for python3 go to an unknown python directory on Ubuntu Issue description: When using cmake-gui |
2018-07-26 09:09:01 -0600 | received badge | ● Notable Question (source) |
2018-04-18 01:45:47 -0600 | received badge | ● Popular Question (source) |
2017-07-18 07:22:50 -0600 | commented question | Camera pose from homography? So, basically, if I have some complex 3D object like a tree or scene like a landscape with no planes at all, it won't work? |
2017-07-18 05:14:37 -0600 | asked a question | Draw good matches after RANSAC in green and discarded matches in red For an image pair, I'd first like to draw all matches according to Lowe's distance ratio. Then, I'd like to filter them using a RANSAC homography. Up to this point I'm OK. But I'd like to represent all the matches kept after RANSAC with green lines and all the discarded matches in red on the same image pair. How could I achieve that?
Reference for SIFT: https://www.robots.ox.ac.uk/~vgg/rese... |
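One common way to answer this question is to use the inlier mask returned by `cv2.findHomography` to split the matches into the kept and discarded groups. A minimal sketch of that partitioning, in pure Python/numpy (the `cv2.drawMatches` calls in the comments are the assumed drawing step, not code from the original thread):

```python
import numpy as np

def split_matches_by_mask(matches, inlier_mask):
    """Partition matches into RANSAC inliers and outliers.

    `inlier_mask` is the Nx1 uint8 mask returned by cv2.findHomography
    (1 = kept by RANSAC, 0 = discarded).
    """
    mask = np.asarray(inlier_mask).ravel().astype(bool)
    inliers = [m for m, keep in zip(matches, mask) if keep]
    outliers = [m for m, keep in zip(matches, mask) if not keep]
    return inliers, outliers

# Illustrative use (with OpenCV these would be cv2.DMatch objects and the
# mask would come from cv2.findHomography; strings stand in for matches):
matches = ["m0", "m1", "m2", "m3"]
mask = np.array([[1], [0], [1], [1]], dtype=np.uint8)
green, red = split_matches_by_mask(matches, mask)
print(green, red)  # → ['m0', 'm2', 'm3'] ['m1']

# With OpenCV, the two groups could then be drawn on the same canvas:
#   out = cv2.drawMatches(img1, kp1, img2, kp2, green, None,
#                         matchColor=(0, 255, 0))
#   out = cv2.drawMatches(img1, kp1, img2, kp2, red, out,
#                         matchColor=(0, 0, 255),
#                         flags=cv2.DrawMatchesFlags_DRAW_OVER_OUTIMG)
```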
2017-07-15 02:05:54 -0600 | asked a question | Camera pose from homography? Given K, an intrinsic camera matrix, a reference image from a camera 1 whose pose is known and an image from a camera 2 whose pose is unknown, is there a way to compute the pose of camera 2 using the homography matrix found between the two images from matched key-points if I know their 3D coordinates (these points may not be coplanar at all)? Or, if not, from either the fundamental or the essential matrix? Could it perform better or faster than solvePnP? |
2017-07-07 12:18:03 -0600 | commented answer | Rodrigues rotation K is the cross-product equivalent matrix form of the vector k. That is, applying K to a vector v (a simple matrix product) gives exactly the same result as the cross product between k and your vector v. |
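That identity can be checked in a few lines of numpy (the vectors k and v below are arbitrary examples):

```python
import numpy as np

def skew(k):
    """Cross-product matrix of k: skew(k) @ v == np.cross(k, v)."""
    kx, ky, kz = k
    return np.array([[0, -kz,  ky],
                     [kz,  0, -kx],
                     [-ky, kx,  0]])

k = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
print(np.allclose(skew(k) @ v, np.cross(k, v)))  # True
```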
2017-07-03 16:34:24 -0600 | commented answer | Rodrigues rotation Anyway, in the 'talk' page on wiki ( https://en.wikipedia.org/wiki/Talk:Ro... ), one can read an interesting thing under "Error in formula?"... which I absolutely do not understand, by the way... There must be some tweak or whatever, but I have also seen this formula without the cos(θ) for the rotation matrix definition. |
2017-07-02 17:49:04 -0600 | commented answer | Rodrigues rotation Yes, but it's also consistent with what is further down the page: https://wikimedia.org/api/rest_v1/med... so there is no cos(θ)... |
2017-07-02 16:47:32 -0600 | asked a question | Rodrigues rotation I do not understand the difference between these two equations:
https://en.wikipedia.org/wiki/Rodrigu...
http://docs.opencv.org/2.4/modules/ca...
Shouldn't it be: v_{rot} = cos(θ)v + sin.... ? Then on the wiki page, there is no longer a cos(θ) in the definition of R...
|
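The two forms the question compares are in fact the same rotation written differently: for a unit axis k, K² = kkᵀ − I, which is exactly where the explicit cos(θ) term gets absorbed. A small numpy check (axis, angle and vector chosen arbitrarily):

```python
import numpy as np

def skew(k):
    kx, ky, kz = k
    return np.array([[0, -kz, ky], [kz, 0, -kx], [-ky, kx, 0]])

k = np.array([1.0, 2.0, 2.0])
k /= np.linalg.norm(k)                  # unit rotation axis
v = np.array([3.0, -1.0, 0.5])
theta = 0.7
K = skew(k)

# Vector form of the Rodrigues formula (the one WITH cos(θ) on v):
v_rot = (np.cos(theta) * v
         + np.sin(theta) * np.cross(k, v)
         + (1 - np.cos(theta)) * np.dot(k, v) * k)

# Matrix form (no explicit cos(θ) on v, as in the OpenCV docs):
R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# They agree because K @ K == np.outer(k, k) - np.eye(3) for unit k,
# so I + (1 - cosθ) K² == cosθ · I + (1 - cosθ) · k kᵀ.
print(np.allclose(R @ v, v_rot))  # True
```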
2017-07-02 05:05:12 -0600 | commented answer | StereoBM_create.compute() with different images shape In fact those images are old archive images; I can only make the same assumption for all of them, which results in a synthetic intrinsic matrix (I do know the focal length for each image, but the center is taken at the center of the image) and distortion coefficients set to zero. Two images which show some stereo may then have been taken by two different, very old cameras. So I guess it's not possible to find a stereo map with the pairs? |
2017-07-01 19:36:05 -0600 | asked a question | StereoBM_create.compute() with different images shape I would like to find the depth or disparity map for an image pair, but the images don't have exactly the same dimensions... So I have this (strange) error: From this point, three questions:
I also have some really weird results:
|
2017-06-30 15:33:45 -0600 | commented question | Pose estimation Check out the You can also use the rotation matrix from Then, camera position expressed in world coordinates is given by:
|
2017-06-30 15:07:10 -0600 | asked a question | Zoom in image to retrieve pixel coordinates I would like to know if there is something I can do to zoom in on the image I'm working with? Here's the current code: And what is the part |
2017-06-30 13:59:32 -0600 | answered a question | Retrieve yaw, pitch, roll from rvec I think (and hope) I'm done with it. Here's my workaround (if not a final solution) in 6 steps: 0. Imports: 1. Retrieve rvec and tvec from the 2. Convert This matrix is a rotation matrix for an x-y'-z" Tait-Bryan sequence if I'm not wrong (that's what I was searching for days!). So r_total = rz·ry·rx (rx occurs first). You can imagine you first have a camera frame (z-axis = through the lens, x = right, y = bottom) perfectly superposed on the world frame. You rotate it around x first, then y, finally z. The angles are the Euler angles hereafter. Lowercase axes = camera frame axes, uppercase = world frame axes. The camera frame is firmly attached to the body. 3. If you need it, the camera position expressed in the world frame (OXYZ) is given by: 4. Create the projection matrix P = [ R | t ]: 5. Use I noticed we have to take the negative values here to be compliant with a conventional rotation (looking in the same direction as the vector perpendicular to the plane where the rotation occurs, clockwise = positive; it's also the conventional sense in mathematics). Euler angles are the angles making up the total rotation, but expressed separately about the 3 axes of an oxyz frame firmly attached to the camera. Euler angles form a 3x1 vector. 6. To retrieve the attitude of the camera (just as if it were an airplane) in its own body-attached frame, here's the magic that seems to work for me so far (one would have to check this with a more precise instrument than my eyes...): (more) |
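The angle-extraction step of this answer amounts to recovering the Tait-Bryan angles from r_total = rz·ry·rx. A minimal numpy sketch of that extraction, as a reconstruction of the idea rather than the poster's exact code (the standard atan2 formulas; gimbal lock at pitch = ±90° is not handled):

```python
import numpy as np

def euler_zyx_from_R(R):
    """Recover (roll_x, pitch_y, yaw_z) from R = Rz @ Ry @ Rx,
    i.e. the x-y'-z'' Tait-Bryan sequence (rx applied first)."""
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))
    roll = np.arctan2(R[2, 1], R[2, 2])
    return roll, pitch, yaw

# Elementary rotations about the x, y and z axes:
def Rx(a): c, s = np.cos(a), np.sin(a); return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
def Ry(a): c, s = np.cos(a), np.sin(a); return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
def Rz(a): c, s = np.cos(a), np.sin(a); return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Round-trip check with made-up angles:
roll0, pitch0, yaw0 = 0.1, -0.4, 1.2
R = Rz(yaw0) @ Ry(pitch0) @ Rx(roll0)
print(np.allclose(euler_zyx_from_R(R), (roll0, pitch0, yaw0)))  # True
```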
2017-06-30 06:09:35 -0600 | commented answer | Retrieve yaw, pitch, roll from rvec Yes but the rotation matrix given by |
2017-06-29 19:15:12 -0600 | edited question | Retrieve yaw, pitch, roll from rvec I need to retrieve the attitude angles of a camera (using
Then I have computed: If I'm right, the camera position in the world coordinate system is given by: But how do I retrieve the corresponding attitude angles (yaw, pitch and roll as described above) from the point of view of the observer (thus the camera)? I have tried implementing this: http://planning.cs.uiuc.edu/node102.h... in a function: but it gives me results which are far from reality on a real dataset (even when applying it to the inverse rotation matrix). Update: Rotation order seems to be of the greatest importance. From: https://en.wikipedia.org/wiki/Euler_a... Update: I'm finally done. Here's the solution: That's all folks! |
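The camera-position formula this question alludes to is C = −Rᵀt, where (R, t) is the world-to-camera transform from solvePnP. A small numpy check (the `rodrigues` helper is a hand-rolled stand-in for `cv2.Rodrigues`, and the pose values are made up):

```python
import numpy as np

def rodrigues(rvec):
    """Minimal stand-in for cv2.Rodrigues (rotation vector -> matrix)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

rvec = np.array([0.1, -0.2, 0.3])
R = rodrigues(rvec)
C_true = np.array([1.0, 2.0, 3.0])   # camera centre in world coordinates
tvec = -R @ C_true                   # because x_cam = R x_world + t

C = -R.T @ tvec                      # recover the camera centre
print(np.allclose(C, C_true))  # True
```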
2017-06-29 14:25:05 -0600 | asked a question | decomposeProjectionMatrix leads to strange rotation matrix I don't understand why this ( Then, Do you know why? Update: Another way to see it is the following: Let's retrieve some translation and rotation vectors from solvePnP: Then, let's rebuild the rotation matrix from the rotation vector: Projection matrix: And finally create the projection matrix as P = [ R | t ] with an extra row of [0, 0, 0, 1] to make it square: If I understand correctly, this matrix (does it have a name?), in addition to the camera intrinsic parameter matrix, brings points from the world reference frame to the camera reference frame. Checking: It is easily checked by drawing projected points on the original image: where Projected points may then be drawn on the image: It works well. Projected points are near the original points. Inverse ...(more) |
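The P = [ R | t ] construction with the extra [0, 0, 0, 1] row (often called the homogeneous extrinsic, or world-to-camera, matrix) can be sanity-checked exactly as the question describes, by projecting a world point through K·[R | t]. A toy numpy sketch (K, R, t and the test point are invented values, not from the question):

```python
import numpy as np

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
R = np.eye(3)                         # toy pose: camera aligned with world
t = np.array([[0.0], [0.0], [5.0]])   # world origin 5 units in front

# 4x4 world-to-camera matrix; square, hence invertible, so P^-1 maps
# camera coordinates back to world coordinates.
P = np.vstack([np.hstack([R, t]), [0, 0, 0, 1]])

Xw = np.array([1.0, -1.0, 0.0, 1.0])  # homogeneous world point
x = K @ (P @ Xw)[:3]                  # project into the image
u, v = x[:2] / x[2]                   # de-homogenize to pixels
print(u, v)  # → 460.0 100.0
```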