
swiss_knight's profile - activity

2020-05-19 03:59:24 -0600 received badge  Notable Question (source)
2020-02-03 08:26:20 -0600 received badge  Notable Question (source)
2019-11-12 02:33:38 -0600 marked best answer Use sift and surf in latest release (4.1.1) on Ubuntu 18.04?

Context

While importing cv2, which I installed using pip, I encountered the message below. Hence I'm no longer able to run an old demonstration code comparing different keypoint detectors and descriptors, which I built a couple of (~2) years ago:

error: OpenCV(4.1.1) /io/opencv_contrib/modules/xfeatures2d/src/sift.cpp:1207: error: (-213:The function/feature is not implemented) This algorithm is patented and is excluded in this configuration; Set OPENCV_ENABLE_NONFREE CMake option and rebuild the library in function

Question

Is there a way to use these algorithms again without compiling opencv?
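
For reference, a minimal sketch of the usual workaround (an assumption on my side, not confirmed here: an older contrib wheel such as opencv-contrib-python==3.4.2.16 still ships the non-free modules):

    import cv2

    # Assumes an older wheel was installed, e.g.:
    #   pip install opencv-contrib-python==3.4.2.16
    # Recent pip wheels are built with OPENCV_ENABLE_NONFREE=OFF, so the
    # patented detectors raise the error quoted above.
    try:
        sift = cv2.xfeatures2d.SIFT_create()
        surf = cv2.xfeatures2d.SURF_create()
        print("non-free detectors available")
    except (AttributeError, cv2.error):
        print("SIFT/SURF unavailable in this build")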

2019-11-11 07:27:51 -0600 asked a question Use sift and surf in latest release (4.1.1) on Ubuntu 18.04?

Use sift and surf in latest release (4.1.1) on Ubuntu 18.04? Context While importing cv2 which I installed using pip,

2019-07-30 04:33:51 -0600 received badge  Popular Question (source)
2019-07-22 21:03:13 -0600 received badge  Popular Question (source)
2019-04-16 10:05:48 -0600 received badge  Famous Question (source)
2019-02-11 06:27:22 -0600 commented question Build opencv with opencv_contrib and latest protobuf (3.6.1)

In fact, it doesn't work either. I had the same issues with caffe and dnn. (caffe was compiled using protobuf 3.6.1 on my mach

2019-02-11 03:56:06 -0600 edited question Build opencv with opencv_contrib and latest protobuf (3.6.1)

Build opencv with opencv_contrib and latest protobuf (3.6.1) Hi. I'm trying to build opencv from sources (https://githu

2019-01-31 09:55:16 -0600 edited question Build opencv with opencv_contrib and latest protobuf (3.6.1)

Build opencv with opencv_contrib and latest protobuf (3.6.1) Hi. I'm trying to build opencv from sources (https://githu

2019-01-31 09:35:28 -0600 asked a question Build opencv with opencv_contrib and latest protobuf (3.6.1)

Build opencv with opencv_contrib and latest protobuf (3.6.1) Hi. I'm trying to build opencv from sources (https://githu

2018-12-06 09:08:57 -0600 edited question latest opencv bindings for python3 goes to an unknown python directory on Ubuntu

latest opencv bindings for python3 goes to an unknown python directory on Ubuntu Issue description: When using cmake-gui

2018-12-06 09:08:15 -0600 asked a question latest opencv bindings for python3 goes to an unknown python directory on Ubuntu

latest opencv bindings for python3 goes to an unknown python directory on Ubuntu Issue description: When using cmake-gui

2018-07-26 09:09:01 -0600 received badge  Notable Question (source)
2018-04-18 01:45:47 -0600 received badge  Popular Question (source)
2017-07-18 07:22:50 -0600 commented question Camera pose from homography?

So, basically, if I have some complex 3D object like a tree, or a scene like a landscape with no planes at all, it won't work?

2017-07-18 05:14:37 -0600 asked a question Draw good matches after RanSaC in green and discarded matches in red

For an image pair, I'd first like to draw all the matches that pass Lowe's distance-ratio test.

Then, I'd like to filter them using a RanSaC homography.

Up to this point I'm ok.

But I'd like to represent all the matches kept after RanSaC with green lines and all the discarded matches in red on the same image pair.

How could I achieve that?


Here's part of my code so far. It assumes I already have SIFT keypoints and descriptors for both image1 and image2 from a previous processing step; for the moment I save the results from before and after RanSaC in two separate files:

        FLANN_INDEX_KDTREE = 1
        index_params  = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
        search_params = dict(checks=50)
        flann = cv2.FlannBasedMatcher(index_params, search_params)

        matches = flann.knnMatch(desc1, desc2, k=2) 

        # Create a mask to draw only good matches:
        matchesMask = [[0,0] for j in xrange(len(matches))]

        # Ratio test as per Lowe's 2004 paper:
        gooddist  = []
        pts1 = []
        pts2 = []
        for a,(m,n) in enumerate(matches): # 1st and 2nd NN matches
            if m.distance < 0.7*n.distance: # play with ratio... 
                matchesMask[a] = [1,0]
                gooddist.append(m)           

        good = gooddist

        if len(good)>=6: 
            src_pts = np.float32([ kp1[m.queryIdx].pt for m in good ])
            dst_pts = np.float32([ kp2[m.trainIdx].pt for m in good ])
            homog_method      = cv2.RANSAC
            homog_reprojthres = 5.0
            homog_mask        = None
            homog_max_iter    = 2000
            homog_confidence  = 0.995
            M, mask = cv2.findHomography(src_pts, dst_pts, homog_method, homog_reprojthres, homog_mask, homog_max_iter, homog_confidence)
            matchesMask2 = mask.ravel().tolist()


            draw_params3 = dict(matchColor = (0,255,255), # yellow
                               #singlePointColor = (20,255,200) if not loadSIFT else (255,220,20),
                                matchesMask = matchesMask, # draw only ratio-test survivors
                                flags = 2) # Before RANSAC

            draw_params4 = dict(matchColor = (0,255,0), # green
                                singlePointColor = None,
                                matchesMask = matchesMask2, # draw only RANSAC inliers
                                flags = 2) # After RANSAC

            img1   = cv2.imread(imgfile1, 1) # 1:= color-mode                
            img2   = cv2.imread(imgfile2, 1) # 1:= color-mode

            img3  = cv2.drawMatchesKnn(img1,kp1,img2,kp2,matches,None,**draw_params3)
            cv2.imwrite(file_nameA_on_disk, img3)

            img4  = cv2.drawMatches(img1,kp1,img2,kp2,good,None,**draw_params4)
            cv2.imwrite(file_nameB_on_disk, img4)
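
To get both sets on the same image pair, one option is a sketch like the following (my own assumption, not tested against this exact code: it relies on flag value 1 being DrawMatchesFlags::DRAW_OVER_OUTIMG so the second call draws over the first canvas, and file_nameC_on_disk is a placeholder):

    inlier_mask  = mask.ravel().tolist()
    outlier_mask = [1 - v for v in inlier_mask]

    # First pass: matches discarded by RanSaC, in red, on a fresh canvas.
    img5 = cv2.drawMatches(img1, kp1, img2, kp2, good, None,
                           matchColor=(0,0,255), matchesMask=outlier_mask,
                           flags=2)
    # Second pass: RanSaC inliers in green, drawn over the same canvas
    # (flags = 1 | 2: draw over outImg, no single keypoints).
    img5 = cv2.drawMatches(img1, kp1, img2, kp2, good, img5,
                           matchColor=(0,255,0), matchesMask=inlier_mask,
                           flags=3)
    cv2.imwrite(file_nameC_on_disk, img5)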

Reference for SIFT: https://www.robots.ox.ac.uk/~vgg/rese...

2017-07-15 02:05:54 -0600 asked a question Camera pose from homography?

Given K, an intrinsic camera matrix, a reference image from a camera 1 whose pose is known, and an image from a camera 2 whose pose is unknown: is there a way to compute the pose of camera 2 using the homography matrix found between the two images from matched key-points, if I know their 3D coordinates (these points may not be coplanar at all)? Or, if not, from either the fundamental or the essential matrix?

Could it perform better or faster than solvePnP?

2017-07-07 12:18:03 -0600 commented answer Rodrigues rotation

K is the cross-product-equivalent matrix form of the vector k. That is, applying K to a vector v (a simple matrix product) gives exactly the same result as the vector cross product between k and v.
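
A minimal numerical sketch of that statement (names of my own choosing):

    import numpy as np

    def skew(k):
        # Cross-product (skew-symmetric) matrix K = [k]_x of a 3-vector k.
        return np.array([[    0, -k[2],  k[1]],
                         [ k[2],     0, -k[0]],
                         [-k[1],  k[0],     0]])

    k = np.array([1.0, 2.0, 3.0])
    v = np.array([4.0, 5.0, 6.0])
    print(np.allclose(np.dot(skew(k), v), np.cross(k, v)))  # True: K v == k x v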

2017-07-03 16:34:24 -0600 commented answer Rodrigues rotation

Anyway, on the 'talk' page on wiki ( https://en.wikipedia.org/wiki/Talk:Ro... ), one can read an interesting thing under "Error in formula?"... which, by the way, I absolutely do not understand... There must be some tweak or whatever, but I have also seen this formula without the cos(θ) in the rotation matrix definition.

2017-07-02 17:49:04 -0600 commented answer Rodrigues rotation

Yes, but it's also consistent with what appears further down the page:

https://wikimedia.org/api/rest_v1/med... so there is no cos(θ)...

2017-07-02 16:47:32 -0600 asked a question Rodrigues rotation

I do not understand the difference between these two equations:


1. From Wikipedia:
[image: wiki Rodrigues formula]

https://en.wikipedia.org/wiki/Rodrigu...


2. From the OpenCV doc:
[image: cv2 Rodrigues formula]

http://docs.opencv.org/2.4/modules/ca...


Where has the cos(θ) gone in formula 1 on the wiki page?

Shouldn't it be: v_{rot} = cos(θ)·v + sin(θ)·(k × v) + (1 - cos(θ))·(k·v)·k ?

Then, on the wiki page, there is no cos(θ) in the definition of R either...


Or did I miss something?
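
For what it's worth, a quick numerical check (a sketch of mine, with arbitrary values) that the vector form and the matrix form with cos(θ) agree:

    import numpy as np

    t = 0.7                                    # arbitrary angle (radians)
    k = np.array([1.0, 2.0, 2.0])
    k /= np.linalg.norm(k)                     # unit rotation axis
    v = np.array([0.3, -1.2, 0.5])             # arbitrary vector

    K = np.array([[    0, -k[2],  k[1]],
                  [ k[2],     0, -k[0]],
                  [-k[1],  k[0],     0]])      # cross-product matrix of k

    # Matrix form (OpenCV doc): R = cos(t) I + (1 - cos(t)) k k^T + sin(t) K
    R = np.cos(t)*np.eye(3) + (1 - np.cos(t))*np.outer(k, k) + np.sin(t)*K
    # Vector form: v_rot = v cos(t) + (k x v) sin(t) + k (k.v)(1 - cos(t))
    v_rot = v*np.cos(t) + np.cross(k, v)*np.sin(t) + k*np.dot(k, v)*(1 - np.cos(t))

    print(np.allclose(np.dot(R, v), v_rot))    # True: both forms agree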

2017-07-02 05:05:12 -0600 commented answer StereoBM_create.compute() with different images shape

In fact those images are old archive images; I can only make the same assumptions for all of them, which results in a synthetic intrinsic matrix (I do know the focal length for each image, but the principal point is taken at the center of the image) and distortion coefficients set to zero. Two images showing some stereo may then have been taken by two different, very old cameras. So I guess it's not possible to compute a stereo map with such pairs?

2017-07-01 19:36:05 -0600 asked a question StereoBM_create.compute() with different images shape

I would like to compute the depth or disparity map for an image pair, but the images don't have exactly the same dimensions...

So I have this (strange) error:

error: /..../opencv/modules/calib3d/src/stereobm.cpp:1069: error: (-209) All the images must have the same size in function compute

From this point, three questions:

  1. Is it possible to run StereoBM_create.compute() on images with different sizes?
  2. Does it make sense to resize one image to match the smaller one?
  3. Is there any other tool to perform a depth computation that could work on images with different sizes?

I also get some really weird results: [image: disparity map]
The matcher was created like this: stereo = cv2.StereoBM_create(numDisparities=16, blockSize=21)


(I run cv2 on Python 2.7)
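
For question 2, here is a minimal sketch of the kind of thing I have in mind (file names are placeholders, and I assume cropping to the common size only makes geometric sense if the pair stays roughly rectified):

    import cv2

    imgL = cv2.imread('left.png', 0)    # placeholder file names,
    imgR = cv2.imread('right.png', 0)   # loaded as grayscale

    # Crop both images to the shared top-left region so shapes match.
    h = min(imgL.shape[0], imgR.shape[0])
    w = min(imgL.shape[1], imgR.shape[1])
    imgL, imgR = imgL[:h, :w], imgR[:h, :w]

    stereo = cv2.StereoBM_create(numDisparities=16, blockSize=21)
    disparity = stereo.compute(imgL, imgR)  # same-size inputs: no size error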

2017-06-30 15:33:45 -0600 commented question Pose estimation

Check out the cv2.decomposeProjectionMatrix() function to invert the projection matrix.

You can also use the rotation matrix from cv2.Rodrigues():
rmat = cv2.Rodrigues(rvec)[0]

Then the camera position expressed in world coordinates is given by (since x_cam = R·x_world + t, the camera centre c satisfies 0 = R·c + t, hence c = -R^T·t):

cam_pos = -np.matrix(rmat).T * np.matrix(tvec)

2017-06-30 15:07:10 -0600 asked a question Zoom in image to retrieve pixel coordinates

I would like to know if there is something I can do to zoom into the image I'm working with.
The goal is to grab the coordinates of the points I double-click on.
It's mainly for high-res images, on which I'd like more precision.
Reaching a scale factor of 1 image pixel = 1 screen pixel would be fine.

Here's the current code:

import cv2

imgcolor = cv2.imread('image.png', 1)  # placeholder file name, color mode

image_points = list()
def grep_pos(event, x, y, flags, param):
    if event == cv2.EVENT_LBUTTONDBLCLK:
        cv2.circle(imgcolor, (x, y), 10, (255, 255, 0), -1)  # mark the point
        image_points.append((x, y))
    if event == cv2.EVENT_MOUSEWHEEL:
        a = "maybe do stuff here to zoom, but how ?"

cv2.namedWindow('image', cv2.WINDOW_NORMAL)
cv2.setMouseCallback('image', grep_pos)
while True:
    cv2.imshow('image', imgcolor)
    if cv2.waitKey(20) & 0xFF == 27:  # exit on Esc
        break

cv2.destroyAllWindows()

And what is the & 0xFF == 27 part for?

2017-06-30 13:59:32 -0600 answered a question Retrieve yaw, pitch, roll from rvec

I think (and hope) I'm done with it.

Here's my workaround (if not a final solution), in steps 0 to 6:

0. Imports :

import math, numpy as np, cv2

1. Retrieve rvec and tvec from the cv2.solvePnP() function (or its RanSaC variant):

retval, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, cam_mat, dist_coeffs, rvec, tvec, flags=cv2.SOLVEPNP_ITERATIVE)

2. Convert rvec to a 3x3 matrix using cv2.Rodrigues():

rmat = cv2.Rodrigues(rvec)[0]

This matrix is the rotation matrix for an x-y'-z'' Tait-Bryan sequence, if I'm not wrong (that's what I had been searching for for days!). So r_total = rz·ry·rx (rx occurs first). You can imagine first having a camera frame (z-axis = through the lens, x = right, y = bottom) perfectly superposed on the world frame. You rotate it around x first, then y, and finally z. The angles are the Euler angles given hereafter. Lowercase = camera frame axes, uppercase = world frame axes. The camera frame is firmly attached to the body.
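
As a side illustration (my own sketch, with arbitrary angles): the composition order matters, which is why the sequence has to be stated explicitly:

    import math, numpy as np
    ax, ay, az = 0.1, 0.4, 1.2   # arbitrary angles (radians) around x, y, z
    Rx = np.array([[1, 0, 0],
                   [0, math.cos(ax), -math.sin(ax)],
                   [0, math.sin(ax),  math.cos(ax)]])
    Ry = np.array([[ math.cos(ay), 0, math.sin(ay)],
                   [ 0,            1, 0           ],
                   [-math.sin(ay), 0, math.cos(ay)]])
    Rz = np.array([[math.cos(az), -math.sin(az), 0],
                   [math.sin(az),  math.cos(az), 0],
                   [0,             0,            1]])
    print(np.allclose(Rz.dot(Ry).dot(Rx), Rx.dot(Ry).dot(Rz)))  # False: order matters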

3. If you need it, the camera position expressed in the world frame (OXYZ) is given by:

cam_pos     = -np.matrix(rmat).T * np.matrix(tvec)

4. Create the projection matrix P = [ R | t ]:

P = np.hstack((rmat,tvec))

5. Use cv2.decomposeProjectionMatrix() (or any home-made function) to retrieve Euler angles:

euler_angles_radians = -cv2.decomposeProjectionMatrix(P)[6]
euler_angles_degrees = 180 * euler_angles_radians/math.pi

I noticed we have to take the negative values here to be compliant with a conventional rotation (looking in the same direction as the vector perpendicular to the plane where the rotation occurs, clockwise = positive; it's the conventional sense in mathematics too).
This is because OpenCV works with a mobile frame around the vector, while I was thinking in terms of a fixed one. More information here: mathworld.wolfram.com/RotationMatrix.html

Euler angles are the angles making up the total rotation, but expressed separately about the 3 axes of an oxyz frame firmly attached to the camera.
Initially, the camera is aligned with the world frame, facing upwards if your world frame Z-axis points upward. X = to the right of the cam; Y = normal to the two others, toward the bottom of the cam.
The first rotation is around X, then Y, and finally Z. This can be written r_total = rz·ry·rx (rx occurs first).

The Euler angles form a 3x1 vector.
Element [0,0] is the rotation around x, element [1,0] the rotation around y, and element [2,0] the rotation around z.

6. To retrieve the attitude of the camera (just as if it were an airplane) in its own body-attached frame, here's the magic that seems to work for me so far (one would have to check this with a more precise instrument than my eyes...):

eul    = euler_angles_radians
yaw    = 180*eul[1,0]/math.pi # warn: singularity if camera is facing perfectly upward. Value 0 yaw is given by the Y-axis of the world frame.
pitch  = 180*((eul[0,0]+math.pi/2)*math.cos(eul[1,0]))/math.pi
roll ...
(more)
2017-06-30 06:09:35 -0600 commented answer Retrieve yaw, pitch, roll from rvec

Yes, but the rotation matrix given by cv2.Rodrigues(rvec) can only correspond to one of the matrices presented in my first post. If it were decomposed another way, it would yield strange results, wouldn't it?

2017-06-29 19:15:12 -0600 edited question Retrieve yaw, pitch, roll from rvec

I need to retrieve the attitude angles of a camera (using cv2 on Python).

  • Yaw being the general orientation of the camera on a horizontal plane: facing north = 0, east = 90°, south = 180°, west = 270°, etc.
  • Pitch being the "nose" orientation of the camera: 0° = horizontal, -90° = looking straight down, +90° = looking straight up, 45° = looking up at 45° from the horizon, etc.
  • Roll being whether the camera is tilted left or right when held in your hands: +45° = tilted 45° clockwise as you hold it, so +90° (and -90°) would be the angle needed for a portrait picture, for example, etc.


I already have rvec and tvec from solvePnP().

Then I have computed:
rmat = cv2.Rodrigues(rvec)[0]

If I'm right, the camera position in the world coordinate system is given by:
position_camera = -np.matrix(rmat).T * np.matrix(tvec)

But how do I retrieve the corresponding attitude angles (yaw, pitch and roll as described above) from the point of view of the observer (thus the camera)?

I have tried implementing this: http://planning.cs.uiuc.edu/node102.h... in a function:

import math
import numpy as np

def rotation_matrix_to_attitude_angles(R):
    cos_beta = math.sqrt(R[2,1] * R[2,1] + R[2,2] * R[2,2])
    singular = cos_beta < 1e-6
    if not singular:
        alpha = math.atan2(R[1,0], R[0,0])    # yaw   [z]
        beta  = math.atan2(-R[2,0], cos_beta) # pitch [y]
        gamma = math.atan2(R[2,1], R[2,2])    # roll  [x]
    else:  # gimbal lock: roll is undefined, set it to zero
        alpha = math.atan2(R[1,0], R[0,0])    # yaw   [z]
        beta  = math.atan2(-R[2,0], cos_beta) # pitch [y]
        gamma = 0                             # roll  [x]

    return np.array([alpha, beta, gamma])

but it gives me results that are far from reality on a real dataset (even when applying it to the inverse rotation matrix, rmat.T).
Am I doing something wrong?
And if yes, what?
All the information I've found is incomplete (it never rigorously states which reference frame is being used).
Thanks.

Update:

The rotation order seems to be of the greatest importance.
So: do you know which of these matrices the cv2.Rodrigues(rvec) result corresponds to? [image: rotation matrices]

From: https://en.wikipedia.org/wiki/Euler_a...

Update:

I'm finally done. Here's the solution:

import math
import numpy as np

def yawpitchrolldecomposition(R):
    sin_x    = math.sqrt(R[2,0] * R[2,0] + R[2,1] * R[2,1])
    singular = sin_x < 1e-6
    if not singular:
        z1 = math.atan2(R[2,0], R[2,1])  # around z1-axis
        x  = math.atan2(sin_x,  R[2,2])  # around x-axis
        z2 = math.atan2(R[0,2], -R[1,2]) # around z2-axis
    else:  # gimbal lock
        z1 = 0                           # around z1-axis
        x  = math.atan2(sin_x,  R[2,2])  # around x-axis
        z2 = 0                           # around z2-axis

    return np.array([[z1], [x], [z2]])

yawpitchroll_angles = -180*yawpitchrolldecomposition(rmat)/math.pi
yawpitchroll_angles[0,0] = (360-yawpitchroll_angles[0,0])%360 # change rotation sense if needed, comment this line otherwise
yawpitchroll_angles[1,0] = yawpitchroll_angles[1,0]+90

That's all folks!

2017-06-29 14:25:05 -0600 asked a question decomposeProjectionMatrix leads to strange rotation matrix

I don't understand why this (decomposeProjectionMatrix) doesn't give the same rotation matrices as the input ones:

import cv2
import numpy as np
import math
def Rotx(angle): 
    Rx = np.array([[1,                0,                0],
                   [0,  math.cos(angle), -math.sin(angle)],
                   [0, +math.sin(angle),  math.cos(angle)]
                   ])
    return Rx

def Roty(angle): 
    Ry = np.array([[ math.cos(angle), 0, +math.sin(angle)],
                   [               0, 1,                0],
                   [-math.sin(angle), 0,  math.cos(angle)]
                   ])
    return Ry

def Rotz(angle): 
    Rz = np.array([[ math.cos(angle), -math.sin(angle), 0],
                   [+math.sin(angle),  math.cos(angle), 0],
                   [               0,                0, 1]
                   ])
    return Rz


ax=22
by=77
cz=11

ax = math.pi*ax/180
by = math.pi*by/180
cz = math.pi*cz/180

Rx = Rotx(ax)
Ry = Roty(by)
Rz = Rotz(cz)

Pxyz = np.zeros((3,4))
Rxyz = np.dot(Rx,np.dot(Ry,Rz))

Pxyz[:,:3] = Rxyz

decomposition = cv2.decomposeProjectionMatrix(Pxyz)

Then decomposition[3] is not equal to Rx, decomposition[4] is not equal to Ry, and decomposition[5] != Rz. But surprisingly, decomposition[1] is equal to Rxyz,
and Rdxyz = np.dot(decomposition[3], np.dot(decomposition[4], decomposition[5])) is not equal to Rxyz!

Do you know why?
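
One way to investigate (a sketch of mine, run after the code above): test every composition order of the returned factors, since the decomposition only guarantees the product in one particular order:

    from itertools import permutations

    factors = {'Qx': decomposition[3], 'Qy': decomposition[4], 'Qz': decomposition[5]}
    for names in permutations(factors):
        prod = np.dot(factors[names[0]], np.dot(factors[names[1]], factors[names[2]]))
        if np.allclose(prod, Rxyz, atol=1e-9):
            print('Rxyz == %s . %s . %s' % names)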

Update:

Another way to see it is the following: let's retrieve translation and rotation vectors from solvePnP:

retval, rvec,  tvec = cv2.solvePnP(obj_pts, img_pts, cam_mat, dist_coeffs, rvec, tvec, flags=cv2.SOLVEPNP_ITERATIVE)

Then, let's rebuild the rotation matrix from the rotation vector:

rmat        = cv2.Rodrigues(rvec)[0]


Projection matrix:

And finally create the projection matrix as P = [ R | t ], with an extra row [0, 0, 0, 1] to make it square:

P        = np.zeros((4,4))
P[:3,:3] = rmat
P[:3,3]  = tvec.T # need to transpose tvec in order to fit with destination shape! 
P[3,3]   = 1
print P
[[  6.08851883e-01   2.99048587e-01   7.34758006e-01  -4.75705058e+01]
 [  6.78339121e-01   2.83943605e-01  -6.77666634e-01  -3.24002911e+01]
 [ -4.11285086e-01   9.11013706e-01  -2.99767575e-02   2.24834560e+01]
 [  0.00000000e+00   0.00000000e+00   0.00000000e+00   1.00000000e+00]]

If I understand correctly, this matrix (does it have a name?), combined with the camera intrinsic parameter matrix, takes points from the world reference frame to the camera reference frame.

Checking:

This is easily checked by drawing the projected points on the original image:

    M = np.dot(cam_mat, P[:3])  # 3x4: intrinsics times [ R | t ]
    projected_points = np.array([np.dot(M, op) / np.dot(M, op)[2] for op in obj_pts])

where cam_mat is the intrinsic parameter matrix of the camera (basically the focal lengths in the first two diagonal elements and the center coordinates in the first two elements of the third column), and obj_pts is an array of point coordinates expressed in the world reference frame, in homogeneous coordinates, for example [10., 60., 0., 1.].
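
Concretely, the intrinsic matrix described above looks like this (fx, fy, cx, cy are placeholders for the focal lengths in pixels and the principal point):

    cam_mat = np.array([[fx, 0., cx],
                        [0., fy, cy],
                        [0., 0., 1.]])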

The projected points can then be drawn on the image:

    [cv2.circle(img, (int(p[0]), int(p[1])), 10, (0,0,255), -1) for p in projected_points]

It works well: the projected points land near the original points.


Inverse ...

(more)