
ale_Xompi's profile - activity

2019-06-27 08:43:25 -0600 received badge  Famous Question (source)
2018-07-09 15:10:41 -0600 received badge  Notable Question (source)
2018-01-14 14:45:43 -0600 received badge  Popular Question (source)
2017-01-07 05:29:53 -0600 received badge  Enthusiast
2017-01-02 11:46:42 -0600 asked a question Why does a 3D point projected to 2D not fall within the image frame?

I am trying to generate key-points in a set of views given a number of 3D points, the extrinsic parameters of each view, and the intrinsic parameters of the camera. However, I noticed that the projected points do not lie within the frame boundaries (e.g. 640 x 480). This is the data I am using:

  • Camera pose (1st view) as a 6D vector (orientation + position in world coordinate frame): [-90 90 0 | 80 45 45];
  • 3D point: [50 50 40]
  • Image: 640x480 px
  • Focal length: 30 mm
  • Sensor size: 22x16 mm (Sw x Sh)

Thus, the camera matrix (intrinsic parameters) becomes: K = [f*Iw/Sw 0 Iw/2; 0 f*Ih/Sh Ih/2; 0 0 1], where Iw, Ih are the image width and height and Sw, Sh the sensor width and height.
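
As a sanity check on those numbers, here is a minimal sketch (my own variable names, using only the values listed above) that builds this K with OpenCV:

    #include <opencv2/core.hpp>
    #include <iostream>

    int main() {
        const double f  = 30.0;                  // focal length [mm]
        const double Sw = 22.0,  Sh = 16.0;      // sensor width/height [mm]
        const double Iw = 640.0, Ih = 480.0;     // image width/height [px]

        // K = [f*Iw/Sw 0 Iw/2; 0 f*Ih/Sh Ih/2; 0 0 1]
        cv::Mat K = (cv::Mat_<double>(3, 3) <<
            f * Iw / Sw, 0.0,         Iw / 2.0,
            0.0,         f * Ih / Sh, Ih / 2.0,
            0.0,         0.0,         1.0);

        std::cout << K << std::endl;   // fx ~ 872.7 px, fy = 900 px, c = (320, 240)
        return 0;
    }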

When applying the formula for the pinhole camera model:

x = K * R * [I | t] * M

where R and t are the rotation matrix and the translation vector derived from the camera pose, and M is the 3D point in homogeneous coordinates. With this I cannot obtain a point within the frame size (i.e. 640 x 480). Please note that x is already divided by its third component to obtain a 2D point.

Do you have any idea why the projection does not work? When visualizing the point and the camera, I am already sure the point is in front of the camera in the world coordinate system.

Can you confirm that the identity matrix for the rotation corresponds to the camera looking upwards, please?

I also tried the function cv::projectPoints() to verify, but it returns an error, probably connected with the fact that the depth of the point is not positive in the camera coordinate system.
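
For what it's worth, below is a hedged sketch (my own variable names; R_wc is left as an identity placeholder because the Euler-angle convention behind [-90 90 0] is not specified) of how I would turn a camera pose given as orientation + position in the world frame into the rvec/tvec that cv::projectPoints() expects, i.e. the world-to-camera transform:

    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>
    #include <vector>

    int main() {
        // Intrinsics K as built above.
        cv::Mat K = (cv::Mat_<double>(3, 3) << 872.7, 0, 320, 0, 900, 240, 0, 0, 1);

        // ASSUMPTION: R_wc is the camera-to-world rotation obtained from the
        // pose angles [-90 90 0] with your Euler convention (identity here as
        // a placeholder); C is the camera centre in world coordinates.
        cv::Mat R_wc = cv::Mat::eye(3, 3, CV_64F);
        cv::Mat C = (cv::Mat_<double>(3, 1) << 80, 45, 45);

        // cv::projectPoints() wants the world-to-camera transform.
        cv::Mat R_cw = R_wc.t();       // inverse of a rotation = its transpose
        cv::Mat tvec = -R_cw * C;      // t = -R * C
        cv::Mat rvec;
        cv::Rodrigues(R_cw, rvec);     // 3x3 rotation -> Rodrigues vector

        std::vector<cv::Point3d> object{ {50, 50, 40} };   // the 3D point
        std::vector<cv::Point2d> image;
        cv::projectPoints(object, rvec, tvec, K, cv::noArray(), image);
        // image[0] should land inside 640x480 once the pose convention is
        // right and the point has positive depth in the camera frame.
        return 0;
    }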

2017-01-02 10:47:56 -0600 commented answer Is cv::triangulatePoints() returning 3D points in world coordinate system?

Thank you @Eduardo for your answer. cv::triangulatePoints() does not require a stereo pair with a fronto-parallel view (i.e. binocular); you can apply the function to any pair of cameras given the intrinsic and extrinsic parameters of each view and the feature points of each image. Of course, there are some degenerate configurations, and forward motion is the one that generates the most uncertainty in the 3D points. However, after re-reading Hartley and Zisserman's Multiple View Geometry, Hartley and Sturm's paper "Triangulation", and doing some experiments in both MATLAB and OpenCV, I can confirm that triangulated points are returned in the world coordinate frame. Of course, if you fix the first view as the origin (of the world as well), then the 3D points are given w.r.t. the first camera, as you said.

2016-12-14 08:55:26 -0600 received badge  Editor (source)
2016-12-14 08:54:28 -0600 asked a question Is cv::triangulatePoints() returning 3D points in world coordinate system?

Considering a moving camera with a fixed calibration matrix (intrinsic parameters), I am triangulating tracked feature points from two views that are not consecutive. The view poses are expressed in the camera coordinate system, and the images are undistorted before features are detected and tracked.

Can you please confirm whether the triangulated points are in the world coordinate system after applying the cv::triangulatePoints() and cv::convertPointsFromHomogeneous() functions?
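
For reference, a minimal sketch of the pipeline I mean (all numeric values are placeholders; the point of the sketch is that the triangulated points come out in whatever frame the extrinsics of P1 and P2 are expressed in):

    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>
    #include <iostream>

    int main() {
        // Intrinsics (placeholder values).
        cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320, 0, 800, 240, 0, 0, 1);

        // World-to-camera extrinsics [R|t] of the two views (placeholders).
        // If these are expressed w.r.t. the world frame, the triangulated
        // points are in the world frame; if the first view is taken as the
        // origin, they are w.r.t. the first camera.
        cv::Mat Rt1 = (cv::Mat_<double>(3, 4) << 1,0,0,0,    0,1,0,0, 0,0,1,0);
        cv::Mat Rt2 = (cv::Mat_<double>(3, 4) << 1,0,0,-0.1, 0,1,0,0, 0,0,1,0);
        cv::Mat P1 = K * Rt1, P2 = K * Rt2;

        // Matched, already-undistorted feature points in each image (2xN).
        cv::Mat pts1 = (cv::Mat_<double>(2, 1) << 330.0, 240.0);
        cv::Mat pts2 = (cv::Mat_<double>(2, 1) << 310.0, 240.0);

        cv::Mat points4D, points3D;
        cv::triangulatePoints(P1, P2, pts1, pts2, points4D);        // 4xN homogeneous
        cv::convertPointsFromHomogeneous(points4D.t(), points3D);   // Euclidean 3D points
        std::cout << points3D << std::endl;
        return 0;
    }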

2016-07-28 09:38:06 -0600 commented answer GoPRO focal length in videos

So, this means that once you have calibrated a GoPRO camera (either Hero3+ or Hero4), and provided you don't change the settings (resolution and FOV), you can use the intrinsic parameters for all videos recorded later. Am I correct?

2016-07-28 05:04:42 -0600 asked a question GoPRO focal length in videos

Hi all,

I would like to know whether the focal length varies when recording videos. Because of auto-focus, I believe the focal length may change from one recorded video to another, but I am not sure how much this change can affect 3-D re-projections. Besides, I also don't know whether the focal length remains fixed during a recording or whether it can still change.

Please let me know something about that. Thanks.

2016-04-25 10:17:28 -0600 commented question findEssentialMat for coplanar points

I'm facing a similar problem with this function. My guess is that there is a problem with the RANSAC optimization, but I hope someone can provide an answer.

2016-01-27 10:27:41 -0600 commented question Wrong Hamming norm value between two ORB descriptors

Thank you for your answer. Yes, you're right. That means the range of the Hamming distance between two ORB descriptors is [0,256], right?

2016-01-27 06:27:09 -0600 asked a question Wrong Hamming norm value between two ORB descriptors

I'm currently working on feature point tracking and I want to compare the ORB descriptor of the current point with that of the previous one. I want to discard the point if the Hamming distance between the two descriptors is higher than a specific threshold.

When I run hamm_dist = norm(pointDescr1, pointDescr2, NORM_HAMMING), the returned value is sometimes greater than 32 and I can't understand why. I thought it should be between 0 and 32.

Please let me know if you have an explanation. Thank you in advance.
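
For reference, a small sketch (hypothetical descriptor values) of why values above 32 are expected: an ORB descriptor is 32 bytes, i.e. 256 bits, and NORM_HAMMING counts differing bits, so the distance lies in [0, 256] rather than [0, 32]:

    #include <opencv2/core.hpp>
    #include <iostream>

    int main() {
        // Two hypothetical ORB descriptors: 1x32 rows of bytes (32 x 8 = 256 bits).
        cv::Mat d1 = cv::Mat::zeros(1, 32, CV_8U);
        cv::Mat d2 = cv::Mat::ones(1, 32, CV_8U) * 255;   // every bit differs

        // NORM_HAMMING counts the differing bits over all 32 bytes.
        double dist = cv::norm(d1, d2, cv::NORM_HAMMING);
        std::cout << dist << std::endl;                    // prints 256
        return 0;
    }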

2016-01-07 06:49:21 -0600 commented answer opencv 3.0 build error

I tried to install OpenCV 3.0.0-rc1 on Ubuntu 14.04 LTS 64-bit and I faced this problem. I noticed that only the eigen2 library was present in /usr/include/. So, I installed libeigen3-dev from the command line and then the build completed correctly. (http://packages.ubuntu.com/source/tru...)

2016-01-07 06:49:20 -0600 commented question Rotation Matrix

I suppose you're using OpenCV 3.0.x, right? According to the documentation, the resulting vector is a Rodrigues rotation vector, so you should be able to reconstruct the rotation matrix with the Rodrigues function. I suggest trying with a different camera and checking whether the 3x3 matrix is still symmetric. Regarding the Euler angles, the decomposeProjectionMatrix function optionally returns them when given a projection matrix as input. Otherwise, I again suggest having a look at the following websites/papers for defining your conventions and deriving the formulas to extract the angles: http://www.staff.city.ac.uk/~sbbh653/... , http://planning.cs.uiuc.edu/node101.html , https://en.wikipedia.org/wiki/Euler_a..., https://en.wikipedia.org/wiki/Rotatio....
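
A brief sketch of both suggestions (the Rodrigues vector and the projection matrix below are placeholders):

    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>
    #include <iostream>

    int main() {
        // 1) Rodrigues vector -> 3x3 rotation matrix.
        cv::Mat rvec = (cv::Mat_<double>(3, 1) << 0.1, -0.2, 0.3);
        cv::Mat R;
        cv::Rodrigues(rvec, R);

        // 2) Euler angles (in degrees) from a 3x4 projection matrix P = K[R|t].
        cv::Mat P = (cv::Mat_<double>(3, 4) <<
            800, 0, 320, 0,
            0, 800, 240, 0,
            0,   0,   1, 0);
        cv::Mat K, Rot, t, eulerAngles;
        cv::decomposeProjectionMatrix(P, K, Rot, t,
                                      cv::noArray(), cv::noArray(), cv::noArray(),
                                      eulerAngles);
        std::cout << R << std::endl << eulerAngles << std::endl;
        return 0;
    }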