2017-07-31 12:01:43 -0600 | received badge | ● Notable Question (source) |
2017-05-02 14:24:37 -0600 | received badge | ● Notable Question (source) |
2016-02-28 01:12:28 -0600 | received badge | ● Popular Question (source) |
2014-04-25 09:04:16 -0600 | received badge | ● Popular Question (source) |
2014-03-01 17:38:29 -0600 | received badge | ● Nice Answer (source) |
2013-08-11 10:26:13 -0600 | commented question | Pose estimation produces wrong translation vector Do you have a pair of cameras (stereo system) or just one camera? If you have a stereo camera it is a simple problem (it can be solved with solvePnP); however, if you only have one camera you are not going to be able to compute the scaled translation at each step, you will only know the direction as a unit vector. The problem you are describing is called visual odometry; there is a good tutorial online about it (google: visual odometry Davide Scaramuzza) |
2013-08-10 23:14:15 -0600 | commented question | Pose estimation produces wrong translation vector Can you tell me exactly what you need? What is the purpose of your program? |
2013-08-10 15:14:48 -0600 | commented question | Pose estimation produces wrong translation vector I had a similar problem. First, I installed the latest version of OpenCV (from GitHub); that version has a file called "five-point.cpp" that provides the findEssentialMat, decomposeEssentialMat and recoverPose functions, which could help you. Secondly, you might check the functions solvePnP for pose recovery and correctMatches for triangulation; those were very useful for me. |
2013-08-10 13:12:12 -0600 | commented question | Pose estimation produces wrong translation vector If you use the essential matrix to determine the pose of the camera, you are going to get a rotation matrix (3x3) and a translation vector (A UNIT VECTOR), so you will only know the direction. You need to scale that vector in order to get the right units. |
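To make the scaling step concrete, here is an illustrative pure-Python sketch (the numbers are made up, not from the original comment): the translation recovered from an essential-matrix decomposition has unit norm, and an external scale, such as a known metric baseline, is needed to get real-world units.

```python
import math

# Hypothetical unit translation direction, as returned by an
# essential-matrix decomposition (direction only, norm == 1).
t_unit = (0.6, 0.8, 0.0)
norm = math.sqrt(sum(c * c for c in t_unit))
print(abs(norm - 1.0) < 1e-12)  # True: only the direction is known

# Scale from external knowledge (e.g. wheel odometry, a known baseline);
# the 0.25 m value here is an assumption for illustration.
baseline_m = 0.25
t_metric = tuple(baseline_m * c for c in t_unit)
print(t_metric)
```

Without such an external measurement, the trajectory from a single camera is only recoverable up to an unknown global scale.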
2013-07-30 18:44:12 -0600 | answered a question | Real Time detection of texture less objects? Hi, I just found this video, Real-time Learning and Detection of 3D Texture-less Objects, and also the paper; it is quite recent and uses ROS. |
2013-07-26 10:10:49 -0600 | answered a question | How to use five-point.cpp five-point.cpp is the file that provides the functions to calculate the Essential matrix (a special case of the Fundamental matrix) using the five-point algorithm. The findEssentialMat function is similar to findFundamentalMat; the difference is that it requires the intrinsic parameters of your camera (obtained from calibration). You need to provide the following:
Example: |
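The code example from the original answer was not preserved in this log. As a substitute, here is a pure-Python sketch (no OpenCV, all numbers invented) of the relation findEssentialMat estimates: E = [t]_x R, which makes X2'EX1 = 0 for corresponding normalized image points.

```python
import math

def skew(t):
    """3x3 skew-symmetric matrix [t]_x such that [t]_x v = t x v."""
    tx, ty, tz = t
    return [[0.0, -tz, ty],
            [tz, 0.0, -tx],
            [-ty, tx, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

# Made-up camera motion: 10-degree rotation about z, translation along x.
a = math.radians(10.0)
R = [[math.cos(a), -math.sin(a), 0.0],
     [math.sin(a), math.cos(a), 0.0],
     [0.0, 0.0, 1.0]]
t = [1.0, 0.0, 0.0]
E = matmul(skew(t), R)  # essential matrix E = [t]_x R

# A 3D point seen in both views, expressed in normalized image coordinates.
P1 = [0.5, 0.2, 4.0]                               # camera-1 frame
P2 = [matvec(R, P1)[i] + t[i] for i in range(3)]   # camera-2 frame
X1 = [P1[0] / P1[2], P1[1] / P1[2], 1.0]
X2 = [P2[0] / P2[2], P2[1] / P2[2], 1.0]

# Epipolar constraint: X2^T E X1 should vanish for a true correspondence.
residual = sum(X2[i] * matvec(E, X1)[i] for i in range(3))
print(abs(residual) < 1e-9)
```

findEssentialMat solves the inverse problem: given at least five such correspondences (plus the intrinsics to normalize them), it recovers E.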
2013-06-01 20:57:58 -0600 | received badge | ● Necromancer (source) |
2013-05-31 18:35:03 -0600 | commented question | When does a feature in 2.4.9 become stable? You can verify the possible release dates here. And yes, the 2.4.9 version is unstable. |
2013-05-30 21:36:22 -0600 | commented question | When does a feature in 2.4.9 become stable? I'm already using OpenCV 2.4.9 (C++) in Ubuntu 12.10, and the findEssentialMat implementation works fine. There was an issue with the installation but I think it's already solved. Good luck. |
2013-05-28 11:46:36 -0600 | answered a question | 2D pixel coordinate to 3D world coordinate conversion Hi, you can check this website: Make3D. In this project they trained a machine learning algorithm that can estimate depth from just one image. Regards. |
2013-05-17 11:14:47 -0600 | answered a question | Good Calibration for Essential matrix estimation Hi, the epipolar constraint (as you mentioned) is x2'Fx1=0, where x2' is the coordinate of the point in the second image and x1 is the coordinate of the point in the first image. If you use the essential matrix, the epipolar constraint is X2'EX1=0, where X2' is the normalized coordinate of the point in the second image and X1 is the normalized coordinate of the point in the first image. You obtain the normalized coordinate with this: That's why I think you are getting a big number using the essential matrix. |
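The formula for the normalized coordinate did not survive in this log. As an illustration (the intrinsics below are invented, not from the original answer): X = K^-1 x, which for a zero-skew pinhole K reduces to subtracting the principal point and dividing by the focal length.

```python
# Hypothetical pinhole intrinsics (illustrative values only).
fx, fy = 800.0, 800.0   # focal lengths in pixels
cx, cy = 320.0, 240.0   # principal point in pixels

def normalize(u, v):
    # X = K^-1 * (u, v, 1)^T: for a zero-skew pinhole K this is just
    # shifting by the principal point and dividing by the focal length.
    return ((u - cx) / fx, (v - cy) / fy, 1.0)

X = normalize(400.0, 300.0)
print(X)  # roughly (0.1, 0.075, 1.0)
```

Plugging raw pixel coordinates (hundreds of pixels) into X2'EX1 instead of normalized ones inflates the residual by roughly the square of the focal length, which is why the "big number" appears.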
2013-05-17 10:49:08 -0600 | answered a question | How to obtain projection matrix? Hi, the projection matrix is defined as P = KT (matrix multiplication), where K => intrinsic parameters (camera parameters obtained by calibration) and T => extrinsic parameters (rotation matrix and translation vector [R|t]). You can see this in the docs page. |
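As a worked illustration of the composition above (all numbers below are hypothetical, not from the docs): build P = K[R|t] as a 3x4 matrix, then project a homogeneous world point to pixels.

```python
# Hypothetical intrinsics and pose (illustrative numbers only).
K = [[800.0, 0.0, 320.0],
     [0.0, 800.0, 240.0],
     [0.0, 0.0, 1.0]]
# [R|t]: identity rotation, camera translated 1 unit along x -> 3x4 matrix.
Rt = [[1.0, 0.0, 0.0, 1.0],
      [0.0, 1.0, 0.0, 0.0],
      [0.0, 0.0, 1.0, 0.0]]

# P = K * [R|t], a 3x4 projection matrix.
P = [[sum(K[i][k] * Rt[k][j] for k in range(3)) for j in range(4)]
     for i in range(3)]

# Project a homogeneous world point and dehomogenize to pixel coordinates.
Xw = [0.0, 0.0, 4.0, 1.0]
x = [sum(P[i][j] * Xw[j] for j in range(4)) for i in range(3)]
u, v = x[0] / x[2], x[1] / x[2]
print(u, v)  # 520.0 240.0
```

Note the unit translation along x shifts the projected point by fx * tx / Z = 800 / 4 = 200 pixels relative to the principal point's x coordinate.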
2013-05-16 20:20:19 -0600 | answered a question | Object recognition by edge (or corners) matching ? Hi, I just found this video, Real-time Learning and Detection of 3D Texture-less Objects, and also the paper; it is quite recent. |
2013-05-16 20:07:13 -0600 | commented question | Replacing SIFT by FREAK Hi, I would like to know if you could share your experience. Which combination was the best? As far as I know, the SIFT detector rejects corners because the SIFT descriptor works better with blobs, so the performance of SIFT detector + SIFT descriptor is higher than FAST + SIFT descriptor. |
2013-05-15 02:21:49 -0600 | commented answer | Unit of pose vectors from solvePnP() Can you paste your code and the data that you are using? I don't understand your doubt or what the problem is. |
2013-05-13 10:37:09 -0600 | received badge | ● Nice Answer (source) |
2013-05-13 08:10:55 -0600 | received badge | ● Teacher (source) |
2013-05-13 06:37:30 -0600 | answered a question | Unit of pose vectors from solvePnP() The rotation vector unit is radians, and the translation vector unit depends on the unit of the 3D points you use; it could be meters, yards, inches, etc. |
2013-05-08 01:00:13 -0600 | asked a question | error using StarFeatureDetector + GridAdaptedFeatureDetector Hi, I'm trying to use the Star detector with GridAdaptedFeatureDetector but it doesn't work, the detector returns zero points. My code is the following: The output is: 0. There are no problems using only the StarFeatureDetector, it gets about 400 points in the same image. I also tried with Ptr<FeatureDetector> detector = FeatureDetector::create("GridSTAR") but it does the same. GridAdaptedFeatureDetector works well when I use FAST; can someone explain what I am doing wrong? Thanks, Raúl |
2013-04-30 05:29:16 -0600 | received badge | ● Student (source) |
2013-04-30 03:12:13 -0600 | commented question | error using BRISK + GridAdaptedFeatureDetector Ptr<FeatureDetector> brisk = FeatureDetector::create("GridBRISK") doesn't cause an error. But how can I modify the threshold of the BRISK detector and also the number of cells in the grid? |
2013-04-28 20:23:27 -0600 | asked a question | error using BRISK + GridAdaptedFeatureDetector Hi, I'm trying to find keypoints using BRISK. I want the points to be well distributed across the image, so I use the BRISK detector + GridAdaptedFeatureDetector. The code is the following: The problem is that I'm getting this error: If I only use BRISK without the GridAdaptedFeatureDetector it works fine. Am I doing something wrong, or is it a bug? I don't know... Thanks, Raúl |
2013-04-23 18:08:19 -0600 | received badge | ● Supporter (source) |
2013-04-19 15:25:08 -0600 | asked a question | using useExtrinsicGuess in solvePnPRansac? Hi, I have doubts about the proper use of solvePnPRansac: I know the intrinsic parameters, the 3D points in the scene and their projections on an image; I also know the rotation vector (rvec) but I don't know the translation vector (tvec). How can I specify that I only want to use rvec as an extrinsic guess? Thanks, Raúl |
2013-04-17 02:21:49 -0600 | commented question | LevMarqSparse::bundleAdjust Hi, what kind of application are you developing? I would recommend using SSBA (Simple Sparse Bundle Adjustment). In the book "Mastering OpenCV with Practical Computer Vision Projects" there is a free chapter (4) that explains 3D reconstruction using SSBA with OpenCV. |
2013-04-14 20:53:49 -0600 | asked a question | How to use GenericDescriptorMatcher? Hi, I'm trying to use the Common Interfaces of Generic Descriptor Matchers but I have not figured out how to start yet. My problem is the following: I have two images with their respective keypoint sets, and I also know beforehand the Fundamental matrix that relates them. I want to match the keypoints using the epipolar constraint (p1'*Fundamental*p2 = 0); in other words, generate a vector< DMatch > that relates the points that satisfy the epipolar constraint. At this point I would like to use the interfaces that OpenCV provides in order to keep my code as generic as possible. However, there are no examples of how to use these tools. Can someone point me in the right direction? Or maybe I don't need these interfaces? Thanks in advance. PS: Sorry for my bad English. |
2013-03-20 16:23:53 -0600 | commented answer | Problems while including headers in OpenCV 2.4.9 Thanks, that's the reason. Now, I will try your recommendations |
2013-03-19 19:03:05 -0600 | asked a question | Problems while including headers in OpenCV 2.4.9 Hi, I am looking for an implementation of the five-point algorithm in order to determine the essential matrix for a real-time application. The present version of OpenCV (2.4.4) does not include that algorithm. However, looking for information about it on Google, I discovered that OpenCV 2.4.9 (available on GitHub) has an implementation of the five-point algorithm in opencv/modules/calib3d/src/five-point.cpp. So I downloaded the source, compiled it, and installed it without any problem using cmake and a sudo make install. My cmake configuration using cmake-gui: (more) |
2013-03-19 15:26:07 -0600 | commented answer | How to enable findEssentialMat in opencv 2.4.9? Thanks, I already corrected it |
2013-03-19 04:31:52 -0600 | received badge | ● Editor (source) |
2013-03-18 23:29:57 -0600 | asked a question | How to enable findEssentialMat in opencv 2.4.9? Hi, I've been trying to use the new function findEssentialMat() in OpenCV 2.4.9, but when I try to compile my program it says that findEssentialMat is not defined. I include calib3d and I also link the proper library. How should I compile OpenCV to enable the function? This is my program: When I try to compile it I receive the following message: Thanks in advance |