RaulPL's profile - activity

2017-07-31 12:01:43 -0600 received badge  Notable Question (source)
2017-05-02 14:24:37 -0600 received badge  Notable Question (source)
2016-02-28 01:12:28 -0600 received badge  Popular Question (source)
2014-04-25 09:04:16 -0600 received badge  Popular Question (source)
2014-03-01 17:38:29 -0600 received badge  Nice Answer (source)
2013-08-11 10:26:13 -0600 commented question Pose estimation produces wrong translation vector

Do you have a pair of cameras (a stereo system) or just one camera? With a stereo rig it is a simple problem (it can be solved with solvePnP); with a single camera, however, you will not be able to compute the scaled translation at each step, only its direction as a unit vector. The problem you are describing is called visual odometry; there is a good tutorial online about it (google: visual odometry Davide Scaramuzza).

2013-08-10 23:14:15 -0600 commented question Pose estimation produces wrong translation vector

Can you tell me exactly what you need? What is the purpose of your program?

2013-08-10 15:14:48 -0600 commented question Pose estimation produces wrong translation vector

I had a similar problem. First, I installed the latest version of OpenCV (from GitHub); that version has a file called "five-point.cpp" which provides the findEssentialMat, decomposeEssentialMat and recoverPose functions that could help you. Second, you might check the functions solvePnP for pose recovery and correctMatches for triangulation; those were very useful for me.

2013-08-10 13:12:12 -0600 commented question Pose estimation produces wrong translation vector

If you use the essential matrix to determine the poses of the camera you are going to get a rotation matrix (3x3) and a translation vector that is A UNIT VECTOR, so you will only know the direction. You need to scale that vector to get the translation in real units.

2013-07-30 18:44:12 -0600 answered a question Real Time detection of texture less objects?

Hi, I just found this video, Real-time Learning and Detection of 3D Texture-less Objects, and also the paper; it is quite recent and uses ROS.

2013-07-26 10:10:49 -0600 answered a question How to use five-point.cpp

five-point.cpp is the file that provides the functions to calculate the essential matrix (a special case of the fundamental matrix) using the five-point algorithm. The findEssentialMat function is similar to findFundamentalMat; the difference is that it requires the intrinsic parameters of your camera (obtained from calibration). You need to provide the following:

  1. p1: points of image 1 (vector<Point2f> or Mat)
  2. p2: points of image 2 (vector<Point2f> or Mat)
  3. focal: focal length (double); it is the element (0,0) of your intrinsic parameter matrix.
  4. pp: principal point (Point2d), built from the elements (0,2) and (1,2) of the intrinsic parameter matrix.
  5. method: in this case it can be RANSAC or LMeDS (similar to findFundamentalMat)
  6. probability of success: usually 0.99
  7. error: the threshold used by RANSAC to decide whether a correspondence is an inlier or an outlier
  8. output: a mask of ones and zeros indicating which correspondences are inliers (one) and which are outliers (zero).

Example:

     Mat essential = findEssentialMat(p1,p2,focal,pp,RANSAC,0.99,1,output);
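
A slightly more complete sketch of the call (the focal length and principal point here are illustrative; use the values from your own calibration):

     #include <opencv2/opencv.hpp>
     using namespace cv;
     using namespace std;

     vector<Point2f> p1, p2;       // matched points from image 1 and image 2
     // ... fill p1 and p2 with your correspondences ...
     double focal = 700.0;         // element (0,0) of K (illustrative)
     Point2d pp(320.0, 240.0);     // elements (0,2) and (1,2) of K (illustrative)
     Mat output;                   // mask: 1 = inlier, 0 = outlier
     Mat essential = findEssentialMat(p1, p2, focal, pp, RANSAC, 0.99, 1.0, output);
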
2013-06-01 20:57:58 -0600 received badge  Necromancer (source)
2013-05-31 18:35:03 -0600 commented question When does a feature in 2.4.9 become stable?

You can check the possible release dates here. And yes, version 2.4.9 is unstable.

2013-05-30 21:36:22 -0600 commented question When does a feature in 2.4.9 become stable?

I'm already using OpenCV 2.4.9 (C++) on Ubuntu 12.10, and the findEssentialMat implementation works fine. There was an issue with the installation but I think it's already solved. Good luck.

2013-05-28 11:46:36 -0600 answered a question 2D pixel coordinate to 3D world coordinate conversion

Hi, you can check this website: Make3D. In this project they trained a machine-learning algorithm that can estimate depth from just one image.

Regards.

2013-05-17 11:14:47 -0600 answered a question Good Calibration for Essential matrix estimation

Hi, the epipolar constraint (as you mentioned) is x2'Fx1 = 0, where x2 is the coordinate of the point in the second image, x1 is the coordinate of the point in the first image, and the apostrophe denotes the transpose.

If you use the essential matrix the epipolar constraint is X2'EX1 = 0, where X2 is the normalized coordinate of the point in the second image and X1 is the normalized coordinate of the point in the first image.

You obtain the normalized coordinates like this:

 X = inverse(K)*x

That's why I think you are getting a big number when you use the essential matrix.
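
In OpenCV that normalization looks like this (a minimal sketch; the values of K are illustrative):

     // K from calibration (illustrative values)
     Mat K = (Mat_<double>(3,3) << 700,   0, 320,
                                     0, 700, 240,
                                     0,   0,   1);
     // pixel coordinate x in homogeneous form (u, v, 1)
     Mat x = (Mat_<double>(3,1) << 100, 50, 1);
     Mat X = K.inv() * x;   // normalized coordinate, used in X2'EX1 = 0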

2013-05-17 10:49:08 -0600 answered a question How to obtain projection matrix?

Hi, the projection matrix is defined as P = KT (matrix multiplication)

where K => intrinsic parameters (camera parameters obtained by calibration)

     [fx, 0, cx;
 K =  0, fy, cy;
      0,  0,  1]

and T => extrinsic parameters (rotation matrix and translation vector [R|t] )

     [r11, r12, r13, t1;
 T =  r21, r22, r23, t2;
      r31, r32, r33, t3]

You can see this in the docs page.
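
For instance, a minimal sketch of building P in OpenCV (R, t and K would come from your pose estimation and calibration; the values below are only placeholders):

     Mat K = (Mat_<double>(3,3) << 700, 0, 320,  0, 700, 240,  0, 0, 1);
     Mat R = Mat::eye(3, 3, CV_64F);     // placeholder rotation
     Mat t = Mat::zeros(3, 1, CV_64F);   // placeholder translation
     Mat T;
     hconcat(R, t, T);                   // T = [R|t], a 3x4 matrix
     Mat P = K * T;                      // projection matrix P = KT, 3x4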

2013-05-16 20:20:19 -0600 answered a question Object recognition by edge (or corners) matching ?

Hi, I just found this video, Real-time Learning and Detection of 3D Texture-less Objects, and also the paper; it is quite recent.

2013-05-16 20:07:13 -0600 commented question Replacing SIFT by FREAK

Hi, I would like to know if you could share your experience. Which combination was the best? As far as I know, the SIFT detector rejects corners because the SIFT descriptor works better with blobs, so the performance of SIFT detector + SIFT descriptor should be higher than FAST detector + SIFT descriptor.

2013-05-15 02:21:49 -0600 commented answer Unit of pose vectors from solvePnP()

Can you paste your code and the data you are using? I don't understand what the problem is.

2013-05-13 10:37:09 -0600 received badge  Nice Answer (source)
2013-05-13 08:10:55 -0600 received badge  Teacher (source)
2013-05-13 06:37:30 -0600 answered a question Unit of pose vectors from solvePnP()

The rotation vector is in radians, and the unit of the translation vector depends on the unit of the 3D points you used; it could be meters, yards, inches, etc.
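
Note that the rotation vector is in axis-angle (Rodrigues) form: its direction is the rotation axis and its norm is the angle in radians. A small sketch to convert it:

     Mat R;
     Rodrigues(rvec, R);           // rvec from solvePnP -> 3x3 rotation matrix
     double angle = norm(rvec);    // rotation angle in radians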

2013-05-08 01:00:13 -0600 asked a question error using StarFeatureDetector + GridAdaptedFeatureDetector

Hi, I'm trying to use the Star detector with GridAdaptedFeatureDetector but it doesn't work; the detector returns zero points. My code is the following:

#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

vector<KeyPoint> kp0;
Mat framek0 = imread("/home/raul/workspace/im15.png", 0);  // load as grayscale
Ptr<FeatureDetector> star = new StarFeatureDetector(16, 5, 10, 8, 5);
Ptr<FeatureDetector> detector = new GridAdaptedFeatureDetector(star, 1024, 16, 16);
detector->detect(framek0, kp0);
cout << kp0.size() << endl;

The output is 0.

There are no problems using the StarFeatureDetector alone; it finds about 400 points in the same image.

I also tried Ptr<FeatureDetector> detector = FeatureDetector::create("GridSTAR"), but it does the same. GridAdaptedFeatureDetector works well when I use FAST; can someone explain what I am doing wrong?

Thanks, Raúl

2013-04-30 05:29:16 -0600 received badge  Student (source)
2013-04-30 03:12:13 -0600 commented question error using BRISK + GridAdaptedFeatureDetector

Ptr<FeatureDetector> brisk = FeatureDetector::create("GridBRISK") doesn't cause an error. But how can I modify the threshold of the BRISK detector and also the number of cells in the grid?

2013-04-28 20:23:27 -0600 asked a question error using BRISK + GridAdaptedFeatureDetector

Hi, I'm trying to find keypoints using BRISK. I want the points to be well distributed across the image, so I use the BRISK detector + GridAdaptedFeatureDetector. The code is the following:

#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

vector<KeyPoint> kp0, kp1;
Mat framek  = imread("/home/raul/Escritorio/fotosPSE2/im190.png", 0);  // grayscale
Mat framek2 = imread("/home/raul/Escritorio/fotosPSE2/im202.png", 0);
Ptr<FeatureDetector> brisk = FeatureDetector::create("BRISK");
Ptr<FeatureDetector> detector = new GridAdaptedFeatureDetector(brisk, 300, 6, 8);
detector->detect(framek, kp0, Mat());
detector->detect(framek2, kp1, Mat());

The problem is that I'm getting this error:

OpenCV Error: Bad argument (No parameter 'threshold' is found) in set, file /home/raul/OpenCV/OpenCV-2.4.9/modules/core/src/algorithm.cpp, line 619
OpenCV Error: Bad argument (No parameter 'threshold' is found) in set, file /home/raul/OpenCV/OpenCV-2.4.9/modules/core/src/algorithm.cpp, line 619
OpenCV Error: Bad argument (No parameter 'threshold' is found) in set, file /home/raul/OpenCV/OpenCV-2.4.9/modules/core/src/algorithm.cpp, line 619
terminate called after throwing an instance of 'tbb::captured_exception'
what():  /home/raul/OpenCV/OpenCV-2.4.9/modules/core/src/algorithm.cpp:619: error: (-5) No parameter 'threshold' is found in function set

If I use BRISK alone, without the GridAdaptedFeatureDetector, it works fine.

Am I doing something wrong, or is it a bug? I don't know...

Thanks, Raúl

2013-04-23 18:08:19 -0600 received badge  Supporter (source)
2013-04-19 15:25:08 -0600 asked a question using useExtrinsicGuess in solvePnPRansac?

Hi,

I have a question about the proper use of solvePnPRansac: I know the intrinsic parameters, the 3D points in the scene and their projections in an image; I also know the rotation vector (rvec), but I don't know the translation vector (tvec). How can I specify that I only want to use rvec as the extrinsic guess?
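
This is a sketch of what I am trying now; I initialize tvec to zero because I have no guess for it, which I suspect is not the right way:

     vector<Point3f> objectPoints;   // known 3D points
     vector<Point2f> imagePoints;    // their known projections
     Mat K, distCoeffs;              // known intrinsic parameters
     Mat rvec;                       // known rotation vector
     Mat tvec = Mat::zeros(3, 1, CV_64F);   // unknown, so initialized to zero
     solvePnPRansac(objectPoints, imagePoints, K, distCoeffs,
                    rvec, tvec, true);      // useExtrinsicGuess = true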

Thanks, Raúl

2013-04-17 02:21:49 -0600 commented question LevMarqSparse::bundleAdjust

Hi, what kind of application are you developing? I would recommend SSBA (Simple Sparse Bundle Adjustment). In the book "Mastering OpenCV with Practical Computer Vision Projects" there is a free chapter (4) that explains 3D reconstruction using SSBA with OpenCV.

2013-04-14 20:53:49 -0600 asked a question How to use GenericDescriptorMatcher?

Hi

I'm trying to use the Common Interfaces of Generic Descriptor Matchers, but I have not figured out how to start yet. My problem is the following: I have two images with their respective keypoint sets, and I also know beforehand the fundamental matrix that relates them. I want to match the keypoints using the epipolar constraint (p1'*Fundamental*p2 = 0); in other words, generate a vector<DMatch> that relates the points that satisfy the epipolar constraint. I would like to use the interfaces that OpenCV provides in order to keep my code as generic as possible. However, there are no examples of how to use these tools. Can someone point me in the right direction? Or maybe I don't need these interfaces?
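
To make it concrete, here is a minimal sketch of the brute-force filtering I want to express through the generic interface (the threshold value is only illustrative):

     // keypoints1, keypoints2 and the fundamental matrix F (CV_64F) are given
     vector<DMatch> matches;
     for (size_t i = 0; i < keypoints1.size(); i++) {
         Mat q1 = (Mat_<double>(3,1) << keypoints1[i].pt.x, keypoints1[i].pt.y, 1);
         for (size_t j = 0; j < keypoints2.size(); j++) {
             Mat q2 = (Mat_<double>(3,1) << keypoints2[j].pt.x, keypoints2[j].pt.y, 1);
             Mat res = q1.t() * F * q2;                 // epipolar residual p1'*F*p2
             double residual = std::abs(res.at<double>(0, 0));
             if (residual < 0.1)                        // illustrative threshold
                 matches.push_back(DMatch((int)i, (int)j, (float)residual));
         }
     }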

Thanks in advance.

PS: Sorry for my bad English.

2013-03-20 16:23:53 -0600 commented answer Problems while including headers in OpenCV 2.4.9

Thanks, that's the reason. Now I will try your recommendations.

2013-03-19 19:03:05 -0600 asked a question Problems while including headers in OpenCV 2.4.9

Hi, I am looking for an implementation of the five-point algorithm in order to determine the essential matrix for a real-time application. The current version of OpenCV (2.4.4) does not include that algorithm. However, while searching for information about it on Google I discovered that OpenCV 2.4.9 (available on GitHub) has an implementation of the five-point algorithm in opencv/modules/calib3d/src/five-point.cpp. So I downloaded the source, compiled it, and installed it without any problem using cmake and sudo make install.

My cmake configuration using cmake-gui:

General configuration for OpenCV 2.4.9 =====================================
Version control:               2.4.4-687-g87563c6

Platform:
Host:                        Linux 3.5.0-26-generic x86_64
CMake:                       2.8.9
CMake generator:             Unix Makefiles
CMake build tool:            /usr/bin/make
Configuration:               Release

C/C++:
Built as dynamic libs?:      YES
C++ Compiler:                /usr/bin/c++  (ver 4.7.2)
C++ flags (Release):         -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-unnamed-type-template-args -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -msse -msse2 -mavx -ffunction-sections -O3 -DNDEBUG  -DNDEBUG
C++ flags (Debug):           -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-unnamed-type-template-args -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -msse -msse2 -mavx -ffunction-sections -g  -O0 -DDEBUG -D_DEBUG -ggdb3
C Compiler:                  /usr/bin/gcc
C flags (Release):           -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-unnamed-type-template-args -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -msse -msse2 -mavx -ffunction-sections -O3 -DNDEBUG  -DNDEBUG
C flags (Debug):             -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-unnamed-type-template-args -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -msse -msse2 -mavx -ffunction-sections -g  -O0 -DDEBUG -D_DEBUG -ggdb3
Linker flags (Release):      
Linker flags (Debug):        
Precompiled headers:         YES

OpenCV modules:
To be built:                 core imgproc flann highgui features2d calib3d ml video objdetect contrib legacy nonfree photo ts videostab
Disabled:                    gpu java softcascade stitching world
Disabled by dependency:      python(deps: softcascade)
Unavailable:                 androidcamera ocl

GUI: 
QT 4.x:                      YES (ver 4.8.3 EDITION = OpenSource)
QT OpenGL support:           YES (/usr/lib/x86_64-linux-gnu/libQtOpenGL.so)
OpenGL support:              YES (/usr/lib/x86_64-linux-gnu/libGLU.so /usr/lib/x86_64-linux-gnu/libGL.so /usr/lib/x86_64-linux-gnu/libSM.so /usr/lib/x86_64-linux-gnu/libICE.so /usr/lib/x86_64-linux-gnu/libX11.so /usr/lib/x86_64-linux-gnu/libXext.so)

Media I/O: 
ZLib:                        /usr/lib/x86_64-linux-gnu/libz.so (ver 1.2.7)
JPEG:                        /usr/lib/x86_64-linux-gnu/libjpeg.so (ver )
WEBP:                        build (ver 0.2.1)
PNG:                         /usr/lib/x86_64-linux-gnu/libpng.so (ver 1.2.49)
TIFF:                        /usr/lib/x86_64-linux-gnu/libtiff.so (ver 42 - 4.0.2)
JPEG 2000:                   /usr/lib/x86_64-linux-gnu/libjasper.so (ver 1.900.1)
OpenEXR:                     /usr/lib/libImath.so /usr/lib/libIlmImf.so /usr/lib/libIex.so /usr/lib/libHalf.so /usr/lib/libIlmThread.so (ver 1.6.1)

Video I/O:
DC1394 1.x:                  NO
DC1394 2.x:                  YES (ver 2.2.0)
FFMPEG:                      YES ...
(more)
2013-03-19 15:26:07 -0600 commented answer How to enable findEssentialMat in opencv 2.4.9?

Thanks, I have already corrected it.

2013-03-19 04:31:52 -0600 received badge  Editor (source)
2013-03-18 23:29:57 -0600 asked a question How to enable findEssentialMat in opencv 2.4.9?

Hi, I've been trying to use the new findEssentialMat() function in OpenCV 2.4.9, but when I try to compile my program it says that findEssentialMat is not defined. I include calib3d and I also link the proper library.

How should I compile OpenCV to enable the function?

This is my program:

#include "opencv2/opencv.hpp"
using namespace std;
using namespace cv;

Mat getEssential(const vector<KeyPoint>& keypoints1,const vector<KeyPoint>& keypoints2,vector<DMatch>& matches){
vector<Point2f> p1, p2;
for (vector<DMatch>::const_iterator it= matches.begin();it!= matches.end(); ++it) {
    float x=keypoints1[it->queryIdx].pt.x;
    float y=keypoints1[it->queryIdx].pt.y;
    p1.push_back(Point2f(x,y));
    x=keypoints2[it->trainIdx].pt.x;
    y=keypoints2[it->trainIdx].pt.y;
    p2.push_back(Point2f(x,y));
}
Mat output;
Mat essen = findEssentialMat(p1,p2,focal,pp,CV_RANSAC,0.99,1,output);
vector<DMatch> inliers;
for(int i=0;i<output.rows;i++){
    int status=output.at<char>(i,0);
    if(status==1){
        inliers.push_back(matches[i]);
    }
}
matches=inliers;
return essen;
}

int main(){
Ptr<FeatureDetector> fast = new FastFeatureDetector(10,true);
Ptr<FeatureDetector> detector = new PyramidAdaptedFeatureDetector(fast,3);
FREAK freak(true,true,22.0f,0);
BFMatcher matcher(NORM_HAMMING,true);
vector<DMatch> matches;

vector<KeyPoint> kp0,kp1;
Mat d0, d1;

Mat im0 = imread("/home/Chini/im0.png",0);
Mat im1 = imread("/home/Chini/im1.png",0);
detector->detect(im0,kp0,Mat());
detector->detect(im1,kp0,Mat());
freak.compute(im0,kp0,d0);
freak.compute(im1,kp1,d1);
matcher.match(d0,d1,matches);
Mat e = getEssential(kp0,kp1,matches);
}

When I try to compile it I receive the following message:

example.cpp: In function ‘cv::Mat getEssential(const std::vector<cv::KeyPoint>&, const    std::vector<cv::KeyPoint>&, std::vector<cv::DMatch>&)’:
example.cpp:18:62: error: ‘findEssentialMat’ is not defined

Thanks in advance