
glethien's profile - activity

2017-08-31 18:25:15 -0500 received badge  Popular Question (source)
2014-05-13 07:56:19 -0500 commented question Bundle Adjustment

So the only way is to rewrite the OpenCV Java Android app in C++ using the NDK and then include SSBA-3.0?

2014-05-13 04:28:32 -0500 asked a question Bundle Adjustment


I am wondering if the OpenCV implementation for Java contains some bundle adjustment. If not, do you have any tips on which library I can use? I've googled a lot and only found BoofCV and JavaCV, but it seems those libraries are not very easy to use with OpenCV data.


2014-05-07 06:16:26 -0500 asked a question SVD with a different solution than WolframAlpha

I've found out that the results of the SVDecomp function in Java are very different from the results of WolframAlpha.

The input matrix is exactly the same for OpenCV and WolframAlpha:

{{0.2229632566816983, 18.15370964847313, 4.87085706173828},
{-14.31728552253419, 2.642676839378287, -33.69501515553716},
{-2.982323803144884, 33.70091859922499, 0.8997452211463326}}

Here are the results from WolframAlpha:

U = (-0.441818862735368 | 0.214800119324567 | -0.871009185525260
-0.245069575462508 | -0.962880608842737 | -0.113145200062862
-0.862981457340684 | 0.163468167704881 | 0.478059789601005)
W = (38.5925763913943 | 0 | 0
0 | 36.8337256561100 | 0
0 | 0 | 3.76859638821616×10^-10)
V = (0.155053443270976 | 0.362336795687042 | 0.919059560758203
-0.978207790691182 | 0.186347267503429 | 0.0915653543928191
0.138086740713550 | 0.913228745925823 | -0.383334461865688)

And here is what OpenCV produces when using SVDecomp:

U: [0.4418188627353685, 0.2148001193245664, -0.8710091855252606;
    0.2450695754625076, -0.9628806088427376, -0.113145200062862;
    0.8629814573406845, 0.1634681677048805, 0.4780597896010051]
W: [38.59257639139431; 36.83372565611004; 3.768597946996713e-10]
VT: [-0.155053443270976, 0.3623367956870423, 0.9190595607582029;
     0.9782077906911818, 0.1863472675034285, 0.09156535439281914;
     -0.1380867407135498, 0.9132287459258235, -0.3833344618656882]

Note that OpenCV returns W as a 3x1 vector of singular values rather than a diagonal matrix, returns V already transposed (VT), and the signs of some values differ from WolframAlpha's result.

Is this a bug? Here is my source code:

Core.SVDecomp(E, w, u, vt);
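
For what it's worth, the sign differences alone do not indicate a bug: an SVD is only unique up to flipping the sign of a column of U together with the corresponding column of V (i.e. the corresponding row of VT). Here is a plain-Java sketch of that sign ambiguity on toy values (not the matrix from the question, and deliberately without OpenCV):

```java
public class SvdSignAmbiguity {
    // Product of two 3x3 matrices.
    static double[][] mul(double[][] a, double[][] b) {
        double[][] r = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    r[i][j] += a[i][k] * b[k][j];
        return r;
    }

    public static void main(String[] args) {
        // Toy decomposition A = U * diag(w) * V^T with orthogonal U, V.
        double[][] U  = {{0.6, -0.8, 0}, {0.8, 0.6, 0}, {0, 0, 1}};
        double[][] W  = {{5, 0, 0}, {0, 2, 0}, {0, 0, 1}};
        double[][] Vt = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        double[][] A1 = mul(mul(U, W), Vt);

        // Flip the sign of column 0 of U and of row 0 of V^T together:
        // the reconstructed matrix is identical, so both factorizations
        // are valid SVDs of the same A.
        double[][] U2  = {{-0.6, -0.8, 0}, {-0.8, 0.6, 0}, {0, 0, 1}};
        double[][] Vt2 = {{-1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        double[][] A2 = mul(mul(U2, W), Vt2);

        double maxDiff = 0;
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                maxDiff = Math.max(maxDiff, Math.abs(A1[i][j] - A2[i][j]));
        System.out.println(maxDiff == 0.0);
    }
}
```

So as long as U and VT flip consistently, both answers describe the same decomposition.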
2014-05-07 04:43:49 -0500 asked a question Core.gemm for MatrixMultiplication

I want to do a matrix multiplication with OpenCV in Java.

As Core.multiply is not a "right" matrix multiplication:

multiply(Mat src1, Mat src2, Mat dst, double scale)

Calculates the per-element scaled product of two arrays.

I am confused about how to perform a "normal" matrix multiplication. Core.gemm seems to solve the problem:

gemm(Mat src1, Mat src2, double alpha, Mat src3, double beta, Mat dst)

Performs generalized matrix multiplication.

Gemm uses this formula for the multiplication

dst = alpha*src1.t()*src2 + beta*src3.t();  // with the GEMM_1_T and GEMM_3_T flags set

Now I want to get the essential matrix from the fundamental matrix I calculated before, using this piece of code:

Mat fundamental = Calib3d.findFundamentalMat(object_left, object_right, Calib3d.RANSAC, 3, 0.99);
Core.gemm(cameraMatrix.t(), fundamental, 1, cameraMatrix, 0, E, Core.GEMM_1_T);

Core.gemm(E, cameraMatrix , 1, E, 0, E, Core.GEMM_1_T);

The flag Core.GEMM_1_T tells gemm to transpose src1.

So I calculate cameraMatrix^T * fundamental and save the result (let's call it E'). The next step is to calculate E' * cameraMatrix.
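
The two gemm calls chain to E = cameraMatrix.t() * fundamental * cameraMatrix. As a plain-Java sanity check of that chaining (no OpenCV; the gemm helper below mimics dst = alpha*src1*src2 + beta*src3 on toy matrices, not real calibration data):

```java
public class GemmChain {
    // Plain-Java stand-in for Core.gemm: alpha*a*b + beta*c (c may be null).
    static double[][] gemm(double[][] a, double[][] b, double alpha,
                           double[][] c, double beta) {
        int n = a.length, m = b[0].length, k = b.length;
        double[][] dst = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++) {
                double s = 0;
                for (int x = 0; x < k; x++) s += a[i][x] * b[x][j];
                dst[i][j] = alpha * s + (c != null ? beta * c[i][j] : 0);
            }
        return dst;
    }

    // Transpose of a rectangular matrix.
    static double[][] t(double[][] m) {
        double[][] r = new double[m[0].length][m.length];
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < m[0].length; j++) r[j][i] = m[i][j];
        return r;
    }

    public static void main(String[] args) {
        double[][] K = {{2, 0, 0}, {0, 3, 0}, {0, 0, 1}};    // toy camera matrix
        double[][] F = {{0, 1, 0}, {-1, 0, 1}, {0, -1, 0}};  // toy fundamental matrix
        // E = K^T * F * K, done as two chained products like the two gemm calls.
        double[][] E = gemm(gemm(t(K), F, 1, null, 0), K, 1, null, 0);
        System.out.println(E[0][1] + " " + E[1][0] + " " + E[1][2]);
    }
}
```

With K = diag(2, 3, 1), K^T * F * K simply scales the rows and columns of F, which the printed entries confirm.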

2014-05-06 08:30:25 -0500 asked a question Assertion Failed Core.solve in Java

The app I am working on, using OpenCV, is crashing with the following exception:

05-06 15:24:50.877: E/org.opencv.core(28997): core::solve_10() caught cv::Exception: /home/reports/ci/slave_desktop/50-SDK/opencv/modules/core/src/lapack.cpp:1197: error: (-215) type == _src2.type() && (type == CV_32F || type == CV_64F) in function bool cv::solve(cv::InputArray, cv::InputArray, cv::OutputArray, int)

Here is the code I am using:

    A = new Mat(4,3,CvType.CV_32F);
    double[] A_values = {
        u.x * p1.get(2, 0)[0]-p1.get(0, 0)[0],   u.x * p1.get(2, 1)[0]-p1.get(0, 1)[0],    u.x * p1.get(2, 2)[0]-p1.get(0, 2)[0],
        u.y * p1.get(2, 0)[0]-p1.get(1, 0)[0],   u.y * p1.get(2, 1)[0]-p1.get(1, 1)[0],    u.y * p1.get(2, 2)[0]-p1.get(1, 2)[0],
        v.x * p2.get(2, 0)[0]-p2.get(0, 0)[0],   v.x * p2.get(2, 1)[0]-p2.get(0, 1)[0],    v.x * p2.get(2, 2)[0]-p2.get(0, 2)[0],
        v.y * p2.get(2, 0)[0]-p2.get(1, 0)[0],   v.y * p2.get(2, 1)[0]-p2.get(1, 1)[0],    v.y * p2.get(2, 2)[0]-p2.get(1, 2)[0]
    };
    A.put(0, 0, A_values);

    B = new Mat(4,1,A.type());
    double[] B_values = {
            -(u.x * p1.get(2, 3)[0] - p1.get(0, 3)[0]),
            -(u.y * p1.get(2, 3)[0] - p1.get(1, 3)[0]),
            -(v.x * p2.get(2, 3)[0] - p2.get(0, 3)[0]),
            -(v.y * p2.get(2, 3)[0] - p2.get(1, 3)[0])
    };
    B.put(0, 0, B_values);

        Mat X = new Mat(3,1, A.type());
        Core.solve(A, B, X, Core.DECOMP_SVD);

Can someone please tell me why it crashes?
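
For reference, Core.solve with Core.DECOMP_SVD returns the least-squares solution of A*X = B for an overdetermined system like the 4x3 one above. A plain-Java sketch of what that solve produces, on toy values (via the normal equations and Gaussian elimination rather than OpenCV's SVD path):

```java
public class NormalEquations {
    public static void main(String[] args) {
        // Toy overdetermined 4x3 system with exact solution x = (1, 2, 3).
        double[][] A = {{1,0,0}, {0,1,0}, {0,0,1}, {1,1,1}};
        double[] b = {1, 2, 3, 6};

        // Form the normal equations (A^T A) x = A^T b.
        double[][] M = new double[3][3];
        double[] y = new double[3];
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++)
                for (int r = 0; r < 4; r++) M[i][j] += A[r][i] * A[r][j];
            for (int r = 0; r < 4; r++) y[i] += A[r][i] * b[r];
        }

        // Gaussian elimination with partial pivoting on the 3x3 system.
        for (int col = 0; col < 3; col++) {
            int piv = col;
            for (int r = col + 1; r < 3; r++)
                if (Math.abs(M[r][col]) > Math.abs(M[piv][col])) piv = r;
            double[] tmpRow = M[col]; M[col] = M[piv]; M[piv] = tmpRow;
            double tmpY = y[col]; y[col] = y[piv]; y[piv] = tmpY;
            for (int r = col + 1; r < 3; r++) {
                double f = M[r][col] / M[col][col];
                for (int c = col; c < 3; c++) M[r][c] -= f * M[col][c];
                y[r] -= f * y[col];
            }
        }

        // Back substitution.
        double[] x = new double[3];
        for (int i = 2; i >= 0; i--) {
            double s = y[i];
            for (int j = i + 1; j < 3; j++) s -= M[i][j] * x[j];
            x[i] = s / M[i][i];
        }
        System.out.println(Math.round(x[0]) + " " + Math.round(x[1]) + " " + Math.round(x[2]));
    }
}
```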

2014-05-06 06:05:43 -0500 received badge  Editor (source)
2014-05-06 05:59:14 -0500 received badge  Student (source)
2014-05-06 05:51:12 -0500 asked a question SolvePnP - How to use it?

Hi, I am doing some multiview geometry reconstruction with structure from motion. So far I have the following:

  • Two images as initial input
  • Camera parameters and distortion coeff
  • The working rectification pipeline for the initial input images
  • Creation of a disparity map
  • Creating a point cloud from the disparity map by iterating over it and taking the disparity value as z (x and y are the pixel coordinates in the disparity map). What is not working is reprojectImageTo3D, as my Q matrix seems to be very wrong, but everything else works perfectly.

This gives me a good point cloud of the scene.

Now I need to add n more images to the pipeline. I've googled a lot and found that the method solvePnP should help me.

But now I am very confused...

SolvePnP takes a list of 3D points and the corresponding 2D image points and reconstructs the R and T vectors for the third, fourth camera, and so on. I've read that the two vectors need to be aligned, meaning that the first 3D point in the first vector corresponds to the first 2D point in the second vector.

So far so good. But where do I take those correspondences from? Can I use the method reprojectPoints to get those two vectors? Or is my whole idea of using the disparity map for depth reconstruction wrong? (Alternative: triangulatePoints using the good matches found before.)

Can someone help me get this straight? How can I use solvePnP to add n more cameras (and therefore 3D points) to my point cloud and improve the reconstruction?
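
The alignment requirement boils down to building both lists from the same match list, so that index i refers to the same physical point in each. A plain-Java sketch on hypothetical data (plain arrays standing in for OpenCV's MatOfPoint3f/MatOfPoint2f):

```java
import java.util.ArrayList;
import java.util.List;

public class AlignCorrespondences {
    public static void main(String[] args) {
        // Hypothetical data: known 3D points, detected 2D keypoints, and
        // matches given as {index into points3d, index into keypoints2d}.
        double[][] points3d = {{0, 0, 5}, {1, 0, 5}, {0, 1, 4}};
        double[][] keypoints2d = {{320, 240}, {400, 240}, {320, 180}};
        int[][] matches = {{2, 0}, {0, 2}};

        // Build the two aligned lists solvePnP expects: entry i of
        // objectPoints corresponds to entry i of imagePoints.
        List<double[]> objectPoints = new ArrayList<>();
        List<double[]> imagePoints = new ArrayList<>();
        for (int[] m : matches) {
            objectPoints.add(points3d[m[0]]);
            imagePoints.add(keypoints2d[m[1]]);
        }
        // First aligned pair: z of the 3D point and x of its 2D observation.
        System.out.println(objectPoints.get(0)[2] + " <-> " + imagePoints.get(0)[0]);
    }
}
```

The same index-driven loop works whatever produced the matches (descriptor matching, reprojection, etc.); what matters is that both lists are filled in the same pass.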

2014-04-15 09:23:07 -0500 asked a question Points from one camera system to another

Hi, I am having some trouble with a simple transformation. My setup is that I have one camera and take two pictures of the same scene with it, with a slightly different angle and position. For my SfM approach I decompose the essential matrix, which works perfectly in that I get the 4 possible solutions for the relative camera pose. I assume that the first camera is located at the origin of world space.

The problem is finding the correct combination of R1/R2 and T1/T2. My idea is to triangulate all good matches found before, which gives me a list of points (x,y,z,w) in world space / first-camera space.

Then I transform a copy of each point into the system of the 2nd camera and check whether the z-values of the original point and the transformed copy are both positive. The combination of R and T for which this holds for most points should be the correct one. However, it still finds the wrong combination, and the rectification produces wrong results (from hardcoding and testing I know that R2 and T2 are correct, but it finds R1 and T2).

Here is my code:

/**
 * Transforms the given point into the target camera system
 * and returns the transformed point
 * @param point Point to transform
 * @param system Camera system to transform to
 * @return Mat the transformed point
 */
private Mat transformPointIntoSystem(Mat point, Mat system){
    double[] vals = {point.get(0, 0)[0], point.get(1, 0)[0], point.get(2, 0)[0], 1};
    Mat point_homo = new Mat(4,1,CvType.CV_32F);
    point_homo.put(0, 0, vals);

    Mat dst = new Mat(4,1,CvType.CV_32F);
    // Multiply the matrix * vector
    Core.gemm(system, point_homo, 1, dst, 0, dst);

    return dst;
}

private void findCorrectRT(Mat R1, Mat R2, Mat T1, Mat T2, Mat R, Mat T, MatOfPoint2f objLeft, MatOfPoint2f objRight){
    Mat res = new Mat(); // Result mat for triangulation

    // P1 is the 4x4 identity: the first camera sits at the world origin
    Mat P1 = new Mat(4,4,CvType.CV_32F);
    double[] diagVal = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
    P1.put(0, 0, diagVal);

    int[] max = new int[4];
    for(int i = 0; i < max.length; i++)
        max[i] = 0;

    Mat P2 = buildCameraMatrix(R1, T1);
    Calib3d.triangulatePoints(P1, P2, objLeft, objRight, res);
    publishProgress(res.size().width + " , " + res.size().height);
    for(int i = 0; i < res.size().width; i++){
        Mat X1 = transformPointIntoSystem(res.col(i), P1);
        Mat X2 = transformPointIntoSystem(X1, P2);

        if(X1.get(2, 0)[0] >= 0 && X2.get(2, 0)[0] >= 0){
            max[0] += 1;
        }
    }

    P2 = buildCameraMatrix(R1, T2);
    Calib3d.triangulatePoints(P1, P2, objLeft, objRight, res);
    publishProgress(res.size().width + " , " + res.size().height);
    for(int i = 0; i < res.size().width; i++){
        Mat X1 = transformPointIntoSystem(res.col(i), P1);
        Mat X2 = transformPointIntoSystem(X1, P2);

        if(X1.get(2, 0)[0] >= 0 && X2.get(2, 0)[0] >= 0){
            max[1] += 1;
        }
    }

    P2 = buildCameraMatrix(R2, T1);
    Calib3d.triangulatePoints(P1, P2, objLeft, objRight, res);
    publishProgress(res.size().width + " , " + res.size().height);
    for(int i ...
2014-03-31 13:29:49 -0500 received badge  Self-Learner (source)
2014-03-31 13:28:20 -0500 answered a question SVD on Android SDK

I've found the mistake. This code does not do the right matrix multiplication:

      Mat E = new Mat();
      Core.multiply(cameraMatrix.t(),fundamental, E); 
      Core.multiply(E, cameraMatrix, E);

I changed this to

      Core.gemm(cameraMatrix.t(), fundamental, 1, cameraMatrix, 1, E);

which now does the right matrix multiplication. As far as I can tell from the documentation, Core.multiply performs an element-wise multiplication, not the row-by-column product.
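
The difference is easy to see in plain Java on a toy 2x2 example (no OpenCV; the two loops mirror what Core.multiply and Core.gemm compute):

```java
public class MultiplyVsGemm {
    public static void main(String[] args) {
        double[][] a = {{1, 2}, {3, 4}};
        double[][] b = {{5, 6}, {7, 8}};

        // Core.multiply semantics: per-element (Hadamard) product.
        double[][] hadamard = new double[2][2];
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                hadamard[i][j] = a[i][j] * b[i][j];

        // Core.gemm semantics (alpha=1, beta=0): true row-by-column product.
        double[][] matmul = new double[2][2];
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                for (int k = 0; k < 2; k++)
                    matmul[i][j] += a[i][k] * b[k][j];

        // Same input, very different [0][0] entry.
        System.out.println(hadamard[0][0] + " vs " + matmul[0][0]);
    }
}
```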

2014-03-31 04:36:53 -0500 commented answer Decomposition of essential matrix leads to wrong rotation and translation

Could it be that my fundamental matrix equals the essential matrix, as Hartley and Zisserman state:

"11.7.3 The calibrated case

In the case of calibrated cameras normalized image coordinates may be used, and the essential matrix E computed instead of the fundamental matrix"

2014-03-29 08:19:17 -0500 commented answer Decomposition of essential matrix leads to wrong rotation and translation

Thanks so much!!!!

2014-03-29 06:52:14 -0500 asked a question Decomposition of essential matrix leads to wrong rotation and translation

Hi, I am doing some SfM and having trouble getting R and T from the essential matrix.

Here is what I am doing in source code:

    Mat fundamental = Calib3d.findFundamentalMat(object_left, object_right);
    Mat E = new Mat();

    Core.multiply(cameraMatrix.t(), fundamental, E); // cameraMatrix.t()*fundamental*cameraMatrix;
    Core.multiply(E, cameraMatrix, E);

    Mat R = new Mat();
    Mat.zeros(3, 3, CvType.CV_64FC1).copyTo(R);

    Mat T = new Mat();

    calculateRT(E, R, T);

private void calculateRT(Mat E, Mat R, Mat T){
    /*
     * //-- Step 6: calculate Rotation Matrix and Translation Vector
     * Matx34d P;
     * // decompose E
     * SVD svd(E, SVD::MODIFY_A);
     * Mat svd_u = svd.u;
     * Mat svd_vt = svd.vt;
     * Mat svd_w = svd.w;
     * Matx33d W(0,-1,0,1,0,0,0,0,1); // HZ 9.13
     * Mat_<double> R = svd_u * Mat(W) * svd_vt;
     * Mat_<double> T = svd_u.col(2); // u3
     *
     * if (!CheckCoherentRotation(R)) {
     *     std::cout << "resulting rotation is not coherent\n";
     *     return 0;
     * }
     */

    Mat w = new Mat();
    Mat u = new Mat();
    Mat vt = new Mat();

    Core.SVDecomp(E, w, u, vt, Core.DECOMP_SVD); // Maybe use flags
    double[] W_Values = {0,-1,0,1,0,0,0,0,1};
    Mat W = new Mat(new Size(3,3), CvType.CV_64FC1, new Scalar(W_Values));

    Core.multiply(u, W, R);
    Core.multiply(R, vt, R);

    T = u.col(2);
}

And here are the results of all matrices during and after the calculation:

        Number matches: 10299
        Number of good matches: 590
        Number of obj_points left: 590.0

        [4.209958176688844e-08, -8.477216249742946e-08, 9.132798068178793e-05;
        3.165719895008366e-07, 6.437858397735847e-07, -0.0006976204595236443;
        0.0004532506630569588, -0.0009224427024602799, 1]

        [0.05410018455525099, 0, 0;
        0, 0.8272987826496967, 0;
        0, 0, 1]

        [0, 0, 1;
         0, 0.9999999999999999, 0;
         1, 0, 0]

        [1; 0.8272987826496967; 0.05410018455525099]

        [0, 0, 1;
         0, 1, 0;
         1, 0, 0]

        [0, 0, 0;
         0, 0, 0;
         0, 0, 0]

        [1; 0; 0]

And for completeness, here are the images I am using: left: right:

Can someone point out where something is going wrong, or what I am doing wrong?

2014-03-29 05:51:25 -0500 received badge  Scholar (source)
2014-03-29 05:51:18 -0500 received badge  Supporter (source)
2014-03-28 15:55:57 -0500 asked a question Inaccurate feature matching

Hi, I am currently developing an Android app using OpenCV4Android. The aim of this app is structure from motion at a somewhat larger scale. The concept of the app and the mathematical background are done. The problem is the matching of the feature points. For a better understanding I will provide a bit more background.

The problem is that I have two images of the same scene. Between the images the camera has been translated, and maybe there is some rotation. For the structure from motion I need the fundamental matrix, and from it the essential matrix for later usage.

As OpenCV4Android does not provide SIFT or SURF feature matching, I am using ORB and the BRUTEFORCE_HAMMING matcher. The feature detector finds exactly 500 feature points per image (8 MP), but the matcher only yields between 0 and 3 good matches... I need far more matches to get a good fundamental matrix.

I am doing the whole process on undistorted images, using the camera calibration done before.

Here is the code I am using to get the matches and the feature points:

// Create a feature detector which uses ORB features
FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);

// Create an extractor for the description of the feature points
DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.ORB);

// Matches the described feature points of both images together using bruteforce_hamming
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);

// Initial value for the minimal distance between two matches
double min_dist = 80;
// Initial value for the maximal distance between two matches
double max_dist = 10000;
/**
 * Calculates the matches of KeyPoints between two images
 * @param left the left image
 * @param right the right image
 * @return List<DMatch> list of matches between left and right
 */
private List<DMatch> computeGoodMatches(Mat left, Mat right, MatOfPoint2f object_left, MatOfPoint2f object_right){
    // For the left image

    MatOfKeyPoint key_left = new MatOfKeyPoint();
    detector.detect(left, key_left);

    publishProgress("Keypoints left: " + String.valueOf(key_left.size()));

    Mat desc_left = new Mat();
    extractor.compute(left, key_left, desc_left);

    // For the right image

    MatOfKeyPoint key_right = new MatOfKeyPoint();
    detector.detect(right, key_right);

    publishProgress("Keypoints right: " + String.valueOf(key_right.size()));

    Mat desc_right = new Mat();
    extractor.compute(right, key_right, desc_right);

    // Calculate the matches (evaluating the good matches happens later in the code)
    MatOfDMatch matches = new MatOfDMatch();
    matcher.match(desc_left, desc_right, matches);

    List<DMatch> matchesList = matches.toList();

    publishProgress("Number matches: " + String.valueOf(matchesList.size()));

    LinkedList<DMatch> good_matches = new LinkedList<DMatch>();

    // Quick calculation of min and max distance between matches 
    for( int j = 0; j < desc_left.rows(); j++ ){
        Double dist = (double) matchesList.get(j).distance;
        if( dist < min_dist ) min_dist = dist;
        if( dist > max_dist ) max_dist = dist;
    }

    // Only use good matches
    // Good = distance below max(2*min_dist, 0.02)
    for(int j = 0; j < desc_left.rows(); j++){
        if(matchesList.get(j).distance <= Math.max(2 * min_dist, 0.02)){
            good_matches.addLast(matchesList.get(j));
        }
    }

    List<KeyPoint> list_key_left =  key_left.toList();
    List<KeyPoint> list_key_right = key_right.toList();

    LinkedList<Point> objList1 = new LinkedList<Point>();
    LinkedList<Point> objList2 = new LinkedList<Point>();

    for(int j = 0; j<good_matches.size(); j++){

    object_right ...
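The good-match filter used above can be sketched in plain Java over a bare array of descriptor distances (hypothetical values; the max(2*min_dist, 0.02) threshold is the one suggested in OpenCV's feature-matching tutorial):

```java
import java.util.ArrayList;
import java.util.List;

public class GoodMatchFilter {
    public static void main(String[] args) {
        // Hypothetical descriptor distances for a handful of matches.
        double[] distances = {12.0, 80.0, 25.0, 300.0, 11.0, 60.0};

        // Find the minimum distance over all matches.
        double minDist = Double.MAX_VALUE;
        for (double d : distances) if (d < minDist) minDist = d;

        // Keep a match when its distance is below max(2*minDist, 0.02).
        List<Integer> good = new ArrayList<>();
        double threshold = Math.max(2 * minDist, 0.02);
        for (int i = 0; i < distances.length; i++)
            if (distances[i] <= threshold) good.add(i);

        System.out.println(good); // indices of the surviving matches
    }
}
```

With minDist = 11 the threshold is 22, so only the two tightest matches survive; a threshold of plain minDist would keep exactly one, which matches the "0 to 3 matches" symptom described above.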
2014-03-28 07:19:41 -0500 commented question SVD on Android SDK

Thanks - that is exactly what I was looking for!

2014-03-28 06:12:30 -0500 asked a question SVD on Android SDK

Hi, I am doing a Structure-from-Motion approach and having some trouble getting R and T from the essential matrix.

So far I am doing the following steps:

  • Calibrate the camera
  • Take two images with the same camera
  • Undistort the images
  • Find feature points
  • Match the features
  • Calculate the fundamental matrix F using Mat fundamental = Calib3d.findFundamentalMat(object_left, object_right);
  • Calculate the essential matrix E using the following block of code:

    Mat E = new Mat();
    Core.multiply(cameraMatrix.t(),fundamental, E);
    Core.multiply(E, cameraMatrix, E);

Using E I now need to calculate R and T, the relative rotation and translation between both cameras. I've read the chapter about SVD and E in Hartley and Zisserman, but now I am struggling with the OpenCV code. A quick Google search brought me this code, which is in C++:

Matx34d P;
//decompose E
SVD svd(E, SVD::MODIFY_A);
Mat svd_u = svd.u;
Mat svd_vt = svd.vt;
Mat svd_w = svd.w;
Matx33d W(0,-1,0,1,0,0,0,0,1);//HZ 9.13
Mat_<double> R = svd_u * Mat(W) * svd_vt; //
Mat_<double> T = svd_u.col(2); //u3

if (!CheckCoherentRotation (R)) {
  std::cout<<"resulting rotation is not coherent\n";
  return 0;
}

The problem is now that I don't know how to transfer this code to Android/Java: there is no SVD class in the Java bindings. I am computing the SVD of E using Mat svd = E.inv(Core.DECOMP_SVD);, which returns a single Mat object.

Can someone help me, please? I need access to U, W and VT from the singular value decomposition.