JVorwald's profile - activity

2020-07-11 15:14:51 -0500 received badge  Nice Answer (source)
2018-12-18 00:20:04 -0500 received badge  Popular Question (source)
2015-08-22 07:17:31 -0500 received badge  Teacher (source)
2015-08-16 19:54:01 -0500 received badge  Enthusiast
2015-08-15 10:46:38 -0500 received badge  Scholar (source)
2015-08-15 10:41:08 -0500 commented question How to read / write xml file in Java

OK, so what is the xml / yml format for cam/dist data? Did you down vote the question because you know the answer but think the question is trivial or not valid?

2015-08-15 10:36:51 -0500 answered a question How to read / write xml file in Java

This example writes the calibration and distortion matrices in C++ to either XML or YML, demonstrating the file format. Here's a copy of the code in case the link breaks. Change the suffix to xml to see that format.

#include "opencv2/opencv.hpp"
#include <time.h>

using namespace cv;

int main(int, char** argv)
{
    FileStorage fs("test.yml", FileStorage::WRITE);

    fs << "frameCount" << 5;
    time_t rawtime; time(&rawtime);
    fs << "calibrationDate" << asctime(localtime(&rawtime));
    Mat cameraMatrix = (Mat_<double>(3,3) << 1000, 0, 320, 0, 1000, 240, 0, 0, 1);
    Mat distCoeffs = (Mat_<double>(5,1) << 0.1, 0.01, -0.001, 0, 0);
    fs << "cameraMatrix" << cameraMatrix << "distCoeffs" << distCoeffs;
    fs << "features" << "[";
    for( int i = 0; i < 3; i++ )
    {
        int x = rand() % 640;
        int y = rand() % 480;
        uchar lbp = rand() % 256;

        fs << "{:" << "x" << x << "y" << y << "lbp" << "[:";
        for( int j = 0; j < 8; j++ )
            fs << ((lbp >> j) & 1);
        fs << "]" << "}";
    }
    fs << "]";
    return 0;
}
2015-08-15 10:15:26 -0500 received badge  Nice Question (source)
2015-08-15 09:48:45 -0500 received badge  Self-Learner (source)
2015-08-15 09:25:05 -0500 answered a question Procedure for obtaining/updating camera pose for moving camera

One approach is this example code, which implements an algorithm that 1) detects 5000 FAST key points, 2) calculates optical flow to get matched key points, 3) finds the essential matrix, and 4) recovers the pose. To solve the scaling problem, the translation vector is scaled to match the actual displacement between the photos. The program is set up to read photo files from the KITTI odometry database and compare the calculated trajectory with the measured trajectory.

The program can easily be translated to Java and modified to use 1) ORB detection and 2) BRISK extraction with a brute-force Hamming(LUT) matcher, but the results using 500 ORB key points do not match as well as the 5000 FAST key points with optical flow.

Some other, possibly better, approaches are given here and here.

2015-08-15 09:11:07 -0500 asked a question How to set parameters for Orb in Java?

This previous answer seems clear, except that the following code doesn't change the number of features detected. How do I extract more than 500 features (the default)? The parameter list / order was changed to match the protected variable list in orb.cpp.

File outputFile = new File("detectorParams.yml");
detector = FeatureDetector.create(FeatureDetector.ORB);
detector.read(outputFile.getPath());   // load the parameter file below
MatOfKeyPoint kp1 = new MatOfKeyPoint();
for (images in video) {   // pseudocode: loop over video frames
  Mat img = ...
  detector.detect(img, kp1);
}

YML Parameter file:

nfeatures: 2000
scaleFactor: 1.2000000000000000e+000
nlevels: 8
edgeThreshold: 31
firstLevel: 0
wta_k: 2
scoreType: 0
patchSize: 31
fastThreshold: 0

Here is the XML file that was tried (generated based on this example).

<?xml version="1.0"?>

kp1 always has 500 rows, which is the default value for nfeatures, so reading the yml file did not change nfeatures to 2000. I have tried 1) the original list in the previous answer, 2) writing to xml as well as yml (without changing the xml format), and 3) writing only nfeatures.

Code to generate xml / yml files

#include "opencv2/opencv.hpp"
#include <time.h>

using namespace cv;

int main(int, char** argv)
{
    int nfeatures=2000, nlevels=8, edgeThreshold=31, firstLevel=0, wta_k=2,
            scoreType=ORB::HARRIS_SCORE, patchSize=31, fastThreshold=0;
    double scaleFactor=1.2;

    for (int ft=0; ft<2; ft++) {
        FileStorage fs;
        if (ft==0) {
            fs = FileStorage("test_descrip.xml", FileStorage::WRITE);
        } else {
            fs = FileStorage("test_descrip.yml", FileStorage::WRITE);
        }
        fs << "nfeatures" << nfeatures;
        fs << "scaleFactor" << scaleFactor;
        fs << "nlevels" << nlevels;
        fs << "edgeThreshold" << edgeThreshold;
        fs << "firstLevel" << firstLevel;
        fs << "wta_k" << wta_k;
        fs << "scoreType" << scoreType;
        fs << "patchSize" << patchSize;
        fs << "fastThreshold" << fastThreshold;
    }
    return 0;
}

I tried writing a C++ program to output the parameters, but nothing gets printed.

int main(int argc, char** argv) {

//    CV_WRAP static Ptr<ORB> create(int nfeatures=500, float scaleFactor=1.2f, int nlevels=8, int edgeThreshold=31,
//        int firstLevel=0, int WTA_K=2, int scoreType=ORB::HARRIS_SCORE, int patchSize=31, int fastThreshold=20);
    Ptr<ORB> orb=ORB::create(2000,1.3f,9,32,0,2,ORB::HARRIS_SCORE,35,22);

    FileStorage fs1("testorb.yml", FileStorage::WRITE);
    FileStorage fs2("testorb.xml", FileStorage::WRITE);
    return 0;
}

This creates the two files, but they are both empty apart from the header:


<?xml version="1.0"?>


2015-08-02 07:11:03 -0500 asked a question How to read / write xml file in Java

Has anyone written the calibration / distortion matrices from Java and had them read by C++ code using XML?

2015-07-30 02:18:08 -0500 received badge  Student (source)
2015-07-29 18:05:45 -0500 received badge  Editor (source)
2015-07-29 18:05:09 -0500 asked a question Procedure for obtaining/updating camera pose for moving camera

I would like to determine the translation and rotation of a single monocular camera (Android phone) mounted on a micro helicopter. The camera has been calibrated with the chessboard, so the camera matrix and distortion parameters are available. Is the following the correct procedure? The camera is moving; the background is fixed.

0) Initialize pos_R = Mat.eye(3) and pos_T = Mat.zeros(3,1).
1) Store the first image in Mat img_train and use ORB detector, BRISK extractor to obtain keypoints / features
2) Store the next video image in Mat img_query, use ORB/BRISK with BF_HG radius matcher
3) Find distances between keypoint matches and keep only distances below threshold
4) For the first frame, set it as key frame.  For subsequent frames update the keyframe if the number of keypoints falls to less than a required number (30) or if the percent of keypoint matches falls below a required percentage (50).
5) Obtain the change in rotation and translation between the current frame and the last key frame.  Use findEssentialMat to obtain the essential matrix from the camera focal length, principal point, and matching points.  Then use recoverPose to obtain camera_R, camera_T
6) Update pos_R and pos_T using gemm:  pos_R = camera_R * keyFrame_R;  pos_T = keyFrame_R * camera_T + keyFrame_T
7) Convert to camera angles for display using Rodrigues
8) store query image, keypoints, and features into train image, keypoints, and features
9) Repeat starting from step 2

If we can get this working on android, we'll test it by moving the camera 1 foot forward/aft, left/right, up/down. Then rotate camera about vertical axis by 30, 60 deg, and pitch the camera by 15 deg, to see how the results look.

As the project progresses, INS will be integrated and Kalman filter implemented. Is there any video of indoor flight available for testing?

I've run the procedure on a video from a model helicopter, but I don't know the truth values. The video came from an onboard cam on YouTube. I can see some problems: x, y, z are not in an earth system (X east, Y north, Z up) but instead may be in a system with x up, y right, and z forward. From a 3D graph of the x/y/z results it appears that earth z is the distance from the z axis, because the helicopter starts and ends on the z axis, and returns to the z axis at times that may correspond to the vehicle hitting the ground.

The rotation / translation are in the current camera x/y/z frames, which I think are camera up, camera right, camera forward directions. To get to earth axis (X east, Y north, Z up) would require some conversion.

Edit 1: Added key frame and comment about earth axis and results from sample video.

2015-07-26 00:03:34 -0500 answered a question Setting CFLAGS and CXXFLAGS for OpenCV 2.4.3

In CMake, add two new boolean variables and leave them unchecked (off).