2020-07-11 15:14:51 -0600 | received badge | ● Nice Answer (source) |
2018-12-18 00:20:04 -0600 | received badge | ● Popular Question (source) |
2015-08-22 07:17:31 -0600 | received badge | ● Teacher (source) |
2015-08-16 19:54:01 -0600 | received badge | ● Enthusiast |
2015-08-15 10:46:38 -0600 | received badge | ● Scholar (source) |
2015-08-15 10:41:08 -0600 | commented question | How to read / write xml file in Java OK, so what is the xml / yml format for cam/dist data? Did you downvote the question because you know the answer but think the question is trivial or not valid?
2015-08-15 10:36:51 -0600 | answered a question | How to read / write xml file in Java This example writes the calibration and distortion matrices in c++ to either xml or yml, demonstrating the file format. Here's a copy of the code in case the link breaks. Change the suffix to xml to see that format.
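For reference, the yml emitted by the FileStorage-based example above looks roughly like this (the numbers below are placeholders, not real calibration data):

```yaml
%YAML:1.0
camera_matrix: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 532.8, 0., 319.5, 0., 532.9, 239.5, 0., 0., 1. ]
distortion_coefficients: !!opencv-matrix
   rows: 5
   cols: 1
   dt: d
   data: [ -2.8e-01, 7.2e-02, 1.3e-03, -1.0e-04, 0. ]
```

The xml variant uses the same `rows`/`cols`/`dt`/`data` fields inside an `<opencv_storage>` root, with `type_id="opencv-matrix"` on each matrix node.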
2015-08-15 10:15:26 -0600 | received badge | ● Nice Question (source) |
2015-08-15 09:48:45 -0600 | received badge | ● Self-Learner (source) |
2015-08-15 09:25:05 -0600 | answered a question | Procedure for obtaining/updating camera pose for moving camera One approach: this example code implements an algorithm that 1) detects 5000 FAST key points, 2) calculates optical flow to get matched key points, 3) finds the essential matrix, and 4) finds the pose. To solve the scale ambiguity, the translation vector is scaled to match the actual displacement between the photos. The program is set up to read photo files from the KITTI Odometry database and compare the calculated trajectory with the measured trajectory. The program can easily be translated to Java and modified to 1) ORB detect, 2) BRISK extract / brute-force HAMMINGLUT match, but the results with 500 ORB key points do not match as well as the 5000 FAST key points with optical flow. Some other, possibly better, approaches are given here and here.
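Since the essential-matrix decomposition only recovers the translation direction (a unit-norm vector), the scaling step described above can be sketched as follows; `groundTruthDist` is a hypothetical name for the known displacement between the two frames (e.g. from the KITTI ground truth):

```java
public class ScaleTranslation {
    // Rescale a unit-norm translation vector (as returned by recoverPose)
    // so its length matches a known displacement between the two frames.
    static double[] scaleTranslation(double[] t, double groundTruthDist) {
        double norm = Math.sqrt(t[0] * t[0] + t[1] * t[1] + t[2] * t[2]);
        return new double[] {
            t[0] / norm * groundTruthDist,
            t[1] / norm * groundTruthDist,
            t[2] / norm * groundTruthDist
        };
    }

    public static void main(String[] args) {
        // Suppose the ground truth says the camera moved 1.5 m,
        // mostly along the optical axis.
        double[] t = {0.0, 0.0, 1.0};
        double[] scaled = scaleTranslation(t, 1.5);
        System.out.println(scaled[2]); // 1.5
    }
}
```

Without this step, the per-frame translations cannot be chained into a metrically meaningful trajectory.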
2015-08-15 09:11:07 -0600 | asked a question | How to set parameters for Orb in Java? This previous answer seems clear, except the following code doesn't change the number of features detected. How can more than 500 features (the default) be extracted? The parameter list / order was changed to match the protected variable list in orb.cpp. YML parameter file: here is the xml file that was tried (generated based on this example). kp1 always has 500 rows, which is the default value for nfeatures, so reading the yml file did not change nfeatures to 2000. I have tried 1) the original list in the previous answer, 2) writing to xml and yml (but I didn't change the format for xml), 3) writing only nfeatures. Code to generate xml / yml files: I tried writing a c++ program to output the parameters, but nothing gets printed. It creates the two files, but they are both empty: testorb.xml testorb.yml
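A sketch of a yml parameter file for the 2.4.x `FeatureDetector.read` path is below. The catch is usually that the parameter names must exactly match those registered for ORB in orb.cpp (note the capitalization, e.g. `nFeatures`, not `nfeatures`); the names and `name:` line here are assumptions to be verified against your OpenCV build, for instance by calling the detector's `write` and inspecting the output:

```yaml
%YAML:1.0
name: "Feature2D.ORB"
WTA_K: 2
edgeThreshold: 31
firstLevel: 0
nFeatures: 2000
nLevels: 8
patchSize: 31
scaleFactor: 1.2
scoreType: 0
```

If any name does not match a registered parameter, the read typically leaves the detector at its defaults, which would explain kp1 staying at 500 rows.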
2015-08-02 07:11:03 -0600 | asked a question | How to read / write xml file in Java Has anyone written the calibration / distortion matrices from Java and had them read by C++ code using xml?
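One workaround, assuming the Java bindings in use do not expose FileStorage, is to write the xml by hand in the layout `cv::FileStorage` expects (`opencv_storage` root, `rows`/`cols`/`dt`/`data` per matrix). A minimal sketch with placeholder values; `WriteCalibrationXml` and `matrixXml` are hypothetical names:

```java
import java.io.FileWriter;
import java.io.IOException;

public class WriteCalibrationXml {
    // Format one matrix as an opencv-matrix XML node, the layout
    // cv::FileStorage reads back in C++.
    static String matrixXml(String name, int rows, int cols, double[] data) {
        StringBuilder sb = new StringBuilder();
        sb.append("<").append(name).append(" type_id=\"opencv-matrix\">\n");
        sb.append("  <rows>").append(rows).append("</rows>\n");
        sb.append("  <cols>").append(cols).append("</cols>\n");
        sb.append("  <dt>d</dt>\n");
        sb.append("  <data>");
        for (double d : data) sb.append(d).append(" ");
        sb.append("</data>\n");
        sb.append("</").append(name).append(">\n");
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // Placeholder calibration values, not a real camera.
        double[] cam = {532.8, 0, 319.5, 0, 532.9, 239.5, 0, 0, 1};
        double[] dist = {-0.28, 0.072, 0.0013, -0.0001, 0.0};
        try (FileWriter w = new FileWriter("calibration.xml")) {
            w.write("<?xml version=\"1.0\"?>\n<opencv_storage>\n");
            w.write(matrixXml("camera_matrix", 3, 3, cam));
            w.write(matrixXml("distortion_coefficients", 5, 1, dist));
            w.write("</opencv_storage>\n");
        }
    }
}
```

On the C++ side the file would then be opened with `FileStorage fs("calibration.xml", FileStorage::READ)` and the nodes read back as `Mat`.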
2015-07-30 02:18:08 -0600 | received badge | ● Student (source) |
2015-07-29 18:05:45 -0600 | received badge | ● Editor (source) |
2015-07-29 18:05:09 -0600 | asked a question | Procedure for obtaining/updating camera pose for moving camera I would like to determine the translation and rotation of a single monocular camera (android phone) mounted on a micro helicopter. The camera has been calibrated with the chess board, so the camera matrix and distortion parameters are available. Is the following the correct procedure? The camera is moving; the background is fixed. If we can get this working on android, we'll test it by moving the camera 1 foot forward/aft, left/right, up/down, then rotating the camera about the vertical axis by 30 and 60 deg, and pitching the camera by 15 deg, to see how the results look. As the project progresses, INS will be integrated and a Kalman filter implemented. Is there any video of indoor flight available for testing? I've run the procedure on a video from a model helicopter, but I don't know the truth values. The video came from an onboard cam on YouTube. I can see some problems: x, y, z are not in an earth system (X east, Y north, Z up) but instead may be in a system with x up, y right, and z forward. From a 3d graph of the x/y/z results it appears that earth z is the distance from the z axis, because the helicopter starts and ends on the z axis, and returns to the z axis at times that may correspond to the vehicle hitting the ground. The rotation / translation are in the current camera x/y/z frames, which I think are the camera up, camera right, and camera forward directions. To get to earth axes (X east, Y north, Z up) would require some conversion. Edit 1: Added key frame and comment about earth axes and results from sample video.
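The axis conversion discussed above can be sketched as a fixed remapping. This assumes the usual OpenCV camera convention (x right, y down, z forward) and a level camera pointing north; any real tilt or heading would require applying the full rotation from the pose instead of this constant permutation:

```java
public class CameraToEarth {
    // Map a camera-frame vector (x right, y down, z forward) to an
    // earth frame (X east, Y north, Z up), assuming a level camera
    // facing north: east = x, north = z, up = -y.
    static double[] cameraToEarth(double[] v) {
        return new double[] { v[0], v[2], -v[1] };
    }

    public static void main(String[] args) {
        double[] forward = {0, 0, 1};          // camera optical axis
        double[] earth = cameraToEarth(forward);
        System.out.println(earth[1]);          // 1.0, i.e. points north
    }
}
```

With an onboard INS, the constant mapping would be replaced by the current attitude (rotation matrix or quaternion) at each frame.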
2015-07-26 00:03:34 -0600 | answered a question | Setting CFLAGS and CXXFLAGS for OpenCV 2.4.3 In CMake, add two new boolean variables and leave them unchecked (off). |
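Equivalently, the standard CMake variables `CMAKE_C_FLAGS` and `CMAKE_CXX_FLAGS` can be set on the command line when configuring; a sketch (the flag values here are placeholders):

```shell
# Configure OpenCV with custom C and C++ compiler flags; CMake appends
# these to the flags it generates for the chosen build type.
cmake -DCMAKE_C_FLAGS="-O2 -mtune=generic" \
      -DCMAKE_CXX_FLAGS="-O2 -mtune=generic" \
      ../opencv
```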