2017-08-31 18:25:15 -0500 | received badge | ● Popular Question (source) |

2014-05-13 07:56:19 -0500 | commented question | Bundle Adjustment So the only way is to rewrite the OpenCV Java Android app in C++ using the NDK and then include SSBA-3.0? |

2014-05-13 04:28:32 -0500 | asked a question | Bundle Adjustment Hi, I am wondering if the OpenCV implementation for Java contains some bundle adjustment. If not, do you have any tips on which library I can use? I've googled a lot and only found BoofCV and JavaCV, but it seems those libraries are not very easy to use with OpenCV data. Thanks |

2014-05-07 06:16:26 -0500 | asked a question | SVD with different solution than WolframAlpha I've found out that the results of the SVDecomp function in Java are very different from the results of WolframAlpha. The input matrix is exactly the same for OpenCV and WolframAlpha. Here are the results from WolframAlpha: And here is what OpenCV produces when using SVDecomp: To mention: W in OpenCV is not a matrix, and the signs of some values are different. Is this a bug? Here is my source code |
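
Editor's note: neither result need be wrong. An SVD is only unique up to the signs of paired columns of U and rows of Vt, and OpenCV's `Core.SVDecomp` returns the singular values as an N×1 vector rather than a diagonal matrix. A minimal plain-Java sketch (no OpenCV dependency, values illustrative) showing that two sign choices reconstruct the same input matrix:

```java
// (U, w, Vt) and the variant with column j of U and row j of Vt negated
// reconstruct exactly the same input matrix A = U * diag(w) * Vt.
// OpenCV's Core.SVDecomp(A, w, u, vt) returns w as an Nx1 vector.
public class SvdSignDemo {

    static double[][] matMul(double[][] a, double[][] b) {
        double[][] c = new double[a.length][b[0].length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < b[0].length; j++)
                for (int l = 0; l < b.length; l++)
                    c[i][j] += a[i][l] * b[l][j];
        return c;
    }

    static double[][] diag(double[] w) {
        double[][] d = new double[w.length][w.length];
        for (int i = 0; i < w.length; i++) d[i][i] = w[i];
        return d;
    }

    /** A = U * diag(w) * Vt */
    static double[][] reconstruct(double[][] u, double[] w, double[][] vt) {
        return matMul(matMul(u, diag(w)), vt);
    }

    public static void main(String[] args) {
        double[][] u  = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        double[][] vt = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        double[]   w  = {3, 2, 1};
        double[][] a1 = reconstruct(u, w, vt);

        // flip the sign of column 0 of U together with row 0 of Vt
        double[][] u2  = {{-1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        double[][] vt2 = {{-1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        double[][] a2 = reconstruct(u2, w, vt2);

        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                assert Math.abs(a1[i][j] - a2[i][j]) < 1e-12;
        System.out.println("both sign choices reconstruct the same matrix");
    }
}
```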

2014-05-07 04:43:49 -0500 | asked a question | Core.gemm for matrix multiplication I want to do a matrix multiplication with OpenCV in Java. As Core.multiply is not a "real" matrix multiplication, I am confused about how to perform a "normal" matrix multiplication. Core.gemm seems to solve the problem. Gemm uses this formula for the multiplication. Now I want to get the essential matrix from the fundamental matrix I've calculated before, using this piece of code. The flag Core.GEMM_1_T means to transpose src1; therefore I calculate cameraMatrix^T * fundamental and save this (let's call it E'). The next step is that I calculate E' * cameraMatrix. |
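Editor's note: the two-step computation described above is E = K^T * F * K. A plain-Java sketch of exactly what the two gemm calls compute (no OpenCV dependency; the equivalent OpenCV Java calls are named in the comments):

```java
// E = cameraMatrix^T * fundamental * cameraMatrix, in the same two steps as
// something like (flags per the gemm documentation):
//   Core.gemm(cameraMatrix, fundamental, 1, new Mat(), 0, ePrime, Core.GEMM_1_T);
//   Core.gemm(ePrime, cameraMatrix, 1, new Mat(), 0, e, 0);
public class EssentialFromFundamental {

    static double[][] transpose(double[][] a) {
        double[][] t = new double[a[0].length][a.length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                t[j][i] = a[i][j];
        return t;
    }

    static double[][] matMul(double[][] a, double[][] b) {
        double[][] c = new double[a.length][b[0].length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < b[0].length; j++)
                for (int l = 0; l < b.length; l++)
                    c[i][j] += a[i][l] * b[l][j];
        return c;
    }

    /** E = K^T * F * K, computed as the two gemm steps. */
    static double[][] essential(double[][] k, double[][] f) {
        double[][] ePrime = matMul(transpose(k), f); // the GEMM_1_T step
        return matMul(ePrime, k);
    }

    public static void main(String[] args) {
        double[][] k = {{800, 0, 320}, {0, 800, 240}, {0, 0, 1}};
        double[][] f = {{0, 1, 2}, {3, 4, 5}, {6, 7, 8}};
        double[][] e = essential(k, f);
        System.out.println("E[0][0] = " + e[0][0]);
    }
}
```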

2014-05-06 08:30:25 -0500 | asked a question | Assertion failed in Core.solve in Java The app I am working on, using OpenCV, is crashing with the following exception:
Here is the code I am using: Can someone please tell me why it crashes? |

2014-05-06 06:05:43 -0500 | received badge | ● Editor (source) |

2014-05-06 05:59:14 -0500 | received badge | ● Student (source) |

2014-05-06 05:51:12 -0500 | asked a question | SolvePnP - How to use it? Hi, I am doing some multiview geometry reconstruction with structure from motion. So far I have the following: - Two images as initial input
- Camera parameters and distortion coefficients
- The working rectification pipeline for the initial input images
- Creation of a disparity map
- Creation of a point cloud from the disparity map, by iterating over the disparity map and taking the value as z (x and y are the pixel coordinates in the disparity map). (What is not working is reprojectImageTo3D, as my Q matrix seems to be very wrong, but everything else is working perfectly.)
This gives me a good point cloud of the scene. Now I need to add n more images to the pipeline. I've googled a lot and found that the method solvePnP will help me. But now I am very confused... SolvePnP takes a list of 3D points and the corresponding 2D image points and reconstructs the R and T vectors for the third, fourth camera... and so on. I've read that the two vectors need to be aligned, meaning that the first 3D point in the first vector corresponds to the first 2D point in the second vector. So far so good. But where do I take those correspondences from? Can I use the method reprojectPoints to get those two vectors? Or is my whole idea of using the disparity map for depth reconstruction wrong? (Alternative: triangulatePoints using the good matches found before.) Can someone help me get this straight? How can I use solvePnP to add n more cameras, and therefore 3D points, to my point cloud and improve the result of the reconstruction? |
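
Editor's note: the usual source of those correspondences is chaining feature matches. While triangulating matches between images 1 and 2, record which keypoint index in image 2 produced each 3D point; matches between images 2 and 3 then chain into aligned (3D point, 2D point in image 3) pairs for `Calib3d.solvePnP`. A plain-Java sketch of that bookkeeping (names illustrative, no OpenCV dependency):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Chain matches to build the aligned objectPoints / imagePoints lists that
// solvePnP expects: pair i of the result means
//   objectPoints.get(pair[0])  <->  imagePointsInImage3.get(pair[1])
public class PnpCorrespondences {

    /**
     * @param kp2ToPoint3d keypoint index in image 2 -> index of the 3D point
     *                     it helped triangulate (filled during images 1-2)
     * @param matches23    each entry = {keypointIndexInImage2, keypointIndexInImage3}
     */
    static List<int[]> chain(Map<Integer, Integer> kp2ToPoint3d, int[][] matches23) {
        List<int[]> pairs = new ArrayList<>();
        for (int[] m : matches23) {
            Integer p3d = kp2ToPoint3d.get(m[0]);
            if (p3d != null) {
                pairs.add(new int[]{p3d, m[1]}); // aligned 3D/2D correspondence
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        Map<Integer, Integer> kp2ToPoint3d = new HashMap<>();
        kp2ToPoint3d.put(10, 0); // keypoint 10 in image 2 -> 3D point 0
        kp2ToPoint3d.put(11, 1);
        int[][] matches23 = {{10, 5}, {11, 6}, {99, 7}}; // keypoint 99 has no 3D point
        List<int[]> pairs = chain(kp2ToPoint3d, matches23);
        System.out.println(pairs.size() + " correspondences for solvePnP");
    }
}
```

The resulting index pairs are then used to fill the `MatOfPoint3f` and `MatOfPoint2f` arguments of `Calib3d.solvePnP` in the same order.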

2014-04-15 09:23:07 -0500 | asked a question | Points from one camera system to another Hi, I am having some trouble doing a simple transformation. My setup is that I have one camera and take two pictures of the same scene with it, but with a slightly different angle and position. For my SfM approach I am doing the essential-matrix decomposition, which works perfectly, as I am getting the 4 possible solutions for the relative camera pose. I assume that the first camera is located at the origin of the world space. The problem is when I try to find the correct combination of R1/R2 and T1/T2. My idea is that I triangulate all good matches found before. This gives me a list of points (x,y,z,w) in the world space / first-camera space. Now I want to transform a copy of each point into the system of the 2nd camera and check whether the z-values of the original point and of the copied, transformed point are both positive. If this holds for most of the points, the used combination of R and T is the correct one. But it still finds the wrong combination, as the rectification produces wrong results (from hardcoding and testing I know that R2 and T2 are correct, but it finds R1 and T2). Here is my code: (more) |
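
Editor's note: one frequent pitfall in this cheirality test is forgetting to dehomogenize — triangulated points are (x,y,z,w) and must be divided by w before checking depths. A plain-Java sketch of the test (no OpenCV dependency, numbers illustrative):

```java
// A candidate (R, t) from the essential-matrix decomposition is correct when
// triangulated points have positive depth in BOTH camera frames.
public class Cheirality {

    /** x2 = R * x1 + t : map a point from the first camera's frame to the second's. */
    static double[] toSecondCamera(double[][] r, double[] t, double[] x) {
        double[] y = new double[3];
        for (int i = 0; i < 3; i++) {
            y[i] = t[i];
            for (int j = 0; j < 3; j++) y[i] += r[i][j] * x[j];
        }
        return y;
    }

    static boolean inFrontOfBoth(double[][] r, double[] t, double[] xHom) {
        double w = xHom[3];
        double[] x = {xHom[0] / w, xHom[1] / w, xHom[2] / w}; // dehomogenize first!
        return x[2] > 0 && toSecondCamera(r, t, x)[2] > 0;
    }

    public static void main(String[] args) {
        double[][] id = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        double[] point = {0, 0, 5, 1}; // homogeneous point 5 units ahead
        // plausible pose: passes; pose placing the point behind camera 2: fails
        System.out.println(inFrontOfBoth(id, new double[]{0, 0, 1}, point));
        System.out.println(inFrontOfBoth(id, new double[]{0, 0, -10}, point));
    }
}
```

In practice the candidate with the largest count of points passing this test, over all four (R, t) combinations, is chosen.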

2014-03-31 13:29:49 -0500 | received badge | ● Self-Learner (source) |

2014-03-31 13:28:20 -0500 | answered a question | SVD on Android SDK I've found the mistake. This code is not doing the right matrix multiplication. I changed it to this, which now does the right matrix multiplication. As far as I can tell from the documentation, Core.multiply does an element-wise multiplication, not the dot product of row and column. |
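
Editor's note: the distinction the answer draws can be shown in a few lines of plain Java — `Core.multiply` behaves like the element-wise product below, while `Core.gemm` (or `Mat.matMul`) computes the true row-by-column product:

```java
// Element-wise (Hadamard) product vs. true matrix product on a 2x2 example.
public class MulVsGemm {

    static double[][] elementwise(double[][] a, double[][] b) { // like Core.multiply
        double[][] c = new double[a.length][a[0].length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                c[i][j] = a[i][j] * b[i][j];
        return c;
    }

    static double[][] matMul(double[][] a, double[][] b) { // like Core.gemm
        double[][] c = new double[a.length][b[0].length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < b[0].length; j++)
                for (int l = 0; l < b.length; l++)
                    c[i][j] += a[i][l] * b[l][j];
        return c;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2}, {3, 4}};
        double[][] b = {{5, 6}, {7, 8}};
        // elementwise: {{5, 12}, {21, 32}}  vs.  matMul: {{19, 22}, {43, 50}}
        System.out.println(elementwise(a, b)[0][0] + " != " + matMul(a, b)[0][0]);
    }
}
```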

2014-03-31 04:36:53 -0500 | commented answer | Decomposition of essential matrix leads to wrong rotation and translation Could it be that my fundamental matrix is equal to the essential matrix, as Hartley and Zisserman state: „11.7.3 The calibrated case In the case of calibrated cameras normalized image coordinates may be used, and the essential matrix E computed instead of the fundamental matrix” |

2014-03-29 08:19:17 -0500 | commented answer | Decomposition of essential matrix leads to wrong rotation and translation Thanks so much! |

2014-03-29 06:52:14 -0500 | asked a question | Decomposition of essential matrix leads to wrong rotation and translation Hi, I am doing some SfM and having trouble getting R and T from the essential matrix. Here is what I am doing in source code: And here are the results of all matrices after and during the calculation. And for completeness, here are the images I am using: left: https://drive.google.com/file/d/0Bx9OKnxaua8kXzRFNFRtMlRHSzg/edit?usp=sharing right: https://drive.google.com/file/d/0Bx9OKnxaua8kd3hyMjN1Zll6ZkE/edit?usp=sharing Can someone point out where something is going wrong? |
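
Editor's note: the standard recipe (Hartley & Zisserman, result 9.19) gives four candidate poses from E = U diag(s) Vt, with W = [[0,-1,0],[1,0,0],[0,0,1]]: R1 = U·W·Vt, R2 = U·Wᵀ·Vt, and t = ±u3 (third column of U). In the Java bindings, U and Vt would come from `Core.SVDecomp(E, w, u, vt)`. A plain-Java sketch of forming the candidates (U and Vt here are illustrative identity matrices):

```java
// Build the four (R, t) candidates from the SVD of the essential matrix.
// If det(R) < 0 the candidate is a reflection and must be negated.
public class EssentialDecomposition {

    static final double[][] W = {{0, -1, 0}, {1, 0, 0}, {0, 0, 1}};

    static double[][] transpose(double[][] a) {
        double[][] t = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++) t[j][i] = a[i][j];
        return t;
    }

    static double[][] matMul(double[][] a, double[][] b) {
        double[][] c = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int l = 0; l < 3; l++) c[i][j] += a[i][l] * b[l][j];
        return c;
    }

    static double det3(double[][] m) {
        return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
             - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
             + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    }

    /** The two rotation candidates R1 = U*W*Vt and R2 = U*W^T*Vt. */
    static double[][][] rotations(double[][] u, double[][] vt) {
        return new double[][][]{matMul(matMul(u, W), vt),
                                matMul(matMul(u, transpose(W)), vt)};
    }

    /** t = sign * u3, the (possibly negated) third column of U. */
    static double[] translation(double[][] u, int sign) {
        return new double[]{sign * u[0][2], sign * u[1][2], sign * u[2][2]};
    }

    public static void main(String[] args) {
        double[][] id = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        for (double[][] r : rotations(id, id))
            System.out.println("det(R) = " + det3(r)); // must be +1 for a rotation
    }
}
```

The correct candidate among the four is then picked with a cheirality (positive-depth) test on the triangulated points.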

2014-03-29 05:51:25 -0500 | received badge | ● Scholar (source) |

2014-03-29 05:51:18 -0500 | received badge | ● Supporter (source) |

2014-03-28 15:55:57 -0500 | asked a question | Inaccurate feature matching Hi, I am currently developing an Android app using OpenCV4Android. The aim of this app is structure from motion at a somewhat larger scale. The concept of the app and the mathematical background are done. The problem is the matching of the feature points. For a better understanding I will provide a little more background. I have two images of the same scene. Between the images the camera has been translated and maybe there is some rotation. For the structure from motion I need the fundamental matrix, and from it the essential matrix for later usage. As OpenCV4Android does not provide SIFT or SURF feature matching, I am using ORB and the BruteForce-Hamming matcher. The feature detector is finding exactly 500 feature points per image (8 MP), but the matcher is only getting between 0 and 3 matches... I need far more matches to get a good fundamental matrix. I am doing the whole process on undistorted images, using the camera calibration done before. Here is the code I am using to get the matches and the feature points: // Create a feature detector which uses ORB features FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB); (more) |
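
Editor's note: a common fix for near-zero match counts is Lowe's ratio test — request the two nearest neighbours per descriptor (`matcher.knnMatch(desc1, desc2, matches, 2)` in OpenCV Java) and keep a match only when the best distance is clearly smaller than the second best; raising the ORB feature count well above the 500 default also helps on 8 MP images. A plain-Java sketch of the filter itself (distances illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Lowe's ratio test over precomputed k=2 nearest-neighbour distances.
public class RatioTest {

    /** knnDistances[i] = {bestDistance, secondBestDistance} for query descriptor i. */
    static List<Integer> goodMatches(double[][] knnDistances, double ratio) {
        List<Integer> keep = new ArrayList<>();
        for (int i = 0; i < knnDistances.length; i++)
            if (knnDistances[i][0] < ratio * knnDistances[i][1])
                keep.add(i); // unambiguous match: clearly better than runner-up
        return keep;
    }

    public static void main(String[] args) {
        // query 1 is ambiguous (30 vs 35) and gets dropped at ratio 0.75
        double[][] d = {{10, 60}, {30, 35}, {12, 80}};
        System.out.println(goodMatches(d, 0.75)); // keeps queries 0 and 2
    }
}
```

The surviving matches are then the input to `Calib3d.findFundamentalMat`, ideally with the RANSAC flag so remaining outliers are rejected too.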

2014-03-28 07:19:41 -0500 | commented question | SVD on Android SDK Thanks - that is exactly what I was looking for! |

2014-03-28 06:12:30 -0500 | asked a question | SVD on Android SDK Hi, I am doing a structure-from-motion approach and having some trouble getting R and T from the essential matrix. So far I am doing the following steps: - Calibrate the camera
- Take two images with the same camera
- Undistort the images
- Find feature points
- Match features
- Calculate the fundamental matrix F using
`Mat fundamental = Calib3d.findFundamentalMat(object_left, object_right);` Calculate the essential matrix E using the following block of code: `Mat E = new Mat(); Core.multiply(cameraMatrix.t(),fundamental, E); Core.multiply(E, cameraMatrix, E);`
Using E I now need to calculate R and T, the relative rotation and translation between both cameras. I've read the chapter about SVD and E in Hartley and Zisserman, but now I am struggling with the OpenCV code. A quick Google search brought me this code, which is in C++: The problem is now that I don't know how to transfer this code to Android/Java. There is no SVD object. I am doing the SVD for E using Can someone help me please? I need access to U, W and VT from the singular value decomposition. |
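
Editor's note: in the Java bindings there is no `SVD` class; the decomposition is the static method `Core.SVDecomp(E, w, u, vt)`, which fills `w` with the singular values as an N×1 vector rather than a diagonal matrix. A plain-Java sketch of rebuilding the diagonal W from those values so that E = U·W·Vt can be used afterwards (values illustrative):

```java
// Turn the Nx1 vector of singular values that Core.SVDecomp writes into its
// w argument (read per element via w.get(i, 0)[0]) into the diagonal matrix W.
public class SingularValuesToMatrix {

    static double[][] toDiagonal(double[] w) {
        double[][] d = new double[w.length][w.length];
        for (int i = 0; i < w.length; i++) d[i][i] = w[i];
        return d;
    }

    public static void main(String[] args) {
        // an ideal essential matrix has two equal singular values and one zero
        double[] w = {2.0, 2.0, 0.0};
        double[][] W = toDiagonal(w);
        System.out.println(W[0][0] + " " + W[1][1] + " " + W[2][2]);
    }
}
```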

Copyright OpenCV foundation, 2012-2018. Content on this site is licensed under a Creative Commons Attribution Share Alike 3.0 license.