2018-11-01 03:20:23 -0500 | received badge | ● Famous Question (source) |
2017-04-19 17:14:00 -0500 | received badge | ● Notable Question (source) |
2016-12-01 03:36:47 -0500 | received badge | ● Popular Question (source) |
2016-01-03 10:41:41 -0500 | received badge | ● Critic (source) |
2015-11-05 08:22:31 -0500 | commented question | Max-Clique Approximation cv::Mat summation On a related note, the solution to this problem as a vector of indices, whether an approximation or the true max-clique, very justifiably ought to be a function in this library. |
2015-11-05 08:07:16 -0500 | asked a question | Max-Clique Approximation cv::Mat summation I have a visual odometry routine where I am attempting to perform inlier detection. I have an nxn square cv::Mat consistency matrix, where cell ij is 1 if the absolute distance difference between the matches in their unique 3D frames (prior and current) is below some threshold. See "Real-Time Stereo Visual Odometry for Autonomous Ground Vehicles" by Andrew Howard, JPL (2008) for more details. Anyway, I want to find the node (match) of this matrix with the maximum degree. My matrix is upper triangular, which means that for every node i, I need the sum of row i plus the sum of column i (i.e., degree i = sum of row i + sum of column i). This can certainly be accomplished with some loops, but my question is: are there built-in OpenCV functions that I can use to write cleaner code, without rolling my own? |
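The degree computation described in the question can be sketched as follows in NumPy (in C++, cv::reduce with CV_REDUCE_SUM produces the per-row and per-column sums without explicit loops); the 4x4 matrix here is a made-up example, not data from the question:

```python
import numpy as np

# Upper-triangular consistency matrix: cell (i, j), i < j, is 1 when
# matches i and j are mutually consistent (illustrative values).
C = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=np.uint8)

# Degree of node i = sum of row i + sum of column i
# (the equivalent of two cv::reduce calls, one per axis, added together).
degree = C.sum(axis=1) + C.sum(axis=0)

# Node with maximum degree: the starting point for the max-clique heuristic.
best = int(np.argmax(degree))
```

Sorting nodes by this degree vector is also the usual first step of the greedy max-clique approximation used in the Howard (2008) paper.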
2015-10-24 10:43:46 -0500 | received badge | ● Self-Learner (source) |
2015-10-19 17:46:08 -0500 | commented answer | about houghcircles Well, how many rods are there? |
2015-10-19 17:38:49 -0500 | commented question | horrible calibration results Your FOV is 98 deg, have you tried fisheye calibration? |
2015-10-19 10:51:52 -0500 | received badge | ● Student (source) |
2015-10-18 09:05:15 -0500 | received badge | ● Enthusiast |
2015-10-17 08:24:17 -0500 | asked a question | Rigid body motion or 3D Transformation I have a set of 3D points in camera space and I wish to transform them to world space. I have R and t for my camera. I can build my own transformation matrix [R|t] and gemm this with a matrix of my 3D points converted to homogeneous coordinates, but it seems to me that this process is probably already contained in an OpenCV function, as it is a common procedure. Is there such a function? Forgive me if this has been answered elsewhere; I have not found it. |
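For reference, the transform itself is a one-liner; a minimal NumPy sketch is below (in C++, cv::transform applied with a 3x4 [R|t] matrix to a vector of Point3f performs the same operation, handling the implicit homogeneous coordinate for you). The rotation here is a hypothetical 90-degree turn about z, purely for illustration:

```python
import numpy as np

# Camera-to-world rotation (90 degrees about z, illustrative) and translation.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 2.0, 3.0])

# N x 3 points in camera space.
pts_cam = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])

# Apply x_world = R @ x_cam + t to every point at once; no explicit
# conversion to homogeneous coordinates is needed.
pts_world = pts_cam @ R.T + t
```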
2015-10-17 08:09:49 -0500 | received badge | ● Scholar (source) |
2015-10-17 08:09:42 -0500 | answered a question | FAST Response is 0 This is my mistake. I am now seeing non-zero FAST responses as expected, so I am not sure what I was looking at earlier. One can see in the code that the absolute value of the score (the sum of the absolute differences between the pixels in the contiguous arc and the center pixel) is stored in the KeyPoint pushed onto the vector of KeyPoints. My other point still stands, as I think it would be worth knowing whether the keypoint is a positive or negative keypoint. Perhaps in practice brute-force matchers break early on such a discrepancy, but it seems likely that it would save computation for detectors to return two vectors of points. |
2015-10-15 06:47:07 -0500 | commented question | FAST Response is 0 Calculating a value myself is of course possible, but it would be much more efficient to build that into the detection step. The fact that FAST can perform non-maximal suppression means that it is basing its suppression decision on a response value (yes, I still need to inspect the implementation). See this paper for a mention of what I am getting at. |
2015-10-15 06:34:17 -0500 | commented question | FAST Response is 0 To be a FAST corner, 9 contiguous pixels of the 16 on the ring must all be either less than the central pixel by some threshold or all greater than it by some threshold. The amount by which the smallest pixel difference beats the threshold could be considered the response. Those FAST corners where the central pixel is less than the 9 contiguous pixels could be considered negative corners, and where the central pixel is greater than the 9 contiguous pixels, positive corners. In the matching stage, it would save computation to only match positive corners to positive corners and negative corners to negative corners. The exact same statements can be made for blob detectors like STAR. |
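The signed response proposed in this comment can be sketched as a toy check on a 16-pixel ring (this is an illustration of the idea, not OpenCV's implementation; the function name, arc length, and threshold are all made up for the example):

```python
# Toy signed FAST-style response. A positive corner has an arc of >= 9
# contiguous ring pixels all brighter than center + threshold; a negative
# corner has such an arc all darker than center - threshold.

def signed_response(center, ring, threshold=10, arc_len=9):
    n = len(ring)
    for sign in (+1, -1):
        # Margin by which each ring pixel clears the threshold
        # in this direction (positive means it passes).
        margins = [sign * (p - center) - threshold for p in ring]
        for start in range(n):
            arc = [margins[(start + k) % n] for k in range(arc_len)]
            if min(arc) > 0:
                # Response: the weakest pixel difference in the arc,
                # carrying the sign of the corner polarity.
                return sign * (min(arc) + threshold)
    return 0  # not a corner

# A bright arc of 10 contiguous pixels: a positive corner.
resp = signed_response(100, [130] * 10 + [100] * 6)
```

A matcher could then bucket keypoints by the sign of this value and compare only within buckets, which is the computational saving the comment describes.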
2015-10-14 19:22:48 -0500 | received badge | ● Editor (source) |
2015-10-14 18:37:25 -0500 | asked a question | FAST Response is 0 I am using 2.4.10 and the FAST keypoint response is always 0. Is this fixed in later releases? Additionally, if one were attempting to economize resources and improve matching, it would be prudent to independently match the positive and negative points, be they corners or blobs, such that running 2 detectors, say FAST and CENSURE, would require 4 sets of points to be matched. That said, is there a way to query the keypoint to determine whether it has a positive or negative response? It seems to me that the response is always positive, which is unfortunate. If I am right, that is a serious oversight of the library. |
2015-10-14 10:11:55 -0500 | received badge | ● Teacher (source) |
2015-10-14 09:10:27 -0500 | received badge | ● Necromancer (source) |
2015-10-14 09:10:27 -0500 | received badge | ● Self-Learner (source) |
2015-10-14 09:09:14 -0500 | answered a question | cv::viz Point Cloud Never determined the real issue, but the following works as expected: As you can see, I'm removing those points that I know are likely to be invalid (negative depth, or depth beyond 2.5 meters). |
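The original code from this answer is not preserved in the log, but the depth filter it describes can be sketched in NumPy as follows (variable names and point values are illustrative, not the original code):

```python
import numpy as np

# N x 3 reprojected points; the z column is depth in meters.
points = np.array([[0.1, 0.2,  1.0],
                   [0.3, 0.1, -0.5],   # negative depth: invalid, drop
                   [0.0, 0.0,  4.0],   # beyond 2.5 m: unreliable, drop
                   [0.2, 0.4,  2.0]])

# Keep only points with positive depth no greater than 2.5 meters.
mask = (points[:, 2] > 0) & (points[:, 2] <= 2.5)
filtered = points[mask]
```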
2015-07-17 18:07:19 -0500 | commented question | Awful disparity map using StereoBM Try understanding the parameters you are adjusting. The values you appear to be using are not good choices. Understanding the simplest stereo block matching algorithm will help you, I suggest you consult Google Scholar for papers. |
2015-07-05 00:26:09 -0500 | asked a question | cv::viz Point Cloud I have a stereo webcam from which I compute disparities and project to 3D. I would like to have a vtk viewing window continuously re-render these points. When I try to do so, however, I get inconsistent behavior. For one thing, the points blink in and out: they should always be visible, but they are absent in about half the frames. Also, there appear to be issues with the viewer such that when I approach the points too closely they disappear, something like a near culling distance, except that it applies to the entire cloud, not just the points that would have been closest. My viewer thread looks like this: Any ideas on what I can do to achieve a stable cloud viewer? |
2015-07-05 00:26:09 -0500 | asked a question | Account Recovery Google stopped supporting OpenID. I am unable to recover my account via email password reset. I am using the correct email and it is not in spam. Please fix the password reset via email. Thanks. |
2015-07-04 16:44:33 -0500 | received badge | ● Supporter (source) |