2015-12-13 19:22:20 -0600 | received badge | ● Student |
2015-11-24 13:32:04 -0600 | received badge | ● Critic |
2015-11-10 18:06:52 -0600 | asked a question | Why does Viz3d::spin() complete instantly without user input? I have written the most basic of viz applications. In essence, here is the main(): When I execute this program, the spin() method completes instantaneously. Well, not quite: I see a window flash in front of me before it completes. It is my understanding that the event loop should keep spinning until it detects relevant user input. I tried restructuring my code, replacing the spin() call with spinOnce(). As expected, spinOnce() exhibits the same behavior, and completes without displaying any window. Is this a problem with my installation? |
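A minimal main() of the shape the question describes, assuming OpenCV's viz module (which requires a VTK-enabled build), might look like this; the window name and widget are illustrative placeholders:

```cpp
#include <opencv2/viz.hpp>

int main() {
    // Create a 3D visualization window.
    cv::viz::Viz3d window("demo");

    // Give the window something to render.
    window.showWidget("axes", cv::viz::WCoordinateSystem());

    // spin() is expected to block, running the event loop
    // until the window is closed by the user.
    window.spin();
    return 0;
}
```

If spin() returns immediately and the window only flashes, that points at the event loop terminating early rather than at this code.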
2015-10-28 15:23:35 -0600 | received badge | ● Enthusiast |
2015-10-27 11:19:02 -0600 | commented question | recoverPose translation values Did you ever identify the problem here? Also, what program are you using for visualization? |
2015-10-27 11:18:51 -0600 | received badge | ● Supporter |
2015-10-27 11:10:06 -0600 | asked a question | Why does recoverPose return a non-zero position when identical point vectors are supplied? By accident I tried estimating the relative position of an image to itself (don't ask). I would expect a result of zero translation and zero rotation. Surprisingly, I get a non-zero translation; in fact, a rather significant one: 0.0825 -0.0825. In essence my code is as follows: In the above code, t != 0. My question is: is a non-zero result from recoverPose valid when points1 and points2 are identical? If so, why? |
2015-07-30 23:46:52 -0600 | asked a question | How do I estimate camera pose from two sets of features in OpenCV? I have a sequence of images and would like to estimate the camera pose for each frame. I have computed features in each frame and tracked them through the sequence. I estimate the pose using the following OpenCV routines: Mat essentialMatrix = findEssentialMat(pointsA, pointsB, f, pp, RANSAC, 0.999, 1.0, mask); recoverPose(essentialMatrix, pointsA, pointsB, R, T, focalLength, principalPoint); where pointsA and pointsB contain the 2D coordinates of features present in both frames, with pointsA taken from the frame before pointsB. The problem I am encountering is that the R and T estimates are very noisy, to the point where I believe something is wrong with my pose estimation. My question is: how do I estimate the camera pose from two sets of features? Note: I am familiar with this answered question. However, I believe OpenCV 3 now includes methods that address this problem more elegantly. |