2016-01-16 18:33:01 -0600 | asked a question | Homography Matrix Off in Image Stitching I am working on an image stitching project in which I find homography matrices from point pairs tracked between frames of a video with the Lucas-Kanade algorithm. When it came time to stitch the frames of a video together, I ran a test that simply draws the perspective-warped version of each frame onto a black canvas, to see how the homography matrix had warped it. Instead of moving over bit by bit between frames, the frames were translated further and further, far more than a slight nudge between frames:
[----------------------------Empty Space----------------------------]
[Frame 0-------------------------------------------------------------]
[--------Frame 1-----------------------------------------------------]
[--------------------------Frame 2-----------------------------------]
[-----------------------------------------------------Frame 3--------]
Subsequent frames fell entirely out of visual range. I am not quite sure why this is happening. I implemented a back-projection error check to make sure only points with accurate optical-flow estimates were passed on, and I also set the RANSAC reprojection threshold for findHomography to 10, then 1, then 0.5, all to no avail. Since I am stitching multiple images, I multiply the homography matrices between frames, and this seems to be compounding the error. Why is this happening, and how can I fix my homography matrices? Here is my code (ignore the commented-out tests; also, some of the indentation may have been mangled while copying over to the forum): (more) |
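The growing translation described above is what compounding error under chained homographies looks like. A minimal numpy sketch of the chaining step, using hypothetical matrices (not the poster's actual data) where each per-frame estimate carries a small 0.5 px translation error:

```python
import numpy as np

# Hypothetical per-frame homographies: H_01 maps frame 0 -> 1, H_12 maps 1 -> 2.
# The true inter-frame shift is 10 px; each estimate is off by 0.5 px.
H_01 = np.array([[1.0, 0.0, 10.5],
                 [0.0, 1.0,  0.0],
                 [0.0, 0.0,  1.0]])
H_12 = np.array([[1.0, 0.0, 10.5],
                 [0.0, 1.0,  0.0],
                 [0.0, 0.0,  1.0]])

# The homography placing frame 2 on frame 0's canvas is the matrix product.
H_02 = H_01 @ H_12
H_02 /= H_02[2, 2]      # keep the matrix normalised so H[2, 2] == 1

print(H_02[0, 2])       # 21.0 -- the two 0.5 px errors have added up to 1 px
```

With dozens of frames the accumulated translation error grows the same way, which matches the "further and further" drift in the canvas test; the usual mitigations are tightening the per-pair estimate or adding a global adjustment step rather than pure chaining.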
2016-01-03 22:15:54 -0600 | asked a question | Lucas Kanade Optical Flow Tracking Problem I have been trying to estimate homographies between frames of a video using Lucas-Kanade optical flow tracking (yes, I have already taken a look at the OpenCV sample). I wrote some code and tested whether I could start out by just tracking points in some videos I took. In every video, the points start out fine and are tracked well for a few frames. Then, all of a sudden, the following happens: This occurs about 10 frames in, after the points seem to be tracked just fine. Similar results occur in all of the other videos I have tested. Why is this happening and how can I fix it? Update #1 Here is a code snippet that may help in solving the issue (ignore the formatting errors that occurred while posting): def stitchRow(videoName):
    color = np.random.randint(0, 255, (100, 3)) |
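Tracks that suddenly jump after a few good frames are commonly filtered out with a forward-backward (back-projection) consistency check, as mentioned in the later question. A sketch of just the filtering logic, with synthetic arrays standing in for real calcOpticalFlowPyrLK output (the names p0, p1, p0r are illustrative):

```python
import numpy as np

# Stand-ins for calcOpticalFlowPyrLK results, each of shape (N, 1, 2):
# p0  - original points in frame t
# p1  - points tracked forward to frame t+1
# p0r - p1 tracked *back* to frame t
p0  = np.float32([[[10, 10]], [[50, 20]], [[30, 40]]])
p1  = np.float32([[[12, 10]], [[52, 20]], [[90, 75]]])   # third track has jumped
p0r = np.float32([[[10, 10]], [[50, 20]], [[70, 60]]])   # back-tracking disagrees

# Keep only points whose back-projected position lands within 1 px of the start.
d = np.abs(p0 - p0r).reshape(-1, 2).max(axis=-1)
good = d < 1
p0_kept, p1_kept = p0[good], p1[good]

print(good)      # [ True  True False] -- the jumping track is dropped
```

In a real loop, running calcOpticalFlowPyrLK twice (forward, then backward on its result) and applying this mask each frame keeps divergent tracks from surviving more than one step.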
2015-12-30 14:23:41 -0600 | commented question | Passing ORB Features to calcOpticalFlowPyrLK My conversion goes like this: prevPnts are the keypoints and prevCoords is a Python list of the coordinates of each keypoint. Should I not be using a Python list to store the coordinates? After all, the error message did say: What, then, should I use? |
2015-12-30 12:59:13 -0600 | received badge | ● Editor (source) |
2015-12-30 12:58:24 -0600 | asked a question | Passing ORB Features to calcOpticalFlowPyrLK I am doing a project where I need to track keypoints found with ORB through a video. I understand that, generally, the Shi-Tomasi points returned by goodFeaturesToTrack are used, but I am doing this for an image stitching project and thus need the descriptor information that goes along with each ORB keypoint. I have seen a similar article on the subject, but no solution seems to have been reached. My current method is to build an array of tuples of the coordinates taken from each feature I detect. I am worried that this is not the correct format, however, because I keep getting this error at the calcOpticalFlowPyrLK line: I extracted some Shi-Tomasi points with goodFeaturesToTrack and printed them to the console. For some reason only one came up, and it was formatted like so: Here is a snippet of what the ORB feature array looks like: So my array is composed of 2-valued tuples inside an array. What exactly is the format of the Shi-Tomasi points (the extra set of brackets seems redundant), and how would I convert my current array of tuples to that form if I need to? |
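The "redundant" extra set of brackets in the goodFeaturesToTrack output is the (N, 1, 2) shape that calcOpticalFlowPyrLK expects for its input points: a float32 array with one point per row. Converting a Python list of (x, y) tuples to that layout takes one reshape (cv2.KeyPoint_convert can also extract the coordinates from KeyPoint objects first); a minimal sketch with made-up coordinates:

```python
import numpy as np

# A Python list of (x, y) coordinate tuples, e.g. taken from ORB KeyPoint.pt values.
prevCoords = [(12.0, 34.0), (56.0, 78.0), (9.0, 10.0)]

# calcOpticalFlowPyrLK wants float32 with shape (N, 1, 2),
# the same layout goodFeaturesToTrack returns.
prevPts = np.float32(prevCoords).reshape(-1, 1, 2)

print(prevPts.shape, prevPts.dtype)   # (3, 1, 2) float32
```

A plain Python list of tuples fails because the binding requires a contiguous float32 numpy array, not a generic sequence.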
2015-12-02 20:48:57 -0600 | commented question | Passing ORB Descriptors to the Stitcher Class Would it be possible to modify the source code to use my features and then recompile? I am using Python, so how would I do this? (I can program in C++, however.) |
2015-12-01 22:03:14 -0600 | asked a question | Passing ORB Descriptors to the Stitcher Class I am working on an image stitching project that requires me to use features I calculate myself. The reason is that I am stitching together images of crop fields (taken via drone), each of which looks so similar that finding the same descriptors in adjacent images is nearly impossible. My current strategy is to calculate descriptors for the first image and then track them through subsequent images using Kanade-Lucas-Tomasi optical flow estimation. I am using ORB descriptors and would like to be able to pass the descriptors obtained via optical flow for subsequent images to the stitcher. Is this possible? Is this method the best for what I am trying to accomplish? Regards, Jacob |
2015-08-25 01:25:16 -0600 | received badge | ● Enthusiast |
2015-08-21 23:18:20 -0600 | commented question | Stitching Images with SIFT Features I have looked through this example before. I am still unsure how I would take my existing functions and put them in the format of a features finder. How do I figure out how my class should be formatted? |
2015-08-20 22:53:52 -0600 | asked a question | Stitching Images with SIFT Features I am trying to create an image stitcher that uses SIFT features, but it appears that I cannot use SIFT with the OpenCV Stitcher class. I have already written a method for finding SIFT features and descriptors, but I am not sure how to pass these values to the Stitcher class. Is there a way to do so, and if there is, what is it? |