
procoding's profile - activity

2016-01-16 18:33:01 -0600 asked a question Homography Matrix Off in Image Stitching

I am working on an image stitching project in which I find homography matrices from point pairs tracked between video frames with the Lucas-Kanade algorithm. After writing the program, and before actually stitching the frames together, I ran a test that simply draws the perspective-warped version of each frame onto a black canvas so I can see how the homography matrix has warped it. Instead of each frame shifting over a little from the last, the frames were translated further and further, far beyond the slight nudge I expected between frames:

[----------------------------------------------------------------------------Empty Space---------------------------------------]

[Frame0---------------------------------------------------------------------------------------------------------------------------]

[------------Frame1-------------------------------------------------------------------------------------------------------------- ]

[-------------------------------------------Frame 2-------------------------------------------------------------------------------]

[---------------------------------------------------------------------------------------------------------------Frame 3-----------]

Subsequent frames land entirely out of visual range. I am not quite sure why this is happening. I implemented a back-projection error check to make sure only points with accurate optical-flow calculations were passed on, and I set the back-projection threshold for findHomography to 10, then 1, then 0.5, all to no avail. Since I am stitching multiple images, I am multiplying my homography matrices between frames, which seems to compound the error. Why is this happening, and how can I fix my homography matrices? Here is my code (ignore the commented-out tests; also, some of the indentation may have been mangled while copying over to the forum):

import numpy as np
import sys
import cv2
import math

lastFeatures = None
currentFeatures = None
opticFlow = None
panRow = None
Rows = None
finalPanorama = None

def loadRow(dirPath, fType, numImages,  column):
    imageRow = []
    for i in range(0, numImages):
        imageRow.append(cv2.imread("%s/%i_%i.%s" % (dirPath, column, i, fType), cv2.IMREAD_COLOR))
    return imageRow

def findNthFeatures(prevImg, prevPnts, nxtImg):

    back_threshold = 0.5

    nxtDescriptors = []
    prevGrey = None
    nxtGrey = None
    nxtPnts = prevPnts[:]

    prevGrey = cv2.cvtColor(prevImg, cv2.COLOR_BGR2GRAY)
    nxtGrey = cv2.cvtColor(nxtImg, cv2.COLOR_BGR2GRAY)

    lucasKanadeParams = dict(winSize = (19,19), maxLevel = 100, criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

    nxtPnts, status, err = cv2.calcOpticalFlowPyrLK(prevGrey, nxtGrey, prevPnts, None, **lucasKanadeParams)
    backProjections, status, err = cv2.calcOpticalFlowPyrLK(nxtGrey, prevGrey, nxtPnts, None, **lucasKanadeParams)
    d = abs(prevPnts - backProjections).reshape(-1, 2).max(-1)
    status = d < back_threshold
    goodNew = nxtPnts[status].copy()
    goodLast = prevPnts[status].copy()

    return goodLast, goodNew

def getHomographies(videoName):
    color = np.random.randint(0,255,(100,3))    
    lastFrame = None
    currentFrame = None
    lastKeypoints = None
    currentKeypoints = None
    firstImage = True
    featureRefreshRate = 5

    feature_params = dict( maxCorners = 100,
                        qualityLevel = 0.1,
                        minDistance = 8,
                        blockSize = 15)

    frameCount = 0

    Homographies = []

    cv2.namedWindow('display', cv2.WINDOW_NORMAL) 
    cap = cv2.VideoCapture(videoName)
    flags, frame = cap.read()

    while flags:
        if firstImage:
            firstImage = False
            lastFrame = frame[:,:].copy()
            lastGray = cv2.cvtColor(lastFrame, cv2.COLOR_BGR2GRAY)
            lastKeypoints = cv2.goodFeaturesToTrack(lastGray, mask = None, **feature_params)
            flags, frame = cap.read()
            frameCount += 1
        else:
            mask = np.zeros_like(lastFrame)
            currentFrame = frame[:,:].copy()
            frameCount += 1

            lastKeypoints, currentKeypoints = findNthFeatures(lastFrame, lastKeypoints, currentFrame)
            # for i, (new, old) in enumerate(zip(currentKeypoints, lastKeypoints)):
            #     a, b = new.ravel()
            #     c, d = old.ravel()
            #     mask = cv2.line(mask, (a,b), (c,d), color[i].tolist(), 2)
            #     frame = cv2.circle(frame, (a,b), 5, color[i].tolist(), -1)
            # img = cv2 ...
(more)
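
For reference, here is a minimal sketch (not the poster's code; the names are illustrative) of one standard way to chain per-frame homographies, renormalizing after each product so the scale of the matrix does not drift:

    import numpy as np

    def chain_homographies(pairwise):
        # pairwise[i] is assumed to map points in frame i+1 into frame i's
        # coordinates, i.e. H, _ = cv2.findHomography(ptsInFrameIPlus1,
        # ptsInFrameI, cv2.RANSAC, 3.0) with float32 (N, 1, 2) point arrays.
        cumulative = [np.eye(3)]
        for H in pairwise:
            total = cumulative[-1].dot(H)
            total /= total[2, 2]   # a homography is only defined up to scale
            cumulative.append(total)
        return cumulative          # cumulative[i] maps frame i into frame 0

If each pairwise matrix actually maps in the opposite direction (frame i into frame i+1), the product order has to be reversed; mixing the two directions up is a common source of exactly this kind of runaway translation.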
2016-01-03 22:15:54 -0600 asked a question Lucas Kanade Optical Flow Tracking Problem

I have been trying to do some homography estimation between different frames in a video using Lucas Kanade Optical Flow Tracking (yes, I have already taken a look at the opencv sample). I have written up some code and tested it to see if I could start out by just tracking points in some videos I took. In every video, the points start out fine, and are tracked well for a few frames. Then, all of a sudden, the following happens:

[image: the tracked points suddenly scatter across the frame]

This happens about 10 frames in, after the points seem to be tracked just fine. Similar results occur in all of the other videos I have tested. Why is this happening, and how can I fix it?
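
A common remedy for tracks that suddenly jump is a forward-backward consistency check: track the points forward, track the results backward again, and keep only the points that land close to where they started. A minimal sketch, with the threshold and parameter names as assumptions:

    import cv2

    def track_with_fb_check(prevGray, nxtGray, prevPts, lkParams, fbThreshold = 0.5):
        # Forward pass: previous frame -> next frame.
        nxtPts, st, err = cv2.calcOpticalFlowPyrLK(prevGray, nxtGray, prevPts, None, **lkParams)
        # Backward pass: track the new points back to the previous frame.
        backPts, st, err = cv2.calcOpticalFlowPyrLK(nxtGray, prevGray, nxtPts, None, **lkParams)
        # Keep only points whose round trip ends near the starting position.
        fbError = abs(prevPts - backPts).reshape(-1, 2).max(-1)
        good = fbError < fbThreshold
        return prevPts[good], nxtPts[good]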

Update #1

Here is a code snippet that may help in solving the issue (ignore the formatting errors that occurred while posting):

def findNthFeatures(prevImg, prevPnts, nxtImg):

    nxtDescriptors = []
    prevGrey = None
    nxtGrey = None
    nxtPnts = prevPnts[:]

    prevGrey = cv2.cvtColor(prevImg, cv2.COLOR_BGR2GRAY)
    nxtGrey = cv2.cvtColor(nxtImg, cv2.COLOR_BGR2GRAY)

    lucasKanadeParams = dict(winSize = (19,19), maxLevel = 10, criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

    nxtPnts, status, err = cv2.calcOpticalFlowPyrLK(prevGrey, nxtGrey, prevPnts, None, **lucasKanadeParams)

    goodNew = nxtPnts[status == 1]
    return goodNew

def stitchRow(videoName):
    color = np.random.randint(0, 255, (100, 3))
    lastFrame = None
    currentFrame = None
    lastKeypoints = None
    currentKeypoints = None
    lastDescriptors = None
    currentDescriptors = None
    firstImage = True

    feature_params = dict( maxCorners = 100,
                           qualityLevel = 0.1,
                           minDistance = 8,
                           blockSize = 15)

    frameCount = 0

    Homographies = []

    cv2.namedWindow('display', cv2.WINDOW_NORMAL)
    cap = cv2.VideoCapture(videoName)
    flags, frame = cap.read()

    while flags:
        if firstImage:
            firstImage = False
            lastFrame = frame[:,:].copy()
            lastGray = cv2.cvtColor(lastFrame, cv2.COLOR_BGR2GRAY)
            lastKeypoints = cv2.goodFeaturesToTrack(lastGray, mask = None, **feature_params)
            flags, frame = cap.read()
            frameCount += 1
        else:
            mask = np.zeros_like(lastFrame)
            currentFrame = frame[:,:].copy()
            frameCount += 1
            # if (frameCount % 3 == 0):

            cv2.imshow('display', currentFrame)
            # note: findNthFeatures filters points, so lastKeypoints and
            # currentKeypoints can end up with different lengths
            currentKeypoints = findNthFeatures(lastFrame, lastKeypoints, currentFrame)
            # for i, (new, old) in enumerate(zip(currentKeypoints, lastKeypoints)):
            #     a, b = new.ravel()
            #     c, d = old.ravel()
            #     mask = cv2.line(mask, (a,b), (c,d), color[i].tolist(), 2)
            #     frame = cv2.circle(frame, (a,b), 5, color[i].tolist(), -1)
            # img = cv2.add(frame, mask)
            # cv2.imshow('display', img)    # needs the drawing code above
            cv2.waitKey(0)
            for i in range(0, len(lastKeypoints)):
                lastKeypoints[i] = tuple(lastKeypoints[i])
                print lastKeypoints[i]
                cv2.waitKey(0)
            # findHomography returns the matrix and an inlier mask
            homographyMatrix, inlierMask = cv2.findHomography(lastKeypoints, currentKeypoints)
            Homographies.append(homographyMatrix)
            lastFrame = currentFrame
            lastDescriptors = currentDescriptors
            lastKeypoints = currentKeypoints

            flags, frame = cap.read()
2015-12-30 14:23:41 -0600 commented question Passing ORB Features to calcOpticalFlowPyrLK

My conversion goes like this:

for point in prevPnts:
    prevCoords.append(point.pt)

prevPnts are the keypoints, and prevCoords is a Python list of the coordinates of each keypoint. Should I not be using a Python list to store the coordinates? After all, the error message did say:

TypeError: prevPts is not a numpy array

What, then, should I use?
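
cv2.calcOpticalFlowPyrLK expects prevPts to be a float32 NumPy array of shape (N, 1, 2) rather than a Python list. A minimal sketch of the conversion, reusing the names from the comment above:

    import numpy as np

    # prevPnts is the list of cv2.KeyPoint objects from the detector.
    prevCoords = np.float32([kp.pt for kp in prevPnts]).reshape(-1, 1, 2)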

2015-12-30 12:59:13 -0600 received badge  Editor (source)
2015-12-30 12:58:24 -0600 asked a question Passing ORB Features to calcOpticalFlowPyrLK

I am doing a project where I need to track keypoints found using ORB in a video. I understand that Shi-Tomasi points returned by "goodFeaturesToTrack" are generally used, but this is for an image stitching project, so I need the descriptor information that goes along with each ORB keypoint. I have seen a similar post on the subject, but no solution seems to have been reached. My current method is to build an array of tuples of the coordinates of each feature I detect. I am worried this is not the correct format, however, because I keep getting this error at the calcOpticalFlowPyrLK line:

TypeError: prevPts is not a numpy array, neither a scalar

I took some Shi-Tomasi points out of the image using goodFeaturesToTrack and printed them out to the console. For some reason, only one came up, and it was formatted like so:

[[[ 2976.   332.]]]

Here is a snippet of what the ORB feature array looks like:

[(2228.739013671875, 1203.9490966796875), (2898.794189453125, 1092.8704833984375), (3060.037353515625, 852.7973022460938), (3217.697265625, 150.49363708496094), (372.6509094238281, 157.66000366210938), (3120.951416015625, 1519.2691650390625)]

So my array is composed of 2-valued tuples inside of an array. What exactly is the format of the Shi-Tomasi points (the extra set of brackets seems redundant), and how would I convert my current array of tuples to that form if I need to?
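
For what it is worth, goodFeaturesToTrack returns a float32 array of shape (N, 1, 2), one row per point with a singleton middle axis, which is why each point prints with an extra pair of brackets. A short sketch of converting a list of (x, y) tuples into that layout (values truncated for brevity):

    import numpy as np

    orbCoords = [(2228.739, 1203.949), (2898.794, 1092.870)]
    pts = np.array(orbCoords, dtype = np.float32).reshape(-1, 1, 2)
    # pts[0] is now [[ 2228.739  1203.949]], matching the goodFeaturesToTrack layout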

2015-12-02 20:48:57 -0600 commented question Passing ORB Descriptors to the Stitcher Class

Would it be possible to modify the source code to use my features and then recompile? I am using Python, so how would I do this? (I can program in C++, however.)

2015-12-01 22:03:14 -0600 asked a question Passing ORB Descriptors to the Stitcher Class

I am working on an image stitching project that requires me to use features I calculate myself. The reason is that I am stitching together images of crop fields (taken via drone), each of which looks so similar that finding the same descriptors in adjacent images is nearly impossible. My current strategy is to calculate descriptors for the first image and then track them through subsequent images using Kanade-Lucas-Tomasi optical flow estimation. I am using ORB descriptors and would like to be able to pass the descriptors tracked via optical flow in subsequent images to the stitcher. Is this possible? Is this method the best for what I am trying to accomplish? Regards, Jacob
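
The Stitcher bindings available from Python do not appear to expose a way to inject precomputed features, but the detect-then-track half of the strategy is straightforward. A minimal sketch, assuming OpenCV 3's ORB_create and two consecutive frames img0 and img1 loaded elsewhere:

    import numpy as np
    import cv2

    gray0 = cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY)
    gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)

    # Detect ORB keypoints and descriptors on the first frame only.
    orb = cv2.ORB_create(nfeatures = 500)
    kps, descs = orb.detectAndCompute(gray0, None)

    # Convert the KeyPoints to the float32 (N, 1, 2) layout LK expects.
    pts0 = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)
    pts1, status, err = cv2.calcOpticalFlowPyrLK(gray0, gray1, pts0, None,
                                                 winSize = (19, 19), maxLevel = 3)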

2015-08-25 01:25:16 -0600 received badge  Enthusiast
2015-08-21 23:18:20 -0600 commented question Stitching Images with SIFT Features

I have looked through this example before. I am still unsure how I would take my existing functions and put them in the format of a features finder. How do I figure out how my class should be formatted?

2015-08-20 22:53:52 -0600 asked a question Stitching Images with SIFT Features

I am trying to create an image stitcher that uses SIFT features, but it appears that I cannot use SIFT with the OpenCV Stitcher class. I have already created a method of finding SIFT features and descriptors, but I am not sure how to pass these values to the Stitcher class. Is there a way to do so, and if there is, what is it?
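
The Stitcher class, at least as exposed to Python, does not take externally computed features, so a common workaround is to run the pipeline by hand: match the SIFT descriptors, estimate a homography with RANSAC, and warp. A minimal sketch (not the Stitcher API), assuming kp1/des1 and kp2/des2 come from the existing SIFT step:

    import numpy as np
    import cv2

    # Match SIFT descriptors with Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k = 2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    # Estimate the homography from the matched point coordinates.
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)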