
Frame Difference based tracker stuck with first frame

asked 2018-01-07 05:49:30 -0600 by Atosu

updated 2018-01-08 04:22:02 -0600

I have been working for the last few weeks on detecting and tracking motion in a video. My goal is simple: detect a dog once it moves and track it with a rectangle drawn around it, ignoring any other motion. After many unsuccessful trials with object- and motion-tracking algorithms in OpenCV, I came across something I was able to modify to get closer to my goal. The only issue is that the code seems to keep the information from the first frame for the whole video, which causes it to detect and draw a rectangle in an empty area, ignoring the actual motion.

Here's the code I'm using:

import imutils
import time
import cv2

previousFrame = None
count = 0
test = 0
temp_frame = None
rect = None

def searchForMovement(cnts, frame, min_area):

    global rect
    text = "Undetected"
    flag = 0

    for c in cnts:
        # if the contour is too small, ignore it
        if cv2.contourArea(c) < min_area:
            continue
        #print(c)

        #Use the flag to prevent the detection of other motions in the video
        if flag == 0:
            (x, y, w, h) = cv2.boundingRect(c)
            x = x - 100
            y = y - 100
            w = 400
            h = 400
            #print("x y w h")
            #print(x,y,w,h) 
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
            text = "Detected"
            rect = c
            flag = 1

    # Fall back to the last known position if nothing moved in this frame
    # (guard against rect being None before the first detection)
    if text == "Undetected" and rect is not None:
        (x, y, w, h) = cv2.boundingRect(rect)
        x = x - 100
        y = y - 100
        w = 400
        h = 400
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
        text = "Detected"



    if text == "Undetected":
        print (text, temp_frame)


    return frame, text

def trackMotion(ret, frame, gaussian_kernel, sensitivity_value, min_area):

    if ret:

        # Convert to grayscale and blur it for better frame difference
        # frame = cv2.bilateralFilter(frame, 7, 150, 150)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (gaussian_kernel, gaussian_kernel), 0)


        global previousFrame
        global count
        global test
        global temp_frame

        if previousFrame is None:
            previousFrame = gray
            return frame, "Uninitialized", frame, frame   

        frameDiff = cv2.absdiff(previousFrame, gray)
        thresh = cv2.threshold(frameDiff, sensitivity_value, 255, cv2.THRESH_BINARY)[1]

        thresh = cv2.dilate(thresh, None, iterations=2)
        _, cnts, _ = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

        frame, text = searchForMovement(cnts, frame, min_area)



        if text == "Detected":
            temp_frame = frame
        elif temp_frame is not None:
            # Reuse the last frame that had a detection drawn on it
            frame = temp_frame


        if text == "Undetected":
            print (text, temp_frame)


        if count % 10 == 0:
            previousFrame = gray

        count = count + 1

    return frame, text, thresh, frameDiff       


if __name__ == '__main__':

    video = "Track.avi"
    video0= "Track.mp4"
    video1= "Ntest1.avi"
    video2= "Ntest2.avi"

    camera = cv2.VideoCapture(video2)
    time.sleep(0.25)
    min_area = 5000 #int(sys.argv[1])



    while camera.isOpened():

        gaussian_kernel = 27
        sensitivity_value = 5
        min_area = 2500

        ret, frame = camera.read()

        #Check if the next camera read is not null
        if ret:
            frame, text, thresh, frameDiff = trackMotion(ret, frame, gaussian_kernel, sensitivity_value, min_area)


        else:
            print("Video Finished")
            close = False
            while not close:
                key1 = cv2.waitKey(3) & 0xFF
                if key1 == 27 or key1 == ord('q'):
                    close = True

            break

        cv2.namedWindow('Thresh',cv2.WINDOW_NORMAL)
        cv2.namedWindow('Frame Difference ...

1 answer

answered 2018-01-07 19:21:38 -0600 by Tetragramm

You see the line

#previousFrame = gray

That # means it's commented out. Try un-commenting it.


Comments


Thanks a lot for the answer. I tried your recommendation, and now the ghosting effect is completely gone. However, it also affected the tracking. Instead of tracking the whole body of the dog, it tracks small movements. Like, it detects the head movement, then the tail movement, then another small movement. And once the dog stops moving the box disappears. I am thinking of restricting the boundary box to a specific size instead of a continuously changing one (still trying to figure out how to do it) and also thinking about making the previousFrame = gray happen every few iterations instead of every time. Do you think that would solve this new issue ?

Atosu (2018-01-07 21:48:05 -0600)
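The fixed-size box idea from this comment could be sketched like so (the helper name and the 400 px size are illustrative): center a constant box on the detected contour's bounding rect, then clamp it so it never hangs off the edge of the frame.

```python
def fixed_box(x, y, w, h, frame_w, frame_h, box=400):
    # Center a fixed-size box on the detected bounding rect (x, y, w, h),
    # then clamp it so the whole box stays inside the frame
    cx, cy = x + w // 2, y + h // 2
    bx = min(max(cx - box // 2, 0), max(frame_w - box, 0))
    by = min(max(cy - box // 2, 0), max(frame_h - box, 0))
    return bx, by, box, box

# A small motion near the top-left corner still yields an in-frame box
print(fixed_box(10, 10, 50, 50, frame_w=640, frame_h=480))
```

This avoids the hard-coded `x - 100` / `y - 100` offsets in the question's code, which can push the box off-screen.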

Is the scene stationary? If so, you can use the background subtraction modules.

You can also try the tracking module, KCF is a good fit.

Lastly, you can subtract older and older frames like you suggest, but it will also see things where the dog used to be, and the effect will be more pronounced the longer your difference. HERE is a paper on one way to improve differencing that's not so bad.

Tetragramm (2018-01-07 22:00:23 -0600)

Yes, the scene is stationary. The camera is fixed and only the dog is moving. There are cases where the dog pushes an object or moves a floor pad, but I'm restricting the motion detection to only one movement. I tried KCF and other object-tracking models before, but none of them gave good results; it seems they were affected by noise/ghosting, or by the dog being present since the first frame. Is there a better algorithm I should consider? It actually got better after I used older frames and restricted the box to a specific size (I updated my question and code), but once the dog stops and only moves his head or tail, the box focuses on that part and ignores the rest of the body. Is there a way to track the whole body even when it stops moving? (Sorry for ...

Atosu (2018-01-08 02:37:42 -0600)

Ok, my recommendation is definitely the MOG2 or another background segmentation method. HERE is a helpful tutorial.

Tetragramm (2018-01-09 22:41:23 -0600)
