I have written an optical flow/background subtraction method in OpenCV. The program initialises, and then re-initialises every five frames, the set of points to be tracked by the optical flow method. The points come from the contours produced by a background subtraction step: some of each contour's points are passed to the Lucas-Kanade optical flow tracker to be tracked.

However, every five frames the points found by the tracker are effectively discarded, since they are all replaced with the new contour points. I do this because the tracker drifts over time, and because I want some way of combining the two methods.

I want to integrate the two more seamlessly. I have been trying to find an update equation that takes into account both the points tracked by optical flow and the new contour points, perhaps weighted so that the contour points contribute more to the resulting points. My idea is to make each new point a weighted average of the contour point and the optical-flow point, with the weight of the optical-flow point decreasing over time to reflect the tracker becoming less accurate, but I do not know how to implement this. Specifically:
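The weighted-average idea could be sketched like this, assuming each tracked point carries an age counter (frames since it was last re-initialised); `fuse_points` and its `decay` parameter are hypothetical names, not part of the existing code:

```
import numpy as np

def fuse_points(tracked_pt, contour_pt, age, decay=0.8):
    """Blend a tracked point with its matched contour point.

    The optical-flow weight decays exponentially with the point's age,
    so older tracks contribute less and the (fresher) contour detection
    dominates as time goes on.
    """
    w_flow = decay ** age        # 0.8, 0.64, 0.512, ... as age grows
    w_contour = 1.0 - w_flow
    tracked_pt = np.asarray(tracked_pt, dtype=np.float32)
    contour_pt = np.asarray(contour_pt, dtype=np.float32)
    return w_flow * tracked_pt + w_contour * contour_pt
```

With `decay=0.8` a freshly re-initialised point (age 0) is taken entirely from the tracker, and after five frames the contour point already carries roughly two-thirds of the weight, which matches the intuition above.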

- The tracked points will not necessarily map one-to-one onto the new contour points, so how do I find which contour point corresponds to which tracked point?
- How can I weight the average so that the contour points contribute more to the resulting point?
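For the correspondence problem, one simple option is a nearest-neighbour match with a distance cutoff, leaving tracked points unmatched when no contour point is close enough. This is only a sketch; `match_points` and `max_dist` are hypothetical names:

```
import numpy as np

def match_points(tracked, contour_pts, max_dist=20.0):
    """For each tracked point, find the nearest new contour point.

    Returns (tracked_index, contour_index) pairs; tracked points with
    no contour point within max_dist pixels stay unmatched.
    """
    tracked = np.asarray(tracked, dtype=np.float32)          # shape (N, 2)
    contour_pts = np.asarray(contour_pts, dtype=np.float32)  # shape (M, 2)
    pairs = []
    for i, p in enumerate(tracked):
        d = np.linalg.norm(contour_pts - p, axis=1)  # distance to each contour point
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            pairs.append((i, j))
    return pairs
```

Unmatched tracked points could then be dropped (treating them as stale), while matched pairs feed into the weighted average.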

I would be extremely grateful for any pointers. My code (Python 2.7) can be found here; it is not that long, only about 100 lines.

Here are some of the more relevant parts. This is the (re)initialisation using the contours:

```
if frame_count % detect_interval == 0 and len(contours) != 0:
    p = update_and_drawBox(contours, boxpoints)
```

The function definition:

```
def update_and_drawBox(contours, boxpoints):
    for i in range(0, len(contours)):
        cnt = contours[i]
        cnt = cnt.astype(np.float32)
        if len(cnt) > 9:
            x, y, w, h = cv.boundingRect(cnt)
            cx = x + (w / 2)
            cy = y + (h / 2)
            boxpoints.append([x, y])
            boxpoints.append([x + w, y + h])
            boxpoints.append([x + w, y])
            boxpoints.append([x, y + h])
            boxpoints.append([cx, cy])
            cv.rectangle(vis, (x, y), (x + w, y + h), (255, 0, 0), 2)
    boxpoints = np.asarray(boxpoints).astype(np.float32)
    return boxpoints
```

This part carries out the optical flow estimation; `update_tracks` then checks whether the estimates are good and, if so, draws them on screen:

```
if len(tracks) > 0:
    p0 = np.float32([tr[-1] for tr in tracks]).reshape(-1, 1, 2)
    p1, _st, _err = cv.calcOpticalFlowPyrLK(previousFrame, frame_gray, p0, None, **lk_params)
    p0r, _st, _err = cv.calcOpticalFlowPyrLK(frame_gray, previousFrame, p1, None, **lk_params)
    tracks = update_tracks(p0, p0r, tracks, p1)
```
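For reference, the forward-backward step above (tracking frame-to-frame and then back again) is usually turned into a good/bad mask by checking how far each round-tripped point lands from where it started; this is the same pattern OpenCV's `lk_track.py` sample uses. A minimal sketch, assuming `p0` and `p0r` have the `(-1, 1, 2)` shape shown above:

```
import numpy as np

def good_points_mask(p0, p0r, max_error=1.0):
    """Keep a point when tracking it forward and then backward
    returns within max_error pixels of its original location."""
    d = abs(p0 - p0r).reshape(-1, 2).max(-1)
    return d < max_error
```

If your `update_tracks` does something along these lines, the mask is also a natural place to reset (or increase) each point's age for any time-decaying weighting scheme.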