OpenCV real-time tracking only QrCode

asked 2018-09-05 15:14:38 -0500

minimanimo

Hello everyone, I'm a newbie in this fantastic world of computer vision.

A part of my project is like an "in-out people counter", but one that only detects QR codes. For that, I started with the pyzbar library to detect and decode the QR code. I'm grabbing frames from a webcam, decoding each one with pyzbar, drawing a rectangle around the detected code, and finally showing the result with cv2. Done this way, every time a frame is passed, the decode function detects the QR code as a new one (obviously).

What I want to do is track only and exclusively the QR code, ignoring everything else in the environment, identifying and tracking it so that it is decoded only once for as long as it is visible to the camera. The webcam works like a scanner. In the end, the result I would like to achieve is:

  • When a QR code appears at the top of the webcam image, it is identified (once);
  • when the QR code crosses the middle line, a counter is incremented (I saw a tutorial that uses frame subtraction);
  • when the QR code leaves the bottom of the image, the program is ready to detect a new QR code.

The idea is that there will only be one qrcode at a time in the image.
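The counting logic described above can be sketched as a small state machine: count once when the centroid crosses the middle line, re-arm when the code leaves the frame. This is only a sketch under my own assumptions; the `update_count` name and the `(top, height)` tuple are hypothetical — with pyzbar you would pass `(barcode.rect.top, barcode.rect.height)` from the detection.

```python
def update_count(rect, middle_y, counter, counted):
    # rect: (top, height) of the detected code's bounding box, or None
    # if no code was found in this frame.
    if rect is None:
        return counter, False          # code left the frame: arm for the next one
    top, height = rect
    cy = top + height // 2             # centroid y-coordinate
    if not counted and cy > middle_y:  # first crossing of the middle line
        return counter + 1, True       # increment only once per code
    return counter, counted
```

Called once per frame, this guarantees each code is counted a single time even though pyzbar re-detects it on every frame.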

import cv2
import numpy as np
from PIL import Image, ImageDraw
from pyzbar.pyzbar import decode

def decodeAndDraw(im):
    # Convert the BGR frame to grayscale for pyzbar, wrap it in a PIL image
    cv2_im = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    image = Image.fromarray(cv2_im)

    draw = ImageDraw.Draw(image)
    for barcode in decode(image):
        rect = barcode.rect  # bounding box: left, top, width, height
        draw.rectangle((rect.left, rect.top,
                        rect.left + rect.width, rect.top + rect.height),
                       outline='#e945ff')
        draw.polygon(barcode.polygon, outline='#e945ff')

    return np.asarray(image)

Can you point me in the right direction? Many thanks!



To track barcode, you can use optical flow or homography. I think this will solve your problem.

ak1 ( 2018-09-06 04:36:47 -0500 )

Hi ak1, thanks for your reply. I took a look at homography, but I saw that this method maps points in one image to the corresponding points in another image, so it compares two images against a base. One image would be a frame from the webcam stream capturing the QR code; however, I would also need a static reference image of the QR code to recognize, and that is not good because my project is expected to recognize and decode multiple QR codes at run-time. For that reason I think the solution you suggested does not solve my problem. Am I wrong? Thank you.

minimanimo ( 2018-09-06 13:08:36 -0500 )

It is not necessary to compare against a static frame (ground truth). Take the first frame as a reference, in which barcode detection is done reliably. Then propagate the bounding rect of the detected barcode into the next frames using optical flow or homography. After some x frames, detect the barcode again and continue tracking. So the algorithm is:

  1. Take the first frame as the reference.
  2. Do barcode detection in that frame and save the barcode's bounding rect.
  3. Use a feature matcher between the reference frame and the next frame and estimate the homography or optical flow between them.
  4. Use that homography or optical flow to estimate the bounding rect position of the barcode in the next frame.
  5. After some frames, repeat from step 1.

Note: choose homography or optical flow based on your assumptions.

ak1 ( 2018-09-10 00:39:09 -0500 )

@minimanimo You can do this in real time using the above algorithm. Sorry for the late reply.

ak1 ( 2018-09-10 00:40:45 -0500 )

@ak1 Thanks for this explanation. I have 2 questions for you:

1) Does this approach work even if the QR code is placed on a moving object in the scene? I ask because in that case the moving mass is greater, and therefore so is the area that differs between a frame and the reference one.

2) What is the reason for skipping some frames, apart from (I suppose) performance (fewer frames to process)?

minimanimo ( 2018-09-11 14:48:04 -0500 )

@minimanimo I will answer question 2 first. Ans 2: I am not discarding frames. We detect in the 1st frame, then track over the next x frames; at frame x+2 we detect again and continue tracking. We do this for two reasons: 1) there might be a case where the barcode leaves the field of view or is partially occluded; 2) there might be errors accumulating while tracking.

Now for question 1. Ans 1: I have a doubt: is your camera also moving with the moving object? If the camera is stationary, this will work perfectly. If your camera is moving along with the moving object, then I think you have to remove the camera's ego-motion (I am sure about that).

ak1 ( 2018-09-12 02:51:59 -0500 )

@ak1 This is what I've managed to achieve at the moment:

Following your advice, and using the optical flow example in the OpenCV documentation as a base:

  • Init: wait for a QR code in the scene

  • Get the rect boundaries, save the points, and draw circles on them

  • Every x+2 frames:

    --> try to decode the QR code, get new boundaries, and draw circles on it

    --> else (if no QR code is detected) estimate the optical flow, get the new points, and draw them

  • repeat

I'm in a small room with a bright lamp positioned overhead; I'm using the camera of my OnePlus One smartphone. As you can see from the video, the tracking is not done well, and the QR code is also not detected by pyzbar while it is moving. How can I improve the situation?

minimanimo ( 2018-09-12 17:54:12 -0500 )

@ak1 How can I improve the situation? I need the points to be more reliable because I will have to compute the centroids and use those for counting, as in an "in-out people counter". My settings are:

# params for ShiTomasi corner detection
feature_params = dict( maxCorners = 100,
                       qualityLevel = 0.3,
                       minDistance = 7,
                       blockSize = 7)

# Parameters for lucas kanade optical flow
lk_params = dict( winSize  = (20,20),
                  maxLevel = 4,
                  criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

Is maxLevel too high? The tutorial uses a value of 2.

minimanimo ( 2018-09-12 17:56:17 -0500 )

@minimanimo Sorry for the late reply, I was busy with my convocation. You can do it like this, link: yes, you have to set the parameters properly. For optical flow you need to maintain two assumptions: 1) brightness constancy; 2) the change between frames should be small. If the change is large (fast motion), use pyramids to handle it. I think the parameters used by jayrambhia (above link) should work for you, judging from your video. You can also follow the above blog to achieve your aim.

ak1 ( 2018-09-17 00:23:16 -0500 )