2014-02-01 12:52:46 -0600 asked a question Detect amount of sideways camera movement

This is the algorithm I'm trying to implement:

  1. Assume sideways camera movement. (Like looking out of the window on a train.)
  2. Take a photo
  3. Wait until 50% (configurable) of what was on the photo has moved out of the frame.
  4. Repeat steps 2-3
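The decision in step 3 can be sketched as a small predicate: treat the average horizontal shift (in pixels) as a fraction of the frame width and fire when it crosses the configurable threshold. The class and method names here are illustrative, not part of any OpenCV API:

```java
public class CaptureTrigger {
    // Decide whether enough of the scene has left the frame to take
    // the next photo: |average shift| as a fraction of frame width.
    static boolean shouldCapture(double avgShiftPx, int frameWidthPx, double threshold) {
        return Math.abs(avgShiftPx) / frameWidthPx >= threshold;
    }

    public static void main(String[] args) {
        System.out.println(shouldCapture(100, 640, 0.5)); // false: only ~16% of the width moved
        System.out.println(shouldCapture(340, 640, 0.5)); // true: over half the width moved
    }
}
```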

I need OpenCV for step 3. My best guess so far is this:

  1. Analyse the photo with goodFeaturesToTrack to get 100 features.
  2. Take a new photo.
  3. Use calcOpticalFlowPyrLK to find the new location of the features.
  4. Use the status output from calcOpticalFlowPyrLK to see how many of the features were found in the latest photo.
  5. If less than 50%, go back to step 2.
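Steps 4-5 above boil down to counting how many entries of the status output are 1 (found). A minimal sketch of that counting logic on a plain byte array, since calcOpticalFlowPyrLK's MatOfByte status converts to byte[] via toArray(); the helper name is made up:

```java
public class TrackRatio {
    // Fraction of features still tracked, given the status bytes
    // from calcOpticalFlowPyrLK (1 = found, 0 = lost).
    static double trackedFraction(byte[] status) {
        if (status.length == 0) return 0.0;
        int found = 0;
        for (byte b : status) {
            if (b == 1) found++;
        }
        return (double) found / status.length;
    }

    public static void main(String[] args) {
        byte[] status = {1, 1, 0, 1, 0};             // 3 of 5 features found
        System.out.println(trackedFraction(status)); // 0.6
    }
}
```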

Alternative steps 4-5:

  • Check the average sideways movement of detected features.
  • If more than half of image width, go back to step 2.
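The averaging step in this alternative can be sketched on plain arrays. In the real code prevPts/nextPts are MatOfPoint2f, whose toArray() yields Point objects; here (x, y) pairs are flattened into double[][] for brevity, and only entries whose status byte is 1 are counted:

```java
public class SidewaysShift {
    // Average horizontal displacement between matched point lists.
    // prevPts[i] and nextPts[i] are (x, y) pairs for the same feature;
    // only features with status[i] == 1 contribute.
    static double averageDx(double[][] prevPts, double[][] nextPts, byte[] status) {
        double sum = 0;
        int n = 0;
        for (int i = 0; i < status.length; i++) {
            if (status[i] == 1) {
                sum += nextPts[i][0] - prevPts[i][0];
                n++;
            }
        }
        return n == 0 ? 0.0 : sum / n;
    }

    public static void main(String[] args) {
        double[][] prev = {{10, 5}, {50, 20}, {80, 40}};
        double[][] next = {{40, 5}, {85, 21}, {80, 40}};
        byte[] status = {1, 1, 0};                     // third feature was lost
        System.out.println(averageDx(prev, next, status)); // 32.5
    }
}
```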

I expected prevPts and nextPts from calcOpticalFlowPyrLK to be two lists of the same features, with their locations in the two images. So the 7th item on both lists represents the 7th feature from the first image: in prevPts it's the location in the first image, in nextPts it's the location in the latest image.

But often they are not. They can be completely different features, so detecting movement by comparing the locations is completely unreliable.

Is there a better way? Or maybe I'm using goodFeaturesToTrack the wrong way?

Here's my complete code. It's Java for now; I'm going to use it on Android later:

import static org.opencv.highgui.Highgui.CV_CAP_PROP_FRAME_HEIGHT;
import static org.opencv.highgui.Highgui.CV_CAP_PROP_FRAME_WIDTH;

import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.InputStream;

import javax.imageio.ImageIO;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfByte;
import org.opencv.core.MatOfFloat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.highgui.Highgui;
import org.opencv.highgui.VideoCapture;
import org.opencv.imgproc.Imgproc;
import org.opencv.video.Video;

public class Camera {
    private static final int MAX_CORNERS = 100;       // max features from goodFeaturesToTrack
    private static final double QUALITY_LEVEL = 0.05; // minimal accepted corner quality
    private static final double MIN_DISTANCE = 0.01;  // minimum spacing between corners, in pixels
    private static final int MAX_LEVEL = 2;           // pyramid levels for calcOpticalFlowPyrLK
    private final Size winSize = new Size(15, 15);    // LK search window size

    private JFrame jframe = new JFrame();
    private JLabel label;

    private VideoCapture cap;

    private Mat frame;
    private Mat frame_gray;
    private Mat old;
    private Mat old_gray;
    private boolean first;

    public static void main(String[] args) {
        Camera cam = new Camera();
        while (true) {
            cam.detectMove(0.5);
        }
    }

    public Camera() {
        first = true;

        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        cap = new VideoCapture(0);
        if (!cap.isOpened()) {
            throw new RuntimeException("Could not capture video.");
        }

        int dWidth = (int) cap.get(CV_CAP_PROP_FRAME_WIDTH);
        int dHeight = (int) cap.get(CV_CAP_PROP_FRAME_HEIGHT);

        System.out.println("Frame size : " + dWidth + " x " + dHeight);

        // Mat takes (rows, cols), so a 640x480 frame is Mat(480, 640, ...)
        old = new Mat(480, 640, CvType.CV_8UC4);
        old_gray = new Mat(480, 640, CvType.CV_8UC1);
        frame = new Mat(480, 640, CvType.CV_8UC4);
        frame_gray = new Mat(480, 640, CvType.CV_8UC1);
(more)