
mayday's profile - activity

2020-12-10 00:49:58 -0500 received badge  Famous Question (source)
2017-10-31 09:10:40 -0500 received badge  Notable Question (source)
2016-12-15 04:22:27 -0500 received badge  Notable Question (source)
2016-02-11 06:58:26 -0500 received badge  Popular Question (source)
2016-01-15 09:16:42 -0500 received badge  Popular Question (source)
2014-03-09 15:22:25 -0500 commented answer Object classification (pedestrian, car, bike)

Thanks for the response. I realise it's difficult, and to be honest it is a bit daunting (since I am new to CV), but I'd like to make an effort. I think I may try the BOW + SVM approach; 3-10 seconds per detection is really not suitable for real-time use. Will the result of BOW + SVM training be an XML file, which I can then use in a cascade classifier?

2014-03-08 23:54:21 -0500 asked a question Object classification (pedestrian, car, bike)

Hi, I'm working on a novelty detection project (on Android) using background subtraction. I would like to do simple object classification after a novelty is detected (e.g. person or car). I have read some literature on doing this with e.g. Bag of Words + SVM, but I'm still not sure how to approach the task. Is Bag of Words + SVM the best approach for multi-class classification? Essentially, after the foreground is detected through background subtraction, I would like to draw a bounding box around it labelled as a car or a person. Also, I have searched online but can't seem to find a good source for the data sets I would need for this task.

Thanks very much for any help/suggestions.
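For reference, the core of the BOW step mentioned above is just nearest-centroid assignment: each local descriptor is mapped to its closest visual word, and the image becomes a normalized word histogram that the SVM is trained on. A minimal sketch in plain Java — the 2-D "descriptors", the vocabulary, and the class name are illustrative toy values, not OpenCV API:

```java
import java.util.Arrays;

// Toy sketch of the Bag-of-Words encoding step: map each local descriptor
// to its nearest vocabulary centroid ("visual word") and build a
// normalized histogram of word counts. That fixed-length vector is what
// a multi-class SVM would then be trained on.
public class BowSketch {

    // squared Euclidean distance between two descriptors
    static double dist2(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) { double d = a[i] - b[i]; s += d * d; }
        return s;
    }

    // index of the centroid closest to the descriptor
    static int nearestWord(double[] desc, double[][] vocab) {
        int best = 0;
        for (int w = 1; w < vocab.length; w++)
            if (dist2(desc, vocab[w]) < dist2(desc, vocab[best])) best = w;
        return best;
    }

    // histogram over visual words, normalized to sum to 1
    static double[] encode(double[][] descriptors, double[][] vocab) {
        double[] hist = new double[vocab.length];
        for (double[] d : descriptors) hist[nearestWord(d, vocab)]++;
        for (int w = 0; w < hist.length; w++) hist[w] /= descriptors.length;
        return hist;
    }

    public static void main(String[] args) {
        // illustrative 2-D "descriptors"; real ones would come from SURF, BRISK, etc.
        double[][] vocab = { {0, 0}, {10, 10} };
        double[][] descriptors = { {1, 1}, {0, 2}, {9, 9}, {11, 10} };
        System.out.println(Arrays.toString(encode(descriptors, vocab)));
    }
}
```

In the real pipeline the vocabulary itself comes from k-means over training descriptors; one histogram per labelled training image feeds the SVM.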

2014-03-03 11:29:39 -0500 asked a question Using the NEON OpenCV4Android APK

Hi guys,

I'm making an Android app (background subtraction). My phone supports NEON, which I am currently using in the background subtraction algorithm. I see that the OpenCV4Android SDK ships versions of OpenCV Manager with NEON support:

  1. OpenCV_2.4.8_Manager_2.16_armv7a-neon.apk
  2. OpenCV_2.4.8_Manager_2.16_armv7a-neon-android8.apk

I'm guessing 2 targets a specific Android version (API 8?). My question is: how can I use that version of OpenCV Manager in my Android app? I'm trying to improve my app's performance, and I assume the NEON build will help somewhat.

Thanks.

2014-02-08 17:00:47 -0500 commented question Image Stitching (Java API)

Thanks for the responses. I have used this version of the homography function: Mat H = Calib3d.findHomography(obj, scene, Calib3d.RANSAC, 1) ...unfortunately, it hasn't helped. The documentation recommends a re-projection error (4th parameter) of 1-10; I have tried them all to no avail. I have also experimented with a number of algorithm combinations for detection, extraction and matching, again without success: with the SURF, SURF and FLANN combination, for example, the number of 'good' matches I got was only 20. The diagonal lines tell me something is wrong, yes, but I don't know how to fix it.
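A common alternative to the 3×min-dist filter when too few good matches survive is Lowe's ratio test: ask the matcher for the two nearest candidates per query descriptor (knnMatch) and keep a match only when the best distance is clearly smaller than the second best. The filtering logic itself is tiny; a plain-Java sketch, with the distance pairs made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Lowe's ratio test: keep a match only if its best distance is well below
// the distance to the second-best candidate (best < ratio * secondBest).
public class RatioTest {

    // each row holds {bestDistance, secondBestDistance} for one query
    // descriptor, as a 2-NN matcher (e.g. DescriptorMatcher.knnMatch) returns
    static List<Integer> goodMatchIndices(double[][] twoNearest, double ratio) {
        List<Integer> good = new ArrayList<>();
        for (int i = 0; i < twoNearest.length; i++)
            if (twoNearest[i][0] < ratio * twoNearest[i][1]) good.add(i);
        return good;
    }

    public static void main(String[] args) {
        // match 1 is ambiguous (0.5 vs 0.55), so it gets dropped
        double[][] dists = { {0.2, 0.9}, {0.5, 0.55}, {0.1, 0.8} };
        System.out.println(goodMatchIndices(dists, 0.75));
    }
}
```

A ratio around 0.7-0.8 is the usual starting point; ambiguous matches are exactly the ones that poison findHomography even with RANSAC.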

2014-02-06 19:35:02 -0500 commented question Image Stitching (Java API)

I figured it out after posting the question, and I used the constructor you specified. However, my image stitching doesn't actually work :/ I have updated the code. Maybe you can spot something wrong? Is it the way I'm combining the 2 images?

2014-02-04 12:53:17 -0500 received badge  Editor (source)
2014-02-04 12:41:34 -0500 asked a question Image Stitching (Java API)

Hi, I'm trying to stitch two images together using the OpenCV Java API. I have been following this C++ tutorial: http://ramsrigoutham.com/2012/11/22/panorama-image-stitching-in-opencv/ However, I get the wrong output and I cannot work out the problem.

FAULTY CODE

public class ImageStitching {

static Mat image1;
static Mat image2;

static FeatureDetector fd;
static DescriptorExtractor fe;
static DescriptorMatcher fm;

public static void initialise(){
    fd = FeatureDetector.create(FeatureDetector.BRISK); 
    fe = DescriptorExtractor.create(DescriptorExtractor.SURF); 
    fm = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);

    //images
    image1 = Highgui.imread("room2.jpg");
    image2 = Highgui.imread("room3.jpg");

    //structures for the keypoints from the 2 images
    MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
    MatOfKeyPoint keypoints2 = new MatOfKeyPoint();

    //structures for the computed descriptors
    Mat descriptors1 = new Mat();
    Mat descriptors2 = new Mat();

    //structure for the matches
    MatOfDMatch matches = new MatOfDMatch();

    //getting the keypoints
    fd.detect(image1, keypoints1);
    fd.detect(image2, keypoints2); //detect in the second image, not image1 again

    //getting the descriptors from the keypoints
    fe.compute(image1, keypoints1, descriptors1);
    fe.compute(image2,keypoints2,descriptors2);

    //matching the 2 sets of descriptors (query = image1, train = image2,
    //so queryIdx indexes keypoints1 and trainIdx indexes keypoints2 below)
    fm.match(descriptors1, descriptors2, matches);

    //turn the matches to a list
    List<DMatch> matchesList = matches.toList();

    double maxDist = 0.0; //keep track of max distance from the matches
    double minDist = 100.0; //keep track of min distance from the matches

    //calculate max & min distances over the matches themselves
    for(int i=0; i<matchesList.size(); i++){
        double dist = matchesList.get(i).distance;
        if (dist<minDist) minDist = dist;
        if (dist>maxDist) maxDist = dist;
    }

    System.out.println("max dist: " + maxDist );
    System.out.println("min dist: " + minDist);

    //structure for the good matches
    LinkedList<DMatch> goodMatches = new LinkedList<DMatch>();

    //use only the good matches (i.e. whose distance is less than 3*min_dist)
    for(int i=0; i<matchesList.size(); i++){
        if(matchesList.get(i).distance<3*minDist){
            goodMatches.addLast(matchesList.get(i));
        }
    }

    //structures to hold points of the good matches (coordinates)
    LinkedList<Point> objList = new LinkedList<Point>(); // image1
    LinkedList<Point> sceneList = new LinkedList<Point>(); //image 2

    List<KeyPoint> keypoints_objectList = keypoints1.toList();
    List<KeyPoint> keypoints_sceneList = keypoints2.toList();

    //putting the points of the good matches into above structures
    for(int i = 0; i<goodMatches.size(); i++){
        objList.addLast(keypoints_objectList.get(goodMatches.get(i).queryIdx).pt);
        sceneList.addLast(keypoints_sceneList.get(goodMatches.get(i).trainIdx).pt);
    }

    System.out.println("\nNum. of good matches: " + goodMatches.size());

    MatOfDMatch gm = new MatOfDMatch();
    gm.fromList(goodMatches);

    //converting the points into the appropriate data structure
    MatOfPoint2f obj = new MatOfPoint2f();
    obj.fromList(objList);

    MatOfPoint2f scene = new MatOfPoint2f();
    scene.fromList(sceneList);

    //finding the homography matrix
    Mat H = Calib3d.findHomography(obj, scene);

    //LinkedList<Point> cornerList = new LinkedList<Point>();
    Mat obj_corners = new Mat(4,1,CvType.CV_32FC2);
    Mat scene_corners = new Mat(4,1,CvType.CV_32FC2);

    obj_corners.put(0,0, new double[]{0,0});
    obj_corners.put(1,0, new double[]{image1.cols(),0}); //each corner in its own row
    obj_corners.put(2,0, new double[]{image1.cols(),image1.rows()});
    obj_corners.put(3,0, new double[]{0,image1.rows()});

    Core.perspectiveTransform(obj_corners, scene_corners, H);

    //structure to hold the result of the homography matrix
    Mat result = new Mat();

    //size of the new image - i.e. image 1 + image 2
    Size s = new ...
(more)
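The corner transform near the end of the code exists to size the stitched canvas: Core.perspectiveTransform applies the 3x3 homography H to each corner in homogeneous coordinates and divides by w. A plain-Java sketch of that per-point computation (the translation-only H is an illustrative stand-in for a real estimated homography):

```java
// What Core.perspectiveTransform computes per point: apply a 3x3 homography
// in homogeneous coordinates, then divide by w. The warped corners of one
// image tell you how large the stitched result must be.
public class WarpCorners {

    // map one (x, y) point through a row-major 3x3 homography
    static double[] apply(double[][] H, double x, double y) {
        double w = H[2][0] * x + H[2][1] * y + H[2][2];
        return new double[] {
            (H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w
        };
    }

    public static void main(String[] args) {
        // illustrative homography: pure translation by (100, 50)
        double[][] H = { {1, 0, 100}, {0, 1, 50}, {0, 0, 1} };
        double[] corner = apply(H, 640, 480); // bottom-right corner of a 640x480 image
        System.out.println(corner[0] + ", " + corner[1]);
    }
}
```

If the warped corners land on a diagonal line rather than roughly a quadrilateral, the homography itself is degenerate, which usually points back at bad matches rather than at the warping code.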
2014-01-30 18:30:26 -0500 commented answer Optimising OpenCV4Android image processing

Currently, everything is done in Java. The OpenCV functions I'm using include resize, colour conversion, and Mat methods such as get and put for pixel processing. The main cost is looping over each frame and carrying out the per-pixel calculations. Using the NDK is what I want to do, but I'm inexperienced with it (and with C++) and have been struggling to get started. The main question is: how can I pass the frame I get from onCameraFrame() to a native function for processing? E.g. in Java I convert the frame to gray or RGB (using the OpenCV Java API), then I want to pass it to a native function that carries out the pixel-level processing.

2014-01-30 08:28:55 -0500 asked a question Optimising OpenCV4Android image processing

Hi, I am working on a background subtraction project with a moving camera, on Android. Currently, I have the algorithm working with a static camera, but it is very slow depending on resolution: e.g. I get about 1 FPS at 250x300 (I resize the 800x480 CvCameraViewFrame), using grayscale frames. I have my own background subtraction algorithm, so I'm using the onCameraFrame() callback to grab each frame and do pixel-level processing (with several calculations per pixel) before returning the frame with foreground pixels set to black. All processing is currently done using the Java API.
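The per-pixel loop described above is usually the bottleneck: pulling the whole frame into a Java byte[] once (via Mat.get with an array) and looping over that is far cheaper than calling Mat.get/put per pixel. A minimal sketch of such an inner loop on plain arrays — the threshold value is made up, the Mat plumbing is omitted, and the zero-for-foreground convention mirrors the description:

```java
// Inner loop of a simple frame-difference background subtractor on plain
// byte arrays. In the app, the arrays would be filled once per frame via
// Mat.get(0, 0, byte[]) -- much cheaper than per-pixel Mat.get/put calls.
public class BgSubtract {

    // sets pixels whose |frame - background| exceeds the threshold to 0
    // (black), matching the "foreground pixels set to black" behaviour
    static byte[] subtract(byte[] frame, byte[] background, int threshold) {
        byte[] out = frame.clone();
        for (int i = 0; i < frame.length; i++) {
            int f = frame[i] & 0xFF;        // bytes are signed in Java
            int b = background[i] & 0xFF;
            if (Math.abs(f - b) > threshold) out[i] = 0;
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] bg    = { 10, 10, 10, 10 };
        byte[] frame = { 12, 10, (byte) 200, 9 };
        System.out.println(java.util.Arrays.toString(subtract(frame, bg, 20)));
    }
}
```

Writing the result back is a single Mat.put(0, 0, out); the same bulk-array pattern is also what a later NDK port would receive on the native side.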

My question is: how can I improve performance? Considering I will have to add code for feature detection, extraction, matching, homography, etc. to make the background subtraction work with a moving camera, performance will only get worse. My development device is a Nexus 4, which has a Qualcomm quad-core processor with ARM NEON support. From my research, OpenCV4Android seems to support NEON optimizations, but I'm not sure how to enable them.

I appreciate any help on enabling ARM NEON support, and any other tips! Thanks.