2020-12-10 00:49:58 -0600 | received badge | ● Famous Question (source) |
2017-10-31 09:10:40 -0600 | received badge | ● Notable Question (source) |
2016-12-15 04:22:27 -0600 | received badge | ● Notable Question (source) |
2016-02-11 06:58:26 -0600 | received badge | ● Popular Question (source) |
2016-01-15 09:16:42 -0600 | received badge | ● Popular Question (source) |
2014-03-09 15:22:25 -0600 | commented answer | Object classification (pedestrian, car, bike) Thanks for the response. I realise it's difficult and, to be honest, a bit daunting (since I am new to CV), but I'd like to make an effort. I think I may try the BoW + SVM approach; 3-10 seconds per detection is really not suitable for real-time detection. I assume the result of BoW + SVM training will be an XML file, which I can then use in the cascade classifier? |
2014-03-08 23:54:21 -0600 | asked a question | Object classification (pedestrian, car, bike) Hi, I'm working on a novelty detection project (on Android) using background subtraction. I would like to do simple object classification after a novelty is detected (e.g. person or car). I have read some literature on how to do this, e.g. using Bag of Words (BoW) + SVM, but I'm still not sure how to approach the task. Is BoW + SVM the best approach for multi-class classification? Essentially, after the foreground is detected through background subtraction, I would like to draw a bounding box around it, labelled as a car or a person. Also, I have searched online but can't seem to find a good source of datasets for the task. Thanks very much for any help/suggestions. |
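The BoW + SVM pipeline asked about above has a key encoding step that can be sketched independently of OpenCV: each local descriptor is assigned to its nearest "visual word" in a pre-learned vocabulary, and the image becomes a fixed-length histogram that a multi-class SVM can consume. This is an illustrative sketch only, using plain double arrays in place of OpenCV Mats, and it assumes the vocabulary (cluster centres) has already been built, e.g. with k-means:

```java
// Minimal Bag-of-Words encoding sketch: assign each descriptor to its
// nearest vocabulary centre, then represent the image as the normalised
// histogram of those assignments (the SVM's input vector).
public class BowHistogram {
    // Squared Euclidean distance between two descriptors.
    static double sqDist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            s += d * d;
        }
        return s;
    }

    // Build a normalised BoW histogram for one image's descriptors.
    static double[] encode(double[][] descriptors, double[][] vocabulary) {
        double[] hist = new double[vocabulary.length];
        for (double[] desc : descriptors) {
            int best = 0;
            double bestDist = Double.MAX_VALUE;
            for (int k = 0; k < vocabulary.length; k++) {
                double d = sqDist(desc, vocabulary[k]);
                if (d < bestDist) { bestDist = d; best = k; }
            }
            hist[best] += 1.0;
        }
        for (int k = 0; k < hist.length; k++) hist[k] /= descriptors.length;
        return hist;
    }

    public static void main(String[] args) {
        // Toy vocabulary of two 2-D "visual words".
        double[][] vocab = { {0, 0}, {10, 10} };
        // Three descriptors: two near word 0, one near word 1.
        double[][] descs = { {0.5, 0.2}, {1, 0}, {9, 11} };
        double[] h = encode(descs, vocab);
        System.out.println(h[0] + " " + h[1]); // histogram: 2/3 and 1/3
    }
}
```

In a real pipeline the descriptors would come from a detector/extractor such as SURF, and one histogram per training image would be fed to the SVM trainer.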
2014-03-03 11:29:39 -0600 | asked a question | Using NEON opencv4android apk Hi guys, I'm making an Android app (background subtraction). My phone supports NEON, which I am currently using in the background subtraction algorithm. I see that the OpenCV4Android SDK includes versions of the OpenCV Manager with NEON support:
I'm guessing 2. is specific to Android. My question is: how can I use that version of the OpenCV Manager in my Android app? I'm trying to improve the performance of my app, and I assume that would improve it somewhat. Thanks. |
2014-02-08 17:00:47 -0600 | commented question | Image Stitching (Java API) Thanks for the responses. I have used this version of the homography function: Mat H = Calib3d.findHomography(obj, scene, Calib3d.RANSAC, 1) ...unfortunately, it hasn't helped. In the documentation the re-projection error (4th parameter) is recommended to be 1-10; I have tried them all to no avail. I have experimented with a number of algorithm combinations for detection, extraction and matching, again with no success. I tried the SURF, SURF and FLANN combination, but no success; the number of 'good' matches I got was only 20. The diagonal lines tell me something is wrong, yes, but I don't know how to fix it. |
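Not part of the original thread, but a common cause of ending up with very few "good" matches (as in the comment above) is an overly strict absolute-distance filter; Lowe's ratio test on the two nearest-neighbour candidates per keypoint is a standard alternative. An illustrative sketch with the distances as plain arrays of hypothetical values, rather than OpenCV DMatch objects:

```java
// Sketch of Lowe's ratio test: keep a match only when the best candidate's
// distance is clearly smaller than the second-best candidate's distance.
// Each row of knnDistances holds {bestDistance, secondBestDistance}
// for one query keypoint.
public class RatioTest {
    static boolean[] filter(double[][] knnDistances, double ratio) {
        boolean[] keep = new boolean[knnDistances.length];
        for (int i = 0; i < knnDistances.length; i++) {
            keep[i] = knnDistances[i][0] < ratio * knnDistances[i][1];
        }
        return keep;
    }

    public static void main(String[] args) {
        // Hypothetical distance pairs: the first match is ambiguous
        // (two candidates almost equally close), the second is distinctive.
        double[][] d = { {0.30, 0.32}, {0.10, 0.45} };
        boolean[] keep = filter(d, 0.75); // 0.75 is Lowe's suggested ratio
        System.out.println(keep[0] + " " + keep[1]); // false true
    }
}
```

With OpenCV's Java API the same idea applies to the output of a knnMatch call with k=2; surviving matches are then passed to findHomography.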
2014-02-06 19:35:02 -0600 | commented question | Image Stitching (Java API) I figured it out after posting the question, and I used the constructor you specified. However, my image stitching doesn't actually work :/ I updated the code. Maybe you can spot something wrong? Is it the way I'm combining the 2 images? |
2014-02-04 12:53:17 -0600 | received badge | ● Editor (source) |
2014-02-04 12:41:34 -0600 | asked a question | Image Stitching (Java API) Hi, I'm trying to stitch two images together, using the OpenCV Java API. I have been following this C++ tutorial here http://ramsrigoutham.com/2012/11/22/panorama-image-stitching-in-opencv/ However, I get the wrong output and I cannot work out the problem. FAULTY CODE |
2014-01-30 18:30:26 -0600 | commented answer | Optimising OpenCV4Android image processing Currently, everything is done in Java. The OpenCV functions I'm using include resize, colour conversion, and Mat methods such as get and put for pixel processing. The main bottleneck is looping through each image and carrying out the per-pixel calculations. Using the NDK is what I want to do, but I'm inexperienced in it (and in C++) and have been struggling to get started. The main question here is: how can I pass the frame I get from onCameraFrame() to a native function for processing? E.g. in Java, I convert the frame to gray or RGB (using the OpenCV Java API), then I want to pass it to a native function that will carry out the pixel-level processing. |
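A common pattern for the question in this comment (not from the thread itself; the class, method and library names here are hypothetical) is to pass the Mat's native pointer to a JNI function via Mat.getNativeObjAddr(), so the C++ side works on the same pixel buffer and nothing is copied across the Java/C++ boundary. A sketch of the shape of the code, which needs the OpenCV4Android SDK and an NDK build to actually run:

```java
import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
import org.opencv.core.Mat;

// Sketch only: "myprocessing" and processFrame are assumed names, not from
// the original thread. The jlong passed to native code is the address of
// the underlying cv::Mat.
public class FrameProcessor {
    static { System.loadLibrary("myprocessing"); } // hypothetical NDK library

    // Declared in Java, implemented in C++ via the NDK.
    private static native void processFrame(long matAddrGray);

    public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
        Mat gray = inputFrame.gray();
        processFrame(gray.getNativeObjAddr()); // no pixel copy across JNI
        return gray;
    }
}

/* Matching C++ side (e.g. jni/myprocessing.cpp), built with the NDK:

   extern "C" JNIEXPORT void JNICALL
   Java_FrameProcessor_processFrame(JNIEnv*, jclass, jlong addr) {
       cv::Mat& gray = *(cv::Mat*) addr;  // same buffer as the Java Mat
       // ... per-pixel work on gray.data here ...
   }
*/
```

The JNI function name must match the fully qualified Java class (package segments separated by underscores), so in a real app it would include the package prefix.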
2014-01-30 08:28:55 -0600 | asked a question | Optimising OpenCV4Android image processing Hi, I am working on a background subtraction project with a moving camera, on Android. Currently, I have the algorithm working with a static camera, but it is very slow, depending on resolution: e.g. I get about 1 FPS at 250x300 (I resize the 800x480 CvCameraViewFrame), using grayscale frames. I have my own background subtraction algorithm, so I am using the onCameraFrame() callback to grab each frame and do pixel-level processing (with several calculations per pixel) before returning the frame with foreground pixels set to black. All processing is currently done using the Java API. My question is: how can I improve performance? Considering I will have to add code for feature detection, extraction, matching, homography, etc. to make the background subtraction work with a moving camera, performance will only get worse. My development device is a Nexus 4, which has a Qualcomm quad-core processor with ARM NEON support. I have read that OpenCV4Android has support for NEON optimisations, but I'm not sure how to enable this. I appreciate any help on enabling ARM NEON support, and any other tips! Thanks. |
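One Java-side optimisation relevant to the question above, before reaching for NEON or the NDK: per-pixel Mat.get/Mat.put calls are a well-known bottleneck, whereas reading the whole frame into one byte[] with a single Mat.get, processing the array, and writing it back with a single Mat.put avoids a JNI crossing per pixel. A minimal sketch of the array-level step, using a plain byte[] in place of the Mat data; the intensity threshold here is an illustrative stand-in for the actual background-subtraction decision, which is not described in the question:

```java
public class BulkFrameOps {
    // Set "foreground" pixels to black in-place. Foreground is approximated
    // here by a simple intensity threshold, standing in for a real
    // background-subtraction test.
    static void maskForeground(byte[] gray, int threshold) {
        for (int i = 0; i < gray.length; i++) {
            int v = gray[i] & 0xFF;          // unsigned 0-255 pixel value
            if (v > threshold) gray[i] = 0;  // foreground -> black
        }
    }

    public static void main(String[] args) {
        // A tiny stand-in "frame"; in the app this array would come from a
        // single Mat.get(0, 0, frame) and go back with one Mat.put(0, 0, frame).
        byte[] frame = { 10, (byte) 200, 50, (byte) 180 };
        maskForeground(frame, 100);
        System.out.println(frame[0] + " " + frame[1] + " "
                + frame[2] + " " + frame[3]); // 10 0 50 0
    }
}
```

Working on one flat array also keeps the inner loop free of method calls, which tends to help the JIT; moving that same loop into C++ via the NDK is the natural next step if it is still too slow.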