
Optimising OpenCV4Android image processing

asked 2014-01-30 08:28:55 -0600

mayday

Hi, I am working on a background subtraction project with a moving camera, on Android. Currently, I have the algorithm working on a static camera, but it is very slow, depending on resolution: e.g. I get about 1 FPS at 250x300 (I resize the 800x480 CvCameraViewFrame), using grayscale frames. I have my own background subtraction algorithm, so I am using the onCameraFrame() callback to grab each frame and do pixel-level processing (with several calculations per pixel) before returning the frame with foreground pixels set to black. All processing is currently done using the Java API.

My question is: how can I improve performance? Considering I will have to add code for feature detection, extraction, matching, homography, etc. to make the background subtraction work on a moving camera, the performance will only get slower. My development device is a Nexus 4, which has a Qualcomm quad-core processor with ARM NEON support. From my research I believe OpenCV4Android has support for NEON optimizations, but I'm not sure how to enable this.

I appreciate any help on enabling support for ARM NEON, and any other tips! Thanks.
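[Editor's note] For native builds, the NDK build system can enable NEON per module. A minimal sketch of an Android.mk fragment, assuming a hypothetical module name and source file (not from this thread):

```make
# Android.mk -- module and file names are hypothetical; sketch only.
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE    := bgsub_native
LOCAL_SRC_FILES := bgsub.cpp

# Enable NEON for all sources in this module on armeabi-v7a.
ifeq ($(TARGET_ARCH_ABI),armeabi-v7a)
    LOCAL_ARM_NEON := true
endif

include $(BUILD_SHARED_LIBRARY)
```

The matching Application.mk would then restrict the build to NEON-capable ABIs, e.g. `APP_ABI := armeabi-v7a`. Note this only affects your own native code; it does not change how the prebuilt OpenCV Java API runs.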


1 answer


answered 2014-01-30 12:39:18 -0600

Melgor

First of all, are you using the Java wrapper for OpenCV, or writing your own functions in the NDK that use OpenCV? Every call from Java through the wrapper into native code is expensive; the overhead is really significant.

So I advise you to write your own function in C++, compile it with the NDK for Android, and call it from Java through a single wrapper.

If you are already doing that, you should use multiple cores (process 4 images at a time).


Comments

Currently, everything is done in Java. The OpenCV functions I'm using include resize, colour conversion, and Mat methods such as get and put for pixel processing. The main cost is looping over every pixel of each image and carrying out the calculations. Using the NDK is what I want to do, but I'm inexperienced in this (and in C++) and have been struggling to get started. The main question here is: how can I pass the frame I get from onCameraFrame() to a native function for processing? So e.g. in Java, I convert the frame to gray or RGB (using the OpenCV Java API), then I want to pass this to a native function that carries out the pixel-level processing.
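[Editor's note] One common pattern is to pass the Mat's native address across JNI: on the Java side, `mat.getNativeObjAddr()` returns a `long`, which the C++ side casts back to a `cv::Mat*`, so no pixel data is copied. A sketch, assuming a hypothetical class `com.example.BgSub` with `public native void processFrame(long matAddr);` (names not from this thread):

```cpp
// bgsub.cpp -- JNI sketch; class and method names are hypothetical.
// Java side:  public native void processFrame(long matAddr);
// called as:  processFrame(grayMat.getNativeObjAddr());
#include <jni.h>
#include <opencv2/core/core.hpp>

extern "C"
JNIEXPORT void JNICALL
Java_com_example_BgSub_processFrame(JNIEnv* env, jobject thiz, jlong matAddr) {
    cv::Mat& frame = *reinterpret_cast<cv::Mat*>(matAddr);  // shared, no copy
    // Pixel-level processing happens directly on the camera frame:
    for (int y = 0; y < frame.rows; ++y) {
        uchar* row = frame.ptr<uchar>(y);
        for (int x = 0; x < frame.cols; ++x) {
            if (row[x] > 128)   // placeholder for the real
                row[x] = 0;     // background-subtraction test
        }
    }
}
```

Because the Mat is shared by address, there is only one JNI call per frame, which addresses the per-call overhead mentioned in the answer above.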

mayday ( 2014-01-30 18:30:26 -0600 )

You can take a look at the face detection app from the opencv4android samples. There is an example there. I am trying to do something similar for my app and it seems that background subtraction doesn't work so well when changing the background.

andrei.toader ( 2014-02-07 04:40:39 -0600 )


Stats


Seen: 608 times

Last updated: Jan 30 '14