Ask Your Question

NDK or SDK for the image processing

asked 2014-09-23 03:55:57 -0500

Centos

updated 2015-12-09 20:18:50 -0500


We are developing an eye tracker for Android. Right now we are in the research phase and we want to decide how to do the image processing.

I recently read in a book that it is best to do the image processing part as a desktop application in C/C++ and then port it to Android.

I was wondering whether there would be a big difference if we did everything with the Android SDK, or whether we should at least do the image processing part with the NDK.

Also, we have found some tutorials but we don't have any experience with the NDK. I would like to ask how easy it is to set it up and work with it.

Thank you for your time.




Just wondering how you came to think that questions about the Android NDK and SDK are related to OpenCV. Are you planning to use OpenCV? If not, your question belongs on another forum...

StevenPuttemans ( 2014-09-23 04:47:37 -0500 )

I think this is a valid question because there are experienced OpenCV users in this forum - and maybe some of them have already dealt with this design decision. Whether to switch to the NDK highly depends on the use case - Google even says: "the NDK will not benefit most apps" - so this question is indeed an OpenCV question.

PhilLab ( 2014-09-23 12:14:50 -0500 )

After writing my initial answer, I feel that a proper discussion of the relevant decision factors would require a whole book chapter to explain. Please don't take my words at face value. My answer is not meant to discourage you from learning C++ and the NDK. My rule of thumb is this: (1) if you are already experienced in writing image processing code in C++ AND your development timeline is six months or more, OR (2) if your development timeline is one year or more AND you can find ways to leverage your C++ knowledge while learning along the way, then it is worthwhile to learn and use the NDK.

rwong ( 2014-09-23 14:35:10 -0500 )

On the other hand, for all non-trivial image processing research, it is important to know the fundamentals. This means (1) knowing the algorithms beyond the "hand-waving" level, (2) having a rough idea of how they are coded at the low level, even if you don't code them yourself, (3) having a mathematical grasp of how the parameters affect the algorithm (not just a change-and-see intuition, although that is important too), and (4) knowing how and when to substitute for similar steps within an algorithm, or between algorithms that serve a similar purpose. If you need help in this respect, you might have to share some of the source code and/or test data to enable others to look into the issues. This may influence some initial project decisions.

rwong ( 2014-09-23 14:40:24 -0500 )

Finally, I am not claiming that OpenCV represents the state of the art. I claim that OpenCV represents the "mainstream choice for image processing and computer vision applications". However, for certain parts of the OpenCV API, especially ones that have existed for a long time, such as Canny, HoughLines and HoughCircles, there have been advances elsewhere that have never been integrated into OpenCV because of backward-compatibility or performance concerns. You may find it necessary to reimplement these basic functionalities if you need higher precision than what OpenCV currently offers. For this reason, it is useful to have a second image processing toolbox with which you can estimate the potential gain from a custom implementation. MATLAB/Octave/SciPy will allow you to prototype your own.

rwong ( 2014-09-23 14:44:14 -0500 )

2 answers


answered 2014-09-23 09:23:07 -0500

rwong

Remark. Stackoverflow already has a lot of recipes for eye-tracking with OpenCV, starting with Canny edge detection followed by HoughCircles.

Two of the questions: whether to use C/C++ (Android NDK) in addition to Java (Android SDK), and whether to use OpenCV vs. custom coding, depend on the same factor: whether custom pixel-level processing routines need to be developed for your application.

That, in turn, often depends on how much research and development resource (budget and timeframe) you can afford. If development resource is constrained, custom pixel-level processing is often too prohibitive to consider, even if it is deemed desirable.

The OpenCV API provides many image processing functions that are taught in image processing textbooks. The same API is made available to both C++ and Java. Therefore, if you can somehow fit your algorithms into just using the existing OpenCV API (including the very powerful OpenCV Mat class, which supports many elementwise operations and array slicing), then any language choice would be equally valid, and your team can choose whichever is more convenient and comfortable.

Code that is expressed using OpenCV API can be easily ported between the OpenCV C++ API and OpenCV Java API.

What I mean by custom pixel-level processing routines is:

  • Code that cannot be expressed by decomposing it into a number of OpenCV API calls, or
  • Code that requires alternative implementation details which the OpenCV API doesn't provide as an option, or
  • Code that requires performance or precision characteristics which the current OpenCV library implementation cannot provide, or
  • Last but not least, code that needs to be executed on millions or billions of items per second, such that straightforward Java or C/C++ code is not going to satisfy.

When custom pixel-level processing is required, one will be forced to use C/C++, and will typically need to look at platform-specific SIMD programming (such as ARM NEON and x86 SSE2/SSE3/SSE4), or GPU processing. One will have to select and modify algorithms to favor data parallelism over other modes of computation. In some extreme cases one has to go further and encode some tight loops in assembly language.

Keep in mind that whatever can be coded into a simple C++ for-loop (1 - 2 levels), OpenCV might have already implemented it for you. Feel free to ask questions, both here and on StackOverflow.

The move from Java to C/C++ will increase development cost; custom pixel-level processing development can easily increase cost by a factor of ten or more. Therefore, given a constrained development budget or timeframe, one would be well-advised to use cleverness to fit your algorithms into using existing OpenCV API as much as possible, in order to avoid or limit the scope of algorithms that require custom pixel-level processing.

Prototyping algorithms on the desktop is always recommended, because you can then pull the data into other visualization frameworks such as MATLAB, Octave, etc. Sometimes numerical issues cannot be easily diagnosed unless one visually analyzes the output. A single camera frame contains over a million ...


answered 2014-09-23 12:10:52 -0500

PhilLab

If you can stick with Java, then I would recommend it, because the nice folks from OpenCV have already saved you the work of setting up JNI and the JNI wrapper.

Debugging on your PC is much, much more convenient than debugging native Android NDK code! So if you want to use the NDK (for the reasons rwong already mentioned), I would highly recommend developing a desktop wrapper for your computer vision code so that you can run it both on Android and on your PC. (Pro tip: this keeps your code modular.)


Question Tools

1 follower


Asked: 2014-09-23 03:55:57 -0500

Seen: 1,813 times

Last updated: Sep 23 '14