Remark. Stack Overflow already has plenty of recipes for eye tracking with OpenCV, typically starting with Canny edge detection followed by HoughCircles.
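
For illustration, a minimal C++ sketch of that recipe might look like the following. The file name and the Hough parameters are placeholders that you would tune for your own images:

    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <vector>

    int main() {
        // Load the eye region as a single-channel grayscale image.
        cv::Mat gray = cv::imread("eye.png", CV_LOAD_IMAGE_GRAYSCALE);

        // Smooth first; the Hough transform is sensitive to noise.
        cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2.0);

        // Detect circular shapes such as the iris. HoughCircles runs a
        // Canny edge detector internally; param1 is its high threshold.
        std::vector<cv::Vec3f> circles;
        cv::HoughCircles(gray, circles, CV_HOUGH_GRADIENT,
                         1,              // dp: accumulator resolution ratio
                         gray.rows / 8,  // minimum distance between centers
                         100,            // param1: Canny high threshold
                         30,             // param2: accumulator vote threshold
                         10, 60);        // min/max radius in pixels
        return 0;
    }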


Two of the questions, namely whether to use C/C++ (Android NDK) in addition to Java (Android SDK), and whether to use OpenCV or custom code, hinge on the same factor: whether custom pixel-level processing routines need to be developed for your application.

That, in turn, often depends on how much research and development resources (budget and timeframe) you can afford. If development resources are constrained, custom pixel-level processing is often too costly to consider, even if it is deemed desirable.


The OpenCV API provides many of the image processing functions taught in image processing textbooks, and the same API is available in both C++ and Java. Therefore, if you can fit your algorithms into the existing OpenCV API (including the very powerful OpenCV Mat class, which supports many elementwise operations and array slicing), then either language choice is equally valid, and your team can pick whichever is more convenient and comfortable.
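
To give a flavor of what the Mat class can express without a single pixel loop, here is a minimal C++ sketch (the sizes and values are arbitrary):

    #include <opencv2/core/core.hpp>

    void matDemo() {
        cv::Mat a = cv::Mat::ones(480, 640, CV_32F);
        cv::Mat b = cv::Mat::zeros(480, 640, CV_32F);

        // Elementwise arithmetic, no explicit pixel loop needed.
        cv::Mat sum  = a + b;        // per-pixel addition
        cv::Mat prod = a.mul(b);     // per-pixel multiplication
        cv::Mat mask = (a > 0.5f);   // per-pixel comparison -> 8-bit mask

        // Array slicing: a region of interest that shares storage with
        // the parent matrix, so writing to it modifies 'a' in place.
        cv::Mat roi = a(cv::Rect(10, 20, 100, 100));
        roi.setTo(0.0f);
    }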

Code expressed in terms of the OpenCV API can be ported easily between the OpenCV C++ API and the OpenCV Java API.


What I mean by custom pixel-level processing routines is code that:

  • Cannot be expressed by decomposing the work into existing OpenCV API calls, or
  • Requires alternative implementation details that the OpenCV API does not offer as an option, or
  • Requires performance or precision characteristics that the current OpenCV library implementation cannot provide, or
  • Last but not least, must execute on millions or billions of items per second, so that straightforward Java or C/C++ code will not suffice.

When custom pixel-level processing is required, one is forced into C/C++ and typically needs to look at platform-specific SIMD programming (such as ARM NEON and x86 SSE2/SSE3/SSE4) or GPU processing. One has to select and modify algorithms to favor data parallelism over other modes of computation. In extreme cases one must go further and hand-code tight loops in assembly language.
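
As a taste of what that looks like, here is a minimal x86 SSE2 sketch; the function is illustrative, and a production routine would also handle the buffer tail, alignment, and an ARM NEON variant for Android devices:

    #include <emmintrin.h>  // SSE2 intrinsics
    #include <cstddef>
    #include <stdint.h>

    // Saturating add of two 8-bit grayscale buffers, 16 pixels at a time.
    // Assumes n is a multiple of 16.
    void addSaturate(const uint8_t* src1, const uint8_t* src2,
                     uint8_t* dst, size_t n) {
        for (size_t i = 0; i < n; i += 16) {
            __m128i x = _mm_loadu_si128((const __m128i*)(src1 + i));
            __m128i y = _mm_loadu_si128((const __m128i*)(src2 + i));
            _mm_storeu_si128((__m128i*)(dst + i),
                             _mm_adds_epu8(x, y));  // per-lane saturating add
        }
    }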


Keep in mind that whatever can be coded as a simple C++ for-loop (one or two levels deep), OpenCV may well have implemented for you already. Feel free to ask questions, both here and on Stack Overflow.
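
As a concrete example, the hand-written loop in the hypothetical function below computes exactly what a single cv::threshold call already does (the threshold value 128 is arbitrary):

    #include <opencv2/imgproc/imgproc.hpp>

    void thresholdBoth(const cv::Mat& gray, cv::Mat& out) {
        // Hand-written pixel loop...
        out.create(gray.size(), CV_8U);
        for (int r = 0; r < gray.rows; ++r)
            for (int c = 0; c < gray.cols; ++c)
                out.at<uchar>(r, c) = gray.at<uchar>(r, c) > 128 ? 255 : 0;

        // ...versus the single (and typically much faster) library call.
        cv::threshold(gray, out, 128, 255, cv::THRESH_BINARY);
    }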


The move from Java to C/C++ will increase development cost, and custom pixel-level processing can easily increase it by a factor of ten or more. Therefore, given a constrained development budget or timeframe, one would be well advised to be clever about fitting the algorithms into the existing OpenCV API as much as possible, in order to avoid or limit the scope of custom pixel-level processing.


Prototyping algorithms on the desktop is always recommended, because you can then pull the data into visualization frameworks such as MATLAB or Octave. Some numerical issues cannot be easily diagnosed unless one visually inspects the output. A single camera frame contains over a million pixels; it makes no sense to dump a million pixel values into an application debug trace.
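
One low-effort way to get frame data out for offline analysis is OpenCV's FileStorage class, which serializes a Mat to YAML or XML (the file name here is arbitrary):

    #include <opencv2/core/core.hpp>

    // Dump a frame to a YAML file that a desktop script can parse,
    // instead of printing a million pixel values to the debug log.
    void dumpFrame(const cv::Mat& frame) {
        cv::FileStorage fs("frame.yml", cv::FileStorage::WRITE);
        fs << "frame" << frame;  // writes type, size, and all pixel data
    }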


Setting up the NDK is not difficult:

  • Install the Android SDK and NDK.
  • Install Eclipse and the Android Development Tools (ADT).
  • Download the OpenCV for Android SDK.

The OpenCV for Android SDK contains sample applications demonstrating the use of both the SDK and the NDK, which you can use as a starting point.
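
The usual interop pattern in those samples is a thin JNI bridge: Java passes the native address of a Mat (obtained via Mat.getNativeObjAddr()) down to C++, which processes the pixels in place with no copy across the JNI boundary. A minimal sketch, where the package, class, and method names are hypothetical:

    #include <jni.h>
    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>

    extern "C"
    JNIEXPORT void JNICALL
    Java_com_example_eyetracker_NativeBridge_blurFrame(
            JNIEnv*, jclass, jlong matAddr) {
        // Reinterpret the address passed from Java as a cv::Mat reference
        // and process the frame in place.
        cv::Mat& frame = *reinterpret_cast<cv::Mat*>(matAddr);
        cv::GaussianBlur(frame, frame, cv::Size(5, 5), 1.5);
    }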

However, if you go into full-effort C++ NDK development, you might find plenty of frustration with the Eclipse NDK build system, and sometimes outright bugs within Eclipse itself. Be prepared to learn how to work around the many unfixed bugs in this development environment.