
Alexander Pacha's profile - activity

2017-03-13 08:38:50 -0500 received badge  Taxonomist
2016-10-24 15:47:06 -0500 received badge  Notable Question (source)
2016-01-03 22:30:24 -0500 received badge  Popular Question (source)
2015-04-28 08:35:08 -0500 received badge  Nice Question (source)
2015-02-18 04:24:03 -0500 received badge  Enthusiast
2015-02-14 11:46:59 -0500 answered a question Error Opencv4Android: Caused by: java.lang.IllegalArgumentException: Service Intent must be explicit

The problem is explained here. Since Android 5.0, service intents must be explicit, but the OpenCV library binds to its service with an implicit intent (new Intent("org.opencv.engine.BIND")), so the binding fails.

Changing the targetSDK is just a workaround for now; eventually the source code has to be changed (Alex might know more). The solution would be to change the method initOpenCV to something like this:

public static boolean initOpenCV(String Version, final Context AppContext,
        final LoaderCallbackInterface Callback) {
    AsyncServiceHelper helper = new AsyncServiceHelper(Version, AppContext, Callback);
    Intent explicitIntent = new Intent(AppContext, org.opencv.engine.???.class);
    if (AppContext.bindService(explicitIntent,
            helper.mServiceConnection, Context.BIND_AUTO_CREATE)) {
        return true;
    } else {
        InstallService(AppContext, Callback);
        return false;
    }
}

but since parts of OpenCV are built with JNI, I don't know what to put there instead of the ???, because 'BIND' seems to be just a placeholder to me.
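
One possible direction (an untested sketch on my part, not the official fix): instead of naming the service class, keep the implicit action string but make the intent explicit by pinning the target package. The package name "org.opencv.engine" is an assumption here:

```java
// Hypothetical sketch: the service class lives in the OpenCV Manager APK,
// not in our app, so we cannot reference it as a .class literal. Instead,
// keep the action string and pin the target package on the intent.
Intent intent = new Intent("org.opencv.engine.BIND");
intent.setPackage("org.opencv.engine"); // assumed OpenCV Manager package name
if (AppContext.bindService(intent, helper.mServiceConnection, Context.BIND_AUTO_CREATE)) {
    return true;
} else {
    InstallService(AppContext, Callback);
    return false;
}
```

An intent counts as explicit for the Android 5.0 check as soon as its target package or component is set, so this should satisfy the restriction without knowing the service class name.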

2015-02-13 16:14:38 -0500 asked a question Better way to include OpenCV into Android application

When reading the tutorials or answers about using OpenCV with Android, I get the impression that I must add over 100 MB of data to my Git repository, which I obviously don't want to do, but have done so far. Is there a better way to include OpenCV in my Android application without polluting my repo?

Since I target the OpenCV Manager on Android devices anyway, I just want to add a minimal jar file to my project, like the one that is actually built when following the procedure above.

Ideally, it would be possible to add OpenCV to my Gradle build script just like any other dependency, with the respective jar pulled automatically from a repository like jcenter:

dependencies {
    compile 'org.opencv:android:2.4.10'
}

Then all I would have to do is include something like this in my build script, and I could start using OpenCV almost straight away. Would that be hard to realise?

If that is not possible, what do you think of building the library only once and just adding the 200 kB jar file to my repository and application? (If that works, it could be added to the documentation as a "lightweight" way of using OpenCV with Android.)

2015-02-13 15:41:34 -0500 commented question How to migrate from OpenId to Google+ Sign-In

Thanks for the info, but I still don't know how to change my login here from Google to plain mail/password.

2015-02-11 21:53:37 -0500 asked a question How to migrate from OpenId to Google+ Sign-In

Hi, I logged in today with my Google account and got the message that OpenID accounts will no longer be valid in the future and that I should migrate to Google+ Sign-In.

However, I can't find a way to do this. Should this be done inside Google, or inside this website? I want to avoid being unable to log into this site from May 2015 on. Any ideas or official resources?

2014-11-25 07:18:39 -0500 received badge  Good Answer (source)
2014-11-25 06:06:50 -0500 commented answer Java API : loading image from any Java InputStream

Make sure to use the correct flag for imdecode(). The assertion fails if the provided data does not match the image format that was expected (= specified).

2014-08-17 14:39:02 -0500 received badge  Necromancer (source)
2014-08-17 14:01:03 -0500 answered a question Java API : loading image from any Java InputStream

Here is the helper-method that I wrote to actually perform this task:

private static Mat readInputStreamIntoMat(InputStream inputStream) throws IOException {
    // Read into byte-array
    byte[] temporaryImageInMemory = readStream(inputStream);

    // Decode into mat. Use any IMREAD_ option that describes your image appropriately
    Mat outputImage = Highgui.imdecode(new MatOfByte(temporaryImageInMemory), Highgui.IMREAD_GRAYSCALE);

    return outputImage;
}
private static byte[] readStream(InputStream stream) throws IOException {
    // Copy content of the image to byte-array
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    int nRead;
    byte[] data = new byte[16384];

    while ((nRead = stream.read(data, 0, data.length)) != -1) {
        buffer.write(data, 0, nRead);
    }

    byte[] temporaryImageInMemory = buffer.toByteArray();
    return temporaryImageInMemory;
}

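
As a quick sanity check of the byte-copy part (plain Java, no OpenCV needed, since imdecode() itself requires the native library; class and test data are mine):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

public class ReadStreamDemo {
    // Same helper as in the answer: drain an InputStream into a byte array.
    static byte[] readStream(InputStream stream) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        int nRead;
        byte[] data = new byte[16384];
        while ((nRead = stream.read(data, 0, data.length)) != -1) {
            buffer.write(data, 0, nRead);
        }
        return buffer.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Use an input larger than one 16384-byte chunk to exercise the loop.
        byte[] original = new byte[40000];
        for (int i = 0; i < original.length; i++) {
            original[i] = (byte) (i % 251);
        }
        byte[] copy = readStream(new ByteArrayInputStream(original));
        System.out.println(Arrays.equals(original, copy)); // prints "true"
    }
}
```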
2014-03-01 14:32:08 -0500 received badge  Nice Question (source)
2014-03-01 14:21:43 -0500 received badge  Nice Answer (source)
2013-08-10 07:31:25 -0500 commented answer Set a threshold on FAST feature detection?

This would not work, because the keypoints returned by the FAST detector are not sorted, so you would end up taking 500 (more or less) random points instead of the 500 best points.

2013-08-06 23:53:03 -0500 commented question OpenCV KNearest input

You are right, the documentation is missing the first three parameters.

2013-08-05 07:27:25 -0500 commented question No success trying to grab images from my built-in MacBook Pro Cam

Just two small hints: if opening the camera takes some time, it's not useful to put the sleep() after the if-clause, because execution has already passed the relevant point. Second, make sure you have properly initialised OpenCV (e.g. OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_5, ...)).

2013-08-05 02:55:54 -0500 received badge  Necromancer (source)
2013-08-04 18:54:30 -0500 received badge  Editor (source)
2013-08-04 18:52:29 -0500 answered a question Set a threshold on FAST feature detection?

One way to reduce the number of keypoints detected is to increase the threshold. To do this, take a look at this post. You can replace the writeLine()-call with

writeToFile(outputFile, "%YAML:1.0\nthreshold: 30 \nnonmaxSupression: true\n");

to set the parameter for the FAST feature detector.

The other way to get the best keypoints is to sort them by their response and then pick only the n best ones:

// Detect the features with your Feature Detector
FeatureDetector fastDetector = FeatureDetector.create(FeatureDetector.FAST);
MatOfKeyPoint matrixOfKeypoints = new MatOfKeyPoint();
fastDetector.detect(imageMat, matrixOfKeypoints);

// Sort and select 500 best keypoints
List<KeyPoint> listOfKeypoints = matrixOfKeypoints.toList();
Collections.sort(listOfKeypoints, new Comparator<KeyPoint>() {
    public int compare(KeyPoint kp1, KeyPoint kp2) {
        // Sort them in descending order, so the best-response KPs come first.
        // Note: casting the float difference to int would truncate small
        // differences to 0, so use Float.compare instead.
        return Float.compare(kp2.response, kp1.response);
    }
});

List<KeyPoint> listOfBestKeypoints = listOfKeypoints.subList(0, 500);

One final remark: Gauglitz et al. 2011 showed that for visual tracking it is important that keypoints are spatially well distributed, so keep in mind that you might also want to select the best keypoints per cell of a grid rather than globally.
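
To illustrate the grid idea in plain Java (the Pt record is a made-up stand-in for OpenCV's KeyPoint, and bestPerCell is a hypothetical helper name of mine):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GridSelectDemo {
    // Minimal stand-in for OpenCV's KeyPoint: position plus detector response.
    record Pt(double x, double y, float response) {}

    // Keep only the strongest point inside each cell of a coarse grid,
    // so the surviving points are spread across the whole image.
    static List<Pt> bestPerCell(List<Pt> pts, double cellSize) {
        Map<Long, Pt> best = new HashMap<>();
        for (Pt p : pts) {
            // Pack the 2D cell index into one long map key.
            long cell = ((long) (p.x() / cellSize) << 32)
                      | ((long) (p.y() / cellSize) & 0xffffffffL);
            Pt cur = best.get(cell);
            if (cur == null || p.response() > cur.response()) {
                best.put(cell, p);
            }
        }
        return new ArrayList<>(best.values());
    }

    public static void main(String[] args) {
        List<Pt> pts = List.of(
                new Pt(10, 10, 0.9f), new Pt(12, 11, 0.5f), // same cell: keep 0.9
                new Pt(200, 10, 0.3f),                      // its own cell
                new Pt(10, 200, 0.7f));                     // its own cell
        System.out.println(bestPerCell(pts, 100).size()); // prints "3"
    }
}
```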

2013-07-27 02:18:40 -0500 received badge  Necromancer (source)
2013-07-25 10:17:43 -0500 received badge  Teacher (source)
2013-07-22 23:38:11 -0500 answered a question Java: How to set parameters to ORB FeatureDetector?

If you don't have the OpenCVTestRunner, this does the job (exception handling omitted):

private void init() throws IOException {

    FeatureDetector orbDetector = FeatureDetector.create(FeatureDetector.ORB);

    File outputDir = getCacheDir(); // if in an Activity (in a Fragment: getActivity().getCacheDir())
    File outputFile = File.createTempFile("orbDetectorParams", ".YAML", outputDir);
    writeToFile(outputFile, "%YAML:1.0\nscaleFactor: 1.2\nnLevels: 8\nfirstLevel: 0 \nedgeThreshold: 31\npatchSize: 31\nWTA_K: 2\nscoreType: 1\nnFeatures: 500\n");
    orbDetector.read(outputFile.getPath());

    // [... use detector ...]
}

private void writeToFile(File file, String data) throws IOException {
    FileOutputStream stream = new FileOutputStream(file);
    OutputStreamWriter outputStreamWriter = new OutputStreamWriter(stream);
    outputStreamWriter.write(data);
    outputStreamWriter.close();
}

2013-07-22 22:13:57 -0500 received badge  Scholar (source)
2013-07-22 15:22:20 -0500 received badge  Student (source)
2013-07-22 00:51:23 -0500 answered a question Windows 7 installation fail: 'Can not open file OpenCV...exe as archive'

Seems like you already accepted an answer for this question on stackoverflow. What was wrong with it?

Maybe you wanna check your archiving program and give 7-Zip a try.

2013-07-22 00:36:53 -0500 commented question Is the STAR detector the implementation of CenSurE?

Created a bug-report:

2013-07-21 23:37:51 -0500 asked a question Does using FLANN-Matcher for every frame make sense?

I want to track interesting points in the video view. Due to performance reasons, I picked ORB as feature detector and feature descriptor that gives me 500 points each frame.

Now when it comes to the matching part, I only need a few correspondences (>7 for homography), so an approximate nearest neighbor search is fine.

My first question is: should I use the FLANN matcher with LSH indexing like this (considering that I have a binary feature descriptor, this sounds reasonable):

FlannBasedMatcher matcher (new flann::LshIndexParams(20,10,2));
std::vector< DMatch > matches;
matcher.match( descriptors_object, descriptors_scene, matches );

or should I rather go for a brute-force matcher (given that I have only 500 points), since I am not reusing the index that the FLANN matcher builds, but building a new one each frame? Or simply rely on the AutotunedIndexParams? Is building the FLANN index computationally expensive?
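
For intuition about the brute-force cost: matching 500 binary descriptors against 500 others is 250,000 XOR-plus-popcount operations. A plain-Java sketch (descriptors modelled as four longs = 256 bits, matching ORB's descriptor size; the class and method names are my own):

```java
import java.util.Random;

public class HammingBruteForce {
    // Hamming distance between two 256-bit descriptors stored as four longs.
    static int hamming(long[] a, long[] b) {
        int d = 0;
        for (int i = 0; i < a.length; i++) {
            d += Long.bitCount(a[i] ^ b[i]);
        }
        return d;
    }

    // For each query descriptor, find the index of its nearest train descriptor.
    static int[] match(long[][] query, long[][] train) {
        int[] result = new int[query.length];
        for (int q = 0; q < query.length; q++) {
            int bestIdx = 0, bestDist = Integer.MAX_VALUE;
            for (int t = 0; t < train.length; t++) {
                int d = hamming(query[q], train[t]);
                if (d < bestDist) {
                    bestDist = d;
                    bestIdx = t;
                }
            }
            result[q] = bestIdx;
        }
        return result;
    }

    public static void main(String[] args) {
        // 500 random 256-bit descriptors, deterministic via the seed.
        Random rnd = new Random(42);
        long[][] train = new long[500][4];
        for (long[] row : train) {
            for (int i = 0; i < 4; i++) row[i] = rnd.nextLong();
        }
        // Query with the train set itself: every descriptor must match itself.
        int[] matches = match(train, train);
        System.out.println(matches[123] == 123); // prints "true"
    }
}
```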

The second question is: how do I find the optimal parameters for the LshIndexParams? The original documentation (p. 8) of FLANN uses 12, 20 and 2. What is the correct key size for ORB features?

Third and final question: are the improvements described in this paper (download) already implemented, or do I have to use the authors' code directly?

2013-07-19 01:41:56 -0500 asked a question Is the STAR detector the implementation of CenSurE?

Is the STAR feature detector the implementation of the CenSurE feature detector (download), published by Agrawal, Konolige and Blas?

All I found was this homepage, which supports my belief, but why is it called STAR? The source code is completely undocumented and the official reference only mentions the name Konolige but not the paper.

2013-06-29 22:24:04 -0500 received badge  Supporter (source)