beniroquai's profile - activity

2018-02-05 03:13:29 -0600 received badge  Popular Question (source)
2015-11-25 14:18:27 -0600 answered a question Segfault in convertTo Android OpenCV 2.4.x->3.0

OK, I found the answer myself. There was an error with the OpenCV 3.0 Gradle module for the Java frontend. Even though I thought I had updated the files, it was apparently still on 2.4.11. Anyway, now it works! :)

2015-11-23 00:51:20 -0600 asked a question Segfault in convertTo Android OpenCV 2.4.x->3.0

Yesterday I moved my project from OpenCV 2.4 to 3.0. I tested the code in Visual Studio and everything works fine, but on Android I get errors when I try to run it. I built the library with OpenCL enabled (see the build commands below). A simple example app that does edge detection on the CPU/GPU works fine, but the app that was previously on 2.4.x gets weird errors:

signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x9

I'm accessing a Mat coming from the Java side via its jlong address:

Mat& I_h_temp = *(Mat*)in_addrRawHolo;

Then I do some conversions:

I_h.convertTo(I_h, CV_64FC1);
sqrt(I_h, I_h);
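
For context, here is a minimal sketch of how these two snippets typically sit inside a JNI entry point. This is not from the original post; the class, method and variable wiring below are placeholders, only the Mat-by-address pattern itself is the point:

    #include <jni.h>
    #include <opencv2/core.hpp>

    // Hypothetical JNI entry point; the jlong carries the native address
    // of the Java-side Mat object.
    extern "C" JNIEXPORT void JNICALL
    Java_de_example_NativeHolo_process(JNIEnv*, jobject, jlong in_addrRawHolo)
    {
        cv::Mat& I_h_temp = *(cv::Mat*)in_addrRawHolo;

        cv::Mat I_h;
        I_h_temp.convertTo(I_h, CV_64FC1); // convert to double precision
        cv::sqrt(I_h, I_h);                // element-wise square root
    }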

Then an error occurs and the app crashes:

    11-23 07:32:39.191 368-368/? I/DEBUG: *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
11-23 07:32:39.191 368-368/? I/DEBUG: UUID: 1c75fc96-14cc-406f-b6d5-c2e129dcc0f3
11-23 07:32:39.191 368-368/? I/DEBUG: Build fingerprint: 'Sony/C6903/C6903:5.1.1/14.6.A.0.368/1533290499:user/release-keys'
11-23 07:32:39.192 368-368/? I/DEBUG: Revision: '0'
11-23 07:32:39.192 368-368/? I/DEBUG: ABI: 'arm'
11-23 07:32:39.192 368-368/? I/DEBUG: pid: 32007, tid: 3898, name: Thread-8533  >>> de.example <<<
11-23 07:32:39.192 368-368/? I/DEBUG: signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x9
11-23 07:32:39.209 368-368/? I/DEBUG:     r0 9ed54810  r1 00000002  r2 aed69940  r3 00000005
11-23 07:32:39.209 368-368/? I/DEBUG:     r4 9ed54810  r5 00000000  r6 9eb56b80  r7 aed69938
11-23 07:32:39.210 368-368/? I/DEBUG:     r8 00000005  r9 ffffffff  sl 00040000  fp 00000000
11-23 07:32:39.210 368-368/? I/DEBUG:     ip aed69940  sp ae93c718  lr 00000001  pc 9e420c9c  cpsr a00d0030
11-23 07:32:39.212 368-368/? I/DEBUG:     #00 pc 00170c9c  /data/app/de.example-1/lib/arm/libopencv_java3.so (cv::Mat::create(int, int const*, int)+1291)
11-23 07:32:39.212 368-368/? I/DEBUG:     #01 pc 00186cc3  /data/app/de.example-1/lib/arm/libopencv_java3.so (cv::_OutputArray::create(cv::Size_<int>, int, int, bool, int) const+546)
11-23 07:32:39.212 368-368/? I/DEBUG:     #02 pc 000fc7ab  /data/app/de.example-1/lib/arm/libopencv_java3.so (cv::Mat::convertTo(cv::_OutputArray const&, int, double, double) const+314)
11-23 07:32:39.212 368-368/? I/DEBUG:     #03 pc 0009110d  /data/app/de.example-1/lib/arm/libopencv_java3.so (Java_org_opencv_core_Mat_n_1convertTo__JJI+52)

The exact same code works with the old version, so I suspect a problem with the library. I built it using:

set PATH=%PATH%;C:\Users\Bene\Downloads\ninja.exe
mkdir OpenCVCL3
cd OpenCVCL3
cmake -GNinja -DCMAKE_MAKE_PROGRAM="C:/Users/Bene/Downloads/ninja.exe" -DCMAKE_TOOLCHAIN_FILE=C:/opencv3cl/platforms/android/android.toolchain.cmake -DANDROID_ABI="armeabi-v7a with NEON" -DCMAKE_BUILD_WITH_INSTALL_RPATH=ON -DWITH_OPENCL=YES C:/opencv3cl
path/to/ninja.exe install/strip

My Android.mk

LOCAL_PATH      := $(call my-dir)
LOCAL_PATH_EXT  := $(call my-dir)/../libs/
include $(CLEAR_VARS)

#opencv
OPENCVROOT:= C:/OpenCVCL3/install
OPENCV_CAMERA_MODULES:=off
OPENCV_INSTALL_MODULES:=on
#OPENCV_LIB_TYPE:= STATIC
OPENCV_LIB_TYPE:=SHARED

include ${OPENCVROOT}/sdk/native/jni/OpenCV.mk

LOCAL_ARM_MODE  := arm

LOCAL_MODULE    := native_holo

LOCAL_CFLAGS    += -DANDROID_CL
LOCAL_CFLAGS    += -O3 -ffast-math

LOCAL_C_INCLUDES ...
2015-11-10 00:28:38 -0600 commented question Android OpenCL DFT vs. CPP Version is very slow!

OK, you're right. I changed Canny to Laplacian, and running the code a second time brings the computation time down from ~400 ms to ~20 ms, which is really impressive! Too bad it only works with this algorithm. Do you think the OCL versions of the algorithms (i.e. DFT, Canny, etc.) will become stable within about a year?

Another question: why is the first "run" still slow (twice as long as the CPU version), and only the second "run" actually faster? Is it because the kernel needs to be created on the GPU side first? Do you have any good resources on that, and also a list of "stable" functions? Thank you very much!
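
A first run that is much slower than later runs is consistent with the OpenCL kernels being compiled on first use. Not from the original comment, but a small sketch of how one could warm up before timing (assuming a UMat input is already available):

    #include <opencv2/core.hpp>
    #include <opencv2/core/ocl.hpp>
    #include <opencv2/imgproc.hpp>

    // Run the OCL path once so the kernels are built and cached,
    // then time only the second run.
    double timeLaplacianMs(const cv::UMat& uIn)
    {
        cv::UMat uOut;
        cv::Laplacian(uIn, uOut, CV_8U);   // warm-up: includes kernel build
        cv::ocl::finish();                 // wait until the device queue is empty

        double t0 = (double)cv::getTickCount();
        cv::Laplacian(uIn, uOut, CV_8U);   // the run we actually measure
        cv::ocl::finish();
        return ((double)cv::getTickCount() - t0) * 1000.0 / cv::getTickFrequency();
    }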

2015-11-09 16:42:57 -0600 received badge  Editor (source)
2015-11-09 16:38:49 -0600 commented question Android OpenCL DFT vs. CPP Version is very slow!

I've changed the code to a simpler algorithm: a Gaussian blur and Canny comparison between a UMat and a Mat program. I've figured out that the first run of the code is slower than the second one. What could be the reason for that? Also, the computation time is still (exactly) the same as on the CPU. Can I somehow figure out whether the code runs on the smartphone's GPU? The GPU itself works, which I tested with the tutorial. Any ideas?
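
Not part of the original comment, but one way to check whether the T-API will actually dispatch to OpenCL is to query the ocl module directly (log tag and function name below are placeholders):

    #include <opencv2/core/ocl.hpp>
    #include <android/log.h>

    // Print whether OpenCL is available and enabled, and which device is used.
    void logOpenCLStatus()
    {
        bool have = cv::ocl::haveOpenCL();
        bool use  = cv::ocl::useOpenCL();
        __android_log_print(ANDROID_LOG_DEBUG, "OCL", "haveOpenCL=%d useOpenCL=%d", have, use);
        if (have)
        {
            cv::ocl::Device dev = cv::ocl::Device::getDefault();
            __android_log_print(ANDROID_LOG_DEBUG, "OCL", "device: %s", dev.name().c_str());
        }
    }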

2015-11-09 13:56:10 -0600 commented question Android OpenCL DFT vs. CPP Version is very slow!

Thank you for your comment, but still nothing has changed. Would it make sense to read the image as an OpenGL texture and process it the way Tutorial 4 suggests? I have to say that the tutorial with OpenGL works perfectly fine and really fast, but loading the images with imread() simply doesn't work. I don't know what I'm missing.

2015-11-09 00:24:59 -0600 commented question Android OpenCL DFT vs. CPP Version is very slow!

Yeah, I did that. It halves the time but is then still longer than the CPU version - or, even worse, gives me the error listed above. I think I need to deallocate something? OpenCV Error: Assertion failed (u->refcount == 0 || u->tempUMat()) in virtual void cv::ocl::OpenCLAllocator::upload(.. BTW, I was able to compile the tutorial, and the T-API in general works quite well. Is it an error with imread/imwrite? Using frames generated by the camera from a GL texture works quite well. Look here

2015-11-08 16:06:49 -0600 asked a question Imread() Umat for OpenCL in Android doesn't work!?

I was finally able to compile the code from Tutorial 4 of Android OpenCV here.

It works flawlessly on my Xperia Z1 and speeds up processing by about 4-5 times. The step that converts the GL image to a UMat is here:

cv::UMat uIn, uOut, uTmp;
    cv::ocl::convertFromImage(imgIn(), uIn);
    LOGD("loading texture data to OpenCV UMat costs %d ms", getTimeInterval(t));
    theQueue.enqueueReleaseGLObjects(&images);

    t = getTimeMs();
    //cv::blur(uIn, uOut, cv::Size(5, 5));
    cv::Laplacian(uIn, uTmp, CV_8U);
    cv::multiply(uTmp, 10, uOut);
    cv::ocl::finish();
    LOGD("OpenCV processing costs %d ms", getTimeInterval(t));

    t = getTimeMs();
    cl::ImageGL imgOut(theContext, CL_MEM_WRITE_ONLY, GL_TEXTURE_2D, 0, texOut);
    images.clear();

How could I use the code above with an image loaded from the file system using imread? I've tried the flags like ACCESS_READ, ACCESS_RW, etc., but the code slows down - I think because it falls back to CPU processing. Any ideas?
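
For reference, a minimal sketch (not from the original post; the file path and the Laplacian step are placeholders) of the two usual ways to get an imread() result into a UMat:

    #include <string>
    #include <opencv2/core.hpp>
    #include <opencv2/core/ocl.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/imgproc.hpp>

    void processFromFile(const std::string& path)
    {
        cv::Mat img = cv::imread(path, cv::IMREAD_GRAYSCALE);

        // Option 1: explicit copy into a UMat (forces the upload once).
        cv::UMat uIn;
        img.copyTo(uIn);

        // Option 2: wrap the Mat; the data may be uploaded lazily.
        // cv::UMat uIn2 = img.getUMat(cv::ACCESS_READ);

        cv::UMat uOut;
        cv::Laplacian(uIn, uOut, CV_8U);
        cv::ocl::finish();  // make sure the device work is done before reading results
    }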

2015-11-08 15:57:03 -0600 commented question using OpenCv 3.0 with OpenCl 1.1 devices

I'm currently working with an Android device. It uses OpenCL 1.1 and it seems to work, but apparently only partially. The new OpenCL tutorial works and is quite fast! For writing the code you can look here. Example code (T-API, C++ and pure OCL) here

2015-11-08 10:31:58 -0600 asked a question Android OpenCL DFT vs. CPP Version is very slow!

Hey, I've started learning a bit about Android GPU programming and wanted to implement the DFT with the new T-API in OpenCV 3.0. My device is a Sony Xperia Z1, which runs OpenCL 1.1 (on Lollipop - I hope that doesn't cause problems? The Khronos website says that the Adreno 330 supports KitKat).

Comparing the two versions, the GPU version takes ~3200 ms and the CPU version ~2800 ms. What could be the issue? Any ideas?

UPDATE

I've changed the code to something easier:

UMat uIn, uOut, uTmp, uEdges, uBlur;
Mat input = imread( path+filename, IMREAD_GRAYSCALE );//.getUMat( ACCESS_FAST );
input.copyTo(uIn);
startTimer=clock();

GaussianBlur(uIn, uBlur, Size(1, 1), 1.5, 1.5);
Canny(uBlur, uEdges, 0, 30, 3);
stopTimer=clock();
imwrite(path+filename_result, uEdges);

cv::ocl::finish();
double elapse = 1000.0* (double)(stopTimer - startTimer)/(double)CLOCKS_PER_SEC;

Running the code the first time is slower than the second time, but it still takes exactly as long as the CPU implementation.

Any Ideas?
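
Not from the original post: one thing worth noting is that stopTimer is read before cv::ocl::finish(), so pending GPU work may not be fully accounted for, and the first run includes kernel compilation. A sketch of the same measurement with a warm-up pass and the queue flushed before the clock is stopped (the 5x5 Gaussian kernel is just an example value):

    #include <opencv2/core.hpp>
    #include <opencv2/core/ocl.hpp>
    #include <opencv2/imgproc.hpp>

    double timeCannyMs(const cv::UMat& uIn)
    {
        cv::UMat uBlur, uEdges;

        // Warm-up pass so kernel compilation is not part of the measurement.
        cv::GaussianBlur(uIn, uBlur, cv::Size(5, 5), 1.5, 1.5);
        cv::Canny(uBlur, uEdges, 0, 30, 3);
        cv::ocl::finish();

        double t0 = (double)cv::getTickCount();
        cv::GaussianBlur(uIn, uBlur, cv::Size(5, 5), 1.5, 1.5);
        cv::Canny(uBlur, uEdges, 0, 30, 3);
        cv::ocl::finish();  // wait for the device before stopping the clock
        return ((double)cv::getTickCount() - t0) * 1000.0 / cv::getTickFrequency();
    }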

2015-10-15 00:43:31 -0600 asked a question Using OpenCL and OpenCV in Android Studio (Tutorial 4)

I have to say that I'm not very familiar with OpenCL; I simply wanted to start programming to get an impression of how computation is accelerated on mobile devices (in my case an Xperia Z1, which has OpenCL drivers).

What I've tried so far are several tutorials like the one from OpenCV: http://docs.opencv.org/master/d7/dbd/...

and this here (and several other slides): http://de.slideshare.net/noritsuna/ho...

But I simply cannot get started with the OpenCL part. The OpenCV part is not a problem, but I can't get the OpenCL libraries etc. to work (placing the ".so" file from the phone? installing the Adreno SDK? ...). Does anybody know how I have to set up Android Studio and/or Windows to get started with the tutorial supported by OpenCV? Since the NDK module is part of the Gradle build options, it shouldn't be that hard?

Thank you very much!

2015-07-03 01:00:58 -0600 received badge  Enthusiast
2015-07-02 04:44:44 -0600 received badge  Student (source)
2015-07-02 02:12:52 -0600 asked a question Possible to align two Images ('nonlinear' warp) with OpticalFlow-Information? Super-Resolution Inline Hologram reconstruction

I have a set of several images which are shifted by small amounts in the X/Y direction. Starting from e.g. four low-resolution images, I want to reconstruct one high-resolution image. Therefore I need to determine the appropriate shift. Currently I'm using the Android version of calcOpticalFlowPyrLK(), which detects the shifts in the X and Y directions. Sample images can be found here

My setup to acquire is the following:

LED => Pinhole => Distance Z (light propagates) => transmissive biological sample => small distance z (interference) => Sensor

This represents an inline hologram acquisition as seen in papers like this one. My goal is to shift the LED in the X/Y direction. This causes a shift of the object/interference pattern on the sensor. By "re-shifting" the object and merging the LR images, I can get sub-pixel super-resolution. My problem is that the shift does not seem to be linear, due to a magnification that depends on the distance from the optical axis.

I was thinking that the camera calibration method could help, but didn't have the time to look into it.

Another way might be to use the GoodFeatureDetector, detect matches between LR1.jpg and LR2.jpg, and then "non-linearly" warp the second image to match the pixel positions. Does this make sense? Is this supported in OpenCV? I didn't find a good starting point.

My idea:

  1. Find features
  2. Use e.g. 20 points
  3. Try to bring point_i(x_i, y_i) in picture 2 to the position of point_i(x_i, y_i) in picture 1
  4. Sum/merge the pixels into an HR Mat

Right now there is something like "motion blur" in the resulting image. The registration works quite OK, but not well enough for super-resolution. ;)
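
Not from the original post - a rough sketch of steps 1-3 above (assuming 8-bit grayscale LR images; the point count and thresholds are arbitrary), tracking features with calcOpticalFlowPyrLK and warping with a homography, which can absorb a position-dependent magnification better than a pure translation:

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>
    #include <opencv2/video/tracking.hpp>
    #include <opencv2/calib3d.hpp>

    // Warp "moving" (e.g. LR2) onto the reference image (e.g. LR1).
    cv::Mat alignToReference(const cv::Mat& ref, const cv::Mat& moving)
    {
        std::vector<cv::Point2f> ptsRef, ptsMov;
        cv::goodFeaturesToTrack(ref, ptsRef, 200, 0.01, 10);

        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(ref, moving, ptsRef, ptsMov, status, err);

        // Keep only the points that were tracked successfully.
        std::vector<cv::Point2f> src, dst;
        for (size_t i = 0; i < status.size(); i++)
            if (status[i]) { src.push_back(ptsMov[i]); dst.push_back(ptsRef[i]); }

        // A homography models a projective distortion; RANSAC rejects bad matches.
        cv::Mat H = cv::findHomography(src, dst, cv::RANSAC);
        cv::Mat aligned;
        cv::warpPerspective(moving, aligned, H, ref.size());
        return aligned;   // ready to be summed/merged into the HR Mat
    }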

Thank you very much!

2015-06-13 12:41:15 -0600 asked a question Easy way to Convert Magnitude/Phase back to Real/Imag for DFT?

I'm facing a problem in OpenCV4Android. I'm trying to do a phase recovery of an incoming image, like the Gerchberg-Saxton algorithm does.

I'm propagating the light field with a "Fresnel propagator" along the Z axis. This works quite well. In Matlab I have a complex datatype with phase/magnitude as well as real/imaginary parts, and I have no problem switching back and forth, but in OpenCV the exact same operation doesn't seem to give a proper result: converting back and forth doesn't yield the same values.

I've written some code for the imaginary/real conversion like the one below (for the real part, the sin is simply replaced by a cosine):

Mat toImag(Mat magMat, Mat phaseMat) {

    Mat resultMat = new Mat(magMat.size(), magMat.type());

    for (int iwidth = 0; iwidth < magMat.width(); iwidth++) {
        for (int iheight = 0; iheight < magMat.height(); iheight++) {

            // note: the cast to int truncates the values read from the Mats
            int mag = (int) magMat.get(iwidth, iheight)[0];
            int phase = (int) phaseMat.get(iwidth, iheight)[0];
            double imag = mag * Math.sin(phase);

            resultMat.put(iwidth, iheight, imag);
        }
    }
    return resultMat;
}

The alternative would be the polarToCart function, but I'm not sure whether I can use it to convert the Euler (polar) representation of a complex number into the component representation. Does anybody know how to solve this issue?
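
For reference, a small sketch (shown in C++, not from the original post; the Java bindings expose the same operation as Core.polarToCart): polarToCart computes exactly mag*cos(phase) and mag*sin(phase) per element, so it can replace the manual loop above. The inputs are expected to be floating-point Mats with the phase in radians.

    #include <opencv2/core.hpp>

    // re = mag .* cos(phase), im = mag .* sin(phase)
    void polarToComplex(const cv::Mat& mag, const cv::Mat& phase,
                        cv::Mat& re, cv::Mat& im)
    {
        cv::polarToCart(mag, phase, re, im);
        // The inverse direction is cv::cartToPolar(re, im, mag, phase).
    }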

2015-05-08 23:48:51 -0600 answered a question Super resolution on Android

A bad question maybe, but how do I actually build this JNI part? Would I just take one of the Android OpenCV native tutorials on the web, replace the C code with the super-resolution code, build the header and compile? I would love to see something in my Java code like video.superresolution(input, output, params). Is there a good tutorial that shows this procedure? I'm familiar with the NDK, but Android Studio is hard to work with.
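
Not from the original answer - a hypothetical sketch of what such a JNI wrapper could look like (class, method and package names are placeholders), passing the output Mat by its native address the same way the generated OpenCV bindings do:

    #include <jni.h>
    #include <opencv2/core.hpp>
    #include <opencv2/superres.hpp>

    extern "C" JNIEXPORT void JNICALL
    Java_de_example_SuperRes_nativeNextFrame(JNIEnv* env, jobject,
                                             jstring videoPath, jlong outAddr)
    {
        const char* path = env->GetStringUTFChars(videoPath, nullptr);
        cv::Mat& out = *(cv::Mat*)outAddr;   // Java-side Mat passed by address

        cv::Ptr<cv::superres::SuperResolution> sr =
            cv::superres::createSuperResolution_BTVL1();
        sr->setInput(cv::superres::createFrameSource_Video(path));
        sr->nextFrame(out);                  // one super-resolved frame

        env->ReleaseStringUTFChars(videoPath, path);
    }

On the Java side this would be called with something like nativeNextFrame(path, out.getNativeObjAddr()), where out is an org.opencv.core.Mat.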

Thank you!

2015-05-08 17:08:31 -0600 received badge  Supporter (source)