
przemulala's profile - activity

2016-07-05 14:27:50 -0600 answered a question [Android] Get camera frame but show no preview

I have found the answer that worked for me, so I'm sharing it with the community:

You can make your preview (in this case a CameraBridgeViewBase) transparent by setting the view's alpha value, with 0 being completely invisible.

mOpenCvCameraView.setAlpha(0);

This should make your preview "disappear".
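For context, a minimal sketch of how the call can be wired up in onCreate() (the view id and the surrounding lines are placeholders; only the setAlpha() call is the actual fix):

mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.camera_view);
mOpenCvCameraView.setCvCameraViewListener(this);
mOpenCvCameraView.setAlpha(0); // 0 = fully transparent; frames still arrive in onCameraFrame()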

I have also posted it to the above-linked StackOverflow thread.

2016-07-03 11:38:35 -0600 received badge  Editor (source)
2016-07-03 11:36:32 -0600 asked a question [Android] Get camera frame but show no preview

Hi guys,

I'd like to process frames from the front camera on my Android device. But I've noticed that if the JavaCameraView (defined in the layout XML file) is not visible, the onCameraFrame() method of CameraBridgeViewBase.CvCameraViewListener2 (which my Activity implements) is never called.

I've found the same problem described on StackOverflow, but sadly the solution doesn't work for me.
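For reference, a stripped-down sketch of the setup I'm describing (class, layout and field names here are placeholders, not my exact code):

import android.app.Activity;
import android.os.Bundle;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.core.Mat;

public class CameraActivity extends Activity implements CameraBridgeViewBase.CvCameraViewListener2 {

    private CameraBridgeViewBase mOpenCvCameraView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_camera);
        mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.java_camera_view); // JavaCameraView in the XML
        mOpenCvCameraView.setCameraIndex(CameraBridgeViewBase.CAMERA_ID_FRONT);
        mOpenCvCameraView.setCvCameraViewListener(this);
        // mOpenCvCameraView.enableView() is called once the OpenCV library has loaded
    }

    @Override
    public void onCameraViewStarted(int width, int height) {}

    @Override
    public void onCameraViewStopped() {}

    @Override
    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        Mat rgba = inputFrame.rgba();
        // frame processing would go here - this is the callback that stops firing
        // when the JavaCameraView is not visible
        return rgba;
    }
}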

2016-06-24 06:51:05 -0600 asked a question SIGSEGV on Android with first use of OpenCV after restarting an Activity

I've run into a weird problem. To keep things short: I've written an Android app that uses OpenCV and has two Activities:

Activity1 previews the front camera and, on a user click, starts Activity2, sending the current frame's native address via an Intent.

Activity2 assigns a clone of the frame at the given address to its local Mat field and lets the user perform simple manipulations on it (namely the inRange method in HSV colorspace). The Mat is converted to a Bitmap and displayed in an ImageView: this happens when Activity2 starts (for the original captured frame) and after each manipulation of a SeekBar.
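To make the hand-off concrete, it looks roughly like this (a sketch; the extra key and field names are placeholders, not my exact code):

// Activity1: pass the native address of the current frame
Intent intent = new Intent(this, Activity2.class);
intent.putExtra("frame_addr", mCurrentFrame.getNativeObjAddr());
startActivity(intent);

// Activity2, in onCreate(): wrap the address and keep a clone in a field
long frameAddr = getIntent().getLongExtra("frame_addr", 0);
Mat received = new Mat(frameAddr);   // wraps the existing native data, no copy
mCapturedFrame = received.clone();   // deep copy owned by Activity2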

Activity1 is the parent of Activity2, so clicking the back button restarts Activity1. And now comes the weird part(s):

1) After some time (less than a minute) Activity1 crashes with

libc: Fatal signal 11 (SIGSEGV)

As you can imagine, this does not happen when Activity2 is never started.

2) If I restart Activity2 by simply returning to Activity1 and clicking again, onCreate() and the other regular lifecycle methods are called, and the app crashes with the same fatal signal error on the line where the conversion from Mat to Bitmap is performed:

Utils.matToBitmap(mCapturedFrame, bm);

So it's the first OpenCV function call after restarting an Activity. The best part is, mCapturedFrame exists and the aforementioned method is called only after the OpenCV library has loaded successfully. What's more, I release() the locally created Mats, as well as the field that holds the captured frame Mat (when I return from Activity2). The error looks like a memory leak in Activity1, but where?!

One last thing: when the app crashes, I can see this in Android Monitor:

OpenCV error: Cannot load info library for OpenCV

But how the heck is this even possible if the library loads successfully? I really can't see what I'm doing wrong here and would be glad for any suggestions. Feel free to download my Java code files from here: http://speedy.sh/RMPKH/thesis.zip

2016-06-16 08:28:54 -0600 commented answer OpenCL and GPU with Android OpenCV SDK

OpenCV 3.0 onwards implements UMat, which implicitly uses OpenCL acceleration. I'm assuming that writing native C++ code that uses UMat with OpenCV 3.0 (and higher) and exposing it to Android via JNI and the NDK gives you the power of OpenCL on Android devices with OpenCL-enabled GPUs. I'm not sure, but maybe someone can confirm that?

2016-06-16 06:59:47 -0600 commented question Android external camera + OpenCV

Sadly no :( It seems there are some solutions out there, but they are not universal, which means you have to root the device and, even worse, they tend to work only for very specific combinations of devices (external camera + smartphone). Another possibility is to buy a Chinese camera that provides Wi-Fi video streaming and try to convert the raw bytes into a video sequence. Neither option is easy.

2016-06-16 06:55:04 -0600 asked a question Expose native C++ OpenCV code to Android - but clever (== less work) way!

Hi fellow coders!

I've written a desktop OpenCV-based C++ app that can be described as the following black-box sequence:

Image from camera -> A LOT OF PROCESSING HERE -> std::tuple<std::vector<double> eyesPosition, bool leftEyeClosed, bool rightEyeClosed>

My goal is to reuse this C++ code in an Android app. I've started my research and found some basic material on OpenCV Android NDK development and JNI, plus some samples (namely "Tutorial 2 - Mixed Processing"). But it still isn't clear to me whether (and how!) I could use my C++ code as the black-box sequence described above. I still have a lot of work to do on the app, so I'd like to write as little JNI code as possible. The perfect solution for me would be to expose to JNI only the one C++ function I've written, which takes in an image and returns the tuple. This function calls a lot of other functions, creates classes and so on, but maybe I could provide the rest of my C++ code as a dynamic library or something similar? Sadly, I don't know the answer and haven't found anything like that (the provided samples, e.g. face detection, expose all of their C++ code to JNI, and I'd be grateful to avoid this).
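To illustrate, the Java side I have in mind would be as thin as this (purely a sketch; the library name, method signature and result encoding are made up for the example):

public class EyeTrackerBridge {
    static {
        // single shared library containing all of my existing C++ code
        System.loadLibrary("eyetracker");
    }

    // the only function exposed through JNI: takes the address of an input Mat and
    // returns {eyeX, eyeY, leftEyeClosed ? 1 : 0, rightEyeClosed ? 1 : 0}
    public static native double[] processFrame(long inputMatAddr);
}

// usage from Java:
// double[] result = EyeTrackerBridge.processFrame(frame.getNativeObjAddr());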

If you have any ideas, sample code, or tutorials that answer my needs, I'd be thankful if you could share those resources :)

2016-04-19 20:17:00 -0600 received badge  Student (source)
2016-04-06 05:15:15 -0600 received badge  Scholar (source)
2016-04-06 05:14:58 -0600 commented answer Which Machine Learning technique for simple f(x, y) = z?

On fake data, KNN works like a charm :) I've decided to use it for my final solution then :) Maybe I'll have to make some tweaks, but it's so elegantly simple (you only manipulate the k parameter for the number of closest points returned) that I believe I won't have much work with it. Thank you Tetragramm for your answer! I'm glad I didn't dive into SVM and MLP :) I'm accepting your answer :)
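For anyone finding this later, the usage is roughly this (a sketch with made-up numbers, assuming the OpenCV 3.x Java bindings):

// training data from calibration: each row is one (X, Y) sample, responses hold the area IDs (Z)
int numSamples = 9; // e.g. a 3x3 calibration grid
Mat samples = new Mat(numSamples, 2, CvType.CV_32F);
Mat responses = new Mat(numSamples, 1, CvType.CV_32F);
// ... fill samples/responses during calibration ...

KNearest knn = KNearest.create();
knn.train(samples, Ml.ROW_SAMPLE, responses);

// prediction for a new eye-to-marker position
Mat query = new Mat(1, 2, CvType.CV_32F);
query.put(0, 0, 0.42, 0.17); // example (X, Y)
Mat results = new Mat();
float predictedArea = knn.findNearest(query, 3, results); // k = 3 nearest neighbours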

2016-04-05 06:46:57 -0600 commented answer Which Machine Learning technique for simple f(x, y) = z?

I'll check it out (maybe even today) and let you know how it works. Thanks for your input!

2016-04-05 06:45:59 -0600 received badge  Supporter (source)
2016-04-04 15:41:33 -0600 asked a question Which Machine Learning technique for simple f(x, y) = z?

Hi guys,

I'm looking for the best way to predict which part of the screen the user is looking at (output) based on the eye-to-markers position (input). I've managed to describe the input as two doubles X, Y, and I'm working on a calibration method that will connect this pair with Z (the ID of the area the user is looking at for a particular X, Y pair) - that will be my "training data". So it's basically f(X, Y) = Z.

My initial idea was to use the machine learning module provided by OpenCV. I've read some articles, mainly on SVM and ANN (MLP), but I feel that's too much weaponry for such a trivial task. The most important aspect for me is fast training and prediction, as it's supposed to work in near real time.

My question is: which ML method should I choose for this purpose? Or maybe there's an even more trivial way of achieving my goal?

2016-03-21 06:53:17 -0600 received badge  Enthusiast
2016-03-13 01:28:04 -0600 asked a question Android external camera + OpenCV

Have you ever tried getting a live video stream from an external camera connected to an Android device in order to process it later with OpenCV functions?

I've found some libraries that seem to allow an external camera connection and video grabbing, e.g. UVCCamera, but OpenCV supports only the device's rear and front cameras. Maybe there's some other computer vision library for Android that will do the trick? Or some change to the device (like rooting or something)? I'll be glad for any clues!