
scorpeeon's profile - activity

2019-04-11 05:32:30 -0600 received badge  Famous Question (source)
2018-05-25 08:48:55 -0600 received badge  Good Question (source)
2016-07-04 10:08:27 -0600 received badge  Notable Question (source)
2016-05-19 08:47:12 -0600 received badge  Good Question (source)
2015-08-31 21:00:45 -0600 received badge  Popular Question (source)
2015-06-21 04:18:42 -0600 received badge  Enthusiast
2015-06-17 04:06:15 -0600 received badge  Nice Question (source)
2015-06-12 08:21:25 -0600 asked a question OpenCV 3.0 - list of GPU accelerated functions through T-API?

Hello!

So I've read about the new T-API (transparent API) and how it makes it very easy to use essentially the same code regardless of whether it will actually run on the CPU or the GPU (through OpenCL / OCL). It sounds really great, and I've read there are currently about 100 functions that can take advantage of the GPU this way. My question: is there an up-to-date, complete list of these functions anywhere?

Converting Mats to UMats (or the other way around) has some overhead, I imagine, so I would like to check which of the functions I use can take advantage of the GPU, to evaluate whether it's worth making the transition to UMats right now.
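
For context, this is roughly how I understand the T-API is meant to be used - a minimal sketch assuming OpenCV 3.0 headers, where "input.png" and the blur parameters are just placeholders:

#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    // Use the OpenCL path only if a device is actually available;
    // otherwise the same calls silently run on the CPU.
    cv::ocl::setUseOpenCL(cv::ocl::haveOpenCL());

    cv::Mat input = cv::imread("input.png");            // placeholder input image

    cv::UMat uInput, uBlurred;
    input.copyTo(uInput);                                // Mat -> UMat (upload/copy overhead)

    // Same function call as with cv::Mat; dispatches to an OpenCL kernel when possible.
    cv::GaussianBlur(uInput, uBlurred, cv::Size(5, 5), 1.5);

    cv::Mat result = uBlurred.getMat(cv::ACCESS_READ);  // UMat -> Mat for CPU-side use
    return 0;
}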

Thanks

2015-06-11 09:26:05 -0600 commented question OpenCV 3.0 - investigate crash in release mode

Yeah, that's true, you usually want a statically linked library for distributed apps where stability is a major concern (though I can see scenarios where dynamic linking would be practical, so OpenCV could be updated without having to recompile the app or do anything with it), and this application is not really at that stage yet, it was just an example. Nevertheless, dynamically linked libs should work too... If I could find out where the error comes from, I could at least post a bug report.

2015-06-11 08:07:16 -0600 commented question OpenCV 3.0 - investigate crash in release mode

Thanks for the answer. Well, I guess that's one way to look at it, but that can be quite inconvenient if you want to distribute an application with OpenCV. I mean, you can't expect everyone to build OpenCV; there's a reason prebuilt libraries are available, and I've never had such an issue with them. I don't think this is normal, it rather feels like a bug that should be fixed somewhere... I just can't get around it.

2015-06-11 07:45:33 -0600 asked a question OpenCV 3.0 - investigate crash in release mode

Hello!

I work on an application that uses many OpenCV functions, and I'm trying to make the transition to 3.0 now that it's been released. I downloaded it from here: http://sourceforge.net/projects/openc... (I use VS2013.) Since 3.0 is mostly compatible with 2.x code, I got my code compiling within a short time.

The strange thing is that it crashes in 64-bit release mode: Unhandled exception at 0x000007FED0483430 (opencv_world300.dll) in XDesktopDev.exe: 0xC0000005: Access violation reading location 0x0000000000000048.

It happens in my code on a line where a Kalman filter is used (presumably within the predict() function?): Mat prediction = kalmanFilter.predict();
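
For reference, here is a minimal sketch of the kind of KalmanFilter usage involved (the matrix sizes and noise values are only illustrative, not my actual setup):

#include <opencv2/core.hpp>
#include <opencv2/video/tracking.hpp>

int main()
{
    // Illustrative 4-state (x, y, vx, vy) / 2-measurement (x, y) filter
    cv::KalmanFilter kalmanFilter(4, 2, 0, CV_32F);
    cv::setIdentity(kalmanFilter.transitionMatrix);
    cv::setIdentity(kalmanFilter.measurementMatrix);
    cv::setIdentity(kalmanFilter.processNoiseCov, cv::Scalar::all(1e-4));
    cv::setIdentity(kalmanFilter.measurementNoiseCov, cv::Scalar::all(1e-2));
    cv::setIdentity(kalmanFilter.errorCovPost, cv::Scalar::all(1));

    cv::Mat prediction = kalmanFilter.predict();    // the call where the crash occurs for me
    cv::Mat measurement = (cv::Mat_<float>(2, 1) << 0.f, 0.f);
    kalmanFilter.correct(measurement);
    return 0;
}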

So I tried to debug the issue by running it in debug mode, but strangely, it doesn't crash there. I found this quite interesting, since debug mode usually has additional asserts, so I expected I could reproduce it in debug mode.

The next thing I tried, to get additional information about the crash, was to build OpenCV myself using CMake (so I would get the pdb files needed for debug info). I used the source files from the same package that I used for the binaries (linked above). I kept mostly the default options for building; what I changed is that I selected to build opencv_world, because that's what is distributed in the prebuilt OpenCV binaries (build\x64\vc12\lib) and that's what I was using when the crash happened.

And here comes the strange part: with my own build, the crash doesn't happen (in the same 64-bit release configuration). With that, I've pretty much run out of ideas about how to investigate this crash. The only thing I can think of now is that maybe the prebuilt binaries were built with different options (additional optimizations turned on?) that were off for me, but then again, I don't know what exact options were used for the prebuilt libs - is this information available anywhere?
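
For comparison, my own configure step looked roughly like this (from memory; the generator name assumes the VS2013 64-bit toolchain, and "path\to\opencv\sources" is a placeholder):

cmake -G "Visual Studio 12 2013 Win64" -DBUILD_opencv_world=ON path\to\opencv\sources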

BTW I also tried to run it in 32-bit, and it doesn't crash that way either, neither in release nor in debug mode.

So it looks like this crash only happens in 64-bit, only in release mode, and only with the prebuilt libs. And I've found no way to investigate it further. Any tips?

Thanks

2014-08-20 02:42:21 -0600 received badge  Nice Question (source)
2013-12-05 11:12:37 -0600 received badge  Organizer (source)
2013-12-05 10:35:35 -0600 asked a question Best way/place to get timestamp on Android?

So I have an Android application that does real-time image processing using native OpenCV through JNI. Getting very precise timestamps is critical in my application, because the app is tracking an object, and the data (position and timestamp) from multiple devices running the app could be used to triangulate the 3D position of the tracked object. There is an OpenCV method that can be used to get a quite precise timestamp: Core.getTickCount(). The main question is when to call this getTickCount() method.
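
For reference, the raw tick count only becomes a usable timestamp after dividing by the tick frequency; on the native (C++/JNI) side, the counterpart of Core.getTickCount() would look roughly like this (a sketch, not code from my app):

#include <opencv2/core.hpp>

// Convert the raw tick count to milliseconds so successive timestamps can be compared.
static double nowMilliseconds()
{
    return 1000.0 * static_cast<double>(cv::getTickCount()) / cv::getTickFrequency();
}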

When taking a picture, the onShutter() method of the Android camera would be a proper place to do this (as far as I understand), but as far as I know it is only available when actually taking pictures, not for preview images. Since the app is doing fast real-time object tracking, using the preview images would be the preferable way to go (though I'm considering taking a photo every time just to be able to use the onShutter() event for the timestamp, but that would probably slow down the application quite a bit - I'll experiment with that later and see how it goes).

In OpenCV for Android (as far as I know), the preferred way to get the camera preview image into OpenCV is through the onCameraFrame(CvCameraViewFrame inputFrame) method of the CvCameraViewListener2 interface. The problem is that this is only called after Android delivers the preview to the onPreviewFrame(byte[] frame, Camera arg1) method (of PreviewCallback, which OpenCV implements in JavaCameraView), which then notifies a CameraWorker running in a separate thread; that hand-off can take some time, adding further to the delay.

And even the onPreviewFrame method is documented as "Called as preview frames are displayed.", so it might have a significant delay compared to the actual time the preview picture was taken - though it's probably much closer to the real time of the image than onCameraFrame. Also, if I get the image in the onCameraFrame method, I can't really use even the onPreviewFrame method to get the timestamp, because it is not guaranteed that the onCameraFrame call belongs to the same preview image as the last onPreviewFrame call; as I already mentioned, the worker that calls onCameraFrame runs in a separate thread which might not always get notified (more details here).

So what do you think I should do to get the most precise timestamps for my preview images? Is there any other simple way to do this that I'm missing? Any help is appreciated.

2013-11-08 14:29:39 -0600 received badge  Editor (source)
2013-11-08 14:27:02 -0600 received badge  Student (source)
2013-11-08 14:22:52 -0600 asked a question Camera Preview slow on Android OpenCV +possible bottleneck found

So I noticed a long time ago, and have wondered ever since, why (on multiple devices) when using JavaCameraView (not doing any processing, just displaying preview images from the camera), the smoothness of the camera preview is generally lacking compared to "regular" Android apps using the camera, or even Android OpenCV apps where, instead of using JavaCameraView, the camera image is manually converted to formats supported by OpenCV (like this one: https://github.com/ikkiChung/MyRealTimeImageProcessing).

As I understand it, the "recommended" way to use the camera in OpenCV apps on Android (according to the tutorials and sample apps) is to use a CameraBridgeViewBase implementation (JavaCameraView or NativeCameraView), implement the CvCameraViewListener2 interface and override the onCameraFrame() method, where you can process the camera frame.

I found a possible bottleneck, which could be the deliverAndDrawFrame method of CameraBridgeViewBase.

The problem might be that in the implementation of JavaCameraView, the CameraWorker (which calls deliverAndDrawFrame, which in turn calls onCameraFrame()) runs on a separate thread, and while they're synchronized, the CameraWorker does not always receive the frames from onPreviewFrame - every now and then it skips frames.

I verified this by making a class that extends JavaCameraView, overriding onPreviewFrame and logging the calls of this method:

@Override
public void onPreviewFrame(byte[] frame, Camera arg1) {
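    // Log every camera callback so frames skipped by the worker thread show up in logcat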
    Log.i(TAG,"onPreviewFrame called");
    super.onPreviewFrame(frame, arg1);
}

I also logged the onCameraFrame occurrences, calling Log.i(TAG,"onCameraFrame called"); in it.

If everything is working fine, this should output the two log messages alternately, like this:

onPreviewFrame called
onCameraFrame called
onPreviewFrame called
onCameraFrame called
etc.

While this is true in about 90% of the cases in my test, I can find occurrences where "onPreviewFrame called" appears multiple times before another "onCameraFrame called" message, like this:

onPreviewFrame called
onCameraFrame called
onPreviewFrame called
onPreviewFrame called
onCameraFrame called

This means there are frames from the camera that NEVER make it to the onCameraFrame method. I suspect this frame skipping indicates that the deliverAndDrawFrame method's execution time sometimes exceeds the time between camera frames, which I find surprisingly high considering it shouldn't be doing any heavy processing work. The smoothness of other applications (not using JavaCameraView) suggests that this could be done more efficiently. Do you think there's any easy way to make it faster? Doing real-time processing is really hard if just receiving frames from the camera is already not real-time.

I only tested JavaCameraView, but since I think NativeCameraView (which also doesn't appear smooth enough) uses the same deliverAndDrawFrame method, it's probably also affected.

Any help/insight is appreciated.