2020-11-06 16:18:36 -0600 | received badge | ● Popular Question (source) |
2020-05-01 23:42:18 -0600 | received badge | ● Notable Question (source) |
2018-05-30 21:10:33 -0600 | received badge | ● Good Question (source) |
2018-05-09 14:53:43 -0600 | received badge | ● Popular Question (source) |
2015-10-24 08:22:38 -0600 | received badge | ● Supporter (source) |
2015-10-11 04:33:45 -0600 | commented question | First use of UMat does not run on GPU? So I tracked the problem through the debugger. During the creation of the Kernel in ocl_bilateralFilter_8u (smooth.cpp, line 2965; the Kernel is built in line 3030), OpenCV seems to build the OpenCL program in ocl.cpp, line 3234. The problem finally occurs in getProg (ocl.cpp, line 2580), in a line that ends up calling into ocl.cpp, line 3499. The debugger doesn't let me go deeper, but this operation is what takes up all the time. Since I can't step into the function, I have no idea what's going wrong or what to do. |
2015-10-11 03:10:38 -0600 | commented question | First use of UMat does not run on GPU? Output: I don't really see a problem unfortunately. Do you? |
2015-10-11 02:28:31 -0600 | commented question | First use of UMat does not run on GPU? Very strange... how does this even happen? I also compiled the program with VS2013. I used OpenCV 3.0 gold. I also tested the program on my desktop PC and my notebook, and on both I get the same result. I also tested your suggestion, but it didn't change anything. The problem doesn't seem to be the images; the problem is always the first use of a library function with a specific set of parameters. I'll try it on another PC later. |
2015-10-10 08:17:31 -0600 | received badge | ● Editor (source) |
2015-10-10 08:16:05 -0600 | asked a question | First use of UMat does not run on GPU? I'm experiencing some very strange behaviour using UMats. My intent is to speed up my algorithms by running OpenCV library functions on my GPU (AMD HD 7850, OpenCL capable). To test this I load a set of seven images and perform a bilateral filter or a Sobel operation on them. However, it seems that every time I use one of those functions with a new set of parameters, it is executed on the CPU first. Only from the second use of the same parameters onward does my program use the GPU. I compiled this with VS 2013 and OpenCV 3.0 gold. For example, using the same bilateral filter on all images: Output: The GPU utilization goes up, but only after about 2 seconds (i.e. after the first iteration has completed). However, when using a different set of parameters each time: Output: All of it is executed on my CPU. Also, those functions run extremely slowly: using Mat instead of UMat, these operations take only about 40 ms. I guess there's some crosstalk between the program and OpenCL until the library decides to use the CPU. The same behaviour shows up with Sobel: the first three operations are executed on the CPU. Then, iterations 4 to 7 finish on the GPU almost immediately, with the GPU utilization once again going up (because they use the same parameter set as iteration 3). Output: Is this a bug? Am I doing something wrong? Just applying each operation once at the start of the program to prevent this feels very hacky. Also, I don't know how long the parameter usages are "cached" (I use this word since I have no idea what happens in the background ... |
2015-07-30 07:33:41 -0600 | received badge | ● Enthusiast |
2015-07-28 07:50:15 -0600 | asked a question | Finding Connected Components in Natural Color Images I've been working on an application that extracts characters from natural images, i.e. color images with a lot of structure. Up to now I've been using the Canny edge detector and the Stroke Width Transform to extract components from the image. For comparison I also want to use a different method based on segmentation by color. Basically, what I want is to split my image into components consisting of neighboring pixels with similar color values. Following popular approaches to connected component labeling, I've iterated through the image and used Union-Find to merge similar regions. However, since I have natural images with a lot of structure, there are literally hundreds and hundreds of (mostly very small) components within one image. Note for example the structure of the trees: This makes the approach very slow (the first pass doing the raw labeling is very fast, but deciding which of up to thousands of regions to merge takes too much time). The problem persists even after filtering and using a coarser quantization. I also tried OpenCV's flood fill, which brings the great functionality of utilizing a mask. I started a flood fill from each pixel that was not yet assigned, which was quite fast. However, the mask uses uchar and therefore can't store labels bigger than 255, so I had to use multiple masks, which feels quite hacky. Also, flood fill is not very flexible regarding its similarity measure. OpenCV's connected components functionality can of course not be used, since I don't work on binary images. Does anybody know of a good approach for my problem? Maybe I just haven't found the right functions in OpenCV yet? |
2015-07-20 04:31:31 -0600 | received badge | ● Scholar (source) |
2015-07-15 12:31:03 -0600 | received badge | ● Nice Question (source) |
2015-07-14 04:11:54 -0600 | asked a question | Is there any reason not to use UMat? I'm currently writing a piece of software that uses various modules of OpenCV (some examples are edge detection via Canny, filtering operators, and optical flow, plus some of my own algorithms that work on OpenCV matrices). My question is: with the introduction of UMat in OpenCV 3, is there any reason to still use Mat? Currently I'm still using Mat everywhere (having only recently moved to 3.0), but trying out Farneback's optical flow method, I realized it's much faster with the GPU speedup of UMat. I love uniformity, so ideally I'd use ONLY UMat or ONLY Mat in my entire software. I'm now thinking about using UMat everywhere so that I won't have to convert between the two. Is this a good idea? Are there drawbacks to using UMat everywhere? I've read of some cases where using the GPU actually led to a loss of speed. Do these problems still exist in the gold release? |
2015-06-22 07:32:37 -0600 | received badge | ● Student (source) |
2015-06-22 07:24:15 -0600 | asked a question | NormalBayesClassifier Predict Errors I'm trying to get a NormalBayesClassifier running and by now have the impression that I'm using something about this class fundamentally wrong. So far, this is the (complete) code: If I try to run this, the program crashes (in Debug mode) at the predict statement with the following exception: Ouch. If I only test a single data case, I don't get a crash but still weird exceptions (sorry for the German, but you should get the point): Results in: Ultimately I want to do something like this: Needless to say, this also doesn't work. OpenCV prints an error for it: Can anyone see what I'm doing wrong? I use an SVM classifier and a KNN classifier in exactly the same way, and they work like a charm. I'm using the OpenCV 3 gold release, by the way (but had the same errors in RC1). |