2018-08-17 13:42:43 -0600 | commented question | Has anyone built OpenCV with CUDA compute 7.0? Ah damn, thanks for that additional datapoint. So it's at least not fundamental to opencv in general, so I wonder if it |
2018-08-15 17:25:20 -0600 | asked a question | Has anyone built OpenCV with CUDA compute 7.0? Has anyone built OpenCV with CUDA compute 7.0? I'm currently attempting to build OCV with CUDA compute level 7.0 enabled |
2018-02-08 13:40:59 -0600 | answered a question | Why can I get contours from a 'thresh' image but not from a 'mask' image when using VideoCapture? You've given this information about the size of the image and whatever, but the real question is: are there contours in |
2017-03-07 15:45:12 -0600 | commented answer | No effect from using cuda::Stream? I get that part, but what I was really asking about is the fact that I would have assumed that the addition of the stream would make the function call non-blocking, so a time measurement around it would show essentially zero time for launch (analogous to launching a CUDA kernel normally, or even a standard c++ thread). If the function call waits for the operation to complete even with a stream included, then it wouldn't be possible to have two different streams doing two different things, because you have to be able to launch them both in a non-blocking fashion. |
2017-03-06 11:20:33 -0600 | commented answer | Comparing two images whether same or not Why are you spamming comments all over ancient posts? You're not helping. You're causing clutter. |
2017-03-06 00:38:56 -0600 | commented answer | How to remove the small blobs? The hole will still be there if you use the code I provided at the top of the post. Did you imshow newImage? |
2017-03-06 00:26:37 -0600 | answered a question | How to remove the small blobs? If the problem you're asking for help with is the contour being filled in, probably the easiest way to get what you want is to use that filled-in contour as a mask on your image; then you'll have the contour you want in newImage. Also, if you want the largest contour, it's probably easiest to just sort the vector of contours by area (I use a small helper function for that); the largest one will then be the first one in the vector, and you can use it to make the mask. |
2017-03-05 23:51:25 -0600 | asked a question | No effect from using cuda::Stream? I was just experimenting with using Streams with cuda functions to see what kind of performance impact they have, but it seems like very often the function takes just as long to launch with and without using the stream parameter. I was expecting the call with the stream to launch an asynchronous operation which would essentially take 0 time, but it seems to be the full time of the operation itself. For example, with a GpuMat uploaded, I timed the same call with and without the stream argument and I've seen no time difference whatsoever. Is there something I'm missing in how to use these? |
2015-12-25 15:55:08 -0600 | commented answer | how to use cv::cuda::Convolution::convolve in opencv? Yeah that's just the kernel size |
2015-12-25 15:45:31 -0600 | answered a question | how to use cv::cuda::Convolution::convolve in opencv? You first have to construct a Convolution object using the template size you want to use (or don't, and it will resize internally, but it will be slightly slower); then you can use it on whatever image you want. It's also always the same size as the input. |
2015-12-20 22:20:01 -0600 | answered a question | not rect ROI defined by 4 points I'd recommend thresholding the image to isolate the color you want, and then do countNonZero on that thresholded image. |
2015-12-19 13:04:17 -0600 | answered a question | Loading image and getting each pixel's color (C++) Check out the OCV tutorials...lots of good stuff there: http://docs.opencv.org/2.4/doc/tutori... |
2015-12-19 12:38:13 -0600 | answered a question | Trouble to compile opencv 3.0.0 solution (.sln) file with visual studio when the BUILD_OPENCV_WORLD is included in cmake (3.4.1) All opencv_world is is the individual modules combined into one library, just so you don't have to deal with as many files; it contains no functionality of its own. There's a known issue with building opencv_world, so just skip that and build everything else. You'll then have all the modules individually and everything will work as normal. |
2015-12-09 19:37:43 -0600 | answered a question | [Resolved] Error with opencv 3.0 It does look like the property sheet is correct as long as those are the correct paths. Is C:\opencv\build\x86\vc12\lib; the exact path to the libraries including capitalization and all that? If it is, I would have expected it to work. And you also added the opencv bin to the system path in the environment variables? |
2015-12-08 23:16:16 -0600 | answered a question | [Resolved] Error with opencv 3.0 That's almost certainly a problem with your VS property sheets. Do you have all the correct paths specified and the libraries included? |
2015-12-08 23:15:06 -0600 | commented answer | Missing opencv_rgbd library I'm not sure what you tried, but make sure that both the OCV3 and the contrib modules that you're using are the same version (i.e. don't download the repository itself, use the "release" tab at the top of the github page). If other contrib modules are the source of the error, you can always exclude them and just build rgbd. |
2015-12-07 21:21:18 -0600 | answered a question | Selective hole filling If the things you're trying to fill are consistently round as in your example, you can use findContours and use the returned hierarchy to only look at internal contours. Check them for circularity and only fill those that are circular enough. |
2015-12-07 20:50:13 -0600 | answered a question | Missing opencv_rgbd library RGBD is not part of the main opencv 3 release but is actually in the contrib modules, as you can see here: http://docs.opencv.org/master/#gsc.tab=0 If you download the contrib module release also, there's an option in cmake to include them. |
2015-12-07 20:45:33 -0600 | answered a question | computing FFT at a pixel in image with opencv That doesn't mean anything... An FFT gives you frequency information about an image. A single pixel does not have any sort of frequency so it doesn't have an FFT. |
2015-12-06 09:53:08 -0600 | answered a question | calculating how many times white pixels appear form frame difference You can use countNonZero() which returns an int of how many nonzero pixels are in a frame. To account for noise you could use some kind of minimum score that's required before you count the eyes as present, but just put that in your loop and you'll be set. |
2015-12-05 21:04:47 -0600 | received badge | ● Scholar (source) |
2015-12-05 21:02:55 -0600 | answered a question | "No OpenGL support" error when using build on another computer Just in case anyone else is dumb like me and somehow finds this: after messing around with this for 9 hours, somehow after posting this question I realized that despite the property sheets using the correct include and lib folders, and having added the bin to the system path, there was a previous opencv bin also in the path which hadn't been built with OpenGL. So it was using the libraries and headers from the new build but the dll's from the old. Deleting that fixed it. |
2015-12-05 20:16:35 -0600 | asked a question | "No OpenGL support" error when using build on another computer I'm not sure if anyone has had this same problem and found a solution, but I built OpenCV on one machine, and it functions completely normally with OpenGL. I've tried to copy the build to another computer where it runs normally with normal OpenCV functionality (including CUDA), but if I try to use OpenGL functionality (e.g. namedWindow options) then it gives the error that it wasn't built with OpenGL. The property sheets in VS have all the same libraries included also. Any advice would be much appreciated. |
2015-12-04 23:01:45 -0600 | asked a question | Is it possible to display a GpuMat directly? It seems very inefficient to download a GpuMat to system memory only to send it back to the gpu when it's displayed. Is there some way to display an image directly from the gpu? imshow on a GpuMat doesn't appear to work, but is there some kind of workaround anyone knows of? |
2015-10-27 11:49:19 -0600 | received badge | ● Enthusiast |
2015-10-22 13:41:03 -0600 | commented question | How do I interpret the information in the location output matrix from cuda::findMinMaxLoc? Yeah the normal minMaxLoc works fine, but I was kinda just interested in figuring out what I was missing about findMinMaxLoc, because unless I'm completely missing something, it seems completely broken. Returning a 1x2 Mat makes no sense and the documentation doesn't explain any expected behavior either. |
2015-10-20 11:11:54 -0600 | received badge | ● Critic (source) |
2015-10-20 11:10:46 -0600 | asked a question | Would subtracting the phases of two images be a superior difference metric than subtracting the images directly? I'm hoping someone can sanity check this idea as I am admittedly a bit of a noob when it comes to working with FFT's. Say I have two (registered) images of two of the same object and want to use one as a baseline to check for differences in quality control (looking for scratches and whatnot). My initial naive approach is to just subtract the two images directly and then whatever remains can be treated as defects. However, this method is subject to error in the presence of illumination differences. I'm thinking that I would be better served if I take the FFT of the images, and subtract only the phase information, and then use the IFFT of that result as the defect map, and that this should hopefully eliminate false positives due to lighting. Does this seem like a reasonable assumption or is there some kind of detail that I'm overlooking? Thanks for any advice you can offer! |
2015-10-16 15:02:32 -0600 | received badge | ● Editor (source) |
2015-10-16 14:59:38 -0600 | asked a question | How do I interpret the information in the location output matrix from cuda::findMinMaxLoc? This may be a stupid question, but when I run cuda::findMinMaxLoc it produces a 1x2 32FC1 Mat for "location". Apparently this somehow contains information about the locations of the min and the max, but I have no clue how 2 digits are giving me coordinates for two points. Any help is appreciated. |
2015-10-16 09:44:54 -0600 | commented question | FindContours() application has stopped working Yes please post your code along with the error you're seeing. |
2015-10-16 04:30:09 -0600 | received badge | ● Teacher (source) |
2015-10-15 15:12:49 -0600 | answered a question | When I use OpenCV 2.4,the image does not load! Is f.jpg in the working directory of your program? Usually you want to put the full path to the image there just to ensure you're accessing the right location. My guess is you're not looking in the right place for the image, because the code looks fine overall. |
2015-10-15 15:12:01 -0600 | commented question | When I use OpenCV 2.4,the image does not load! The last two comments are completely incorrect. b = img.empty() will assign whatever is returned to b, and then b will be evaluated by the if statement. So if img.empty() is false then b will be set to false and you will not enter the if. The if statement is perfectly fine as is. |
2015-10-14 14:24:36 -0600 | commented question | cannot open source file On step 6, is that a copy-paste of what you have? Because there should be no 'f' at the front, and it should be "..\..\include" rather than "....\include" |