2018-09-12 13:13:32 -0600 | received badge | ● Enthusiast |
2018-06-17 19:48:15 -0600 | asked a question | Why is it so hard to install OpenCV for python on Mac? Why is it so hard to install OpenCV for python on Mac? I use Ubuntu at home, I was able to get up and running with OpenC |
2017-07-12 19:26:20 -0600 | received badge | ● Notable Question (source) |
2017-06-13 11:11:00 -0600 | received badge | ● Taxonomist |
2016-03-21 05:52:20 -0600 | received badge | ● Nice Question (source) |
2016-02-01 06:56:16 -0600 | received badge | ● Popular Question (source) |
2014-07-07 03:21:40 -0600 | received badge | ● Nice Answer (source) |
2013-06-27 12:40:20 -0600 | received badge | ● Teacher (source) |
2013-05-30 18:35:57 -0600 | commented answer | OpenCV for Windows (2.4.1): Cuda-enabled app won't load on non-nVidia systems ...ran out of characters to ask questions... Note some of my asterisks in the previous comment were interpreted as markup for italics. Let me know if you are confused. Another question: if I do need to go the dual-build route, will I be able to just deal with two different versions of opencv_gpu.dll? Or is it two different versions of everything? I do recall that when building OpenCV, before kicking off the whole build, I did a test of just opencv_core.vcproj, and that did involve some GPU matrix stuff, so I'm thinking maybe the answer is no. Would the files cudart32_50_35.dll, cublas32_50_35.dll, cufft32_50_35.dll, etc., be considered "drivers" that I could reasonably expect my end users to download and install before installing my stuff? (Even if they don't have NVIDIA cards?) |
2013-05-30 18:28:00 -0600 | commented answer | OpenCV for Windows (2.4.1): Cuda-enabled app won't load on non-nVidia systems I'm having the same problem (except for me it's OpenCV 2.4.4, which I built with the 5.0 toolkit, so I'm having problems with cudart_32_50_35.dll not being present on other computers that I want to install my app onto). What's the point of cv::gpu::getCudaEnabledDeviceCount(), if not to enable you to write code which will work on both CUDA and non-CUDA machines? Are there linker commands or Visual Studio switches that will statically link opencv_gpu.dll with the necessary cuda.dll (I understand "statically link" as "roll into")? Do I need to include the CUDA runtime DLL with my install? (Checking dependencies with dumpbin, I think I might also need to include cudafft.dll, etc.) |
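For what it's worth, the runtime check mentioned in the comment above is presumably meant to be used like this (a minimal sketch against the OpenCV 2.4 gpu module; getCudaEnabledDeviceCount() returns 0 both when no CUDA device is present and when OpenCV was built without CUDA support):

```cpp
// Sketch: choose a CUDA or CPU code path at runtime (OpenCV 2.4 gpu module).
#include <opencv2/gpu/gpu.hpp>

bool cudaAvailable()
{
    // Returns 0 if there is no CUDA-capable device, or if this OpenCV
    // build has no CUDA support compiled in.
    return cv::gpu::getCudaEnabledDeviceCount() > 0;
}
```

The catch, as the comment points out, is that this check only helps once opencv_gpu.dll has already loaded; if the CUDA runtime DLLs it depends on are missing, the process fails before any such check can run.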
2013-05-15 17:23:26 -0600 | answered a question | Problems building 2.4.4 64-bit vs9 with GPU/CUDA Well, not that anybody cares (thanks for nothin', OpenCV forums!), but I finally found a useful link via Stack Overflow; see the tutorial here: http://blog.cuvilib.com/2011/03/22/how-to-build-opencv-2-2-with-gpu-cuda-on-windows-7/ The only thing I had to change from that tutorial was the generator ("... Win64"). |
2013-05-02 09:57:12 -0600 | answered a question | What are the SURF support restrictions? My recommendation: ditch SURF and just use ORB. It's faster, gets better-quality matches, and isn't patent-restricted like SIFT and SURF. Win, win, win. You shouldn't have to change much code if you are using OpenCV capabilities end-to-end for keypoints, descriptors, and matching: just a different call to find keypoints and generate descriptors, and switch your descriptor matcher to a Hamming-distance one instead of L2. |
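The switch described in the answer above can be sketched roughly like this (OpenCV 2.4 API; the image filenames are placeholders):

```cpp
// Sketch: ORB keypoints/descriptors + Hamming-distance matching (OpenCV 2.4 API).
// "img1.png" / "img2.png" are placeholder filenames.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

int main()
{
    cv::Mat img1 = cv::imread("img1.png", 0);   // load as grayscale
    cv::Mat img2 = cv::imread("img2.png", 0);

    cv::ORB orb(1000);                          // up to 1000 keypoints
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat dsc1, dsc2;
    orb(img1, cv::Mat(), kp1, dsc1);            // detect keypoints + compute descriptors
    orb(img2, cv::Mat(), kp2, dsc2);

    // ORB descriptors are binary strings, so match with Hamming distance, not L2.
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<cv::DMatch> matches;
    matcher.match(dsc1, dsc2, matches);
    return 0;
}
```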
2013-05-02 09:51:23 -0600 | asked a question | Problems building 2.4.4 64-bit vs9 with GPU/CUDA Hey, I looked around for a while but could not find an answer to my specific problem. I was using the binary distribution of OpenCV 2.4.4, but that is not built with GPU/CUDA support, so I am building it myself. I got the 32-bit version built OK, but I am having trouble with the 64-bit one. The InstallGuide has a cryptic statement: "generate solutions using CMake, as described above. Make sure, you chose the proper generator (32-bit or 64-bit)". Well, cryptic to me at least, since I'm a newbie at CMake. It took me a couple of hours of searching to discover that I needed the Win64 generator, but it seems that is not quite enough. My debug and release builds each took hours, but produced almost nothing in the end. I dove into modules/core and tried to get started building just opencv_core.vcproj, but it dies, apparently because while the OpenCV code is being built 64-bit, the CUDA part is only 32-bit. Is there some extra switch I need to give CMake to fix this? Here's the error: More info from the build log: apparently this build also invokes CMake internally, and I don't see any 64-bit flags. How do I give flags to CMake in the first place, so that the inner CMake will use the right flags? See here: |
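For reference, the 64-bit configure step the question is wrestling with looks roughly like this. This is a config fragment under assumptions: the generator name is the standard CMake one for VS2008 x64, but the CUDA_64_BIT_DEVICE_CODE option comes from CMake's FindCUDA module and may or may not be needed (or spelled identically) in this OpenCV/CMake combination:

```shell
# Sketch: configure a 64-bit VS2008 OpenCV build with CUDA from the source root.
mkdir build64
cd build64
cmake -G "Visual Studio 9 2008 Win64" -D WITH_CUDA=ON ..
# If nvcc still emits 32-bit device code, this FindCUDA option may help
# (hypothetical for this setup -- verify against your CMake version):
#   cmake -G "Visual Studio 9 2008 Win64" -D WITH_CUDA=ON -D CUDA_64_BIT_DEVICE_CODE=ON ..
```

Flags are passed to the top-level CMake invocation with `-D NAME=VALUE`; an inner CMake run inherits the cache of the build directory it is pointed at.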
2013-04-08 15:29:54 -0600 | commented answer | ORB_GPU not as good as ORB(CPU) You're a genius! Don't know why I didn't figure that out myself! :) |
2013-04-08 14:07:33 -0600 | received badge | ● Scholar (source) |
2013-04-08 13:51:08 -0600 | commented answer | ORB_GPU not as good as ORB(CPU) OK thx, I clicked the "accept" button, but got an error: ">50 points required to accept or unaccept your own answer to your own question." Maybe you can do me the favor of submitting a stub answer that I can accept? |
2013-04-08 13:02:37 -0600 | received badge | ● Self-Learner (source) |
2013-04-08 12:13:08 -0600 | answered a question | ORB_GPU not as good as ORB(CPU) Hopefully by now the waiting period has passed and I can answer my own question: set cv::gpu::ORB_GPU::blurForDescriptor = true; see my comments above. |
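In code, the fix in the answer above amounts to flipping one public member on the GPU extractor (OpenCV 2.4 gpu module; the keypoint count is a placeholder):

```cpp
// Fix: enable the pre-descriptor blur on the GPU path so it matches cv::ORB,
// which always applies a GaussianBlur before computing descriptors.
#include <opencv2/gpu/gpu.hpp>

cv::gpu::ORB_GPU makeOrbGpu()
{
    cv::gpu::ORB_GPU orb(1000);     // up to 1000 keypoints (placeholder)
    orb.blurForDescriptor = true;   // defaults to false; cv::ORB has no unblurred mode
    return orb;
}
```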
2013-04-05 15:06:50 -0600 | commented question | ORB_GPU not as good as ORB(CPU) (b) It would be nice if ORB_GPU had an interface that allowed computing descriptors only, from passed-in keypoints, like cv::ORB provides. (c) The quantity and quality of ORB_GPU matches are still worse than ORB's; now that I figured out about the blur, I'd ballpark more like 30% worse instead of 95% worse. Is this an understood behavior? Are there known mitigations? I'm still interested in discussing this; please, anybody, comment with feedback. I won't be able to check again until Monday, but I'll be back to see if anybody provided any more insight. cheers! |
2013-04-05 15:04:06 -0600 | commented question | ORB_GPU not as good as ORB(CPU) Crap, I found the solution and typed in a fantastically insightful (and deeply emotional) answer, and lost it because as a new user I can't answer my own question for 2 days. Now you're stuck with this: The answer is BLUR. cv::ORB applies a GaussianBlur (about 20 lines from the end of orb.cpp) before computing descriptors. There is no way to control this through the public interface. cv::gpu::ORB_GPU has a public member bool blurForDescriptor, which is constructed as false by default. When I set it to true instead, I find that min/avg/max Hamming distance drops to 0/7.2/30 bits, which seems much more reasonable. Follow-on questions: (a) Shouldn't cv::gpu::ORB_GPU default blurForDescriptor=true to match cv::ORB's (only) behavior? |
2013-04-05 13:53:21 -0600 | received badge | ● Editor (source) |
2013-04-05 13:15:52 -0600 | commented question | OpenCL BruteForceMatcher slow and faulty Wait, reading closer, it looks like you have exactly the opposite problem from me; your test above seems to show kp/dsc computed by the CPU implementation of ORB, and your problem is with the GPU matcher being not as good as the CPU matcher. Never mind. |
2013-04-05 13:07:07 -0600 | asked a question | ORB_GPU not as good as ORB(CPU) Hi all, I have some code working decently with the CPU implementation of ORB (from modules/features2d); now I am experimenting with ORB_GPU, hoping to have a 2-way implementation: users with CUDA get faster performance, users without still get good quality. Problem is, the keypoints/descriptors returned by ORB_GPU are not yielding a sufficient number of correct matches on exactly the same data where the CPU version does. I understand that synchronization issues etc. may cause the GPU results to differ from the CPU's, but I would hope that the quality of the results would be comparable. Any tips? In particular, is there any way I can wrangle the ORB_GPU interface to compute descriptors for keypoints found by the CPU ORB? Maybe then I could isolate the problem to either keypoint extraction or descriptor computation. Dirty details: I just downloaded CUDA 5.0 and built OpenCV 2.4.4 for myself with VS2008 (previously I was using prebuilt OpenCV 2.4.4, but I guess that was built with HAVE_CUDA=0). For my development testing I have obtained a rather old, low-end card (Quadro FX 4800, capability level 1.3). I am using ORB/ORB_GPU only for keypoints/descriptors. I have written my own matching code (CPU-based) which is used after the kp/dsc are extracted. I am using the same computer/compiler/data/etc. for testing each way, just recompiling with/without the GPU code path. Here's a snippet: I am getting some results, so I must have compiled/linked/etc. OK, but the results are significantly worse with GPU. In particular, with my baseline "easy" unit test case my matcher is detecting 34 matches (out of the 1000 kp/dsc per image), and GPU is yielding 5-9 (not deterministic -- which is OK if I can get it to be reliably good). After this matching I run RANSAC to find a maximal subset that fits well to a homography, and the CPU kp/dsc winds up with 20 correct matches, but GPU never yields better than 4 (i.e. a trivial homography fit, and not all correct matches). Any feedback would be appreciated! UPDATE: I noticed that the CPU implementation offers the optional parameter useProvidedKeypoints=false -- so I modified my code to ignore the ORB_GPU descriptors and let ORB(CPU) compute descriptors ... (more) |
2013-04-05 12:34:56 -0600 | commented question | OpenCL BruteForceMatcher slow and faulty Hi, I am experiencing maybe a similar problem, also can't find an answer. Have you considered whether ORB_GPU is producing the same quality of keypoints/descriptors as the CPU version? In my case I have a simple if/then to compute kp/dsc either with ORB_GPU or ORB, and then my own custom matching code (CPU-based). Using the CPU ORB I get sufficient matches, but using ORB_GPU I get many fewer. Anyways, perhaps the matcher is not the problem, perhaps ORB_GPU somehow returns lower-quality kp/dsc. UPDATE: I have submitted my question, maybe you want to watch it also for answers? http://answers.opencv.org/question/10835/orb_gpu-not-as-good-as-orbcpu/ |
2013-03-30 22:18:58 -0600 | received badge | ● Student (source) |
2013-03-30 14:28:23 -0600 | commented question | How can I imread just an ROI, not the whole image? Hmmm, maybe a feature request is in order? All it would take would be a flavor of imread with an extra ROI argument. |
2013-03-30 12:55:58 -0600 | asked a question | How can I imread just an ROI, not the whole image? Hey all, This seems like a really dumb question, but I've searched and cannot find any answer. I am working with large images -- or at least large enough that I don't want to load the whole thing into memory -- and I want to process just one ROI at a time. Is there a way in OpenCV to read just a desired ROI into memory? Or do I have to imread the whole thing, copy out my ROI, and release the full image? |
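Absent a partial-read API in imread, the second option the question describes is the usual workaround: read the whole image, deep-copy the ROI, and let the full image be released. A minimal sketch (the path and rectangle are placeholders; readRoi is a hypothetical helper, not an OpenCV function):

```cpp
// Sketch: read a whole image, keep only a deep copy of the ROI.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <string>

cv::Mat readRoi(const std::string& path, const cv::Rect& roi)
{
    cv::Mat full = cv::imread(path);  // the entire image is decoded into memory here
    return full(roi).clone();         // clone() detaches the ROI; "full" is freed on return
}
```

Note that `full(roi)` alone only creates a header pointing into the full image's pixel buffer, which would keep the whole allocation alive; the `clone()` is what actually lets the memory go.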