2016-03-23 00:36:00 -0600 | commented question | Multiplying a Mat against another Mat on a GPU Yes, but I am trying to use this on the GPU |
2016-03-20 22:36:07 -0600 | commented question | Multiplying a Mat against another Mat on a GPU Looking back on my code, gemm wants to multiply Mats of the following types: CV_32FC1, CV_64FC1, CV_32FC2, or CV_64FC2. My linear Mat is CV_32FC3, so I would think I need to split my RGB channels and multiply them individually. But that won't work. Any recommendations? I wish to perform matrix multiplication similar to: Any advice on how to accomplish this? |
2016-03-19 12:02:32 -0600 | commented question | Multiplying a Mat against another Mat on a GPU Looking back at the gemm documentation, I do see that I need to convert my linearMatrix to a float type and not an unsigned short type. |
2016-03-18 02:20:08 -0600 | received badge | ● Student (source) |
2016-03-17 15:59:32 -0600 | asked a question | Multiplying a Mat against another Mat on a GPU I can multiply a 1080x1920 pixel CV_32FC1 Mat against another 3x3 Mat when using CPU-based OpenCV, but when I convert the code to be GPU-based, I get an error. Here is my CPU code: When I run the following GPU code, I get an error: I assume from the error that the problem is that my two arrays' sizes are different. Is there a way to accomplish the CPU code with the GPU? |
2016-03-11 11:50:50 -0600 | commented answer | Determining Scale of query image against larger target image So let me summarize: I read my two images into two cv::Mats and generate keypoints and descriptors (SIFT/SURF) for both, but once I have my keypoints, how would I compare them? (I also found this link, which might address the matching: http://stackoverflow.com/questions/13...). |
2016-03-11 11:40:38 -0600 | received badge | ● Supporter (source) |
2016-03-11 11:40:37 -0600 | received badge | ● Scholar (source) |
2016-03-11 00:51:04 -0600 | received badge | ● Editor (source) |
2016-03-11 00:36:29 -0600 | asked a question | Determining Scale of query image against larger target image I am trying to match and align a query image to a larger image. The query image can be a subset of the larger image, basically a region of interest, and might be at a smaller scale. My goal is to determine the scale and alignment of the smaller image required to match the larger image. Is there a way to do this in OpenCV? I was looking at homography and the stitching algorithms, but I ultimately want to determine how much I would need to scale and translate my query image to match the parent image. It doesn't need to be pixel perfect, but I would like to get within 1-3% of my target image. I was looking at some Matlab code that demonstrates how to determine scale and rotation of a copy of an image, see http://www.mathworks.com/help/images/... Again, is it possible to compute a geometric transform in OpenCV? |
2016-02-08 12:19:11 -0600 | commented question | Best Approach to key point /descriptor comparison Ok, that matches my understanding as well. |
2016-02-08 01:07:53 -0600 | received badge | ● Enthusiast |
2016-02-07 21:48:19 -0600 | asked a question | Best Approach to key point /descriptor comparison I have over 200,000 images and have written some code to find one image (which can be different in scale, color, etc.) in the group of 200,000. I generate SURF key points and descriptors and save these to a file. When I want to find an image in this group, I generate the SURF key points and descriptors for the target image, scan through all 200,000 key point/descriptor files, and compare the target against each file looking for the best match. Assuming multithreading, etc., I can reduce this process down to about 5-8 per target image. Would anyone recommend a better strategy for doing this? Would a bag-of-words approach work? As I understand BOW, you have categories which you train. I don't think this would be applicable in this case. |
2015-12-13 04:05:21 -0600 | asked a question | Replicate Cimg get_correlate function How would I replicate CImg's get_correlate function in OpenCV? In my case I am taking a kernel that is 17 x 17 pixels wide and correlating it with a 512 x 512 image. (Convolution is the closely related operation; correlation is convolution without flipping the kernel.) |
2015-11-01 07:02:42 -0600 | commented question | Storing feature information (keypoints, descriptors) You will have to download them from the GPU to the CPU before you save them. As for loading them, you deserialize your keypoints and descriptors from your file: keypoints into a std::vector<cv::KeyPoint> and your descriptors into a cv::Mat. const char * filePath = files.at(frameNumber).c_str(); cv::FileStorage fs2(filePath, cv::FileStorage::READ); |