2017-10-24 16:31:25 -0600 | asked a question | Assertion Failure in SVM Predict (OpenCV 3.0) Assertion Failure in SVM Predict (OpenCV 3.0) Hi, I was trying out the Bag of Words approach to classify images. I tried a simple |
2016-07-14 10:02:21 -0600 | commented answer | How to determine angular rotation in appropriate direction The ROI can have additional printed info (say characters) other than the code pattern. The code bars have a specific height and width, so I want to measure the bars as accurately as possible before further processing; hence I want to correct for the tilting/rotation. I extract contours, then extract rectangles and filter them based on the size specification. The rectangle approximation is more accurate when the image is not tilted than when it is, hence the angle correction. Thanks |
2016-07-14 09:57:24 -0600 | commented answer | How to determine angular rotation in appropriate direction Thanks! It works great. There is another thing: the ultimate focus of the scanning system is to read and decode a certain type of barcode-like pattern on the paper. The paper size can be huge, so after scanning the system extracts a smaller sub-image (ROI) where the barcode should be. The position of the code on each paper is fairly static (it's usually near the top-left corner), so I will be getting only the ROI (approximately 500 px x 1000 px). Now the same issue as above: I can see some ROIs are rotated within ±5 degrees, i.e. the code bars look tilted. How can I rotate the ROI back? I am thinking of using corner detectors and your idea to rotate the tilted images back to zero degrees. If this is correct, how do I get the reference corner points (correctedCorners)? |
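The rotate-back step discussed in this thread boils down to rotating coordinates around the ROI center. A minimal plain-Python sketch of that math (in OpenCV one would normally use cv::getRotationMatrix2D() plus warpAffine(); the function name here is illustrative, not an OpenCV API):

```python
import math

def rotate_point(p, center, deg):
    """Rotate point p around center by deg degrees.

    Math convention: positive angle = counterclockwise; with image
    coordinates (y pointing down) that appears clockwise on screen.
    To undo a measured tilt of +a degrees, rotate by -a degrees.
    """
    rad = math.radians(deg)
    ca, sa = math.cos(rad), math.sin(rad)
    dx, dy = p[0] - center[0], p[1] - center[1]
    return (ca * dx - sa * dy + center[0],
            sa * dx + ca * dy + center[1])
```

Applying this to the four ROI corners with the negated measured angle gives the de-tilted corner positions; rotating forward and then back returns the original point, which is an easy sanity check.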
2016-07-14 09:47:21 -0600 | received badge | ● Scholar (source) |
2016-07-14 09:47:09 -0600 | received badge | ● Supporter (source) |
2016-07-13 17:47:27 -0600 | asked a question | How to determine angular rotation in appropriate direction Hi, I have an image scanning setup where I am scanning printed papers under a camera. I have prior information that, when a paper passes under the camera, it can rotate up to a maximum of 5 degrees in either the clockwise or counterclockwise direction. I want to determine the value of the rotation angle in the correct direction and then rotate the image back to zero degrees. My question is: how can I determine the amount of rotation and the correct direction? Thanks |
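One common way to get both the magnitude and the sign of a small tilt is to find a line that should be horizontal (a paper edge, a row of print, or the long side of a cv::minAreaRect) and take the signed angle of that line with atan2. A minimal sketch, assuming two points detected along such a reference line (the function name is illustrative):

```python
import math

def tilt_angle(p1, p2):
    """Signed angle in degrees of the line p1 -> p2 relative to horizontal.

    With image coordinates (y grows downward), a positive result means the
    line drops toward the right, i.e. the page appears rotated clockwise
    on screen; a negative result means counterclockwise. Rotating the
    image by the negated angle straightens it.
    """
    dy = p2[1] - p1[1]
    dx = p2[0] - p1[0]
    return math.degrees(math.atan2(dy, dx))
```

Because the expected tilt is within ±5 degrees, atan2 stays far from its branch cut and the sign directly encodes the rotation direction.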
2016-07-11 09:40:14 -0600 | asked a question | estimate motion with opencv::contrib::reg class Hi, I am estimating affine motion between two images using the opencv::contrib::reg classes. There are some issues I am facing.
Now for my purpose even the bordering areas of the image are important, but BORDER_TRANSPARENT is used in remap(), which means pixels that cannot be interpolated are left untouched; so after warping I get artifacts along the image borders. I tried this workaround: I take a slightly larger reference image and a smaller current image. I use the matchTemplate() function to find the part of the reference image that best matches the current image. Due to time constraints, I resize both images to 1/16 size. Then I crop the reference image to the same size as the current image and pass both to the mapper class. I estimate affine motion between them and then use the affine matrix to warp the larger golden image. This seems to work, but sometimes matchTemplate()'s output isn't actually the part that should match the current image. Also, since I am estimating affine motion between two slightly smaller images and then applying this affine motion to a slightly larger image, is the affine motion matrix correct for the larger image? (The larger reference image has roughly 300 px more on all sides.) Can anyone suggest any good ideas? Thanks |
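On the question of reusing a motion estimate from downscaled images: for an affine map x' = A·x + t, the linear part A is scale-invariant, but the translation t must be multiplied by the per-axis resize factor before applying it at full resolution (and the ~300 px border offset would add a further translation, not handled here). A minimal sketch under the assumption that "1/16 size" means a per-axis factor of 4 in each dimension (the function names are illustrative):

```python
def scale_affine(A, scale):
    """Given a 2x3 affine [[a, b, tx], [c, d, ty]] estimated on images
    downscaled by `scale` per axis, return the matrix valid at full
    resolution: the linear part is unchanged, only the shift grows."""
    (a, b, tx), (c, d, ty) = A
    return [[a, b, tx * scale], [c, d, ty * scale]]

def apply_affine(A, p):
    """Apply a 2x3 affine matrix to a point (x, y)."""
    x, y = p
    return (A[0][0] * x + A[0][1] * y + A[0][2],
            A[1][0] * x + A[1][1] * y + A[1][2])
```

The consistency check: mapping a full-resolution point with the rescaled matrix must equal mapping the corresponding downscaled point with the original matrix and then scaling the result back up.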
2016-04-27 08:37:26 -0600 | received badge | ● Enthusiast |
2016-04-26 15:03:06 -0600 | commented question | Improve Runtime of a Function Thanks! I'll give them a try |
2016-04-26 13:57:12 -0600 | commented question | Improve Runtime of a Function Got it. So what should I use if I want to measure the execution time of functions that involve I/O operations, say imread() and/or imwrite()? I have functions that may or may not include I/O operations, and I need to find their execution time. |
2016-04-26 13:45:54 -0600 | commented question | Improve Runtime of a Function I use cv::getTickCount(). Update: I figured out the problem. The function was being called inside another function, and there was a conditional cv::imwrite() in that function; that's why I was getting the problem. The conditional part comes from another section of the program. I've fixed it and it's working OK now. Thanks everyone! |
2016-04-26 13:15:52 -0600 | asked a question | Improve Runtime of a Function Hi, I am using a function to find the difference between 2 images. It takes two 8-bit grayscale images, converts them to CV_32FC1, and does a subtraction. Here is the function I am using: I have measured the run-time of each major step individually.
When I call this function with images of size 9000 x 6000, I get a run-time of about 900 msec, but each individual step takes far less time. Here's one example:
When I call the function I get a runtime of 905 msec. The function call looks like this: I measure the runtime using cv::getTickCount() and cv::getTickFrequency(). Why is the function's runtime so large when the individual steps do not take that long? How can I improve the runtime? Kindly help. Thanks! |
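The timing pattern under discussion, (tick delta) / cv::getTickFrequency() in C++, is ordinary wall-clock timing, and hidden I/O inside the timed region (such as the conditional imwrite() mentioned in the comments above) inflates it. A minimal plain-Python sketch of the same idea (the helper name is illustrative):

```python
import time

def timed(fn, *args):
    """Run fn(*args) and return (result, elapsed_seconds).

    Same idea as wrapping a call between cv::getTickCount() reads and
    dividing the tick delta by cv::getTickFrequency(). To profile
    compute only, keep I/O such as imread()/imwrite() outside the
    measured region, or time each inner step separately to find
    where the extra milliseconds go.
    """
    t0 = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - t0
```

Timing the whole function and each step separately, then comparing the sums, is exactly how the hidden conditional imwrite() in this thread was found.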
2016-04-07 17:07:41 -0600 | commented answer | Help Needed with OpenCV reg: Modifying the map There is an efficient way of doing it: use the scale() function. Somehow I missed this simple function! |
2016-04-06 12:10:05 -0600 | answered a question | Help Needed with OpenCV reg: Modifying the map I was able to solve the issue like this: After
I create a MapAffine object using the parameterised constructor, where I multiply the shift component by the integer factor: Then I call inverseWarp() using mapAff2: If there's a more efficient way of doing this, please let me know. Thanks |
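The arithmetic behind the answer above is small: the warp applies x' = L·x + s, so building a second MapAffine from the original getLinTr() output and k times the original getShift() output simply replaces s with k·s. A plain-Python stand-in showing exactly what changes (the function name is illustrative; in the reg module itself, the scale() function mentioned in the later comment may be the cleaner route):

```python
def warp_map(lin, shift, p, k=1.0):
    """Apply the affine mapping x' = L*x + k*s to point p.

    lin   : 2x2 linear part (as returned by getLinTr())
    shift : 2-vector translation (as returned by getShift())
    k     : constant multiplier applied to the shift only, mimicking
            a MapAffine built from (lin, k * shift).
    """
    x, y = p
    return (lin[0][0] * x + lin[0][1] * y + k * shift[0],
            lin[1][0] * x + lin[1][1] * y + k * shift[1])
```

With k = 1 this reproduces the original map; any other k rescales only the translation, leaving rotation/scale/shear in L untouched.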
2016-04-04 11:06:34 -0600 | asked a question | Help Needed with OpenCV reg: Modifying the map Hi, I am working with the OpenCV image registration library "reg" under "opencv-contrib". I am using the MapAffine class to estimate affine motion. I need to modify the shift vector element (multiply it by a constant factor). I can get the linear transformation matrix and shift vector using getLinTr() and getShift(). Before doing the warping (using inverseWarp()) I want to multiply the shift vector by a constant. This is what I have done so far (following this tutorial (https://github.com/Itseez/opencv_cont...)): Then doing the warping: Now I want to modify the shift vector prior to doing the above step. I have tried to modify the shift part using OpenCV Mat objects: Now the affine matrix is in a Mat object. My question is how can I recast it to MapAffine so that I can use the inverseWarp() function? Or is there another way to modify the MapAffine reference directly? |
2016-03-24 12:09:17 -0600 | commented question | How to Change image intensity range Thanks. OK, so I am working with printed images. I have a set of images and I am checking the cover side of each one. The first image is the template image; the rest are checked against the template. For each subsequent image there is some distortion/motion present, so I am doing the registration. I am experimenting with estimateRigidTransform() and findTransformECC() to compare alignment performance. I need to check for missing print and/or extra print by comparing the aligned image against the template. |
2016-03-24 11:54:37 -0600 | received badge | ● Editor (source) |
2016-03-24 11:54:24 -0600 | commented question | How to Change image intensity range Hi berak, in the code example shown here, in the showDifference function, images are being converted to 32F. Should that be the approach? I want to check two things: missing objects and extra objects in the aligned image. |
2016-03-24 11:45:02 -0600 | asked a question | How to Change image intensity range Hi, I am working with image registration. I have grayscale images (CV_8UC1). I have done the registration part. Now I want to check the alignment accuracy. I want to convert the intensity range from [0, 255] to [-127, 128]. How can I do that? What I am doing is:
Is this correct? Do I need to convert the images from 8U? Thanks |
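The range shift asked about here is a single subtraction of 127, which maps 0 to -127 and 255 to 128. Because an 8U image cannot hold negative values, the image does need to be converted to a signed or floating type first; in OpenCV that is one call, e.g. img.convertTo(dst, CV_32FC1, 1.0, -127.0) (convertTo computes dst = src * alpha + beta). A plain-Python sketch of the mapping itself (the helper name is illustrative):

```python
def to_signed_range(pixels):
    """Map 8-bit intensities in [0, 255] to [-127, 128] by subtracting 127.

    The result can be negative, so the output type must be signed or
    floating point; the 8U input type cannot represent it.
    """
    return [p - 127 for p in pixels]
```

After the shift, a signed difference of two aligned images distinguishes missing print (one sign) from extra print (the other sign), which is the comparison described in the comments above.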