
asa's profile - activity

2017-10-24 16:31:25 -0500 asked a question Assertion Failure in SVM Predict (OpenCV 3.0)

Assertion Failure in SVM Predict (OpenCV 3.0) Hi, I was trying out the Bag of Words to classify images. I tried a simple

2016-07-14 10:02:21 -0500 commented answer How to determine angular rotation in appropriate direction

The ROI can contain additional printed info (say, characters) besides the code pattern. The code bars have a specific height and width, so I want to measure the bars as accurately as possible before further processing; hence I want to correct for the tilt/rotation. I extract contours, then extract rectangles and filter them based on the size specification. The rectangle approximation is more accurate when the image is not tilted than when it is, hence the angle correction. Thanks!

2016-07-14 09:57:24 -0500 commented answer How to determine angular rotation in appropriate direction

Thanks! It works great. One more thing: the ultimate purpose of the scanning system is to read and decode a certain barcode-like pattern on the paper. The paper can be huge, so after scanning, the system extracts a smaller sub-image (ROI) where the barcode should be. The position of the code on each paper is fairly static (usually near the top-left corner), so I will be getting only the ROI (approximately 500 px x 1000 px). Now I have the same issue as above: some ROIs are rotated within +-5 degrees, i.e. the code bars look tilted. How can I rotate the ROI back? I am thinking of using corner detectors and then your idea to rotate the tilted images back to zero degrees. If this is correct, how do I get the reference corner points (correctedCorners)?
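One way to get the reference points, assuming the four corners of the tilted code region can be detected (e.g. from a contour's minimum-area rectangle): the corrected corners are simply the axis-aligned rectangle with the same center, width, and height as the detected one. A hedged sketch in plain C++ — the `Pt` struct, the corner ordering, and the function name are my own assumptions, not from the original thread:

```cpp
#include <array>
#include <cmath>

struct Pt { double x, y; };

// Given the 4 detected corners of a tilted rectangle, in order
// top-left, top-right, bottom-right, bottom-left, build the axis-aligned
// "corrected" corners with the same center, width, and height.
std::array<Pt, 4> correctedCorners(const std::array<Pt, 4>& c) {
    double w  = std::hypot(c[1].x - c[0].x, c[1].y - c[0].y); // top edge length
    double h  = std::hypot(c[3].x - c[0].x, c[3].y - c[0].y); // left edge length
    double cx = (c[0].x + c[1].x + c[2].x + c[3].x) / 4.0;    // center x
    double cy = (c[0].y + c[1].y + c[2].y + c[3].y) / 4.0;    // center y
    return {{ {cx - w / 2, cy - h / 2}, {cx + w / 2, cy - h / 2},
              {cx + w / 2, cy + h / 2}, {cx - w / 2, cy + h / 2} }};
}
```

The detected corners (source) and these corrected corners (destination) could then be fed to something like cv::getPerspectiveTransform() followed by cv::warpPerspective() to undo the tilt.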

2016-07-14 09:47:21 -0500 received badge  Scholar (source)
2016-07-14 09:47:09 -0500 received badge  Supporter (source)
2016-07-13 17:47:27 -0500 asked a question How to determine angular rotation in appropriate direction


I have an image scanning setup where I am scanning printed papers under a camera. I know beforehand that when a paper passes under the camera it can rotate up to a maximum of 5 degrees in either the clockwise or counter-clockwise direction. I want to determine the rotation angle with the correct sign and then rotate the image back to zero degrees. My question is: how can I determine the amount of rotation and the correct direction?
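If two reference points that should lie on a horizontal line can be detected (say, two corners of the paper's top edge), the signed angle from atan2 gives both the amount and the direction in one step. A minimal sketch in plain C++ — the point detection itself is assumed, not shown:

```cpp
#include <cmath>

// Signed angle (degrees) of the segment (x1,y1)->(x2,y2) relative to the
// horizontal axis. In image coordinates (y grows downward) a positive value
// means the edge drops to the right; the correction is a rotation by -angle.
double signedAngleDeg(double x1, double y1, double x2, double y2) {
    const double PI = 3.14159265358979323846;
    return std::atan2(y2 - y1, x2 - x1) * 180.0 / PI;
}
```

cv::getRotationMatrix2D(center, -angle, 1.0) followed by cv::warpAffine() would then undo the tilt; the exact sign depends on the coordinate convention, so it is worth verifying on one known-tilted sample.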


2016-07-11 09:40:14 -0500 asked a question estimate motion with opencv:contrib:reg class

Hi, I am estimating affine motion between two images using the OpenCV cv::reg (opencv_contrib) classes. There are some issues I am facing.

  1. The images have to be of the same size. The calculate() function inside the MapperGradAffine class has a CV_DbgAssert(img1.size() == img2.size()); statement.
  2. The inverseWarp() function inside the MapAffine class uses the OpenCV function remap().

Now, for my purpose, even the bordering areas of the image are important, but BORDER_TRANSPARENT is used in remap(), which means pixels that cannot be interpolated are left untouched. So after warping I get artifacts along the image borders.

I tried this workaround:

I tried this workaround: I take a slightly larger reference image and a smaller current image, and use the matchTemplate() function to find the part of the reference image that best matches the current image. Due to time constraints, I resize both images to 1/16 of their size. I then crop the reference image to the same size as the current image and pass both to the mapper class. I estimate the affine motion between them and then use the affine matrix to warp the larger reference image. This seems to work, but sometimes matchTemplate()'s output isn't actually the part that should match the current image. Also, since I am estimating affine motion between two smaller images and then applying it to a slightly larger image, is the affine motion matrix correct for the larger image? (The larger reference image has roughly 300 px more on all sides.)
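On the downscaling question: if both images are shrunk by the same factor k before estimation, the linear part A of the recovered map x2 = A*x1 + t is scale-invariant, but the translation t is expressed in small-image pixels and must be multiplied by k for the full-resolution image. A sketch of that relationship in plain C++ (no OpenCV) — note this assumes the larger reference is an exact crop at the same origin, which the extra ~300 px border breaks, so an additional offset correction would still be needed:

```cpp
#include <array>

// Apply the 2D affine map x2 = A*x1 + t to point p.
// A is the 2x2 linear part in row-major order, t the shift vector.
std::array<double, 2> applyAffine(const std::array<double, 4>& A,
                                  const std::array<double, 2>& t,
                                  const std::array<double, 2>& p) {
    return { A[0] * p[0] + A[1] * p[1] + t[0],
             A[2] * p[0] + A[3] * p[1] + t[1] };
}

// Convert a shift estimated at 1/k resolution back to full-resolution pixels:
// with X = k*x, the full-resolution map is X2 = A*X1 + k*t.
std::array<double, 2> upscaledShift(const std::array<double, 2>& t, double k) {
    return { k * t[0], k * t[1] };
}
```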

Can anyone suggest any good ideas?


2016-04-27 08:37:26 -0500 received badge  Enthusiast
2016-04-26 15:03:06 -0500 commented question Improve Runtime of a Function

Thanks! I'll give them a try

2016-04-26 13:57:12 -0500 commented question Improve Runtime of a Function

Got it. So what should I use if I want to measure the execution time of functions that involve I/O operations, say imread() and/or imwrite()? I have functions that may or may not contain I/O operations, and I need to find their execution time.

2016-04-26 13:45:54 -0500 commented question Improve Runtime of a Function

I use cv::getTickCount()


I figured out the problem. The function was being called inside another function, and there was a conditional cv::imwrite() in that function. That's why I was getting the discrepancy. The condition comes from another section of the program. I've fixed that part and it's working OK.

Thanks everyone!

2016-04-26 13:15:52 -0500 asked a question Improve Runtime of a Function

Hi, I am using a function to find the difference between two images. It takes two 8-bit grayscale images, converts them to CV_32FC1, and does a subtraction. Here is the function I am using:

cv::Mat calculateDiff(const cv::Mat &image_one, const cv::Mat &image_two)
{
    cv::Mat im1, im2, im_dest;
    image_one.convertTo(im1, CV_32FC1); // 8U -> 32F so negative values survive
    image_two.convertTo(im2, CV_32FC1);
    im1 /= 2.f;
    im1 += 128.f;
    im2 /= 2.f;
    cv::subtract(im1, im2, im_dest);
    im_dest.convertTo(im_dest, CV_8UC1);

    return im_dest;
}

I have measured the run-time of each major step individually:

  1. the two convertTo() steps (8U to 32F)
  2. divide by 2, add 128
  3. the subtract() function
  4. the final convertTo() back to 8U

When I call this function with images of size 9000 x 6000, I get a run-time of about 900 msec, but each individual step takes a lot less time. Here's one example:

  1. Step 1 time: 64 msec
  2. Step 2 time: 76 msec
  3. Step 3 time: 51 msec
  4. Step 4 time: 22 msec

When I call the function as a whole, I get a runtime of 905 msec.

The function call looks like this:

cv::Mat diff_image;
diff_image = calculateDiff(input_one, input_two);

I measure the runtime using cv::getTickCount() and cv::getTickFrequency()
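For reference, the same measurement can be done in plain C++ with std::chrono (equivalent in spirit to getTickCount()/getTickFrequency()). Since this is wall-clock time, any I/O hidden inside the call, such as a conditional imwrite(), is included in the total, which is exactly what makes the whole-function number larger than the sum of the visible steps:

```cpp
#include <chrono>

// Run a callable once and return the elapsed wall-clock time in milliseconds,
// i.e. the same quantity as (tick difference) / getTickFrequency() * 1000.
template <typename F>
double elapsedMs(F&& work) {
    auto t0 = std::chrono::steady_clock::now();
    work();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```

Timing both the whole call and each step with the same helper makes a mismatch like 905 msec vs. 213 msec easy to localize.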

Why is the function's runtime so large when the individual steps do not take that long? (The four steps add up to about 213 msec, far less than 905 msec.) How can I improve the runtime? Kindly help.


2016-04-07 17:07:41 -0500 commented answer Help Needed with OpenCV reg: Modifying the map

There is an efficient way of doing it: use the scale() function.

mapAff->scale(alpha); // multiplies the shift vector by a factor

Somehow I missed this simple function!
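Sketched in plain C++, the effect described above for an affine map x -> A*x + t: the linear part stays, only the shift is multiplied. (This is a hedged reading of scale(), based on the comment in this answer rather than the library source.)

```cpp
#include <array>

// Minimal stand-in for an affine map x -> A*x + t, where scale(alpha)
// multiplies only the shift vector, as the comment above describes.
struct Affine2D {
    std::array<double, 4> A; // 2x2 linear part, row-major
    std::array<double, 2> t; // shift vector
    void scale(double alpha) { t[0] *= alpha; t[1] *= alpha; }
};
```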

2016-04-06 12:10:05 -0500 answered a question Help Needed with OpenCV reg: Modifying the map

I was able to solve the issue like this:


MapAffine* mapAff = dynamic_cast<MapAffine*>(mapPtr.get());

Then I create a MapAffine object using the parameterised constructor, where I multiply the shift component by the integer factor:

MapAffine mapAff2 = cv::reg::MapAffine(mapAff->getLinTr(), alpha * mapAff->getShift());
// alpha is the integer factor

Then I call inverseWarp() using mapAff2:

mapAff2.inverseWarp(source, destination);

If there's a more efficient way of doing it, please let me know.


2016-04-04 11:06:34 -0500 asked a question Help Needed with OpenCV reg: Modifying the map


I am working with the OpenCV image registration module "reg" from opencv_contrib. I am using the MapAffine class to estimate affine motion. I need to modify the shift vector (multiply it by a constant factor). I can get the linear transformation matrix and shift vector using getLinTr() and getShift(). Before doing the warping (using inverseWarp()) I want to multiply the shift vector by a constant. This is what I have done so far (following this tutorial):

MapperGradAffine mapper;
MapperPyramid mapPyr(mapper);
Ptr<Map> mapPtr;
mapPyr.calculate(image1, image2, mapPtr);
MapAffine* mapAff = dynamic_cast<MapAffine*>(mapPtr.get());

Then doing the warping:

mapAff->inverseWarp(image2, destination);

Now I want to modify the shift vector before doing the above step. I have tried to modify the shift part using OpenCV Mat objects:

cv::Mat lin_tr = cv::Mat(mapAff->getLinTr()); // getting the linear part
cv::Mat shift = cv::Mat(mapAff->getShift()); // getting the translation
cv::Mat aff_mat;
cv::hconcat(lin_tr, 2 * shift, aff_mat);

Now the affine matrix is in a Mat object. My question is: how can I cast it back to a MapAffine so that I can use the inverseWarp() function? Or is there another way to modify the MapAffine object directly?

2016-03-24 12:09:17 -0500 commented question How to Change image intensity range


OK, so I am working with printed images. I have a set of images, and I am checking the cover side of each. The first image is the template; the rest are checked against it. Each subsequent image has some distortion/motion present, so I am doing the registration. I am experimenting with estimateRigidTransform() and findTransformECC() to compare alignment performance. I need to check for missing print and/or extra print by comparing the aligned image against the template.

2016-03-24 11:54:37 -0500 received badge  Editor (source)
2016-03-24 11:54:24 -0500 commented question How to Change image intensity range

Hi berak,

In the code example shown here, in the showDifference function, images are being converted to 32F. Should that be the approach? I want to check two things: missing objects and extra objects in the aligned image.

2016-03-24 11:45:02 -0500 asked a question How to Change image intensity range


I am working on image registration. I have grayscale images (CV_8UC1) and have done the registration part. Now I want to check the alignment accuracy. I want to convert the intensity range from [0, 255] to [-127, 128]. How can I do that? What I am doing is:

  • subtract the aligned image from template
  • divide the result by 2
  • add 128 to result

Is this correct? Do I need to convert the images from 8U?
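For what it's worth, the three steps above map a signed difference in [-255, 255] into [0, 255] with 128 meaning "no difference". A plain-C++ sketch of the per-pixel math, which also shows why the conversion out of 8U matters: subtracting directly in 8-bit would clip the negative half to 0 before the rescaling could preserve it.

```cpp
#include <algorithm>
#include <cmath>

// Per-pixel version of: subtract, divide by 2, add 128, then clamp back
// into the 8-bit range. a and b are 8-bit intensities (0..255).
int diffToDisplay(int a, int b) {
    double d = (a - b) / 2.0 + 128.0;      // signed diff, recentered at 128
    d = std::min(255.0, std::max(0.0, d)); // clamp to the 8-bit range
    return static_cast<int>(std::lround(d));
}
```

In OpenCV terms this corresponds to converting both images to CV_32F, doing the arithmetic there, and converting the result back to CV_8U at the end.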