matt.hammer's profile - activity

2020-08-11 20:13:39 -0600 received badge  Notable Question (source)
2020-06-29 02:53:29 -0600 received badge  Popular Question (source)
2020-05-26 08:41:07 -0600 received badge  Famous Question (source)
2017-10-17 09:01:09 -0600 received badge  Popular Question (source)
2017-07-11 03:45:56 -0600 received badge  Notable Question (source)
2016-09-08 07:43:38 -0600 received badge  Popular Question (source)
2016-02-22 04:42:51 -0600 received badge  Good Answer (source)
2015-08-24 20:11:20 -0600 received badge  Nice Answer (source)
2015-01-16 07:53:37 -0600 received badge  Nice Answer (source)
2014-11-07 11:44:12 -0600 answered a question Euclidean Distance between Matrix Channels

I'm still not sure why the original code didn't hit that if statement. One type change and one memory-leak fix later, I got it to work:

private void euclidChannels(Mat in, Mat out){
    // rgbF and gF are preallocated member Mats; this.height is the frame height
    in.convertTo(rgbF, CvType.CV_32F);                  // work in float so reduce() accepts the formats
    Core.pow(rgbF, 2, rgbF);                            // square every channel value in place
    rgbF = rgbF.reshape(1, rgbF.rows() * rgbF.cols());  // one row per pixel, one column per channel
    Core.reduce(rgbF, gF, 1, Core.REDUCE_SUM);          // r^2 + g^2 + b^2 for each pixel
    rgbF = rgbF.reshape(3, in.rows());                  // restore rgbF's shape - constant time
    gF = gF.reshape(1, this.height);
    Core.sqrt(gF, gF);
    gF.convertTo(out, CvType.CV_8UC1);
}

Re-reshaping the rgbF matrix back to its original state was key to eliminating the memory leak, since convertTo() will apparently re-allocate the destination matrix if it doesn't like the one you provide. (I could have added another temp matrix instead, but reshape() is supposed to be constant time, so I might as well save a bit of memory.)
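
What the pipeline above computes, sketched in pure Python for clarity (a hypothetical helper, not OpenCV code): for each pixel, the Euclidean norm of the 3-channel difference, rounded and saturated to 8 bits the way convertTo(..., CV_8UC1) would.

```python
import math

# Pure-Python sketch of the per-pixel math above (hypothetical helper, no OpenCV).
def euclid_channels(diff):
    """diff: 2-D list of (r, g, b) difference tuples -> 2-D list of 8-bit norms."""
    return [[min(255, int(round(math.sqrt(r * r + g * g + b * b))))
             for (r, g, b) in row]
            for row in diff]
```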

2014-11-07 11:37:48 -0600 commented question Euclidean Distance between Matrix Channels

Thanks for the link - while that's what I'll probably end up using, I'd like to learn more about what's going on under the hood.

2014-11-06 17:08:03 -0600 asked a question Euclidean Distance between Matrix Channels

I am working on background subtraction in OpenCV Android/Java. I need to find the Euclidean distance between two RGB images. First, I subtract the two images from each other, element-by-element.

Then I use this function:

private Mat euclidChannels(Mat in){
    Core.pow(in, 2, in);
    int rows = in.rows();
    Mat out = new Mat(new Size(in.cols(), in.rows()), CvType.CV_32F);
    Core.reduce(in.reshape(1,in.rows() * in.cols()), out, 1, Core.REDUCE_SUM);
    Core.sqrt(out.reshape(1, rows), out);
    return out;
}

based on the following technique: how to sum a 3 channel matrix to a one channel matrix?

I get the error:

11-06 22:45:02.094: E/cv::error()(10655): OpenCV Error: Unsupported format or combination of formats (Unsupported combination of input and output array formats) in void cv::reduce(cv::InputArray, cv::OutputArray, int, int, int), file /home/reports/ci/slave_desktop/50-SDK/opencv/modules/core/src/matrix.cpp, line 2365

So I chased it down in the C++ source:

1921            if(op == CV_REDUCE_SUM)
1922            {
1923                if(sdepth == CV_8U && ddepth == CV_32S)
1924                    func = reduceC_<uchar,int,OpAdd<int> >;
1925                if(sdepth == CV_8U && ddepth == CV_32F)
1926                    func = reduceC_<uchar,float,OpAdd<int> >;
1927                if(sdepth == CV_8U && ddepth == CV_64F)
1928                    func = reduceC_<uchar,double,OpAdd<int> >;
1929                if(sdepth == CV_16U && ddepth == CV_32F)
1930                    func = reduceC_<ushort,float,OpAdd<float> >;
1931                if(sdepth == CV_16U && ddepth == CV_64F)
1932                    func = reduceC_<ushort,double,OpAdd<double> >;
1933                if(sdepth == CV_16S && ddepth == CV_32F)
1934                    func = reduceC_<short,float,OpAdd<float> >;
1935                if(sdepth == CV_16S && ddepth == CV_64F)
1936                    func = reduceC_<short,double,OpAdd<double> >;
1937                if(sdepth == CV_32F && ddepth == CV_32F)
1938                    func = reduceC_<float,float,OpAdd<float> >;
1939                if(sdepth == CV_32F && ddepth == CV_64F)
1940                    func = reduceC_<float,double,OpAdd<double> >;
1941                if(sdepth == CV_64F && ddepth == CV_64F)
1942                    func = reduceC_<double,double,OpAdd<double> >;
1943            }
...
1964        if( !func )
1965            CV_Error( CV_StsUnsupportedFormat,
1966            "Unsupported combination of input and output array formats" );

I think this means that none of those if statements is hitting. But I dumped the depths of the source and destination matrices and got 0 and 5 respectively, which should correspond to CV_8U and CV_32F (this checks out in both the Java and the C++ source) - and should hit on line 1925 of the listing.

Any ideas as to what I'm doing wrong?

(I know my listing line numbers don't match the line numbers in the error message, but this listing is all I could find on the web: matrix.cpp)

2014-10-23 07:21:08 -0600 received badge  Enlightened (source)
2014-10-23 07:21:08 -0600 received badge  Good Answer (source)
2014-09-11 12:49:11 -0600 received badge  Nice Answer (source)
2014-06-05 07:50:58 -0600 marked best answer Template matching with the CV_TM_CCOEFF algorithm

I can't figure out the CV_TM_CCOEFF algorithm. (I understand CCORR and least squares.)

The O'Reilly book explains that "These methods match a template relative to its mean against the image relative to its mean, so a perfect match will be 1 and a perfect mismatch will be -1; a value of 0 simply means that there is no correlation". However, the equation given for CV_TM_CCOEFF doesn't subtract the mean from each pixel value but instead subtracts the reciprocal of the pixel value sum TIMES the number of pixels (shouldn't it be a division?). Plus, the simple examples I work out on paper (with small, one-dimensional signals) usually don't give me 1, 0, or -1. I also Googled "correlation coefficient" and found variations of this: Pearson Correlation Coefficient, which has all kinds of covariance and squared terms I can't reconcile with the OpenCV equation.
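
For reference, my transcription of the docs' CCOEFF equations (check the official documentation for the authoritative form):

```latex
T'(x',y') = T(x',y') - \frac{1}{w\,h}\sum_{x'',y''} T(x'',y'')
\qquad
R(x,y) = \sum_{x',y'} T'(x',y') \cdot I'(x+x',\,y+y')
```

Since multiplying the sum by 1/(w·h) is the same as dividing by the number of pixels, the subtracted term is exactly the template's mean. And as far as I can tell, the guaranteed -1 to 1 range only applies to the normalized variant, CV_TM_CCOEFF_NORMED, which also divides by the norms of T' and I'.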

2013-12-09 11:45:25 -0600 commented question Convert Matlab Code to Opencv

Maybe try Python? Numpy is more MATLAB-like than C++.

2013-08-21 16:09:34 -0600 asked a question CV_SCHARR in Python (cv2) for Sobel argument

The docs indicate that you can pass a special value - CV_SCHARR - as the Sobel filter's kernel-size argument in C to use the Scharr kernel. I can't seem to find whether that constant exists in OpenCV Python (cv2).

http://docs.opencv.org/modules/imgproc/doc/filtering.html?highlight=scharr#scharr

It appears to equal -1, but I was trying to be a little cleaner and use the actual named constant.
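
For what it's worth, the kernel that the special value selects is the standard 3×3 Scharr x-derivative kernel, which is easy to apply by hand. A pure-Python sketch (no cv2; the helper name is made up):

```python
# The 3x3 Scharr x-derivative kernel, applied at one interior pixel.
SCHARR_X = [[-3, 0, 3],
            [-10, 0, 10],
            [-3, 0, 3]]

def scharr_x_at(img, y, x):
    """Correlate SCHARR_X with a 2-D list image at interior pixel (y, x)."""
    return sum(SCHARR_X[dy][dx] * img[y + dy - 1][x + dx - 1]
               for dy in range(3) for dx in range(3))
```

A flat image gives 0; a horizontal ramp of slope 1 gives a constant response of 32 (the kernel weights sum to 16 per side column).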

2013-07-31 16:00:59 -0600 commented answer matchTemplate() with a mask

Actually, I ended up using a workaround. Starting from a warped image quadrilateral inscribed in a rectangle, I pick a smaller rectangle inscribed in the quadrilateral (and thus containing only "active" pixels). I discard everything else and use this rectangle with matchTemplate - so there are no "empty" zero-value pixels to influence averages and weights in the matching algorithms. So far, results are inconclusive - I think it might work because the warps I am dealing with are gentle - maybe 5-10 degree Euler-angle rotations.
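
The inscribed-rectangle trick can be sketched like this - a simplified, axis-aligned version of the idea, not my exact code. It assumes the quad is convex with corners ordered TL, TR, BR, BL in image coordinates (y grows downward):

```python
# Pick an axis-aligned rectangle that lies inside a convex quadrilateral
# whose corners are ordered TL, TR, BR, BL (image coords, y grows downward).
def inner_rect(tl, tr, br, bl):
    """Returns (x0, y0, x1, y1) of an inscribed axis-aligned rectangle."""
    x0 = max(tl[0], bl[0])  # right-most of the two left corners
    y0 = max(tl[1], tr[1])  # lower of the two top corners
    x1 = min(tr[0], br[0])  # left-most of the two right corners
    y1 = min(br[1], bl[1])  # upper of the two bottom corners
    return (x0, y0, x1, y1)
```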

2013-05-01 23:11:48 -0600 marked best answer MatOfInt, MatOfFloat, etc...

Are these Java convenience classes just for 1 X N matrices?

2013-04-22 19:42:03 -0600 answered a question Detect concave using 4 points of a rectangle?

First, no rectangles are concave.

A quadrilateral is concave if and only if it has one interior angle over 180 degrees.

How smart and fast do you need?

Off the top of my head: find vectors for all 4 sides (not corners)* and take the cross product of each pair of adjacent sides. In 2-D this works out to f([x1,y1],[x2,y2]) = x1·y2 - x2·y1** (watch your order and direction; use the right-hand rule). If the product is negative for one adjacent pair, that angle is over 180 degrees and you've got a concave quad. (If all the cross products are positive, the quad is convex.) I just learned this technique myself a few days ago, so please Google for more detail first.

* subtract adjacent corners, in order CCW around the quad, to get the side vectors

** this formula is the z-component of the 3-D cross product in the plane z = 0; by checking its sign you're seeing which direction the 3-D normal is pointing
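
The sign test described above, as a short Python sketch (the function names are mine):

```python
def cross_z(v1, v2):
    """z-component of the 3-D cross product of two 2-D vectors."""
    return v1[0] * v2[1] - v1[1] * v2[0]

def is_convex(corners):
    """corners: four (x, y) points in consistent order around the quad."""
    n = len(corners)
    # side vectors: difference of adjacent corners, wrapping around
    sides = [(corners[(i + 1) % n][0] - corners[i][0],
              corners[(i + 1) % n][1] - corners[i][1]) for i in range(n)]
    turns = [cross_z(sides[i], sides[(i + 1) % n]) for i in range(n)]
    # convex iff every corner bends the same way (all cross products same sign)
    return all(t > 0 for t in turns) or all(t < 0 for t in turns)
```

Checking for the same sign (rather than all-positive) makes the test work whether the corners come in CW or CCW order.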

2013-04-12 15:08:08 -0600 received badge  Student (source)
2013-04-12 15:08:06 -0600 marked best answer Android Java versions of CV_TERMCRIT_EPS, CV_TERMCRIT_ITER

I can't figure out/find what CV_TERMCRIT_EPS & CV_TERMCRIT_ITER are in Java for Android OpenCV. I've grepped the Java source code folder with a bunch of terms and come up with nothing.

Do they even exist, or do I need to use integer values from the C++ instead?

2013-04-12 15:07:23 -0600 received badge  Self-Learner (source)
2013-03-25 09:00:35 -0600 commented answer matchTemplate() with a mask

Well, my approach for WarpPerspective & MatchTemplate failed (Warping the target instead of the template uses a TON of memory - much more than I have available on Android phones), so I am probably going to deep dive into a new/modified version of matchTemplate in April. That is, unless someone else beats me to it (and saves me a bunch of work)

2013-03-15 16:15:16 -0600 answered a question Initialize numpy array (cv2 python) and PerspectiveTransform

After much trial and error, I figured it out. I hadn't noticed that the new Python cv2 module reverses the order of the output and transformation-matrix arguments (different from both C++ and the old Python cv module). Also, it returns the output matrix, so I can omit the output argument entirely and assign the result directly to a new variable:

corners = np.ones((1,1,2))
newCorners = cv2.perspectiveTransform(corners, rotMat)
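
Under the hood, perspectiveTransform just applies the 3x3 matrix in homogeneous coordinates and divides through by w. A pure-Python sketch of the per-point math (helper name is mine):

```python
# Apply a 3x3 perspective matrix m (nested lists) to a single 2-D point.
def perspective_point(m, x, y):
    xh = m[0][0] * x + m[0][1] * y + m[0][2]
    yh = m[1][0] * x + m[1][1] * y + m[1][2]
    w  = m[2][0] * x + m[2][1] * y + m[2][2]
    return (xh / w, yh / w)  # homogeneous divide
```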
2013-03-15 14:45:55 -0600 asked a question Initialize numpy array (cv2 python) and PerspectiveTransform

I am trying to convert some of my C++ OpenCV code to Python, and have hit a stumbling block.

I am trying to get a numpy array into the PerspectiveTransform function and have hit the following assert in the underlying C++ code (matmul.cpp, ln 1926):

CV_Assert( scn + 1 == m.cols && (depth == CV_32F || depth == CV_64F));

Now this is telling me that, first, the number of columns in the transformation matrix should be one more than the number of channels in the input matrix, and, second, that the source matrix should be made of floating point elements, right?

Here's the code that is causing the exception:

rotMat = buildRotMat(roll, pitch, yaw)
corners = np.ones((1,1,2), np.float32)
newCorners = np.zeros((1,1,2), np.float32)
cv2.perspectiveTransform(corners, newCorners, rotMat)

and from a separate function:

rotMat = np.array(
    [[math.cos(alpha)*math.cos(beta),
      math.cos(alpha)*math.sin(beta)*math.sin(gamma) - math.sin(alpha)*math.cos(gamma),
      math.cos(alpha)*math.sin(beta)*math.cos(gamma) + math.sin(alpha)*math.sin(gamma)],
     [math.sin(alpha)*math.cos(beta),
      math.sin(alpha)*math.sin(beta)*math.sin(gamma) + math.cos(alpha)*math.cos(gamma),
      math.sin(alpha)*math.sin(beta)*math.cos(gamma) - math.cos(alpha)*math.sin(gamma)],
     [-math.sin(beta),
      math.cos(beta)*math.sin(gamma),
      math.cos(beta)*math.cos(gamma)]], dtype=float32)

My guess would be that I've made a numpy error (when trying to initialize a 2-channel floating point array), since I'm learning Python as I go, and this code did work fine in C++. Any help or insight would be greatly appreciated.

Thanks, Matt
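
For reference, the matrix built in buildRotMat matches the standard Z-Y-X Euler-angle composition (my reading of the code above, with c = cos and s = sin):

```latex
R = R_z(\alpha)\,R_y(\beta)\,R_x(\gamma) =
\begin{pmatrix}
c_\alpha c_\beta & c_\alpha s_\beta s_\gamma - s_\alpha c_\gamma & c_\alpha s_\beta c_\gamma + s_\alpha s_\gamma \\
s_\alpha c_\beta & s_\alpha s_\beta s_\gamma + c_\alpha c_\gamma & s_\alpha s_\beta c_\gamma - c_\alpha s_\gamma \\
-s_\beta & c_\beta s_\gamma & c_\beta c_\gamma
\end{pmatrix}
```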

2013-02-26 13:30:55 -0600 commented answer Results of camera calibration vary
  1. Possibly. Going directly into the camera matrix is hard, if not impossible. The focal lengths fx and fy are actually a combination (multiplication?) of the physical focal length of the lens and the length and width of the individual CMOS sensor elements. The units of a camera matrix are some weird pixel/mm thing. You might be able to calculate it indirectly from multiple views of a known object at known distances or something - but then you're really doing the same sort of thing OpenCV does anyway.
2013-02-26 13:27:26 -0600 commented answer Results of camera calibration vary
  1. cx & cy are the point where the camera's principal axis intersects the image frame. In a perfect camera they would be in the middle. In a cheap webcam, if they were in the middle it would be pure luck.
  2. I think "CV_CALIB_SAME_FOCAL_LENGTH" just means that all the test chessboards were taken with the camera at the same focal length. I've never messed around with this off, but I'd imagine it lets you calibrate with images of chessboards taken with different focal lengths - probably by adding some more information as well.
  3. I don't have the error info on me, but the program is telling me I am getting less than 2 pixels of error with a pretty horrible camera.
2013-02-26 12:48:48 -0600 received badge  Nice Answer (source)
2013-02-20 10:43:18 -0600 commented answer Results of camera calibration vary

Well, I am still learning as well, so if you find a trustworthy source that says far-away chessboards are important, listen to them first. But if you pull up the intermediate results for the far-away chessboards (the red/rainbow circles and lines drawn on the frames), you'll see a ton of error at low resolution, if you get detections at all. You could do a bunch of these and pick out the good ones. However, my intuition, based on a moderate-level understanding of what's going on under the hood, is that the "main" camera-matrix parameters - fx, fy, cx, and cy - should be OK with only big chessboard views (remember to tilt/turn). I think the smaller views mainly help pick out lens and translation distortion.

2013-02-18 08:14:59 -0600 answered a question Results of camera calibration vary

First, get fx and fy nailed down - as long as they are moving around, cx and cy won't be consistent. Some things I've tried to get good results from bad cameras:

  1. Did you use the program in the Code examples or roll your own? If you programmed your own, use the example program to make sure you are getting EXACTLY the same numbers and error values in yours before trusting it.

  2. Turn off Auto focus and zoom - they will both mess with the physical focal length and vary fx and fy.

  3. Use lots of samples - I think the books say 10 as a minimum. Use at least 20, I've used as many as 100.

  4. Make sure that your chessboard is rigid - tape it tightly to some wood or cardboard - if it's wrinkled or flopping around it will affect calibration.

  5. Do you have a proper chessboard? - the book says the chessboard should be at least several corners in each direction - and that one dimension should have an odd number of corners and the other should be even. I use a 9 X 6 corner chessboard printed on a 8 1/2 X 11 sheet of paper taped to some cardboard.

  6. Make sure your chessboard takes up as much of the camera view as possible - especially at your low resolution. Don't be afraid to rotate and skew, but when you do, keep it big - remember you only need the inside corners within the camera frame. (Not having smaller sectional views might affect the distortion-correction numbers - but your basic matrix is probably the first step.)

  7. Finally, use the drawChessboardCorners function (or the example program's show-chessboards option) to pop up the chessboard-by-chessboard results. Take a close look at where OpenCV thinks the corners are. With low-res video stills I've seen chessboards that OpenCV thinks are good where the corners are drawn half a square off from where they should be. Don't use bad corners in your calibration.

Good Luck!
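
The "pixels of error" figure I mention in my comments above is an RMS reprojection error; roughly this (a sketch of the idea, not OpenCV's exact implementation):

```python
import math

# Root-mean-square distance between detected corners and corners reprojected
# through the fitted camera model (both as lists of (x, y) in pixel units).
def rms_error(detected, reprojected):
    sq = [(dx - rx) ** 2 + (dy - ry) ** 2
          for (dx, dy), (rx, ry) in zip(detected, reprojected)]
    return math.sqrt(sum(sq) / len(sq))
```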

2013-02-11 13:26:54 -0600 commented answer Android camera image rotation

In Java with the SDK, right after: Camera camera = Camera.open(); My JNI is rusty, but I think you could make this call with it.

2013-02-07 17:06:39 -0600 commented answer Android camera image rotation

Could you invoke this method via the JNI?

2013-02-05 10:32:43 -0600 received badge  Editor (source)
2013-02-05 10:32:07 -0600 answered a question Android camera image rotation

I had a rotation issue with my Samsung Galaxy S3 - I had to use

                    camera.setDisplayOrientation(90);

to correct the problem. I think when I googled the issue it turned out to be a hardware setting on the camera.

2013-01-29 20:11:18 -0600 answered a question Hello OpenCV Android Sample Code - mview confusion

mView is probably supposed to be a data member of the HelloOpenCVActivity class. (mXXXXX being an old convention for naming member variables).

So you would probably see something like this:

HelloOpenCVView mView;

somewhere close to the top of the HelloOpenCVActivity class. The "bunch of problems later on" you get when you try

HelloOpenCVView mView = new HelloOpenCVView(this);

inside onCreate() are probably scope-related, as other functions in the HelloOpenCVActivity class don't have access to mView when you declare it there. (Looking at the example code, onPause() and onResume() also refer to mView.)

If this doesn't solve your problem, posting more error information/output would be helpful.

2013-01-15 08:35:22 -0600 commented answer Android Java versions of CV_TERMCRIT_EPS, CV_TERMCRIT_ITER

So, CV_TERMCRIT_EPS in C++ becomes TermCriteria.EPS in Java? And CV_TERMCRIT_ITER in C++ becomes TermCriteria.MAX_ITER in Java?

2013-01-15 01:38:03 -0600 received badge  Teacher (source)