
tischenkoalex's profile - activity

2020-08-05 04:17:30 -0600 received badge  Notable Question (source)
2019-03-14 04:08:29 -0600 received badge  Popular Question (source)
2017-10-21 15:47:06 -0600 commented answer Weird words in my bag

Thanks, this makes perfect sense to me. I tried KAZE and it looks much better.
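
For reference, a minimal sketch of the KAZE-based pipeline I ended up with (simplified and untested here; the helper function, the vocabulary size, and the variable names are mine). KAZE produces CV_32F descriptors, so k-means clustering with Euclidean distance is meaningful, unlike with binary FREAK descriptors:

#include <opencv2/opencv.hpp>
using namespace cv;

// Build a BOW vocabulary and extractor from KAZE descriptors.
BOWImgDescriptorExtractor buildBow(const std::vector<Mat>& trainImages, int vocabSize)
{
    Ptr<KAZE> kaze = KAZE::create();
    BOWKMeansTrainer trainer(vocabSize);

    for (const Mat& img : trainImages) {
        std::vector<KeyPoint> keypoints;
        Mat descriptors;
        kaze->detectAndCompute(img, noArray(), keypoints, descriptors); // CV_32F
        if (!descriptors.empty())
            trainer.add(descriptors);
    }

    // Match new descriptors to cluster centers with L2 distance.
    BOWImgDescriptorExtractor bow(kaze, DescriptorMatcher::create("BruteForce"));
    bow.setVocabulary(trainer.cluster());
    return bow;
}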

2017-10-21 15:45:17 -0600 marked best answer Weird words in my bag

Hello, I'm using BOWKMeansTrainer to cluster ORB keypoints with FREAK descriptors. This combination of detector and descriptor shows good results for the drawing-style images I'm trying to analyze; however, BOWKMeansTrainer is causing me some problems...

After training, when I process an image with BOWImgDescriptorExtractor and get its image descriptor, I often see words (clusters) that consist of keypoints which are too far apart from each other.

[image: highlighted keypoints of one word]

Highlighted here are the keypoints of one of the words. Of course, there are other words that are localized well enough, but I'm curious: why does this happen for the words that are not?

2017-10-21 08:45:27 -0600 received badge  Student (source)
2017-10-21 06:41:34 -0600 asked a question Weird words in my bag

Weird words in my bag Hello, I'm using BOWKMeansTrainer to cluster ORB keypoints with FREAK descriptors. This combination...

2017-04-04 03:33:05 -0600 commented question Turning ArUco marker in parallel with camera plane

I switched to using an ArUco Board and it improved accuracy a lot. findHomography() and getPerspectiveTransform() produce the following result for me.

2017-03-31 10:28:43 -0600 commented question Turning ArUco marker in parallel with camera plane

Yes, I plan to switch to subpixel accuracy too. I just wanted to make sure first that I'm not going in the wrong direction by not calculating the desired coordinates from the vectors. I haven't checked the homography topic yet, though... I believe it will give me a better understanding.

2017-03-31 07:20:58 -0600 commented question Turning ArUco marker in parallel with camera plane

I added the source images and corner coordinates. I will read more on homography. Thanks!

2017-03-30 04:24:14 -0600 commented question Weird ArUco marker behavior

It looks like these could help. I'll try playing with them after I solve the problem that most likely causes the most distortion for me: http://answers.opencv.org/question/13...

2017-03-30 04:19:28 -0600 asked a question Turning ArUco marker in parallel with camera plane

I need to warp an image to fix its perspective distortion based on a detected marker. In other words, I want the plane the marker lies in to become parallel to the camera plane.

In general it works for me when I simply map the points of the perspective-distorted marker to its orthogonal position (Sketch) with getPerspectiveTransform() and then call warpPerspective(), which warps the whole image.

The following are sample parameters for getPerspectiveTransform():

src1 (100, 100) => dst1 (100, 100)
src2 (110, 190) => dst2 (100, 200)
src3 (190, 190) => dst3 (200, 200)
src4 (200, 100) => dst4 (200, 100)

The result looks OK, but not always, so I suspect this approach is wrong.

My assumption is that, since I can get a pose estimate for the detected marker (which describes its relation to the camera), I should be able to calculate the required marker position (or camera position?) from the marker points and the rotation/translation vectors.

Now I'm stuck, basically not understanding the math behind this. Could you advise?
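
To make that assumption concrete, here is an untested sketch of what I have in mind (the helper function and its name are mine): keep the detected corners as they are, project the marker's 3D corners with the rotation zeroed out (keeping the translation from estimatePoseSingleMarkers) to get the fronto-parallel target, and take the perspective transform between the two:

#include <opencv2/opencv.hpp>

// Warp that should make the marker plane parallel to the image plane.
cv::Mat frontoParallelWarp(const std::vector<cv::Point2f>& observedCorners,
                           const cv::Vec3d& tvec, float markerLength,
                           const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs)
{
    // Marker corners in the marker's own frame (z = 0 plane), in the
    // same order in which aruco::detectMarkers returns them.
    float h = markerLength / 2.f;
    std::vector<cv::Point3f> objectCorners = {
        { -h,  h, 0 }, { h,  h, 0 }, { h, -h, 0 }, { -h, -h, 0 }
    };

    // Where the corners would appear if the marker plane were parallel
    // to the image plane (rotation zeroed, translation kept).
    std::vector<cv::Point2f> parallelCorners;
    cv::projectPoints(objectCorners, cv::Vec3d(0, 0, 0), tvec,
                      cameraMatrix, distCoeffs, parallelCorners);

    // Map the observed (tilted) marker onto the fronto-parallel projection.
    return cv::getPerspectiveTransform(observedCorners, parallelCorners);
}

If this is sound, applying the returned matrix to the whole image with warpPerspective() should render the marker's plane fronto-parallel.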

UPDATE

The following is a source image with the detected markers. The white circles represent the desired positions of the marker corners that will be passed to getPerspectiveTransform(). [source image]

Source corners: [479, 335; 530, 333; 528, 363; 475, 365]
Result corners: [479, 335; 529, 335; 529, 385; 479, 385]

The following is the result image, which is still distorted:

[result image]

2017-03-29 15:36:54 -0600 commented answer Aruco - draw position+orientation relative to marker

Hi, unfortunately the link in the answer no longer leads to any function. Could you post the function name or, maybe, the code that worked for you?

2017-03-29 00:43:17 -0600 commented question Weird ArUco marker behavior

Yeah... it's the blurring that produces this result; I just tested with a same-size but blurred image. Now I'm wondering how to fix this, because a real photo always has some amount of blur.

2017-03-29 00:39:28 -0600 commented question Weird ArUco marker behavior

Only the detection box. Here are both source images: https://www.dropbox.com/s/mbvtjvc5pl3...

2017-03-28 22:54:11 -0600 commented question Weird ArUco marker behavior

The second image is a result (on the right). 1.png and 2.png are two attempts, where 2.png is just a little bit smaller. I just got the idea to play with the detection params.

2017-03-28 15:35:38 -0600 received badge  Editor (source)
2017-03-28 15:34:43 -0600 asked a question Weird ArUco marker behavior

This is the first time I have worked with ArUco markers, and I was disappointed by the weird distortion after a perspective transformation based on the recognized markers. I assumed it was because of camera calibration errors and decided to test with "ideal" markers.

I calibrated the camera with a pattern file (not printed and photographed, just rendered from a source file). Then I generated my marker and placed it on a JPG image without any distortion. My program found the marker; I generated a new position for the marker and warped the image:

    // Build an axis-aligned square anchored at the detected marker's
    // top-left corner (corners[0] is the first detected marker).
    int side = 100;
    std::vector<Point2f> resultCorners;
    resultCorners.push_back(corners[0][0]);                                           // top-left stays
    resultCorners.push_back(Point2f(corners[0][0].x + side, corners[0][0].y));        // top-right
    resultCorners.push_back(Point2f(corners[0][0].x + side, corners[0][0].y + side)); // bottom-right
    resultCorners.push_back(Point2f(corners[0][0].x, corners[0][0].y + side));        // bottom-left
    // Map the detected quad onto the square and warp the whole image.
    Mat w = getPerspectiveTransform(corners[0], resultCorners);
    warpPerspective(img_copy, result, w, img_copy.size());

That worked OK: 1.PNG

Then I downscaled the source image with one pyrDown call and tried to detect the marker again:

2.PNG

Could you please explain why this weird distortion happens and how I can avoid it in a real situation with a photo or a video stream?

UPDATE: On the left is the source image, on the right the destination. I detect the marker in both to make sure that the code above turns the source image into an orthogonal projection to the camera. I assume I should use the vectors I receive from estimatePoseSingleMarkers here, but for now I'm simply confused by the distortion of the markers detected in the source image.
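
As mentioned in the comments, one thing I plan to try is subpixel corner refinement, since blur seems to be the culprit. A sketch, assuming a recent opencv_contrib aruco module (where DetectorParameters has cornerRefinementMethod); the dictionary choice here is arbitrary:

#include <opencv2/opencv.hpp>
#include <opencv2/aruco.hpp>
using namespace cv;

// Detect markers with subpixel corner refinement enabled.
void detectRefined(const Mat& image,
                   std::vector<std::vector<Point2f>>& corners,
                   std::vector<int>& ids)
{
    Ptr<aruco::Dictionary> dict = aruco::getPredefinedDictionary(aruco::DICT_6X6_250);
    Ptr<aruco::DetectorParameters> params = aruco::DetectorParameters::create();
    params->cornerRefinementMethod = aruco::CORNER_REFINE_SUBPIX;
    aruco::detectMarkers(image, dict, corners, ids, params);
}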

2016-09-20 08:02:47 -0600 received badge  Enthusiast
2016-09-14 14:07:27 -0600 received badge  Supporter (source)
2016-09-14 14:07:25 -0600 received badge  Scholar (source)
2016-09-14 14:07:20 -0600 commented answer Mat rotation center

Thanks! I expected the rotation to be centered in the resulting Mat, but now I see that it can't be without an explicitly specified offset:

Mat rot = getRotationMatrix2D(center, 90, 1);
rot.at<double>(0, 2) += 0;        // no extra x offset needed
rot.at<double>(1, 2) += size / 2; // shift down so the rotated line is centered
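
Put together, a complete sketch with the same 10x1 input as in the question below (the added translation is the only change):

#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;

int main()
{
    int size = 10;
    Mat test = Mat::zeros(Size(size, 1), CV_8U);
    line(test, Point(0, 0), Point(size - 1, 0), Scalar(255));

    // Rotate 90 degrees about the line's midpoint, then shift the result
    // down by half the output height so the rotated line lands inside
    // the 10x10 destination instead of being cropped at the top.
    Mat rot = getRotationMatrix2D(Point2f(size / 2.f, 0.f), 90, 1);
    rot.at<double>(1, 2) += size / 2.0;

    Mat dest;
    warpAffine(test, dest, rot, Size(size, size));
    std::cout << dest << std::endl; // vertical line, now centered
    return 0;
}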
2016-09-14 05:04:46 -0600 asked a question Mat rotation center

Hi, I'm rotating a 10x1 image, specifying the center at (5, 0):

[255, 255, 255, 255, 255, 255, 255, 255, 255, 255]

int size = 10;
Mat test = Mat::zeros(Size(size, 1), CV_8U);
line(test, Point(0, 0), Point(size - 1, 0), Scalar(255)); // fill the single row with 255

Mat rot = getRotationMatrix2D(Point(test.cols/2, test.rows/2), 90, 1); // rotate about (5, 0)
Mat dest;
warpAffine(test, dest, rot, Size(size, size));

imshow("Source", test);
imshow("Result", dest);
waitKey(0);

I assumed the result would be a vertical line at the center of the 10x10 Mat, but it is shifted and cropped at the top:

[  0,   0,   0,   0,   0, 255,   0,   0,   0,   0;
   0,   0,   0,   0,   0, 255,   0,   0,   0,   0;
   0,   0,   0,   0,   0, 255,   0,   0,   0,   0;
   0,   0,   0,   0,   0, 255,   0,   0,   0,   0;
   0,   0,   0,   0,   0, 255,   0,   0,   0,   0;
   0,   0,   0,   0,   0, 255,   0,   0,   0,   0;
   0,   0,   0,   0,   0,   0,   0,   0,   0,   0;
   0,   0,   0,   0,   0,   0,   0,   0,   0,   0;
   0,   0,   0,   0,   0,   0,   0,   0,   0,   0;
   0,   0,   0,   0,   0,   0,   0,   0,   0,   0]

I'm trying to understand why this happens.

2016-07-17 06:44:52 -0600 asked a question Understanding of planes in NAryMatIterator

I have a 3-dimensional matrix:

const int n_mat_size = 5;
const int n_mat_sz[] = { n_mat_size , n_mat_size, n_mat_size };
cv::Mat m1(3, n_mat_sz, CV_32FC1);

Now I'd like to iterate over its planes, and I expect to get three two-dimensional matrices:

const cv::Mat* arrays[] = { &m1, 0 };
cv::Mat planes[3];
cv::NAryMatIterator it(arrays, planes);
std::cout << it.nplanes << ", " << it.planes[0].rows << ", " << it.planes[0].cols;

I expected to get the output "3, 5, 5", but instead I get "1, 1, 125". Where are the slices of the matrix?
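
If I read the documentation correctly, NAryMatIterator merges continuous data into a single plane, which would explain the 1 x 125 result. For reference, an untested sketch of how I would extract one 5x5 slice manually instead (the slice index is arbitrary):

#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    const int n_mat_size = 5;
    const int n_mat_sz[] = { n_mat_size, n_mat_size, n_mat_size };
    cv::Mat m1(3, n_mat_sz, CV_32FC1, cv::Scalar(0));

    // A 2-D 5x5 header over one slice along the first dimension;
    // no data is copied (consecutive slices are step[0] bytes apart).
    int i = 2; // arbitrary slice index
    cv::Mat slice(n_mat_size, n_mat_size, CV_32FC1, m1.ptr(i));
    std::cout << slice << std::endl;
    return 0;
}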