
sgarrido's profile - activity

2015-04-24 02:47:38 -0600 answered a question convert a position from a smaller image to a bigger image.

You can transform points from the small image to the big image (or vice versa) with a simple cross-multiplication (rule of three). For example:

    cv::Mat smallImg, bigImg;
    cv::Point2f bigPnt, smallPnt;
    ...
    // scale each coordinate by the ratio between the two image sizes
    bigPnt.x = smallPnt.x * float(bigImg.cols) / float(smallImg.cols);
    bigPnt.y = smallPnt.y * float(bigImg.rows) / float(smallImg.rows);

A rectangle (ROI) can be transformed in the same way:

    cv::Mat smallImg, bigImg;
    cv::Rect bigROI, smallROI;
    ...
    // note: cv::Rect stores ints, so the scaled values are truncated;
    // wrap each expression in cvRound() if you prefer rounding
    bigROI.x = smallROI.x * float(bigImg.cols) / float(smallImg.cols);
    bigROI.y = smallROI.y * float(bigImg.rows) / float(smallImg.rows);
    bigROI.width = smallROI.width * float(bigImg.cols) / float(smallImg.cols);
    bigROI.height = smallROI.height * float(bigImg.rows) / float(smallImg.rows);
2015-04-24 02:29:17 -0600 commented answer Get the 3D Point in another coordinate system

That depends on how your cameras are arranged. As for the website, it seems to be a problem with the University's hosting. I hope it gets fixed soon.

2015-04-20 10:06:54 -0600 answered a question Problem with cv::reduce when trying to sum both columns and rows

I think the problem may be the size of mask.

I guess the number of rows of mask is right, but the number of columns is zero. You can check by printing mask.cols and mask.rows. You may have accidentally initialized it with something like: cv::Mat mask(100,0,CV_...

That would explain why it fails when summing over columns, but not when summing over rows.

Anyway, I have just noticed that the output of cv::reduce cannot be of type uint8_t, since it does not have enough capacity to store the sums. For an input of type uint8_t, the output needs to be of type int or larger.
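
For illustration, a minimal sketch of both reductions with an explicit output depth ("image.png" is just a placeholder; CV_REDUCE_SUM is the OpenCV 2.x name, newer versions also accept cv::REDUCE_SUM):

    cv::Mat img = cv::imread("image.png", 0);       // any CV_8UC1 image
    cv::Mat colSums, rowSums;
    // dim = 0 collapses the rows: one sum per column (1 x img.cols)
    cv::reduce(img, colSums, 0, CV_REDUCE_SUM, CV_32S);
    // dim = 1 collapses the columns: one sum per row (img.rows x 1)
    cv::reduce(img, rowSums, 1, CV_REDUCE_SUM, CV_32S);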

2015-04-20 09:42:15 -0600 commented answer Get the 3D Point in another coordinate system

You need the transformation between the depth camera's system and the rgb camera's system. Then you can transform points the same way you do between the board's system and a camera's system.
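
For reference, applying such a rigid transformation to a 3D point could look like this (a rough sketch; rvecDepthToRgb, tvecDepthToRgb and X, Y, Z are assumed names for the calibrated depth-to-rgb extrinsics, both CV_64F, and the point's coordinates):

    cv::Mat R;
    cv::Rodrigues(rvecDepthToRgb, R);                  // 3x1 rotation vector -> 3x3 matrix
    cv::Mat p = (cv::Mat_<double>(3,1) << X, Y, Z);    // point in the depth camera's system
    cv::Mat pRgb = R * p + tvecDepthToRgb;             // same point in the rgb camera's system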

2015-04-20 06:21:17 -0600 received badge  Supporter (source)
2015-04-20 05:24:52 -0600 answered a question Find the number of white pixels in contour

You can use pointPolygonTest() (java version) to determine which image pixels lie inside a contour. Then you can count how many of those inner pixels are white.

To speed up the process you don't need to test every image pixel, only those inside the contour's bounding box.
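
A rough C++ sketch of the idea (the linked docs are the Java version, but the calls are the same); img is assumed to be a binary CV_8UC1 image and contour a single contour from findContours():

    cv::Rect box = cv::boundingRect(contour);
    int whiteCount = 0;
    for (int y = box.y; y < box.y + box.height; y++)
        for (int x = box.x; x < box.x + box.width; x++)
            // measureDist = false: result is > 0 inside, 0 on the edge, < 0 outside
            if (cv::pointPolygonTest(contour, cv::Point2f((float)x, (float)y), false) >= 0
                    && img.at<uchar>(y, x) == 255)
                whiteCount++;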

2015-04-20 04:21:46 -0600 commented answer Get the 3D Point in another coordinate system

Yes, the method BoardDetector::getDetectedBoard() returns an object of type Board which includes the Rvec and Tvec of the detected board. There are also OpenGL integration examples for both single markers and boards.

2015-04-20 03:31:05 -0600 received badge  Teacher (source)
2015-04-19 05:34:05 -0600 received badge  Editor (source)
2015-04-19 05:19:48 -0600 answered a question OpenCV Assertion failed when using perspective transform

Hi!

In this line:

    ...
    cv::projectPoints(TheMarkers[i],TheMarkers[i].Rvec,TheMarkers[i].Tvec,TheCameraParameters.CameraMatrix,TheCameraParameters.Distorsion,projectedPoints);
    ...

TheMarkers[i] is a std::vector<cv::Point2f> (aruco::Marker inherits from std::vector<cv::Point2f>), i.e. a vector composed of the 2D image coordinates of the marker's four corners.

You are trying to project 2D points, which is probably producing the error.

Anyway, I don't see the point of your project. You say you want to "define the camera position in world coordinates system using AR markers". You already have that information in TheMarkers[i].Rvec and TheMarkers[i].Tvec.
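
For reference, cv::projectPoints expects 3D object points. A hedged sketch, assuming the four marker corners are expressed in the marker's own coordinate system and markerSize is a variable holding the physical side length (the corner order should match the one used by ArUco):

    float s = markerSize / 2.f;
    std::vector<cv::Point3f> objectPoints;
    objectPoints.push_back(cv::Point3f(-s,  s, 0));   // marker corners in the marker's system
    objectPoints.push_back(cv::Point3f( s,  s, 0));
    objectPoints.push_back(cv::Point3f( s, -s, 0));
    objectPoints.push_back(cv::Point3f(-s, -s, 0));
    std::vector<cv::Point2f> projectedPoints;
    cv::projectPoints(objectPoints, TheMarkers[i].Rvec, TheMarkers[i].Tvec,
                      TheCameraParameters.CameraMatrix, TheCameraParameters.Distorsion,
                      projectedPoints);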

2015-04-19 04:40:55 -0600 answered a question What is the meaning of: TheMarkers[0].Tvec.at<Vec3f>(0,0)[0];

Hi,

TheMarkers[0].Tvec is a cv::Mat composed of 3 floats (3x1: 3 rows and 1 column), so this code is equivalent:

    double x_t = -TheMarkers[0].Tvec.at<float>(0,0);
    double y_t = TheMarkers[0].Tvec.at<float>(1,0);
    double z_t = TheMarkers[0].Tvec.at<float>(2,0);

The problem is that sometimes you might forget whether Tvec's size is 3x1 or 1x3 and mix up the at() indices.

By using cv::Vec3f you avoid this problem, since you don't need to specify rows or columns.

An alternative is using .ptr():

    // ptr<float>(0) points to the first element; since the 3x1 data is
    // contiguous, the indices [1] and [2] reach the second and third rows
    double x_t = -TheMarkers[0].Tvec.ptr<float>(0)[0];
    double y_t = TheMarkers[0].Tvec.ptr<float>(0)[1];
    double z_t = TheMarkers[0].Tvec.ptr<float>(0)[2];
2015-04-19 04:15:06 -0600 answered a question Get the 3D Point in another coordinate system

Hi,

ArUco provides the transformation from the marker's coordinate system to the camera's system.

As Eduardo said, you can transform the finger point to the marker's system just by applying the inverse transformation.

However, you say that your camera's coordinate system is left-handed, while the transformation provided by ArUco assumes a right-handed camera system (the same as OpenCV).

If your finger point is referred to a left-handed system, you have to transform the point to the ArUco right-handed camera system before applying the inverse marker transformation.

Considering your picture, you can accomplish this by simply negating the Y coordinate of the finger point.
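
Put together, a minimal sketch (fingerCam is an assumed 3x1 CV_32F point in the camera's left-handed system; marker.Rvec and marker.Tvec are the pose returned by ArUco):

    cv::Mat R;
    cv::Rodrigues(marker.Rvec, R);                     // marker -> camera rotation (3x3)

    fingerCam.at<float>(1,0) *= -1.f;                  // left-handed -> right-handed: negate Y

    // inverse transformation: camera's system -> marker's (board's) system
    cv::Mat fingerMarker = R.t() * (fingerCam - marker.Tvec);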