
Tetragramm's profile - activity

2020-04-15 15:25:59 -0500 commented question Camera position in world coordinate is not working but object pose in camera co ordinate system is working properly

You are correct. +Z should be in front of the camera. Not sure what I was thinking.

2020-04-02 14:50:13 -0500 commented question Printer ink not black in IR

Try shining an incandescent light on the paper. That should help distinguish the light and dark parts, though I don't know for sure.

2020-03-13 15:39:15 -0500 edited question Unhandled exception at 0x00007FFB8B3F9159 in test1.exe: Microsoft C++ exception: cv::Exception at memory location 0x000000A35178F4E0.


2020-02-18 08:39:41 -0500 received badge  Good Answer (source)
2020-01-22 23:06:57 -0500 edited answer Unexpected Results at Subtracting Two Images

EDIT: See comments about saturate_cast and the missing else. Also, that is what references are for. Passing things into a function by reference avoids copying them.

2020-01-19 16:48:04 -0500 commented answer Unexpected Results at Subtracting Two Images

Oh, I see. saturate_cast will do what needs to be done, but your problem is your if(iVal<0). If it is less than 0 you handle it, but the missing else means the value never gets written otherwise.

2020-01-18 22:05:45 -0500 answered a question vector results inconsistent what am I doing?[SOLVED]

cv::multiply is not a cross product. It is an element-wise product. You want to do vec_cross_result = PQ.cross(PR);
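The distinction can be sketched with NumPy (hypothetical values; np.cross plays the role of cv::Mat::cross here, and plain `*` the role of cv::multiply):

```python
import numpy as np

# Two edge vectors of a triangle P-Q-R (hypothetical values).
PQ = np.array([1.0, 0.0, 0.0])
PR = np.array([0.0, 1.0, 0.0])

elementwise = PQ * PR     # what cv::multiply computes: per-component product
cross = np.cross(PQ, PR)  # what PQ.cross(PR) computes: a perpendicular vector

print(elementwise)  # [0. 0. 0.]
print(cross)        # [0. 0. 1.]
```

The element-wise product of two perpendicular unit vectors is all zeros, while the cross product gives the normal of the plane they span, which is what you want for a triangle normal.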

2020-01-18 22:03:37 -0500 commented answer Unexpected Results at Subtracting Two Images

Did you try that exact line?

2020-01-17 16:19:59 -0500 edited answer Unexpected Results at Subtracting Two Images

int iVal = inputImage1->at<uchar>(i,j) - inputImage2->at<uchar>(i,j); This line. Cast them to int before subtracting, or the uchar arithmetic wraps around.
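The wrap-around being described can be reproduced in NumPy, where uint8 behaves like uchar in C++ (a minimal sketch with assumed pixel values; cv2.subtract performs the saturating version for you):

```python
import numpy as np

a = np.array([[10]], dtype=np.uint8)
b = np.array([[20]], dtype=np.uint8)

# Naive uint8 subtraction wraps around: 10 - 20 -> 246. This is the bug.
wrapped = a - b

# Cast to a wider int first, then clamp to [0, 255] -- the saturate_cast behaviour.
diff = a.astype(np.int16) - b.astype(np.int16)
saturated = np.clip(diff, 0, 255).astype(np.uint8)

print(wrapped[0, 0])    # 246
print(saturated[0, 0])  # 0
```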

2020-01-17 16:18:29 -0500 commented question Unexpected Results at Subtracting Two Images

int iVal = inputImage1->at<uchar>(i,j) - inputImage2->at<uchar>(i,j); This line. Cast them to int before subtracting, or the uchar arithmetic wraps around.

2020-01-04 17:18:24 -0500 received badge  Good Answer (source)
2019-12-31 19:51:16 -0500 commented question Learning the background model using images

You can extract the modeled background from the OpenCV BackgroundSubtractors. https://docs.opencv.org/4.2.0/d7/df6/clas

2019-12-31 16:22:45 -0500 answered a question MOG2 algorithm info

Per the documentation, see the paper [271]: Zoran Zivkovic and Ferdinand van der Heijden, "Efficient adaptive density estimation per image pixel for the task of background subtraction."

2019-12-11 19:24:10 -0500 commented question cv2.imshow and cv2.imwrite show different output from the same array[SOLVED]

I'm not sure, but does the array have 4 channels? If so, try setting the last one (which may be treated as a transparency channel) to 255.
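The suggestion can be sketched in NumPy (assumed BGRA layout; a viewer that composites the fourth channel as alpha shows nothing when it is all zeros, even though imwrite stores the color channels fine):

```python
import numpy as np

# A 4-channel BGRA image whose alpha channel is all zeros: fully transparent.
img = np.zeros((4, 4, 4), dtype=np.uint8)
img[..., :3] = 200   # bright BGR values, invisible while alpha == 0

img[..., 3] = 255    # set the alpha channel fully opaque

print(img[0, 0])     # [200 200 200 255]
```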

2019-12-03 23:29:15 -0500 received badge  Nice Answer (source)
2019-11-28 00:05:54 -0500 answered a question Detect ellipses, ovals in images with OpenCV python[SOLVED]

Well, if these are representative images, you are probably better off doing something besides HoughCircles.

2019-11-20 00:14:13 -0500 commented question How to calculate the average with CV_32F foramt?

That sounds like the CV_32F contains a NaN to begin with. Try running patchNaNs before mean to see if that fixes it.
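How a single NaN poisons the mean, and the patch step, sketched with NumPy (cv2.patchNaNs(data, 0.0) does the replacement in place; the assumed values are illustrative):

```python
import numpy as np

data = np.array([1.0, 2.0, np.nan, 4.0], dtype=np.float32)

print(np.mean(data))  # nan -- one NaN poisons the whole mean

# NumPy equivalent of cv2.patchNaNs(data, 0.0):
data[np.isnan(data)] = 0.0
print(np.mean(data))  # 1.75
```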

2019-10-21 13:06:48 -0500 received badge  Nice Answer (source)
2019-09-17 22:17:45 -0500 commented question is there an unnecessary code repeat?

I would like to think I had a reason for putting that there, but I can't remember, and it certainly looks unnecessary.

2019-08-08 18:07:49 -0500 answered a question Parallel READ Access on Mats

Reading is no problem at all. Writing to different locations is ok, as long as you don't do anything that alters the Mat structure (resizing, reallocating, etc.).

2019-07-17 23:14:13 -0500 commented question How to convert cv::Mat* to cv::Mat?

https://www.carrida-technologies.com/doc/SDK/4.3.1/html/classsvcapture_1_1_image.html Well, that's remarkably un-informative.

2019-07-17 18:18:15 -0500 commented question How to grow bright pixels in grey region?

Are the only values 0, 127 and 255?

2019-07-06 13:38:29 -0500 commented answer one point (u,v) to actual (x,y) w.r.t camera frame?

It is much more accurate to get the depth at the particular pixel. That accounts for all the error in whether the table

2019-07-04 23:12:37 -0500 commented answer one point (u,v) to actual (x,y) w.r.t camera frame?

They have depth. You do not. If you know the distance to the table (t), and the table is parallel to the image plane (R), you can still recover the position.

2019-06-25 17:53:01 -0500 answered a question Translating from camera space, into see through display space

The short answer is, you can't do this properly, but you can fake it. Long answer is, because your camera isn't aligned with your eye.

2019-06-24 17:23:54 -0500 answered a question How hard is it to only extract specific class and functions?

Pretty hard. However, you can build and link the static libraries instead of DLLs, and it should, maybe, possibly remove the unused code at link time.

2019-06-21 18:26:49 -0500 commented answer How to get undistorded point

Perhaps you could share the code you used to call undistortPoints? I can't possibly tell if you're using it correctly.

2019-06-12 09:00:24 -0500 commented answer one point (u,v) to actual (x,y) w.r.t camera frame?

When the camera is parallel to the table, yes, you can make the assumption that R is zero. t must obviously be [0,0,z].

2019-06-07 19:27:09 -0500 commented question Assisting the compiler into generating better code

Then do please make a pull-request on github.

2019-06-06 19:05:42 -0500 commented answer Replace subsection of Mat with another Mat.

Both. This uses some known guarantees about the Mat memory structure to be much faster than a pixel-by-pixel method.

2019-06-06 19:03:04 -0500 commented answer one point (u,v) to actual (x,y) w.r.t camera frame?

Nope, that's not the right calculation. And yes, it could also be the non-flatness of the table. But unless that slope is significant, it shouldn't dominate the error.

2019-06-05 19:17:29 -0500 commented answer one point (u,v) to actual (x,y) w.r.t camera frame?

There are two main possibilities. It could be either, or both. First is camera distortion. Distortion typically gets worse the farther you are from the center of the image.

2019-06-04 18:28:18 -0500 answered a question SolvePnp in millimeters instead of pixels

If your object_points are described in mm, then your tvec will similarly be in mm.

2019-06-04 18:25:54 -0500 commented answer one point (u,v) to actual (x,y) w.r.t camera frame?

First a question. You wish to move to an X,Y. Is it true that the farther from the destination you are, the farther th

2019-06-04 18:22:19 -0500 commented question Assisting the compiler into generating better code

Well, you can make the change for your personal use and submit it as a pull-request. But PowerPC is not a particularly common architecture among OpenCV users.

2019-06-03 19:59:03 -0500 commented question Assisting the compiler into generating better code

The size of the binaries is a concern, and this is doubling the size of the code for basically all the cvtColors.

2019-06-03 19:53:16 -0500 commented answer one point (u,v) to actual (x,y) w.r.t camera frame?

That would work as long as the camera is pointed straight down at the table. And as long as you're ok with the center of the image being your origin.

2019-05-30 22:18:28 -0500 commented answer one point (u,v) to actual (x,y) w.r.t camera frame?

This sort of question is best answered by the documentation for the function. calibrateCamera treats the corner of the chessboard pattern as the world origin.

2019-05-29 18:21:25 -0500 edited question Blob Detector Not working when it should on obvious blobs, makes no sense

Blob Detector Not working when it should on obvious blobs, makes no sense So I have an HSV filtered image that I am tryi

2019-05-29 18:04:08 -0500 commented answer one point (u,v) to actual (x,y) w.r.t camera frame?

You said you know the orientation and position of the camera. Is that from an OpenCV function that gives you rvec and tvec?

2019-05-28 19:16:33 -0500 answered a question one point (u,v) to actual (x,y) w.r.t camera frame?

Based on your clarification, you can do what you need. First you calculate the vector in space. If you don't have the depth directly, you use the known distance to the table.
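The "vector in space" step can be sketched as a pinhole back-projection. With intrinsics fx, fy, cx, cy (assumed values here; in practice from calibrateCamera) and a known distance z to a table parallel to the image plane, the pixel maps to camera coordinates as x = (u-cx)·z/fx, y = (v-cy)·z/fy:

```python
import numpy as np

# Assumed intrinsics and table distance (illustrative values).
fx, fy = 800.0, 800.0
cx, cy = 320.0, 240.0
z = 1000.0  # distance to the table, in mm

def pixel_to_table(u, v):
    """Back-project pixel (u, v) onto a plane at depth z, parallel to the image."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

print(pixel_to_table(320, 240))  # [   0.    0. 1000.] -- principal point is on the optical axis
print(pixel_to_table(400, 240))  # [ 100.    0. 1000.]
```

Using undistorted pixel coordinates first (cv2.undistortPoints) makes this noticeably more accurate away from the image center.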

2019-05-27 17:44:38 -0500 answered a question How to get undistorded point

Take a look at cv::fisheye::undistortPoints.

2019-05-27 17:43:25 -0500 commented question one point (u,v) to actual (x,y) w.r.t camera frame?

Do you not need the z coordinate because you know how far it is? Or at least the location of the camera relative to some surface?

2019-05-18 09:40:26 -0500 answered a question I have system with three camera. I have R and T matrix between C1 & C2 also between C2 & C3. How to transform a point from first camera to third camera?

OpenCV provides the cv::composeRT function, which combines two sets of rotation and translation transforms.
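The composition rule is: if X2 = R12·X1 + t12 and X3 = R23·X2 + t23, then R13 = R23·R12 and t13 = R23·t12 + t23. A NumPy check with assumed rotations (cv::composeRT takes rvecs, i.e. Rodrigues vectors, rather than matrices, but implements the same algebra):

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Assumed camera-to-camera transforms (illustrative values).
R12, t12 = rot_z(0.3), np.array([1.0, 0.0, 0.0])
R23, t23 = rot_z(0.5), np.array([0.0, 2.0, 0.0])

# Composed transform from camera 1 straight to camera 3.
R13 = R23 @ R12
t13 = R23 @ t12 + t23

# Verify against pushing a point through both steps explicitly.
X1 = np.array([0.5, -1.0, 2.0])
X3_direct = R13 @ X1 + t13
X3_chained = R23 @ (R12 @ X1 + t12) + t23
print(np.allclose(X3_direct, X3_chained))  # True
```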

2019-05-11 16:07:34 -0500 answered a question What patern to be detected from far away

Try a simple chessboard pattern. A 2x2 chessboard can be seen from very far away, and the center is the center no matter how blurred it gets.

2019-04-27 15:23:55 -0500 answered a question Why at() return 3 values for a grayscale image

Probably because you're asking for 3 uchar values. Look at the lines with .at<Vec3b>. The type between the < and > determines what you get back; for a grayscale image use .at<uchar>.

2019-04-05 17:02:09 -0500 commented question How to estimate object POSE when there are not enough features for SolvePnP?

So, I can see a lot more than one feature. Every corner is a feature. You have to do some logic to match those to the 3D model points.

2019-04-04 20:17:03 -0500 answered a question Mean position of white pixels

You want to use cv::moments using a.col(i), where i is the column number. The output is a structure that contains the moments; the mean position along the column is m01/m00.
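A sketch of that per-column centroid with NumPy (assumed test image; for a single column, m00 is the sum of pixel values and m01 the intensity-weighted sum of row indices, the same fields cv::moments exposes):

```python
import numpy as np

# A binary image: white pixels (255) at known rows in each column.
img = np.zeros((10, 3), dtype=np.uint8)
img[2, 0] = 255
img[4, 0] = 255   # column 0: white at rows 2 and 4 -> mean row 3.0
img[7, 1] = 255   # column 1: white at row 7

means = {}
for i in range(img.shape[1]):
    col = img[:, i].astype(np.float64)
    m00 = col.sum()                      # zeroth moment
    if m00 == 0:
        continue                         # no white pixels in this column
    rows = np.arange(col.size)
    m01 = (rows * col).sum()             # first moment along the rows
    means[i] = m01 / m00                 # intensity-weighted mean row

print(means)  # {0: 3.0, 1: 7.0}
```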

2019-02-09 09:20:53 -0500 commented question Interpretation of translational vectors results of camera calibration.

So my thought is this. You are using something with real cameras. They have lenses that look something like THIS. But

2019-02-06 17:50:44 -0500 commented question Interpretation of translational vectors results of camera calibration.

Are the cameras pointing straight, or are they canted inwards or outwards? IE: ||, /\, or \/ One possibility is that t