Tetragramm's profile - activity

2019-08-08 18:07:49 -0500 answered a question Parallel READ Access on Mats

Reading is no problem at all. Writing to different locations is ok, as long as you don't do anything that alters the Mat…
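The claim above can be sketched with a minimal Python example. In Python, OpenCV images are numpy arrays, so a numpy array stands in for the cv::Mat here; the row sums and thread count are illustrative only.

```python
import threading
import numpy as np

# A stand-in for a cv::Mat: in Python, OpenCV images are numpy arrays.
img = np.arange(12, dtype=np.uint8).reshape(3, 4)

results = [None] * 3

def read_row(i):
    # Concurrent reads never modify the array, so no locking is needed.
    results[i] = int(img[i].sum())

threads = [threading.Thread(target=read_row, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # [6, 22, 38] -- each thread read its own row safely
```

Writing is where care is needed: anything that reallocates the buffer (resize, release, type change) invalidates what other threads are reading.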

2019-07-17 23:14:13 -0500 commented question How to convert cv::Mat* to cv::Mat?

https://www.carrida-technologies.com/doc/SDK/4.3.1/html/classsvcapture_1_1_image.html Well that's remarkably un-informative…

2019-07-17 18:18:15 -0500 commented question How to grow bright pixels in grey region?

Are the only values 0, 127 and 255?

2019-07-06 13:38:29 -0500 commented answer one point (u,v) to actual (x,y) w.r.t camera frame?

It is much more accurate to get the depth at the particular pixel. That accounts for all the error in whether the table…

2019-07-04 23:12:37 -0500 commented answer one point (u,v) to actual (x,y) w.r.t camera frame?

They have depth. You do not. If you know the distance to the table (t), and the table is parallel to the plane (R), you…

2019-06-25 17:53:01 -0500 answered a question Translating from camera space, into see through display space

The short answer is, you can't do this properly, but you can fake it. Long answer is, because your camera isn't aligned…

2019-06-24 17:23:54 -0500 answered a question How hard is it to only extract specific class and functions?

Pretty hard. However, you can build and link the static libraries instead of DLLs, and it should, maybe, possibly remove…

2019-06-21 18:26:49 -0500 commented answer How to get undistorded point

Perhaps you could share the code you used to call undistortPoints? I can't possibly tell if you're using it correctly.

2019-06-12 09:00:24 -0500 commented answer one point (u,v) to actual (x,y) w.r.t camera frame?

When the camera is parallel to the table, yes, you can make the assumption that R is zero. t must obviously be [0,0,z]…

2019-06-07 19:27:09 -0500 commented question Assisting the compiler into generating better code

Then do please make a pull-request on github.

2019-06-06 19:05:42 -0500 commented answer Replace subsection of Mat with another Mat.

Both. This uses some known guarantees about the Mat memory structure to be much faster than a pixel-by-pixel method.
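The "known guarantees about the Mat memory structure" the answer refers to boil down to bulk row copies instead of per-pixel loops. A numpy sketch of the same idea (slice assignment is the Python analogue of `patch.copyTo(big(rect))`; the sizes are made up):

```python
import numpy as np

big = np.zeros((6, 6), dtype=np.uint8)
patch = np.full((2, 3), 255, dtype=np.uint8)

# Analogue of patch.copyTo(big(cv::Rect(1, 2, 3, 2))):
# one bulk memory copy per row instead of a pixel-by-pixel loop.
big[2:4, 1:4] = patch

print(big[2, 1], big[1, 1])  # 255 0
```

Because rows are contiguous in memory, each row of the ROI is a single memcpy, which is what makes this much faster than iterating over pixels.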

2019-06-06 19:03:04 -0500 commented answer one point (u,v) to actual (x,y) w.r.t camera frame?

Nope, that's not the right calculation. And yes, it could also be the non-flatness of the table. But unless that slope…

2019-06-05 19:17:29 -0500 commented answer one point (u,v) to actual (x,y) w.r.t camera frame?

There are two main possibilities. It could be either, or both. First is camera distortion. Distortion typically gets…

2019-06-04 18:28:18 -0500 answered a question SolvePnp in millimeters instead of pixels

If your object_points are described in mm, then your tvec will similarly be in mm.

2019-06-04 18:25:54 -0500 commented answer one point (u,v) to actual (x,y) w.r.t camera frame?

First, a question. You wish to move to an X,Y. Is it true that the farther from the destination you are, the farther the…

2019-06-04 18:22:19 -0500 commented question Assisting the compiler into generating better code

Well, you can make the change for your personal use and submit it as a pull-request. But PowerPC is not a particularly…

2019-06-03 19:59:03 -0500 commented question Assisting the compiler into generating better code

The size of the binaries is a concern, and this is doubling the size of the code for basically all the cvtColors. There…

2019-06-03 19:53:16 -0500 commented answer one point (u,v) to actual (x,y) w.r.t camera frame?

That would work as long as the camera is pointed straight down at the table. And as long as you're ok with the center of…

2019-05-30 22:18:28 -0500 commented answer one point (u,v) to actual (x,y) w.r.t camera frame?

This sort of question is best answered by the documentation for the function HERE. calibrateCamera treats the corner of…

2019-05-29 18:21:25 -0500 edited question Blob Detector Not working when it should on obvious blobs, makes no sense

So I have an HSV filtered image that I am trying…

2019-05-29 18:04:08 -0500 commented answer one point (u,v) to actual (x,y) w.r.t camera frame?

You said you know the orientation and position of the camera. Is that from an opencv function that gives you rvec and tvec…

2019-05-28 19:16:33 -0500 answered a question one point (u,v) to actual (x,y) w.r.t camera frame?

Based on your clarification, you can do what you need. First you calculate the vector in space. If you don't have the…
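The "vector in space" step can be sketched in a few lines: back-project the pixel through the inverse camera matrix, then scale by the known depth. The intrinsics below are made-up numbers; in practice they come from calibrateCamera.

```python
import numpy as np

# Hypothetical intrinsics (fx, fy, cx, cy) -- yours come from calibrateCamera.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def pixel_to_point(u, v, depth, K):
    """Back-project pixel (u, v) to a 3D point at the given depth (camera frame)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # line-of-sight direction
    return ray * depth                              # scale so Z == depth

p = pixel_to_point(480.0, 240.0, 1000.0, K)
print(p)  # X = 1000*(480-320)/800 = 200, Y = 0, Z = 1000
```

This assumes the pixel has already been undistorted; with raw pixels, run undistortPoints first.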

2019-05-27 17:44:38 -0500 answered a question How to get undistorded point

Take a look at cv::fisheye::undistortPoints.

2019-05-27 17:43:25 -0500 commented question one point (u,v) to actual (x,y) w.r.t camera frame?

Do you not need the z coordinate because you know how far away it is? Or at least the location of the camera relative to some surface…

2019-05-18 09:40:26 -0500 answered a question I have system with three camera. I have R and T matrix between C1 & C2 also between C2 & C3. How to transform a point from first camera to third camera?

OpenCV provides the composeRT function, which combines two sets of rotation and translation transforms.
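For intuition, here is the composition composeRT performs, written out with plain 3x3 rotation matrices (composeRT itself takes rvecs in Rodrigues form; the toy transforms below are made up for illustration):

```python
import numpy as np

def compose_rt(R12, t12, R23, t23):
    """Chain p3 = R23 @ (R12 @ p1 + t12) + t23 into a single (R13, t13)."""
    R13 = R23 @ R12
    t13 = R23 @ t12 + t23
    return R13, t13

# Toy transforms: 90-degree rotation about Z, then a pure translation.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
t12 = np.array([1.0, 0.0, 0.0])
R23 = np.eye(3)
t23 = np.array([0.0, 2.0, 0.0])

R13, t13 = compose_rt(Rz, t12, R23, t23)
p1 = np.array([1.0, 0.0, 0.0])
direct = R23 @ (Rz @ p1 + t12) + t23   # apply the two transforms in sequence
composed = R13 @ p1 + t13              # apply the single composed transform
print(np.allclose(direct, composed))   # True
```

So with (R12, t12) taking C1 to C2 and (R23, t23) taking C2 to C3, the composed pair maps points directly from the first camera's frame to the third's.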

2019-05-11 16:07:34 -0500 answered a question What patern to be detected from far away

Try a simple chessboard pattern. A 2x2 chessboard can be seen from very far away, and the center is the center no matter…

2019-04-27 15:23:55 -0500 answered a question Why at() return 3 values for a grayscale image

Probably because you're asking for 3 uchar values. Look at the lines with .at<Vec3b>. The type between the < and >…
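The mistake can be reproduced in numpy terms: reading a 1-channel image as if it were 3-channel just pulls in bytes from the neighbouring pixels. The tiny image below is made up for illustration.

```python
import numpy as np

# An 8-bit grayscale image: one uchar per pixel.
gray = np.array([[10, 20, 30, 40]], dtype=np.uint8)

# Correct access, the analogue of gray.at<uchar>(0, 1): one value per pixel.
print(gray[0, 1])  # 20

# The analogue of gray.at<Vec3b>(0, 0): three consecutive bytes are
# reinterpreted as one "pixel", so you get 3 values that are not a colour.
wrong = gray.ravel()[0:3]
print(wrong)  # [10 20 30]
```

The type inside the angle brackets must match the Mat's actual element type (CV_8UC1 goes with uchar, CV_8UC3 with Vec3b), or at() silently reads the wrong memory.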

2019-04-05 17:02:09 -0500 commented question How to estimate object POSE when there are not enough features for SolvePnP?

So, I can see a lot more than one feature. Every corner is a feature. You have to do some logic to match those to the 3D…

2019-04-04 20:17:03 -0500 answered a question Mean position of white pixels

You want to use cv::moments using a.col(i), where i is the column number. The output is a structure that contains the moments…
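On a single column, the relevant moments reduce to simple sums, which makes the "mean position" explicit. A numpy sketch with a made-up column of a binary mask:

```python
import numpy as np

# One column of a binary mask: white pixels (255) at rows 2, 3, 6.
col = np.zeros(8, dtype=np.uint8)
col[[2, 3, 6]] = 255

# For a column, cv::moments reduces to: m00 = sum of values,
# m01 = sum of (row * value); the mean row is m01 / m00.
m00 = col.sum()
m01 = (np.arange(col.size) * col.astype(np.int64)).sum()
mean_row = m01 / m00
print(mean_row)  # (2 + 3 + 6) / 3, about 3.667
```

The Moments structure returned by cv::moments holds these as m00, m01, m10, and so on, so the centroid is (m10/m00, m01/m00).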

2019-02-09 09:20:53 -0500 commented question Interpretation of translational vectors results of camera calibration.

So my thought is this. You are using something with real cameras. They have lenses that look something like THIS. But…

2019-02-06 17:50:44 -0500 commented question Interpretation of translational vectors results of camera calibration.

Are the cameras pointing straight, or are they canted inwards or outwards? IE: ||, /\, or \/. One possibility is that the…

2019-02-05 17:47:55 -0500 commented question Interpretation of translational vectors results of camera calibration.

Your depth is short too, yes? How confident are you on the size of your pattern? Is it perhaps slightly larger or smaller…

2019-02-04 23:14:56 -0500 commented answer Multi-tracking Kalman Filter Problem

And did you make sure the values of state[0] and state[1] are different?

2019-02-04 23:13:20 -0500 commented answer Triangulation from spheric model

Well, if you can turn the pixels into a line of sight vector, and you've got your tvecs and rvecs, which should remain the…

2019-02-03 19:01:58 -0500 answered a question Triangulation from spheric model

Take a look at this prototype module called Mapping3D. It contains a couple of classes and functions to help you do just that…

2019-02-01 22:00:32 -0500 answered a question Multi-tracking Kalman Filter Problem

I'm almost certain you're making shallow Mat copies somewhere. For example: the line kf[index].statePost = state[index]…
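The shallow-copy trap is easy to demonstrate. In Python the same aliasing happens with numpy arrays (which is how cv2 exposes Mats); the state values below are made up:

```python
import numpy as np

state = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]

# Shallow copy: statePost and state[0] share the same buffer, exactly
# like assigning one cv::Mat header to another in C++.
statePost = state[0]
statePost[0] = 99.0
print(state[0][0])  # 99.0 -- the "source" changed too!

# Deep copy, the analogue of state[1].clone() / copyTo():
independent = state[1].copy()
independent[0] = 99.0
print(state[1][0])  # 3.0 -- unaffected
```

With multiple Kalman filters, a shared buffer means every filter ends up tracking the same state; clone the Mat (or .copy() the array) when assigning statePost.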

2019-01-30 17:50:10 -0500 commented question How can I get bounding box around the specific quadrants(locations) the difference is in, instead of the exact difference of contents in image?

It sounds like you need to practice your basic Python. It looks like you've got everything in lists, and if you can do…

2019-01-29 22:55:55 -0500 commented question How can I get bounding box around the specific quadrants(locations) the difference is in, instead of the exact difference of contents in image?

Are you saying you've got it all working, or is there another question there?

2019-01-28 19:04:54 -0500 commented question Interpretation of translational vectors results of camera calibration.

I assume you're estimating lens distortion along with everything else? If you aren't, that would do it.

2019-01-28 18:54:53 -0500 commented question How to calibrate a camera with a movable camera?

I'm not sure what you mean by calibration. What is it you're trying to do with them?

2019-01-24 22:44:02 -0500 commented question Thermal Calibration Pattern Detection

Is it 8-bit or 16-bit data? If it's 16, can you post the original as a 16-bit PNG file somewhere so we can experiment?

2019-01-23 22:55:36 -0500 commented question Thermal Calibration Pattern Detection

Have you tried the corner detectors? IE: Harris corners?

2019-01-21 22:28:40 -0500 commented question How can I get bounding box around the specific quadrants(locations) the difference is in, instead of the exact difference of contents in image?

Ok, so try this: use the absdiff function to subtract your "empty" image from the current image (the one you're trying…)
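The absdiff-then-bounding-box pipeline can be sketched with numpy (the image sizes, threshold of 50, and changed region are all made up; cv2.absdiff, cv2.threshold, and cv2.boundingRect are the OpenCV counterparts):

```python
import numpy as np

empty = np.zeros((8, 8), dtype=np.uint8)
current = empty.copy()
current[2:4, 5:7] = 200  # something appeared in this region

# cv::absdiff equivalent, then threshold to a binary change mask.
diff = np.abs(current.astype(np.int16) - empty.astype(np.int16)).astype(np.uint8)
mask = diff > 50

# Bounding box of the changed pixels (what boundingRect would return).
ys, xs = np.nonzero(mask)
x, y = xs.min(), ys.min()
w, h = xs.max() - x + 1, ys.max() - y + 1
print(x, y, w, h)  # 5 2 2 2
```

Intersecting that box with your known quadrant/box grid then tells you which locations changed.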

2019-01-20 17:35:29 -0500 commented question How can I get bounding box around the specific quadrants(locations) the difference is in, instead of the exact difference of contents in image?

Have you figured out how to tell the difference between occupied and not? For just one box, can you tell the difference…

2019-01-19 21:26:08 -0500 received badge  Nice Answer (source)
2019-01-19 14:35:41 -0500 answered a question Transform 2D Point into 3D Line

Take a look HERE. The variable los after line 167 contains what you're looking for. Note that it is a unit vector pro…

2019-01-19 14:32:48 -0500 answered a question Conversion 16bit image to 8 bit image

There are several options, depending on what you mean. All lose information in one way or another. src.convertTo(dst,…
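Two of the usual options can be sketched in numpy (the sample values are made up; convertTo and cv::normalize are the OpenCV calls these mimic):

```python
import numpy as np

img16 = np.array([[0, 256, 65535]], dtype=np.uint16)

# Option 1: keep the high byte -- what convertTo with alpha = 1/256 does.
# Fine detail in the low byte is discarded.
scaled = (img16 // 256).astype(np.uint8)
print(scaled)  # [[  0   1 255]]

# Option 2: stretch the actual min..max range to 0..255 (cv::normalize
# with NORM_MINMAX). Absolute brightness is discarded instead.
lo, hi = int(img16.min()), int(img16.max())
stretched = ((img16.astype(np.float64) - lo) * 255.0 / (hi - lo)).astype(np.uint8)
print(stretched)
```

Which one is right depends on whether you care about absolute intensity (option 1) or maximum visible contrast (option 2).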

2019-01-19 14:24:25 -0500 commented question Why 'imencode' taking so long ?

4096x4096x8bit isn't the biggest I've seen. imencode is meant for long-term storage, where file-size is much more important…

2019-01-19 14:12:03 -0500 commented question How can I get bounding box around the specific quadrants(locations) the difference is in, instead of the exact difference of contents in image?

By quadrant you mean, literal quadrant? The quarter of the image, like top left, bottom right, etc.? Or something else?

2019-01-19 14:09:29 -0500 answered a question cv::undistortPoints not working for me ....

Take a look at the documentation HERE. See the last argument, P=noArray()? It says it's the "New camera matrix (3x3)…"