Tetragramm's profile - activity

2017-02-26 18:26:05 -0600 received badge  Organizer
2017-02-26 04:21:38 -0600 received badge  Nice Answer
2017-02-25 20:15:59 -0600 commented answer Question about "imagePoints" parameter for solvePnP() function

Yes, it is hard. But that's what you need to do.

2017-02-25 19:33:35 -0600 answered a question How to chosse the last parameter of connectedComponents

The last parameter is the type of the labels output, and only CV_16U and CV_32S are accepted. CV_16U can hold at most 65,535 labels; CV_32S can hold about 2.1 billion.

CV_32S is simply a good default. It's not much slower than CV_16U, and it can hold all the components you'll ever get.
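
A minimal sketch of the call (binaryImage stands in for your 8-bit binary input):

    #include <opencv2/imgproc.hpp>

    cv::Mat labels;
    // 8-connectivity; CV_32S labels are safe for any number of components.
    int nLabels = cv::connectedComponents(binaryImage, labels, 8, CV_32S);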

2017-02-24 20:34:52 -0600 commented answer Using cv::solvePnP on Lighthouse Data

Oops. You're correct. I made a mistake in my scratch program I was testing with and had the wrong angles, so of course the tan didn't match.

Now remember that this only works for the identity camera matrix. You're not really using a camera model here.

2017-02-24 19:15:59 -0600 answered a question Rectification from given lens parameters

You can use this to generate the camera matrix, but it does not include distortion or rectification. Rectification would be for stereo cameras, which this does not appear to be.
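
If what you do have is a field of view and the image size, a minimal pinhole-model sketch (the numbers and names here are assumptions, and square pixels are assumed so fy = fx):

    #include <opencv2/core.hpp>
    #include <cmath>

    double W = 1920.0, H = 1080.0;                // image size (assumed)
    double hfov = 60.0 * CV_PI / 180.0;           // horizontal FOV in radians (assumed)
    double fx = (W / 2.0) / std::tan(hfov / 2.0); // focal length in pixels
    cv::Mat K = (cv::Mat_<double>(3, 3) << fx, 0.0, W / 2.0,
                                           0.0, fx, H / 2.0,
                                           0.0, 0.0, 1.0);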

2017-02-24 19:11:43 -0600 commented question Finding distance between two curves

Actually, I don't think distance transform is what he wants. That's just the shortest distance, not the perpendicular distance.

I can't think of a particularly fast way of doing perpendicular distances. If you can accept using just the distance to the closest point on the other line, the distance transform is great.
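
For the closest-point version, a sketch (names assumed; curveB is an 8-bit mask with the second curve drawn in white):

    #include <opencv2/imgproc.hpp>

    // distanceTransform measures distance to the nearest zero pixel,
    // so invert the mask to make curve B the zeros.
    cv::Mat notB, dist;
    cv::bitwise_not(curveB, notB);
    cv::distanceTransform(notB, dist, cv::DIST_L2, 3);
    // Sample dist along curve A: each sample is the distance from that
    // point of A to the closest point of B.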

2017-02-24 18:40:36 -0600 commented question cv2.split(img)[3] fails on RGB[A?]

Just checking, do you use imread to bring the images in? If so are you using the IMREAD_UNCHANGED flag? That is needed to preserve the alpha channel.

2017-02-24 17:48:39 -0600 commented answer Using cv::solvePnP on Lighthouse Data

Nope. You're forgetting that the lengths of the vectors are affected by the rotations. Basically, the length of your hypotenuse changes with both x and y, as does the "adjacent" you're using for tan.

Look HERE for an explanation of the model. Or google "Pinhole Camera model"

2017-02-24 17:36:50 -0600 commented answer Question about "imagePoints" parameter for solvePnP() function

You have to have a list of only the 3d points that have a match in your 2d points. So only those ORB points that have a feature match.

2017-02-23 21:45:13 -0600 commented answer Using cv::solvePnP on Lighthouse Data

It is redundant in this case, but I'm describing the general case if someone else comes along.

The x/y angles are not the tangent of the x and y. It's also not the tangents of the normalized unit LOS, though that's closer. Your identity camera matrix makes the focal length 1 pixel. So you take the vector intersecting the plane at 1 pixel distance (that's the 1 in the z component) and you get the location on the screen in pixel units.

2017-02-23 19:50:34 -0600 answered a question Using cv::solvePnP on Lighthouse Data

Ok, so take your vector (and your two angles make a 3D line of sight, which is a vector). It's a Mat(3,1,CV_64F)

Arbitrarily define your camera matrix as the identity matrix. (If you have an actual camera, you'd use that here) Mat(3,3,CV_64F)

Multiply your vector by the camera matrix. LOS = camMat*LOS;

Now divide your LOS by LOS.at<double>(2).

LOS.at<double>(0) is now your x value, and LOS.at<double>(1) is your y.

You can put these into solvePnP with the identity camera matrix and I think you'll get good results.
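
Putting it all together, a sketch (the angle-to-vector convention below is an assumption; match it to however your two angles are actually defined):

    #include <opencv2/core.hpp>
    #include <cmath>

    // Turn an azimuth/elevation pair (radians) into a point on the z = 1 plane.
    cv::Point2d anglesToImagePoint(double az, double el)
    {
        cv::Mat LOS = (cv::Mat_<double>(3, 1) << std::cos(el) * std::sin(az),
                                                 std::sin(el),
                                                 std::cos(el) * std::cos(az));
        cv::Mat camMat = cv::Mat::eye(3, 3, CV_64F); // identity camera matrix
        LOS = camMat * LOS;                          // a no-op here; shown for the general case
        double z = LOS.at<double>(2);
        return cv::Point2d(LOS.at<double>(0) / z, LOS.at<double>(1) / z);
    }

Do this for every observation, pair the results with the corresponding 3d positions, and hand both lists to solvePnP with the same identity camera matrix and zero distortion.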

2017-02-23 19:36:59 -0600 answered a question Question about "imagePoints" parameter for solvePnP() function

Image points are perfectly fine as floating point in Point2f. They can be anywhere in the frame, so if you have 1920x1080, you can give points from x = 0 through 1920 and y = 0 through 1080.

Are you making sure that your image points correspond to the 3d points? That is, point 0 in your 3d points should be point 0 in your image points and so forth. SolvePnP needs them in order, with matches for all of them.
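
For instance, a sketch of building the two aligned lists from feature matches (every variable name here is an assumption, not from your code):

    // points3d: the 3d position of each ORB keypoint in the model.
    // keypoints: the 2d detections in the frame; matches: from a descriptor matcher.
    std::vector<cv::Point3f> objectPoints;
    std::vector<cv::Point2f> imagePoints;
    for (const cv::DMatch& m : matches)
    {
        objectPoints.push_back(points3d[m.trainIdx]);    // index i in one list...
        imagePoints.push_back(keypoints[m.queryIdx].pt); // ...matches index i in the other
    }
    cv::Mat rvec, tvec;
    cv::solvePnP(objectPoints, imagePoints, camMat, distCoeffs, rvec, tvec);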

2017-02-23 18:09:37 -0600 commented question Using cv::solvePnP on Lighthouse Data

If you have the direction in degrees, you're already "past" the camera matrix, as it were. I'll say more later, sorry.

2017-02-22 17:49:47 -0600 commented question triangulatepoints returns very bad results

I think I gave you enough karma to post links. Could you also include your P1 and P2?

2017-02-22 17:45:01 -0600 commented answer VideoCapture from HTC Vive camera?

I think it can be done, but OpenVR has all the status and error checking that will tell you when it will and won't work and why.

2017-02-21 21:51:01 -0600 commented answer VideoCapture from HTC Vive camera?

I think it ought to be able to, but right now I can't get it to work no matter what method I use.

All I can say is that I have used this method before, and it worked then.

2017-02-21 21:39:49 -0600 answered a question building opencv_contrib - minimum dependencies

If you look in each module's folder, there is a CMakeLists.txt file. If you open that, there is a line that starts with ocv_define_module.

In aruco, it looks like this

ocv_define_module(aruco opencv_core opencv_imgproc opencv_calib3d WRAP python java)

The modules core, imgproc and calib3d are the dependencies.

2017-02-21 19:33:47 -0600 answered a question VideoCapture from HTC Vive camera?

OK, I can't get OpenCV's own capture to stop throwing exceptions, but OpenVR integrates nicely with OpenCV.

Take a look at the OpenVR examples HERE. The relevant one is trackedCamera_vr.

Following the sample, you request the frame header, and if it has changed you fetch the buffer. Before that, though, you create an OpenCV Mat of the right size and type.

    // The buffer size divided by the pixel count gives bytes per pixel.
    if (m_nCameraFrameBufferSize / (m_nCameraFrameHeight*m_nCameraFrameWidth) == 4)
    {
        image.create(m_nCameraFrameHeight, m_nCameraFrameWidth, CV_8UC4);  // 4 channels
    }
    else if (m_nCameraFrameBufferSize / (m_nCameraFrameHeight*m_nCameraFrameWidth) == 3)
    {
        image.create(m_nCameraFrameHeight, m_nCameraFrameWidth, CV_8UC3);  // 3 channels
    }

    // Copy the frame directly into the Mat's pixel buffer.
    nCameraError = m_pVRTrackedCamera->GetVideoStreamFrameBuffer(m_hTrackedCamera, vr::VRTrackedCameraFrameType_Undistorted, (uint8_t*)image.data, m_nCameraFrameBufferSize, &frameHeader, sizeof(frameHeader));

You'll notice that unlike the sample, this copies into image.data, which is your Mat. It has a reversed RGB color array, so do a cv::cvtColor(image, image, cv::COLOR_BGR2RGB); and you're good. The sample also has all the code for getting camera pose (Remember it's OpenGL, so it won't match OpenCV's rotation and translation) and camera matrix.

2017-02-21 19:19:19 -0600 answered a question Generate different camera view using camera poses

That is precisely the problem. If you knew the depth at each pixel you could warp it exactly (except for where the first camera can't see the scene), but from an image you don't know that.

2017-02-21 18:10:49 -0600 commented question VideoCapture from HTC Vive camera?

I'll check what I'm doing when I get home. I know I can get frames, but I might be using the OpenVR API.

2017-02-21 18:03:36 -0600 answered a question Difference between undistortPoints() and projectPoints() in OpenCV

undistortPoints is a special case of the inverse of the projectPoints function.

So, if you pass undistortPoints with no P parameter, then projectPoints with rvec=tvec=(0,0,0) and the camera matrix will return them to the original location. If you do give undistortPoints a P parameter, then using the identity matrix will return them to the original location. Note that for both of these you must convert the 2d points to 3d before passing to projectPoints.
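
A round-trip sketch of the no-P case (camMat and distCoeffs stand in for your calibration; the point value is a placeholder):

    std::vector<cv::Point2f> distorted = { cv::Point2f(320.f, 240.f) };
    std::vector<cv::Point2f> normalized;
    cv::undistortPoints(distorted, normalized, camMat, distCoeffs); // no P: normalized coordinates

    // Lift the 2d results onto the z = 1 plane to make them 3d.
    std::vector<cv::Point3f> pts3d;
    for (const cv::Point2f& p : normalized)
        pts3d.push_back(cv::Point3f(p.x, p.y, 1.f));

    std::vector<cv::Point2f> reprojected;
    cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64F), tvec = cv::Mat::zeros(3, 1, CV_64F);
    cv::projectPoints(pts3d, rvec, tvec, camMat, distCoeffs, reprojected);
    // reprojected should now match distorted, up to numerical error.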

projectPoints has several other uses. If you have true 3d points, you can place them onto an image (either distorted or undistorted). Pass in the camera rvec and tvec, as well as the camera matrix. To place them onto a distorted image, also pass the camera distortion matrix. To place them on an undistorted image, pass noArray or zeros.

NOTE: I don't think projectPoints checks direction, so if you have points both in front of and behind the camera, you will get answers for both on the image plane. I think; I'm not in a place to test right now.

The Jacobian is useful for many things, but if you don't need it, don't worry about it. If you do need it, you ought to already know what it is and what it means.

2017-02-20 21:26:39 -0600 answered a question Algorithms used for to train cascade classifier

The documentation HERE contains a description with several citations.

2017-02-20 21:25:27 -0600 commented question Findcontours intermittent memory corruption x64

Can you isolate one image that causes the problem and post it here so we can test? Also, what version, compiler, and platform?

2017-02-20 20:16:47 -0600 commented question Contour filling with drawContours() fills area inside contours

To tell what's going on, you should include the same image as just outlines. Otherwise we can't see what's wrong.

2017-02-19 22:26:09 -0600 answered a question the dtype of read image

You can simply use the NumPy conversion functions:

floatImage = np.float32(image)

2017-02-19 20:01:58 -0600 commented question Unknown type 'CvVideoCamera'

Mm, no. I definitely see a cap_ios.h in my directory, in the videoio module. You may need to use opencv2/videoio/cap_ios.h.

2017-02-19 15:50:54 -0600 commented question Unknown type 'CvVideoCamera'

You got your import for it?

#import "cap_ios.h"
2017-02-19 15:44:27 -0600 commented answer Editing/Understanging FindContours Points

They are in order: 0 connects to 1, 1 connects to 2, 2 connects to 3, ..., n-1 connects to n, and n connects back to 0.

2017-02-19 15:24:21 -0600 answered a question Distance of the object from multiple single camera views

Absolutely it's possible. Not necessarily easy, but possible. I have part of an OpenCV module that does just this HERE. Before it's done it should do more, but for now it just calculates position from several observations.

I think it's fairly well commented, but if you have any questions, just ask.

Since I forgot to put it in the README, the paper this algorithm is based on is "Selective angle measurements for a 3D-AOA instrumental variable TMA algorithm". It's not the only algorithm that does this, but it's quite good, and very fast.

2017-02-19 15:18:02 -0600 commented question Converting image into an array of 2D coordinate points and colors

So, there's a couple of steps here.

First, there is the question of what colors to use. Do you want to posterize the image to a small set of colors and create those, or do you want a pre-defined set of colors and convert the image to just those colors?

Secondly, there is order. From what you say, the machine might benefit from having the colors in order, so it does all the blue, then all the red, then all the green and so forth. That would save having to clean the brush every time, maybe.

Third, what resolution images and what resolution is the machine? Do you need to resize the images, to fit, or are they already the appropriate size?

Last, do you know enough C++ that if you knew the color and coordinate you could print it, or are you a total beginner?

2017-02-16 17:49:23 -0600 answered a question Old cvFitEllipse

Use fitEllipse, in the imgproc module.

2017-02-14 19:34:49 -0600 commented answer how to return array of minimum values between two arrays

Yep. That ought to have been fixed by the #define NOMINMAX. Not sure why it wasn't.

2017-02-14 17:24:43 -0600 commented question Error in Using TrackerKCF in Ubuntu /Linux

Yeah, I'm not having any problems either. Update your source and re-compile, is my suggestion.

2017-02-14 17:14:35 -0600 commented answer how to return array of minimum values between two arrays

It's a macro with the name min. No idea where it's being defined, but that's the cause.

Putting the () around the function name changes nothing semantically, but it stops macro expansion. Since (cv::min) can't match a function-like macro, it parses correctly as a function call.

2017-02-14 17:12:40 -0600 commented question problem with image rotate

What did the original image look like?

2017-02-13 22:27:32 -0600 commented answer how to return array of minimum values between two arrays

Hmm. At the top of your file, try putting #define NOMINMAX before all of the includes. If that doesn't work, put parentheses around the function name, e.g.:

(cv::min)(norm, monnormval)

2017-02-13 19:36:01 -0600 commented question StereoRectify of non-parallel cameras?

First, can you do just the undistort? Or at least verify that that's correct?

Secondly, you have really small overlap. Do you really need rectification, or just the rotation and translation of the cameras relative to each other?

2017-02-13 18:15:22 -0600 answered a question how to return array of minimum values between two arrays

Many C++ files either include or define a min or max macro or function whose definition collides with the OpenCV one.

Try specifying cv::min(), even if you have "using namespace cv;" specified.

You can also pass the arguments in a different way that makes collisions less likely.

Normally you would do

result = cv::min(a, b);

but you can also do

cv::min(a, b, result);

You'll note that the three-argument form does not collide with any macro that takes (x, y).

2017-02-12 17:15:08 -0600 commented question Platform independent way to determine the min and max values of CV_ types

Does it stay the same if the platform's int is not 32 bits? I know DataType<> assumes the standard char = 8, short = 16, int = 32 sizes. That's fairly new and not used everywhere, though.

I don't have access to a non-32-bit-int platform to test.

2017-02-12 16:31:42 -0600 answered a question How to detect gun on gray scale image with opencv c++?

Take a look at the Object Detection and Machine Learning tutorials. Lots of useful information there. When you have a specific question, come back and we'll be happy to answer that.

2017-02-12 16:29:25 -0600 answered a question aruCo module, world space coordinates?

If the aruco board is stationary, and the createBoard function has the correct sizes passed in, then yes.

For example, using a Charuco board, the create function asks for the square length and marker length in the units you want to measure in. These must match the board as printed. So if your board has squares 5 cm across, you would put in 0.05 as the square length to work in meters.
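
A sketch of the create call (the board dimensions here are made up), giving a board measured in meters:

    #include <opencv2/aruco/charuco.hpp>

    cv::Ptr<cv::aruco::Dictionary> dict =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
    // 5x7 squares, 5 cm squares, 4 cm markers: estimated poses come back in meters.
    cv::Ptr<cv::aruco::CharucoBoard> board =
        cv::aruco::CharucoBoard::create(5, 7, 0.05f, 0.04f, dict);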

2017-02-12 16:26:00 -0600 commented question Error in Using TrackerKCF in Ubuntu /Linux

Could you include the line of code + one or two lines above and below that's causing the error message?

2017-02-12 00:35:53 -0600 answered a question I GoTurn tracker supported in opencv 3.1 python api?

GOTURN does not exist in 3.1, so no, it is not supported in the 3.1 python api.

2017-02-11 16:30:19 -0600 answered a question How to determine the average BGR values of all the pixels in a closed contour?

I think the easiest is to create a mask for each contour, and then use the mean function to get the average color inside it.

If you create an empty one channel image the same size as your input, you can use drawContours to fill the area of the contour with 255. That is the mask to use with the mean function. Do this for each candidate, and there you go.
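
A sketch, assuming img is the BGR input and contours came from findContours:

    for (size_t i = 0; i < contours.size(); ++i)
    {
        // Fill this contour's area with 255 in an otherwise black mask.
        cv::Mat mask = cv::Mat::zeros(img.size(), CV_8UC1);
        cv::drawContours(mask, contours, (int)i, cv::Scalar(255), cv::FILLED);
        cv::Scalar meanBGR = cv::mean(img, mask); // average B, G, R inside this contour
    }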

2017-02-11 15:44:10 -0600 commented question It is possible to know how much of a given color is Required to If you come to another color with OpenCV?

That looks like 4 unknowns and only three measurements (RGB). You're going to need more information at best.

What's more, you need to make sure that the way they mix is linear. I.e., that adding 0.5 of yellow always makes the same change in color whether the amount already there is 0.5 or 2.

2017-02-10 23:33:54 -0600 answered a question Issue with Template Matching

matchTemplate is ok, but it's not ideal for this kind of work. Take a look at the OpenCV Tracker module, which has many algorithms specifically made for this.

2017-02-10 17:12:22 -0600 commented answer Image Alignment

That's slightly easier, but still not a guarantee. Honestly, multi-scale optical flow might well be better than key points for this.

2017-02-10 00:06:52 -0600 answered a question Image Alignment

IR Visible image registration is an active area of research and there is no best way to do it.

Take a look at current papers and see if anything looks like it will work for you.

2017-02-08 17:34:09 -0600 commented answer Aruco: Z-Axis flipping perspective

But Aruco /does/ know which is the top left corner, so it /does/ tell solvePnP, and what you're seeing isn't a 90 degree rotation or any multiple of that.

It looks far more like bad detections of the marker's corners than ambiguity in solvePnP.