Tetragramm's profile - activity

2017-05-25 21:59:54 -0500 commented question How to Store a Mat data of Size (1920*1080) and Use it in another program?

Ok, so storing data to disk and reading it back in another program does not go together with 60 FPS.

You need to set up shared memory and some events, or... I don't know how else, but that's the way I would do it.

Create the shared memory and an event in both processes. In program A map the shared memory into a Mat and do your progA processing. When you're done with A, wait for the event to be unset, then image.copyTo(shared), then set the event. In Program B, wait for the event to be set, copy the image out of the shared buffer, then unset the event.
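Roughly, the producer side might look like the sketch below. This is a Windows-flavoured sketch only, assuming a 1920x1080 CV_8UC3 frame, <windows.h>, and the usual OpenCV headers; the mapping name "Local\\FrameBuffer" and event name "FrameReady" are placeholders, and error handling is omitted. Program B would do the mirror image: wait for the event, copy the data out of the shared Mat, then ResetEvent.

const int rows = 1080, cols = 1920;
const size_t bytes = (size_t)rows * cols * 3;

HANDLE hMap = CreateFileMappingA( INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                  0, (DWORD)bytes, "Local\\FrameBuffer" );
void* ptr = MapViewOfFile( hMap, FILE_MAP_ALL_ACCESS, 0, 0, bytes );
HANDLE hReady = CreateEventA( NULL, TRUE, FALSE, "FrameReady" );   // manual-reset, starts unset

Mat shared( rows, cols, CV_8UC3, ptr );    // Mat header over the shared block, no copy

// ... each frame, after progA processing produces 'image' ...
while ( WaitForSingleObject( hReady, 0 ) == WAIT_OBJECT_0 )
    Sleep( 1 );                            // wait until B has consumed the previous frame
image.copyTo( shared );                    // write the pixels into the shared memory
SetEvent( hReady );                        // tell B a new frame is ready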

2017-05-25 21:55:32 -0500 commented answer How to triangulate Points from a Single Camera multiple Images?

Ah, yeah. Fixing the float/double thing is already on my list. Sorry it caused you problems.

2017-05-24 20:19:14 -0500 commented question How to Store a Mat data of Size (1920*1080) and Use it in another program?

Do you mean another process entirely, or just another part of the same process? Either way, you can use the Mat constructor to wrap a section of memory and then copy into or out of it.

Take a look at constructor 11 HERE.
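For example, a tiny sketch, assuming 'buffer' points to 1920*1080*3 bytes that you manage yourself:

Mat frame( 1080, 1920, CV_8UC3, buffer );   // wraps the buffer, no data is copied
otherImage.copyTo( frame );                 // copies into 'buffer' when sizes/types match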

2017-05-24 20:12:27 -0500 commented answer How to triangulate Points from a Single Camera multiple Images?

Also, I just pushed the sample I've been working on. I've only tried it on Visual Studio, but I don't think there's any Windows-specific stuff.

2017-05-24 20:05:17 -0500 commented answer How to triangulate Points from a Single Camera multiple Images?

Hmm, another thing to check in the method. The inputs I've been using are 1 column, 3 rows, which are [X;Y;Z] in the same coordinate system calibrateCamera would output.

2017-05-24 17:15:09 -0500 commented answer How to triangulate Points from a Single Camera multiple Images?

It is expecting the same format as calibrateCamera outputs, which is not the 3x3 size. Try using Rodrigues to convert the 3x3 to the proper shape.

I'll add a check in my code that will use Rodrigues or not as appropriate, so it doesn't matter. I'm not sure if the Python/C++ interface is doing anything funny though. I don't see how it would end up as 3x33 in the C++ code.
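If your rotation is currently a 3x3 matrix, the conversion is just this (sketch; R33 is whatever name your 3x3 rotation has):

Mat rvec;
Rodrigues( R33, rvec );   // rvec is now 3x1, the format calibrateCamera/solvePnP produce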

2017-05-21 22:57:56 -0500 commented question OpenCV Error: Assertion failed(dims <= 2.....

Why don't you use the OpenCV functions for undistorting the image? It's virtually certain that you're going out of bounds with that.

2017-05-21 22:09:47 -0500 answered a question How to tranform 2D image coordinates to 3D world coordinated with Z = 0?

solvePnP should be fine, though you may want to set the flags to use the method that specifically works with 4-point sets. Or use more points.

I think you're doing the projection wrong though. Take a look HERE. That method shows how to get the line of sight from a point in an image plus the camera intrinsics and extrinsics. You'll need to write the last section yourself: scale the LOS so its z component equals the negative of the camera translation's z value, then add that to the translation to get your result.
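A rough sketch of that last step, assuming rvec/tvec come from solvePnP, K is the camera matrix (both CV_64F), and (u, v) is the image point:

Mat R;
Rodrigues( rvec, R );                              // 3x3 rotation
Mat camPos = -R.t() * tvec;                        // camera center in world coordinates

Mat pixel = (Mat_<double>( 3, 1 ) << u, v, 1.0);
Mat losWorld = R.t() * K.inv() * pixel;            // line-of-sight direction in world coordinates

// Scale the LOS so it drops from the camera height down to Z = 0, then add it to the camera position.
double s = -camPos.at<double>( 2 ) / losWorld.at<double>( 2 );
Mat worldPoint = camPos + s * losWorld;            // X and Y of the object; Z is 0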

2017-05-21 09:06:05 -0500 commented answer How to write this interpolate function using OpenCV?

Yeah, I'm not sure which one is the approximated one, but the interpolation works differently between them. You're not going to get them exactly the same without re-writing the method you have.

Sorry, I tried them all, and there are still differences. Small differences, but they are there.

2017-05-21 08:53:22 -0500 commented answer How to write this interpolate function using OpenCV?

You need it to match exactly? Ooh, that's hard. I'll take a look though.

As for the boolean: you have a range of values (0-255) that are valid image values. You need true when the entire patch was within the image, and false if any part of it was outside the image. warpAffine has a flag that determines how it handles the border. The default is BORDER_CONSTANT, which takes the last parameter and uses that as the value for any portion outside the image. If you set that value to something far outside the valid range (say -999), then you only see -999 when you need a false. The checkRange function is just an existing way to see if there are any values outside of a range.

2017-05-20 17:25:34 -0500 answered a question How to write this interpolate function using OpenCV?

Ok, when I ran it the results didn't match what you uploaded, but I did figure out how to make warpAffine behave just like it.

// Build the 2x3 affine matrix from the function's coefficients.
Mat affine( 2, 3, CV_64F );
affine.at<double>( 0, 0 ) = a11;
affine.at<double>( 0, 1 ) = a12;
affine.at<double>( 0, 2 ) = ofsx;
affine.at<double>( 1, 0 ) = a21;
affine.at<double>( 1, 1 ) = a22;
affine.at<double>( 1, 2 ) = ofsy;

// warpAffine expects the destination-from-source mapping, so invert it,
// then shift the translation terms to center the patch in the output.
invertAffineTransform( affine, affine );
affine.at<double>( 0, 2 ) += output.cols / 2.0;
affine.at<double>( 1, 2 ) += output.rows / 2.0;

warpAffine( image, output, affine, Size( output.cols, output.rows ) );

warpAffine, unlike the function there, is capable of extrapolating past the edge of the image, so you may not need the return value. If you do, just use BORDER_CONSTANT with a value outside your image range (say -999), then use the checkRange function with your image range to get the boolean value.
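A hedged example of that boolean check, assuming your patch is a float image so the -999 sentinel can actually be stored:

warpAffine( image, output, affine, Size( output.cols, output.rows ),
            INTER_LINEAR, BORDER_CONSTANT, Scalar( -999 ) );

bool allInside = checkRange( output, true, 0, 0, 256 );   // true only if every value stayed in [0, 256)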

2017-05-17 22:24:14 -0500 answered a question Unable to use edgePreservingFilter and stylization

There are a couple of edge-preserving filters in OpenCV. If you're stuck using 2.4, there are the bilateralFilter and adaptiveBilateralFilter functions in imgproc.

If you can use the 3.x contrib modules, you also get all of THESE, which are significantly better and faster.
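For the 2.4 route, a minimal call looks like this (the diameter and the two sigmas are just starting values to tune):

Mat smoothed;
bilateralFilter( image, smoothed, 9, 75, 75 );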

2017-05-16 16:37:53 -0500 answered a question How to know if a camera is stationary or moving?

There's no way to answer definitively without a system that completely understands the image and everything in it, but we can come close.

Basically, if the camera is moving, everything you see in the image is undergoing the same apparent motion plus whatever real motion those objects are taking. So your basic effort is to match points between frames, then calculate the camera motion that would cause them to align. That would be the findEssentialMat function. Then, if the motion is large, the camera is moving.
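As a sketch only (the point-matching step and the thresholds are up to you; prevPts/currPts would be matched points between two frames, K the camera matrix from calibrateCamera, and error handling is omitted):

Mat inlierMask;
Mat E = findEssentialMat( prevPts, currPts, K, RANSAC, 0.999, 1.0, inlierMask );

Mat R, t, rvec;
recoverPose( E, prevPts, currPts, K, R, t, inlierMask );
Rodrigues( R, rvec );
double rotation = norm( rvec );                        // rotation between the frames, in radians

// The translation from recoverPose is only known up to scale, so use the average
// flow of the RANSAC inliers as the translation-style motion measure instead.
double meanFlow = 0;
int count = 0;
for ( size_t i = 0; i < prevPts.size(); i++ )
    if ( inlierMask.at<uchar>( (int)i ) )
    {
        Point2f d = currPts[i] - prevPts[i];
        meanFlow += sqrt( d.x * d.x + d.y * d.y );
        count++;
    }
if ( count > 0 )
    meanFlow /= count;

bool moving = ( rotation > 0.01 ) || ( meanFlow > 2.0 );   // placeholder thresholds, tune for your setup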

Obviously, many objects moving in a consistent manner will throw off the calculation, but depending on the application you should be able to filter that out or otherwise deal with it. Many objects moving in different ways should not cause a problem, as the RANSAC or LMedS methods would throw them out of consideration.

And of course, you can use time in your favor as well. If the motion changes with every frame, it might not be real, but if it's consistent, then it is real. Kalman filters are your friend for this.

It's not an easy problem, so good luck.

2017-05-15 19:36:25 -0500 answered a question Re-distorting a set of points after camera calibration

I'm assuming your x and y to distort are in the range 0->image dimensions, not the normalized version.

There are three steps. First, normalize the points to be independent of the camera matrix using undistortPoints with no distortion matrix. Second, convert them to 3D points using convertPointsToHomogeneous. Third, project them back to image space using projectPoints with the distortion coefficients.

vector<Point2d> ptsOut;   // starts out holding the distorted pixel coordinates
vector<Point3d> ptsTemp;
Mat rtemp, ttemp;
rtemp.create( 3, 1, CV_32F );
rtemp.setTo( 0 );
rtemp.copyTo( ttemp );    // zero rotation and zero translation
// Step 1: remove the camera matrix (note: no distortion coefficients passed).
undistortPoints( ptsOut, ptsOut, dist::cameraMatrix, noArray() );
// Step 2: lift the normalized points to 3D.
convertPointsToHomogeneous( ptsOut, ptsTemp );
// Step 3: project back through the camera matrix and distortion coefficients.
projectPoints( ptsTemp, rtemp, ttemp, dist::cameraMatrix, dist::distortion, ptsOut );

I'm seeing a bit of a problem right at the very corner that I'm not sure about. It's probably where the distortion goes iffy anyway.

2017-05-15 19:09:20 -0500 answered a question Visual Studio CLR static and dinamic libs

You can use the dynamic libraries so long as the .dll files are on the DLL search path of the .exe.

HERE is a page explaining how the search works. So putting the .dll files in the same folder as your .exe will work all the time. Putting them in a different folder will work if you either add it to the path, or the application calls the SetDllDirectory function mentioned in that article.
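For the second route, the call is just this (Windows-only sketch; the folder is a placeholder for wherever your OpenCV .dll files live, and it only affects DLLs loaded after the call, e.g. via LoadLibrary or delay-loading):

SetDllDirectoryA( "C:\\opencv\\build\\x64\\vc14\\bin" );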

Hope this helps.

2017-05-11 17:53:40 -0500 answered a question Lens calibration by moving camera?

It is possible, but not built into OpenCV. You don't need to use the checkerboard pattern though. Any points you can measure on the surface of the table will work, and you can track those with optical flow.
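A rough sketch of the tracking part (prevGray/currGray are consecutive grayscale frames; the calibration math on top of these correspondences is the part you would have to write yourself):

vector<Point2f> prevPts, currPts;
vector<uchar> status;
vector<float> err;
goodFeaturesToTrack( prevGray, prevPts, 200, 0.01, 10 );
calcOpticalFlowPyrLK( prevGray, currGray, prevPts, currPts, status, err );
// Keep only the points with status[i] == 1 and accumulate them across frames.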

2017-05-11 17:35:35 -0500 commented answer How to triangulate Points from a Single Camera multiple Images?

Well, I think you do cv2.mapping3d.functionName.

I followed the instructions for making Python bindings, but since I don't use Python... Take a look at the examples to find how you call the various parameter types.

If there's anything weird, could you post it here? I'm actually writing better documentation and examples now, so I could just include it.

2017-05-10 17:07:18 -0500 commented question How to visualize 3d Points data stream of object coordinates in Real time python

To do 3d things with OpenCV you'll need the VIZ module. Unfortunately, I can't help with doing it in real-time.

2017-05-10 16:38:15 -0500 commented answer How to compute a stitching model and reuse it on further image pairs?

Windows, though I wouldn't think it would make a difference.

2017-05-08 21:32:29 -0500 answered a question How to compute a stitching model and reuse it on further image pairs?

So, your code works as is. All I did was copy/paste. Release and Debug, GPU and no GPU. I would run an update and re-compile. If you didn't compile it yourself, then it should still work, but try compiling the program yourself or checking for install issues.

Are you using the latest version of OpenCV?

Edit:

Result

2017-05-08 17:41:14 -0500 answered a question Understanding Single object tracking

It's most likely because Mario leaves the frame entirely.

Try using template matching, and just keep track of when he goes off screen and ignore results not at the top of the screen. Or use a _NORMED method and check the value of the maximum.
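Something along these lines (the template Mat and the 0.7 acceptance threshold are placeholders to tune for your footage):

Mat result;
matchTemplate( frame, marioTemplate, result, TM_CCOEFF_NORMED );

double maxVal;
Point maxLoc;
minMaxLoc( result, 0, &maxVal, 0, &maxLoc );

if ( maxVal > 0.7 )
{
    // good match: maxLoc is the top-left corner of the detection
}
else
{
    // low peak value: he's probably off screen this frame
}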

2017-05-08 17:34:17 -0500 answered a question Issue with detecting combinations of squares

So, I shall assume you can isolate just the blocks and their contours. Before you do minAreaRect, you should analyze the contours. Since you have just the edges of the boxes, you can estimate the rotation needed to make them vertical and horizontal lines.

Then rotate the image to make the edges vertical and horizontal. Your bounding box is then a tight, non-rotated box around the contours, which should be simple.
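A hedged sketch of that idea, assuming 'angle' comes from your own analysis of the contour edges and 'contour' holds the block's points as Point2f:

Point2f center( image.cols / 2.0f, image.rows / 2.0f );
Mat rot = getRotationMatrix2D( center, angle, 1.0 );

Mat rotated;
warpAffine( image, rotated, rot, image.size() );

// Rotate the contour points the same way, then a plain boundingRect is already tight.
vector<Point2f> rotatedPts;
transform( contour, rotatedPts, rot );
vector<Point> rotatedInt;
for ( size_t i = 0; i < rotatedPts.size(); i++ )
    rotatedInt.push_back( Point( cvRound( rotatedPts[i].x ), cvRound( rotatedPts[i].y ) ) );
Rect box = boundingRect( rotatedInt );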

2017-05-06 18:48:12 -0500 commented question How to compute a stitching model and reuse it on further image pairs?

Can you post the original images so I can experiment?

2017-05-04 20:53:28 -0500 commented answer Is there any OpenCV or IPP equivalent for this function?

Hmm, on further examination, that's not what it's doing. Unfortunately, I can't tell what it is doing. Is this just a version of the warpAffine function?

I assume you have this in a state that runs? Can you set a breakpoint, save the input parameters and the input and output images?

2017-05-04 19:09:50 -0500 commented question Aruco module, estimatePoseSingleMarkers looks great, estimatePoseBoard does not.

Are you sure you're using the correct board and that your settings are the same as when you created it to print?

2017-05-04 19:03:20 -0500 commented question How to compute a stitching model and reuse it on further image pairs?

Are these two different cameras with a fixed relationship?

And how do estimateTransform and composePanorama fail? Is it an error message, or does it just not look correct?

2017-05-03 22:02:24 -0500 answered a question Is there any OpenCV or IPP equivalent for this function?

This looks like it's shrinking the image by a factor of 2, and a sub-pixel offset.

Just create an affine matrix using getRotationMatrix2D and use warpAffine to do the actual warping. The results of the interpolation won't be exactly the same, but they should be essentially the same.
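Something like this hedged sketch; ofsx/ofsy stand in for whatever sub-pixel offset the original routine used:

Point2f center( src.cols / 2.0f, src.rows / 2.0f );
Mat affine = getRotationMatrix2D( center, 0.0, 0.5 );   // no rotation, scale by 0.5

// Fold the sub-pixel offset into the translation terms of the 2x3 matrix.
affine.at<double>( 0, 2 ) += ofsx;
affine.at<double>( 1, 2 ) += ofsy;

Mat dst;
warpAffine( src, dst, affine, Size( src.cols / 2, src.rows / 2 ) );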

2017-05-02 18:46:41 -0500 answered a question Can I use OpenCV to detect weeds in a paddock?

Simple color detection can be done on just about any hardware you can buy. I don't have any low-end hardware around to give you a quick benchmark, but the algorithms are not complicated at all. The trick is tuning your filters (color, size, and shape) so that you get few enough false positives that you're not just spraying everything, and few enough false negatives that you don't miss weeds.
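As a very rough sketch of that kind of filter chain (the HSV bounds and the minimum blob area are placeholders you would tune on real footage):

Mat hsv, mask;
cvtColor( frame, hsv, COLOR_BGR2HSV );
inRange( hsv, Scalar( 35, 60, 60 ), Scalar( 85, 255, 255 ), mask );   // rough "green" band

// Clean up speckle, then keep only blobs big enough to be a weed.
morphologyEx( mask, mask, MORPH_OPEN, Mat() );
vector<vector<Point> > contours;
findContours( mask, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE );
for ( size_t i = 0; i < contours.size(); i++ )
    if ( contourArea( contours[i] ) > 200 )
    {
        // candidate weed region
    }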

I think it's certainly feasible, although whether your method works depends on what the scene really looks like. I assume you can add a light so you don't have to worry about sunset or night changing the color of things.

It sounds like an interesting project.

2017-05-02 18:37:53 -0500 commented question Is it possible to measure the exact distance of a flat object (eg. metal sheets) using Aruco Markers?

More details needed. Is the camera on the robot looking at markers on the sheet? Are the markers on the robot with a camera looking at it and the sheet? Is the camera always in the same place relative to the sheet?

2017-05-02 18:28:27 -0500 answered a question Parameters in Hough Transform

The HoughLines function also has the optional parameters min_theta = 0 and max_theta = CV_PI, which are the starting and ending theta.

The pixelCount parameter from the paper is the same as threshold.

And connectDistance sounds more like something for HoughLinesP, not HoughLines. That would be the maxLineGap parameter.

Make sure you're using the correct function; the two are very similar.
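For reference, here's where those parameters land in the probabilistic variant (the numeric values are placeholders):

vector<Vec4i> lines;
HoughLinesP( edges, lines,
             1,             // rho resolution in pixels
             CV_PI / 180,   // theta resolution in radians
             80,            // threshold -- the accumulator vote count ("pixelCount")
             30,            // minLineLength
             10 );          // maxLineGap -- the "connectDistance"-style gap joining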

2017-05-02 18:22:58 -0500 commented answer camera calibration - tracking object distance

I strongly recommend finding the distance and using that. Even travelling up and down, the distance from the camera will change. Especially if you need any precision, you will see errors.

As for how to do it in Python, no idea. I work in C++, and you haven't posted nearly enough code to figure out how you're doing things.

Lastly, you don't need to tag me. You get notifications for any response to your post, your answer or something you've commented on.

2017-05-01 17:46:24 -0500 commented answer camera calibration - tracking object distance

It is better to use the size of the ball to find how far away the ball is, then use that to get the speed in real life. If the distance to the ball is changing, the 10px = 1cm conversion would change too, and everything would be wrong.

As to finding the size of the ball, there are many ways of doing that, so I can't tell you the "right" way. Take a look around and you'll find several, even in the OpenCV examples.

2017-04-29 11:17:50 -0500 answered a question camera calibration - tracking object distance

They were written as 2803 because the number was 2.80360356e+03, which is the same number, just with the decimals rounded off. Note the scientific notation (e+03).

Your number is just 294.

To calculate the distance to the ball, you must know the size of the ball. To calculate the size of the ball, you must know the distance to the ball. So measure one in the real world, and then you can know the other.
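The relationship itself is just the pinhole model. As a hedged example with made-up numbers (294 is the focal length in pixels from above; the ball diameter is whatever you measure once in the real world):

double fx = 294.0;                // focal length in pixels, from the camera matrix
double ballDiameterM = 0.065;     // measured once in the real world (placeholder)
double ballDiameterPx = 40.0;     // measured in the current frame (placeholder)

// distance = focal_length_in_pixels * real_size / size_in_pixels
double distanceM = fx * ballDiameterM / ballDiameterPx;   // about 0.48 m here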

2017-04-28 19:45:07 -0500 answered a question 3d data recognition

Try OctNet. It's probably not suitable for your situation due to the way it quantizes data, but at least read the paper and check other citations.

https://github.com/griegler/octnet

2017-04-25 20:31:16 -0500 commented answer How to triangulate Points from a Single Camera multiple Images?

I think it has a Python interface already. At least, I declared everything with CV_WRAP and put the Python marker in the CMake file.

Sorry, I'm trying to get a machine learning run going, then I can work on this while it's running.

2017-04-24 23:02:15 -0500 received badge Citizen Patrol

2017-04-23 08:27:43 -0500 commented answer Replace a chain of image blurs with one blur

Correct. You are showing the image as a float, which shows 0-1, not 0-255. So if you do the division, then you will see things correctly both on screen and after you save.

KeyPoint detectors are very sensitive to the specifics of the image. Try adding a very small random noise to your original image and running it again. You'll likely see similar differences in the number of keypoints. What that means is, the number of keypoints is not a good measure of difference.

// Add small Gaussian noise (mean 0, std. dev. 1) to the image before re-running the detector.
Mat noise( img.rows, img.cols, img.type() );
randn( noise, 0, 1 );
add( img, noise, img );

2017-04-22 13:09:11 -0500 commented answer Replace a chain of image blurs with one blur

There are very small differences between the images, but they are fundamentally the same. Remember that the gaussian blur function performs an approximation of a true gaussian blur. If you could have a "true" blur, then there would be absolutely no differences between them, but you can't. As it is, the differences are very small, and not worth worrying about.

2017-04-20 20:34:31 -0500 commented question Recommended Detector for this kind of image?

Like I said, I would look at how ARUCO does it. It's just a weirdly shaped marker; you should be able to do the same thing to find it.

2017-04-20 20:33:25 -0500 commented question Distance from camera to object.. The error increasing linearly!

pixel_size is constant, focal length is constant, 2 is constant, only d changes, so it is linear.

What that equation means is basically what I said. As you move further away, each pixel covers more area on the object. So if you double the distance, what was 4 pixels is now one pixel.

2017-04-20 20:30:39 -0500 commented answer Replace a chain of image blurs with one blur

What I mean is that there is no actual problem. You are displaying the images in a way that exaggerates the error by a factor of 255. If you simply look at the actual images for the two methods, you will see they appear identical.

2017-04-18 17:44:09 -0500 commented question idea about difference bet moving red obj & fire

Do you have many moving red objects in your training data? You should make sure your training data has examples of all the types of things you want to be able to separate.

2017-04-18 17:42:32 -0500 commented question Distance from camera to object.. The error increasing linearly!

Yep. Basically, your error in finding the chessboard corners goes up with the distance, because a 0.1 (or whatever) pixel error is now a larger distance. So a higher uncertainty in your world points means a higher uncertainty in your camera location.

2017-04-17 22:47:53 -0500 commented question Recommended Detector for this kind of image?

Well, I'm not sure how it works, but I would look at how the ARUCO module does its detections. I think it's adaptive thresholding and Harris corners. You could probably define these as your own little dictionary and have it do all the work.

2017-04-17 19:53:32 -0500 commented question Recommended Detector for this kind of image?

Do you not know the location of the ARUCO markers? Or are they freely moving within the area like an AR system?

2017-04-17 18:25:40 -0500 commented question Recommended Detector for this kind of image?

Why do you need to detect the delimiters? Why not just detect the aruco markers and leave it at that? The functions for that already exist.

2017-04-16 15:57:41 -0500 commented answer Replace a chain of image blurs with one blur

Right. But that's because the float version of imshow uses a range of 0-1, but your input images are still 0-255. So that's still the problem.

Your images are range 0-255. Then your blurs are in the range 0-255. Then your absdiff is in range 0-255, then you multiply by 255 to show. So that's your problem.
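In other words, something like this hedged sketch, assuming blurA and blurB are CV_32F images still holding 0-255 values:

Mat diff;
absdiff( blurA, blurB, diff );
imshow( "difference", diff / 255.0 );   // imshow maps 0-1 for float images, so rescale first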

2017-04-15 13:29:09 -0500 commented answer Replace a chain of image blurs with one blur

Note again: you are multiplying the differences by 255 when you convert to CV_8U. In none of the code I see here, or at the link, do you divide by 255. This means you need to take those images and divide by 255 to see the true difference.

I'm not sure that's the problem, but based on what you've posted, it looks like it. You need to double check that the magnitude of the input is the same as the output.