
Tetragramm's profile - activity

2017-04-23 08:27:43 -0500 commented answer Replace a chain of image blurs with one blur

Correct. You are showing the image as a float, which shows 0-1, not 0-255. So if you do the division, then you will see things correctly both on screen and after you save.
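
A minimal sketch of what I mean, assuming an 8-bit input (the variable names and file name are just placeholders):

Mat img = imread( "input.png", IMREAD_GRAYSCALE );
Mat imgF;
img.convertTo( imgF, CV_32F, 1.0 / 255.0 );  // divide by 255 so the float values land in [0,1]
imshow( "float view", imgF );                // now displays the same as the 8-bit original
waitKey( 0 );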

KeyPoint detectors are very sensitive to the specifics of the image. Try adding a very small random noise to your original image and running it again. You'll likely see similar differences in the number of keypoints. What that means is, the number of keypoints is not a good measure of difference.

Mat rand( img.rows, img.cols, img.type() );  // noise image, same size and type as the input
randn( rand, 0, 1 );                          // fill with gaussian noise, mean 0, stddev 1
add( img, rand, img );                        // add the noise to the original in place
2017-04-22 13:09:11 -0500 commented answer Replace a chain of image blurs with one blur

There are very small differences between the images, but they are fundamentally the same. Remember that the Gaussian blur function performs an approximation of a true Gaussian blur. If you could have a "true" blur, then there would be absolutely no differences between them, but you can't. As it is, the differences are very small, and not worth worrying about.

2017-04-20 20:34:31 -0500 commented question Recommended Detector for this kind of image?

Like I said, I would look at how ARUCO does it. It's just a weirdly shaped marker; you should be able to do the same thing to find it.

2017-04-20 20:33:25 -0500 commented question Distance from camera to object.. The error increasing linearly!

pixel_size is constant, focal length is constant, 2 is constant, only d changes, so it is linear.

What that equation means is basically what I said. As you move further away, each pixel covers more area on the object. So if you double the distance, what was 4 pixels is now one pixel.
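
For a rough worked example with made-up numbers: with a 5 micron pixel pitch and a 5 mm focal length, one pixel covers d * pixel_size / focal_length = 1 mm of the object at d = 1 m, 2 mm at d = 2 m, and so on. So a fixed sub-pixel detection error turns into a range error that grows linearly with d.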

2017-04-20 20:30:39 -0500 commented answer Replace a chain of image blurs with one blur

What I mean is that there is no actual problem. You are displaying the images in a way that exaggerates the error by a factor of 255. If you simply look at the actual images for the two methods, you will see they appear identical.

2017-04-18 17:44:09 -0500 commented question idea about difference bet moving red obj & fire

Do you have many moving red objects in your training data? You should make sure your training data has examples of all the types of things you want to be able to separate.

2017-04-18 17:42:32 -0500 commented question Distance from camera to object.. The error increasing linearly!

Yep. Basically, your error in finding the chessboard corners goes up with the distance, because a 0.1 (or whatever) pixel error is now a larger distance. So a higher uncertainty in your world points means a higher uncertainty in your camera location.

2017-04-17 22:47:53 -0500 commented question Recommended Detector for this kind of image?

Well, I'm not sure how it works, but I would look at how the ARUCO module does its detections. I think it's adaptive thresholding and Harris corners. You could probably define these as your own little dictionary and have it do all the work.
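
To be clear, this is not the aruco module's actual code, just a quick sketch of the same general idea (the block size, constant, and area cutoff are made-up numbers):

Mat gray, bin;
cvtColor( img, gray, COLOR_BGR2GRAY );
adaptiveThreshold( gray, bin, 255, ADAPTIVE_THRESH_MEAN_C, THRESH_BINARY_INV, 23, 7 );
vector<vector<Point> > contours;
findContours( bin, contours, RETR_LIST, CHAIN_APPROX_SIMPLE );
for ( size_t i = 0; i < contours.size(); i++ )
{
    vector<Point> quad;
    approxPolyDP( contours[i], quad, 0.03 * arcLength( contours[i], true ), true );
    if ( quad.size() == 4 && isContourConvex( quad ) && contourArea( quad ) > 100 )
    {
        // quad is a candidate marker outline; refine its corners and check the interior pattern
    }
}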

2017-04-17 19:53:32 -0500 commented question Recommended Detector for this kind of image?

Do you not know the location of the ARUCO markers? Or are they freely moving within the area like an AR system?

2017-04-17 18:25:40 -0500 commented question Recommended Detector for this kind of image?

Why do you need to detect the delimiters? Why not just detect the aruco markers and leave it at that? The functions for that already exist.

2017-04-16 15:57:41 -0500 commented answer Replace a chain of image blurs with one blur

Right. But that's because the float version of imshow uses a range of 0-1, but your input images are still 0-255. So that's still the problem.

Your images are range 0-255. Then your blurs are in the range 0-255. Then your absdiff is in range 0-255, then you multiply by 255 to show. So that's your problem.

2017-04-15 13:29:09 -0500 commented answer Replace a chain of image blurs with one blur

Note again: You are multiplying the differences by 255 when you convert to CV_8U. In none of the code I see here, or at the link, do you divide by 255 afterwards. This means you need to take those images and divide by 255 to see the true difference.

I'm not sure that's the problem, but based on what you've posted, it looks like it. You need to double check that the magnitude of the input is the same as the output.

2017-04-14 17:55:41 -0500 answered a question Replace a chain of image blurs with one blur

Ok, you don't have to worry. What you need to look at is not that there is a difference, but the amount of difference.

Running your code shows the same thing, lots of difference, but if instead you find diff by using the absdiff function, you see that the amount of difference is very small. The largest difference is 27 counts, and that's right at the edge of the image. If you look at the difference image, it's totally black. In fact, there isn't a difference larger than 1 except right at the edge, where the border behaves differently.

The differences of 1 are just slightly different coefficients causing the rounding to come out differently. When you do it twice, you round to integer values between blurs, so the second blur isn't as accurate. A single blur doesn't have that problem.
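
For reference, a check along those lines might look like this, assuming a single-channel 8-bit image and a chain of two sigma-2 blurs against one combined blur of sigma sqrt(8) (those sigmas are just placeholders for whatever your chain uses):

Mat once, twice, diff;
GaussianBlur( img, once, Size( 0, 0 ), sqrt( 8.0 ) );  // single equivalent blur
GaussianBlur( img, twice, Size( 0, 0 ), 2.0 );         // chain of two blurs
GaussianBlur( twice, twice, Size( 0, 0 ), 2.0 );
absdiff( once, twice, diff );
double minVal, maxVal;
minMaxLoc( diff, &minVal, &maxVal );                   // maxVal is the largest per-pixel difference in counts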

2017-04-14 17:01:56 -0500 commented answer register ir image to distorted rgb

You're using getOptimalNewCameraMatrix for initUndistortRectifyMap, aren't you? There's a kind of bug or two in that function, so if you simply use your original camera matrix for initUndistortRectifyMap, you should be ok. If there's too much or not enough black, you can play with modifying the focal length in the camera matrix a bit to effectively zoom in or out.
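
A minimal sketch of that, assuming cameraMatrix, distCoeffs, and imageSize came out of your calibration (the names are placeholders):

Mat map1, map2, undistorted;
initUndistortRectifyMap( cameraMatrix, distCoeffs, Mat(),
                         cameraMatrix,   // reuse the original matrix instead of getOptimalNewCameraMatrix
                         imageSize, CV_32FC1, map1, map2 );
remap( src, undistorted, map1, map2, INTER_LINEAR );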

2017-04-14 16:55:49 -0500 answered a question Distance from an object.. Where is the camera origine?

I'm not actually sure (I haven't been in a situation where it mattered), but I believe that it is the point of convergence in the lens system. In other words, centered in the lens, 1 focal length in front of the focal plane.

There's an image HERE that shows things nicely, but I don't know if there's still abstraction going on here.

It might also be 1 focal length behind that point, along the optical axis, which would be where the optical axis intersects the focal plane, but I don't think so.

2017-04-14 16:34:43 -0500 commented answer How to triangulate Points from a Single Camera multiple Images?

I will definitely do that soon. Until a week ago it only had a minimum of functionality, and it's still lacking good documentation. I'll get that done soon and then submit it to contrib.

2017-04-14 16:33:34 -0500 commented question Replace a chain of image blurs with one blur

He means create diff and then display it on the screen as an image using imshow. Then save it and post that image here.

2017-04-13 15:50:31 -0500 commented answer How to triangulate Points from a Single Camera multiple Images?
2017-04-12 21:39:08 -0500 answered a question How to triangulate Points from a Single Camera multiple Images?

Take a look at THIS unfinished contrib module. It's far enough along to have what you are asking for.

You put in the 2d image point from each image, along with the camera matrix, distortion matrix, rvec and tvec, and you get out the 3d location of the point.

I should really add some sample code to the readme now that I have it working, but for now here's a sample.

vector<Mat> localt; //vector of tvecs, one for each image
vector<Mat> localr; //vector of rvecs, one for each image
vector<Point2f> trackingPts; //location of the point of interest in each image
Mat cameraMatrix;  //For a single camera application, you only need one camera matrix, for multiple cameras use vector<Mat>, one for each image
Mat distMatrix; //For a single camera application, you only need one distortion matrix, for multiple cameras use vector<Mat>, one for each image
Mat state; //Output of the calculation
Mat cov;  //Optional output, uncertainty covariance

mapping3d::calcPosition(localt, localr, trackingPts, cameraMatrix, distMatrix, state, cov);
2017-04-10 22:39:50 -0500 commented question [Paid job] Multi-view solvePnP routine

You should get an e-mail from me soon. If you don't, check your spam. I've never used this feature of the forum thing, so I don't know if it works.

2017-04-10 21:16:33 -0500 commented question [Paid job] Multi-view solvePnP routine

All right, I had some time, so I went ahead and worked on this. I've tested the two pieces individually, but not together, so there's a possibility of bugs. I don't have a particularly good dataset for this.

I've checked it in HERE. See the calcObjectPosition function. Be careful, the signature isn't quite what you asked for, but it's very similar.

Instead of ICP, which is for un-associated point clouds, I used a least-squares solution, since you know which image point goes with which model point.

2017-04-10 19:48:59 -0500 commented answer register ir image to distorted rgb

Here's some old code to re-distort an image. So calibrate both, undistort the IR, and then distort using the color coefficients.

http://code.opencv.org/issues/1387
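
In case that link goes away, here's a rough sketch of the same idea: build a remap that, for every pixel of the distorted output, looks up where that pixel sits in the undistorted image (K_color, dist_color, and undistortedIR are placeholder names):

Size sz = undistortedIR.size();
vector<Point2f> distortedPix, undistortedPix;
for ( int y = 0; y < sz.height; y++ )
    for ( int x = 0; x < sz.width; x++ )
        distortedPix.push_back( Point2f( (float)x, (float)y ) );
// undistortPoints maps distorted pixel coords to undistorted pixel coords (P = camera matrix)
undistortPoints( distortedPix, undistortedPix, K_color, dist_color, noArray(), K_color );
Mat mapx( sz, CV_32FC1 ), mapy( sz, CV_32FC1 );
for ( int y = 0, i = 0; y < sz.height; y++ )
    for ( int x = 0; x < sz.width; x++, i++ )
    {
        mapx.at<float>( y, x ) = undistortedPix[i].x;
        mapy.at<float>( y, x ) = undistortedPix[i].y;
    }
Mat redistorted;
remap( undistortedIR, redistorted, mapx, mapy, INTER_LINEAR );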

2017-04-09 16:30:27 -0500 commented answer A very basic question about image data

As for data volume, do the math yourself: actual readout pixels * frame rate * bit depth = rate in bits/second. Then go compare that to USB 2.0's effective speed of roughly 280e6 bits/s and you'll see why.
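
For example, with made-up but typical numbers: 1920 x 1080 pixels x 30 frames/s x 12 bits/pixel is about 7.5e8 bits/s of raw Bayer data, well over what USB 2.0 can actually move.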

2017-04-09 16:27:27 -0500 commented answer A very basic question about image data

It's complicated. imread uses libjpeg and libpng. How the files actually store the data, I have very little idea, and I don't particularly care. Here are the wiki pages for PNG and JPEG files. Both are complicated.

As for how RAW images work? There are apparently at least 42 different ways, based on counting the formats on the wiki page, and I bet some of those file extensions have more than one internal format they use.

Since there's no standard (or at least none that anyone actually uses...), a webcam can't provide a standard interface to get RAW images. Since RAW images have to be post-processed to be used, anyone who wants RAW will just use a real camera, so why bother?

2017-04-08 17:02:49 -0500 answered a question A very basic question about image data

I believe that RAW files are (with a few exceptions) as close as you can get to the pixel readouts, and usually have no filtering or image-altering processing. There are some scientific cameras that just dump straight to their output, but that's typically due to very high frame rates. There are also a few cameras that do some processing before saving a RAW, but Wikipedia says that they are rare and disliked for doing so.

As to the layout of any particular format, you would have to check the format specifications. With the exception of a bitmap, there are almost always many things going on. Some formats are lossy, and others are lossless. PNG is lossless, so whatever information you save is exactly what you get out. JPEG is lossy, so what you read is not necessarily what you saved.

So you have a PNG or JPEG image. What does the data look like? Who cares? What matters is what you get after it's read. When you call the imread function, it decodes the image into a pixel buffer. This has rows and columns and in those rows and columns it stores the digital intensity values (usually as BGR not RGB). If you print the memory as values you'll see 0xABCDEF. So that's good.

But, that is not what is in the RAW format for a given camera. Often RAW images store more bits of data, say 12 bits per color per pixel. So the R would be 0x000 to 0xFFF, and the same for the B and G. If you have something that can read your raw format, it likely gets decoded into something like that. Except if it's a Bayer camera, which it probably is, you don't have RGB pixels at the same place, they're in a grid. So it's not usable directly. You have to process it before it can be used.

Typically after all the processing is done, you have compressed it down to an 8-bit image 0x00 to 0xFF with RGB for every pixel and then you save it as a PNG or JPEG. That is what is usable by a human, and what can be displayed on a monitor. A few monitors allow 10 or even 12 bits of display range, but they are typically for professionals only.

Webcams rarely offer the ability to get RAW images, and provide only the filtered and compressed images. VideoCapture then decodes them into the pixel buffer for you to use.

To get RAW video live, you typically need a scientific camera that outputs over CoaXPress, Camera Link, or GigE Vision, or now there are a few USB3 Vision cameras.

2017-04-08 16:35:56 -0500 commented answer how to calculate the inliers points from my rotation and translation matrix?

projectPoints is the name of the function.

2017-04-07 23:22:30 -0500 answered a question how to calculate the inliers points from my rotation and translation matrix?

You can't know absolutely, but you can set an error threshold. Then any points that have less than that error are inliers.

Use projectPoints on the 3d to get a set of 2d points. Then calculate the error of the projected points from the "true" points as sqrt(diffx^2 + diffy^2).
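
A quick sketch of that, where objectPoints/imagePoints are your matched 3d/2d sets, rvec/tvec/cameraMatrix/distCoeffs are already known, and the 3-pixel threshold is just an arbitrary example value:

vector<Point2f> projected;
projectPoints( objectPoints, rvec, tvec, cameraMatrix, distCoeffs, projected );
double threshold = 3.0;  // pixels; pick whatever suits your data
vector<int> inlierIdx;
for ( size_t i = 0; i < projected.size(); i++ )
{
    double dx = projected[i].x - imagePoints[i].x;
    double dy = projected[i].y - imagePoints[i].y;
    if ( sqrt( dx * dx + dy * dy ) < threshold )
        inlierIdx.push_back( (int)i );
}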

2017-04-05 21:07:49 -0500 commented question How do I determine which direction/angle the robot is facing?

If you move forward once, and you're tracking with the camera, you now know which way is forward. Track the corners of the robot, and you know which ones are the front and which are the back, so now you know which way it's facing.

2017-04-05 21:05:43 -0500 commented question 360 Panorama around object

Have you tried the Stitching module?

2017-04-03 18:20:10 -0500 commented answer Using cv::solvePnP on Lighthouse Data

I have no idea what you're doing. Why don't you start a new question with context and code snippets and more details about what exactly isn't working?

2017-04-03 18:17:35 -0500 commented answer Result of CLAHE is different on 8 and 16 bit

That would be the grid.... You know, the one the parameter you're adjusting is talking about...

If you're not sure how it works, there's an explanation HERE.
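
For reference, the grid is what the tileGridSize argument of createCLAHE controls (the clip limit of 4.0 and the 8x8 grid here are just example values):

Ptr<CLAHE> clahe = createCLAHE( 4.0, Size( 8, 8 ) );  // clip limit, and an 8x8 grid of tiles
clahe->apply( src, dst );                             // src must be 8-bit or 16-bit single channel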

2017-04-02 08:46:01 -0500 answered a question About Laser Scanning - Triangulation and PointClouds

Sorry, I don't think OpenCV is what you need. OpenCV mostly does things where a camera is the primary source of input. For a laser scanner, that's not true. Or if it is, what are you even using a laser for instead of just having a white or black background to separate the object from?

If you want to know how to deal with the point cloud data from a laser scanner, you should check out the PCL.

2017-04-02 08:35:59 -0500 commented answer Parallelize chain of blurs

Ah, yes, sorry. So blurring with sigma1 and then sigma2 is equivalent to a single blur with sigma = sqrt(sigma1^2 + sigma2^2).

2017-04-01 23:52:51 -0500 answered a question Parallelize chain of blurs

From the Wikipedia page on Gaussian blur:

Applying multiple, successive Gaussian blurs to an image has the same effect as applying a single, larger Gaussian blur, whose radius is the square root of the sum of the squares of the blur radii that were actually applied. For example, applying successive Gaussian blurs with radii of 6 and 8 gives the same results as applying a single Gaussian blur of radius 10, since sqrt(6^2 + 8^2) = 10. Because of this relationship, processing time cannot be saved by simulating a Gaussian blur with successive, smaller blurs: the time required will be at least as great as performing the single large blur.
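
So if you want to collapse a whole chain into one call, add the variances; something like this, where the sigma values are made up:

vector<double> sigmas;  // the sigmas of the chain you want to replace
sigmas.push_back( 1.5 );
sigmas.push_back( 2.0 );
sigmas.push_back( 3.0 );
double sumSq = 0;
for ( size_t i = 0; i < sigmas.size(); i++ )
    sumSq += sigmas[i] * sigmas[i];
double sigmaTotal = sqrt( sumSq );                   // single blur equivalent to the whole chain
GaussianBlur( src, dst, Size( 0, 0 ), sigmaTotal );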

2017-03-31 23:10:52 -0500 commented question [Paid job] Multi-view solvePnP routine

Well, I can't take your money, but if you look HERE, you can see the multi-camera any pose triangulation I've been working on. That gives you the position of each marker in 3d space.

Then use ICP to register the 3d markers to your model. (This may be overkill, but it's built into OpenCV already).

2017-03-31 20:43:46 -0500 commented question [Paid job] Multi-view solvePnP routine

First, you should close this and post a new question for visibility.

Second, are the markers associated? By that I mean, does the first marker in the list for camera1 match the first marker in the list for camera2 or are they un-ordered?

2017-03-30 18:11:26 -0500 commented answer Using cv::solvePnP on Lighthouse Data

Basic geometry.

x = cos(elevation) .* cos(azimuth)
y = cos(elevation) .* sin(azimuth)
z = sin(elevation)
2017-03-29 18:26:27 -0500 commented answer Coordinates and 3D reconstruction after stereoCalibration

CV_32FC1 and CV_32FC1 are literally x and y coordinates. To find the value that goes in (0,0), you read the value at src(map1(0,0), map2(0,0)).

CV_32FC2 and Nothing is just those two maps as channels of one Mat object.

CV_16SC2 and CV_16UC1 is a lower-precision, fixed-point version of the CV_32FC2 map. The CV_16UC1 map stores interpolation table indices. This is not really human readable, but it uses less memory.
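
If you want the compact form, convertMaps will do the conversion for you; a minimal sketch, assuming map1/map2 are the CV_32FC1 outputs of initUndistortRectifyMap:

Mat fixed1, fixed2;
convertMaps( map1, map2, fixed1, fixed2, CV_16SC2 );  // fixed1: packed x/y coordinates, fixed2: interpolation table indices
remap( src, dst, fixed1, fixed2, INTER_LINEAR );      // same result as the float maps, with less memory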

2017-03-29 18:19:03 -0500 commented answer Calculate odometry from camera poses

It seems to be returning camera pose, so do the reverse thing in my answer and then your x, y, and z should represent camera location in world coordinates.

2017-03-29 18:16:09 -0500 commented question Weird ArUco marker behavior

Are you doing cornerSubPix? That may either help or hurt.

Or perhaps the refineDetectedMarkers function? Again, not sure if it would help or hurt.

2017-03-29 18:13:31 -0500 commented answer Aruco - draw position+orientation relative to marker

Right, sorry. It's the Rodrigues function.

2017-03-28 23:13:55 -0500 commented question Weird ArUco marker behavior

Right, but in 2.png, is it the source image or just the detection box that is crooked? Since the detection box is drawn on top I can't see the original image.

2017-03-28 20:17:55 -0500 answered a question 16-bit image processing

So for 16 bit processing there are several things to pay attention to.

First, imwrite will output proper 16-bit images if you save as PNG, BMP, or TIF formats. PNG is usually best. However, viewing it almost always shows the image just divided by 256, which on many 16-bit images is very dark. Try normalizing before saving.

Imshow definitely shows 16-bit images as 8-bit images divided by 256.

More and more OpenCV functions are handling 16-bit images properly, and many more will work if you're willing to get into the code and alter things.

To convert to 8-bit there are several ways.

  1. Normalize min/max. This doesn't saturate any extremes, but if most of your information is far from the extremes, you'll lose it.
  2. meanStdDev and convertTo. Take mean - x*stddev as your min, and mean + y*stddev as your max (see the sketch after this list). You saturate extreme values, but you preserve the information in the middle values as much as possible and cause no distortion in relative values.
  3. CLAHE or Histogram Equalization. These saturate the extremes, and alter the relative values of pixels, but visually they're great. CLAHE has local boxes which can cause artifacts between parts of the image if they have different statistics. After these you'll need to use convertTo(dst, CV_8U, 1.0/256.0) to get the 8 bit version.
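
Here's a minimal sketch of option 2, assuming img16 is a CV_16UC1 Mat and picking x = y = 2 arbitrarily:

Scalar mean, stddev;
meanStdDev( img16, mean, stddev );
double lo = mean[0] - 2 * stddev[0];
double hi = mean[0] + 2 * stddev[0];
Mat img8;
// map [lo, hi] to [0, 255]; anything outside that window saturates
img16.convertTo( img8, CV_8U, 255.0 / ( hi - lo ), -lo * 255.0 / ( hi - lo ) );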

Good Luck.

2017-03-28 17:56:16 -0500 commented question Weird ArUco marker behavior

In the second image, which is crooked, the source, or the result?

It looks like the marker in the source image isn't quite square. Which would obviously cause the result to be at least a little warped. Either that, or the detection is a bit off. I can't tell if it's the marker or just the detection box.

2017-03-28 17:25:38 -0500 commented question I have create a panorama,but the final panorama has a pixel gap,why?is it a bug?

The panorama in your first link is not the same as the one in the second link. Obviously, I can't tell what might cause the gap by using different pictures.

2017-03-28 17:19:23 -0500 commented question image reading error

RGB would be 3, RGBA would be 4. Either should work. Just after your imread(), call image.channels() to get the number of channels it has. If it's grayscale, you'll only have 1.

2017-03-28 15:49:05 -0500 received badge  Nice Answer (source)
2017-03-27 21:49:47 -0500 commented question I have create a panorama,but the final panorama has a pixel gap,why?is it a bug?

Well, share by imgur?