
JoeMama's profile - activity

2017-11-28 12:23:26 -0600 received badge  Nice Question (source)
2016-09-13 21:56:16 -0600 received badge  Famous Question (source)
2015-12-26 00:30:59 -0600 received badge  Notable Question (source)
2015-09-01 04:29:31 -0600 received badge  Popular Question (source)
2014-07-06 14:39:17 -0600 commented question Deconvolution - Theory

Yes, at each step the "currentEstimate" gets deconvolved in "some areas" according to the specific algorithm I'm studying.

2014-07-06 13:05:30 -0600 commented question Deconvolution - Theory

I'm not sure. This is taken from the paper I'm working on: https://dl.dropboxusercontent.com/u/105600602/temp.png The "input image" is the blurry image we want to deconvolve, the "blurred estimate" is the convolution between the current estimate and the PSF (why are we doing this convolution?). The difference between the two gives us the residual image. Help!!! :P

2014-07-06 12:22:13 -0600 commented question Deconvolution - Theory

Hi, thanks for your reply. Yeah, intuitively I had thought of the same thing. My point is, if the residual measures the difference, the "distance", between the output blurry image and the current estimate of the original unblurred image convolved with the PSF, how does that tell us if the current estimate is good or bad? Also, why are we convolving the estimate with the PSF? Shouldn't we calculate the difference between the output blurry image and the current estimate without the PSF?

2014-07-06 10:36:08 -0600 asked a question Deconvolution - Theory

Hello,

just a quick question about theory: I'm studying deconvolution, specifically, image restoration.

Briefly, we want to calculate an estimate F_est of the original, unblurred image f by minimizing (over candidate images f):

T = || I_out - K*f ||_2^2

I_out = our output, blurred image
K     = point spread function

T measures how close our current estimate, after convolution with K, is to I_out, and we aim to minimize this distance. My question is: if the current estimate is very close to I_out, what does that indicate, and why does that make it a good estimate? And what would it mean if they were far apart?
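
To be concrete, this is how I currently evaluate T in OpenCV (f_est, K and I_out are my names, and I use filter2D for the convolution, though the paper might do it via the FFT):

Mat blurred_estimate;
filter2D(f_est, blurred_estimate, -1, K, Point(-1, -1), 0, BORDER_REPLICATE);
double T = norm(I_out, blurred_estimate, NORM_L2); // || I_out - K*f_est ||_2
T = T * T;                                         // squared norm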

Thanks in advance!

2014-07-04 18:24:18 -0600 asked a question OpenCV - Gaussian Noise

Hello,

here's my problem: I'm trying to create a simple program which adds Gaussian noise to an input image. The only constraints are that the input image is of type CV_64F (i.e. double) and that its values are normalized between 0 and 1 and must stay in that range.

The code I wrote is the following:

Mat my_noise(input.size(), input.type());
randn(my_noise, 0, 5); // mean and standard deviation
input += my_noise;

The above code doesn't work: the resulting image doesn't get displayed properly. I think that happens because the values fall outside the [0, 1] range. I modified the code like this:

Mat my_noise(input.size(), input.type());
randn(my_noise, 0, 5); // mean and standard deviation
input += my_noise;

normalize(input, input, 0.0, 1.0, NORM_MINMAX, CV_64F);

but it still doesn't work; again, the resulting image doesn't get displayed properly. Where is the problem? Remember: the input image is of type CV_64F and its values are normalized between 0 and 1 before the noise is added, and they have to stay that way afterwards.
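
One thing I've started to suspect (not sure it's right): randn's third argument is a standard deviation, not a variance, and 5 is huge compared to a [0, 1] signal. So maybe something like this, with a much smaller sigma (0.05 is an arbitrary pick of mine) and clamping instead of rescaling:

Mat my_noise(input.size(), input.type());
randn(my_noise, 0.0, 0.05); // sigma small relative to the [0, 1] range
input += my_noise;
min(input, 1.0, input);     // clamp to [0, 1] instead of min-max normalizing,
max(input, 0.0, input);     // which would stretch/shift the whole image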

Thank you in advance.

2014-03-31 12:33:11 -0600 asked a question Ringing artifacts

I'm working on a deblurring algorithm based upon this paper: http://www.cs.ubc.ca/labs/imager/tr/2013/StochasticDeconvolution/

I've implemented the part about boundary conditions: essentially, the image is padded according to the PSF width (I'm using a simple 3x3 box blur) and the pixels within the padded area are initialized to the value of the nearest pixel within the unpadded area.

I'm using OpenCV's copyMakeBorder function to do that (using the BORDER_REPLICATE flag).
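
For reference, this is roughly the call I use (pad = 1, i.e. (3 - 1) / 2, for the 3x3 kernel; input is the image being deblurred):

int pad = 1; // PSF radius: (width - 1) / 2 for the 3x3 box blur
Mat padded;
copyMakeBorder(input, padded, pad, pad, pad, pad, BORDER_REPLICATE);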

My program works fine and images get properly and efficiently deblurred; however, I always get ringing artifacts like the ones in the following image:

https://dl.dropboxusercontent.com/u/105600602/test_01.png

I always get those little "squares" in the bottom left and top right corners, along with those other larger "squares".

Is there a way to eliminate them? Am I doing something wrong?

Thanks in advance.

2014-03-31 04:29:45 -0600 asked a question Saturated pixels

As the title says: how can I determine in OpenCV whether a particular pixel of an image (either grayscale or color) is saturated (for instance, excessively bright)?
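
The best I've come up with so far is a simple threshold test for an 8-bit image (the 250 cutoff is a guess on my part, and gray/bgr stand in for my actual images; for color I'd flag a pixel when any channel is near the maximum):

// Grayscale: nonzero in `saturated` where a pixel is at/near the 8-bit maximum.
Mat saturated;
threshold(gray, saturated, 250, 255, THRESH_BINARY);

// Color: flag pixels where any of the three channels is near the maximum.
std::vector<Mat> ch;
split(bgr, ch);
Mat any_saturated = (ch[0] > 250) | (ch[1] > 250) | (ch[2] > 250);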

Thank you in advance.

2014-03-20 06:06:28 -0600 received badge  Editor (source)
2014-03-19 10:38:26 -0600 received badge  Student (source)
2014-03-19 10:22:04 -0600 asked a question Boundary artifacts in deconvolution

Hello everybody.

I'm a new OpenCV user and I'm working on a project for university. The program takes an input image, blurs it synthetically and, later, deblurs it. When the synthetically blurred image gets deconvolved, boundary artifacts appear because... well, so far I haven't implemented boundary conditions yet. Here are a few examples: in order, you can see the input unblurred image, the synthetically blurred one, and the final output I get:

[images: input unblurred image, synthetically blurred image, and deblurred output]

According to the paper I'm writing the code from, boundary conditions have to be implemented via padding the input image by the point spread function width and creating a mask that indicates which pixels are from the captured region versus from the boundary region.
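
If I understand the paper correctly, the scheme would look roughly like this (variable names are mine; I assume a 3x3 PSF, hence a padding radius of 1):

int pad = 1; // PSF radius: (3 - 1) / 2
Mat padded;
copyMakeBorder(input, padded, pad, pad, pad, pad, BORDER_REPLICATE);

// Mask: 255 where pixels come from the captured image, 0 in the boundary region.
Mat mask = Mat::zeros(padded.size(), CV_8U);
mask(Rect(pad, pad, input.cols, input.rows)).setTo(255);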

I apologize if my questions are silly, but:

1. How do I calculate the point spread function width? So far I use a simple 3x3 box blur kernel whose entries are all 1/9. Is 3 the width?

2. If the point spread function width is 3, do I have to pad the input image by adding three pixels on each of the four sides, like in the following picture, or do I have to pad it by "covering" the "dark frame" around it that results from the blurring process? From what I understand, those "dark frame" areas contain mean values of the original unblurred image, so it's impossible to reconstruct the starting image by deconvolving in those areas; doing so would just generate and propagate artifacts.

[image: input image padded by three pixels on all four sides]

What I'm trying to say is: do I have to add extra pixels to all four sides of the input image, or do I have to "cover" the "dark frame", whose width, from what I understand, matches that of the point spread function?

3. Do I have to pad the unblurred input image or the synthetically blurred one?

Thank you in advance for your help!