
HugoRune's profile - activity

2019-10-10 10:58:12 -0500 received badge  Good Question (source)
2019-02-07 06:22:19 -0500 received badge  Notable Question (source)
2017-12-12 23:09:41 -0500 received badge  Popular Question (source)
2017-09-08 12:35:09 -0500 marked best answer finding axis of symmetry in an image

Edit after 5 years:

I marked Michael Burdinov's answer as the best answer, because I have not received a better answer yet. Feel free to post a better one if you have it.

Given an image containing a mirror symmetrical object, such as a car or a butterfly, I want to find the symmetry axis:

[example images]

Due to perspective distortions the two halves of a symmetrical object will never match exactly. The background and noise further complicate things.

But given the assumption that the image of the object is mostly mirror symmetric and takes up a large portion of the picture, it should be possible to find a best fit for the axis of symmetry.

How can I find this axis?

I tried flipping the input image, then executing SURF and FindHomography on the original and the flipped image, but the results are very poor, and not robust at all.

Is there a better way?
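For context, a minimal sketch of the flip-and-match idea restricted to near-vertical axes: match keypoints between the image and its horizontal mirror and let each match vote for an axis position. This is not the accepted answer, it uses ORB rather than SURF (SURF sits in the non-free module), and the helper name vertical_symmetry_axis is made up for illustration.

    import cv2
    import numpy as np

    def vertical_symmetry_axis(gray):
        """Estimate the x position of a near-vertical mirror axis by matching
        features between the image and its horizontal flip."""
        h, w = gray.shape
        flipped = cv2.flip(gray, 1)                  # mirror around the vertical axis

        orb = cv2.ORB_create(1000)
        kp1, des1 = orb.detectAndCompute(gray, None)
        kp2, des2 = orb.detectAndCompute(flipped, None)
        if des1 is None or des2 is None:
            return None

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)
        if not matches:
            return None

        # A keypoint at x' in the flipped image corresponds to x = w - 1 - x'
        # in the original; for a true mirror pair the axis passes through the
        # midpoint of the two x coordinates, so collect midpoints and vote.
        mids = []
        for m in matches:
            x1 = kp1[m.queryIdx].pt[0]
            x2 = (w - 1) - kp2[m.trainIdx].pt[0]
            mids.append(0.5 * (x1 + x2))

        hist, edges = np.histogram(mids, bins=64, range=(0, w))
        best = np.argmax(hist)
        return 0.5 * (edges[best] + edges[best + 1])

A tilted axis would need a full vote over angle and offset (as in Loy and Eklundh's mirror-symmetry detector), but the midpoint idea stays the same.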

2017-04-12 23:37:40 -0500 received badge  Notable Question (source)
2017-01-17 11:04:33 -0500 received badge  Popular Question (source)
2016-02-18 15:27:22 -0500 received badge  Famous Question (source)
2016-02-04 15:43:53 -0500 received badge  Popular Question (source)
2015-10-27 06:21:23 -0500 received badge  Taxonomist
2015-08-03 12:23:29 -0500 received badge  Notable Question (source)
2014-11-09 19:17:59 -0500 received badge  Nice Question (source)
2014-11-01 05:26:01 -0500 received badge  Popular Question (source)
2014-07-18 21:27:23 -0500 received badge  Nice Answer (source)
2014-07-10 03:39:05 -0500 marked best answer Smoothing with a mask

Is there a way to apply a blur or median smoothing filter to an image, while supplying a mask of pixels that should be ignored?

I have a height map from a laser-scanner which I want to smooth. The map is not continuous; wherever the laser was not reflected, the map simply contains no height data.

If I arbitrarily set the height for missing values to zero (or any other value) and then blur the image, this will introduce a lot of error around these missing values, and around all edges and holes of objects.

So I need to supply a binary mask to the blur: masked pixels should be ignored when computing the blurred or median value from the neighboring pixels.

How can I accomplish this?
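One common workaround (not necessarily what was accepted here) is normalized convolution: zero the invalid pixels, blur the image and the mask with the same kernel, and divide the two results. A minimal sketch with the Python API, assuming the height map is float32 and with masked_blur as a made-up helper name:

    import cv2
    import numpy as np

    def masked_blur(height_map, valid_mask, ksize=5):
        """Box blur that ignores invalid pixels (normalized convolution).
        height_map: float32 image; valid_mask: 1 where data is valid, 0 where missing."""
        m = valid_mask.astype(np.float32)
        src = height_map.astype(np.float32) * m       # zero out the missing samples

        num = cv2.blur(src, (ksize, ksize))           # local average of masked values
        den = cv2.blur(m, (ksize, ksize))             # local fraction of valid pixels

        out = np.where(den > 0, num / np.maximum(den, 1e-6), 0)
        return out.astype(np.float32)

This only works for linear filters such as a box or Gaussian blur; a masked median would need a per-neighbourhood rank filter instead.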

2014-05-03 20:13:12 -0500 received badge  Guru (source)
2014-05-03 20:13:12 -0500 received badge  Great Answer (source)
2014-03-03 10:19:38 -0500 asked a question canny edge detection for 32 bit floats

I want to detect edges in a range image. The image contains distance values from 0 meters to 2.5 meters, stored as 32-bit floats.

Canny edge detection works very well for my images. However, the cvCanny method only accepts single-channel 8-bit input images.

Converting the distances to 8 bits results in a resolution of about 1 cm, which is far too coarse; the original images have a resolution below 1 mm.

I can generate several 8-bit images that each cover only a small part of the full range and run Canny on all of them, but that is slow and cumbersome, and introduces artifact edges.

Is there a way to run canny edge detection on the full range of the input data?
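For what it is worth, OpenCV 3.2 and later expose a Canny overload that accepts precomputed 16-bit derivatives, which sidesteps the 8-bit conversion entirely. A rough sketch under that assumption; the millimetre scaling and the thresholds are illustrative, and the random array merely stands in for the scanner data:

    import cv2
    import numpy as np

    # Stand-in for the 32-bit float range image in meters (roughly 0 .. 2.5 m).
    rng = np.random.rand(480, 640).astype(np.float32) * 2.5

    mm = rng * 1000.0                     # work in millimetres to keep sub-mm detail

    # Compute the gradients on the float data, then hand them to the Canny
    # overload that takes precomputed CV_16S derivatives (OpenCV >= 3.2).
    dx = cv2.Sobel(mm, cv2.CV_32F, 1, 0, ksize=3)
    dy = cv2.Sobel(mm, cv2.CV_32F, 0, 1, ksize=3)
    dx16 = np.clip(dx, -32767, 32767).astype(np.int16)
    dy16 = np.clip(dy, -32767, 32767).astype(np.int16)

    # The thresholds are now in "gradient millimetres", e.g. 20 mm and 60 mm jumps.
    edges = cv2.Canny(dx16, dy16, 20, 60, L2gradient=True)

That overload did not exist when this question was asked in 2014, so at the time implementing the gradient and hysteresis stages by hand was the usual way around the limitation.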

2012-11-14 17:48:33 -0500 commented answer contrast-stretch with clipping

The speed is not an issue compared to all the other stuff that needs to run, and this method happens to match the kind of noise I am dealing with well: there are a couple of highly localized noise peaks, one or two pixels wide, that disturb the normalization, and those are removed by the erosion.

2012-11-14 04:36:40 -0500 answered a question contrast-stretch with clipping

unxnut's suggestion is probably the right way to go about this, but I found a quick hack that gets me most of what I want:

To stretch an image so the topmost few pixels are clipped:

  • cvErode srcImage n times --> erodedImg // this will remove all the highest localized peaks
  • cvMinMaxLoc erodedImg // this will get the maximum value
  • Scale srcImg * (255.0 / maxValue) // this will set the maximum value of the eroded image to 255 and clip all higher values

Removing low peaks works the same way, but was not needed in my case.
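A rough sketch of the same three steps with the modern Python API; the helper name and the default number of erosions are mine, not part of the original answer:

    import cv2
    import numpy as np

    def stretch_clip_high(src, n_erosions=2):
        """Stretch a single-channel 8-bit image so that small bright noise peaks clip to 255."""
        eroded = cv2.erode(src, None, iterations=n_erosions)   # removes 1-2 px bright peaks
        _, max_val, _, _ = cv2.minMaxLoc(eroded)                # maximum of the eroded image
        out = src.astype(np.float32) * (255.0 / max(max_val, 1e-6))
        return np.clip(out, 0, 255).astype(np.uint8)            # everything above 255 is clipped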

2012-11-14 04:36:32 -0500 received badge  Scholar (source)
2012-11-13 05:15:24 -0500 asked a question contrast-stretch with clipping

To contrast-stretch an image I use

CvNormalize(img,img,0,255, NormType.MinMax);

This will set the minimum to 0 and the maximum to 255, stretching all values in between.

But is there a way to specify a percentage of values that should be clipped? For example, I want to stretch the image so that at least 5% of the resulting values are 0 and 5% are 255 (or, equivalently, so that the lowest 5% and the highest 5% of values are clipped). What combination of OpenCV commands could I use to accomplish that?
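As far as I know there is no single OpenCV call for this, but computing the cut-off values as percentiles and then rescaling linearly does exactly that. A minimal sketch, with stretch_with_clipping as a made-up helper name:

    import numpy as np

    def stretch_with_clipping(src, low_pct=5, high_pct=5):
        """Contrast-stretch so the lowest/highest percentiles clip to 0 and 255."""
        lo = np.percentile(src, low_pct)
        hi = np.percentile(src, 100 - high_pct)
        if hi <= lo:
            return np.zeros_like(src, dtype=np.uint8)
        out = (src.astype(np.float32) - lo) * (255.0 / (hi - lo))
        return np.clip(out, 0, 255).astype(np.uint8)

The same cut-offs could also be found by walking the cumulative histogram from cv2.calcHist, which avoids touching the whole image twice.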

2012-11-01 15:54:09 -0500 commented answer High-level implementation of Canny

Yes, this helps, thanks. My own searches all came up empty because I was searching for an implementation using OpenCV, but I guess converting these to use the OpenCV data types and functions will be a lot easier than converting the native C Canny code.

2012-10-31 12:45:13 -0500 answered a question Prebuilt windows dlls? No bin folder!

Instructions on how to download and install the prebuilt Windows DLLs can be found at
http://docs.opencv.org/doc/tutorials/introduction/windows_install/windows_install.html#windows-install-prebuild

It basically amounts to downloading and executing the file behind the first link on the download page.

2012-10-31 12:28:56 -0500 asked a question High-level implementation of Canny

I want to try a few things with a modified Canny algorithm, in order to adapt it to detect centerlines instead of edges. In particular, I want to swap out the standard Sobel filter that is used to detect the edge direction.

To this end I am looking for an implementation of canny that I can modify.

However, the source for the OpenCV Canny implementation is very hard to translate into a high-level language. I am currently using a C# wrapper for OpenCV and cannot use pointer arithmetic, memset, etc. If possible I would like to keep all my code in one language.

Is there an implementation of canny using the basic openCV functions and classes that would be easier to translate? Something in Python would probably work just fine.
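For illustration, here is roughly what such a high-level version could look like in Python: Sobel for the gradient stage (the part to swap out), a crude non-maximum suppression, and hysteresis via connected components. It is a sketch for experimenting, not the code that was eventually used, and it is far slower than cv2.Canny:

    import cv2
    import numpy as np

    def simple_canny(gray, low, high):
        """Bare-bones Canny from basic OpenCV/NumPy calls, easy to modify."""
        img = cv2.GaussianBlur(gray.astype(np.float32), (5, 5), 1.4)

        # 1) Gradient stage -- replace these two lines to experiment with a
        #    different operator (e.g. one tuned for centerlines).
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
        mag = cv2.magnitude(gx, gy)
        ang = np.rad2deg(np.arctan2(gy, gx)) % 180        # direction in [0, 180)

        # 2) Non-maximum suppression: keep a pixel only if it is the largest
        #    along its (quantised) gradient direction. np.roll wraps at the
        #    image borders, which is acceptable for a sketch.
        q = (np.round(ang / 45) * 45) % 180
        offs = {0: (0, 1), 45: (1, 1), 90: (1, 0), 135: (1, -1)}
        nms = np.zeros_like(mag)
        for d, (dy, dx) in offs.items():
            fwd = np.roll(mag, (-dy, -dx), axis=(0, 1))   # neighbour along the gradient
            bwd = np.roll(mag, (dy, dx), axis=(0, 1))     # neighbour against the gradient
            keep = (q == d) & (mag >= fwd) & (mag >= bwd)
            nms[keep] = mag[keep]

        # 3) Hysteresis: strong pixels seed edges, weak pixels survive only
        #    if their connected component touches a strong pixel.
        strong = (nms >= high)
        weak = (nms >= low).astype(np.uint8)
        _, labels = cv2.connectedComponents(weak, connectivity=8)
        good = np.unique(labels[strong])
        edges = np.isin(labels, good[good > 0])
        return (edges * 255).astype(np.uint8)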

2012-10-30 22:02:05 -0500 received badge  Nice Question (source)
2012-10-27 21:36:02 -0500 commented question how to calculate gradient using canny operator in 45 and 135 degrees direction

The Canny algorithm uses the Sobel operator to calculate the gradient direction as atan2(sobelY, sobelX). I am not sure what you are asking; could you clarify your question?

2012-10-26 15:12:14 -0500 commented answer Affine transform image outside of view

I added some ideas to my answer

2012-10-26 15:11:36 -0500 edited answer Affine transform image outside of view

What do you mean by view?

If you have set a ROI (region of interest) for an image, you need to remove the roi with cvResetImageROI().

But if the affine transform merely transforms some points out of the defined boundaries of the image, then the only solution is to use a larger image for transforming.

You could:

  • first create a new image with double the dimensions,
  • then call cvSetImageROI() on that image to select a region in the middle with the size of your old image,
  • then cvCopy() the old image into the new one,
  • then call cvResetImageROI() to remove the ROI (important!),
  • then do your transforms with the new image.
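With the modern Python API the same padding trick could look roughly like this; the last step adjusts the translation so that the result is the original warp just shifted by the padding offset, which goes slightly beyond the steps above but keeps the content from drifting off the canvas. img and the 2x3 matrix M are placeholder names:

    import cv2
    import numpy as np

    # img: input image, M: 2x3 affine matrix (assumed to exist already)
    h, w = img.shape[:2]

    # Embed the image, centred, in a canvas twice as large so the warp has
    # room to land outside the original bounds.
    canvas = cv2.copyMakeBorder(img, h // 2, h // 2, w // 2, w // 2,
                                cv2.BORDER_CONSTANT, value=0)

    # Adjust the transform for the new coordinate origin: with padding offset s,
    # the equivalent transform on the canvas is T_s * M * T_(-s).
    s = np.array([w // 2, h // 2], dtype=np.float64)
    M_canvas = np.asarray(M, dtype=np.float64).copy()
    M_canvas[:, 2] += s - M_canvas[:, :2] @ s

    warped = cv2.warpAffine(canvas, M_canvas, (canvas.shape[1], canvas.shape[0]))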

EDIT:

If the affine transformations are such that even a much larger image will not contain the result, I can think of two options:

1) Instead of transforming a 2D matrix, transform a list of points:
make a matrix that contains all the (foreground) pixels of your image, with their x coordinate in channel 0 and their y coordinate in channel 1; the dimensions of the matrix do not matter. Then call Transform instead of WarpAffine with this matrix, and paint the resulting transformed points into a new image (a sketch follows after option 2).

2) Change your affine transforms so that they do not shift the image quite as much:
if you use getAffineTransform, simply shift your 3 reference dst points so that they have the same centroid as your 3 src points before calling getAffineTransform.
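A sketch of option 1 in Python, assuming img is a single-channel image and M is the 2x3 affine matrix (both placeholder names):

    import cv2
    import numpy as np

    # img: single-channel image, M: 2x3 affine matrix (assumed to exist already)
    ys, xs = np.nonzero(img > 0)                    # coordinates of the foreground pixels
    pts = np.stack([xs, ys], axis=1).astype(np.float32).reshape(-1, 1, 2)

    warped_pts = cv2.transform(pts, M).reshape(-1, 2)

    # Paint the transformed points into a canvas just big enough to hold them.
    x0, y0 = np.floor(warped_pts.min(axis=0)).astype(int)
    x1, y1 = np.ceil(warped_pts.max(axis=0)).astype(int)
    out = np.zeros((y1 - y0 + 1, x1 - x0 + 1), dtype=img.dtype)
    out[(warped_pts[:, 1] - y0).astype(int),
        (warped_pts[:, 0] - x0).astype(int)] = img[ys, xs]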

2012-10-24 17:47:08 -0500 asked a question Detecting thick edges

How can I detect thick edges in an image, without detecting double edges?

How can I get from this:

[input image]

as close as possible to this:

[desired result]

I realise why Canny, Sobel, or Laplace will not work in this case: they will each find two edges, one from black to white and one from white to black. If I only want a single central edge, is there another approach I can use?
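One option, assuming the thick responses can be binarized first, is morphological thinning, which reduces each stroke to a roughly one-pixel-wide centerline. cv2.ximgproc.thinning (from the opencv-contrib package) does this; the input image name is a placeholder:

    import cv2

    # edges_thick: 8-bit image with the thick edge responses in white (placeholder name)
    _, bw = cv2.threshold(edges_thick, 127, 255, cv2.THRESH_BINARY)

    # Zhang-Suen thinning keeps the topology but shrinks every stroke to ~1 px.
    centerlines = cv2.ximgproc.thinning(bw, thinningType=cv2.ximgproc.THINNING_ZHANGSUEN)

An alternative without the contrib module is the distance transform: the ridge of cv2.distanceTransform on the binarized edges also runs along the stroke centers.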