
defunktlemon's profile - activity

2019-02-01 04:27:43 -0600 received badge  Popular Question (source)
2014-11-21 03:15:14 -0600 commented answer Coordinate System Transform

Thanks. Unfortunately, there is no theory or explanation in that tutorial, which makes it hard for a novice in image processing and programming, such as myself, to follow. I haven't actually started using OpenCV yet either, but would really like to, as it's one of the few places in the image-processing world that seems to have an open forum, which to me is a very powerful thing. I do appreciate your comments very much, however, as I am still learning from the experience. I'm going to try to take a look at it over the weekend, time allowing. Will have to see how many chores the girlfriend (weekend boss) has planned for me first :). Thanks again petititi.

2014-11-20 03:24:26 -0600 commented answer Coordinate System Transform

Hi. Thanks for your response petititi. I'm not actually concerned with any objects in the scene just yet. The purpose is to negate camera knock, so that if the camera is moved, an algorithm compensates for the difference. The algorithm will have the offset values from the template image and from the images taken after the camera movement occurred, and can then perform the necessary homography transformation on them. I guess I can use three corner points on each of the images, as that makes up a plane on each. I hadn't heard of a homography transformation before - I guess it must be an affine transformation? I think this would probably be the standard way to perform the operation, with an affine transformation matrix, but I'm also wondering whether there is another option?
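To make the point-correspondence idea concrete, here is a minimal sketch of the two options, assuming OpenCV's Python bindings (cv2); the coordinates below are made-up example values, not real measurements from the camera:

    import numpy as np
    import cv2

    # Where three template corners appear in an image taken after the camera
    # knock (hypothetical values), and where they should be in the template.
    # Three point pairs are enough to fix an affine transform.
    pts_knocked = np.float32([[2, 3], [61, 5], [1, 52]])
    pts_template = np.float32([[0, 0], [60, 0], [0, 50]])
    A = cv2.getAffineTransform(pts_knocked, pts_template)  # 2x3 affine matrix

    # With four or more pairs a full homography can be estimated instead,
    # which also covers the perspective warping an affine cannot express.
    # (With many noisy correspondences you would pass cv2.RANSAC as well.)
    pts_knocked_h = np.float32([[2, 3], [61, 5], [1, 52], [63, 55]])
    pts_template_h = np.float32([[0, 0], [60, 0], [0, 50], [60, 50]])
    H, mask = cv2.findHomography(pts_knocked_h, pts_template_h)

    # Either matrix then warps new shots back into the template frame:
    # fixed = cv2.warpAffine(img, A, (width, height))
    # fixed = cv2.warpPerspective(img, H, (width, height))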

2014-11-19 11:09:35 -0600 asked a question Coordinate System Transform

Hi. I'm just gonna throw this one out there to see if anyone will knock around some ideas with me.

I have a camera which has been calibrated, and it takes shots of the same scene with the same type of objects in it, to analyse whether there are any differences in the objects. Let's say the top_left of the image is (0,0), top_right (60,0), bottom_left (0,50) and bottom_right (60,50), for example. If someone comes along and knocks the camera, then the image plane will be in different coordinates. I would like to find a way for the system to transform the coordinates, to allow for this difference in the next series of images captured, so that they appear the same as the original ones before the camera knock, given that transformations in rotation, scale, translation and maybe even warping may now be needed.

The camera runs in different modes:
• Stereo camera
• Laser and camera
• SRS camera

Internally there are 3 calibrations (one for each of the above).

The end result is that after applying the target check routine the camera should give identical results as if the camera hadn’t been moved or replaced.

So, if anyone has any suggestions on how to go about achieving this that would be cool - holler back on this line. Thanks
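For what it's worth, here is a rough sketch of the kind of thing I have in mind: match features between the original template image and an image taken after the knock, estimate a homography, and warp the new image back. It assumes OpenCV in Python, a mostly unchanged scene background, and placeholder file names, so treat it as an outline rather than a working routine:

    import numpy as np
    import cv2

    # Placeholder file names - the real system would supply its own images.
    template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
    knocked = cv2.imread("after_knock.png", cv2.IMREAD_GRAYSCALE)

    # Detect and describe keypoints in both images.
    orb = cv2.ORB_create(2000)
    kp_t, des_t = orb.detectAndCompute(template, None)
    kp_k, des_k = orb.detectAndCompute(knocked, None)

    # Match descriptors and keep the best correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_k, des_t), key=lambda m: m.distance)[:200]

    src = np.float32([kp_k[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_t[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robustly estimate the homography and warp the knocked image back
    # into the template's coordinate frame.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = template.shape
    realigned = cv2.warpPerspective(knocked, H, (w, h))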

2014-11-19 09:06:06 -0600 asked a question Target Check - rectifying translations in an image

Hi. I'm just gonna throw this one out there to see if anything boomerangs back with a solution or two. What I'm hoping to achieve is to be able to rectify translated or warped images, based perhaps on a master image, so that they are displayed correctly in preparation for image analysis.

Scenario: I have a camera that takes images of the same scene, which are then analysed to find something; whatever. But this relies on the image detection algorithm expecting the things in the image, and the image itself, to be the same, apart from some minor difference in a small region of interest - no problem! At least, no problem until some monkey comes along and knocks the camera, for example (though I can't explain why there are monkeys in the scene). The target check for the camera internally has three calibrations: one for the stereo camera, one for the laser and camera together, and a final one for the SRS camera, hmmmm! So, the end result should be that, after applying the target check routine, the camera gives identical results as if it hadn't been moved.

The problem posed is: how to approach this? For image correction I guess I'm looking at scale, rotation (x, y, z), maybe warping, etc. I'm guessing I need to get out my matrix toolbox and maybe even play with an idea or two in the classification arena. And I suppose I could be looking at using a master image of the consistent background of the scene and trying to align and fix the camera-knocked images to that, perhaps through a grey-threshold evaluation of ROIs.
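One idea I might try instead of thresholding ROIs is direct intensity-based alignment against the master image with ECC. This is only a sketch of that alternative, assuming OpenCV 4.x in Python; the file names are placeholders and the approach isn't part of the camera's own target check:

    import numpy as np
    import cv2

    # Placeholder images: the master/reference shot and a camera-knocked shot.
    master = cv2.imread("master.png", cv2.IMREAD_GRAYSCALE)
    knocked = cv2.imread("knocked.png", cv2.IMREAD_GRAYSCALE)

    # Start from the identity warp; MOTION_EUCLIDEAN covers rotation and
    # translation (use cv2.MOTION_HOMOGRAPHY and a 3x3 matrix if warping
    # is also expected).
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 500, 1e-6)

    # ECC maximises the correlation between the two images directly,
    # refining the warp parameters iteratively.
    cc, warp = cv2.findTransformECC(master, knocked, warp,
                                    cv2.MOTION_EUCLIDEAN, criteria, None, 5)

    # Map the knocked image back onto the master's coordinates.
    h, w = master.shape
    aligned = cv2.warpAffine(knocked, warp, (w, h),
                             flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)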

So, if any compadres out there have a mind to knock around a suggestion ball with me, holler back on this line, OK. Thanks!

2013-04-03 07:14:33 -0600 commented question log transformations

Hi. Please let me know if I am right - I believe I was wrong in my assumptions above.

The transformation takes a narrow range of grey-level pixels in the input image and maps them into a wider range in the output image.

So, in a Fourier spectrum, for example, the dominant white pixels will become more grey and the unseen black, or near-black, pixels will become visible. In effect, it lessens large variations in pixel values.

Why is this useful? Is it used for things like making image negatives? So, for example, if I were to perform the inverse log transform, would this make a negative image, which would be useful in things like mammograms?
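To check my own understanding of the Fourier-spectrum case, here is a small sketch (plain numpy; the image array is just a random placeholder) showing how the log transform compresses the huge DC peak and lifts the tiny magnitudes into a displayable range:

    import numpy as np

    # Placeholder image: any 2-D grey-level array would do here.
    img = np.random.default_rng(0).integers(0, 256, (256, 256)).astype(np.float64)

    # Magnitude spectrum: the DC term dwarfs everything else, so a direct
    # display would show little more than a single bright dot.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))

    # Log transform s = c * log(1 + r): the huge peak is compressed and the
    # tiny high-frequency magnitudes are expanded into the visible range.
    log_spectrum = np.log1p(spectrum)
    display = np.uint8(255 * log_spectrum / log_spectrum.max())

    print(spectrum.max() / spectrum.mean())          # raw spectrum: enormous spread
    print(log_spectrum.max() / log_spectrum.mean())  # after log: much smaller spread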

2013-04-03 06:48:14 -0600 received badge  Editor (source)
2013-04-03 06:42:39 -0600 asked a question log transformations

Hi. I'm trying to understand the image analysis process. Could somebody help me understand why, when considering log transformations, it is said that the transformation is used to expand the values of dark pixels while compressing higher-level values, please?

s = c log(1 + r)

I think I am confused by expanding and compressing. I thought the transformation would be like a threshold, where grey pixels above a threshold become white and those below become black, effectively making a black-and-white image.
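To see for myself why it is expansion/compression rather than a threshold, here is a small sketch (plain numpy, assuming 8-bit grey levels) of the mapping s = c log(1 + r):

    import numpy as np

    # Grey levels 0..255; choose c so the output also spans 0..255.
    r = np.arange(256, dtype=np.float64)
    c = 255.0 / np.log(1.0 + 255.0)
    s = c * np.log1p(r)

    # Dark inputs are spread apart (expanded), bright inputs are squeezed
    # together (compressed) - nothing is forced to pure black or white.
    for lo, hi in [(0, 10), (120, 130), (245, 255)]:
        print(f"input {lo:3d}-{hi:3d} -> output {s[lo]:.0f}-{s[hi]:.0f}")

Running this, the dark range 0-10 is stretched over roughly a hundred output levels, while the bright range 245-255 is squeezed into just a couple, so the curve is smooth rather than a hard cut-off.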

Thanks