I have a set of several images that are shifted by small amounts in the X/Y direction. Starting from, e.g., four low-resolution images, I want to reconstruct one high-resolution image. For that I need to determine the appropriate shifts.
Currently I'm using the Android version of calcOpticalFlowPyrLK(). This detects the shifts in the X and Y directions.
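As a cross-check on the LK result, a global shift between two frames can also be estimated with phase correlation in plain NumPy. This is a minimal sketch for integer-pixel shifts; OpenCV's phaseCorrelate() does the same with sub-pixel refinement:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) shift that maps image a onto image b."""
    Fa = np.fft.fft2(a)
    Fb = np.fft.fft2(b)
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift.
    R = np.conj(Fa) * Fb
    R /= np.abs(R) + 1e-12
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped indices to signed shifts.
    h, w = a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (3, 5), axis=(0, 1))  # shift down 3, right 5
print(phase_correlation_shift(img, shifted))  # -> (3, 5)
```

Phase correlation only gives one global translation, so it won't capture the position-dependent magnification, but it is a quick sanity check on the flow output.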
Sample images can be found here
My acquisition setup is the following:
LED => Pinhole => Distance Z (light propagates) => transmissive biological sample => small distance z (interference) => Sensor
This represents an inline hologram acquisition as described in papers like this one. My goal is to shift the LED in the X/Y direction. This causes a shift of the object/interference pattern on the sensor. By "re-shifting" the object and merging the LR images, I can obtain sub-pixel super-resolution. My problem is that the shift does not seem to be linear, due to a magnification that depends on the distance from the optical axis.
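For this geometry, the relation between LED shift and hologram shift can be estimated from similar triangles. This is only a paraxial, on-axis point-source sketch (Z and z are the distances named above, the numbers are made-up example values); off-axis points see a slightly different effective magnification, which would match the observed non-linearity:

```python
# Paraxial estimate for a lensfree inline setup (example values, not measured):
Z = 60.0   # mm, pinhole/LED to sample
z = 1.0    # mm, sample to sensor

M = (Z + z) / Z            # fringe magnification of the hologram
s_led = 0.5                # mm, lateral LED shift
s_sensor = s_led * z / Z   # magnitude of the resulting hologram shift
                           # (the hologram moves opposite to the LED)

print(M)         # slightly above 1
print(s_sensor)  # much smaller than the LED shift
```

Because z is much smaller than Z, a large LED motion produces only a tiny (sub-pixel) hologram shift, which is exactly what the pixel-super-resolution scheme needs.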
I was thinking that the camera calibration method could help, but I haven't had time to look into it.
Another way might be to use goodFeaturesToTrack, detect matches between LR1.jpg and LR2.jpg, and then "non-linearly" warp the second image so the features match pixel for pixel. Does this make sense? Is this supported in OpenCV? I didn't find a good starting point.
My idea:
- Find features
- Use e.g. 20 points
- Try to bring Point_i(x_i, y_i) in picture 2 to the position of Point_i(x_i, y_i) in picture 1
- Sum/merge the pixels into an HR Mat
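The merge step above amounts to shift-and-add. A toy NumPy sketch, assuming ideal noise-free half-pixel shifts and a 2x upscale, where each LR image fills one phase of the HR grid:

```python
import numpy as np

# Toy ground truth and its four half-pixel-shifted LR samplings.
hr_true = np.arange(64, dtype=float).reshape(8, 8)
lr = {(oy, ox): hr_true[oy::2, ox::2] for oy in (0, 1) for ox in (0, 1)}

# Shift-and-add: place each LR image at its sub-pixel offset on the HR grid.
hr = np.zeros_like(hr_true)
for (oy, ox), img in lr.items():
    hr[oy::2, ox::2] = img

print(np.array_equal(hr, hr_true))  # True in this ideal case
```

With real data the shifts are not exactly half a pixel, so each LR pixel has to be splatted at its estimated sub-pixel position (with interpolation and a per-pixel weight/normalization map) instead of landing on an exact HR grid phase; residual registration error then shows up as exactly the motion-blur-like smearing described below.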
Right now there is something like "motion blur" in the resulting image. The registration works quite OK, but not well enough for super-resolution. ;)
Thank you very much!