
# Trying to remove artefact with interpolation

Hello everyone, I'm posting my question here because I'm having some issues with an interpolation algorithm.

I'm working with astronomical images and I need to apply some transformations to them. To do that I'm using the warpPerspective function with the Lanczos4 algorithm:

• The transformation can be a rotation, shift, or shear (very slight), but generally not a scale.

The problem is that I get artifacts from the interpolation all around stars. I know this is normal, but I would like to know the best way to remove these artifacts. Is there a protection algorithm? I tried a super-sampling approach:

• I multiply the size of the image by 2 (cubic)
• I apply the transformation
• I divide the size of the image by 2 (cubic)

but the result is blurred, even though the artifacts have disappeared. In fact, in this case the final image looks like the output of the interpolation algorithm used for the super sampling. If I use Lanczos4 for resizing the image, I see the artifacts again.

You can see examples at this address: http://hpics.li/1859d75

So am I doing something wrong? What is the best practice?

Best Regards


## Comments

Try converting your image to CV_32F before the warp.

( 2017-08-31 15:32:12 -0600 )

Could it be as simple as that? OK, I have to find a way to convert the data. Thank you.

( 2017-09-01 02:26:22 -0600 )

Use the convertTo function.

( 2017-09-01 02:33:02 -0600 )

Yes, and to convert back to 16U: out.convertTo(out, CV_16UC3, 65535.0)?

( 2017-09-01 02:47:09 -0600 )

Yes and no. With Lanczos4, I think you can get values greater than 65535 and less than 0. With convertTo(out, CV_16UC3, 65535.0) the out-of-range values will be saturated; why not? You have to choose between saturating and using the min/max values to convert.

( 2017-09-01 02:51:15 -0600 )

It looks like the artifacts are almost gone like that!

( 2017-09-01 03:14:40 -0600 )

Oh no, sorry. The changes are not really visible.

( 2017-09-01 03:17:36 -0600 )

The problem is that on linear images, typical undershoot artifacts consist of one or a few very dark pixels around bright structures such as stars. I would need an algorithm that prevents the generation of visible undershoot artifacts in addition to the Lanczos interpolation. If it exists, of course.

( 2017-09-01 08:48:09 -0600 )

Can you post an image and give the perspective parameters? I cannot reproduce your issue using an 8-bit image.

( 2017-09-01 11:49:09 -0600 )

## 2 answers


Lanczos4 has a sharpening effect built in. Try using cubic for the warpPerspective call.

EDIT:

I should expand on the above a bit. The clarity you see in your images is somewhat less truthful than the blurriness in the cubic images. If you look HERE, you can see a description of how the Lanczos4 and cubic interpolations work.

Cubic contains no sharpening, so it looks blurrier, but it does use all the information. Lanczos4 also uses all the information, but it applies some sharpening, so it can create halos. The nearest option blurs nothing, but it can lose data or duplicate pixels if two result pixels share the same nearest original pixel.

Visually, Lanczos4 looks better, but scientifically, cubic is more accurate. Lanczos4 can create artifacts that don't actually exist, as you are seeing here.

I'm pretty sure the halo is not caused by the data-type wrapping around. Rather, it is the sharpening effect.

For example, say your star is 15000 counts, the pixel right next to it is 1000, and the next one out is 0. If the kernel is (I'm guessing numbers here) [-0.1, 1.2, -0.1], then you get sum([-1500, 1200, 0]) = -300, which is the halo.


## Comments

The problem with using cubic in the warpPerspective call is that the result is too blurred.

( 2017-09-01 02:10:31 -0600 )

You're right. What I would like is a parameter to adjust this effect.

( 2017-09-06 08:44:06 -0600 )

You basically have to write your own interpolation algorithm if you want to adjust the parameters. The OpenCV ones are hard-coded. There will be some level of blurring, artifacts, or data loss no matter what you do.

( 2017-09-07 22:16:28 -0600 )

I don't have the skills to write my own interpolation algorithm. In fact I would like to write something like this: https://pixinsight.com/doc/tools/Star...

( 2017-09-08 03:40:32 -0600 )

Unfortunately, that doesn't give enough details to tell exactly what it does. It's similar to what LBerger is trying, though, so if you can find more information on how it actually works, he or I could maybe help.

( 2017-09-08 07:29:43 -0600 )

Sorry, I don't have much more information. It is very nice of you to try to help me. I know another algorithm (NoHalo, used in GIMP 2.9), but I think writing that interpolation algorithm in my software and rewriting a new warpPerspective transform function would be too difficult.

( 2017-09-08 07:35:51 -0600 )

The clamping mechanism acts by selectively limiting the high-pass component of the interpolation filter to fix the undershoot problems.

( 2017-09-12 02:35:28 -0600 )

That's not an answer. I am not able to reproduce your issue using this program, but I think I don't understand your transformation. My program is:

```cpp
#include <opencv2/opencv.hpp>
#include <fstream>
#include <iostream>

using namespace std;
using namespace cv;

struct ZoomPos {
    Point p;
    bool newPt;
    int zoom;
    int typeInter;
    bool inter;
};

void AjouteGlissiere(String nomGlissiere, String nomFenetre, int minGlissiere, int maxGlissiere, int valeurDefaut, int *valGlissiere, void (*f)(int, void *), void *r)
{
    createTrackbar(nomGlissiere, nomFenetre, valGlissiere, 1, f, r);
    setTrackbarMin(nomGlissiere, nomFenetre, minGlissiere);
    setTrackbarMax(nomGlissiere, nomFenetre, maxGlissiere);
    setTrackbarPos(nomGlissiere, nomFenetre, valeurDefaut);
}

void onZoom(int event, int x, int y, int flags, void *userdata)
{
    ZoomPos *z = (ZoomPos*)userdata;

    if (event == EVENT_LBUTTONDOWN)
    {
        z->p = Point(x, y);
        z->newPt = true;
    }
    else
        z->newPt = false;
}

void MAJInter(int x, void *userdata)
{
    ZoomPos *z = (ZoomPos*)userdata;
    z->inter = true;
}

int main() {
    ifstream fs;
    fs.open("pp_sel_00000.fit", ios::binary);

    if (!fs.is_open())
    {
        cout << "PB";
        return 0;
    }
    fs.seekg(0, ios_base::end);
    int nb = fs.tellg();
    int nb2 = 4008 * 2672 * 2;                  // expected payload: width * height * sizeof(short)
    cout << nb - (11 * 256 + 64) << " =? " << nb2 << endl;
    fs.seekg(11 * 256 + 64, ios_base::beg);     // skip the 2880-byte FITS header
    vector<char> v(nb2);
    fs.read(v.data(), nb2);
    Mat h = (Mat_<double>(3, 3) << -0.99974, -0.00299, 4002.14396,
                                    0.00239, -0.99964, 2673.06210,
                                   -0.00000, -0.00000,    1.00000);
    // FITS data are big-endian: swap the bytes of every 16-bit sample.
    for (size_t i = 0; i < v.size(); i += 2)
    {
        swap(v[i], v[i + 1]);
    }
    Mat imgOriginal(2672, 4008, CV_16SC1, v.data());
    Mat img;
    Mat img2;
    ZoomPos z;
    // Super-sampling pipeline: upscale x2, warp, downscale back.
    img = imgOriginal.clone();
    resize(img, img, img.size() * 2);
    warpPerspective(img, img2, h, img.size(), CV_INTER_LANCZOS4);
    resize(img2, img, imgOriginal.size());
    namedWindow("test", WINDOW_NORMAL);
    int code = 0;
    imshow("test", img);
    z.p = Point(0, 0);
    z.newPt = false;
    z.zoom = 1;
    z.inter = false;
    z.typeInter = CV_INTER_LANCZOS4;
    AjouteGlissiere("Interpolation", "test", 0, CV_INTER_LANCZOS4, CV_INTER_LANCZOS4, &z.typeInter, MAJInter, &z);
    setMouseCallback("test", onZoom, &z);
    Mat x;
    bool modifZoom = false;
    // Random LUT so that small intensity differences become visible colours.
    Ptr<Mat> lutRND;
    if (!lutRND)
    {
        RNG ra;
        lutRND = makePtr<Mat>(256, 1, CV_8UC3);
        ra.fill(*lutRND, RNG::UNIFORM, 0, 256);
    }

    while (code != 27)
    {
        code = waitKey(10);
        switch (code) {
        case '+':
            if (z.zoom < 16)
            {
                modifZoom = true;
                z.zoom += 1;
            }
            break;
        case '-':
            if (z.zoom >= 2)
            {
                modifZoom = true;
                z.zoom -= 1;
            }
            break;
        }
        if (z.inter)
        {
            // Re-run the pipeline with the interpolation chosen on the trackbar.
            img = imgOriginal.clone();
            resize(img, img, img.size() * 2);
            warpPerspective(img, img2, h, img.size(), z.typeInter);
            resize(img2, img, imgOriginal.size());
            imshow("test", img);
        }
        if (z.newPt || z.inter || modifZoom)
        {
            Rect r(z.p.x - 50, z.p.y - 50, 100, 100);
            // Keep the 100x100 inspection window inside the image.
            if (r.x < 0)
                r.x = 0;
            if (r.y < 0)
                r.y = 0;
            if (r.br().x > img.cols)
                r.x = img.cols - r.width;
            if (r.br().y > img.rows)
                r.y = img.rows - r.height;

            resize(img(r), x, Size(), z.zoom, z.zoom);
            Mat x3;
            x = x / 256;
            x.convertTo(x3, CV_8U);              // 16-bit -> 8-bit for the colour map
            applyColorMap(x3, x3, *lutRND.get());
            imshow("zoom", x3);
            z.inter = false;
            modifZoom = false;
            waitKey(10);
            cout << "-->\n";
        }
    }
    waitKey();
}
```


## Comments

The only difference I see is that you work with short data (CV_16SC1), not ushort. What if you use ushort?

( 2017-09-02 17:14:20 -0600 )

In the file I found BITPIX = 16, and in this document, table 2, 16 means a 16-bit two's-complement binary integer. The data are in short format. If you use ushort, -1 becomes 65535 and you will get strange behavior in the interpolation.

( 2017-09-03 11:00:32 -0600 )

In fact: #define USHORT_IMG 20 /* 16-bit unsigned integers, equivalent to BITPIX = 16, BSCALE = 1, BZERO = 32768 */

( 2017-09-03 11:23:01 -0600 )

"In fact: #define USHORT_IMG 20 /* 16-bit unsigned integers, equivalent to / / BITPIX = 16, BSCALE = 1, BZERO = 32768 */"

?????

Check profile here.How can you explain profile using unsigned short? You must use short to read data(x) and then use x*BSCALE+BZERO to display data

( 2017-09-03 11:54:22 -0600 )edit

Yes, you're right, but I'm using the cfitsio library in C. When you have BITPIX = 16, BSCALE = 1 and BZERO = 32768, you can load ushort data automatically with the appropriate function (fits_read_pix). You can see my project here, by the way: https://github.com/lock042/Siril-0.9

( 2017-09-03 14:07:42 -0600 )

No problem using ushort. You can download a 16-bit PNG version of the file pp_sel_00000.fit here.

( 2017-09-03 14:53:08 -0600 )

Hmmm, there's something wrong in the dynamics of the image you sent me that I can't explain.

( 2017-09-04 08:03:07 -0600 )

And if you remove the super sampling you will see the artifact. By the way, if you also use Lanczos in the resize, you should see the artifact.

( 2017-09-06 09:00:52 -0600 )

"Something wrong" : what ?

"Artifact " what do you call artifact ? black pixel in white blob? never seen that.

Another question have you try my program ?

If you are not happy with lanczos4 use Whittaker–Shannon interpolation formula example here using blue values formula gives blue signal and original is in red....

( 2017-09-06 09:19:11 -0600 )edit

"Something wrong": for example, the median absolute deviation of the images you sent became 0. It is very strange; some signals disappeared. Yes, I think that if you do not resize before and after the warpPerspective you should see this kind of artifact. By the way, I tried to adapt your code into mine for testing. The problem is that resizing the image by two before warpPerspective should change the H values. I haven't tried your code yet, but I will for sure. The algorithm I would like to test is NoHalo. The Whittaker–Shannon interpolation formula sounds fine too, but I don't have the skills to implement it myself.

( 2017-09-06 09:27:03 -0600 )


## Stats

Asked: 2017-08-31 04:21:34 -0600

Seen: 1,610 times

Last updated: Sep 03 '17