
Trying to remove artefact with interpolation

asked 2017-08-31 04:21:34 -0500 by lock042

updated 2020-11-01 06:41:11 -0500

Hello everyone, I'm posting my question here because I have some issues with an interpolation algorithm.

I'm working with astronomical images and I need to apply some transformations to them. To do that I'm using the warpPerspective function with the Lanczos4 algorithm:

  • The transformation can be a rotation, shift, or slight shear, but generally not a scale.

The problem is that I get interpolation artifacts all around stars. I know this is normal, but I would like to know the best way to remove these artifacts. Is there a protection algorithm? I tried the super-sampling approach:

  • I multiply the size of the image by 2 (cubic)
  • I make the transformation
  • I divide the size of the image by 2 (cubic)

but the result is blurred, even though the artifacts have disappeared. In fact, in this case the final image looks like the output of the interpolation algorithm used for the super sampling. If I use Lanczos4 for resizing the image, I see the artifacts again.

You can see examples at this adress:

So am I doing something wrong? What is the best practice?

Best Regards



Try converting your image to CV_32F before the warp.

LBerger ( 2017-08-31 15:32:12 -0500 )

Could the solution be that simple? OK, I have to find a way to convert the data. Thank you.

lock042 ( 2017-09-01 02:26:22 -0500 )

Use the convertTo function.

LBerger ( 2017-09-01 02:33:02 -0500 )

Yes, and to convert back to 16U: out.convertTo(out, CV_16UC3, 65535.0); ?

lock042 ( 2017-09-01 02:47:09 -0500 )

Yes and no. Using Lanczos4, I think you can get values greater than 65535 and less than 0. With convertTo(out, CV_16UC3, 65535.0) the values will saturate; why not? You have to choose: either saturate, or use the min/max values to convert.

LBerger ( 2017-09-01 02:51:15 -0500 )

It looks like the artifacts are almost gone that way!

lock042 ( 2017-09-01 03:14:40 -0500 )

Oh no, sorry. The changes are not really visible.

lock042 ( 2017-09-01 03:17:36 -0500 )


The problem is that on linear images, typical undershoot artifacts consist of one or a few very dark pixels around bright structures such as stars. I would need an algorithm that prevents visible undershoot artifacts on top of the Lanczos interpolation, if such a thing exists.

lock042 ( 2017-09-01 08:48:09 -0500 )

Can you post one image and give the perspective parameters? I cannot reproduce your issue using an 8-bit image.

LBerger ( 2017-09-01 11:49:09 -0500 )

2 answers


answered 2017-08-31 18:22:14 -0500 by Tetragramm

updated 2017-09-03 17:06:23 -0500

Lanczos4 has a sharpening effect built in. Try using cubic for the warpPerspective call.

EDIT Attempt #3:

Sorry, I keep losing things I post. Hopefully this works.

I should expand on the above a bit. The clarity you see in your images is somewhat less truthful than the blurriness in the cubic images. If you look HERE, you can see a description of how the lanczos4 and the cubic interpolation work.

Cubic contains no sharpening, so it looks blurrier, but it does use all the information. Lanczos4 also uses all the information, but it applies some sharpening, so it can create halos. The nearest option, however, blurs nothing, but it can lose data or duplicate pixels if two result pixels share the same nearest original pixel.

Visually, Lanczos4 looks better, but scientifically, cubic is more accurate. Lanczos4 can create artifacts that don't actually exist, as you are seeing here.

I'm pretty sure the halo is not caused by the data-type wrapping around. Rather, it is the sharpening effect.

For example, your star is 15000 counts and the pixel right next to it is 1000. If the kernel is (I'm guessing numbers here) [-0.1, 1.2, -0.1], then you get -1500 + 1200 + 0 = -300, which is the halo.



The problem with using cubic in the warpPerspective call is that the result is too blurred.

lock042 ( 2017-09-01 02:10:31 -0500 )

You're right. What I would like is a parameter to adjust this effect.

lock042 ( 2017-09-06 08:44:06 -0500 )

You basically have to write your own interpolation algorithm if you want to adjust the parameters; the OpenCV ones are hard-coded. There will be some level of blurring, artifacts, or data loss no matter what you do.

Tetragramm ( 2017-09-07 22:16:28 -0500 )

I don't have the skills to write my own interpolation algorithm. In fact I would like to do something like this:

lock042 ( 2017-09-08 03:40:32 -0500 )

Unfortunately, that doesn't give enough detail to tell exactly what it does. It is similar to what LBerger is trying, though, so if you can find more information on how it actually works, he or I could maybe help.

Tetragramm ( 2017-09-08 07:29:43 -0500 )

Sorry, I don't have much more information. It is very nice of you to try to help me. I know another algorithm (NoHalo, used in GIMP 2.9), but I think writing that interpolation algorithm in my software and rewriting a new warpPerspective transform function would be too difficult.

lock042 ( 2017-09-08 07:35:51 -0500 )

The clamping mechanism works by selectively limiting the high-pass component of the interpolation filter to fix the undershoot problems.

lock042 ( 2017-09-12 02:35:28 -0500 )

answered 2017-09-01 16:57:28 -0500 by LBerger

updated 2017-09-03 14:49:16 -0500

That's not an answer. I am not able to reproduce your issue with this program, but I think I don't understand your transformation. My program is:

#include <opencv2/opencv.hpp>
#include <fstream>
#include <iostream>

using namespace std;
using namespace cv;

struct ZoomPos {
    Point p;        // last clicked point
    bool newPt;     // a new point was clicked
    int zoom;       // zoom factor of the inspection view
    int typeInter;  // interpolation flag used for the warp
    bool inter;     // interpolation type changed: redo the warp
};

void AjouteGlissiere(String nomGlissiere, String nomFenetre, int minGlissiere, int maxGlissiere, int valeurDefaut, int *valGlissiere, void (*f)(int, void *), void *r)
{
    createTrackbar(nomGlissiere, nomFenetre, valGlissiere, 1, f, r);
    setTrackbarMin(nomGlissiere, nomFenetre, minGlissiere);
    setTrackbarMax(nomGlissiere, nomFenetre, maxGlissiere);
    setTrackbarPos(nomGlissiere, nomFenetre, valeurDefaut);
}

void onZoom(int event, int x, int y, int flags, void *userdata)
{
    ZoomPos *z = (ZoomPos *)userdata;
    if (event == EVENT_LBUTTONDOWN)
    {
        z->p = Point(x, y);
        z->newPt = true; // request a refresh of the zoom view
    }
}

void MAJInter(int x, void *userdata)
{
    ZoomPos *z = (ZoomPos *)userdata;
    z->inter = true; // trackbar moved: redo the warp with the new flag
}

int main() {
    ifstream fs;"", ios::binary); // raw FITS file (path elided)

    if (!fs.is_open())
        return 0;
    fs.seekg(0, ios_base::end);
    int nb = fs.tellg();
    int nb2 = 4008 * 2672 * 2;
    cout << nb - 11 * 256 + 64 << " =? " << nb2 << endl;
    fs.seekg(11 * 256 + 64, ios_base::beg); // skip the FITS header
    vector<char> v(nb2);, nb2);
    Mat h = (Mat_<double>(3, 3) << -0.99974, -0.00299, 4002.14396,
                                    0.00239, -0.99964, 2673.06210,
                                   -0.00000, -0.00000, 1.00000);
    for (int i = 0; i < v.size(); i += 2)
        swap(v[i], v[i + 1]); // FITS stores big-endian data: swap to native order
    Mat imgOriginal(2672, 4008, CV_16SC1,;
    Mat img;
    Mat img2;
    ZoomPos z;
    img = imgOriginal.clone();
    warpPerspective(img, img2, h, img.size(), CV_INTER_LANCZOS4);
    resize(img2, img, imgOriginal.size());
    namedWindow("test", WINDOW_NORMAL);
    setMouseCallback("test", onZoom, &z);
    int code = 0;
    z.p = Point(0, 0); z.newPt = false; z.zoom = 1; z.inter = false; z.typeInter = CV_INTER_LANCZOS4;
    AjouteGlissiere("Interpolation", "test", 0, CV_INTER_LANCZOS4, CV_INTER_LANCZOS4, &z.typeInter, MAJInter, &z);
    bool modifZoom = false;
    Ptr<Mat> lutRND;
    if (!lutRND)
    {
        RNG ra;
        lutRND = makePtr<Mat>(256, 1, CV_8UC3);
        ra.fill(*lutRND, RNG::UNIFORM, 0, 256); // random LUT makes small level differences visible
    }

    while (code != 27)
    {
        code = waitKey(10);
        switch (code) {
        case '+':
            if (z.zoom < 16)
            {
                modifZoom = true;
                z.zoom += 1;
            }
            break;
        case '-':
            if (z.zoom >= 2)
            {
                modifZoom = true;
                z.zoom -= 1;
            }
            break;
        }
        if (z.inter)
        {
            // super sampling: upscale by 2, warp, downscale back
            img = imgOriginal.clone();
            resize(img, img, img.size() * 2);
            warpPerspective(img, img2, h, img.size(), z.typeInter);
            resize(img2, img, imgOriginal.size());
            imshow("test", img);
        }

        if (z.newPt || z.inter || modifZoom)
        {
            Rect r(z.p.x - 50, z.p.y - 50, 100, 100);
            if (r.x < 0)
                r.x = 0;
            if (r.y < 0)
                r.y = 0;
            Mat x3;
            img(r).convertTo(x3, CV_8U, 1 / 256.0); // 16-bit -> 8-bit for display
            resize(x3, x3, Size(), z.zoom, z.zoom, INTER_NEAREST);
            applyColorMap(x3, x3, *lutRND.get());
            imshow("zoom", x3);
            z.newPt = false;
            modifZoom = false;
        }
        z.inter = false;
    }
    return 0;
}


The only difference I see is that you work with short data (CV_16SC1), not ushort. What if you use ushort?

lock042 ( 2017-09-02 17:14:20 -0500 )

In the file I found BITPIX = 16, and in this document, table 2 says that 16 means a 16-bit two's-complement binary integer. The data are in short format. If you use ushort, -1 becomes 65535 and you will get strange behavior in the interpolation.

LBerger ( 2017-09-03 11:00:32 -0500 )

In fact: #define USHORT_IMG 20 /* 16-bit unsigned integers, equivalent to BITPIX = 16, BSCALE = 1, BZERO = 32768 */

lock042 ( 2017-09-03 11:23:01 -0500 )

"In fact: #define USHORT_IMG 20 /* 16-bit unsigned integers, equivalent to / / BITPIX = 16, BSCALE = 1, BZERO = 32768 */"


Check profile here.How can you explain profile using unsigned short? You must use short to read data(x) and then use x*BSCALE+BZERO to display data

LBerger gravatar imageLBerger ( 2017-09-03 11:54:22 -0500 )edit

Yes, you're right, but I'm using the cfitsio library in C. When you have BITPIX = 16, BSCALE = 1 and BZERO = 32768, you can load ushort data automatically with the appropriate function (fits_read_pix). You can see my project here, by the way:

lock042 ( 2017-09-03 14:07:42 -0500 )

No problem using ushort. You can download a 16-bit PNG version of the file here.

LBerger ( 2017-09-03 14:53:08 -0500 )

Hmmm, there's something wrong in the dynamics of the image you sent me that I can't explain.

lock042 ( 2017-09-04 08:03:07 -0500 )

And if you remove the super sampling you will see the artifact. By the way, if you also use Lanczos in the resize, you should see the artifact.

lock042 ( 2017-09-06 09:00:52 -0500 )

"Something wrong" : what ?

"Artifact " what do you call artifact ? black pixel in white blob? never seen that.

Another question have you try my program ?

If you are not happy with lanczos4 use Whittaker–Shannon interpolation formula example here using blue values formula gives blue signal and original is in red....

LBerger gravatar imageLBerger ( 2017-09-06 09:19:11 -0500 )edit

Something wrong: for example, the median absolute deviation of the image you sent became 0, which is very strange; some signals disappeared. Yes, I think that if you do not resize before and after the warpPerspective you should see this kind of artifact. By the way, I tried to adapt your code into mine for testing. The problem is that resizing the image by two before warpPerspective should change the H values. I haven't tried your code yet, but I will for sure. The algorithm I would like to test is NoHalo. The Whittaker–Shannon interpolation formula sounds fine too, but I don't have the skills to implement it myself.

lock042 ( 2017-09-06 09:27:03 -0500 )



Asked: 2017-08-31 04:21:34 -0500

Seen: 1,195 times

Last updated: Sep 03 '17