Should we use filters on color images? Convolution filter performance.

asked 2016-06-24 07:05:03 -0500 by VanGog

updated 2016-06-24 07:06:23 -0500
I noticed there is a great difference in performance between applying filters to grayscale and to color images. Here is example code:

#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <stdlib.h>
#include <stdio.h>

using namespace cv;

int main( int argc, char** argv )
{
    Mat src, gray, dst, abs_dst;
    src = imread( "../../data/lena.jpg" );
    if ( src.empty() )
        return -1;
    /// Remove noise by blurring with a Gaussian filter

    double t = (double) cv::getTickCount();
    GaussianBlur( src, dst, Size(3,3), 0, 0, BORDER_DEFAULT );
    cvtColor( dst, gray, CV_BGR2GRAY );  // imread loads BGR, not RGB
    t = (double) 1000 * (cv::getTickCount() - t) / cv::getTickFrequency();
    std::cout << "blur(color) + convert time: " << t << "ms" << std::endl;
    imshow("blured->gray", gray);
    waitKey(0);

    t = (double) cv::getTickCount();
    cvtColor( src, gray, CV_BGR2GRAY );  // start from the original image for a fair comparison
    GaussianBlur( gray, gray, Size(3,3), 0, 0, BORDER_DEFAULT );
    t = (double) 1000 * (cv::getTickCount() - t) / cv::getTickFrequency();
    std::cout << "convert + blur(gray) time: " << t << "ms" << std::endl;
    imshow("gray->blured", gray);
    waitKey(0);
    destroyWindow("blured->gray");
    destroyWindow("gray->blured");

    /// Apply Laplace function
    t = (double) cv::getTickCount();
    Laplacian( src, dst, CV_16S, 3, 1, 0, BORDER_DEFAULT );
    convertScaleAbs( dst, abs_dst );
    t = (double) 1000 * (cv::getTickCount() - t) / cv::getTickFrequency();
    std::cout << "Laplacian time: " << t << "ms" << std::endl;
    imshow( "Laplacian", abs_dst );

    waitKey(0);
    return 0;
}

So I wonder: why should we run filters on RGB images at all? Should we or not? I have seen various effects demonstrated on grayscale images, and the color image is always converted to grayscale first. But if it is faster to apply the filter to grayscale, could we apply it to the grayscale image and then use some trick/effect that produces a similar change (blur/smoothing/emboss/edge effects) on the color image? I would expect that to be many times faster than applying kernels to all the color channels — something like using the filtered grayscale image as a kind of "mask" that changes the look of the color image.


Comments

I guess when you apply a convolution to a 3-channel image, internally it just splits the image, applies the convolution to each separate channel, and combines them again. That's the only thing that would make sense.

StevenPuttemans ( 2016-06-30 06:48:18 -0500 )

But you could then merge the channels back and you should get a blurred RGB image, shouldn't you? Would the results be different?

VanGog ( 2016-06-30 10:31:52 -0500 )