PNG encoding performance difference between 3.1 and 3.2

asked 2018-10-31 17:59:13 -0600 by phamtp, updated 2018-11-02 18:37:38 -0600

Encoding a PNG with OpenCV 3.2.0 takes about 2x longer than with OpenCV 3.1.0 on an ARM Cortex-A9 system, and about 1.4x longer on an x64 system. Does anyone know what causes the difference and whether it can be reduced through build settings? I'll try a git bisect between 3.1 and 3.2 to find the change responsible, but wanted to check whether anyone already knows. Thanks

Test code (cv.cpp):

#include <cstdio>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <opencv2/opencv.hpp>

#include <sys/time.h>


int height = 1600;
int width = 1200;

void loadFrame(cv::Mat& frame)
{
    srand (time(NULL));
    printf("generating random image\n");
    int image_format = CV_8U;
    frame.create(height, width, image_format);
    uint8_t* image_ptr = frame.ptr<uint8_t>(0);
    for (int pixel = 0; pixel < int(frame.total()); ++pixel)
    {
        image_ptr[pixel] = rand();
    }
}

bool writeImageToString(const cv::Mat& image, const std::string& extension, std::vector<uint8_t>& buffer)
{
    if (image.empty() || (image.cols == 0 && image.rows == 0))
        return false;

    bool success = false;

    // Set the image compression parameters.
    std::vector<int> params;
    if (extension == ".png")
    {
        params.push_back(cv::IMWRITE_PNG_COMPRESSION);
        params.push_back(6);     // compression level 0..9 (higher = smaller but slower)
        success = cv::imencode(extension, image, buffer, params);
    }
    else if (extension == ".jpg")
    {
        params.push_back(cv::IMWRITE_JPEG_QUALITY);
        params.push_back(90);     // 0...100 (higher is better)
        success = cv::imencode(extension, image, buffer, params);
    }

    return success;
}

void encodeTest(int argc, const char** argv)
{
    int frame_tests = 10;
    int write_tests = 5;
    size_t png_size = 0;
    size_t jpg_size = 0;
    double png_time = 0;
    double jpg_time = 0;
    struct timeval start, end;

    for ( int frame_count = 0; frame_count < frame_tests; frame_count++ )
    {
        cv::Mat frame;
        loadFrame(frame);
        std::vector<uint8_t> buffer;
        gettimeofday(&start, NULL);

        for ( int test_count = 0; test_count < write_tests; test_count++ )
        {
            writeImageToString(frame, ".png", buffer);
        }
        gettimeofday(&end, NULL);
        png_time += ((end.tv_sec - start.tv_sec) * 1000) 
            + (end.tv_usec / 1000 - start.tv_usec / 1000);

        png_size += buffer.size();
        gettimeofday(&start, NULL);

        for ( int test_count = 0; test_count < write_tests; test_count++ )
        {
            writeImageToString(frame, ".jpg", buffer);
        }
        gettimeofday(&end, NULL);
        jpg_time += ((end.tv_sec - start.tv_sec) * 1000)
            + (end.tv_usec / 1000 - start.tv_usec / 1000);
        jpg_size += buffer.size();

    }
    std::cout << "Avg time PNG: " << png_time / frame_tests / write_tests / 1000 << "sec  JPG: " << jpg_time / frame_tests / write_tests / 1000 << "sec" << std::endl;

    printf("Avg size PNG: %zu     JPG: %zu\n", png_size / 10, jpg_size / 10);
}

int main(int argc, const char** argv)
{
    printf( "OpenCV version : %s\n" , CV_VERSION);

    try
    {
        encodeTest(argc, argv);
    }
    catch (cv::Exception& e)
    {
        printf("OpenCV test error\n");
        throw;
    }
    return 0;
}

This is the CMakeLists.txt:

cmake_minimum_required(VERSION 2.8)
project( cv )
find_package( OpenCV REQUIRED )
include_directories( ${OpenCV_INCLUDE_DIRS} )
add_executable( cv cv.cpp )
target_link_libraries( cv ${OpenCV_LIBS} )

To cross-compile for ARM I used the following cmake command:

cmake -DCMAKE_TOOLCHAIN_FILE=../platforms/linux/arm-gnueabi.toolchain.cmake -DGCC_COMPILER_VERSION=5 -DENABLE_NEON=ON -DWITH_OPENMP=1 -DCMAKE_INSTALL_PREFIX=/usr/local ..

For the Intel x64 system (Ubuntu 14.04) I just used:

cmake .

Sample runs on ARM:

OpenCV version : 3.1.0
Avg time PNG: 0.26792sec  JPG: 0.35374sec
Avg size PNG: 1925061     JPG: 1509208

OpenCV version : 3.1.0
Avg time PNG: 0.26778sec  JPG: 0.35472sec
Avg size PNG: 1925061     JPG: 1509038

OpenCV version : 3.2.0
Avg time PNG: 0 ...

Comments

Neither 3.1 nor 3.2 matters any more (both are outdated). Please run your regression test against this and report back!

berak ( 2018-10-31 18:28:46 -0600 )

The encode times using libraries built from the linked commit (733fec0) look the same as version 3.2.0 (i.e. slower than 3.1). Here is what I see on my x64 machine:

OpenCV version : 4.0.0-pre
Avg time PNG: 0.10924sec  JPG: 0.01666sec
Avg size PNG: 1925061     JPG: 1509126
phamtp ( 2018-10-31 19:27:34 -0600 )

btw, please use cv::getTickCount() or std::chrono::steady_clock (monotonic clocks) for measuring, not gettimeofday() (wall clock)

berak ( 2018-11-03 02:12:53 -0600 )
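
For reference, a minimal sketch of the PNG loop timed with cv::getTickCount() instead of gettimeofday() (variable names follow the test code above; this is an illustration, not code from the thread):

int64 t0 = cv::getTickCount();
for (int test_count = 0; test_count < write_tests; test_count++)
{
    writeImageToString(frame, ".png", buffer);
}
int64 t1 = cv::getTickCount();
// getTickFrequency() returns ticks per second, so this accumulates milliseconds
png_time += (t1 - t0) * 1000.0 / cv::getTickFrequency();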

I found the commit which changed the timing for me: https://github.com/opencv/opencv/comm...

It looks like I just need to set the encode strategy explicitly after this commit, since the strategy is reset to 'default' when the compression level is set. Thanks for the advice about CPU vs wall clock.

phamtp ( 2018-11-06 18:35:53 -0600 )

1 answer

answered 2018-11-06 18:36:06 -0600 by phamtp, updated 2018-11-06 19:16:03 -0600

I traced the change in encode timings down to this commit: https://github.com/opencv/opencv/comm...

The fix for me is to set the compression strategy explicitly to IMWRITE_PNG_STRATEGY_RLE, which was the hard-coded default before this change.
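
For illustration, a minimal sketch of how the parameter block in writeImageToString() could pass the strategy explicitly (names match the test code above; per the linked commit, newer versions fall back to the zlib default strategy when IMWRITE_PNG_COMPRESSION is set unless the strategy is also given):

params.push_back(cv::IMWRITE_PNG_COMPRESSION);
params.push_back(6);                              // compression level 0..9
params.push_back(cv::IMWRITE_PNG_STRATEGY);
params.push_back(cv::IMWRITE_PNG_STRATEGY_RLE);   // restore the pre-3.2 default strategy
success = cv::imencode(extension, image, buffer, params);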

