
Image Transformation OpenCV/C++

asked 2015-02-23 19:17:35 -0600

jefferson-sep

updated 2015-02-23 19:18:17 -0600

Hello, I'm trying to perform a transformation (perspective correction) on an image containing a chessboard, as shown below:

[image: chessboard photo]

However, I have already looked in several places, including the book Learning OpenCV, and have not found a way to do this. If anyone can help me with this, thank you!



Try the code in the link. It may work.

jamesnzt ( 2015-02-24 00:39:51 -0600 )

Thanks for the answer, but in that example the corner coordinates are entered by hand. I meant finding them automatically. It still helped, though, because all that's left is determining the corners, and I think I know how to do that. Tks!

jefferson-sep ( 2015-02-24 01:15:33 -0600 )

I think that if you find the corners of one small black square, you can apply the same transformation to the whole image; it should be the same...

thdrksdfthmn ( 2015-02-24 06:49:02 -0600 )

Yes, I'm trying it =)

jefferson-sep ( 2015-02-24 08:52:54 -0600 )

1 answer


answered 2015-02-24 19:06:24 -0600

theodore

updated 2015-02-25 03:54:21 -0600

Have a look at the code below. It surely needs some optimization, but I guess it will give you an idea of how to deal with your problem. Moreover, I used another image with a different perspective, but I do not think that changes much.

#include <iostream>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

int main()
{
    // Load image
    Mat img = imread("chessboard.png");

    // Check if image is loaded successfully
    if (img.empty())
    {
        cout << "Problem loading image!!!" << endl;
        return EXIT_FAILURE;
    }

    imshow("src", img);

    // Convert image to grayscale
    Mat gray;
    cvtColor(img, gray, COLOR_BGR2GRAY);

    // Convert image to binary
    Mat bin;
    threshold(gray, bin, 50, 255, CV_THRESH_BINARY_INV | CV_THRESH_OTSU);

    imshow("bin", bin);

    // Dilate a bit in order to fill any gap between the joints
    Mat kernel = Mat::ones(2, 2, CV_8UC1);
    dilate(bin, bin, kernel);
//    imshow("dilate", bin);

    // Find external contour
    vector<Vec4i> hierarchy;
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(bin.clone(), contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

    // Find the convex hull object of the external contour
    vector<vector<Point> > hull(contours.size());
    for (size_t i = 0; i < contours.size(); i++)
    {
        convexHull(Mat(contours[i]), hull[i], false);
    }

    // We'll put the labels in this destination image
    cv::Mat dst = Mat::zeros(bin.size(), CV_8UC3);

    // Draw the contour as a solid blob, filling also any convexity defect with the extracted hulls
    for (size_t i = 0; i < contours.size(); i++)
    {
        drawContours(dst, hull, i, Scalar(255, 255, 255), CV_FILLED/*1*/, 8, vector<Vec4i>(), 0, Point());
    }

    // Extract the new blob and the approximation curve that represents it
    Mat bw;
    cvtColor(dst, bw, CV_BGR2GRAY);
    cv::findContours(bw.clone(), contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

    // The array for storing the approximation curve
    std::vector<cv::Point> approx;
    Mat src = img.clone();

    for (size_t i = 0; i < contours.size(); i++)
    {
        // Approximate the contour with approxPolyDP, with accuracy
        // proportional to the contour perimeter. The third argument,
        // epsilon, is the maximum distance from the contour to the
        // approximated contour. It is an accuracy parameter; a wise
        // selection of epsilon is needed to get the correct output.
        double epsilon = cv::arcLength(cv::Mat(contours[i]), true) * 0.02; // epsilon = 2% of arc length
        cv::approxPolyDP(cv::Mat(contours[i]), approx, epsilon, true);
        cout << "approx: " << approx.size() << endl;

        // Visualize the result
        for (size_t j = 0; j < approx.size(); j++)
        {
            string text = to_string(static_cast<int>(j));
            circle(src, approx[j], 3, Scalar(0, 255, 0), CV_FILLED);
            circle(dst, approx[j], 3, Scalar(0, 255, 0), CV_FILLED);
            putText(src, text, approx[j], FONT_HERSHEY_COMPLEX_SMALL, 1, Scalar(0, 0, 255), 2);
            putText(dst, text, approx[j], FONT_HERSHEY_COMPLEX_SMALL, 1, Scalar(0, 0, 255), 2);
        }
    }

    imshow("hull", dst);
    imshow("points", src);

    // find a more automated way to deal with the points here and extract the perspective; this is done in a hurry
    vector<Point2f> p, q;

    p.push_back(approx[3 ...


In the result, the cells are not really squares, but I like the idea. +1

thdrksdfthmn ( 2015-02-25 02:51:29 -0600 )

That's because the dimensions were 200x200; if you change them to something more rectangular, like 300x420 (check the update), you will get something closer to the original image.

theodore ( 2015-02-25 03:56:40 -0600 )

Theodore, thank you for your cooperation! I will try to adapt the code to work with images of this type:

jefferson-sep ( 2015-02-25 15:40:42 -0600 )


Seen: 2,441 times

Last updated: Feb 25 '15