Testing a TensorFlow model with OpenCV dnn

I have asked the following question on StackOverflow: https://stackoverflow.com/questions/47072498/evaluation-of-tensorflow-model-with-opencv-fails.

I am adding more details here and asking an OpenCV-specific question.

In the code given on StackOverflow, inputData has dimensions 64 x 32 x 32 x 3 (NHWC), which is the layout required by the tf.nn.conv2d operator in TensorFlow.

I am reading the network with OpenCV dnn:

    m_Net = cv::dnn::readNetFromTensorflow(m_TrainedModelPath);
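For completeness, here is a minimal sanity check right after loading (a sketch I use to rule out an import failure; cv::dnn::Net::empty() reports whether the net has no layers):

    if (m_Net.empty()) {
        std::cerr << "Failed to load network from: " << m_TrainedModelPath << std::endl;
        exit(-1);
    }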

To run the network I initialize with

    cv::Mat netInput = cv::dnn::blobFromImages(imagesBatch);
    m_Net.setInput(netInput, "input/Identity");

where imagesBatch is, as required by the OpenCV API, a vector of cv::Mat. The cv::Mat were read with cv::imread, and the BGR-to-RGB conversion was performed.

But then the input of the network is 64 x 3 x 32 x 32 (NCHW), and the network does not work from OpenCV. My question is: does OpenCV internally swap the data, or am I required to do so?
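For reference, this is how the blob layout can be inspected; blobFromImages returns a 4D cv::Mat whose dimensions are available through size[]:

    cv::Mat netInput = cv::dnn::blobFromImages(imagesBatch);
    // a dnn blob is 4D: size[0]=N (batch), size[1]=C, size[2]=H, size[3]=W
    std::cout << netInput.size[0] << " x " << netInput.size[1] << " x "
              << netInput.size[2] << " x " << netInput.size[3] << std::endl;

This prints 64 x 3 x 32 x 32, which is where the NCHW layout above comes from.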

As requested, I add the code here:

for (auto testItem : m_TestData) {        
    if (batchCounter > 0 && batchCounter % m_BatchSize == 0) {
        //recognize and update counters
        cv::Mat netInput = cv::dnn::blobFromImages(imagesBatch);                  
        m_Net.setInput(netInput, "input/Identity");        //set the network input
        cv::Mat result = m_Net.forward("output/Mul");     //compute output

        ///compute maximum in the fully connected layer output 
        for (int i = 0; i < m_BatchSize; ++i) {
            int maxIdx = 0;
            double maxVal = result.at<float>(i, 0);  // dnn output is CV_32F, not CV_64F
            for (int j = 1; j < result.size[1]; ++j) {
                double val = result.at<float>(i, j);
                if (val > maxVal) {
                    maxVal = val;
                    maxIdx = j;
                }
            }
            printf("Groundtruth: %d, Recognized %d\n", labelsBatch[i], maxIdx);
            if (labelsBatch[i] == maxIdx)
               countCorrRecog++; 
        }

        batchCounter = 0;
        imagesBatch.clear();
        labelsBatch.clear();
        countTestedImages += m_BatchSize;
    }

    cv::Mat img = cv::imread(testItem.first.toUtf8().constData());
    if (img.empty()) {
        std::cerr << "Can't read image from the file: " << testItem.first.toUtf8().constData() << std::endl;
        exit(-1);
    }

    cv::cvtColor(img, img, cv::COLOR_BGR2RGB);
    if (m_InputImageSize != img.size())
        cv::resize(img, img, m_InputImageSize); //Resize image to input size       
    imagesBatch.push_back(img);
    labelsBatch.push_back(testItem.second);
    batchCounter++;

}
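As an aside, the per-row argmax could equivalently be written with cv::minMaxLoc (an alternative sketch, not the code I currently run):

    // argmax over each batch row via cv::minMaxLoc
    for (int i = 0; i < m_BatchSize; ++i) {
        double maxVal = 0.0;
        cv::Point maxLoc;
        cv::minMaxLoc(result.row(i), nullptr, &maxVal, nullptr, &maxLoc);
        int maxIdx = maxLoc.x;  // column index = recognized class
        if (labelsBatch[i] == maxIdx)
            countCorrRecog++;
    }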

I do not receive errors, but countCorrRecog stays very small compared to countTestedImages.
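One thing I am unsure about is the preprocessing: I call blobFromImages with its default parameters (no scaling, no mean subtraction). If the model was trained on inputs scaled to [0,1] (an assumption on my part; it must match the TensorFlow training code), the explicit call would look like this:

    // explicit preprocessing; the scale factor and mean are assumptions that
    // must match how the model was trained in TensorFlow
    cv::Mat netInput = cv::dnn::blobFromImages(
        imagesBatch,
        1.0 / 255.0,       // assumed: training used inputs scaled to [0,1]
        cv::Size(32, 32),  // network input size
        cv::Scalar(),      // assumed: no mean subtraction
        false);            // swapRB=false: BGR->RGB already done via cvtColor

Could a mismatch here explain the low recognition rate?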
