
f4f's profile - activity

2020-03-30 04:06:01 -0600 received badge  Notable Question (source)
2019-10-24 00:21:35 -0600 received badge  Popular Question (source)
2018-12-12 03:27:48 -0600 received badge  Self-Learner (source)
2018-12-10 00:58:30 -0600 marked best answer cv::dnn::Net::forward() returns wrong output for intermediate output layer of network

I am trying to run an Intel Inference Engine sample under OpenCV.

Link to example info: https://software.intel.com/en-us/arti...

Security Barrier Camera Demo

I am using OpenCV 3.4.3 compiled with the Inference Engine on Windows 10.

In particular, I have problems with the following model: /intel_models/vehicle-attributes-recognition-barrier-0039. This pretrained model (.xml + .bin files) runs successfully in the Intel Inference Engine demo app. The net has two softmax output layers ("color" and "type"; "type" is the final network layer, so its result is what net.forward() returns).

Piece of code:

try
{
    cv::dnn::Net net = cv::dnn::readNet(model, config);
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_DEFAULT);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);


    if (net.empty())
    {
        std::cerr << "Failed! Net is empty!\n";
        return EXIT_FAILURE;
    }
    else
    {
        std::cout << "Success!\n";

        cv::Mat car = cv::imread(file);

        if (car.empty())
        {
            std::cerr << "Failed to load " << file << '\n';
            return EXIT_FAILURE;
        }
        cv::Mat conv_car;
        car.convertTo(conv_car, CV_32FC3);

        int w = 72, h = 72;
        std::cout
            << "car type: " << car.type() << '\n'
            << "conv car type: " << conv_car.type() << '\n';

        while (true)
        {

            auto input = cv::dnn::blobFromImage(
                conv_car,
                1.,
                cv::Size(w, h),
                cv::Scalar(0, 0, 0),
                false,
                false,
                CV_32F
            );
            net.setInput(input);
            cv::Mat output = net.forward();
            output = output.reshape(1, 1);
            std::cout
                << "output.size(): " << output.size() << '\n'
                << "output.elemSize(): " << output.elemSize() << '\n'
                << "output.data():\n";
            for (int i = 0; i < output.cols; ++i)
                std::cout << output.at<float>(0, i) << ' ';

            cv::Point classIdPoint;
            double confidence;
            minMaxLoc(output, 0, &confidence, 0, &classIdPoint);
            int classId = classIdPoint.x;
            std::cout << "\nclass id: " << classId << " confidence: " << confidence << '\n';

            cv::imshow("img", car);
            char k = cv::waitKey(0) % 256;
            if (k == 27)
                break;
        }
    }
}
catch (cv::Exception& e)
{
    std::cout << "Failed!\n";
    std::cout << e.what() << '\n';
}

When I call

net.forward()

or

net.forward("type")

I get reasonable output which matches that of the native Intel sample:

output.size(): [4 x 1]
output.elemSize(): 4
output.data():
0.999998 2.33588e-09 6.7388e-09 2.31872e-06

But when I call

net.forward("color")

I get strange output that differs on every run, although the output size looks correct:

output.size(): [7 x 1]
output.elemSize(): 4
output.data():
2.57396e-12 2.61858e+20 7.58788e+31 1.94295e-19 7.71303e+31 5.08303e+31 1.80373e+28
class id: 4 confidence: 7.71303e+31

UPDATE (see berak's comments):

It's possible to get correct output from both layers using the following syntax:

std::vector<cv::Mat> outputs;
std::vector<cv::String> names {"type", "color"};
net.forward(outputs, names);

But how can I get a correct result when calling:

net.forward("color")

The only idea I have is that the output of an intermediate layer must be requested in some specific way.

Link to pretrained network: https://download.01.org/openvinotoolk...

2018-12-10 00:57:34 -0600 answered a question cv::dnn::Net::forward() returns wrong output for intermediate output layer of network

According to berak's comment: forward() for multiple outputs would be: vector<Mat> outputs; vector<String>

2018-12-10 00:55:38 -0600 edited question cv::dnn::Net::forward() returns wrong output for intermediate output layer of network

cv::dnn::Net::forward() returns wrong output for intermediate output layer of network I try to launch intel inference en

2018-12-10 00:54:12 -0600 commented question cv::dnn::Net::forward() returns wrong output for intermediate output layer of network

@dkurt, yes its a build number for IE shipped with Intel OpenVINO toolkit. I will use 13911 for OpenCV 3.4.3 as you sugg

2018-12-07 06:20:20 -0600 commented question cv::dnn::Net::forward() returns wrong output for intermediate output layer of network

@dkurt, I have tested my app after rebuild opencv 3.4.3 and intel IE 1.0.17328. Also I have rechecked consistency off a

2018-12-06 23:39:25 -0600 commented question cv::dnn::Net::forward() returns wrong output for intermediate output layer of network

@dkurt , It was a typo in my source code which cause a call to uninitialized network, thanks a lot. I will update the t

2018-12-06 07:41:57 -0600 edited question cv::dnn::Net::forward() returns wrong output for intermediate output layer of network

cv::dnn::Net::forward() returns wrong output for intermediate output layer of network I try to launch intel inference en

2018-12-06 07:38:43 -0600 commented question cv::dnn::Net::forward() returns wrong output for intermediate output layer of network

@dkurt, Should have mentioned Windows OS, its my fault. I also get (in different working pipeline) following "assertion

2018-12-06 07:25:22 -0600 commented question cv::dnn::Net::forward() returns wrong output for intermediate output layer of network

Opencv is build with IE 1.0.13911 However I took MKLDNNPlugin.dll from fresh IE 1.0.17328 . Maybe it causes the problem.

2018-12-06 07:05:21 -0600 edited question cv::dnn::Net::forward() returns wrong output for intermediate output layer of network

cv::dnn::Net::forward() returns wrong output for intermediate output layer of network I try to launch intel inference en

2018-12-06 07:05:03 -0600 edited question cv::dnn::Net::forward() returns wrong output for intermediate output layer of network

cv::dnn::Net::forward() returns wrong output for intermediate output layer of network I try to launch intel inference en

2018-12-06 07:04:44 -0600 edited question cv::dnn::Net::forward() returns wrong output for intermediate output layer of network

cv::dnn::Net::forward() returns wrong output for intermediate output layer of network I try to launch intel inference en

2018-12-06 07:04:44 -0600 received badge  Editor (source)
2018-12-06 06:59:56 -0600 commented question cv::dnn::Net::forward() returns wrong output for intermediate output layer of network

You are right. This is a proper way to use net.forward() to get multiple outputs. But I still have a question why attemp

2018-12-06 06:44:53 -0600 commented question cv::dnn::Net::forward() returns wrong output for intermediate output layer of network

I am not sure that I have used net.forward(std::vector< std::vector< Mat > > & outputBlobs, const std::

2018-12-06 06:42:07 -0600 received badge  Student (source)
2018-12-06 06:41:07 -0600 commented question cv::dnn::Net::forward() returns wrong output for intermediate output layer of network

call to output.size returns 1x7 and 1x4 respectively so it's similar to output.size()

2018-12-06 06:21:47 -0600 received badge  Organizer (source)
2018-12-06 06:21:11 -0600 asked a question cv::dnn::Net::forward() returns wrong output for intermediate output layer of network

cv::dnn::Net::forward() returns wrong output for intermediate output layer of network I try to launch intel inference en

2017-04-04 01:55:04 -0600 commented answer Why does my own implementation of color conversion differ from cv::cvtColor() ?

I have added cv::saturate_cast<uchar> as you pointed out and it fixed the problem. The loop body is now the following:

uchar* p_src;
uchar* p_dst;

for (int i = 0; i < height; i++) {
    p_src = src.ptr<uchar>(i);
    p_dst = dst.ptr<uchar>(i);
    for (int j = 0; j < width * channels; j += 3) {
        uchar gray = cv::saturate_cast<uchar>(0.114f * p_src[j] + 0.587f * p_src[j + 1] + 0.299f * p_src[j + 2]);

        p_dst[j / 3] = gray;
    }
}

Now it's time to cast a glance at the source code of saturate_cast. Thanks a lot.

2017-04-04 01:55:04 -0600 commented answer Why does my own implementation of color conversion differ from cv::cvtColor() ?

src.ptr<uchar>(i) is the fastest way to iterate through a cv::Mat, and it is also the approach suggested in the tutorials.

2017-04-04 01:48:33 -0600 received badge  Supporter (source)
2017-04-04 01:48:31 -0600 received badge  Scholar (source)
2017-04-03 07:09:47 -0600 asked a question Why does my own implementation of color conversion differ from cv::cvtColor() ?

Hi. I have implemented a little function to convert CV_8UC3 (BGR) to CV_8UC1 grayscale using the formula provided in the cv::cvtColor() docs: GRAY = 0.114 * B + 0.587 * G + 0.299 * R

Though the output of my function is quite similar to that of cv::cvtColor(BGR2GRAY), checking each pixel's value in a loop reveals lots of bad pixels. What is wrong?

My function is below:

void bgr2gray(cv::Mat& src, cv::Mat& dst)
{
    CV_Assert(!src.empty());

    int width = src.cols;
    int height = src.rows;
    int channels = src.channels();

    CV_Assert(src.channels() == 3 && dst.channels() == 1);
    CV_Assert(width == dst.cols && height == dst.rows);
    CV_Assert(src.isContinuous() && dst.isContinuous());

    // Both matrices are continuous, so treat them as a single row.
    width *= height;
    height = 1;

    uchar* p_src;
    uchar* p_dst;

    for (int i = 0; i < height; i++) {
        p_src = src.ptr<uchar>(i);
        p_dst = dst.ptr<uchar>(i);
        for (int j = 0; j < width * channels; j += 3) {
            float d_gray(0.114f * p_src[j] + 0.587f * p_src[j + 1] + 0.299f * p_src[j + 2]);
            uchar gray = (uchar)d_gray;

            if (gray - d_gray >= 0.5f)
                gray += 1;

            p_dst[j / 3] = gray;
        }
    }
}