OpenCV forward() inference size is different between Python and C++

I have trained a classification model and loaded it with OpenCV's readNetFromTensorflow.

In Python, net.forward() gives me an output of size 1 x nbrOfClasses, which is correct.

In C++, using the same parameters, it gives me an output of size nbrOfClasses x 64. I don't understand what this output is (where does the 64 come from?) or why it is not the same as in Python.

PS: I am using OpenCV 3.4.2 in both C++ and Python.

Any ideas what's going on?

My model.pbtxt is here and the model.pb is here.

My code in C++:

    // (assuming #include <opencv2/opencv.hpp> and using namespace cv, cv::dnn, std)
    // Load the frozen TensorFlow graph and its text graph description
    String model = "model.pb";
    String config = "model.pbtxt";
    dnn::Net net = cv::dnn::readNetFromTensorflow(model, config);

    // Read the test image as grayscale, build the input blob and run inference
    string imgFullPath = "anyImage.jpg";
    Mat img = cv::imread(imgFullPath, IMREAD_GRAYSCALE);
    Mat blob = blobFromImages(img, 1.0, Size(64, 64), Scalar(0, 0, 0), true, true);
    net.setInput(blob);
    Mat forward = net.forward();
    cout << "forward size : " << forward.size() << endl;
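
As far as I understand, printing forward.size() only shows a 2-D [cols x rows] pair, so it may not show the real layout of the output blob. A minimal sketch (reusing the same forward Mat as above) to print every dimension:

    // Sketch: print the full shape of the output Mat, because
    // operator<< on cv::Size only reports [cols x rows].
    cout << "dims : " << forward.dims << ", shape : ";
    for (int i = 0; i < forward.dims; ++i)
        cout << forward.size[i] << (i + 1 < forward.dims ? " x " : "\n");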

My code in Python:

    import cv2 as cv

    # Load the same frozen TensorFlow graph and text graph description
    modelWeights = "model.pb"
    textGraph = "model.pbtxt"
    net = cv.dnn.readNetFromTensorflow(modelWeights, textGraph)
    pathImg = "anyImage.jpg"
    frame = cv.imread(pathImg, cv.IMREAD_GRAYSCALE)

    # Build the input blob and run inference
    blob = cv.dnn.blobFromImage(frame, 1.0, (64, 64), [0, 0, 0], True, True)
    net.setInput(blob)
    outputBlob = net.forward()
    print(outputBlob.shape)
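
On the Python side, one extra check I can do (a minimal sketch, reusing the frame loaded above; the names single and batch are only illustrative) is to compare what blobFromImage and blobFromImages produce, since my C++ code calls the plural variant with a single Mat:

    # Sketch: compare the input blob shapes from the two helpers.
    # blobFromImage takes one image, blobFromImages takes a list of images.
    single = cv.dnn.blobFromImage(frame, 1.0, (64, 64), [0, 0, 0], True, True)
    batch = cv.dnn.blobFromImages([frame], 1.0, (64, 64), [0, 0, 0], True, True)
    print(single.shape)  # 4-D NCHW blob with N == 1
    print(batch.shape)   # also N == 1 here, because the list holds one image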