How do I load a TF Saved Model from AutoML for inference?
- OpenCV => 4.1.2
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2017
I've exported a TF SavedModel (a '.pb' file) for object detection via AutoML's edge-device export. I can load it and run inference through TensorFlow itself, but when I try to load it with OpenCV it yields the following error:
OpenCV(4.1.2) Error: Unspecified error (FAILED: ReadProtoFromBinaryFile(param_file, param). Failed to parse GraphDef file: D:\\model3.pb) in cv::dnn::ReadTFNetParamsFromBinaryFileOrDie, file C:\build\master_winpack-build-win64-vc15\opencv\modules\dnn\src\tensorflow\tf_io.cpp, line 42
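My guess is that the parse failure means the exported '.pb' is not a frozen GraphDef: the SavedModel format wraps the graph in a different protobuf message (`SavedModel`), which `readNetFromTensorflow` cannot parse directly. A quick way to confirm this (a minimal sketch, assuming TensorFlow and its generated protobuf modules are installed; the path is just my export):

```python
# Sketch: check whether the exported .pb is a frozen GraphDef or a SavedModel wrapper.
from google.protobuf.message import DecodeError
from tensorflow.core.framework import graph_pb2
from tensorflow.core.protobuf import saved_model_pb2

with open(r"D:\model3.pb", "rb") as f:
    data = f.read()

graph_def = graph_pb2.GraphDef()
try:
    graph_def.ParseFromString(data)
    print("Parsed as a frozen GraphDef -- OpenCV should be able to read it.")
except DecodeError:
    saved_model = saved_model_pb2.SavedModel()
    saved_model.ParseFromString(data)  # succeeds if the file is a SavedModel wrapper
    print("Parsed as a SavedModel -- it needs to be frozen before OpenCV can read it.")
```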
I've tried the solution presented in #11805, but it breaks with:
File ".\optimize_for_inference.py", line 83, in main
input_graph_def.ParseFromString(data)
google.protobuf.message.DecodeError: Error parsing message
The same error pops up when I try any of the scripts from the TensorFlow Object Detection API wiki.
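From what I understand, readNetFromTensorflow only accepts a frozen GraphDef, so the SavedModel presumably has to be frozen first. Something along these lines might work (a sketch, assuming TF 2.x; the directory name and the "serving_default" signature are assumptions about the AutoML export, which can be checked with `saved_model_cli show --dir <export_dir> --all`):

```python
# Sketch: freeze a SavedModel into a single GraphDef with variables baked in as constants.
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)

# Load the SavedModel directory (not the .pb file itself).
loaded = tf.saved_model.load("exported_model_dir")
concrete_func = loaded.signatures["serving_default"]

# Inline the variables so the graph becomes a plain GraphDef.
frozen_func = convert_variables_to_constants_v2(concrete_func)

# Write the GraphDef that cv::dnn::readNetFromTensorflow expects.
tf.io.write_graph(frozen_func.graph.as_graph_def(),
                  logdir=".",
                  name="frozen_graph.pb",
                  as_text=False)
```

From what I've read, SSD-style detection graphs usually also need a matching text graph (.pbtxt), e.g. generated with OpenCV's samples/dnn/tf_text_graph_ssd.py, passed as the second argument to readNetFromTensorflow.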
Minimum code to reproduce the error
`#include <opencv2/dnn.hpp>`
`#include <opencv2/imgproc.hpp>`
`#include <opencv2/opencv.hpp>`
`cv::dnn::Net net = cv::dnn::readNetFromTensorflow("saved_model.pb");`
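For reference, the equivalent sanity check from Python against a frozen graph would look roughly like this (the file name refers to the hypothetical freezing step above; the 320x320 input size is a guess and must match the exported model):

```python
# Sketch: load the frozen graph with OpenCV's DNN module and run one forward pass.
import cv2

net = cv2.dnn.readNetFromTensorflow("frozen_graph.pb")

image = cv2.imread("test.jpg")
# Input size is an assumption; it has to match the model's expected input.
blob = cv2.dnn.blobFromImage(image, size=(320, 320), swapRB=True, crop=False)
net.setInput(blob)
out = net.forward()
print(out.shape)
```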
I cannot download your model: something is wrong.
Is AutoML compatible with OpenCV? @dkurt
@LBerger - I've changed its properties. Are you still unable to download it?