
Convert a TensorFlow model to code and load it from memory

Hi, I'm developing a deep learning app that uses a TensorFlow model stored in a .pb file. I load it with the following code and it works fine:

cv::dnn::Net myNet = cv::dnn::readNetFromTensorflow(modelPath);


However, I need to protect the model, so I'd like to (somehow) convert it to a memory buffer before compiling (C++) and load it from there, so that the model file does not have to ship alongside the binary.

This is the function I'm currently using:

Net cv::dnn::readNetFromTensorflow (const String &model, const String &config=String())


And these are the two functions that I think I may need, but can't make work:

Net cv::dnn::readNetFromTensorflow (const std::vector< uchar > &bufferModel, const std::vector< uchar > &bufferConfig=std::vector< uchar >())

Net cv::dnn::readNetFromTensorflow (const char *bufferModel, size_t lenModel, const char *bufferConfig=NULL, size_t lenConfig=0)
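To use the buffer overloads, the .pb file first has to be read into memory at runtime. Below is a minimal sketch of a file-to-buffer helper using only the standard library; the file name `frozen_graph.pb` and the helper name `readFileToBuffer` are placeholders, and the OpenCV call at the bottom is commented out so the snippet stands on its own:

```cpp
#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>
// #include <opencv2/dnn.hpp>  // needed for the actual readNetFromTensorflow call

// Read an entire binary file into a byte buffer suitable for the
// std::vector<uchar> overload of readNetFromTensorflow.
std::vector<unsigned char> readFileToBuffer(const std::string& path) {
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (!file)
        throw std::runtime_error("cannot open " + path);
    std::streamsize size = file.tellg();
    file.seekg(0, std::ios::beg);
    std::vector<unsigned char> buffer(static_cast<size_t>(size));
    if (size > 0 && !file.read(reinterpret_cast<char*>(buffer.data()), size))
        throw std::runtime_error("cannot read " + path);
    return buffer;
}

// Usage (with OpenCV linked):
//   std::vector<unsigned char> model = readFileToBuffer("frozen_graph.pb");
//   cv::dnn::Net net = cv::dnn::readNetFromTensorflow(model);
```

Note this only hides the loading step, not the model itself; the file still exists on disk unless it is embedded in the executable some other way.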


I tried converting the .pb file to a header file with xxd, but the compiler runs out of memory building the resulting source. I searched the docs but didn't find an example of how to load the file into memory and consume it. Is it possible to do?
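One way to avoid the huge xxd-generated header is to link the raw .pb file straight into the executable with GNU ld's binary input mode, which skips compiling any generated source at all. This is a sketch assuming GNU binutils on Linux; the symbol names are derived automatically from the input file name (here `model.pb`):

```shell
# Turn the raw model file into a relocatable object file.
# model.pb -> symbols _binary_model_pb_start / _binary_model_pb_end.
ld -r -b binary -o model_pb.o model.pb

# Inspect the generated symbols:
nm model_pb.o

# On the C++ side (after linking model_pb.o into the app), roughly:
#   extern "C" const char _binary_model_pb_start[];
#   extern "C" const char _binary_model_pb_end[];
#   size_t len = _binary_model_pb_end - _binary_model_pb_start;
#   cv::dnn::Net net =
#       cv::dnn::readNetFromTensorflow(_binary_model_pb_start, len);
```

This matches the pointer/length overload above. Bear in mind the bytes are still recoverable from the binary by anyone who looks, so this is obfuscation rather than real protection.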

Thank you very much.