cv2.dnn.readNetFromTorch(pretrained_model_path) throws error

asked 2019-10-26 01:09:58 -0600 by prasadCV

updated 2019-10-26 11:25:39 -0600 by berak

When a model is trained and saved using official PyTorch (http://pytorch.org/), and the same model is then loaded with the OpenCV dnn module like this: cv2.dnn.readNetFromTorch(pretrained_model_path), it throws the error below:

error: (-213: The function/feature is not implemented) Unsupported Lua type in function 'cv::dnn::*::TorchImporter::readObject'


Comments

please be so kind as to replace any screenshots of code & errors with a TEXT version, so we can try your code and look up the errors, thank you!

berak ( 2019-10-26 01:15:24 -0600 )

opencv version? do you think we can do anything without seeing your model / training code?

berak ( 2019-10-26 01:17:05 -0600 )

opencv-python==4.1.0.25. I trained using this open-source GitHub project: https://github.com/davidtvs/PyTorch-ENet

prasadCV ( 2019-10-26 01:30:48 -0600 )

ok, reproduced with a similar pretrained model

i guess it needs some pre/postprocessing. also note that the existing ENet from opencv's model zoo was saved from c++ torch, not from pytorch.

could you add your saving code to the question?

berak ( 2019-10-26 03:57:55 -0600 )

can it be that what you saved there is a plain pickle of the network weights, like a saved checkpoint?

(the pretrained one i tried is, and it's not usable for opencv)

(readNetFromTorch() expects a serialized .t7 file from c++/lua Torch, not a python pickle; entirely different api)
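One way to see the mismatch for yourself is to inspect the file's magic bytes. This is a minimal stdlib-only sketch, with a pickled dict standing in for a real checkpoint; the zip and pickle signatures are standard, and anything else may be a legacy format such as Torch7's .t7:

```python
# Distinguish a python-pickle "checkpoint" from other formats by magic bytes.
# A file from torch.save() is either a zip archive (newer PyTorch) or a raw
# pickle stream (older PyTorch); neither is the Torch7 .t7 format that
# cv2.dnn.readNetFromTorch() expects.
import os
import pickle
import tempfile

def describe_file(path):
    with open(path, "rb") as f:
        magic = f.read(2)
    if magic == b"PK":
        return "zip archive (newer torch.save format)"
    if magic[:1] == b"\x80":
        return "raw python pickle (older torch.save format)"
    return "something else (possibly Torch7 .t7)"

# Stand-in for a saved checkpoint: a pickled dict of "weights".
fake_ckpt = {"state_dict": {"conv1.weight": [0.1, 0.2]}}
path = os.path.join(tempfile.mkdtemp(), "checkpoint.pth")
with open(path, "wb") as f:
    pickle.dump(fake_ckpt, f)

print(describe_file(path))  # raw python pickle (older torch.save format)
```

If describe_file() reports a zip archive or a raw pickle, the file came from python-side serialization and readNetFromTorch() will reject it.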

imho, what you have to do is:

  • in pytorch, extract the state_dict (and discard the optimizer, miou and such parts)
  • put that into your model
  • model.eval()
  • then save it somehow (that's the current problem..)
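The steps above, sketched as code. The import path, checkpoint filename, checkpoint keys, and input size are all assumptions about the PyTorch-ENet repo, not verified details:

```python
# Hypothetical sketch: rebuild the model from a checkpoint, then try to
# serialize it in a form other tools can read.
import torch
from models.enet import ENet  # assumed location of the model class

checkpoint = torch.load("ENet_checkpoint.pth", map_location="cpu")

# keep only the weights; drop optimizer state, miou, epoch, etc.
model = ENet(num_classes=20)  # num_classes depends on the dataset
model.load_state_dict(checkpoint["state_dict"])
model.eval()

# one attempt at "saving it somehow": ONNX export.
# (as noted in this thread, export can fail on ops ONNX doesn't cover,
# e.g. max_unpool2d in older PyTorch versions)
dummy = torch.randn(1, 3, 360, 480)  # assumed input resolution
torch.onnx.export(model, dummy, "enet.onnx")
```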

i tried with onnx.export(), but it fails with KeyError: 'max_unpool2d'

see: https://github.com/pytorch/pytorch/is...

berak ( 2019-10-26 09:40:44 -0600 )

Thank you for your answers. Instead of using onnx or OpenCV, can I just use the model that was saved with PyTorch and pass an image to it for inference, to predict the semantic segmentation results?

prasadCV ( 2019-10-26 15:24:33 -0600 )
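In principle yes: plain PyTorch inference needs no OpenCV at all. A rough sketch, again with an assumed import path, checkpoint keys, and preprocessing; the correct resize/normalization depends on how the model was trained:

```python
# Hypothetical sketch: semantic segmentation inference in pure PyTorch.
import torch
import torchvision.transforms as T
from PIL import Image
from models.enet import ENet  # assumed location of the model class

model = ENet(num_classes=20)
ckpt = torch.load("ENet_checkpoint.pth", map_location="cpu")
model.load_state_dict(ckpt["state_dict"])
model.eval()

# preprocessing is an assumption; match whatever the training pipeline used
preprocess = T.Compose([T.Resize((360, 480)), T.ToTensor()])
x = preprocess(Image.open("street.png").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    logits = model(x)                   # shape (1, num_classes, H, W)
pred = logits.argmax(dim=1).squeeze(0)  # per-pixel class ids
```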

I get the exact same error with every model from here: https://upscale.wiki/wiki/Model_Database (all trained with ESRGAN).

And I can confirm the models work when running them through PyTorch itself.

Is the ultimate answer to my issue "PyTorch models aren't supported"? I can try conversion, but being new to this whole ecosystem, I find the distinction between Torch and PyTorch support rather confusing.

AlphaAtlas ( 2019-10-27 19:37:29 -0600 )

@AlphaAtlas which error are you getting? do you fail to import the result of torch.save(), or does onnx.export() fail?

please rather ask a new question specific to your problem.

berak ( 2019-10-27 21:49:54 -0600 )

Thanks for the response!

It's the same error from the first post: "error: (-213: The function/feature is not implemented) Unsupported Lua type in function 'cv::dnn::*::TorchImporter::readObject'"

Repro steps are exactly the same too: try to load a .pth file with cv2.dnn.readNetFromTorch() or cv2.dnn.readNet(). I haven't written a script for onnx.export() yet, and I'd rather not have to do that for every model I try. I'm away from a CUDA device atm, so I suspect the conversion will be painfully slow, and that also means inference through PyTorch isn't an option.

I'd be happy to open a second thread if necessary, but it appears I'm having the exact same issue as berak, so I figured I'd keep my issue in this thread.

AlphaAtlas ( 2019-10-28 19:18:14 -0600 )