Compile the dnn module against the system version of opencv
I am using Ubuntu Bionic (18.04), which ships with opencv 3.2.0. Unfortunately this version doesn't contain the dnn module that I need for importing tensorflow models and running inference (supported in opencv 3.4.1). I am tied to this opencv version and can't install a newer one, nor can I install tensorflow. So my only option is to compile dnn separately.
I downloaded the source of the latest opencv and tried to find a way to compile only the dnn module while linking it against the opencv core and imgproc that are already installed (from opencv 3.2.0). But I ran into a lot of CMake magic that I haven't been able to understand so far.
Could anyone give me some hints on how to do that? And is it possible at all to compile dnn from 3.4.1 and link it to opencv 3.2.0?
please don't try to mix versions.
if you can't (or don't want to) get rid of the preinstalled 3.2.0 version, installing the recent one to a custom location (e.g. somewhere in your home folder) would be a clean way to do it. (CMAKE_INSTALL_PREFIX=XXXXX)
(also, we're at 3.4.3 already)
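A sketch of that custom-prefix build, in case it helps: the version tag, prefix path, and module list here are assumptions, not the answerer's exact commands (BUILD_LIST to restrict the built modules is available in recent 3.4.x releases).

```shell
# Fetch a recent OpenCV release and build only the modules dnn needs,
# installing into the home directory instead of /usr (no root required).
git clone --branch 3.4.3 --depth 1 https://github.com/opencv/opencv.git
cd opencv && mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX="$HOME/opencv-3.4.3" \
      -DBUILD_LIST=core,imgproc,dnn \
      -DBUILD_TESTS=OFF -DBUILD_PERF_TESTS=OFF \
      ..
make -j"$(nproc)" && make install
```

A project can then be pointed at this install (instead of the system 3.2.0) by setting OpenCV_DIR to the OpenCVConfig.cmake location under that prefix when running CMake.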
Due to strict controls on the software on the target product as well as on the deployment pipeline, this is unfortunately not feasible. The only possible way is to compile dnn against 3.2.0 and make a debian package out of it. A more recent opencv version will probably come with Ubuntu 20.04, around May 2020; I don't feel like waiting until then.
And as for my goal, it is to do forward inference in order to segment an image, without having to install tensorflow. The only two other ways are using ahead-of-time compilation (tfcompile), which didn't work for complex models, or running inference in the cloud (which I do now, but it is internet-dependent). I think it is awesome that the opencv community came up with its own model parser that can run forward inference on any model, but the Debian community's lag in packaging the newest versions is a bit unfortunate.
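For context, the forward-inference step itself is short once a recent dnn module is available. This is only a sketch: the model/config file names and the input size are placeholders, not anything from this thread.

```python
import cv2

# Load a frozen TensorFlow graph (file names are placeholders).
net = cv2.dnn.readNetFromTensorflow("frozen_graph.pb", "graph.pbtxt")

# Build a 4-D blob from an image and run a forward pass.
img = cv2.imread("input.jpg")
blob = cv2.dnn.blobFromImage(img, scalefactor=1.0, size=(300, 300),
                             mean=(0, 0, 0), swapRB=True, crop=False)
net.setInput(blob)
out = net.forward()  # e.g. per-pixel class scores for segmentation
```

Whether a given TensorFlow model imports cleanly depends on the layers it uses; the opencv wiki tracks which architectures are supported.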
no, again, waste of time. it'll never work.
get VirtualBox, and build the later version in a VM.
"without having to install tensorflow" -- did you know that you can use opencv & tensorflow from google's http://colab.research.google.com/ (even with gpu support!)?
I don't see any great advantage of colab compared to an AWS EC2 instance with a dedicated Tesla K80, am I missing something?
it's for free, that's all there is to it. (but i'm probably only trolling you now...)
The troll was already clear with the virtual machine
Please provide a source for "importing tensorflow models and running inference (supported in opencv 3.4.1)".
@dkurt https://github.com/opencv/opencv/wiki...