Hello, I am porting an algorithm to OpenCV (paper: https://arxiv.org/pdf/1804.06039.pdf ; the models are available here: https://github.com/Jack-CV/PCN-FaceDetection-FaceAlignment ). I finished the first stage, but while implementing the second, the outputs of the network look a bit weird:
Input Data: I: 1 C: 3 H: 24 W: 24
Scores Data: I: 1 C: 2 H: 299777088 W: 1
Regression Data: I: 1 C: 3 H: 309665792 W: 1
Rotate Data: I: 1 C: 3 H: 299780800 W: 1
The channel count of each output is correct, but the H and W values are confusing me. I don't know whether this is related to the fact that the model file is the same for all three networks, or maybe it is an overflow issue? I tested it by just forwarding a simple image through the second network ("PCN-2.prototxt") and the output is similar. Has anyone experienced something like this?
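One thing I suspect (this is an assumption on my part, not something I have confirmed): the stage-2 outputs come from fully-connected layers, so the output Mats may only have 2 dims (N x C), and reading size[2]/size[3] on a 2-D Mat would return whatever happens to be in memory, which would explain the huge H values. A quick sketch of the shapes I would expect in that case, using NumPy arrays as a stand-in for the OpenCV Mats (the channel counts are taken from my printout above; everything else is hypothetical):

```python
import numpy as np

# Stage-2 input blob: NCHW, 1 x 3 x 24 x 24, as printed above
input_blob = np.zeros((1, 3, 24, 24), dtype=np.float32)

# If the outputs are fully-connected, each blob would be 2-D (N x C):
scores = np.zeros((1, 2), dtype=np.float32)      # 2 channels, per my printout
regression = np.zeros((1, 3), dtype=np.float32)  # 3 channels
rotate = np.zeros((1, 3), dtype=np.float32)      # 3 channels

# Asking a 2-D blob for a third dimension is out of range.
# NumPy raises an IndexError here; cv::Mat::size[] does no bounds
# check, so in C++ it would silently return garbage instead.
try:
    _ = scores.shape[2]
except IndexError:
    print("2-D blob has no H/W to read")
```

If that is the cause, checking Mat::dims before indexing size[2] and size[3] should confirm it. Does that sound plausible?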
Thanks!