When I tried to load an FP16 ONNX model I also got an error. What I would try is to convert the model to FP32; after that you can set the OpenCL_FP16 target (DNN_TARGET_OPENCL_FP16) for inference.
There could be another way, but as far as I know this is the only option.
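
A minimal sketch of that workflow, assuming Python with the `onnx` package and OpenCV's `cv2.dnn` module. The file names are placeholders, and the conversion step simply casts FP16 initializers and retags FP16 inputs/outputs; it will not cover models that rely on Cast nodes or FP16 tensors stored in node attributes:

    import cv2
    import numpy as np
    import onnx
    from onnx import numpy_helper, TensorProto

    # Step 1: convert the FP16 ONNX model to FP32 (simplified, see caveat above)
    model = onnx.load("model_fp16.onnx")  # placeholder input path

    # Cast every FP16 initializer (weights, biases) to FP32
    for init in model.graph.initializer:
        if init.data_type == TensorProto.FLOAT16:
            arr = numpy_helper.to_array(init).astype(np.float32)
            init.CopyFrom(numpy_helper.from_array(arr, init.name))

    # Retag graph inputs/outputs/value_info that were declared as FP16
    for vi in list(model.graph.input) + list(model.graph.output) + list(model.graph.value_info):
        tt = vi.type.tensor_type
        if tt.elem_type == TensorProto.FLOAT16:
            tt.elem_type = TensorProto.FLOAT

    onnx.save(model, "model_fp32.onnx")  # placeholder output path

    # Step 2: load the FP32 model in OpenCV and pick the OpenCL FP16 target
    net = cv2.dnn.readNetFromONNX("model_fp32.onnx")
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_OPENCL_FP16)

    blob = cv2.dnn.blobFromImage(cv2.imread("input.jpg"), 1.0 / 255, (224, 224))
    net.setInput(blob)
    out = net.forward()

With DNN_TARGET_OPENCL_FP16 you still feed OpenCV a plain FP32 network; the half-precision arithmetic is handled internally by the target.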