
Cannot run YOLOv3 on GPU with CUDA 10.2 in a Docker container

System information (version)

OpenCV => 4.3.0
Operating System / Platform => Ubuntu 18.04
Docker version => 19.03.8
nvidia-docker => works
python => 2.7
GPU => GeForce 1080ti
NVIDIA driver => Driver Version: 440.33.01
CUDA version host => 10.2

Detailed description

I am trying to run a detector inside a Docker container. I base my image on nvidia/cudagl:10.2-devel-ubuntu18.04. After that, I install some ROS things on it (not relevant here). Finally, I build OpenCV from source (version 4.3.0) with the extra modules. I pass all the correct (I think) parameters to cmake to be able to run a detector on the CUDA backend.

Steps to reproduce

Dockerfile:

FROM smartuav_px4:latest   # this image is itself built on top of nvidia/cudagl:10.2-devel-ubuntu18.04

# copy-paste from another image, some unneeded packages. TODO: clean up
USER root
WORKDIR /
RUN apt-get -qq -y update && apt-get -qq -y install \
build-essential \
git \
cmake \
python3 \
python3-pip \
python3-numpy \
libtbb2 \
libtbb-dev \
libcudnn7-dev \
libeigen3-dev \
libgtk2.0-dev \
pkg-config \
libavcodec-dev \
libavformat-dev \
libswscale-dev \
libavresample-dev \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libdc1394-22-dev \
libv4l-dev \
ffmpeg \
libgstreamer1.0-dev \
libgstreamer-plugins-base1.0-dev \
wget \
&& apt-get clean
RUN rm -rf /var/lib/apt/lists/*
ENV LD_LIBRARY_PATH="/usr/local/cuda/compat:${LD_LIBRARY_PATH}"

# Install OpenCV with CUDA
WORKDIR /opt
RUN wget -q -O opencv.tar.gz https://github.com/opencv/opencv/archive/4.3.0.tar.gz
RUN tar xzvf opencv.tar.gz && rm opencv.tar.gz
RUN wget -q -O opencv_contrib.tar.gz https://github.com/opencv/opencv_contrib/archive/4.3.0.tar.gz
RUN tar xzvf opencv_contrib.tar.gz && rm opencv_contrib.tar.gz
WORKDIR /opt/opencv-4.3.0/build
RUN cmake \
    -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D WITH_CUDA=ON \
    -D WITH_CUBLAS=ON \
    -D WITH_CUDNN=ON \
    -D OPENCV_DNN_CUDA=ON \
    -D INSTALL_C_EXAMPLES=OFF \
    -D ENABLE_FAST_MATH=1 \
    -D CUDA_FAST_MATH=1 \
    -D WITH_FFMPEG=ON \
    -D WITH_GSTREAMER=ON \
    -D ENABLE_PRECOMPILED_HEADERS=OFF \
    -D CUDA_ARCH_BIN="5.0 5.2 6.0 6.1 7.0 7.5" \
    -D INSTALL_PYTHON_EXAMPLES=OFF \
    -D OPENCV_EXTRA_MODULES_PATH=/opt/opencv_contrib-4.3.0/modules \
    -D BUILD_EXAMPLES=OFF ..

RUN make -j$(nproc)
RUN make install
RUN rm -rf /opt/opencv_contrib-4.3.0 && rm -rf /opt/opencv-4.3.0

WORKDIR /
# Build darknet
RUN set -x; \
    git clone --recursive https://github.com/pjreddie/darknet.git

#copy the needed files, setting GPU and OPENCV in Makefile
COPY ./yoloFiles/yolov4.cfg /darknet/cfg/
COPY ./yoloFiles/Makefile /darknet/
COPY ./yoloFiles/yolov3-tiny.weights /darknet
COPY ./yoloFiles/yolov3.weights /darknet
COPY ./yoloFiles/yolov4.weights /darknet

RUN cd darknet && make

# don't do the wget, the weight files are kept on the host for now
#WORKDIR /darknet
# download weights full (accurate most) and tiny (faster , less accurate) models
# darknet rnns
# RUN \ 
#     wget https://pjreddie.com/media/files/yolov3.weights; \
#     wget https://pjreddie.com/media/files/yolov3-tiny.weights; \
#     wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights

# RUN make

# test Nvidia docker
CMD nvidia-smi -q

# Change terminal prompt
USER user
RUN echo 'export PS1="🐳 \[\033[01;32m\]\u@$CONTAINER_NAME\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ "' >> ~/.bashrc

# Set initial commands
USER root
COPY ./scripts/init_commands.sh /scripts/init_commands.sh
RUN ["chmod", "+x", "/scripts/init_commands.sh"]

# Complete building process of docker container
USER user
WORKDIR /home/user/catkin_ws/src/smartuav/
STOPSIGNAL SIGTERM
ENTRYPOINT ["/scripts/init_commands.sh"]
CMD /bin/bash

Inside the Makefile that I copy into the darknet directory, I set OPENCV = 1 and GPU = 1.

To build the image I use:

DOCKER_BUILDKIT=1 docker build -t cuda_darknet_yolo:latest -f ./dockerfiles/darknet/Dockerfile .

To start a container from the image I use nvidia-docker, set the runtime to nvidia and pass in my GPUs:

TITLE='echo -ne "\033]0;objectDetection\007"'
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

nvidia-docker run --privileged -it --rm \
-e "DISPLAY=$DISPLAY" \
-e "TERM=xterm-256color" \
-e "PROMPT_COMMAND=$TITLE" \
-e "CONTAINER_NAME=objectDetection" \
-e "TOTAL_UAVS=$1" \
-v "${DIR}"/src/objectDetection:/home/user/catkin_ws/src/smartuav/ \
-v /tmp/.X11-unix:/tmp/.X11-unix \
--runtime=nvidia \
--gpus all \
--env QT_X11_NO_MITSHM=1 \
--network=host \
--name smartuav_objectDetection \
cuda_darknet_yolo:latest \
bash

source ~/.bashrc

Inside the container I can use nvidia-smi to check my GPU:

Fri May 29 08:54:27 2020       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01    Driver Version: 440.33.01    CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  On   | 00000000:65:00.0  On |                  N/A |
|  0%   31C    P0    64W / 280W |    655MiB / 11175MiB |      3%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|

+-----------------------------------------------------------------------------+

nvcc -V output (in the running container):

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Wed_Oct_23_19:24:38_PDT_2019
Cuda compilation tools, release 10.2, V10.2.89
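
A quick way to check, from the Python bindings built above, whether OpenCV was actually compiled with CUDA and cuDNN support. This is only a minimal sketch: it greps the build information and calls cv2.cuda.getCudaEnabledDeviceCount(), which is the same function named in the error further down.

# Sketch: inspect the OpenCV build flags and try to count CUDA devices
import cv2

info = cv2.getBuildInformation()
for line in info.splitlines():
    if "CUDA" in line or "cuDNN" in line:
        print(line.strip())

# same function that appears in the getCudaEnabledDeviceCount error below
print("CUDA-enabled devices seen by OpenCV: %d" % cv2.cuda.getCudaEnabledDeviceCount())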

When I try to run a darknet example (from the /darknet directory):

./darknet detect cfg/yolov3.cfg yolov3.weights data/dog.jpg

I get the error:

CUDA Error: forward compatibility was attempted on non supported HW
darknet: ./src/cuda.c:36: check_error: Assertion `0' failed.
Aborted (core dumped)

or, when I run my own detector, I get this error:

terminate called after throwing an instance of 'cv::Exception'
  what():  OpenCV(4.3.0) /opt/opencv-4.3.0/modules/core/src/cuda_info.cpp:62: error: (-217:Gpu API call) forward compatibility was attempted on non supported HW in function 'getCudaEnabledDeviceCount'

./1_start_object_detection.sh: line 6:   592 Aborted                 (core dumped) rosrun smartuav ObjectDectionController.py
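
To narrow down whether the problem is in darknet/OpenCV or in the CUDA runtime itself, a check like the one below could be run inside the container. It is just a sketch: it calls cudaGetDeviceCount() directly through ctypes, and the library name is an assumption (it may need the full path to the CUDA 10.2 libcudart instead).

# Sketch: call the CUDA runtime directly, bypassing darknet and OpenCV,
# to see whether cudaGetDeviceCount() itself reports the error.
import ctypes

# library name is an assumption; may need e.g. /usr/local/cuda/lib64/libcudart.so
cudart = ctypes.CDLL("libcudart.so")
count = ctypes.c_int(0)
status = cudart.cudaGetDeviceCount(ctypes.byref(count))
if status != 0:
    cudart.cudaGetErrorString.restype = ctypes.c_char_p
    print("CUDA runtime error %d: %s" % (status, cudart.cudaGetErrorString(status).decode()))
else:
    print("CUDA devices: %d" % count.value)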

In my own detector, I set the DNN module to the CUDA backend:

self.net = cv2.dnn.readNet(config,weights)
self.net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
self.net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
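
For reference, a stripped-down standalone version of what my detector does (a sketch without the ROS parts; the cfg/weights/image paths are placeholders for the files copied into /darknet above):

# Sketch: minimal standalone repro of the detector path, without ROS.
# Paths are placeholders for the files copied into /darknet in the Dockerfile.
import cv2

config = "/darknet/cfg/yolov3.cfg"
weights = "/darknet/yolov3.weights"

net = cv2.dnn.readNet(config, weights)
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

img = cv2.imread("/darknet/data/dog.jpg")
blob = cv2.dnn.blobFromImage(img, 1.0 / 255, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
# the forward pass is where the CUDA backend actually gets initialised
outs = net.forward(net.getUnconnectedOutLayersNames())
print("forward pass OK, %d output blobs" % len(outs))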

Is the 1080 Ti not supported, or am I missing a driver / setting a parameter wrong somewhere?