
Install Darknet deep learning framework on a Jetson Nano.
Last updated: Dec 5, 2022
Introduction.
This page will guide you through the installation of the famous Darknet framework on a Jetson Nano. Darknet is one of the oldest deep learning frameworks, developed by Joseph Redmon for his YOLO network. The project started in 2013, and the first GitHub publication dates back to 2015. In 2018, Joseph stopped working on the project. However, Alexey Bochkovskiy continues to work on new ideas for YOLO.
So we use Alexey's repo to install Darknet on the Jetson Nano. The code is well maintained and supports CUDA and cuDNN, while Redmon's code lacks new features.
Lately, YOLOv7 has become very popular. However, the code is only available as a Jupyter Notebook. If you want to use YOLOv7, use the ncnn framework.
One final note. Darknet is a lightweight framework that only recognizes objects in an image through a YOLO network. In contrast, frameworks such as TensorFlow or PyTorch offer a wide range of solutions, such as pose estimation, segmentation, GANs or NLP.
Dependencies.
Darknet has almost no dependencies. It requires only OpenCV. Install OpenCV first if it is not already installed. The installation guide is here and takes about two hours.
# a fresh start
$ sudo apt-get update
$ sudo apt-get upgrade
# install dependencies
$ sudo apt-get install cmake wget
# download Darknet
$ git clone --depth=1 https://github.com/AlexeyAB/darknet
Before you can build the framework, you need to make some modifications to the Makefile.

# Set these variables to 1:
GPU=1
CUDNN=1
CUDNN_HALF=1
OPENCV=1
OPENMP=1
LIBSO=1
# Comment the ARCH block and uncomment the following line
# For Jetson TX1, Tegra X1, DRIVE CX, DRIVE PX - uncomment:
ARCH= -gencode arch=compute_53,code=[sm_53,compute_53]
# Replace NVCC path
NVCC=/usr/local/cuda/bin/nvcc

Once the Makefile is saved, you can compile the code.
$ cd ~/darknet
# compile
$ make
If everything goes well, you end up with a screen like this one.

You can test Darknet with the following command when a webcam is connected to your Jetson Nano.
# darknet location
$ cd ~/darknet
# download yolov4 tiny weight file
$ wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.weights --no-check-certificate
# run with webcam
$ ./darknet detector demo cfg/coco.data cfg/yolov4-tiny.cfg yolov4-tiny.weights "v4l2src ! video/x-raw, framerate=30/1, width=640, height=360 ! videoconvert ! appsink" -ext_output
Before you can use the Darknet library in a C++ application, you have to modify the yolo_v2_class.hpp in the folder ~/darknet/include.
At the top, define OPENCV and GPU, as shown below. By default, the Jetson Nano ships with OpenCV without CUDA acceleration. That's why we have placed a guide on our site on how to install OpenCV with CUDA. If you have OpenCV with CUDA, you can also define TRACK_OPTFLOW. If you use the default configuration, without CUDA, don't define this parameter.
#ifndef YOLO_V2_CLASS_HPP
#define YOLO_V2_CLASS_HPP
#define OPENCV //must be added
#define GPU //must be added
#define TRACK_OPTFLOW //only when you have OpenCV with CUDA
#ifndef LIB_API
#ifdef LIB_EXPORTS
..........
Large parts of the libdarknet.so library are not accessible if these flags are not set, leaving you with errors like no matching function for call to Detector::detect.
The odd convention of placing a #define OPENCV or #define GPU statement right in the middle of a series of header includes in a cpp file took me four hours to figure out. It is better to hardcode them in your yolo_v2_class.hpp header beforehand.
The last step is copying the files to the /usr/local/ folders.
# darknet location
$ cd ~/darknet
# modify the yolo_v2_class.hpp before copying
$ nano ./include/yolo_v2_class.hpp
# copy
$ sudo cp ./libdarknet.so /usr/local/lib/
$ sudo cp ./include/yolo_v2_class.hpp /usr/local/include/
GitHub C++ example
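The GitHub repository above holds the full C++ example. To give you an idea, below is a minimal sketch of how the Detector class from yolo_v2_class.hpp can be used once the header and library are copied as shown. The image path, threshold and build command are our assumptions; adapt them to your setup.
// main.cpp - minimal sketch of the yolo_v2_class.hpp Detector API
// build (assumption): g++ main.cpp -o demo `pkg-config --cflags --libs opencv4` -ldarknet
#include <opencv2/opencv.hpp>
#include <yolo_v2_class.hpp>
#include <iostream>
int main()
{
    // load the network with the same cfg and weights used in the webcam test
    Detector detector("cfg/yolov4-tiny.cfg", "yolov4-tiny.weights");
    // read a test image (data/dog.jpg ships with the darknet repo)
    cv::Mat frame = cv::imread("data/dog.jpg");
    if (frame.empty()) { std::cerr << "could not read image" << std::endl; return 1; }
    // run detection; each bbox_t holds the box, the class id and the confidence
    std::vector<bbox_t> boxes = detector.detect(frame, 0.25f);
    for (const bbox_t &b : boxes) {
        std::cout << "class " << b.obj_id << "  prob " << b.prob
                  << "  box " << b.x << "," << b.y << " " << b.w << "x" << b.h << std::endl;
        cv::rectangle(frame, cv::Rect(b.x, b.y, b.w, b.h), cv::Scalar(0, 255, 0), 2);
    }
    cv::imwrite("result.jpg", frame);
    return 0;
}
Because OPENCV and GPU are hardcoded in the header, the application itself does not need to define them again.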