Install the ncnn deep learning framework on a Jetson Nano.
This page guides you through the installation of Tencent's ncnn framework on a Jetson Nano. Because ncnn targets mobile devices, such as Android phones, it has no CUDA support. However, most Android phones use the Vulkan API for low-level access to their GPUs, and ncnn can use Vulkan routines to accelerate the convolutions of a deep learning model. The Jetson Nano has Vulkan support, which ncnn will use. The C++ code examples are written in the Code::Blocks IDE for the Nano. We only cover the basics, so that in the end you can build your own application. For more information about the ncnn library, see: https://github.com/Tencent/ncnn. Note that this guide installs the C++ version; it is not suitable for Python.
The ncnn framework has almost no dependencies. It requires protobuf to load ONNX models and, of course, the Vulkan SDK. If OpenCV is not already installed, install it first; that installation takes about two hours.
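Once the dependencies below have been installed, you can quickly check that they are visible on the system. This is an optional sanity check, not part of the official procedure:

```shell
# protobuf compiler, needed to convert/load ONNX models
# (prints its version when libprotobuf-dev/protobuf-compiler are installed)
protoc --version || echo "protoc not found"

# Vulkan development headers from libvulkan-dev
ls /usr/include/vulkan 2>/dev/null || echo "Vulkan headers not found"
```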
The entire installation of ncnn on a Jetson Nano with a Linux Tegra operating system is as follows.
# a fresh start
$ sudo apt-get update
$ sudo apt-get upgrade
# install dependencies
$ sudo apt-get install cmake wget
$ sudo apt-get install libprotobuf-dev protobuf-compiler libvulkan-dev
# download ncnn
$ git clone --depth=1 https://github.com/Tencent/ncnn.git
# download glslang
$ cd ncnn
$ git submodule update --depth=1 --init
# prepare folders
$ mkdir build
$ cd build
# build 64-bit ncnn for Jetson Nano
$ cmake -DCMAKE_TOOLCHAIN_FILE=../toolchains/jetson.toolchain.cmake \
        -DNCNN_VULKAN=ON -DCMAKE_BUILD_TYPE=Release ..
$ make -j4
$ make install
# copy output to dirs
$ sudo mkdir /usr/local/lib/ncnn
$ sudo cp -r install/include/ncnn /usr/local/include/ncnn
$ sudo cp -r install/lib/*.a /usr/local/lib/ncnn/
# once you've placed the output in your /usr/local directory,
# you can delete the ncnn directory if you wish
$ cd ~
$ sudo rm -rf ncnn
If everything went well, you will get two folders: one with all the header files and one with the libraries, as shown in the screenshots.
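With the headers and static library in place under /usr/local, a program such as the sketch above can also be compiled from the command line instead of Code::Blocks. The file name main.cpp is a placeholder, and depending on how ncnn was built you may need to link additional glslang helper libraries:

```shell
# compile a single-file ncnn program against the installed static library
# (main.cpp is a placeholder for your own source file)
g++ main.cpp -o app \
    -I/usr/local/include/ncnn \
    -L/usr/local/lib/ncnn -lncnn \
    -lvulkan -fopenmp -pthread
```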
Note also the folder with the examples, which covers many different types of deep learning models. The references to the actual model files can sometimes cause errors due to version changes in the ncnn library. nihui maintains a repository with the latest models: https://github.com/nihui/ncnn-assets/tree/master/models.
We have done some benchmarking with and without Vulkan support to see how well ncnn performs. On average, you get a 57% performance boost when ncnn uses Vulkan, which is an even larger increase than MNN achieves with CUDA. Note that during the test the Jetson Nano CPU was overclocked to 2014.5 MHz and the GPU to 998.4 MHz.