ncnn - Q-engineering
Install ncnn software on Jetson Nano

Install ncnn deep learning framework on a Jetson Nano.

Last updated: February 13, 2024


This page will guide you through the installation of Tencent's ncnn framework on a Jetson Nano. Because the ncnn framework targets mobile devices, such as Android phones, it has no CUDA support. However, most Android phones use the Vulkan API for low-level access to their GPUs. The ncnn framework can use Vulkan routines to accelerate the convolutions of a deep learning model. The Jetson Nano has Vulkan support, which ncnn will use. More information about the software structure and the ncnn library can be found here and here. The given C++ code examples are written in the Code::Blocks IDE for the Nano. We only guide you through the basics, so in the end you can build your own application. Perhaps unnecessarily: this installation is the C++ version. It is not suitable for Python.


RTTI stands for Run-Time Type Identification. It is a C++ mechanism used at runtime to determine the type and memory size of an object that is not known at compile time. Normally, a programmer knows the type of a variable and can allocate the memory that holds the object in advance. Obtaining memory from an operating system, with all its processes and threads, can be a relatively time-consuming operation. Modern C++ compilers know how much memory is required, so one call to memory management is sufficient. It is one of the main advantages over Python, which is oblivious to memory requirements until it hits the line of code with a variable.

It is best not to use RTTI if you want to write the fastest possible code. This is also the case in the ncnn framework.
By default, it is compiled with the -fno-rtti flag, which prevents the use of RTTI. Compiled with this flag, custom-defined layers, like those found in YOLOv5, are only usable if the rest of the program also avoids RTTI.
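You can see the effect of the -fno-rtti flag with a small stand-alone test. The file name rtti_demo.cpp is just an example; the program relies on dynamic_cast, which needs RTTI, so it compiles normally but is rejected when the flag is present:

```shell
# write a tiny program that depends on RTTI (file name is only an example)
cat > rtti_demo.cpp << 'EOF'
#include <iostream>
struct Base { virtual ~Base() {} };
struct Derived : Base {};
int main() {
    Base* b = new Derived();
    // dynamic_cast on a polymorphic pointer needs RTTI to find the real type
    if (dynamic_cast<Derived*>(b)) std::cout << "RTTI works" << std::endl;
    delete b;
    return 0;
}
EOF
# compiles and runs fine with RTTI enabled (the default)
g++ -o rtti_demo rtti_demo.cpp && ./rtti_demo
# the same source is rejected when RTTI is disabled, as in ncnn's default build
g++ -fno-rtti -o rtti_demo rtti_demo.cpp || echo "compile fails with -fno-rtti"
```

This is exactly the situation a custom ncnn layer ends up in when the surrounding program still uses RTTI.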

Sometimes it is not possible to avoid RTTI, especially in mature code that needs a new feature without rewriting everything. OpenCV uses the RTTI mechanism in some places.
There lies a problem. When you compile ncnn together with OpenCV, the compiler returns an error on the -fno-rtti flag. Removing the flag sometimes works with the ncnn code, depending on the type of DNN used.

At this point, the purpose of the build flag -D NCNN_DISABLE_RTTI=OFF becomes clear. It tells the compiler that ncnn will allow RTTI. This means you can run ncnn with its full functionality without getting into problems with OpenCV, or, for that matter, any other piece of software that uses RTTI.
Performance-wise, you will not notice any difference on your Jetson Nano.
CMake 3.21.4.
The ncnn framework uses Glslang to compile its GLSL shaders for Vulkan. The latest version of this compiler requires CMake version 3.18.4 or higher. Since the Jetson Nano ships with version 3.16, we need to upgrade CMake. The only way to do this is building from scratch; there are no aarch64 repositories for CMake that would make installation easy. The whole build will take a while, so be patient.
# a fresh start
$ sudo apt-get update
$ sudo apt-get upgrade
# install dependencies
$ sudo apt-get install libssl-dev build-essential libtool autoconf unzip wget
# download CMake 3.21.4
$ mkdir ~/temp
$ cd ~/temp
$ wget https://github.com/Kitware/CMake/releases/download/v3.21.4/cmake-3.21.4.tar.gz
$ tar -xzvf cmake-3.21.4.tar.gz
$ cd cmake-3.21.4/
# build cmake
$ ./bootstrap
$ make -j4
$ sudo make install
# reboot to complete
$ sudo reboot
# check the version
$ cmake --version
# you can delete the cmake directory if you wish
$ cd ~
$ sudo rm -rf ~/temp


CMake will install several new FindXXX.cmake files. Unfortunately, they target a PC, not the Jetson Nano.
For instance, you get the following message when you try to compile the ncnn software.
You have to modify several .cmake files to solve the issue.
Start with FindCUDA.cmake. You have to explicitly give the CUDA locations on the Jetson Nano, as well as the correct version number.
# open the cmake file
$ sudo nano -c /usr/local/share/cmake-3.21/Modules/FindCUDA.cmake
# change the CUDA_INCLUDE_DIRS at line 964
set(CUDA_INCLUDE_DIRS "/usr/local/cuda-10.2/include")
set(CUDA_TOOLKIT_ROOT_DIRS "/usr/local/cuda-10.2")
set(CUDA_NVCC_EXECUTABLE "/usr/local/cuda/bin/nvcc")
# change the PATHS at line 988
PATH "/usr/local/cuda-10.2"
# check CUDA_CUDART_LIBRARY_VAR at line 1270;
# it should not be inside brackets
# save and exit with <Ctrl>+<X>, <Y>, <Enter>

The last file to modify is the OpenCV config file. This cmake file compares the current CUDA version with the one used when OpenCV was built.
Obviously, they are the same. However, upgrading CMake leaves CUDA_VERSION_STRING blank, hence the following error.
A simple solution is inserting the missing string in OpenCVConfig.cmake.
# open the cmake file
$ sudo nano -c /usr/lib/aarch64-linux-gnu/cmake/opencv4/OpenCVConfig.cmake
# add the missing version at line 99, e.g. set(CUDA_VERSION_STRING "10.2")
# save and exit with <Ctrl>+<X>, <Y>, <Enter>


With the new CMake 3.21.4 installed, the next step is building the actual ncnn framework.
It requires protobuf for its model converters, such as the ONNX converter, and, of course, the Vulkan SDK.
The installation of ncnn on a Jetson Nano with the Linux Tegra operating system is as follows.
# install dependencies
$ sudo apt-get install libprotobuf-dev protobuf-compiler libvulkan-dev
# download ncnn
$ git clone --depth=1 https://github.com/Tencent/ncnn.git
# download glslang
$ cd ncnn
$ git submodule update --depth=1 --init
# prepare folders
$ mkdir build
$ cd build
# build 64-bit ncnn for Jetson Nano
$ cmake -D CMAKE_TOOLCHAIN_FILE=../toolchains/jetson.toolchain.cmake \
        -D NCNN_VULKAN=ON \
        -D NCNN_DISABLE_RTTI=OFF \
        -D CMAKE_BUILD_TYPE=Release ..
$ make -j4
$ make install
# copy output to dirs
$ sudo mkdir /usr/local/lib/ncnn
$ sudo cp -r install/include/ncnn /usr/local/include/ncnn
$ sudo cp -r install/lib/*.a /usr/local/lib/ncnn/
# once you've placed the output in your /usr/local directory,
# you can delete the ncnn directory if you wish
$ cd ~
$ sudo rm -rf ncnn

If everything went well, you will have two folders: one with all the header files and one with the libraries, as shown in the screenshots.
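With the headers in /usr/local/include/ncnn and the static libraries in /usr/local/lib/ncnn, a minimal program can be compiled as a smoke test. Everything below is a sketch: the file names, the blob names "data" and "output", the dummy model files, and the exact set of link libraries are assumptions that depend on your model and on which .a files your build actually produced.

```shell
# minimal smoke test (hypothetical file and blob names; adjust to your model)
cat > ncnn_smoke.cpp << 'EOF'
#include "net.h"     // from /usr/local/include/ncnn
#include <cstdio>
int main() {
    ncnn::Net net;
    net.opt.use_vulkan_compute = true;    // enable the Vulkan backend
    // load a converted model (param/bin pair, e.g. produced by onnx2ncnn)
    if (net.load_param("model.param") || net.load_model("model.bin"))
        return printf("model files not found\n"), 1;
    ncnn::Mat in(224, 224, 3);            // dummy input tensor
    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);                 // "data"/"output" names are assumptions
    ncnn::Mat out;
    ex.extract("output", out);
    printf("output width: %d\n", out.w);
    return 0;
}
EOF
# the library list and order may differ per build; this is only a sketch
g++ ncnn_smoke.cpp -o ncnn_smoke -fopenmp \
    -I/usr/local/include/ncnn -L/usr/local/lib/ncnn \
    -lncnn -lglslang -lSPIRV -lOGLCompiler -lOSDependent \
    -lMachineIndependent -lGenericCodeGen -lvulkan -lpthread
```

If the link step succeeds, the installation of the headers and static libraries is complete; the program itself only runs once you supply a real param/bin model pair.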



Please also note the folder with the examples. Many different types of deep learning models are covered there. The references to the actual deep learning models can sometimes cause errors due to version changes in the ncnn library. nihui maintains a repository with the latest models:


We have done some benchmarking with and without Vulkan support to see how well ncnn performs. On average, you get a 57% performance boost when ncnn uses Vulkan, which is even better than the increase MNN gets from CUDA. Note that during the test the Jetson Nano CPU was overclocked to 2014.5 MHz and the GPU to 998.4 MHz.
Github C++ example