
Install MNN deep learning framework on a Jetson Nano.
Last updated: May 30, 2022
Introduction.
This page will guide you through the installation of Alibaba's MNN framework on a Jetson Nano. With the latest release, the MNN framework also has CUDA support, which makes it an ideal lightweight framework for the Jetson Nano. The given C++ code examples are written in the Code::Blocks IDE for the Nano. We only guide you through the basics, so in the end you can build your own application. For more information about the MNN library, see the documentation here. Perhaps unnecessary to say, but this installation is for the C++ version; it is not suitable for Python.
Dependencies.
The MNN framework has a few dependencies; it requires protobuf. OpenCV is only used for building the C++ examples and is not needed by MNN itself. The installation of MNN on a Jetson Nano with the Linux Tegra operating system begins as follows.
# check for updates
$ sudo apt-get update
$ sudo apt-get upgrade
# install dependencies
$ sudo apt-get install cmake wget
$ sudo apt-get install libprotobuf-dev protobuf-compiler
$ sudo apt-get install libglew-dev
Before you can compile the MNN software, there is one thing left to do. The MNN Vulkan interface uses the OpenGL ES 3.0 library, a low-level graphics rendering interface for Android. Luckily, it is backwards compatible with the version 2.0 library found in JetPack 4.4 on your Jetson Nano. And, as far as we know, the MNN framework doesn't use any calls unique to version 3.0. This makes it possible to use a symbolic link redirecting libGLESv3 to libGLESv2. This strategy works very well and saves you the cumbersome installation of version 3.0.
# make symlink
$ sudo ln -s /usr/lib/aarch64-linux-gnu/libGLESv2.so /usr/lib/aarch64-linux-gnu/libGLESv3.so
Installation.
With the dependencies installed, the library and converter tools can be built.
# download MNN
$ git clone https://github.com/alibaba/MNN.git
# common preparation (installing the flatbuffers)
$ cd MNN
$ ./schema/generate.sh
# install MNN
$ mkdir build
$ cd build
# generate build script
$ cmake -D CMAKE_BUILD_TYPE=Release \
-D MNN_BUILD_QUANTOOLS=ON \
-D MNN_BUILD_CONVERTER=ON \
-D MNN_OPENGL=ON \
-D MNN_VULKAN=ON \
-D MNN_CUDA=ON \
-D MNN_TENSORRT=OFF \
-D MNN_BUILD_DEMO=ON \
-D MNN_BUILD_BENCHMARK=ON ..

Time to build the library and install it in the appropriate folders.
# build MNN (± 25 min)
$ make -j4
$ sudo make install
$ sudo cp ./source/backend/cuda/*.so /usr/local/lib/
# don't copy until MNN has solved the issues with the TensorRT backend
# $ sudo cp ./source/backend/tensorrt/*.so /usr/local/lib/

If everything went well, after sudo make install you will find the MNN headers in /usr/local/include/MNN and the libraries in /usr/local/lib. Please note also the folder with the example applications in the build directory.

If you like to download some example deep learning models, use the commands below.
# download some models
$ cd ~/MNN
$ ./tools/script/get_model.sh
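If you like to test the library right away, the code below is a minimal sketch of running one of the downloaded models with the MNN C++ API. The model file name (mobilenet_v1.mnn) is an assumption; point it at any .mnn file you have. The CUDA backend is selected with MNN_FORWARD_CUDA; use MNN_FORWARD_CPU as a fallback.

#include <MNN/Interpreter.hpp>
#include <MNN/Tensor.hpp>
#include <iostream>
#include <memory>

int main() {
    // load the model (file name is an assumption, use your own .mnn file)
    std::shared_ptr<MNN::Interpreter> net(
        MNN::Interpreter::createFromFile("mobilenet_v1.mnn"));
    if (net == nullptr) {
        std::cerr << "Failed to load the model" << std::endl;
        return 1;
    }
    // select the backend: MNN_FORWARD_CUDA for the GPU, MNN_FORWARD_CPU as fallback
    MNN::ScheduleConfig config;
    config.type = MNN_FORWARD_CUDA;
    config.numThread = 4;
    MNN::Session* session = net->createSession(config);

    // fill the input tensor through a host copy
    MNN::Tensor* input = net->getSessionInput(session, nullptr);
    std::shared_ptr<MNN::Tensor> inputHost(
        MNN::Tensor::createHostTensorFromDevice(input, false));
    // ... write your preprocessed image data into inputHost->host<float>() ...
    input->copyFromHostTensor(inputHost.get());

    // run the inference
    net->runSession(session);

    // copy the output back to the host
    MNN::Tensor* output = net->getSessionOutput(session, nullptr);
    std::shared_ptr<MNN::Tensor> outputHost(
        new MNN::Tensor(output, output->getDimensionType()));
    output->copyToHostTensor(outputHost.get());
    std::cout << "Output elements: " << outputHost->elementSize() << std::endl;

    net->releaseSession(session);
    return 0;
}

In Code::Blocks, or on the command line, link the program against the MNN library, for instance with g++ main.cpp -I/usr/local/include -L/usr/local/lib -lMNN.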
Benchmark.
With the new CUDA backend, it is interesting to see how well MNN performs. Below are some benchmarks. On average, you get about a 40% performance boost when MNN uses CUDA. Note that during the tests, the Jetson Nano CPU was overclocked to 2014.5 MHz and the GPU to 998.4 MHz. A minimal timing sketch, for measuring your own models, follows the table.
Model | CPU (ms) | CUDA (ms)
SqueezeNet | 38.9 | 26.7
MobileNetV1 | 34.6 | 21.7
MobileNetV2 | 28.5 | 16.4
ResNet | 100.9 | 39.6
GoogleNet | 93.1 | 45.2
ShuffleNet | 28.3 | 21.8
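If you like to reproduce such numbers for your own models, the helper below is a minimal sketch of the measurement, assuming the interpreter and session were set up as in the example above. It does a few warm-up passes first, so one-time initialization costs don't end up in the average.

#include <MNN/Interpreter.hpp>
#include <chrono>

// average latency in milliseconds over 'runs' inferences, after some warm-up passes
double averageLatencyMs(MNN::Interpreter* net, MNN::Session* session,
                        int warmup = 3, int runs = 10) {
    for (int i = 0; i < warmup; ++i) net->runSession(session);   // warm-up, not timed
    auto start = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < runs; ++i) net->runSession(session);     // timed runs
    auto stop = std::chrono::high_resolution_clock::now();
    return std::chrono::duration<double, std::milli>(stop - start).count() / runs;
}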
GitHub C++ example