Install MNN on Jetson Nano - Q-engineering
Install MNN software on Jetson Nano

Install MNN deep learning framework on a Jetson Nano.

Introduction.

This page guides you through the installation of Alibaba's MNN framework on a Jetson Nano. Since its latest release, the MNN framework also has CUDA support, which makes this lightweight framework an ideal choice for the Jetson Nano. The given C++ code examples are written in the Code::Blocks IDE on the Nano. We only cover the basics, so that in the end you can build your own application. For more information about the MNN library, see the documentation here. Perhaps unnecessarily said, but this guide installs the C++ version; it does not cover the Python bindings.
Dependencies.
The MNN framework has a few dependencies. It requires protobuf. OpenCV is used only for building the C++ examples and is not needed by MNN itself. The installation of MNN on a Jetson Nano running the Linux Tegra operating system begins as follows.
# check for updates
$ sudo apt-get update
$ sudo apt-get upgrade
# install dependencies
$ sudo apt-get install cmake wget
$ sudo apt-get install libprotobuf-dev protobuf-compiler
$ sudo apt-get install libglew-dev
Before you can compile the MNN software, one thing remains to be done. The MNN Vulkan interface uses the OpenGL ES 3.0 library, a low-level graphics rendering interface for Android. Luckily, it is backwards compatible with the version 2.0 library found in JetPack 4.4 on your Jetson Nano. And, as far as we know, the MNN framework doesn't use any calls unique to version 3.0. This makes it possible to use a symbolic link redirecting libGLESv3 to libGLESv2. This strategy works very well and saves you a cumbersome installation of version 3.0.
# make symlink
$ sudo ln -s /usr/lib/aarch64-linux-gnu/libGLESv2.so /usr/lib/aarch64-linux-gnu/libGLESv3.so

Installation.

With the dependencies installed, the library and converter tools can be built. The CMake flags below enable the quantization tools, the model converter, the CUDA and TensorRT backends, and the demo and benchmark applications.
# download MNN
$ git clone https://github.com/alibaba/MNN.git
# common preparation (installing the flatbuffers)
$ cd MNN
$ ./schema/generate.sh
# install MNN
$ mkdir build
$ cd build
# generate build script
$ cmake -D CMAKE_BUILD_TYPE=Release \
        -D MNN_BUILD_QUANTOOLS=ON \
        -D MNN_BUILD_CONVERTER=ON \
        -D MNN_CUDA=ON \
        -D MNN_TENSORRT=ON \
        -D MNN_BUILD_DEMO=ON \
        -D MNN_BUILD_BENCHMARK=ON ..

Time to build the library and install it in the appropriate folders.
# build MNN (± 25 min)
$ make -j4
$ sudo make install
$ sudo cp ./source/backend/cuda/*.so /usr/local/lib/
$ sudo cp ./source/backend/tensorrt/*.so /usr/local/lib/

If everything went well, you now have the MNN header files and libraries in the /usr/local folders on your Jetson Nano.

Please note also the folder with the examples in the MNN source tree.

If you would like to download some example deep learning models, use the commands below.
# download some models
$ cd ~/MNN
$ ./tools/script/get_model.sh
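With a model downloaded, a first C++ program can be written. Below is a minimal inference sketch using the standard MNN Interpreter/Session API; the file name "model.mnn" is a placeholder, so substitute one of the models fetched above, and feed real image data instead of the dummy zeros.

```cpp
// Minimal MNN inference sketch (assumption: a converted model file
// "model.mnn" - substitute one of the models downloaded above).
// Build against the installed library, e.g.: g++ demo.cpp -lMNN -o demo
#include <MNN/Interpreter.hpp>
#include <MNN/MNNForwardType.h>
#include <cstdio>
#include <memory>

int main() {
    // load the model
    std::shared_ptr<MNN::Interpreter> net(
        MNN::Interpreter::createFromFile("model.mnn"));
    if (!net) {
        std::printf("could not load model.mnn\n");
        return 1;
    }

    // build an inference session on the CUDA backend
    MNN::ScheduleConfig config;
    config.type = MNN_FORWARD_CUDA;  // use MNN_FORWARD_CPU to compare timings
    config.numThread = 4;            // the four Cortex-A57 cores of the Nano
    MNN::Session* session = net->createSession(config);

    // input tensor: fill host memory, then copy to the backend
    MNN::Tensor* input = net->getSessionInput(session, nullptr);
    MNN::Tensor host_input(input, MNN::Tensor::CAFFE);
    for (int i = 0; i < host_input.elementSize(); ++i)
        host_input.host<float>()[i] = 0.0f;  // dummy data; use a real image here
    input->copyFromHostTensor(&host_input);

    net->runSession(session);

    // read the result back from the backend
    MNN::Tensor* output = net->getSessionOutput(session, nullptr);
    MNN::Tensor host_output(output, MNN::Tensor::CAFFE);
    output->copyToHostTensor(&host_output);
    std::printf("output elements: %d\n", host_output.elementSize());
    return 0;
}
```

Switching config.type between MNN_FORWARD_CUDA and MNN_FORWARD_CPU is all it takes to run the same model on either backend, which is how the benchmark figures below can be reproduced.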

Benchmark.
With the new CUDA backend, it is interesting to see how well MNN performs. Below are some benchmarks. On average, you get a 40% performance boost when MNN uses CUDA. Note that during the test the Jetson Nano CPU was overclocked to 2014.5 MHz and the GPU to 998.4 MHz.
Model         CPU (mSec)   CUDA (mSec)
SqueezeNet       38.9         26.7
MobileNetV1      34.6         21.7
MobileNetV2      28.5         16.4
ResNet          100.9         39.6
GoogleNet        93.1         45.2
ShuffleNet       28.3         21.8