Install MNN on Jetson Nano - Q-engineering
Install MNN software on Jetson Nano

Install MNN deep learning framework on a Jetson Nano.

Last updated: March 13, 2022


This page will guide you through the installation of Alibaba's MNN framework on a Jetson Nano. With the latest release, the MNN framework also has CUDA support, which makes it an ideal choice for the Jetson Nano as a lightweight framework. The given C++ code examples are written in the Code::Blocks IDE for the Nano. We only guide you through the basics, so in the end, you can build your own application. For more information about the MNN library, see the documentation here. Perhaps unnecessarily said, but the installation is the C++ version; it is not suitable for Python.
The latest version of MNN (1.2.1) has some installation issues on a Jetson Nano. You must apply pull requests #1616 and #1530 to your code after downloading. As long as these pull requests aren't merged, you could also download our fork instead of the official one.

$ git clone

MNN also has problems with NVIDIA's TensorRT. We will not use this option (-D MNN_TENSORRT) until these issues have been resolved.


The MNN framework has a few dependencies. It requires protobuf. OpenCV is used for building the C++ examples but is not needed by MNN itself. The installation of MNN on a Jetson Nano with a Linux Tegra operating system begins as follows.
# check for updates
$ sudo apt-get update
$ sudo apt-get upgrade
# install dependencies
$ sudo apt-get install cmake wget
$ sudo apt-get install libprotobuf-dev protobuf-compiler
$ sudo apt-get install libglew-dev
Before you can compile the MNN software, there is one thing to be done. The MNN Vulkan interface uses the OpenGL ES 3.0 library, a low-level graphics rendering interface for Android. Luckily, it is backwards compatible with the version 2.0 library found in JetPack 4.4 on your Jetson Nano. And, as far as we know, the MNN framework doesn't use any unique version 3.0 calls. This makes it possible to use a symbolic link redirecting libGLES 3.0 to libGLES 2.0. This strategy works very well and relieves you of a cumbersome installation of version 3.0.
# make symlink
$ sudo ln -s /usr/lib/aarch64-linux-gnu/ /usr/lib/aarch64-linux-gnu/
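The redirect trick can be sketched in a scratch directory. The file names below are illustrative stand-ins only, not the real library paths on the Nano:

```shell
# Demonstrate the symlink redirect in a temporary folder.
# libGLESv2.so.2 / libGLESv3.so are placeholder names for illustration.
tmp=$(mktemp -d)
touch "$tmp/libGLESv2.so.2"                       # the existing 2.0 library
ln -s "$tmp/libGLESv2.so.2" "$tmp/libGLESv3.so"   # 3.0 name -> 2.0 file
readlink "$tmp/libGLESv3.so"                      # shows the redirect target
rm -rf "$tmp"
```

Any program that asks the loader for the 3.0 name is silently handed the 2.0 library instead, which works here because the 3.0 API is a superset of 2.0.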


With the dependencies installed, the library and converter tools can be built.
# download MNN
$ git clone
# common preparation (installing the flatbuffers)
$ cd MNN
$ ./schema/
# install MNN
$ mkdir build
$ cd build
# generate build script
$ cmake -D CMAKE_BUILD_TYPE=Release \
        -D MNN_OPENGL=ON \
        -D MNN_VULKAN=ON \
        -D MNN_CUDA=ON \
        -D MNN_BUILD_DEMO=ON ..
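To verify that the flags actually ended up in the build configuration, you can grep the CMakeCache.txt that cmake generates in the build folder. The snippet below fakes a minimal cache file in a temp directory purely to illustrate the check:

```shell
# Illustration only: write a tiny stand-in CMakeCache.txt and grep it.
# In your real build folder you would simply run: grep MNN_CUDA CMakeCache.txt
tmp=$(mktemp -d)
printf 'MNN_CUDA:BOOL=ON\nMNN_VULKAN:BOOL=ON\nMNN_OPENGL:BOOL=ON\n' > "$tmp/CMakeCache.txt"
grep '^MNN_CUDA' "$tmp/CMakeCache.txt"
rm -rf "$tmp"
```

If the grep prints `MNN_CUDA:BOOL=ON`, the CUDA backend will be part of the build; an `OFF` value here is the first thing to check when CUDA inference is unavailable later on.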

Time to build the library and install it in the appropriate folders.
# build MNN (± 25 min)
$ make -j4
$ sudo make install
$ sudo cp ./source/backend/cuda/*.so /usr/local/lib/
# don't copy until MNN has solved the issues with the TensorRT backend
# $ sudo cp ./source/backend/tensorrt/*.so /usr/local/lib/



If everything went well, the library and header files are now installed in their folders on your Jetson Nano.




Note also the folder with the examples.


If you would like to download some example deep learning models, use the commands below.
# download some models
$ cd ~/MNN
$ ./tools/script/

With the new CUDA backend, it is interesting to see how well MNN performs. Below are some benchmarks. On average, you get a 40% performance boost when MNN uses CUDA. Note that during the test the Jetson Nano CPU was overclocked to 2014.5 MHz and the GPU to 998.4 MHz.
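A benchmark like this boils down to timing repeated runs of a demo binary and averaging. A minimal sketch of that loop is given below; `sleep 0.1` is only a stand-in for one inference run, since the actual binary and model names depend on your build:

```shell
# Average the wall-clock time of several runs.
# 'sleep 0.1' stands in for a real inference command,
# e.g. one of the demo binaries built above with its .mnn model.
runs=3
start=$(date +%s%N)
for i in $(seq $runs); do
    sleep 0.1
done
end=$(date +%s%N)
echo "average: $(( (end - start) / runs / 1000000 )) ms"
```

Warming up with a few untimed runs first gives more stable numbers, as the first inference usually pays one-time initialization costs such as CUDA kernel compilation.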