Install Paddle Lite deep learning framework on a Jetson Nano.
This page will guide you through the setup of Baidu's Paddle-Lite framework on a Jetson Nano.
PaddlePaddle is the Chinese counterpart of TensorFlow. It is widely used in industry and at universities. It supports several hardware environments, including NPU acceleration with the Rockchip RK3399 found on many single-board computers. Software acceleration can be done with the CUDA and cuDNN libraries.
The given C++ code examples are written in the Code::Blocks IDE for the Jetson Nano. We only guide you through the basics, so in the end you can build your own application. For more information about the Paddle library, see the GitHub pages or the Chinese tutorial https://paddle-lite.readthedocs.io/zh/latest/.
Note, this is a C++ installation; it is not suitable for Python. Paddle Lite's Python interface relies on the PaddlePaddle framework. Since we don't want to use an additional 3 GByte of disk space just for the Python interface, we will not use it. And, as you know, speed and Python don't go hand in hand.
We are going to install Paddle Lite version 2.7.0 because it supports cuDNN 8.0, found in JetPack 4.4 on your Jetson Nano. Some of our examples, such as the Face Mask Detection, use 'old' deep learning models, which are only supported by previous versions. Since these earlier Paddle Lite versions do not compile with cuDNN 8.0, we have a problem. For now, we'll stick with version 2.7, as it has the desired CUDA acceleration. For those who want to use the Face Mask Detection software, install version 2.6.3 without CUDA support. To this end, follow the installation for the Raspberry Pi 64-bit OS to the letter. The resulting FPS will be comparable to the RPi option.
The Paddle Lite framework has almost no dependencies. OpenCV would be useful, but it is not strictly necessary. We use it because most of our software uses OpenCV one way or another.
Install OpenCV first if it is not already installed. The installation guide is here and takes about an hour and a half.
The entire installation of the latest version of Paddle Lite on a Jetson Nano is as follows.
# check for updates
$ sudo apt-get update
$ sudo apt-get upgrade
# install dependencies
$ sudo apt-get install cmake wget
# download Paddle Lite
$ git clone https://github.com/PaddlePaddle/Paddle-Lite.git
$ cd Paddle-Lite
# build Paddle Lite (± 2 Hours)
$ ./lite/tools/build.sh \
  --arm_os=armlinux \
  --arm_abi=armv8 \
  --arm_lang=gcc \
  full_publish
# copy the headers and library to /usr/local/
$ sudo mkdir /usr/local/lib/paddle-lite
$ sudo cp -r build.lite.armlinux.armv8.gcc/inference_lite_lib.armlinux.armv8/cxx/include /usr/local/include/paddle-lite
$ sudo cp -r build.lite.armlinux.armv8.gcc/inference_lite_lib.armlinux.armv8/cxx/lib /usr/local/lib/paddle-lite
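With the headers and library now under /usr/local/, your own programs can link against Paddle Lite. A minimal sketch of the compiler invocation (the source file name main.cpp is an assumption, and the exact shared-library name may differ; check /usr/local/lib/paddle-lite/lib for what your build actually produced):

```shell
# compile and link against the installed Paddle Lite
# (adjust -lpaddle_light_api_shared to the .so your build produced)
g++ main.cpp -std=c++11 \
    -I/usr/local/include/paddle-lite \
    -L/usr/local/lib/paddle-lite/lib \
    -lpaddle_light_api_shared \
    -o my_app
# make the shared library findable at run time
export LD_LIBRARY_PATH=/usr/local/lib/paddle-lite/lib:$LD_LIBRARY_PATH
```

In Code::Blocks, the same paths go under Build options → Search directories and Linker settings.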
After about two hours the compilation is finished. The build takes up about 7.2 GByte on your disk.
Please note also the folder with the examples.
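To give an idea of what a Paddle Lite C++ program looks like, below is a minimal inference sketch using the MobileConfig API. The model file model.nb and the input shape {1, 3, 224, 224} are placeholders for illustration; a real application would load an actual optimized model and fill the input tensor with image data.

```cpp
#include <iostream>
#include <memory>
#include "paddle_api.h"   // installed in /usr/local/include/paddle-lite

using namespace paddle::lite_api;

int main() {
    // load an optimized (.nb) model; the file name is a placeholder
    MobileConfig config;
    config.set_model_from_file("model.nb");

    // create the predictor from the config
    std::shared_ptr<PaddlePredictor> predictor =
        CreatePaddlePredictor<MobileConfig>(config);

    // resize and fill the input tensor (shape is an assumption)
    std::unique_ptr<Tensor> input = predictor->GetInput(0);
    input->Resize({1, 3, 224, 224});
    float* data = input->mutable_data<float>();
    for (int i = 0; i < 1 * 3 * 224 * 224; ++i) data[i] = 0.0f;

    // run inference and inspect the first output
    predictor->Run();
    std::unique_ptr<const Tensor> output = predictor->GetOutput(0);
    std::cout << "output size: " << output->shape().size() << " dims" << std::endl;
    return 0;
}
```

Compile it with the include and library paths from the installation above; the same structure is used in the example folder mentioned here.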
After copying the headers and library to the /usr/local/ folder, you may want to delete the entire Paddle-Lite folder. This frees up about 7 GByte on your SD card. Keep in mind that this also deletes the samples, although those can be found on GitHub again if needed. It's all up to you.
$ sudo rm -rf Paddle-Lite