Install Paddle (Lite) deep learning framework on a Jetson Nano.
This page will guide you through the setup of Baidu's Paddle-Lite framework on a Jetson Nano.
PaddlePaddle is the Chinese counterpart of TensorFlow. It is widely used in industry, at institutions and universities. It supports several hardware environments, including NPU acceleration with the Rockchip RK3399, found on many single-board computers. GPU acceleration is available through the CUDA and cuDNN libraries.
The given C++ code examples are written in the Code::Blocks IDE for the Jetson Nano. We only guide you through the basics, so in the end you can build your own application. For more information about the Paddle library, see the GitHub pages or the Chinese tutorial at https://paddle-lite.readthedocs.io/zh/latest/.
Note: this is a C++ installation; it is not suitable for Python. Paddle-Lite's Python interface relies on the full PaddlePaddle framework. Since we don't want to spend an additional 3 GByte of disk space just on the Python interface, we will not use it. And, as you know, speed and Python don't go hand in hand.
We are going to install Paddle-Lite version 2.7.0 because it supports cuDNN 8.0, found in JetPack 4.4 on your Jetson Nano. This makes Paddle-Lite the only lite framework for small devices that supports CUDA and cuDNN. All other frameworks have either no GPU acceleration at all or some form of Vulkan support (ncnn and MNN).
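Since Paddle-Lite 2.7.0 expects cuDNN 8.0 from JetPack 4.4, it is worth checking what is actually on your Nano first. A small sketch; the file locations are the usual JetPack ones and may differ on your image:

```shell
# Report the L4T release (JetPack 4.4 ships L4T 32.4.x) and the cuDNN version.
# File locations are the standard JetPack ones; adjust if your image differs.
if [ -f /etc/nv_tegra_release ]; then
    l4t=$(head -n 1 /etc/nv_tegra_release)
else
    l4t="unknown (no /etc/nv_tegra_release - not a JetPack image?)"
fi
echo "L4T release: $l4t"

if [ -f /usr/include/cudnn_version.h ]; then
    cudnn=$(grep "#define CUDNN_MAJOR\|#define CUDNN_MINOR\|#define CUDNN_PATCHLEVEL" /usr/include/cudnn_version.h)
else
    cudnn="unknown (cudnn_version.h not found - is cuDNN installed?)"
fi
echo "cuDNN: $cudnn"
```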
The Paddle-Lite framework has almost no dependencies; the libraries needed are downloaded and compiled automatically during installation. If you want OpenCV to have CUDA support as well, re-install it first. The installation guide is here and takes about an hour and a half; it is not mandatory. The entire installation of the latest version of Paddle-Lite on a Jetson Nano is as follows.
# check for updates
$ sudo apt-get update
$ sudo apt-get upgrade
# install dependencies
$ sudo apt-get install cmake wget
# download Paddle Lite
$ git clone https://github.com/PaddlePaddle/Paddle-Lite.git
$ cd Paddle-Lite
# build Paddle Lite (± 2 Hours)
$ ./lite/tools/build.sh \
    --arm_os=armlinux \
    --arm_abi=armv8 \
    --arm_lang=gcc \
    full_publish
# copy the headers and library to /usr/local/
$ sudo mkdir -p /usr/local/include/paddle-lite
$ sudo mkdir -p /usr/local/lib/paddle-lite
$ sudo cp -r build.lite.armlinux.armv8.gcc/inference_lite_lib.armlinux.armv8/cxx/include/*.* /usr/local/include/paddle-lite
$ sudo cp -r build.lite.armlinux.armv8.gcc/inference_lite_lib.armlinux.armv8/cxx/lib/*.* /usr/local/lib/paddle-lite
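A quick sanity check, purely optional, that the headers and the library actually ended up where the compiler will look for them:

```shell
# Verify the copy step: both folders should exist and contain files.
inc_dir=/usr/local/include/paddle-lite
lib_dir=/usr/local/lib/paddle-lite
for d in "$inc_dir" "$lib_dir"; do
    if [ -d "$d" ]; then
        echo "$d: $(ls "$d" | wc -l) entries"
    else
        echo "$d: missing - rerun the mkdir and cp commands above"
    fi
done
```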
The compilation is ready after about two hours and takes up some 7.2 GByte on your disk.
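With the build taking roughly 7.2 GByte, it pays to keep an eye on the remaining space on the SD card. A quick sketch:

```shell
# Print the free space (in GByte) on the filesystem holding the current folder.
avail_kb=$(df --output=avail -k . | tail -n 1 | tr -d ' ')
avail_gb=$((avail_kb / 1024 / 1024))
echo "free space: ${avail_gb} GByte"
```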
Please note also the folder with the examples.
After copying the headers and the library to /usr/local/, you may want to delete the entire Paddle-Lite folder. This frees up about 7 GByte on your SD card. Keep in mind that you then also delete the examples, although those can be found on GitHub if needed. It's all up to you.
$ sudo rm -rf Paddle-Lite
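When you start your own Code::Blocks project, the compiler and linker need to know the new locations. Below is a sketch of the settings involved; the library name `paddle_full_api_shared` is an assumption here, so check the exact file names in /usr/local/lib/paddle-lite on your build:

```shell
# Sketch of the compiler/linker settings for a program using Paddle-Lite.
# The library name (paddle_full_api_shared) is an assumption; list
# /usr/local/lib/paddle-lite to see what your build actually produced.
CXXFLAGS="-I/usr/local/include/paddle-lite"
LDFLAGS="-L/usr/local/lib/paddle-lite -lpaddle_full_api_shared"
echo "g++ -std=c++11 main.cpp $CXXFLAGS $LDFLAGS -o my_app"
# At runtime the dynamic loader must find the shared library as well:
echo "export LD_LIBRARY_PATH=/usr/local/lib/paddle-lite:\$LD_LIBRARY_PATH"
```

In Code::Blocks, the same paths go under the project's build options (search directories for the compiler and linker, plus the link library).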