
Install Paddle Lite deep learning framework on a Raspberry Pi 4.
Paddle 2.0.0
Last updated: March 28, 2023
Introduction.
This page will guide you through the installation of Baidu's Paddle Lite framework on a Raspberry Pi 4. The given C++ code examples are written in the Code::Blocks IDE for the Raspberry Pi 4. We only guide you through the basics, so that in the end you can build your own application. For more information about the Paddle Lite library, see: https://github.com/PaddlePaddle/Paddle-Lite or the Chinese tutorial https://paddle-lite.readthedocs.io/zh/latest/. Perhaps unnecessary to say, but this installation covers the C++ version. It is not suitable for Python.
Dependencies.
The Paddle Lite framework has almost no dependencies. OpenCV would be useful, but it is not strictly necessary. We use it because most of our software uses OpenCV one way or another.
Version check.
Please check your operating system before installing Paddle Lite on your Raspberry Pi 4. Run the command uname -a and verify your version with the screen dump below.

In case of a 64-bit operating system, please also check your C++ compiler with the command gcc -v. It must be an aarch64-linux-gnu version. In case of a different gcc version, reinstall the whole operating system with the latest version. The guide is found here: Install 64 bit OS on Raspberry Pi 4. You must have a 64-bit C++ compiler, as we are going to build the Paddle Lite libraries.
Also note the zram swap size of more than 3 GByte after installation, if you followed our instructions.
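The two checks above can also be scripted. A minimal sketch (the `is_aarch64` helper is ours, not part of any tool):

```shell
# returns 0 (success) when the given machine string is 64-bit ARM
is_aarch64() {
  case "$1" in
    aarch64*|arm64*) return 0 ;;
    *) return 1 ;;
  esac
}

# usage on the Raspberry Pi:
#   is_aarch64 "$(uname -m)"          && echo "64-bit OS"        || echo "32-bit OS"
#   is_aarch64 "$(gcc -dumpmachine)"  && echo "64-bit compiler"  || echo "32-bit compiler"
```

On a correctly installed 64-bit system, `uname -m` reports aarch64 and `gcc -dumpmachine` reports aarch64-linux-gnu, so both checks succeed.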

Raspberry Pi 32-bit OS.
Installation.
Due to an incompatibility between the CPU (armv8) and the compiler (arm-linux-gnueabihf), Paddle-Lite cannot be installed on a Raspberry Pi 4 with a 32-bit operating system. The generated library uses registers (VFPv3) that are missing on the armv8. Replacing the compiler can be a real nightmare. You can install Paddle-Lite on a Raspberry Pi 3, or better, take a new SD card and install the latest Raspberry Pi 64-bit OS.
For those with a Raspberry Pi 3, install OpenCV first, if not already installed. The installation guide is here and takes about an hour.
The entire installation of the latest version of Paddle Lite (v2.6.3) on a Raspberry Pi with a 32-bit operating system (Raspbian) is as follows.
Raspberry Pi 3 only!
# check for updates
$ sudo apt-get update
$ sudo apt-get upgrade
# install dependencies
$ sudo apt-get install cmake wget
# download Paddle Lite
$ git clone --depth=1 https://github.com/PaddlePaddle/Paddle-Lite.git
$ cd Paddle-Lite
# build 32-bit Paddle Lite
$ ./lite/tools/build_linux.sh \
--arch=armv7hf \
--with_extra=ON \
--with_cv=ON \
--with_static_lib=ON \
--toolchain=gcc
# copy the headers and library to /usr/local/
$ sudo mkdir -p /usr/local/include/paddle-lite
$ sudo cp -r build.lite.linux.armv7hf.gcc/inference_lite_lib.armlinux.armv7hf/cxx/include/*.* /usr/local/include/paddle-lite
$ sudo mkdir -p /usr/local/lib/paddle-lite
$ sudo cp -r build.lite.linux.armv7hf.gcc/inference_lite_lib.armlinux.armv7hf/cxx/lib/*.* /usr/local/lib/paddle-lite
Raspberry Pi 64-bit OS.
Installation.
Install OpenCV first if it is not already installed. The installation guide is here and takes about an hour.
The entire installation of the latest version of Paddle Lite (v2.6.3) on a Raspberry Pi with a 64-bit operating system is as follows.
# check for updates
$ sudo apt-get update
$ sudo apt-get upgrade
# install dependencies
$ sudo apt-get install cmake wget
# download Paddle Lite
$ git clone --depth=1 https://github.com/PaddlePaddle/Paddle-Lite.git
$ cd Paddle-Lite
# build 64-bit Paddle Lite (±1 hour)
$ ./lite/tools/build_linux.sh \
--arch=armv8 \
--with_extra=ON \
--with_cv=ON \
--with_static_lib=ON \
--toolchain=gcc
# copy the headers and library to /usr/local/
$ sudo mkdir -p /usr/local/include/paddle-lite
$ sudo cp -r build.lite.linux.armv8.gcc/inference_lite_lib.armlinux.armv8/cxx/include/*.* /usr/local/include/paddle-lite
$ sudo mkdir -p /usr/local/lib/paddle-lite
$ sudo cp -r build.lite.linux.armv8.gcc/inference_lite_lib.armlinux.armv8/cxx/lib/*.* /usr/local/lib/paddle-lite
If everything went well, you will get the following output.


Please note also the folder with the examples.
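With the headers and libraries in place under /usr/local, a minimal C++ program can load an optimized (.nb) model and run it. The sketch below follows the Paddle Lite C++ API; the model path and input shape are placeholders you must replace for your own model:

```cpp
#include <iostream>
#include <memory>
#include <vector>
#include "paddle_api.h"   // installed in /usr/local/include/paddle-lite

using namespace paddle::lite_api;

int main() {
    // load a model converted with the opt tool (naive_buffer format)
    MobileConfig config;
    config.set_model_from_file("opt_model_v2.7.nb");   // placeholder path

    std::shared_ptr<PaddlePredictor> predictor =
        CreatePaddlePredictor<MobileConfig>(config);

    // fill the first input tensor; the shape depends on your model
    std::unique_ptr<Tensor> input = predictor->GetInput(0);
    input->Resize({1, 3, 224, 224});                   // placeholder shape
    float* data = input->mutable_data<float>();
    for (int i = 0; i < 1 * 3 * 224 * 224; i++) data[i] = 0.f;

    // run inference and read the first output tensor
    predictor->Run();
    std::unique_ptr<Tensor> output = predictor->GetOutput(0);
    std::cout << "first output dimension: " << output->shape()[0] << std::endl;
    return 0;
}
```

Compile with something like g++ -I/usr/local/include/paddle-lite main.cpp -L/usr/local/lib/paddle-lite -lpaddle_light_api_shared, or link the static libpaddle_api_light_bundled.a produced by the --with_static_lib=ON build.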

Conversion to Paddle Lite.
This section describes the conversion of regular, so-called fluid models, from the PaddlePaddle framework to the Paddle Lite models used in embedded systems such as a Raspberry Pi 4 or Jetson Nano.
As with TensorFlow Lite, not all models are portable to the Lite framework. Some operations are not supported. The conversion tool will let you know if that's the case.
Before we can build the conversion tool, some preparation is needed. First, we need at least 6 GB of RAM, just like when building PaddlePaddle from scratch.

Follow the steps on this page here to increase your RAM. Once done, make sure you do indeed have the required amount of RAM.
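The available memory (RAM plus swap) can be summed with a short shell snippet (ours, not part of any tool) before starting the build:

```shell
# sum the "total" columns of the Mem and Swap rows of free -m (in MB)
total_mb=$(free -m | awk '/^Mem:|^Swap:/ {sum += $2} END {print sum}')
echo "Total memory (RAM + swap): ${total_mb} MB"

# warn when below the 6 GB needed to build the conversion tool
if [ "$total_mb" -lt 6000 ]; then
  echo "Warning: less than 6 GB available; increase zram or dphys-swapfile first."
fi
```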

Next, the -m64 flag should be disabled, because an aarch64 system doesn't recognize this flag.

The -m64 flag is declared in flags.cmake at line 151. The file is located in the ~/Paddle-Lite/cmake folder. Expand line 151 with the text AND NOT(CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64.*|AARCH64.*)"). This effectively disables the -m64 flag, which is unknown to aarch64 machines. We have made a pull request with Paddle Lite.

With these steps done, you can now begin to compile the Paddle Lite conversion tool. We assume you have already successfully built the Paddle Lite framework as described above. The build of the conversion tool is as follows.
$ cd ~/Paddle-Lite
# build 64-bit Paddle Lite optimize tool
$ ./lite/tools/build.sh \
--build_cv=ON \
--arm_os=armlinux \
--arm_abi=armv8 \
--arm_lang=gcc \
build_optimize_tool

In the folder /home/pi/Paddle-Lite/build.opt/lite/api you will find the optimize tool, called opt.

Once you have the optimizer, you can download the deep learning model of your interest using PaddleHub. The procedure is covered here. As an example, we will use the face mask detector.
The fluid model, as PaddlePaddle calls its deep learning models, comes in two variants. Either it is a combined model, with the topology stored in a __model__ file and all weights together in a single __params__ file, or each weight is stored in its own separate file. The optimizer handles both types. As it happens, the face mask detector is the combined type with its two files.
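A small Python sketch (ours, not part of PaddlePaddle) that inspects a saved model folder and tells you which variant you have:

```python
import os

def model_variant(model_dir):
    """Report whether a saved inference model folder is a combined model
    (__model__ + __params__) or stores each weight in its own file."""
    files = os.listdir(model_dir)
    if "__model__" not in files:
        return "no __model__ file found"
    if "__params__" in files:
        return "combined: topology in __model__, all weights in __params__"
    return "separate: topology in __model__, one file per weight"
```

For a combined model, you pass --model_file and --param_file to the opt tool; for the separate variant, you point it at the whole folder instead.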
So the next step is extracting the __model__ and __params__ files from the network. It's done with a few Python instructions. As you can see, we use both PaddlePaddle and PaddleHub.
$ python3
>>> import paddlehub as hub
>>> import paddle
>>> paddle.enable_static()
>>> pyramidbox_lite_mobile_mask = hub.Module(name="pyramidbox_lite_mobile_mask")
>>> pyramidbox_lite_mobile_mask.save_inference_model(dirname="test_program")

The last action is converting the model and parameters to a Paddle Lite version with the just-built opt application. This is done with the following command.
$ ./opt \
--model_file=/home/pi/test_program/mask_detector/__model__ \
--param_file=/home/pi/test_program/mask_detector/__params__ \
--valid_targets=arm \
--optimize_out_type=naive_buffer \
--optimize_out=opt_model_v2.7

Please note the version number v2.7 in the output name. Most models are neither backward nor forward compatible.
More information about the optimize tool can be found in the wiki https://github.com/PaddlePaddle/Paddle-Lite/wiki/model_optimize_tool.
If you had to install dphys-swapfile, it's time to uninstall it again. This way you will extend the life of your SD card.
# remove the dphys-swapfile (if installed)
$ sudo /etc/init.d/dphys-swapfile stop
$ sudo apt-get remove --purge dphys-swapfile
Deep learning software for Raspberry Pi
Deep learning examples for Raspberry Pi