Install TensorFlow Lite 2 on Raspberry 64 OS
Last updated: December 13, 2021
The Raspberry Pi is moving towards a 64-bit operating system. Within a year or so, the 32-bit OS will be fully replaced by the faster 64-bit version.
This guide will install the latest version of TensorFlow Lite 2 on a Raspberry Pi 4 with a 64-bit operating system, together with some examples. TensorFlow evolves over time. Models generated with an older version of TensorFlow may have compatibility issues with a newer version of TensorFlow Lite, or vice versa. This manual describes the latest version of TensorFlow Lite. You can always install an older version by changing the version number in the download command. For example, $ wget -O tensorflow.zip https://github.com/tensorflow/tensorflow/archive/v2.2.1.zip downloads version 2.2.1 onto your Raspberry Pi.
We regularly get asked whether we have an SD image of a Raspberry Pi 4 with pre-installed frameworks and deep-learning examples.
We are happy to comply with this request. You will find a complete, working Raspberry Pi 4 image dedicated to deep learning on our GitHub page. Download the zip file from our GDrive site, unzip it, flash the image onto a 16 GB SD card, and enjoy!
Please check your operating system before installing TensorFlow Lite on your Raspberry Pi 64-bit OS. Run the command uname -a and verify your version with the screen dump below.
You also need to check your C++ compiler version with the command gcc -v. It must also be an aarch64-linux-gnu version, as shown in the screenshot. If you have a 64-bit operating system but your gcc version differs from the one given above, reinstall the whole operating system with the latest version. The guide is found here: Install 64-bit OS on Raspberry Pi 4. You must have a 64-bit C++ compiler, as we are going to build libraries. Even if you use Python wheels, gcc is called behind the scenes.
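As a quick sketch, the two checks above can be combined in a few lines of shell (check_arch is just a helper name for this example, not an existing tool):

```shell
# Quick sanity check: is this a 64-bit (aarch64) system?
# check_arch is a helper written for this sketch, not part of any tool.
check_arch() {
  case "$1" in
    aarch64)       echo "64-bit OS detected" ;;
    armv7l|armv6l) echo "32-bit OS - reinstall the 64-bit OS first" ;;
    *)             echo "unknown architecture: $1" ;;
  esac
}
check_arch "$(uname -m)"
# the compiler target must match: look for aarch64-linux-gnu
gcc -v 2>&1 | grep Target || true
```

On a correctly installed 64-bit system, both lines should mention aarch64.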
Note also the zram swap size of more than 3 GByte after an installation according to our instructions.
To compile TensorFlow Lite, you need enough memory onboard. Make sure you have at least 1.5 GByte of memory available. In the case of the Raspberry Pi Zero 2, with its 512 MByte of RAM, you need to expand the swap space to a comfortable 1024 MByte. Please follow the instructions on this page on how to do so.
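A minimal sketch of enlarging the swap, assuming swap is managed by dphys-swapfile (the demo below edits a temporary copy; on the Pi itself the file is /etc/dphys-swapfile):

```shell
# Sketch: raise the swap size to 1024 MByte.
# We work on a throw-away copy here; on a real Pi, edit /etc/dphys-swapfile.
conf=$(mktemp)
printf 'CONF_SWAPSIZE=100\n' > "$conf"   # typical default on Raspberry Pi OS
sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=1024/' "$conf"
cat "$conf"
# on the real file, apply the new size with:
#   sudo /etc/init.d/dphys-swapfile restart
```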
Install TensorFlow Lite.
TensorFlow Lite can be run in Python. However, to build a very fast deep learning application, you have to work in C++. That is why you need to build TensorFlow Lite's C++ API libraries. The procedure is very simple: just clone the latest GitHub repository and run the two scripts. The commands are listed below. For those who have previously installed TensorFlow Lite on a Raspberry Pi 32-bit OS, note the subtle difference. Here we use build_aarch64_lib because we now have a 64-bit operating system, whereas the 32-bit Raspbian versions use build_rpi_lib.
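The build boils down to a sketch like the following (script paths taken from the TensorFlow source tree; verify them in your checkout before running):

```
$ git clone --depth=1 https://github.com/tensorflow/tensorflow.git
$ cd tensorflow
$ ./tensorflow/lite/tools/make/download_dependencies.sh
$ ./tensorflow/lite/tools/make/build_aarch64_lib.sh
```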
The TensorFlow Lite FlatBuffers are also needed. If you have the latest Bullseye OS running on your Raspberry Pi, you will get errors during the compilation of the FlatBuffers. This has to do with gcc version 10.2.1, which now gives some warnings the 'error' status. The FlatBuffers team has already fixed these warnings. However, TensorFlow still uses the 'old' version of the FlatBuffers.
Please use the following commands.
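One possible workaround is to replace the downloaded FlatBuffers with a newer clone that contains the fixes. The sketch below assumes the default download location used by the build scripts, and the v2.0.0 tag is only an example of a newer release:

```
$ cd ~/tensorflow
$ rm -rf ./tensorflow/lite/tools/make/downloads/flatbuffers
$ git clone -b v2.0.0 --depth=1 https://github.com/google/flatbuffers.git \
    ./tensorflow/lite/tools/make/downloads/flatbuffers
```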
If everything went well, you should have the two libraries and two folders with header files as shown in the slide show.
As of version 2.3.0, TensorFlow Lite uses dynamic linking. At runtime, libraries are copied to RAM and pointers are relocated before TF Lite can run. This strategy gives greater flexibility. It also means that TensorFlow Lite now requires glibc 2.28 or higher to run. From now on, link the libdl library when building your application; otherwise, you get 'undefined reference to symbol dlsym@@GLIBC_2.17' linker errors. The symbolic link can be found at /lib/aarch64-linux-gnu/libdl.so.2 on a 64-bit Linux OS, or /lib/arm-linux-gnueabihf/libdl.so.2 on a Raspberry Pi 32-bit OS. Please see our examples on GitHub.
TensorFlow Lite models.
You cannot run normal TensorFlow models on the Lite software; they must be converted before use. These TensorFlow pages explain how to do this. Google has some ready-made models available on the net here.
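As a sketch, a SavedModel can be converted from the command line with the tflite_convert tool that ships with a full TensorFlow installation (the model paths are placeholders):

```
$ tflite_convert --saved_model_dir=./my_saved_model --output_file=./model.tflite
```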
The most well-known application is, of course, the classification of objects. Google hosts a wide range of TensorFlow Lite models, the so-called quantized models, in their model zoo. The models are capable of detecting 1000 different objects. All models are trained with square images. Therefore, the best results are given when your input image is also square-like. All models are supported on GitHub by our C++ software samples, for both the 32-bit and 64-bit Raspberry Pi and the Ubuntu 18.04 or 20.04 operating systems.
Another application is detecting objects in a scene. TensorFlow Lite hosts one model for now: COCO SSD MobileNet v1, which recognizes 80 different objects. It can detect up to ten objects in a scene. On GitHub we have a C++ example of the famous Skyfall intro running on a bare 64-bit Raspberry Pi 4. The 64-bit version can be used for both the RPi 4 and Ubuntu 18.04 or 20.04.
With semantic image segmentation, a neural network attempts to associate every pixel in the scene with a particular object. You could say it tries to detect the outline of objects. TensorFlow Lite has one segmentation model, capable of classifying 20 different objects. Keep in mind that only reasonably sized objects can be recognized, not a scene of a highway with lots of tiny cars. The C++ examples can be found here for 64-bit. The 64-bit version is suitable for the Raspberry Pi 64-bit OS and Ubuntu.
This neural network tries to estimate a person's pose in a scene. It recognizes certain key features, like elbows, knees and ankles, in an image. TensorFlow Lite supports two models: a single-person and a multi-person version. We have only used the single-person model, because it gives reasonably good results when the person is centred and in full view in a square-like image. Please find the 64-bit Raspbian C++ example on our GitHub page.
Here, some frame rates are given for the several TensorFlow Lite models tested on a bare Raspberry Pi 4. The overclock frequencies are indications. Ubuntu always crashes above 1950 MHz when running deep learning models on all 4 cores simultaneously. Some models could run at 1950 MHz, others not higher than 1825 MHz. The Raspberry Pi 4's 32-bit and 64-bit operating systems are capable of clocking up to 1950 MHz for all examples.
Frame rates are based only on the model run time (interpreter->Invoke()). Grabbing and preprocessing of an image are not taken into account. Also noteworthy is the higher frame rate on Raspbian for the MobileNet models compared to Ubuntu. The guide to installing Ubuntu along with OpenCV and TensorFlow Lite can be found here. Overclocking is covered here. By the way, there is little difference in speed between Ubuntu 18.04 and 20.04. And as you can see, the Raspberry Pi 64-bit OS performs significantly better than Ubuntu.
|Raspberry Pi 4 64-bit OS|Raspberry Pi 4 64-bit OS|Raspberry Pi 4 64-bit Ubuntu 1850 MHz|Raspberry Pi 4 64-bit Ubuntu|Raspberry Pi 4 32-bit OS|Raspberry Pi 4 32-bit OS|
|21.5 FPS|24.0 FPS|17.2 FPS|17.0 FPS|13.8 FPS| |
|38.5 FPS|32.2 FPS|22.9 FPS|22.5 FPS|33.0 FPS|22.2 FPS|
|45.5 FPS|37 FPS|19.7 FPS|19.5 FPS|36.2 FPS|28.0 FPS|
|9.5 FPS|10.0 FPS|8.7 FPS|8.9 FPS|6.9 FPS| |
|1.7 FPS|2.0 FPS|1.8 FPS|1.6 FPS|1.3 FPS| |
|7.5 FPS|6.8 FPS|7.2 FPS| | | |
|4.0 FPS|3.6 FPS| | | | |
|10.3 FPS|9.2 FPS|9.4 FPS|8.7 FPS|5.0 FPS|4.3 FPS|
Python installation of TensorFlow Lite.
For completeness, the Python installation of TensorFlow Lite 2.1.0 is given here. It is only one command. There are no wheels for TensorFlow Lite versions 2.2.0, 2.3.0 and 2.3.1. Python examples can be found everywhere on the net. Google has also made an example here.
$ pip3 install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-linux_aarch64.whl