Install ARMnn on Raspberry Pi 4 - Q-engineering
Install ARMnn on Raspberry Pi 4

Install ARMnn deep learning framework on a Raspberry Pi 4.

Introduction.

This page guides you through the installation of the ARMnn framework on a Raspberry Pi 4. It is part of our Top 5 benchmark. The given C++ code examples are written in the Code::Blocks IDE for the Raspberry Pi 4. We cover only the basics, so that in the end you are able to build your own application. For more information on the ARMnn library see: https://developer.arm.com/solutions/machine-learning-on-arm/developer-material.
Dependencies.
ARMnn is the largest framework we've come across; it requires more than 3.1 GByte of disk space. The main reason for this footprint is its dependencies on some large libraries, such as TensorFlow and Boost. ARM recommends cross-compilation: a fast, powerful computer generates the code, which is then transferred to the Raspberry Pi. However, cross-development can be cumbersome when it comes to debugging; constantly exchanging files to see if your code works properly is not user-friendly. We therefore build everything natively on the Raspberry Pi itself.
Let's begin with the well-known fresh start and then download the required software.
# fresh start in the morning
$ sudo apt-get update
$ sudo apt-get upgrade
# some tools
$ sudo apt-get install scons git wget
$ sudo apt-get install autoconf
$ sudo apt-get install libtool
# make the directory
$ mkdir armnn-pi
$ cd armnn-pi
$ export BASEDIR=`pwd`
# get the ARM libraries
$ git clone https://github.com/Arm-software/ComputeLibrary.git
$ git clone https://github.com/Arm-software/armnn
# and the dependencies
$ wget https://dl.bintray.com/boostorg/release/1.64.0/source/boost_1_64_0.tar.bz2
$ git clone -b v3.5.0 https://github.com/google/protobuf.git
$ git clone https://github.com/google/flatbuffers.git
$ git clone https://github.com/tensorflow/tensorflow.git
$ cd tensorflow
$ git checkout 590d6eef7e91a6a7392c8ffffb7b58f2e0c8bc6b
One word about the line export BASEDIR=`pwd`. Here you define a variable BASEDIR holding the string of the print working directory, your current folder. The variable is lost when the operating system shuts down. So, if you continue the installation after a restart, first set this variable again with cd armnn-pi && export BASEDIR=`pwd`.
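If you prefer not to retype the export after every reboot, the variable can also be re-created like this. A minimal sketch, assuming the armnn-pi folder sits in your home directory:

```shell
# re-create the BASEDIR variable, e.g. after a reboot
# (mkdir -p is a no-op when the folder already exists)
mkdir -p ~/armnn-pi
cd ~/armnn-pi
export BASEDIR=`pwd`
echo $BASEDIR
# to make the variable permanent, append the export to ~/.bashrc:
# echo "export BASEDIR=$HOME/armnn-pi" >> ~/.bashrc
```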

The first step in installing ARMnn on the Raspberry Pi is to build ARM's Compute Library. The ARMnn framework uses this library to optimize its deep learning routines for ARM CPUs and GPUs. It takes about 33 minutes on a Raspberry Pi 4 to build the entire library.
$ cd $BASEDIR/ComputeLibrary
$ scons -j 4 extra_cxx_flags="-fPIC" \
        Werror=0 debug=0 asserts=0 neon=1 \
        opencl=0 os=linux arch=armv7a examples=1

The next library is Boost. Most people already have Boost on their system, often installed with sudo apt-get install libboost-dev. However, the ARMnn framework needs the statically built C++ libraries with all their headers. That is why we build the Boost library from scratch. First, unpack the archive and get some administration done.
# unpack the tar (± 3 min)
$ tar xf boost_1_64_0.tar.bz2
# run the scripts
$ cd $BASEDIR/boost_1_64_0/tools/build
$ ./bootstrap.sh
$ ./b2 install --prefix=$BASEDIR/boost.build
# incorporate the bin dir into PATH
$ export PATH=$BASEDIR/boost.build/bin:$PATH

Note the line export PATH=$BASEDIR/boost.build/bin:$PATH. Here you include the directory with executables, like b2, in the PATH environment variable. This setting is also lost when the operating system shuts down, so keep working until the Boost installation is complete.
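To see what this PATH extension actually does, here is a small self-contained sketch. A scratch folder stands in for boost.build/bin, and the b2-demo script is purely illustrative:

```shell
# create a scratch bin directory with a dummy executable
mkdir -p /tmp/path-demo/bin
printf '#!/bin/sh\necho b2-demo\n' > /tmp/path-demo/bin/b2-demo
chmod +x /tmp/path-demo/bin/b2-demo
# prepend the directory to PATH, just like the boost.build/bin line above
export PATH=/tmp/path-demo/bin:$PATH
# the shell now finds the executable without a full path
command -v b2-demo
b2-demo
```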

Before the actual build, we need to set the correct compiler in the b2 project file. This is done by first copying the example user-config.jam to project-config.jam and then editing it with nano. Please follow the instructions.
# copy the user-config to project-config
$ cp $BASEDIR/boost_1_64_0/tools/build/example/user-config.jam \
     $BASEDIR/boost_1_64_0/project-config.jam
# change the directory
$ cd $BASEDIR/boost_1_64_0
# start editor
$ nano project-config.jam
# add the line: using gcc : arm : arm-linux-gnueabihf-g++ ;
# save with <Ctrl+X>, <Y>, <ENTER>
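If you prefer a non-interactive route instead of nano, the toolset line can also be appended from the command line. A minimal sketch on a scratch copy; in the real installation, run the echo inside $BASEDIR/boost_1_64_0 against the project-config.jam copied above:

```shell
# scratch demonstration of appending the toolset line
rm -rf /tmp/jam-demo && mkdir /tmp/jam-demo
cd /tmp/jam-demo
touch project-config.jam
# append the gcc toolset line, exactly as entered in nano
echo "using gcc : arm : arm-linux-gnueabihf-g++ ;" >> project-config.jam
# verify it is in place
grep "using gcc" project-config.jam
```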


With the proper compiler given, the b2 tool can build the Boost library. Some warnings may be generated, but they can all be ignored.
$ b2 --build-dir=$BASEDIR/boost_1_64_0/build -j 4 \
    toolset=gcc-arm \
    link=static \
    cxxflags=-fPIC \
    --with-filesystem \
    --with-test \
    --with-log \
    --with-program_options install \
    --prefix=$BASEDIR/boost

After Boost, Google's protobuf library is installed. This library must also be built statically, with all its headers in place. So even if it was installed earlier with sudo apt-get install protobuf-compiler, it has to be compiled again before the ARMnn framework can use it. This action takes about 15 minutes.
$ cd $BASEDIR/protobuf
$ git submodule update --init --recursive
$ ./autogen.sh
$ ./configure --prefix=$BASEDIR/protobuf-host
$ make -j4
$ make install
$ make clean

Two more libraries to go. The first one is TensorFlow, the second is FlatBuffers, used by TensorFlow Lite. Build both with the following commands. It takes about 5 minutes to complete.
# tensorflow
$ cd $BASEDIR/tensorflow
$ ../armnn/scripts/generate_tensorflow_protobuf.sh \
  ../tensorflow-protobuf ../protobuf-host
# flatbuffers
$ cd $BASEDIR/flatbuffers
$ cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Release
$ make -j4

Install ARMnn.

With all the dependencies in place, we can finally build the ARMnn framework. The procedure is identical to the one used to install OpenCV. First you create a build directory, next you run cmake with the options, followed by a make command. The compilation takes 55 minutes on a Raspberry Pi 4.
# create a build directory
$ cd $BASEDIR/armnn
$ mkdir build
$ cd build
# run cmake
$ cmake -DCMAKE_LINKER=/usr/bin/arm-linux-gnueabihf-ld \
        -DCMAKE_C_COMPILER=/usr/bin/arm-linux-gnueabihf-gcc \
        -DCMAKE_CXX_COMPILER=/usr/bin/arm-linux-gnueabihf-g++ \
        -DCMAKE_C_COMPILER_FLAGS=-fPIC \
        -DCMAKE_CXX_FLAGS=-mfpu=neon \
        -DARMCOMPUTE_ROOT=$BASEDIR/ComputeLibrary \
        -DARMCOMPUTE_BUILD_DIR=$BASEDIR/ComputeLibrary/build \
        -DBOOST_ROOT=$BASEDIR/boost \
        -DBUILD_TF_PARSER=1 \
        -DTF_GENERATED_SOURCES=$BASEDIR/tensorflow-protobuf \
        -DPROTOBUF_ROOT=$BASEDIR/protobuf-host \
        -DBUILD_TF_LITE_PARSER=1 \
        -DTF_LITE_GENERATED_PATH=$BASEDIR/tensorflow/tensorflow/lite/schema \
        -DFLATBUFFERS_ROOT=$BASEDIR/flatbuffers \
        -DFLATBUFFERS_LIBRARY=$BASEDIR/flatbuffers/libflatbuffers.a \
        -DARMCOMPUTENEON=1 \
        -DBUILD_TESTS=1 \
        -DARMNNREF=1 ..
# compile the cmake script
$ make -j4

After installation, you can check the ARMnn library by running the UnitTests program in the build directory ($ ./UnitTests). It currently runs 2672 tests. In our case, 31 failures were detected. We have not investigated them further; most failures are related to temporarily unavailable resources.

Below is the Code::Blocks project file used for our quantized MobileNetV1 TensorFlow Lite example, with all include and library paths in place.

						
						<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
						<CodeBlocks_project_file>
							<FileVersion major="1" minor="6" />
							<Project>
								<Option title="TestARMnnMobileNetV1_Quant" />
								<Option pch_mode="2" />
								<Option compiler="gcc" />
								<Build>
									<Target title="Debug">
										<Option output="bin/Debug/ARMnnMobileLite" prefix_auto="1" extension_auto="1" />
										<Option object_output="obj/Debug/" />
										<Option type="1" />
										<Option compiler="gcc" />
										<Compiler>
											<Add option="-g" />
										</Compiler>
									</Target>
									<Target title="Release">
										<Option output="bin/Release/ARMnnMobileLite" prefix_auto="1" extension_auto="1" />
										<Option object_output="obj/Release/" />
										<Option type="1" />
										<Option compiler="gcc" />
										<Option parameters="-m ./mobilenet_v1_1.0_224_quant.tflite -p ./labels.txt -d ./cat.jpg -c CpuAcc" />
										<Compiler>
											<Add option="-O3" />
										</Compiler>
										<Linker>
											<Add option="-s" />
										</Linker>
									</Target>
								</Build>
								<Compiler>
									<Add option="-O3" />
									<Add option="-Wall" />
									<Add option="-std=c++14" />
									<Add option="-fexceptions" />
									<Add option="-pthread" />
									<Add option="-fPIE" />
									<Add directory="/home/pi/armnn-pi/boost/include" />
									<Add directory="/home/pi/armnn-pi/armnn/include" />
									<Add directory="/home/pi/armnn-pi/armnn/src/backends" />
									<Add directory="/home/pi/armnn-pi/armnn/include/armnn" />
									<Add directory="/home/pi/armnn-pi/armnn/src/armnnUtils" />
									<Add directory="/home/pi/armnn-pi/armnn/tests" />
								</Compiler>
								<Linker>
									<Add option="-O3" />
									<Add option="-pthread" />
									<Add option="-pie" />
									<Add library="libarmnn" />
									<Add library="libarmnnTfLiteParser" />
									<Add library="libinferenceTest" />
									<Add library="libboost_filesystem" />
									<Add library="libboost_program_options" />
									<Add directory="/home/pi/armnn-pi/armnn/build" />
									<Add directory="/home/pi/armnn-pi/armnn/build/tests" />
									<Add directory="/home/pi/armnn-pi/boost/lib" />
								</Linker>
								<Unit filename="mobilenetv1_quant_tflite.cpp" />
								<Extensions>
									<code_completion />
									<debugger />
								</Extensions>
							</Project>
						</CodeBlocks_project_file>
						
						
						