Install TensorFlow Addons on Jetson Nano - Q-engineering

Install TensorFlow Addons on Jetson Nano


Introduction.

TensorFlow Addons provide additional functionality not yet included in the core of TensorFlow. Once an Addon has proven useful, it is merged into the TensorFlow package itself. The other algorithms remain in the Addons collection, appreciated only by a select few in the TF community.

Due to the experimental nature of the package, versions have a short lifespan. Fourteen versions have been released in 2020 alone.
We hope to keep up with the pace. If not, please let us know, and we will try to provide the missing version on our GitHub page.
TensorFlow.
As said, the Addons work on top of the TensorFlow framework, so you need a working TensorFlow installation on your system. If required, you can install a recent TensorFlow version according to one of our guides. You can check which Addons release matches your TensorFlow version here.
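To see how such a version check could work in practice, here is a small sketch. The table entries below are examples only, not the official compatibility list; always consult the real table linked above.

```python
# Illustrative only: a tiny helper that maps an Addons release to the
# TensorFlow versions it was built against. The entries below are
# EXAMPLES - consult the official compatibility table for real pairings.
COMPAT = {
    "0.13": ("2.3", "2.4", "2.5"),
    "0.12": ("2.3", "2.4"),
}

def compatible(addons_version, tf_version):
    """True if the TF major.minor release is listed for the Addons release."""
    addons_key = ".".join(addons_version.split(".")[:2])
    tf_key = ".".join(tf_version.split(".")[:2])
    return tf_key in COMPAT.get(addons_key, ())

print(compatible("0.13.0", "2.4.1"))  # True with the example table above
```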

The wheel.

Once you have TensorFlow up and running, you can install the Addons. The easiest way to install TensorFlow Addons is by using the wheel we have placed on our GitHub page. It is the outcome of the time-consuming build from scratch with Bazel, described in the next paragraph. Please follow the instructions.
# download the wheel
$ wget https://github.com/Qengineering/TensorFlow-Addons-Jetson-Nano/raw/main/tensorflow_addons-0.13.0.dev0-cp36-cp36m-linux_aarch64.whl
# install the wheel
$ sudo -H pip3 install tensorflow_addons-0.13.0.dev0-cp36-cp36m-linux_aarch64.whl
You can now check the installation by importing the package in Python.
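A minimal check script could look like this (the `addon_installed` helper is ours, not part of the package). On the Nano, a successful install prints the Addons version.

```python
# Minimal installation check. A successful install prints the
# Addons version; otherwise it points you back to the pip3 step.
import importlib.util

def addon_installed(name="tensorflow_addons"):
    """True if the import system can locate the package."""
    return importlib.util.find_spec(name) is not None

if addon_installed():
    import tensorflow_addons as tfa
    print("TensorFlow Addons", tfa.__version__, "found")
else:
    print("tensorflow_addons not found - check the pip3 install step")
```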


Installation from scratch.

Building the TensorFlow-Addons from scratch is not so difficult. It only takes time to compile all code. In the end, you get the same wheel as we put on GitHub. If you want to save some time, feel free to use this wheel. In case you want to build the Addons yourself, here's the complete guide.
Bazel.
First, you need Bazel, a widely used build tool comparable to CMake. We already used Bazel when building TensorFlow from scratch. Please see this paragraph on how to install Bazel on your Jetson Nano.
TensorFlow code.
The Addons use the TensorFlow code for the CUDA acceleration. You need to download the TensorFlow repo and issue a ./configure command. You don't actually build TensorFlow; you just set up the environment so Bazel can determine which CUDA architecture it's facing. We used TensorFlow 2.4.1 in the instructions, but it can be any version of your choice. You only have to change the version numbers.
# download TensorFlow 2.4.1
$ wget -O tensorflow.zip https://github.com/tensorflow/tensorflow/archive/v2.4.1.zip
# unpack and give the folder a convenient name
$ unzip tensorflow.zip
$ mv tensorflow-2.4.1 tensorflow
$ cd tensorflow
# reveal the CUDA location
$ sudo sh -c "echo '/usr/local/cuda/lib64' >> /etc/ld.so.conf.d/nvidia-tegra.conf"
$ sudo ldconfig
# give the settings
$ ./configure
jetson@nano:~/tensorflow$ ./configure
You have bazel 3.1.0- (@non-git) installed.
Please specify the location of python. [Default is /usr/bin/python3]: <enter>

Found possible Python library paths:
 /usr/local/lib/python3.6/dist-packages
 /usr/lib/python3.6/dist-packages
 /usr/lib/python3/dist-packages
Please input the desired Python library path to use.  Default is [/usr/local/lib/python3.6/dist-packages] <enter>
/usr/lib/python3/dist-packages

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with ROCm support? [y/N]: n
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.

Do you wish to build TensorFlow with TensorRT support? [y/N]: y
TensorRT support will be enabled for TensorFlow.

Found CUDA 10.2 in:
   /usr/local/cuda-10.2/targets/aarch64-linux/lib
   /usr/local/cuda-10.2/targets/aarch64-linux/include
Found cuDNN 8 in:
   /usr/lib/aarch64-linux-gnu
   /usr/include
Found TensorRT 7 in:
   /usr/lib/aarch64-linux-gnu
   /usr/include/aarch64-linux-gnu

Please specify a list of comma-separated CUDA compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus. Each capability can be specified as "x.y" or "compute_xy" to include both virtual and binary GPU code, or as "sm_xy" to only include the binary code.
Please note that each additional compute capability significantly increases your build time and binary size, and that TensorFlow only supports compute capabilities >= 3.5 [Default is: 3.5,7.0]: 5.3

Do you want to use clang as CUDA compiler? [y/N]: n
nvcc will be used as CUDA compiler.

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]: <enter>

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]: <enter>

Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
--config=mkl          # Build with MKL support.
--config=monolithic   # Config for mostly static monolithic build.
--config=ngraph       # Build with Intel nGraph support.
--config=numa         # Build with NUMA support.
--config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
--config=v2           # Build TensorFlow 2.x instead of 1.x.
Preconfigured Bazel build configs to DISABLE default on features:
--config=noaws        # Disable AWS S3 filesystem support.
--config=nogcp        # Disable GCP support.
--config=nohdfs       # Disable HDFS support.
--config=nonccl       # Disable NVIDIA NCCL support.
Configuration finished

Building the Addons.

After the installation of bazel and preparing the TensorFlow code, you can now download the Addons repo.
Before you start compiling the Addons, you have to modify the configure.py script to set the right environment variables for the Bazel build. Another script that needs attention is build_pip_pkg.sh, where we must change the 'old' python to python3 to get the wheel generation working.
We have made a pull request to the TensorFlow Addons community to adapt configure.py and build_pip_pkg.sh for the Jetson Nano. Until they approve the pull request, please use the scripts on our GitHub page. See the following commands.
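In essence, configure.py records the CUDA build settings for Bazel. Roughly, the kind of settings involved look like this (the values below are illustrative for a Jetson Nano with JetPack's CUDA 10.2, not a copy of the script's output):

```shell
# Illustrative only - the kind of settings the (modified) configure.py
# records for the Bazel build on a Jetson Nano with CUDA 10.2 / cuDNN 8.
export TF_NEED_CUDA=1
export CUDA_TOOLKIT_PATH=/usr/local/cuda
export CUDNN_INSTALL_PATH=/usr/lib/aarch64-linux-gnu
export TF_CUDA_VERSION=10.2
export TF_CUDNN_VERSION=8
```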
# get the addons
$ cd ~
$ git clone --depth=1 https://github.com/tensorflow/addons.git
# get the modified configure script and replace the original one
$ wget https://github.com/Qengineering/TensorFlow-Addons-Jetson-Nano/raw/main/configure.py
$ mv configure.py ./addons/
# get the modified build_pip_pkg.sh script and replace the original one
$ wget https://github.com/Qengineering/TensorFlow-Addons-Jetson-Nano/raw/main/build_pip_pkg.sh
$ mv build_pip_pkg.sh ./addons/build_deps
# symlink the tensorflow lib to /usr/lib
$ sudo ln -s /usr/local/lib/python3.6/dist-packages/tensorflow/python/_pywrap_tensorflow_internal.so /usr/lib/lib_pywrap_tensorflow_internal.so
# the Jetson Nano configuration will be set
$ cd ~/addons
$ python3 ./configure.py
# and start the build
$ bazel clean
$ bazel build build_pip_pkg
# finish with the wheel generation
$ sudo bazel-bin/build_pip_pkg /tmp/tensoraddons_pkg
# install the wheel
$ cd /tmp/tensoraddons_pkg
$ sudo -H pip3 install tensorflow_addons-0.13.0.dev0-cp36-cp36m-linux_aarch64.whl
# you can remove the addons folder, as it is no longer needed
$ sudo rm -rf ~/addons
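To confirm the freshly built Addons actually work, you can run a small smoke test on the Nano itself. This is a sketch assuming Addons 0.13 with TF 2.4; the imports are wrapped so the script reports a missing install instead of crashing outright.

```python
# Small smoke test for the freshly built Addons - run it on the
# Jetson Nano. The imports live inside the function so the script
# reports a missing install instead of crashing outright.

def smoke_test():
    import tensorflow as tf
    import tensorflow_addons as tfa
    # gelu is one of the extra activations the Addons provide
    x = tf.constant([-1.0, 0.0, 1.0])
    print("gelu:", tfa.activations.gelu(x).numpy())
    # AdamW (Adam with decoupled weight decay) is one of the extra optimizers
    opt = tfa.optimizers.AdamW(weight_decay=1e-4, learning_rate=1e-3)
    print("optimizer:", opt.get_config()["name"])

if __name__ == "__main__":
    try:
        smoke_test()
    except ImportError as err:
        print("TensorFlow (Addons) not available:", err)
```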