Install PyTorch on Jetson Nano - Q-engineering
PyTorch on Jetson Nano

Install PyTorch on Jetson Nano.

Last updated: July 12, 2021

Introduction.

This page will guide you through the installation of PyTorch 1.7.0, 1.7.1, 1.8.0, 1.8.1 or 1.9.0, TorchVision, LibTorch and Caffe2 on a Jetson Nano.

PyTorch is a software library developed specifically for deep learning. It consumes a lot of the Jetson Nano's resources, so don't expect miracles. It can run your models, but it can't realistically train new ones. Even so-called transfer learning can cause problems due to the limited amount of available RAM.

PyTorch runs on Python. A C++ API is available, but we have not tested it.

We cover two installation methods: one with a prebuilt Python 3 wheel, the other a build from scratch. Unfortunately, there is no official pip3 wheel available for the Jetson Nano, so we created these wheels and put them on GitHub for your convenience.
For PyTorch to work properly with the ARM NEON registers, we had to compile the framework with the clang compiler. This means you should also use clang if you are going to compile C++ code against it yourself; the GCC compiler will give you 'no expression' errors.

The wheel.

PyTorch is built with Ninja. It takes more than 5 hours to complete the whole build. We have posted the wheels on our GitHub page; feel free to use them. With all the tedious work already done, it now takes only a couple of minutes to install PyTorch on your Nano. For the diehards, the complete procedure is covered later in this manual.
PyTorch 1.9.
A few warnings about version 1.9.0: quite a few changes have been made to the software since the previous release, and not all operations and declarations are supported anymore. This can cause backward-compatibility issues when your 1.8 networks run on this new version.
PyTorch 1.8.
The whole shortcut procedure is found below. The wheels were too large to store on GitHub, so Google Drive is used instead. Please make sure you have the latest pip3 and python3 versions installed; otherwise, pip may respond with the message ".whl is not a supported wheel on this platform".

JetPack 4 comes with Python 3.6.9. Undoubtedly, the Python version will be upgraded over time, and you will then need a different wheel. See our GitHub page for all the wheels.
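The "not a supported wheel on this platform" error usually means a tag mismatch between the wheel filename and your interpreter. The filename encodes the Python, ABI, and platform tags (PEP 427). A small stdlib sketch, using a wheel name from this page, shows how to read them (the `wheel_tags` helper is ours, not part of pip):

```python
import sys

def wheel_tags(filename):
    """Split a wheel filename into (python_tag, abi_tag, platform_tag).

    Wheel names follow PEP 427: name-version(-build)?-python-abi-platform.whl
    """
    parts = filename[:-len(".whl")].split("-")
    return tuple(parts[-3:])

py_tag, abi_tag, plat_tag = wheel_tags(
    "torch-1.9.0a0+gitd69c22d-cp36-cp36m-linux_aarch64.whl")
print(py_tag, abi_tag, plat_tag)   # cp36 cp36m linux_aarch64

# cp36 means CPython 3.6; the running interpreter must match this tag
current = "cp%d%d" % (sys.version_info.major, sys.version_info.minor)
print("matches this interpreter:", py_tag == current)
```

If the Python tag does not match (for example, cp36 against a cp38 interpreter), pip refuses the wheel with exactly the message quoted above.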


PyTorch 1.9.0
# install the dependencies (if not already onboard)
$ sudo apt-get install python3-pip libopenblas-dev libopenmpi-dev libomp-dev
$ sudo -H pip3 install future
# upgrade setuptools 47.1.1 -> 57.0.0
$ sudo -H pip3 install --upgrade setuptools
$ sudo -H pip3 install Cython
# install gdown to download from Google drive
$ sudo -H pip3 install gdown
# copy binary
$ sudo cp ~/.local/bin/gdown /usr/local/bin/gdown
# download the wheel
$ gdown https://drive.google.com/uc?id=12UiREE6-o3BthhpjQxCKLtRg3u4ssPqb
# install PyTorch 1.9.0
$ sudo -H pip3 install torch-1.9.0a0+gitd69c22d-cp36-cp36m-linux_aarch64.whl
# clean up
$ rm torch-1.9.0a0+gitd69c22d-cp36-cp36m-linux_aarch64.whl
PyTorch 1.8.1
# install the dependencies (if not already onboard)
$ sudo apt-get install python3-pip libopenblas-dev libopenmpi-dev libomp-dev
$ sudo -H pip3 install future
# upgrade setuptools 47.1.1 -> 57.1.0
$ sudo -H pip3 install --upgrade setuptools
$ sudo -H pip3 install Cython
# install gdown to download from Google drive
$ sudo -H pip3 install gdown
# copy binary
$ sudo cp ~/.local/bin/gdown /usr/local/bin/gdown
# download the wheel
$ gdown https://drive.google.com/uc?id=1XL6k3wfWTJVKXHvCbZSfIVdz6IDJUAkt
# install PyTorch 1.8.1
$ sudo -H pip3 install torch-1.8.1a0+56b43f4-cp36-cp36m-linux_aarch64.whl
# clean up
$ rm torch-1.8.1a0+56b43f4-cp36-cp36m-linux_aarch64.whl
PyTorch 1.8.0
# install the dependencies (if not already onboard)
$ sudo apt-get install python3-pip libopenblas-dev libopenmpi-dev libomp-dev
$ sudo -H pip3 install future
# upgrade setuptools 47.1.1 -> 56.0.0
$ sudo -H pip3 install --upgrade setuptools
$ sudo -H pip3 install Cython
# install gdown to download from Google drive
$ sudo -H pip3 install gdown
# copy binary
$ sudo cp ~/.local/bin/gdown /usr/local/bin/gdown
# download the wheel
$ gdown https://drive.google.com/uc?id=1-XmTOEN0z1_-VVCI3DPwmcdC-eLT_-n3
# install PyTorch 1.8.0
$ sudo -H pip3 install torch-1.8.0a0+37c1f4a-cp36-cp36m-linux_aarch64.whl
# clean up
$ rm torch-1.8.0a0+37c1f4a-cp36-cp36m-linux_aarch64.whl
PyTorch 1.7.1
# install the dependencies (if not already onboard)
$ sudo apt-get install python3-pip libopenblas-dev libopenmpi-dev libomp-dev
$ sudo -H pip3 install future
# upgrade setuptools 47.1.1 -> 54.0.0
$ sudo -H pip3 install --upgrade setuptools
$ sudo -H pip3 install Cython
# install gdown to download from Google drive
$ sudo -H pip3 install gdown
# copy binary
$ sudo cp ~/.local/bin/gdown /usr/local/bin/gdown
# download the wheel
$ gdown https://drive.google.com/uc?id=1-b9rg2yGEdBATdUmIWcSqjkL1b0gvToQ
# install PyTorch 1.7.1
$ sudo -H pip3 install torch-1.7.1a0-cp36-cp36m-linux_aarch64.whl
# clean up
$ rm torch-1.7.1a0-cp36-cp36m-linux_aarch64.whl
PyTorch 1.7.0
# install the dependencies (if not already onboard)
$ sudo apt-get install python3-pip libopenblas-dev libopenmpi-dev libomp-dev
$ sudo -H pip3 install future
# upgrade setuptools 47.1.1 -> 54.0.0
$ sudo -H pip3 install --upgrade setuptools
$ sudo -H pip3 install Cython
# install gdown to download from Google drive
$ sudo -H pip3 install gdown
# copy binary
$ sudo cp ~/.local/bin/gdown /usr/local/bin/gdown
# download the wheel
$ gdown https://drive.google.com/uc?id=1aWuKu8eqkZwVzFFvguVuwkj0zdCir9qX
# install PyTorch 1.7.0
$ sudo -H pip3 install torch-1.7.0a0-cp36-cp36m-linux_aarch64.whl
# clean up
$ rm torch-1.7.0a0-cp36-cp36m-linux_aarch64.whl
After a successful installation, you can check PyTorch with the following commands.
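A minimal check, run in a regular Python 3 session, could look like this (the try/except is ours, so the snippet stays safe to run even when the wheel is not installed yet):

```python
# Sanity check after installing the wheel: report the version and
# whether the CUDA backend of the Jetson Nano is visible.
try:
    import torch
    version = torch.__version__
    cuda_ok = torch.cuda.is_available()
    print("PyTorch:", version)
    print("CUDA available:", cuda_ok)
except ImportError:
    version = None
    print("PyTorch is not installed")
```

On a Nano with the 1.9.0 wheel you should see the version string and CUDA reported as available.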



Installation from scratch.

Install PyTorch for Python 3.

Building PyTorch from scratch is relatively easy. Install some dependencies first, then download the sources from GitHub and finally build the software.
Note that the whole procedure takes about 8 hours on an overclocked Jetson Nano.
Most important: if you install PyTorch 1.7.1, modify the version number in the file ~/pytorch/version.txt from 1.7.0 to 1.7.1. It seems the developers forgot to adjust this number on GitHub. If you don't change it, you end up with a torch-1.7.0a0-cp36-cp36m-linux_aarch64.whl wheel, suggesting you still have the old version on your machine.
The same applies to version 1.8.1, which still has version number 1.8.0 in its ~/pytorch/version.txt file.
PyTorch 1.9.0
# get a fresh start
$ sudo apt-get update
$ sudo apt-get upgrade
# the dependencies
$ sudo apt-get install ninja-build git cmake
$ sudo apt-get install libopenmpi-dev libomp-dev ccache
$ sudo apt-get install libopenblas-dev libblas-dev libeigen3-dev
$ sudo -H pip3 install -U --user wheel mock pillow
# upgrade setuptools 47.1.1 -> 57.0.0
$ sudo -H pip3 install -U setuptools
$ sudo -H pip3 install scikit-build
# download PyTorch 1.9.0 with all its libraries
$ git clone -b v1.9.0 --depth=1 --recursive https://github.com/pytorch/pytorch.git
$ cd pytorch
# one command to install several dependencies in one go
# installs future, numpy, pyyaml, requests
# setuptools, six, typing_extensions, dataclasses
$ sudo pip3 install -r requirements.txt
PyTorch 1.8.1
# get a fresh start
$ sudo apt-get update
$ sudo apt-get upgrade
# the dependencies
$ sudo apt-get install ninja-build git cmake
$ sudo apt-get install libopenmpi-dev libomp-dev ccache
$ sudo apt-get install libopenblas-dev libblas-dev libeigen3-dev
$ sudo -H pip3 install -U --user wheel mock pillow
$ sudo -H pip3 install -U setuptools
$ sudo -H pip3 install scikit-build
# download PyTorch 1.8.1 with all its libraries
$ git clone -b v1.8.1 --depth=1 --recursive https://github.com/pytorch/pytorch.git
$ cd pytorch
# one command to install several dependencies in one go
# installs future, numpy, pyyaml, requests
# setuptools, six, typing_extensions, dataclasses
$ sudo pip3 install -r requirements.txt
PyTorch 1.8.0
# get a fresh start
$ sudo apt-get update
$ sudo apt-get upgrade
# the dependencies
$ sudo apt-get install ninja-build git cmake
$ sudo apt-get install libopenmpi-dev libomp-dev ccache
$ sudo apt-get install libopenblas-dev libblas-dev libeigen3-dev
$ sudo -H pip3 install -U --user wheel mock pillow
$ sudo -H pip3 install -U setuptools
$ sudo -H pip3 install scikit-build
# download PyTorch with all its libraries
$ git clone -b v1.8.0 --depth=1 --recursive https://github.com/pytorch/pytorch.git
$ cd pytorch
# one command to install several dependencies in one go
# installs future, numpy, pyyaml, requests
# setuptools, six, typing_extensions, dataclasses
$ sudo pip3 install -r requirements.txt
PyTorch 1.7.1
# get a fresh start
$ sudo apt-get update
$ sudo apt-get upgrade
# the dependencies
$ sudo apt-get install ninja-build git cmake
$ sudo apt-get install libopenmpi-dev libomp-dev ccache
$ sudo apt-get install libopenblas-dev libblas-dev libeigen3-dev
$ sudo -H pip3 install -U --user wheel mock pillow
$ sudo -H pip3 install -U setuptools
$ sudo -H pip3 install scikit-build
# download PyTorch with all its libraries
$ git clone -b v1.7.1 --depth=1 --recursive https://github.com/pytorch/pytorch.git
$ cd pytorch
# alter version number from 1.7.0 to 1.7.1 and close with <Ctrl>+<X>,<Y>,<Enter>
$ nano version.txt
# one command to install several dependencies in one go
# installs future, numpy, pyyaml, requests
# setuptools, six, typing_extensions, dataclasses
$ sudo pip3 install -r requirements.txt
PyTorch 1.7.0
# get a fresh start
$ sudo apt-get update
$ sudo apt-get upgrade
# the dependencies
$ sudo apt-get install ninja-build git cmake
$ sudo apt-get install libopenmpi-dev libomp-dev ccache
$ sudo apt-get install libopenblas-dev libblas-dev libeigen3-dev
$ sudo -H pip3 install -U --user wheel mock pillow
$ sudo -H pip3 install -U setuptools
$ sudo -H pip3 install scikit-build
# download PyTorch with all its libraries
$ git clone -b v1.7.0 --depth=1 --recursive https://github.com/pytorch/pytorch.git
$ cd pytorch
# one command to install several dependencies in one go
# installs future, numpy, pyyaml, requests
# setuptools, six, typing_extensions, dataclasses
$ sudo pip3 install -r requirements.txt

Preparations.

Before the build can begin, some preparations are required. First, you must have the latest clang compiler on your Jetson Nano. There is a constant stream of issues with the GNU compiler and the Jetson Nano when compiling PyTorch. Usually, it has to do with poor support of the NEON architecture of the ARM cores, causing floating points to be truncated.


Oddly enough, the clang compiler doesn't seem to have any problem with the code, so it is time to use clang. We know some people dislike clang. The GNU compiler used to be superior to clang, but those days are long gone; today, both compilers perform almost identically.
# install the clang compiler
$ sudo apt-get install clang
Next, you have to modify the PyTorch code you just downloaded from GitHub. These alterations guard the NEON code for the compiler and limit the maximum number of CUDA threads available at runtime. There are four places that need attention.

~/pytorch/aten/src/ATen/cpu/vec256/vec256_float_neon.h
Around line 28, add #if defined(__clang__) || (__GNUC__ > 8 || (__GNUC__ == 8 && __GNUC_MINOR__ > 3)) and the matching #endif at the end of the guarded block.


~/pytorch/aten/src/ATen/cuda/CUDAContext.cpp
Around line 24 add an extra line device_prop.maxThreadsPerBlock = device_prop.maxThreadsPerBlock / 2;


~/pytorch/aten/src/ATen/cuda/detail/KernelUtils.h
In line 26 change the constant from 1024 to 512.


~/pytorch/aten/src/THCUNN/common.h
At line 22, make the same modification: change CUDA_NUM_THREADS from 1024 to 512.


With all preparations done, we can now set the environment parameters so that Ninja gets the correct instructions on how we want PyTorch built. Note that these settings are only valid in the current terminal; if you start the build in another terminal, you will have to set them again.

Note also the symbolic link at the end of the instructions. NVIDIA has moved the cublas library from /usr/local/cuda/lib64/ to the /usr/lib/aarch64-linux-gnu/ folder, leaving much software, like PyTorch, with broken links. A symlink is the best workaround here.

Another noteworthy point is TORCH_CUDA_ARCH_LIST. Not only is the Jetson Nano's CUDA compute capability 5.3 given, but also 6.2 and 7.2 for the Jetson TX2 and Xavier, so the wheel supports those devices as well.
# set NINJA parameters
$ export BUILD_CAFFE2_OPS=OFF
$ export USE_FBGEMM=OFF
$ export USE_FAKELOWP=OFF
$ export BUILD_TEST=OFF
$ export USE_MKLDNN=OFF
$ export USE_NNPACK=OFF
$ export USE_XNNPACK=OFF
$ export USE_QNNPACK=OFF
$ export USE_PYTORCH_QNNPACK=OFF
$ export USE_CUDA=ON
$ export USE_CUDNN=ON
$ export TORCH_CUDA_ARCH_LIST="5.3;6.2;7.2"
$ export USE_NCCL=OFF
$ export USE_SYSTEM_NCCL=OFF
$ export USE_OPENCV=OFF
$ export MAX_JOBS=4
# set path to ccache
$ export PATH=/usr/lib/ccache:$PATH
# set clang compiler
$ export CC=clang
$ export CXX=clang++
# create symlink to cublas
$ sudo ln -s /usr/lib/aarch64-linux-gnu/libcublas.so /usr/local/cuda/lib64/libcublas.so
# start the build
$ python3 setup.py bdist_wheel
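Because the exports above live only in the current shell, it is easy to lose one when switching terminals. A small hedged Python helper (the names and values below are simply copied from the export list in this guide; adjust to taste) can confirm nothing is missing before you start the multi-hour build:

```python
import os

# Expected build flags, mirroring the export list above
expected = {
    "USE_CUDA": "ON",
    "USE_CUDNN": "ON",
    "USE_NCCL": "OFF",
    "TORCH_CUDA_ARCH_LIST": "5.3;6.2;7.2",
    "MAX_JOBS": "4",
    "CC": "clang",
    "CXX": "clang++",
}
# collect every flag whose current value differs from the expected one
missing = {k: v for k, v in expected.items()
           if os.environ.get(k) != v}
if missing:
    print("Not set as expected:", ", ".join(sorted(missing)))
else:
    print("All build flags present")
```

Run it in the same terminal as the exports; an empty report means you are good to launch the build.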


Once Ninja has finished the build, you can install PyTorch on your Jetson Nano with the generated wheel. Follow the instructions below.
# install the wheel
$ cd dist
$ sudo -H pip3 install torch-1.9.0a0+gitd69c22d-cp36-cp36m-linux_aarch64.whl
# install the wheel
$ cd dist
$ sudo -H pip3 install torch-1.8.1a0+56b43f4-cp36-cp36m-linux_aarch64.whl
# install the wheel
$ cd dist
$ sudo pip3 install torch-1.8.0a0+37c1f4a-cp36-cp36m-linux_aarch64.whl
# install the wheel
$ cd dist
$ sudo pip3 install torch-1.7.1a0-cp36-cp36m-linux_aarch64.whl
# install the wheel
$ cd dist
$ sudo pip3 install torch-1.7.0a0-cp36-cp36m-linux_aarch64.whl
After successful installation, you can check PyTorch with the commands given at the end of the previous section.


One word about OpenCV. PyTorch has the option to use OpenCV. However, it links hard-coded against the OpenCV version found during the build. As soon as you upgrade OpenCV, PyTorch will stop working because it can't find the old version. Given OpenCV's habit of releasing two or three versions a year, it seems unwise to link PyTorch with OpenCV; otherwise, you will be forced to recompile PyTorch or manually create a whole bunch of symbolic links to the old libraries.

After a successful installation, many files are no longer needed. Removing them frees up about 3.6 GB of disk space.
# remove the whole folder
$ sudo rm -rf ~/pytorch

TorchVision.

Install torchvision on Jetson Nano.

Torchvision is a collection of frequently used datasets, model architectures, and image transformations. The installation is simple when you use one of our wheels found on GitHub. You can also build torchvision from scratch; in that case, download the version of your choice from the official GitHub page, modify the version number at line 32 in setup.py and issue the command $ python3 setup.py bdist_wheel.
Torchvision assumes PyTorch is installed on your machine beforehand.

Used with PyTorch 1.9.0
# the dependencies
$ sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev
$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev
# install gdown to download from Google drive, if not done yet
$ sudo -H pip3 install gdown
# copy binary
$ sudo cp ~/.local/bin/gdown /usr/local/bin/gdown
# download TorchVision 0.10.0
$ gdown https://drive.google.com/uc?id=1tU6YlPjrP605j4z8PMnqwCSoP6sSC91Z
# install TorchVision 0.10.0
$ sudo -H pip3 install torchvision-0.10.0a0+300a8a4-cp36-cp36m-linux_aarch64.whl
# clean up
$ rm torchvision-0.10.0a0+300a8a4-cp36-cp36m-linux_aarch64.whl
Used with PyTorch 1.8.1
# the dependencies
$ sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev
$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev
# install gdown to download from Google drive, if not done yet
$ sudo -H pip3 install gdown
# copy binary
$ sudo cp ~/.local/bin/gdown /usr/local/bin/gdown
# download TorchVision 0.9.1
$ gdown https://drive.google.com/uc?id=1HYmjUrv9o2hZWVz7GpGplaKhqMPMtESL
# install TorchVision 0.9.1
$ sudo -H pip3 install torchvision-0.9.1a0+8fb5838-cp36-cp36m-linux_aarch64.whl
# clean up
$ rm torchvision-0.9.1a0+8fb5838-cp36-cp36m-linux_aarch64.whl
Used with PyTorch 1.8.0
# the dependencies
$ sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev
$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev
# install gdown to download from Google drive, if not done yet
$ sudo -H pip3 install gdown
# copy binary
$ sudo cp ~/.local/bin/gdown /usr/local/bin/gdown
# download TorchVision 0.9.0
$ gdown https://drive.google.com/uc?id=1BdvXkwUGGTTamM17Io4kkjIT6zgvf4BJ
# install TorchVision 0.9.0
$ sudo -H pip3 install torchvision-0.9.0a0+01dfa8e-cp36-cp36m-linux_aarch64.whl
# clean up
$ rm torchvision-0.9.0a0+01dfa8e-cp36-cp36m-linux_aarch64.whl
Used with PyTorch 1.7.1
# the dependencies
$ sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev
$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev
# install gdown to download from Google drive, if not done yet
$ sudo -H pip3 install gdown
# copy binary
$ sudo cp ~/.local/bin/gdown /usr/local/bin/gdown
# download TorchVision 0.8.2
$ gdown https://drive.google.com/uc?id=1Z14mNdwgnElOb_NYkRaDCwP31scd7Mfz
# install TorchVision 0.8.2
$ sudo -H pip3 install torchvision-0.8.2a0+2f40a48-cp36-cp36m-linux_aarch64.whl
# clean up
$ rm torchvision-0.8.2a0+2f40a48-cp36-cp36m-linux_aarch64.whl
Used with PyTorch 1.7.0
# the dependencies
$ sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev
$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev
# install gdown to download from Google drive, if not done yet
$ sudo -H pip3 install gdown
# copy binary
$ sudo cp ~/.local/bin/gdown /usr/local/bin/gdown
# download TorchVision 0.8.1
$ gdown https://drive.google.com/uc?id=1WhplBjODLjNmYWEvQliCdkt3CqQTsClm
# install TorchVision 0.8.1
$ sudo -H pip3 install torchvision-0.8.1a0+45f960c-cp36-cp36m-linux_aarch64.whl
# clean up
$ rm torchvision-0.8.1a0+45f960c-cp36-cp36m-linux_aarch64.whl
After installation you may want to check torchvision by verifying the release version.
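A quick version check, again guarded with a try/except of our own so it also runs harmlessly before installation:

```python
# Report the installed torchvision release and the PyTorch it runs on.
try:
    import torch
    import torchvision
    tv_version = torchvision.__version__
    print("torchvision:", tv_version, "| torch:", torch.__version__)
except ImportError:
    tv_version = None
    print("torchvision is not installed")
```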


LibTorch.

Install LibTorch on Jetson Nano.

PyTorch has a Python interface on the outside, which makes programming easy for most users. There are situations, however, where Python is unfavourable. In the production phase, for example, you want speed that a Python environment cannot deliver; you need the low-level C++ engine PyTorch uses behind the curtains. PyTorch makes most of this functionality available through the LibTorch C++ API.

There are two possible ways to install LibTorch on your Jetson Nano. The first method is to download the tar.xz file and extract it. All necessary libraries and headers are installed in the LibTorch folder, as seen in the screenshot below.
# install gdown to download from Google drive, if not done yet
$ sudo -H pip3 install gdown
# copy binary
$ sudo cp ~/.local/bin/gdown /usr/local/bin/gdown
# download LibTorch 1.9.0
$ gdown https://drive.google.com/uc?id=1SA9DpJoTS3Q7kDAz5e_Rr4oTc-TGjLR7
# unpack the LibTorch 1.9.0 tar ball
$ sudo tar -xf libTorch_1.9.0.tar.xz
# clean up
$ rm libTorch_1.9.0.tar.xz

The other way is to compile the LibTorch C++ API from scratch. The whole procedure is almost identical to the original Python installation; follow the instructions below if you want to compile and install the libraries yourself. If you want dynamic libraries (libtorch.so) instead of static ones (libtorch.a), set the environment flag BUILD_SHARED_LIBS=ON.
# First, download and install the dependencies and
# your PyTorch version of your choice as specified above.
# Follow all steps up until the environment variables.
# Don't forget to apply the source file modifications described above.

$ cd ~/pytorch
$ mkdir build_libtorch
$ cd build_libtorch
# now set the temporary environment variables for LibTorch
# remember, don't close the window as it will delete these variables
$ export BUILD_CAFFE2_OPS=OFF
$ export USE_FBGEMM=OFF
$ export USE_FAKELOWP=OFF
$ export BUILD_TEST=OFF
$ export USE_MKLDNN=OFF
$ export USE_NNPACK=OFF
$ export USE_XNNPACK=OFF
$ export USE_QNNPACK=OFF
$ export USE_PYTORCH_QNNPACK=OFF
$ export USE_CUDA=ON
$ export USE_CUDNN=ON
$ export TORCH_CUDA_ARCH_LIST="5.3;6.2;7.2"
$ export MAX_JOBS=4
$ export USE_NCCL=OFF
$ export USE_OPENCV=OFF
$ export USE_SYSTEM_NCCL=OFF
$ export BUILD_SHARED_LIBS=OFF
$ export PATH=/usr/lib/ccache:$PATH
# set clang compiler
$ export CC=clang
$ export CXX=clang++
# create symlink to cublas
$ sudo ln -s /usr/lib/aarch64-linux-gnu/libcublas.so /usr/local/cuda/lib64/libcublas.so
# clean up the previous build, if necessary
$ python3 setup.py clean
# start the build
$ python3 ../tools/build_libtorch.py

More information about the C++ API library can be found on the PyTorch site, including guides on loading your TorchScript models in C++.


Caffe2.

Install Caffe2 on Jetson Nano.

PyTorch comes with Caffe2 on board. In other words, if you have installed PyTorch, you have also installed Caffe2 with CUDA support on your Jetson Nano, together with two conversion tools. Before using Caffe2, protobuf usually needs to be updated, so let's do that right away.
# update protobuf (3.15.5)
$ sudo -H pip3 install -U protobuf
You can check the installation of Caffe2 with a few Python instructions.
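A few guarded Python instructions along these lines will do; workspace.has_gpu_support is part of the Caffe2 Python API, and the broad except is ours so the snippet also runs where Caffe2 is absent:

```python
# Caffe2 ships inside the PyTorch wheel; check that it imports and
# whether it was built with GPU support.
try:
    from caffe2.python import workspace
    gpu = workspace.has_gpu_support
    print("Caffe2 imported, GPU support:", gpu)
except Exception:
    gpu = None
    print("Caffe2 is not available")
```

On a Jetson Nano with one of the wheels above, GPU support should be reported as True.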
