PyTorch on Jetson Nano

Install PyTorch on Jetson Nano.

Introduction.

This page will guide you through the installation of PyTorch 1.7.0, 1.7.1 or 1.8.0, TorchVision and Caffe2 on a Jetson Nano.

PyTorch is a software library specially developed for deep learning. It consumes a lot of the resources of your Jetson Nano, so don't expect miracles. It can run your models, but it cannot train new ones. Even so-called transfer learning can cause problems due to the limited amount of available RAM.

PyTorch runs on Python. A C++ API is available, but we have not tested it.

We discuss two installation methods: one with a prebuilt Python 3 wheel, the other a build from scratch. Unfortunately, there is no official pip3 wheel available for the Jetson Nano. However, we created these wheels and put them on GitHub for your convenience.

The wheel.

PyTorch is built with Ninja. The complete build takes more than 5 hours. We have posted the wheels on our GitHub page; feel free to use them. With all the tedious work already done, installing PyTorch on your Nano now takes only a couple of minutes. For the diehards, the complete build procedure is covered later in this manual.
PyTorch 1.8.0, 1.7.1 and 1.7.0 for Python 3.
The whole shortcut procedure for each version is found below. The wheels were too large to store on GitHub, so Google Drive is used instead. Please make sure you have the latest pip3 and Python 3 version installed; otherwise, pip may come back with the message ".whl is not a supported wheel on this platform".

JetPack 4 comes with Python 3.6.9. Undoubtedly, the Python version will be upgraded over time, and you will then need a different wheel. See our GitHub page for all the wheels.
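A quick check of the interpreter version (the wheels below are built for Python 3.6, hence the cp36 tag in their names):
$ python3
>>> import sys
>>> print(sys.version)
>>> exit()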

(Screenshot: Python version check)

PyTorch 1.8.0
# install the dependencies (if not already onboard)
$ sudo apt-get install python3-pip libopenblas-dev libopenmpi-dev libomp-dev
$ sudo -H pip3 install future
# upgrade setuptools 47.1.1 -> 56.0.0
$ sudo -H pip3 install --upgrade setuptools
$ sudo -H pip3 install Cython
# install gdown to download from Google drive
$ sudo -H pip3 install gdown
# copy binary
$ sudo cp ~/.local/bin/gdown /usr/local/bin/gdown
# download the wheel
$ gdown https://drive.google.com/uc?id=1-XmTOEN0z1_-VVCI3DPwmcdC-eLT_-n3
# install PyTorch 1.8.0
$ sudo -H pip3 install torch-1.8.0a0+37c1f4a-cp36-cp36m-linux_aarch64.whl
# clean up
$ rm torch-1.8.0a0+37c1f4a-cp36-cp36m-linux_aarch64.whl
PyTorch 1.7.1
# install the dependencies (if not already onboard)
$ sudo apt-get install python3-pip libopenblas-dev libopenmpi-dev libomp-dev
$ sudo -H pip3 install future
# upgrade setuptools 47.1.1 -> 54.0.0
$ sudo -H pip3 install --upgrade setuptools
$ sudo -H pip3 install Cython
# install gdown to download from Google drive
$ sudo -H pip3 install gdown
# copy binary
$ sudo cp ~/.local/bin/gdown /usr/local/bin/gdown
# download the wheel
$ gdown https://drive.google.com/uc?id=1-b9rg2yGEdBATdUmIWcSqjkL1b0gvToQ
# install PyTorch 1.7.1
$ sudo -H pip3 install torch-1.7.1a0-cp36-cp36m-linux_aarch64.whl
# clean up
$ rm torch-1.7.1a0-cp36-cp36m-linux_aarch64.whl
PyTorch 1.7.0
# install the dependencies (if not already onboard)
$ sudo apt-get install python3-pip libopenblas-dev libopenmpi-dev libomp-dev
$ sudo -H pip3 install future
# upgrade setuptools 47.1.1 -> 54.0.0
$ sudo -H pip3 install --upgrade setuptools
$ sudo -H pip3 install Cython
# install gdown to download from Google drive
$ sudo -H pip3 install gdown
# copy binary
$ sudo cp ~/.local/bin/gdown /usr/local/bin/gdown
# download the wheel
$ gdown https://drive.google.com/uc?id=1aWuKu8eqkZwVzFFvguVuwkj0zdCir9qX
# install PyTorch 1.7.0
$ sudo -H pip3 install torch-1.7.0a0-cp36-cp36m-linux_aarch64.whl
# clean up
$ rm torch-1.7.0a0-cp36-cp36m-linux_aarch64.whl
After a successful installation, you can check PyTorch with the following commands.
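The version string should match the wheel you installed, and both CUDA and cuDNN should be found:
$ python3
>>> import torch
>>> print(torch.__version__)
>>> print('CUDA available: ' + str(torch.cuda.is_available()))
>>> print('cuDNN version: ' + str(torch.backends.cudnn.version()))
>>> exit()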

(Screenshot: successful PyTorch 1.8.0 check on the Jetson Nano)


Installation from scratch.

Install PyTorch 1.8.0, 1.7.1 or 1.7.0 for Python 3.

Building PyTorch from scratch is relatively easy. Install some dependencies first, then clone the source from GitHub and finally build the software.
Note that the whole procedure takes about 7 hours on an overclocked Jetson Nano.
Most important: if you install PyTorch 1.7.1, modify the version number in the file ~/pytorch/version.txt from 1.7.0 to 1.7.1. It seems the developers forgot to adjust this number on GitHub. If you don't change it, you end up with a torch-1.7.0a0-cp36-cp36m-linux_aarch64.whl wheel, suggesting you still have the old version on your machine.
PyTorch 1.8.0
# get a fresh start
$ sudo apt-get update
$ sudo apt-get upgrade
# the dependencies
$ sudo apt-get install ninja-build git cmake
$ sudo apt-get install libopenmpi-dev libomp-dev
$ sudo apt-get install libopenblas-dev libblas-dev libeigen3-dev
$ sudo -H pip3 install -U --user wheel mock pillow ccache
$ sudo -H pip3 install -U setuptools
$ sudo -H pip3 install scikit-build
# download PyTorch with all its libraries
$ git clone -b v1.8.0 --depth 1 --recursive https://github.com/pytorch/pytorch.git
$ cd pytorch
# one command to install several dependencies in one go
# installs future, numpy, pyyaml, requests
# setuptools, six, typing_extensions, dataclasses
$ sudo pip3 install -r requirements.txt
PyTorch 1.7.1
# get a fresh start
$ sudo apt-get update
$ sudo apt-get upgrade
# the dependencies
$ sudo apt-get install ninja-build git cmake
$ sudo apt-get install libopenmpi-dev libomp-dev
$ sudo apt-get install libopenblas-dev libblas-dev libeigen3-dev
$ sudo -H pip3 install -U --user wheel mock pillow ccache
$ sudo -H pip3 install -U setuptools
$ sudo -H pip3 install scikit-build
# download PyTorch with all its libraries
$ git clone -b v1.7.1 --depth 1 --recursive https://github.com/pytorch/pytorch.git
$ cd pytorch
# alter version number from 1.7.0 to 1.7.1 and close with <Ctrl>+<X>,<Y>,<Enter>
$ nano version.txt
# one command to install several dependencies in one go
# installs future, numpy, pyyaml, requests
# setuptools, six, typing_extensions, dataclasses
$ sudo pip3 install -r requirements.txt
PyTorch 1.7.0
# get a fresh start
$ sudo apt-get update
$ sudo apt-get upgrade
# the dependencies
$ sudo apt-get install ninja-build git cmake
$ sudo apt-get install libopenmpi-dev libomp-dev
$ sudo apt-get install libopenblas-dev libblas-dev libeigen3-dev
$ sudo -H pip3 install -U --user wheel mock pillow ccache
$ sudo -H pip3 install -U setuptools
$ sudo -H pip3 install scikit-build
# download PyTorch with all its libraries
$ git clone -b v1.7.0 --depth 1 --recursive https://github.com/pytorch/pytorch.git
$ cd pytorch
# one command to install several dependencies in one go
# installs future, numpy, pyyaml, requests
# setuptools, six, typing_extensions, dataclasses
$ sudo pip3 install -r requirements.txt

Preparations.

Before the build can begin, some preparations are required. First, you must have the latest clang compiler on your Jetson Nano. There is a constant stream of issues with the GNU compiler when compiling PyTorch on the Jetson Nano. Usually, it has to do with poor support for the NEON architecture of the ARM cores, which causes floating-point numbers to be truncated.

(Screenshot: GNU compiler error during the PyTorch build)

Oddly enough, the clang compiler doesn't seem to have any problem with the code, so we use clang this time. We know some people dislike clang. The GNU compiler used to be superior to clang, but those days are long gone. Today, both compilers perform almost identically.
# install the clang compiler
$ sudo apt-get install clang
Next, you have to modify the PyTorch code you just downloaded from GitHub. Most of the alterations limit the maximum number of CUDA threads available at runtime. There are four places that need our attention.

~/pytorch/aten/src/ATen/cpu/vec256/vec256_float_neon.h
Around line 28, add #if defined(__clang__) || (__GNUC__ > 8 || (__GNUC__ == 8 && __GNUC_MINOR__ > 3)) and the matching #endif at the end of the guarded block.

(Screenshot: modification in vec256_float_neon.h)

~/pytorch/aten/src/ATen/cuda/CUDAContext.cpp
Around line 24 add an extra line device_prop.maxThreadsPerBlock = device_prop.maxThreadsPerBlock / 2;

(Screenshot: modification in CUDAContext.cpp)

~/pytorch/aten/src/ATen/cuda/detail/KernelUtils.h
In line 26 change the constant from 1024 to 512.

(Screenshot: modification in KernelUtils.h)

~/pytorch/aten/src/THCUNN/common.h
In line 22, make the same modification: change CUDA_NUM_THREADS from 1024 to 512.

(Screenshot: modification in common.h)

With all preparations done, we can now set the environment parameters so that Ninja gets the correct instructions on how we want PyTorch built. Keep in mind that these settings are only valid in the current terminal; if you start the build in another terminal, you will need to set the parameters again.

Note also the symbolic link at the end of the instructions. NVIDIA has moved the cublas library from /usr/local/cuda/lib64/ to the /usr/lib/aarch64-linux-gnu/ folder, leaving much software, like PyTorch, with broken links. A symlink is the best workaround here.

Another noteworthy point is the CUDA architecture list. Not only is the Jetson Nano's compute capability 5.3 given, but also 6.2 and 7.2 for the Jetson TX2 and Xavier, so the wheel supports those devices as well.
# set NINJA parameters
$ export BUILD_CAFFE2_OPS=OFF
$ export USE_FBGEMM=OFF
$ export USE_FAKELOWP=OFF
$ export BUILD_TEST=OFF
$ export USE_MKLDNN=OFF
$ export USE_NNPACK=OFF
$ export USE_QNNPACK=OFF
$ export USE_PYTORCH_QNNPACK=OFF
$ export USE_CUDA=ON
$ export USE_CUDNN=ON
$ export TORCH_CUDA_ARCH_LIST="5.3;6.2;7.2"
$ export USE_NCCL=OFF
$ export USE_SYSTEM_NCCL=OFF
$ export USE_OPENCV=OFF
$ export MAX_JOBS=4
# set path to ccache
$ export PATH=/usr/lib/ccache:$PATH
# set clang compiler
$ export CC=clang
$ export CXX=clang++
# create symlink to cublas
$ sudo ln -s /usr/lib/aarch64-linux-gnu/libcublas.so /usr/local/cuda/lib64/libcublas.so
# start the build
$ python3 setup.py bdist_wheel

(Screenshot: build environment summary reported by setup.py)

Once Ninja has finished the build, you can install PyTorch on your Jetson Nano with the generated wheel. Follow the instructions below.
PyTorch 1.8.0
# install the wheel
$ cd dist
$ sudo pip3 install torch-1.8.0a0+37c1f4a-cp36-cp36m-linux_aarch64.whl
PyTorch 1.7.1
# install the wheel
$ cd dist
$ sudo pip3 install torch-1.7.1a0-cp36-cp36m-linux_aarch64.whl
PyTorch 1.7.0
# install the wheel
$ cd dist
$ sudo pip3 install torch-1.7.0a0-cp36-cp36m-linux_aarch64.whl
After successful installation, you can check PyTorch with the commands given at the end of the previous section.

(Screenshot: successful check of the self-built PyTorch wheel)

One word about OpenCV. PyTorch has the option to use OpenCV. However, it links hardcoded against the OpenCV version found during the build. As soon as you upgrade OpenCV, PyTorch will stop working because it can no longer find the old OpenCV version. Given OpenCV's habit of releasing at least two or three versions a year, it seems unwise to link PyTorch with OpenCV. Otherwise, you will be forced to recompile PyTorch or manually create a whole bunch of symbolic links to the old libraries.

After a successful installation, many files are no longer needed. Removing them frees about 3.6 GB of disk space.
# remove the whole folder
$ sudo rm -rf ~/pytorch

TorchVision.

Install torchvision on Jetson Nano.

Torchvision is a collection of frequently used datasets, model architectures and image transformations. The installation is simple when you use one of our wheels found on GitHub. You can also build torchvision from scratch. In that case, download the version of your choice from the official GitHub page, modify the version number at line 32 in setup.py and issue the command $ python3 setup.py bdist_wheel, as sketched below.
Torchvision assumes PyTorch is already installed on your machine beforehand.
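A rough outline of such a from-scratch build (the branch v0.8.2 is only an example; pick the tag that matches your PyTorch version):
# download the torchvision source of your choice
$ git clone -b v0.8.2 --depth 1 https://github.com/pytorch/vision.git
$ cd vision
# adjust the version number at line 32, if needed
$ nano setup.py
# build the wheel
$ python3 setup.py bdist_wheel
# install the freshly built wheel
$ cd dist
$ sudo -H pip3 install torchvision-*.whl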

Used with PyTorch 1.8.0
# the dependencies
$ sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev
$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev
# install gdown to download from Google drive, if not done yet
$ sudo -H pip3 install gdown
# copy binary
$ sudo cp ~/.local/bin/gdown /usr/local/bin/gdown
# download TorchVision 0.9.0
$ gdown https://drive.google.com/uc?id=1BdvXkwUGGTTamM17Io4kkjIT6zgvf4BJ
# install TorchVision 0.9.0
$ sudo -H pip3 install torchvision-0.9.0a0+01dfa8e-cp36-cp36m-linux_aarch64.whl
# clean up
$ rm torchvision-0.9.0a0+01dfa8e-cp36-cp36m-linux_aarch64.whl
Used with PyTorch 1.7.1
# the dependencies
$ sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev
$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev
# install gdown to download from Google drive, if not done yet
$ sudo -H pip3 install gdown
# copy binary
$ sudo cp ~/.local/bin/gdown /usr/local/bin/gdown
# download TorchVision 0.8.2
$ gdown https://drive.google.com/uc?id=1Z14mNdwgnElOb_NYkRaDCwP31scd7Mfz
# install TorchVision 0.8.2
$ sudo -H pip3 install torchvision-0.8.2a0+2f40a48-cp36-cp36m-linux_aarch64.whl
# clean up
$ rm torchvision-0.8.2a0+2f40a48-cp36-cp36m-linux_aarch64.whl
Used with PyTorch 1.7.0
# the dependencies
$ sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev
$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev
# install gdown to download from Google drive, if not done yet
$ sudo -H pip3 install gdown
# copy binary
$ sudo cp ~/.local/bin/gdown /usr/local/bin/gdown
# download TorchVision 0.8.1
$ gdown https://drive.google.com/uc?id=1WhplBjODLjNmYWEvQliCdkt3CqQTsClm
# install TorchVision 0.8.1
$ sudo -H pip3 install torchvision-0.8.1a0+45f960c-cp36-cp36m-linux_aarch64.whl
# clean up
$ rm torchvision-0.8.1a0+45f960c-cp36-cp36m-linux_aarch64.whl
After installation you may want to check torchvision by verifying the release version.
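For example:
$ python3
>>> import torchvision
>>> print(torchvision.__version__)
>>> exit()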

(Screenshot: torchvision version check)

Caffe2.

Install Caffe2 on Jetson Nano.

PyTorch comes with Caffe2 on board. In other words, if you have PyTorch installed, you have also installed Caffe2 with CUDA support on your Jetson Nano. Together with two conversion tools. Before using Caffe2, most of the time protobuf needs to be updated. Let's do it right away now.
# update protobuf (3.15.5)
$ sudo -H pip3 install -U protobuf
You can check the installation of Caffe2 with a few Python instructions.
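For instance, a minimal sketch (workspace.has_gpu_support and workspace.NumCudaDevices() are assumed from the upstream Caffe2 Python API and may differ slightly between builds):
$ python3
>>> from caffe2.python import core, workspace
>>> # True when Caffe2 was built with GPU (CUDA) support
>>> print(workspace.has_gpu_support)
>>> # number of CUDA devices Caffe2 detects (1 on the Jetson Nano)
>>> print(workspace.NumCudaDevices())
>>> exit()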

(Screenshot: Caffe2 check on the Jetson Nano)