Install TensorFlow Lite 2.4.0 on Jetson Nano
The first part of this guide will walk you through installing TensorFlow Lite on the Jetson Nano.
The second part guides you through an installation of TensorFlow Lite with GPU delegates. It must be said that the expected acceleration is somewhat disappointing.
The third part covers some C++ examples that give an impression of TensorFlow Lite's performance on your Nano.
TensorRT is the deep learning framework shipped by default with the Jetson Nano. It is a C++ library built on CUDA and cuDNN. Due to its low-level structure, it requires quite proficient programming skills; not something you set up on a rainy afternoon. That's why we don't cover the TensorRT framework here, even though its execution is only slightly faster than TensorFlow Lite's.
If you want to run the TensorFlow Lite examples we provide, please make sure you have OpenCV installed on your Jetson Nano. It may be the default version without CUDA support, or you can re-install OpenCV 4.5.0 with CUDA according to our guide.
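To see which OpenCV build is currently on your Nano, a quick check like the one below can help. This is a minimal sketch: it assumes the OpenCV Python bindings were installed alongside the C++ libraries, which is the case for both the default JetPack version and a from-source build.

```shell
# Print the installed OpenCV version.
python3 -c "import cv2; print(cv2.__version__)"

# Check whether this OpenCV build has CUDA support compiled in.
# Prints True for a CUDA-enabled build, False for the default CPU-only version.
python3 -c "import cv2; print('CUDA' in cv2.getBuildInformation())"
```

If the second command prints False, the examples will still run; OpenCV just won't use the GPU.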
Install TensorFlow Lite
If you want to build fast deep learning applications, you have to use C++. That's why you need to build TensorFlow Lite's C++ API libraries. The procedure is simple: clone the latest GitHub repository and run the two build scripts. The commands are listed below. This installation ignores the CUDA GPU on board the Jetson Nano; it is purely CPU based.
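The procedure can be sketched as follows. This assumes the Makefile-based build layout of the TensorFlow 2.4.0 source tree; the exact script names may differ in other versions.

```shell
# Clone the TensorFlow source at the 2.4.0 release tag (shallow clone to save disk space).
git clone -b v2.4.0 --depth=1 https://github.com/tensorflow/tensorflow.git
cd tensorflow

# Script 1: download the third-party dependencies TensorFlow Lite needs (FlatBuffers, etc.).
./tensorflow/lite/tools/make/download_dependencies.sh

# Script 2: build the static TensorFlow Lite library for 64-bit ARM (the Nano is aarch64).
./tensorflow/lite/tools/make/build_aarch64_lib.sh
```

After the build finishes, the static library (`libtensorflow-lite.a`) typically ends up under `tensorflow/lite/tools/make/gen/`, ready to be linked into your own C++ applications.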