# Always start a new build with
$ make clean
The Makefile.config contains all of Caffe's installation information. Once it is set up correctly, building Caffe is easy.
There is no Makefile.config in the GitHub repository. Instead, there are several examples. The name of each file indicates the type of installation, following the naming convention of TensorFlow wheels: Makefile.config.cpxx_operating-system_example. The cpxx stands for the Python 3 version (for instance, cp37 is Python 3.7). The operating system speaks for itself. Copy the one that matches your machine and check it with the nano editor. We're using nano here because it highlights syntax much better than gedit. Follow the commands shown below. It is best to open a second terminal, which can be used for gathering system information. The screen dumps are from an Ubuntu 20.04 desktop, but other systems have similar commands and screens.
$ cd ~/caffe
# Choose the appropriate config, for instance, Raspberry Pi with Raspbian 32-bit OS
$ cp Makefile.config.cp37_arm-linux-gnueabihf_example Makefile.config
# Or this one with CUDA on a desktop Ubuntu 20.04 machine
$ cp Makefile.config.cp38_x86_64-linux-gnu_CUDA_example Makefile.config
# After you select one, open it with nano
$ nano Makefile.config
This slide reflects an installation with both CUDA and the cuDNN toolkit. Obviously, it must be an x86_64-linux-gnu machine, because a Raspberry Pi has no CUDA support. If you have CUDA but no cuDNN, it is best to install cuDNN before continuing; cuDNN speeds up your tensor calculations by about 36%. If you have no NVIDIA GPU, refer to slide 2.
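You can quickly check from the second terminal whether cuDNN is actually present before editing the config. A minimal sketch, assuming the usual Linux library name libcudnn:

```shell
# Report whether the cuDNN shared library is known to the dynamic linker
if ldconfig -p | grep -q libcudnn; then
    echo "cuDNN found"
else
    echo "cuDNN not found"
fi
```

If this prints "cuDNN not found" but you do have an NVIDIA GPU, install cuDNN first.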
In the CPU-only case, a hash comment disables the USE_CUDNN flag, and the CPU_ONLY flag is active instead. This setting is used for the Raspberry Pi, among others.
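For reference, the two flags in Makefile.config look roughly like the sketch below (the exact comments differ per example file); a hash at the front of a line disables it:

```makefile
# GPU build with cuDNN (e.g. desktop with NVIDIA card):
USE_CUDNN := 1
# CPU_ONLY := 1

# CPU-only build (e.g. Raspberry Pi):
# USE_CUDNN := 1
CPU_ONLY := 1
```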
At the top, you see three third-party libraries (OpenCV, LevelDB and LMDB), all enabled. It is best to leave it this way, as they accelerate your Caffe framework.
Next, you see the OpenCV release, which is assumed to be version 4. If you have an older version (1, 2 or 3), consider upgrading to the 4.4.0 version. Caffe will eventually work with OpenCV 2.4.13, but you are on your own in that adventure. If you stick to a version lower than 3, comment out both the OPENCV_VERSION and USE_PKG_CONFIG lines by placing a hash at the front. We assume you have installed OpenCV according to our guides, so a package configuration will always be available.
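The two OpenCV lines in Makefile.config could look like this sketch (flag names taken from the text above; verify against your example file):

```makefile
# OpenCV 4.x installed with a pkg-config file:
OPENCV_VERSION := 4
USE_PKG_CONFIG := 1

# For a version lower than 3, comment both lines out:
# OPENCV_VERSION := 4
# USE_PKG_CONFIG := 1
```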
The last enabled line on this slide is CUDA_DIR. Assuming again that you have followed NVIDIA's installation guide, there will be a symbolic link /usr/local/cuda to the actual folder where CUDA is located.
Here you see the commands used to gather all the information needed for the previous slide: your OpenCV version, the existence of the OpenCV package configuration and, if you have CUDA installed, the CUDA directory. Please note that the CUDA directory given is the physical location; it will be something like /usr/local/cuda-6.5/bin. However, check whether the installation has placed a symbolic link at the same location (/usr/local) to this path. It is better to use this link, because upgrading to a newer CUDA release leaves the symbolic link intact. If there is no symlink, give the folder name without the subdirectories (/usr/local/cuda-x.y).
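The symlink behaviour can be rehearsed in a scratch directory; the version number 10.2 below is only an example, not your actual CUDA release:

```shell
# Build a throwaway layout that mimics /usr/local
tmp=$(readlink -f "$(mktemp -d)")
mkdir "$tmp/cuda-10.2"              # the physical CUDA folder
ln -s "$tmp/cuda-10.2" "$tmp/cuda"  # the symlink the installer creates
readlink -f "$tmp/cuda"             # resolves to .../cuda-10.2
rm -rf "$tmp"
```

On a real machine, `readlink -f /usr/local/cuda` shows which physical folder the symlink currently points to.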
Although nvcc may be installed, $ which nvcc does not always return its location. In that case, use the command $ ldconfig -p | grep cuda instead.
Select your favorite Basic Linear Algebra Subprograms (BLAS) library here. OpenBLAS is slightly faster, which is why we chose that library.
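In stock Caffe this choice is a single line in Makefile.config, with atlas, mkl and open as the accepted values:

```makefile
# BLAS choice: atlas for ATLAS, mkl for Intel MKL, open for OpenBLAS
BLAS := open
```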
Disable the whole Python 2 section, as we are using Python 3. It is not possible to select both; you have to choose one.
Since support for Python 2 has ended and Ubuntu 20.04 ships only with Python 3, the choice is not too difficult.
One remark here: because Caffe's first release dates from 2013, a period when Python 2 was still the favourite, all old Caffe examples are written in that version.
Next, give the libboost_python and python releases. Also check the existence of the given NumPy location. The next slide shows the commands to find this information.
The commands to get your python and libboost_python versions are given here. This machine has libboost_python38 installed. Other machines may have libboost_python3, so pay attention to the number of digits. The location of NumPy is found by searching for one of its headers. Notice the truncation of the path when giving the folder name to Makefile.config.
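The truncation works like this: find returns the full path of numpyconfig.h, while Makefile.config wants the folder that holds the numpy/ header directory, i.e. the path with the last two components stripped. A sketch with a hypothetical find result:

```shell
# Hypothetical output of: find /usr -name numpyconfig.h
header=/usr/lib/python3/dist-packages/numpy/core/include/numpy/numpyconfig.h
# Strip /numpy/numpyconfig.h to get the include folder for Makefile.config
numpy_include=$(dirname "$(dirname "$header")")
echo "$numpy_include"   # /usr/lib/python3/dist-packages/numpy/core/include
```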
Here is another example from a PC with Ubuntu 18.04.5. As you can see, the names of python and libboost_python versions are somewhat curious. After omitting the lib prefix, you should specify python3.6m and boost-python3-py36 as library names in Makefile.config.
Give the last directories here after you have enabled the support layers with the WITH_PYTHON_LAYER flag.
Two header locations are needed, opencv4 and hdf5, plus the hdf5 library folder. See slide 9 on how to obtain this information.
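On a typical Ubuntu desktop, the resulting lines in Makefile.config look something like the sketch below; the paths are examples, so verify them with the find and pkg-config commands from slide 9:

```makefile
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/opencv4 /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu/hdf5/serial
```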
Slide 8b (only Jetson TX2).
In the case of the Jetson TX2, two additional folders need to be added. Some macros, needed at compile time, are defined in cudnn.h, located in these directories. At the same time, you need to add the cudnn.h include to cudnn_conv_layer.hpp.
Please open ~/caffe/include/caffe/layers/cudnn_conv_layer.hpp and add #include <cudnn.h> at line 15.
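If you prefer doing the edit from the command line, GNU sed can insert the include at a given line. The sketch below rehearses it on a scratch file, inserting before line 3 of a four-line file; for the real edit you would target line 15 of cudnn_conv_layer.hpp instead:

```shell
f=$(mktemp)
printf 'a\nb\nc\nd\n' > "$f"          # scratch stand-in for the header file
sed -i '3i #include <cudnn.h>' "$f"   # insert before the old line 3
sed -n '3p' "$f"                      # prints: #include <cudnn.h>
rm -f "$f"
```

Always check the result with sed -n or nano before building; sed -i modifies the file in place.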
Again, only if you have a Jetson TX2. In the case of another platform, no adjustments are required.
The slide speaks for itself. The opencv4 location is found with the pkg-config --cflags command, the other two with a find command. Pay close attention to which part of the path is used.
For the sake of clarity, the commands are given below again.
# To get OpenCV version (inside a python3 session)
>>> import cv2
>>> cv2.__version__
# To get the configuration package
$ pkg-config --cflags opencv4
# To get CUDA location
$ which nvcc
# To get python lib and numpy location
$ python3 -m site
$ find /usr -name numpyconfig.h
# To get boost-python version
$ ldconfig -p | grep boost_python3
# To get hdf5
$ find /usr -name H5Classes.h
$ find /usr -name libhdf5.so
Caffe relies on protobuf. Check the existence of the correct (Python 3) library using the following command. If the library isn't found, install it.
# Check python3 protobuf
$ pip3 list | grep protobuf
# If the above command doesn't show
# a protobuf version, please install it now
$ sudo pip3 install protobuf
Skimage relies on numpy. In the past, there were issues with different versions of scikit-image (skimage) and numpy not working well together; for example, see this issue on GitHub. The error thrown when starting Caffe in Python is cannot import name '_validate_lengths'. To fix this problem, force an upgrade of scikit-image with the command below. It can take a while, over an hour on a Raspberry Pi, since several large libraries, such as scipy, are also forced to rebuild.
# upgrade skimage
$ sudo pip3 install --upgrade scikit-image