
Code::Blocks C++ examples on Raspberry Pi 4
Introduction.
This article covers three C++ examples with OpenCV 4.4 on a Raspberry Pi 4: a movie player, a USB live camera and, finally, a deep learning YOLO network.
If you don't have OpenCV on your Raspberry yet, all installation instructions are given in our Install OpenCV 4.5 on Raspberry Pi 4 page.
Install Code::Blocks.
First, you need a good IDE to write your C++ program. You could use Geany, as it comes with the Raspbian OS. However, Geany cannot handle projects, only individual files; you end up messing with Make to integrate all the different files into one executable. It also has limited debug tools.
We are going to use Code::Blocks instead. This IDE can handle multi-file projects and has excellent debug functions, such as variable, thread or CPU register inspection. It is also relatively easy and intuitive to use. You can install Code::Blocks with the following command in your terminal.
$ sudo apt-get install codeblocks
OpenCV camera example.
- The very first step is to make a directory where all the project files are kept.
- Here the folder /home/pi/software/Camera is made.
- Now start Code::Blocks and create a new project.
- Code::Blocks supports many different types of projects. We start with a plain console application.
- The next step is to select C or C++. OpenCV is C++, so use a C++ console application. If you don't know which one to choose, C++ is always the safest option.
- Now Code::Blocks wants a project name. Here we named our project TestCamera.
- In the final step, Code::Blocks needs some settings confirmed. No adjustments are required, so click Finish.
- Code::Blocks now opens its IDE with a standard "Hello world!" main.cpp sample file. People who know Microsoft Visual Studio will see a remarkable similarity between both IDEs: a debug/release drop-down selection in the middle and some build, run and debug buttons on the toolbar. The menu structure is also more or less the same. For more information about the IDE itself, there are many very good tutorials on the internet about working with Code::Blocks.
- In the next step, main.cpp will be replaced by another file, called SimpleGrab.cpp. This file and all the other project files can be found on our GitHub page.
- Download the file and place it in your working directory (/home/pi/software/Camera/TestCamera). Add this file with the dialog to the project.
- A standard confirmation is required before the file is included in the project.
- Then remove main.cpp from the project. We don't need it anymore. It can also be removed from the working directory if you want.
- If everything went well, your IDE screen should look like this.
- Build the project. Notice that the build is done in Debug mode.
- The first error appears. Code::Blocks cannot find a specific header file, opencv2/opencv.hpp as you can read. This is a very common error. You must inform the compiler in Code::Blocks where all required header files can be found.
- To add the path to the compiler, open the menu option Projects → Build options.
- In the Build options dialog, first activate TestCamera on the left side. This way, all modifications apply to both the debug and the release mode. Secondly, select the tab Search directories and click Add.
- In the Add directory dialog box, enter the name of the folder where OpenCV placed all its header files. If you have followed our instructions at Install OpenCV 4.4 on Raspberry Pi 4, the headers are located in /usr/local/include/opencv4.
- Confirm the new path and build again.
- New errors emerge. OpenCV has many separate library files, each containing functions that can be called from within your program. Here SimpleGrab.cpp uses the OpenCV function cv::VideoCapture::VideoCapture(…). Again, you need to tell Code::Blocks where it can find the specific OpenCV library.
- If you look at your Build log in Code::Blocks, the line g++ -o bin/.. is displayed. The -o indicates that all files have been compiled correctly and that the linking phase has started. Errors in this phase always have to do with missing libraries.
- Again, you need to tell Code::Blocks where it can find the library that holds the used function, in this case cv::VideoCapture::VideoCapture(…). This is also done in the Projects → Build options dialog, on the tab Linker settings. Normally a library contains many functions and you only have to specify that one library. In the case of OpenCV, with its ever-growing additions, many libraries refer to each other. If you specify just one, an error can be generated because the next internally linked library is missing. Ultimately, a whole series of libraries would have to be added to Code::Blocks for just one call from your program. In this case, it is better to give Code::Blocks the entire OpenCV package so that it can find all the libraries and their dependencies. This is done with the command `pkg-config --libs --cflags opencv4`. Pay attention to the grave accents (backticks) at the beginning and the end of the line; don't use copy & paste, because the clipboard often mangles the grave accent. Below you see the Code::Blocks project file (TestCamera.cbp) in both situations; see also the sketch and build command after this list.
- As you can see, the application now builds without any errors, so the movie can be played, provided, of course, that the program can find the file "James.mp4".
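SimpleGrab.cpp itself can be found on our GitHub page. To give you an idea of what such a program looks like, below is a minimal sketch of a grab-and-show loop in plain OpenCV; it is not the exact file from the repository.

// Minimal sketch of an OpenCV grab-and-show loop, in the spirit of SimpleGrab.cpp.
// Not the exact file from our GitHub page; any OpenCV 4.x installation will do.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap("James.mp4");      // or cv::VideoCapture cap(0); for a USB camera
    if(!cap.isOpened()){
        std::cerr << "Could not open the video source" << std::endl;
        return -1;
    }
    cv::Mat frame;
    while(cap.read(frame)){                 // read() returns false at the end of the movie
        cv::imshow("Video", frame);
        if(cv::waitKey(30) >= 0) break;     // stop when a key is pressed
    }
    return 0;
}

For reference, the same build can also be done from the terminal. The backticked pkg-config call expands to all the required OpenCV header paths and libraries:
$ g++ SimpleGrab.cpp -o TestCamera `pkg-config --cflags --libs opencv4`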
Deep learning code example.
For the deep learning example, we make use of ncnn. This is a lightweight and fast framework built by the Chinese internet giant Tencent. The full installation can be found here. Or you can download the ready-built library, with an installation script, from our GitHub page. With the following commands it is downloaded and installed in the right places.
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install git
$ git clone https://github.com/Qengineering/ncnn_raspberry
$ cd ncnn_raspberry
$ chmod 755 install_ncnn_script
$ ./install_ncnn_script
If everything went well, a file named libncnn.a is now placed in the new folder /usr/local/lib/ncnn, and 17 header files are stored in the /usr/local/include/ncnn directory. The installation script is just one way to place the files from the GitHub repository on the Raspberry; if you prefer another way, go ahead, as long as the files end up in the right folders.
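A quick check in the terminal tells you whether everything ended up in the right place; the first folder should contain libncnn.a, the second the header files.
$ ls /usr/local/lib/ncnn
$ ls /usr/local/include/ncnn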

We have placed more than one deep learning network suitable for the Raspberry Pi on GitHub; you can choose the one you like. As an example, we are going to build a MobileNet-YOLO network. The original YOLO network can detect 1000 different objects. By replacing the frontend with a MobileNet, only 20 categories are left to detect. This increases the computational speed substantially. MobileNet-YOLO still uses the YOLO backend for position determination.
Before building the application, be sure you have a working OpenCV on your Raspberry Pi. We presume you have followed our OpenCV installation on the previous page. If you have another installation, check the example above before continuing. The ncnn library itself has no links to OpenCV and can run perfectly without any additional software; only our examples make use of OpenCV.
Let's start by downloading the deep learning network and the associated code. Again, there are many ways to get files from GitHub; we usually use the procedure below. As you may already have guessed, MyDir is the name of the folder where you would like to run this example; any name will do. Notice also the last part of the wget command: the postfix /archive/master.zip is added to the original GitHub URL. This is common practice when downloading a repository's files as one zip container.
$ mkdir MyDir
$ cd MyDir
$ wget https://github.com/Qengineering/MobileNetV2_YOLOV3_ncnn/archive/master.zip
$ unzip -j master.zip
$ rm master.zip
$ rm README.md
Your MyDir folder (here /software/DeepLearning/MobiYO/) should be identical to the image below. If you have downloaded another example, there will of course be other files in the folder.

Start Code::Blocks and load the MobiYO.cbp project file. Or simply double click on the MobiYO.cbp in your File Manager. Follow the slide show and find the comment at every step below the gallery.
- Once the project is loaded, Code::Blocks looks like this. First, select the release mode. We want fast results and are not going to debug the code.
- Notice the search directories in the Build options dialog. For both Debug and Release, two directories are given: the well-known OpenCV folder (/usr/local/include/opencv4) and the ncnn library (/usr/local/include/ncnn), indeed the one we just created.
- The linker settings now also contain some references to the ncnn library. First, the library itself. It is simply declared in the Link libraries option list (/usr/local/lib/ncnn/libncnn.a). Notice the .a extension: this library will be statically linked into the code. Secondly, there is an extra flag in the Other linker options list (-fopenmp). This flag enables the multi-core mode of the ncnn library via the G++ compiler.
- Before the program runs flawlessly, there is only one setting left. You must specify the argument(s) in advance, as if the program had been started from the command line.
- Here the image dog.jpg is given for both the debug and release mode. When the program starts, this image is automatically loaded.
- Build and run the program.
- Success. There are eleven objects detected in the image. Only the ones with a high score (>50%) are shown in the output window.
- For your information, this type of error occurs when the -fopenmp flag is not set in the list of Other linker options.
- The output is a single file located in the Release folder. Before you can run the app here, you need to copy the associated files to this location.
- Now the program will run without any errors from this location.
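If you are curious what happens inside MobiYO, the heart of almost every ncnn application consists of just a few calls. The sketch below is not the actual MobiYO source (that one is on our GitHub page); the model file names, the blob names "data" and "detection_out" and the preprocessing values are assumptions based on typical MobileNet-YOLO models.

// Rough sketch of the ncnn calls behind a MobileNet-YOLO detector.
// Not the actual MobiYO source; file names, blob names and preprocessing are assumed.
#include "net.h"                            // ncnn
#include <opencv2/opencv.hpp>

int main(int argc, char** argv)
{
    cv::Mat img = cv::imread(argv[1]);      // e.g. dog.jpg, given as program argument

    ncnn::Net yolo;
    yolo.load_param("mobilenetv2_yolov3.param");    // network structure (assumed file name)
    yolo.load_model("mobilenetv2_yolov3.bin");      // the weight factors

    // resize to the 352x352 input of the network and convert the BGR pixels to an ncnn::Mat
    ncnn::Mat in = ncnn::Mat::from_pixels_resize(img.data, ncnn::Mat::PIXEL_BGR,
                                                 img.cols, img.rows, 352, 352);
    const float mean_vals[3] = {127.5f, 127.5f, 127.5f};          // assumed preprocessing
    const float norm_vals[3] = {0.007843f, 0.007843f, 0.007843f};
    in.substract_mean_normalize(mean_vals, norm_vals);

    ncnn::Extractor ex = yolo.create_extractor();
    ex.input("data", in);                   // input blob name is an assumption
    ncnn::Mat out;
    ex.extract("detection_out", out);       // one row per detection: label, score, x1, y1, x2, y2

    // draw the rectangles with a score above 50%, just like the example output
    for(int i = 0; i < out.h; i++){
        const float* v = out.row(i);
        if(v[1] > 0.5f){
            cv::Point p1(v[2]*img.cols, v[3]*img.rows);
            cv::Point p2(v[4]*img.cols, v[5]*img.rows);
            cv::rectangle(img, p1, p2, cv::Scalar(0, 255, 0), 2);
        }
    }
    cv::imshow("MobiYO", img);
    cv::waitKey(0);
    return 0;
}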

Final remarks.
Before a deep learning network can detect objects, all the weight factors must be loaded. This can be a time-consuming process; in the case of MobiYO, it takes 82 ms. Pre- and post-processing of the image also take time. Every picture is resized to the 352x352 input size of the network, which takes around 18 ms. Afterwards, the resulting rectangles are drawn inside the image and shown on the screen, again some 185 ms. So, of the 620 ms total execution time, only 335 ms is used for the detection itself. Timings are based on a Raspberry Pi 4 with a clock of 1.5 GHz.

Deep learning is memory hungry. The enormous amount of weight factors, in particular, can be problematic. The 2 or 4 GByte of RAM on a Raspberry Pi 4 is normally sufficient; however, 1 GByte may not be enough. Especially large networks such as VGG-16 will not fit in the available memory space. The only option left is to increase the memory swap space. This can be done by enlarging the default CONF_SWAPSIZE in /etc/dphys-swapfile from 100 MB to 1024 or 2048 MB. Instructions on how to do this can be found here.
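In short, enlarging the swap comes down to editing one line and restarting the swap service; the value 2048 below is just an example.
$ sudo nano /etc/dphys-swapfile
# change the line CONF_SWAPSIZE=100 into CONF_SWAPSIZE=2048, save and exit
$ sudo /etc/init.d/dphys-swapfile stop
$ sudo /etc/init.d/dphys-swapfile start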
Note: memory swaps will slow your program down slightly. More seriously, they can also wear out your flash (SD card), because it supports only a finite number of write cycles. Don't get paranoid; under normal use it will last for decades, even if you store something like a log file every 30 seconds over the years. However, huge deep learning models access their weights millions of times during execution, so the swapping rate suddenly increases enormously. This is not a situation you want over a long period; better to use a 2 or 4 GB Raspberry Pi 4 in that case.
Use <chrono> if you want to measure execution times. Other timing algorithms are all influenced by the burden that the ncnn library places on all processor cores simultaneously. Below you can find some example code on how to measure execution times.
#include <chrono>
#include <iostream>
std::chrono::steady_clock::time_point begin = std::chrono::steady_clock::now();
//the algorithms you like to measure
std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now();
std::cout << "Time difference = " << std::chrono::duration_cast <std::chrono::milliseconds> (end - begin).count() << "[ms]" << std::endl;
OpenCV DNN Module.
OpenCV recently gained an excellent module for deep learning. It is surprisingly fast on a Raspberry Pi without additional hardware accelerators such as neural sticks. See our page for more information and software downloads.
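To give an impression of the DNN module, loading and running a model takes only a few lines. The sketch below is generic; the model file names and the 416x416 blob size are hypothetical and depend on the network you use.

// Minimal sketch of the OpenCV DNN module; model file names and blob size are hypothetical.
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>

int main()
{
    cv::dnn::Net net = cv::dnn::readNet("model.weights", "model.cfg");  // hypothetical files
    cv::Mat img  = cv::imread("dog.jpg");
    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0/255.0, cv::Size(416, 416));
    net.setInput(blob);
    cv::Mat out = net.forward();            // run the network
    // parse 'out' according to the output format of your model
    return 0;
}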

OpenCV + TensorFlow Lite.
OpenCV can also be used in combination with TensorFlow and TensorFlow Lite. The latter in particular gives amazing results if you use the C++ API. See our page for more information and software downloads.