OpenCV C++ examples on Raspberry Pi - Q-engineering

OpenCV and deep learning examples on Raspberry Pi 4

Introduction.

This article covers three C++ examples with OpenCV 4.1 on a Raspberry Pi 4: a movie player, a live USB camera and, finally, a deep learning YOLO network.
If you don't have OpenCV on your Raspberry Pi yet, all installation instructions are given on our Install OpenCV 4.1 on Raspberry Pi 4 page.
Install Code::Blocks.
First, you need a good IDE to write your C++ code. You could use Geany because it comes standard with Raspbian. However, Geany cannot handle projects, only individual files. You end up fiddling with Make to integrate all the separate files into a single executable. Secondly, Geany has very limited debugging tools.
We are going to use Code::Blocks. It can handle multi-file projects. It has excellent debugging functions such as variable, thread and CPU register inspection. As an IDE, it is relatively easy and intuitive to understand. And above all, it's free. You can install Code::Blocks with the following command in your terminal.
sudo apt-get install codeblocks
While we are at it, let's also load the driver for the RaspiCam. Without this driver, OpenCV will not detect your camera properly.
sudo modprobe bcm2835-v4l2
After the driver has been loaded, it must also be listed in the boot modules. Use the following command to open nano and add the line bcm2835-v4l2 to the end of the file. See the gallery below for details.
sudo nano /etc/modules
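If you prefer to stay on the command line, the same edit can be done without nano. The sketch below appends the module name to a stand-in file so you can try it safely; on a real Pi, you would target /etc/modules itself (as root):

```shell
# Sketch: append bcm2835-v4l2 to the boot modules list.
# A stand-in file is used here; on a real Pi, replace modules.sample
# with /etc/modules and run the append with sudo.
printf 'i2c-dev\n' > modules.sample      # pretend existing content
echo 'bcm2835-v4l2' >> modules.sample    # add the camera driver
tail -n 1 modules.sample                 # verify the last line
```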
The last step is to reboot the Raspberry Pi so that OpenCV can find the driver.
sudo reboot

OpenCV camera example.

Now we are going to program our first example in OpenCV. The slide show below gives all the steps necessary to complete this example; a comment on each step can be found below the gallery. Because it is the first example, we take some extra time to introduce you to some C++ and Code::Blocks basics. All our software can always be found on GitHub.
  1. The very first step is to make a directory where all the project files are kept.
  2. Here the folder /home/pi/software/Camera is made.
  3. Now start Code::Blocks and create a new project.
  4. Code::Blocks supports many different types of projects. We start with a plain console application.
  5. The next step is to select C or C++. OpenCV is C++, so use a C++ console application. If you don't know which one to choose, C++ is always the safest option.
  6. Now Code::Blocks wants a project name. Here we named our project TestCamera.
  7. In the final step, Code::Blocks needs some settings confirmed. No adjustments are required, so click Finish.
  8. Code::Blocks now opens its IDE with a standard "Hello world!" main.cpp sample file. People who know Microsoft Visual Studio will see a remarkable similarity between both IDEs: a debug/release drop-down selection in the middle, and some build, run and debug buttons on the toolbar. The menu structure is also more or less the same. For more information about the IDE, search the Internet; there are many very good tutorials about working with Code::Blocks.
  9. In the next step, main.cpp will be replaced by another file, called SimpleGrab.cpp. This file and all the other project files can be found on our GitHub page.
  10. Download the file and place it in your working directory (/home/pi/software/Camera/TestCamera). Add this file with the dialog to the project.
  11. A standard confirmation is required before the file is included in the project.
  12. Then remove main.cpp from the project. We don't need it anymore. It can also be removed from the working directory if you want.
  13. If everything went well, your IDE screen should look like this.
  14. Build the project. Notice that the build is done in Debug mode.
  15. The first error appears: Code::Blocks cannot find a specific header file, opencv2/opencv.hpp. This is a very common error. You must inform the compiler in Code::Blocks where all required header files can be found.
  16. To add the path to the compiler, open the menu option Projects → Build options.
  17. In the Build options dialog, first, activate TestCamera on the left side. Now all the modifications apply to both the debug and release mode. Secondly, select tab Search directories and click Add.
  18. Enter the name of the folder where OpenCV saved all header files in the Add directory dialog box. If you have followed our instructions at Install OpenCV 4.1 on Raspberry Pi 4, the headers are placed in /usr/local/include/opencv4.
  19. Confirm the new path and build again.
  20. New errors emerge. OpenCV has many separate library files, each containing functions that can be called from within your program. Here SimpleGrab.cpp is using the OpenCV function cv::VideoCapture::VideoCapture(…). Again you need to tell Code::Blocks where it can find the specific OpenCV library.
  21. If you look at your Build log in Code::Blocks, the line g++ -o bin/.. is displayed. The -o indicates that all files have been compiled correctly and that the next phase, linking, has started. Errors in this phase always have to do with missing libraries.
  22. Again, you need to tell Code::Blocks where it can find the library that holds the used function, so here cv::VideoCapture::VideoCapture(…). Normally a library contains many functions and you only have to specify that one library. In the case of OpenCV, with its ever-growing additions, many libraries refer to each other. If you specify only one, an error can be generated because the next internally linked library is missing. Ultimately, a whole series of libraries would have to be added to Code::Blocks for just one call from your program. In this case, it is better to give Code::Blocks the entire OpenCV package so that it can find all libraries and their dependencies. This is done with the command 'pkg-config --libs --cflags opencv4'. Pay attention to the grave accents (backticks) at the beginning and the end of the line.
  23. As you can see the application is built without any error, so the movie can be played. If, of course, the file "James.mp4" can be found by the program.
  24. Because the filename is not absolute (/home/pi/..../James.mp4) but relative, it must be located in the working folder. If you use the start button on the IDE, the working directory is the one given in step 6. (/home/pi/software/Camera/TestCamera).
  25. However, if you run the application in the output folder of Code::Blocks (/home/pi/software/Camera/TestCamera/bin/Debug), you have to put a copy of the mp4 file there first before you can play the movie, because the output folder is now the working directory of the executable.
  26. With the same code, you can show live camera images from a Raspicam or webcam. You only have to alter the name of the file, here “James.mp4” into a number. All connected cameras have a unique number starting with 0. In the given example, two cameras were plugged into the Raspberry Pi, a Raspicam and a Logitech webcam. Here you see our street recorded with the Logitech camera.

Webcam street

A few last words about the program itself. The code is very basic and holds no surprises. A VideoCapture object holds the video stream. Every time cap >> frame is called, a frame is moved via a buffer to the screen. In the case of a video file, the transfer rate is only limited by the execution time of the other code and the bandwidth of the memory. Hence the 20 mSec delay at the end of the loop in cv::waitKey(20). Without this delay, the video would play far too fast.

Another point worth mentioning is the linkage of OpenCV. There are two types of linkage: static (filename.a) and dynamic (filename.so). Static linkage incorporates the code from the library into the executable file itself. This single file can be copied to other computers. However, if a library is updated, all code must be rebuilt before the modification takes effect.
Dynamic linkage only tells the program where it can find the libraries. Every time the program needs a function, it loads a part of the library into its memory and executes the code. The program cannot be copied to other computers unless you are certain that this computer has the same libraries present at the same location. If a library gets an update, there is no need to rebuild the code.
OpenCV is always dynamically linked to a program. It could be linked statically, but this involves a lot of work because, as said before, OpenCV itself calls many other libraries. All these libraries would need to become static before the linkage succeeds.

Deep learning code example.

For all our deep learning networks on a Raspberry Pi, we make use of ncnn. This is an extremely fast framework built by the Chinese internet giant Tencent. It is full of handcrafted NEON assembly code, specially designed for the ARM cores found in the Raspberry Pi and its alternatives, but also in almost any modern smartphone. It is a very lightweight, yet powerful library, which can easily be installed. We have made some minor modifications so it processes video streams slightly faster. Let's start with the installation of git. You need git to clone code from our GitHub page. If you already have the newest version, no harm done.
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install git
The next step is to install the library itself. We put it on GitHub with an installation script. With the following commands, it is downloaded and installed in the right places.
git clone https://github.com/Qengineering/ncnn_raspberry
cd ncnn_raspberry
chmod 755 install_ncnn_script
./install_ncnn_script
If everything went well, a file named libncnn.a is now placed in the new folder /usr/local/lib/ncnn. Also, 17 header files are stored in the /usr/local/include/ncnn directory. The installation script is just one way to place the files from the GitHub repository on the Raspberry Pi. If you prefer another way, go ahead, as long as the files end up in the right folders.
Location ncnn library
We have placed more than one deep learning network suitable for the Raspberry Pi on GitHub; you can choose the one you like. As an example, we are going to build a MobileNet-YOLO network. The original YOLO network can detect 1000 different objects. By replacing the frontend with a MobileNet, only 20 categories are left to detect. This increases the computational speed substantially. MobileNet-YOLO still uses the YOLO backend for position determination.

Before building the application, be sure you have a working OpenCV on your Raspberry Pi. We presume you have followed our OpenCV installation on the previous page. If you have another installation, check the example above before continuing. The ncnn library itself has no links to OpenCV and can run perfectly without any additional software; only our examples make use of OpenCV.

Let's start by downloading the deep learning network and the associated code. Again, there are many ways to get files from GitHub; we usually use the procedure below. As you may have guessed, MyDir is the name of the folder where you'd like to run this example. Any name will do. Notice also the last part of the wget command: the postfix /archive/master.zip is added to the original GitHub URL. This is common practice when downloading the files of a repository in a zip container.
mkdir MyDir
cd MyDir
wget https://github.com/Qengineering/MobileNetV2_YOLOV3_ncnn/archive/master.zip
unzip -j master.zip
rm master.zip
rm README.md
Your MyDir folder (here /software/DeepLearning/MobiYO/) should now look identical to the image below. If you have downloaded another example, there will of course be other files in the folder.
Download folder
Start Code::Blocks and load the MobiYO.cbp project file, or simply double-click on MobiYO.cbp in your File Manager. Follow the slide show; a comment on every step can be found below the gallery.
  1. Once the project is loaded, Code::Blocks looks like this. First, select the release mode. We want fast results and are not going to debug the code.
  2. Notice the search directories in the Build options dialog. For both Debug and Release, two directories are given: the well-known OpenCV folder (/usr/local/include/opencv4) and the ncnn folder (/usr/local/include/ncnn). Indeed, the one we just created.
  3. In the linker settings there are now also some references to the ncnn library. First, the library itself. It is simply declared in the Link libraries option list (/usr/local/lib/ncnn/libncnn.a). Notice the .a extension here: this library will be statically incorporated into the code. Secondly, there is an extra flag in the Other linker options list (-fopenmp). This flag turns on the multi-core mode of the ncnn library via the G++ compiler.
  4. Before the program runs flawlessly, there is only one setting left. You must specify the argument(s) in advance, as if the program had been started from the command line.
  5. Here the image dog.jpg is given for both the debug and release mode. When the program starts, this image is automatically loaded.
  6. Build and run the program.
  7. Success. Eleven objects are detected in the image. Only the ones with a high score (>50%) are shown in the output window.
  8. For your information, this type of error occurs when the -fopenmp flag is not set in the list of Other linker options.
  9. The output is a single file located in the Release folder. Before you can run the app here, you need to copy the associated files to this location.
  10. Now the program will run without any errors from this location.
ncnn MobileNet_YOLO outcome

Final remarks.

Before a deep learning network can detect objects, all the weight factors must be loaded. This can be a time-consuming process; in the case of MobiYO, it takes 82 mSec. Pre- and post-image processing also takes time to complete. Every picture is resized to the 352x352 input size of the network, which takes around 18 mSec. Afterwards, the resulting rectangles are drawn inside the image and shown on the screen, again some 185 mSec. So, of the 620 mSec total execution time, only 335 mSec is used for the detection itself. Timings are based on a Raspberry Pi 4 with a clock of 1.2 GHz.
Time consumption

Deep learning is memory hungry. The enormous amount of weight factors, in particular, can be problematic. The 2 or 4 Gbyte RAM on a Raspberry Pi 4 is normally sufficient. However, 1 Gbyte may not be enough; especially large networks such as VGG-16 will not fit in the available memory space. The only option left is to increase the memory swap space. This can be done by enlarging the default CONF_SWAPSIZE in /etc/dphys-swapfile from 100 MB to 1024 or 2048 MB. Instructions on how to do this can be found here.
Note: memory swapping will slow your program down slightly. More seriously, it can also wear out your flash (SD card), because it supports only a finite number of write cycles. Don't get paranoid; under normal use it will last for decades, even if you store something like a log file every 30 seconds for years. Huge deep learning models, however, access their weights millions of times during execution, and the swapping rate suddenly increases enormously. This is not a situation you want over a long period. Better to use a 2 or 4 GB Raspberry Pi 4 in this case.
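Enlarging the swap comes down to changing one line in /etc/dphys-swapfile. The sketch below performs the edit with sed on a stand-in copy so you can try it safely; on a real Pi, you would run the sed command on /etc/dphys-swapfile as root and reboot afterwards:

```shell
# Sketch: raise CONF_SWAPSIZE from 100 to 2048 MB.
# A stand-in file is used here; on a real Pi, target /etc/dphys-swapfile
# (as root) and reboot so the new swap size takes effect.
echo 'CONF_SWAPSIZE=100' > dphys-swapfile.sample
sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=2048/' dphys-swapfile.sample
grep CONF_SWAPSIZE dphys-swapfile.sample
```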

Use <chrono> if you want to measure execution times. Other timing methods are all influenced by the burden that the ncnn library places on all processor cores simultaneously. Below you can find some example code on how to measure execution times.
#include <chrono>
#include <iostream>

int main()
{
    std::chrono::steady_clock::time_point begin = std::chrono::steady_clock::now();

    // ... the algorithm you like to time ...

    std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now();
    std::cout << "Time difference = " << std::chrono::duration_cast<std::chrono::milliseconds>(end - begin).count() << " [ms]" << std::endl;
    return 0;
}

