
Install Qt5 with OpenCV on Raspberry Pi 4 or Jetson Nano.
Last updated: December 7, 2021
Introduction.
This article will help you install Qt5 on your Raspberry Pi 4 or Jetson Nano. After installation, we will build a GUI with an OpenCV interface. At the end of the day, you'll have a live Raspicam or webcam interface in the original Raspbian or Tegra UI style.
Qt5 is a free and open-source, cross-platform framework, especially suited for designing graphical user interfaces. Some well-known GUI applications written with Qt5 are Google Earth and the VLC media player. And of course, you can always use Qt5 for your simple terminal programs. One of the nice features of Qt5 is its excellent portability across platforms. Once you've written your app on, say, a Linux machine, porting the same app to an Apple or Windows machine requires virtually no extra code.
Qt5 is not a simple programming tool like Geany or, for that matter, Code::Blocks. You face a steep learning curve when you start coding in Qt5. Even for a simple GUI program, many concepts must be understood. Ask yourself if it's worth all the time and frustration when you only have some uncomplicated ideas in mind. On the other hand, once you've conquered Qt5, you can call yourself a real pro. And there are many friendly Qt5 forums and tutorials on the net.
This guide will not explain the underlying Qt5 mechanisms; it is not a Qt5 tutorial. The main goal here is the initial installation of Qt5 and the connection between the OpenCV and the Qt5 'world'. Qt5 has its own objects, such as strings and bitmaps, which are slightly different from the standard library and OpenCV implementations. By the way, this is one of the well-known hurdles you have to overcome when working with Qt5.
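To give a small taste of that hurdle, here is a minimal sketch of moving a string between the two worlds; the image conversion is covered later in this article. The snippet only assumes a working Qt5 installation.
// minimal sketch: converting between a std::string and a Qt5 QString
#include <QString>
#include <iostream>
#include <string>

int main()
{
    std::string stdText = "hello Qt5";                  // standard library string
    QString qtText = QString::fromStdString(stdText);   // std::string -> QString
    std::string backAgain = qtText.toStdString();       // QString -> std::string
    std::cout << backAgain << std::endl;
    return 0;
}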
In the old days, when embedded systems had almost no RAM and a clock speed of at most 10 MHz, you were forced to cross-compile. First, the software was developed on a desktop PC and then transferred to the embedded system. It's a cumbersome way of writing software, with limited debugging possibilities. When dedicated I/O had to be addressed, it could turn into a real nightmare.
Today, with a quad-core ARM running at 1.5 GHz or higher, with 2, 4 or even 8 GByte of RAM, there is luckily no need for cross-compilation any more. Just install the whole Qt5 platform on your Raspberry Pi 4 or the Jetson Nano and start writing your code straight away.
Installation.
The installation is simple. There are no dependencies to be installed first. You do need OpenCV installed on your board if you want to reproduce the given example. Please use our guide, as it covers the C++ installation.
# install Qt5
$ sudo apt-get update
$ sudo apt-get upgrade
Buster OS
$ sudo apt-get install qt5-default
$ sudo apt-get install qtcreator
$ sudo apt-get install qtdeclarative5-dev
or Bullseye OS
$ sudo apt-get install qtbase5-dev qtchooser
$ sudo apt-get install qt5-qmake qtbase5-dev-tools
$ sudo apt-get install qtcreator
$ sudo apt-get install qtdeclarative5-dev
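Once the packages are installed, you can optionally check which Qt version qmake points to. On Bullseye, qtchooser may require you to select Qt 5 explicitly.
# optional check of the installed Qt version
$ qmake --version
# if qtchooser complains, select Qt 5 explicitly
$ qmake -qt=5 --version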
Raspberry Pi Buster users have to alter the GTK toolkit version. If you have a Jetson Nano, you can skip this step.
Open the file /etc/xdg/qt5ct/qt5ct.conf with the nano editor and set the style to gtk3. Save and exit with <Ctrl> + <X>, <Y> and <Enter>.
# edit qt5ct.conf on a Raspberry Pi Buster.
$ sudo nano /etc/xdg/qt5ct/qt5ct.conf
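The relevant part of the file typically looks like the excerpt below; the other settings can stay as they are. The exact contents of your qt5ct.conf may differ, the key point is the style entry in the [Appearance] section.
[Appearance]
style=gtk3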

If you forget to set up the GTK toolkit, you may encounter the following warning and errors.
- cannot register existing type 'GtkWidget'
- g_type_add_interface_static: assertion 'G_TYPE_IS_INSTANTIATABLE (instance_type)' failed
- g_type_register_static: assertion 'parent_type > 0' failed
If everything went well, you should see the following welcome screen when you launch the Qt5 creator. In this case on a Jetson Nano.

OpenCV example.
Let's start with a simple live webcam preview in OpenCV. Earlier, we already built an app with solely OpenCV and a webcam. However, it was a terminal application with an OpenCV window showing the video images. Please find the tutorial here. Now we use a genuine GUI for showing the frames. This is all done in the Qt5 environment.
The first step is a GUI design suitable for displaying the images. Second, an OpenCV video capture must be started in a separate thread to process the frames from the webcam. Third, an image display is implemented that converts the OpenCV cv::Mat to a QImage, the Qt5 image format. The last action is to plot the images on the 'canvas' of the GUI.
Let's start with downloading the project from GitHub. Assuming you have unzipped the project in the folder ~/software/Qt5, you get the following situation.

Now open the Qt5 creator. You see the next welcome screen.

Next, click the Open Project button and load the example project file Viewer.pro into the IDE. You see the next screen asking you to configure your project by selecting a so-called kit. Qt5 works with 'kits', a collection of parameters defining an environment, such as a device, a compiler, a desktop etc. It makes cross-platform development easy. In our case, we have just one kit, the Raspberry Pi desktop. Just click the Configure Project button, and you're in business.

Qt5 now loads the project in the familiar IDE environment. We've expanded all the folders and double-clicked the Viewer.pro file to open it in the editor. Most important here are the lines in the red box. These include the OpenCV headers and libraries in your project. Always add these lines if you want to use OpenCV in a Qt5 project. The location of the files will, of course, depend on your OpenCV installation. We used the directories from our OpenCV 4.5 installation here.
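For reference, the OpenCV part of a .pro file typically looks like the sketch below. The include path and the list of -lopencv_* libraries are assumptions; match them to your own OpenCV installation and to the modules your code actually uses.
# OpenCV section of Viewer.pro (sketch, adjust the paths to your installation)
INCLUDEPATH += /usr/local/include/opencv4

LIBS += -L/usr/local/lib \
        -lopencv_core \
        -lopencv_imgproc \
        -lopencv_highgui \
        -lopencv_videoio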

You may want to check out the GUI we created for this application. Double-click the mainwindow.ui file and the GUI design will open. In the screenshot, the QLabel on which the video is plotted is selected.

Without going into too much detail, the conversion of an OpenCV image to a Qt5 bitmap is done in the last part of the myvideocapture.cpp file, in the cvMatToQImage and cvMatToQPixmap routines. The resulting QPixmap can then be plotted on the QLabel canvas.
The first part of this file creates a separate thread that captures the video stream from the webcam. As you know, a GUI application works with events. Events can be mouse clicks, moving a scroll bar, the arrival of an Ethernet packet, you name it. Each event invokes a small piece of code describing what to do. Once done, the application idles, waiting for the next event to occur. It's a wonderful mechanism, as long as handling an event doesn't take much time. For example, if plotting requires some time-consuming image rendering, your application has to wait for this to finish. It may feel like the app has crashed; it becomes unresponsive to your mouse or keyboard.
Better to use a separate thread in that case. The main thread can still take care of common user events while the other thread renders images in the background. This is what happens in this application: one thread constantly receives new images and transforms them into a QPixmap for plotting on the canvas, while the other thread keeps your 'normal' GUI alive.
#include "myvideocapture.h"
#include <QDebug>
MyVideoCapture::MyVideoCapture(QObject *parent)
:QThread { parent }
,mVideoCap { ID_CAMERA }
{
}
void MyVideoCapture::run()
{
if(mVideoCap.isOpened())
{
while (true)
{
mVideoCap >> mFrame;
if(!mFrame.empty())
{
mPixmap = cvMatToQPixmap(mFrame);
emit newPixmapCapture();
}
}
}
}
QImage MyVideoCapture::cvMatToQImage( const cv::Mat &inMat )
{
switch ( inMat.type() )
{
// 8-bit, 4 channel
case CV_8UC4:
{
QImage image( inMat.data,
inMat.cols, inMat.rows,
static_cast<int>(inMat.step),
QImage::Format_ARGB32 );
return image;
}
// 8-bit, 3 channel
case CV_8UC3:
{
QImage image( inMat.data,
inMat.cols, inMat.rows,
static_cast<int>(inMat.step),
QImage::Format_RGB888 );
return image.rgbSwapped();
}
// 8-bit, 1 channel
case CV_8UC1:
{
#if QT_VERSION >= QT_VERSION_CHECK(5, 5, 0)
QImage image( inMat.data,
inMat.cols, inMat.rows,
static_cast<int>(inMat.step),
QImage::Format_Grayscale8 );
#else
static QVector<QRgb> sColorTable;
// only create our color table the first time
if ( sColorTable.isEmpty() )
{
sColorTable.resize( 256 );
for ( int i = 0; i < 256; ++i )
{
sColorTable[i] = qRgb( i, i, i );
}
}
QImage image( inMat.data,
inMat.cols, inMat.rows,
static_cast<int>(inMat.step),
QImage::Format_Indexed8 );
image.setColorTable( sColorTable );
#endif
return image;
}
default:
qWarning() << "ASM::cvMatToQImage() - cv::Mat image type not handled in switch:" << inMat.type();
break;
}
return QImage();
}
QPixmap MyVideoCapture::cvMatToQPixmap( const cv::Mat &inMat )
{
return QPixmap::fromImage( cvMatToQImage( inMat ) );
}
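The project's myvideocapture.h is not reproduced in this article. Reconstructed from the calls made in the .cpp files, it roughly declares the class as sketched below; the ID_CAMERA value and the exact member declarations are assumptions, so check the file in the downloaded project for the real contents.
// myvideocapture.h -- a rough sketch, reconstructed from its use in the .cpp files
#ifndef MYVIDEOCAPTURE_H
#define MYVIDEOCAPTURE_H

#include <QPixmap>
#include <QThread>
#include <opencv2/opencv.hpp>

#define ID_CAMERA 0     // assumption: the first video device (/dev/video0)

class MyVideoCapture : public QThread
{
    Q_OBJECT
public:
    explicit MyVideoCapture(QObject *parent = nullptr);
    QPixmap pixmap() const { return mPixmap; }

signals:
    void newPixmapCapture();            // emitted when a new frame is ready

protected:
    void run() override;                // the capture loop shown above

private:
    QImage cvMatToQImage(const cv::Mat &inMat);
    QPixmap cvMatToQPixmap(const cv::Mat &inMat);

    cv::VideoCapture mVideoCap;
    cv::Mat mFrame;
    QPixmap mPixmap;
};

#endif // MYVIDEOCAPTURE_H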
The rest of the code is more or less standard. The mainwindow.cpp file contains the constructor and destructor of the MainWindow class, whose parents are a QMainWindow and our UI. It has one event handler, the button that launches the OpenCV capture thread. This thread runs endlessly until the main window destructor terminates it.
#include "mainwindow.h"
#include "ui_mainwindow.h"
#include "myvideocapture.h"
MainWindow::MainWindow(QWidget *parent) :
QMainWindow(parent),
ui(new Ui::MainWindow)
{
ui->setupUi(this);
mOpenCV_videoCapture = new MyVideoCapture(this);
connect(mOpenCV_videoCapture, &MyVideoCapture::newPixmapCapture, this, [&]()
{
ui->opencvFrame->setPixmap(mOpenCV_videoCapture->pixmap().scaled(640,480));
});
}
MainWindow::~MainWindow()
{
delete ui;
mOpenCV_videoCapture->terminate();
}
void MainWindow::on_InitOpenCV_button_clicked()
{
mOpenCV_videoCapture->start(QThread::HighestPriority);
}
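The matching mainwindow.h is also not listed here. It is largely the standard Qt Creator template with a pointer to the capture thread added; the sketch below is an assumption of what it roughly contains.
// mainwindow.h -- a rough sketch of the standard Qt Creator header plus the capture thread
#ifndef MAINWINDOW_H
#define MAINWINDOW_H

#include <QMainWindow>

class MyVideoCapture;

QT_BEGIN_NAMESPACE
namespace Ui { class MainWindow; }
QT_END_NAMESPACE

class MainWindow : public QMainWindow
{
    Q_OBJECT
public:
    explicit MainWindow(QWidget *parent = nullptr);
    ~MainWindow();

private slots:
    void on_InitOpenCV_button_clicked();    // connected to the button by its name

private:
    Ui::MainWindow *ui;
    MyVideoCapture *mOpenCV_videoCapture;
};

#endif // MAINWINDOW_H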
To start the GUI example, just use the hammer to compile the code. You can then use either the green arrow to start the application in release mode, or the green arrow with the bug to run it in debug mode. If you would like more information, we refer you to the internet, where you'll find a world of tutorials and forums about Qt5.
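If you prefer the command line over the IDE, the project can also be built with qmake and make. The folder and binary names below are assumptions based on the Viewer.pro project file; use the directory in which the project actually resides.
# build the example from the command line (folder and binary name are assumptions)
$ cd ~/software/Qt5/Viewer
$ qmake Viewer.pro
$ make
$ ./Viewer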

The final outcome.
