Face and Circle Detection using OpenCV with C++

In this article, I will explain a little bit about how face and circle detection actually work.

If you have trouble installing OpenCV for C++, I made a video covering that.

Edge detection

Introduction

So in the first part, we are going to perform edge detection with OpenCV. OpenCV already has a built-in function that computes edges using the “Canny” method. Since we cannot process the video directly, the processing has to be done frame by frame: we take a single frame and convert it to grayscale, as always using one of the OpenCV functions, “cvtColor”; then we pass the resulting grayscale image through another function, “GaussianBlur”, which applies a Gaussian blur; and finally we pass the result to the “Canny” function, which applies the Canny algorithm. Here is the program used:

Program

Mat frame, edges;                                     // input frame and resulting edge map

cap >> frame;                                         // grab one frame from the video

cvtColor(frame, edges, CV_BGR2GRAY);                  // convert to grayscale
GaussianBlur(edges, edges, Size(7, 7), 1.5, 1.5);     // blur to reduce noise before Canny
Canny(edges, edges, 0, 30, 3);                        // apply the Canny edge detector

So that was the program that does all the work for a single frame. In our case we have a video and not a single frame, so we have to put all this part of the code in a loop to process all the frames of the video, and finally display them frame by frame in the same loop using the built-in OpenCV function “imshow”.
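For reference, here is a minimal sketch of such a loop; the camera index, window name, and key handling are my own assumptions and not taken from the article’s code (it also assumes using namespace cv;):

VideoCapture cap(0);                                  // open the default camera (assumed index 0)
if (!cap.isOpened()) return -1;                       // make sure the capture device is available

Mat frame, edges;
for (;;)
{
    cap >> frame;                                     // grab the next frame
    if (frame.empty()) break;                         // stop when the video ends
    cvtColor(frame, edges, CV_BGR2GRAY);              // grayscale
    GaussianBlur(edges, edges, Size(7, 7), 1.5, 1.5); // smooth
    Canny(edges, edges, 0, 30, 3);                    // edge map
    imshow("edges", edges);                           // display the processed frame
    if (waitKey(30) >= 0) break;                      // stop on any key press
}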

Result

So in this image you can see an example of the result, i.e. one frame of the final video. This image contains only the edges, without the original image, so in the next part we will combine the two.

Embedding the edges in the original image

So now that we have computed the edges, we can insert them into the original image, so that we get the real image with its edges highlighted; but to do this we need the images (frames) in grayscale.

So we will convert them using the same OpenCV function and then add the edges.

So we need to add the following two lines to our old program:

cvtColor(frame, gry, CV_BGR2GRAY);   // grayscale copy of the original frame
gry += edges;                        // add the edge map on top of it

So we simply sum each frame with the edge map obtained from that frame.

We then obtain the following resulting image:

All these steps were run in “release” mode, because in “debug” mode execution is a bit slow, since we have to attach other modules that are not important for the moment.

Detection of circles in the image

Definition

Initially, Hough characterized lines by their slope and their intercept. The disadvantage of this approach is that the slope tends to infinity as the line tends to the vertical. In 1972, Duda and Hart proposed a parameterization by polar coordinates (ρ, θ) that has since been used universally.

The circle detection method is also called HCT (Hough circle transform).

In this method, a circle is described by its Cartesian equation:

(x − a)² + (y − b)² = r²

where

  • the point of coordinates (a, b) is the center of the circle
  • r is the radius.

In space (a, b, r), a circle is characterized by a point. The set of circles passing through a given point M(x, y) forms a cone with vertex (a = x, b = y, r = 0) and axis r. A “good candidate” corresponds to the intersection of several cones.

If the radius of the circle we are looking for is known, we can restrict the search to the plane (a, b). In this plane, the set of circles passing through M is described by the circle of center (a = x, b = y) and radius r. A good candidate therefore lies at the intersection of several circles. We build an accumulation matrix A: each element A(i, j) of the matrix counts the number of circles passing through the point, or through a square of several pixels, corresponding to that element.

If the radius is unknown, the search method consists of building an accumulation hypermatrix in which each cell A(i, j, k) corresponds to a cube of the space (a, b, r), scanning all possible radii from 1 pixel up to the image dimension.

Program

Since we are still using the OpenCV library, it will be easy to integrate the Hough algorithm for the detection of the circles because there is a function that will do all the calculations.

So before passing each frame through the function that applies the Hough algorithm, we must first convert it to grayscale.

Then we set the size and color of the detected circles, after which we draw them on the original image (frame).

And of course we have to put the parts of the code we talked about in a loop so that we can process the whole video.
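As an illustration, here is a rough per-frame sketch based on OpenCV’s HoughCircles function, assuming the same VideoCapture cap and loop structure as before; the parameter values (dp, minimum distance, thresholds, radii) and the colors are illustrative assumptions rather than the article’s original settings:

Mat frame, gray;
cap >> frame;                                         // grab the current frame
cvtColor(frame, gray, CV_BGR2GRAY);                   // Hough works on a single-channel image
GaussianBlur(gray, gray, Size(9, 9), 2, 2);           // smoothing reduces false detections

std::vector<Vec3f> circles;                           // each detected circle is (a, b, r)
HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 1, gray.rows / 8, 200, 100, 0, 0);

for (size_t i = 0; i < circles.size(); i++)
{
    Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
    int radius = cvRound(circles[i][2]);
    circle(frame, center, 3, Scalar(0, 255, 0), -1);        // mark the center
    circle(frame, center, radius, Scalar(0, 0, 255), 3);    // draw the detected circle
}
imshow("circles", frame);                             // display the annotated frame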

For the test, I took a sheet of paper with some circles drawn by hand and put the sheet in front of the camera; we can see that the algorithm works very well and was able to detect all the circles.

So we put all these functions in a dialog box where each function can be called from a button, and here is what the dialog looks like:

Face location

Explanation

After having seen the detection of circles in a video or image, we can build a small application that follows the same principle: face localization. We detect the face with a large circle and the eyes with other, smaller circles, and in this way we obtain a localization of the face in a video.

The principle is the same, but this time we are not going to use the same algorithm; we are going to use a Haar cascade.

So the idea of using this method with OpenCV is that we rely on pre-trained models to detect faces. These models are loaded with “CascadeClassifier”, and to detect the face in our images, i.e. to apply these pre-trained models, we call the function “detectMultiScale”. After detecting a face, this function returns a rectangle in the image. We then process the results: the large rectangles are turned into ellipses to mark the face, and the small rectangles indicating the eyes are turned into small circles around the eyes.

Program

So first of all we have to give the paths to the pre-trained models:
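As a sketch, the loading step could look like the following; the file names are the standard Haar cascades shipped with OpenCV, but the exact paths and the variable names face_cascade and eyes_cascade are my own assumptions:

std::string face_cascade_name = "haarcascade_frontalface_alt.xml";     // assumed path to the face model
std::string eyes_cascade_name = "haarcascade_eye_tree_eyeglasses.xml"; // assumed path to the eyes model

if (!face_cascade.load(face_cascade_name)) return -1;   // abort if the face model cannot be loaded
if (!eyes_cascade.load(eyes_cascade_name)) return -1;   // abort if the eyes model cannot be loaded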

And then, of course, you have to open the camera to grab the frames, and you always have to check that the camera is open and that the cascade paths are valid before you start applying the algorithms.

So here is the program that applies the explanation we gave at the beginning: we take each frame, convert it to grayscale, and pass it through the “detectMultiScale” function; then we use the ellipse and circle functions to turn the results of detectMultiScale into an ellipse for the face and circles for the eyes.
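As a minimal sketch of that per-frame step (the scale factor, minNeighbors, minimum sizes and colors are illustrative assumptions, not the article’s original values):

Mat gray;
cvtColor(frame, gray, CV_BGR2GRAY);                   // grayscale
equalizeHist(gray, gray);                             // improve contrast before detection

std::vector<Rect> faces;
face_cascade.detectMultiScale(gray, faces, 1.1, 2, 0 | CV_HAAR_SCALE_IMAGE, Size(30, 30));

for (size_t i = 0; i < faces.size(); i++)
{
    // Turn the face rectangle into an ellipse drawn on the original frame
    Point center(faces[i].x + faces[i].width / 2, faces[i].y + faces[i].height / 2);
    ellipse(frame, center, Size(faces[i].width / 2, faces[i].height / 2),
            0, 0, 360, Scalar(255, 0, 255), 4);

    // Search for the eyes only inside the detected face region
    Mat faceROI = gray(faces[i]);
    std::vector<Rect> eyes;
    eyes_cascade.detectMultiScale(faceROI, eyes, 1.1, 2, 0 | CV_HAAR_SCALE_IMAGE, Size(30, 30));

    for (size_t j = 0; j < eyes.size(); j++)
    {
        // Turn each eye rectangle into a small circle around the eye
        Point eye_center(faces[i].x + eyes[j].x + eyes[j].width / 2,
                         faces[i].y + eyes[j].y + eyes[j].height / 2);
        int radius = cvRound((eyes[j].width + eyes[j].height) * 0.25);
        circle(frame, eye_center, radius, Scalar(255, 0, 0), 4);
    }
}
imshow("Face detection", frame);                      // display the annotated frame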

Notice that for both detections we always call the same function; the only thing that changes is its argument: we pass “face” for the detection of the face and “eyes” for the detection of the eyes.

And of course, you have to declare the pre-trained models with the following two lines:
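Presumably those two lines are the declarations of the classifier objects; assuming the variable names used above, they would look like this:

CascadeClassifier face_cascade;   // pre-trained model for the face
CascadeClassifier eyes_cascade;   // pre-trained model for the eyes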

Results

So here is the result of the program (we also have to put the whole program in a loop so that we can process the whole video live).

And we have the ellipse on the face and the circles around the eyes, so the program works very well.

You can find the code for face detection at this link.
