Journal of Multimedia Information System
Korea Multimedia Society
Section A

Non-Contact Heart Rate Monitoring from Face Video Utilizing Color Intensity

Sarker Md Sahin1, Qikang Deng1, Jose Castelo1, DoHoon Lee1,*
1Information Convergence Engineering, Pusan National University, Busan, Korea, smdsahin@gmail.com, dengqikang@pusan.ac.kr, jose.castelo@pusan.ac.kr, dohoon@pusan.ac.kr
*Corresponding Author: DoHoon Lee, 2 Busandaehak-ro 63beon-gil, Geumjeong-gu, Busan 46241, Republic of Korea, +82-51-510-3469, dohoon@pusan.ac.kr.

© Copyright 2021 Korea Multimedia Society. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Mar 06, 2021; Accepted: Mar 22, 2021

Published Online: Mar 31, 2021

Abstract

Heart rate is a crucial physiological parameter that provides basic information about the state of the cardiovascular system and is widely used in medical diagnostics and fitness assessments. It has been demonstrated that the photoplethysmographic signal captured from facial video with a low-cost RGB camera makes it possible to retrieve heart rate remotely. Traditional heart rate measurement is mostly obtained by direct contact with the human body; therefore, it can be inconvenient for long-term measurement due to the discomfort it causes to the subject. In this paper, we propose a non-contact approach for remote heart rate measurement that relies on the color intensity variation of the subject’s facial skin. The proposed method is applied to two regions of the subject’s face, the forehead and the cheeks. Three different algorithms are used to measure the heart rate: Fast Fourier Transform (FFT), Independent Component Analysis (ICA), and Principal Component Analysis (PCA). The average accuracy of the three algorithms using the proposed method was 89.25% averaged over both regions. It is also noteworthy that the FastICA algorithm showed a higher average accuracy of more than 92% in both regions. The proposed method obtained 1.94% higher average accuracy than the traditional method based on the average color value.

Keywords: Computer vision; Face video; Heart rate monitoring; Remote photoplethysmography

I. INTRODUCTION

With the development of society, the pursuit of human health is constantly growing, and with it the importance of preventing diseases in daily life. For this reason, it is very important to have a variety of methods for continuously monitoring physiological information. Due to the increased care and awareness about health among individuals, the world has observed a rapid growth of wearable and non-wearable technologies that can measure the state of human health at every moment. Vital signs are good indicators of the health and current condition of an individual. Heart rate can be considered the most basic and important among the vital signs, and it serves as an important physiological parameter for knowing the state of the human body.

Heart rate can be defined as the number of pulses per unit of time, usually expressed as beats per minute, and it expresses the present condition of the heart. Although heart rate changes according to the fitness of the human body, the normal heart rate of a healthy adult at rest ranges from 60 to 100 beats per minute (bpm) [1]. It is also possible to identify heart conditions by monitoring the heart rate. A slower than normal heart rate at rest is called bradycardia (“slow heart rate”); on the other hand, a faster than normal heart rate at rest is called tachycardia (“fast heart rate”) [2]. If the resting heart rate is too high or too low, different cardiovascular symptoms, such as irregular heartbeat, dizziness, fainting, pain, and shortness of breath, can appear.

Heart rate, a significant human physiological parameter, can be measured using the photoplethysmography (PPG) technique. PPG is a simple optical technique used to monitor how much of the light illuminating the surface of tissues is absorbed, which depends on blood circulation in the human body. It is a low-cost and non-invasive technique that measures the pulse wave of blood in the skin, and it also provides valuable information about the cardiovascular system that is crucial for measuring heart rate.

Based on PPG, heart rate measurement methods are classified into two categories: contact and non-contact. Contact measurement of heart rate involves the use of sensors or devices attached to the patient’s skin. However, in some situations the wiring of these sensors can restrict the movement of the patient. Besides this, sometimes it is not possible to attach the sensors due to the condition of the patient’s body. An example can be seen in the Neonatal Intensive Care Unit (NICU), where the use of adhesives to attach sensors to the skin of premature infants can cause pain and skin irritation [3]. Because of these limitations, non-contact techniques that estimate PPG signals without directly attaching a device to the subject’s body can be a more suitable option.

A growing body of literature is exploring the possibility of applying a non-contact camera-based method, known as non-contact PPG, to measure the heart rate. The principle of this method is the subtle change of color intensity on the skin surface caused by the blood circulating in the subject’s body. Using a standard RGB video camera, these changes in skin color intensity can be recorded, and the heart rate can be measured from them using signal processing methods. We note that when the heart rate is measured from RGB camera video data, these methods have some limitations:

  • The subject inevitably presents slight movements while the video is recorded.

  • The measured PPG signal is susceptible to noise caused by the subject’s movement.

In this paper, we propose a computer-vision-based non-contact method for monitoring heart rate from a subject’s face videos. This method utilizes face recognition technology to automatically extract the forehead and cheeks in every frame of the video; the video data is then used to measure the color intensity values of the skin surface, and the heart rate is measured using signal processing algorithms. First, the three color channels, red, green, and blue, are split from the selected face region. The spatial average is taken over the selected region to obtain the photoplethysmographic signal. Then three different algorithms, i.e., Fast Fourier Transform (FFT), Independent Component Analysis (ICA), and Principal Component Analysis (PCA), are applied to the acquired photoplethysmographic signal to extract the heartbeat signal. A Butterworth bandpass filter is used to eliminate noise from the signal before measuring the heart rate. Finally, FFT is applied to the filtered signal and the heart rate is estimated using spectrum analysis in the frequency domain.

In the following sections of this paper, related literature, as well as background information needed for the development of this project, is presented. Additionally, the methodology followed for the proposed heart rate measurement method and the experimental results obtained after its implementation are shown and analyzed.

II. RELATED WORK

Since a non-contact pulse measuring method could be easily integrated into a health monitoring platform, there have been various studies related to the measurement of heart rate using image processing techniques. Recently, non-intrusive heart rate measurement has become a popular topic for commercial and academic purposes. Vital signs such as heart rate can be used to detect human physiological status: “Monitoring physiological parameters, such as heart rate (HR), respiratory rate (RR), heart rate variability (HRV), blood pressure, and oxygen saturation is of great importance to assess an individual’s health condition” [4]. The heart is the most important organ of the body for the estimation and monitoring of heart rate, which is indispensable for the supervision of cardiovascular catastrophes and the treatment of chronic diseases [5]. The first non-contact health monitoring method was explored in 1995 [6]. After that, a non-contact technique for measuring vital signs such as heartbeat and respiration was proposed in 1997 [7]; the authors developed a radar-based vital signs monitoring system operating at a distance of 10 meters from the subject.

Ballistocardiography and PPG are two principles that have been used to estimate the heart rate from human face video [1]. That work also gives a critical review of digital camera-based methods for measuring heart rate from facial skin, discussing the theory and principles of the reviewed methods and how they measure heart rate from face video, as well as significant contributions to improving reliability, their significance, and the remaining challenges.

The plethysmographic signal can be estimated remotely from video recorded with a standard RGB camera in ambient light [8]. The three channels of the RGB color spectrum, red, green, and blue, are used to detect the signal from video of exposed skin tissue. While the green channel holds the strongest plethysmographic signal information, the blue and red channels also carry plethysmographic information. The signal can be detected from multiple locations of the body, but it contains the most important information on the face, especially on the forehead. Although the plethysmographic signal can be detected in the raw color channel data, it is mixed with other sources of color variation such as changes in ambient light or motion. Chen et al. [4] reviewed the remote, video-based measurement of heart rate and the challenges faced by advanced methods, such as motion artifacts and changes of illumination on the skin tissue. They also discussed how non-contact heart rate measurement provides a more comfortable and convenient experience for patients and will be beneficial for medical applications in human healthcare.

III. THEORETICAL BACKGROUND

3.1 Photoplethysmography

PPG is an optical measurement method that can be applied to detect blood volume changes in microvascular tissues, estimated from the skin blood flow using infrared light [9]. It is a non-invasive and low-cost technique for measuring cardiovascular activity, and it is widely used in commercially available clinical devices.

A plethysmograph is used to register and determine the variation in the volume of blood flowing in the body, which depends on the heart pulse [9]. PPG technology requires only optoelectronic components: a light source illuminates the skin tissue, and a photodetector measures the subtle variation of the intensity at the skin surface caused by the perfusion of blood. PPG is a simple, convenient, easy-to-set-up, and economical method for measuring heart rate [10]. The volumetric measurement of an organ in the body, which depends on the fluctuating amount of blood, is called plethysmography; it can therefore be used to detect the pulse rate by continuously observing the change of blood flow. “There are two basic techniques of PPG: transmittance and reflectance” [11]. Within a device, a transmitter emits light onto the skin and a sensor detects and measures how much light returns to the device, as shown in Figure 1. Normally, infrared light is emitted from the source and a phototransistor is used as the detector. When a fingertip is illuminated by a source, three things happen: some of the light is transmitted, some is absorbed, and some is reflected back to the device [12]. The reflected light intensity changes according to the capillary dilation and constriction caused by the blood volume in the fingertip, which varies with the heart rate. In particular, “lower intensity of reflected light indicates the higher volume of blood and vice versa” [13]. The amount of blood flowing through the blood vessels is proportional to the PPG voltage signal. The method cannot be used to determine the absolute amount of blood, but it can detect subtle changes in it.

Fig. 1. Optical heart rate sensing PPG technique.

The PPG signal refers to the light intensity of the skin changing over time [14]. The volumetric change of arterial blood in a PPG signal is related to cardiac activity, while the changes of blood volume in the veins, the optical properties of the tissue, and subtle energy changes in the body are related to the direct current (DC) component. The pulsatile variation mostly occurs because the blood flow varies in the arteries and not in the veins.

The PPG signal is composed of two components, DC and alternating current (AC), as shown in Figure 2 [15]. The AC component corresponds to the variation of blood volume in synchronization with the heartbeat, and it is used to measure heart rate. The DC component is derived from the optical signals reflected or transmitted by the tissue; it is determined by the structure of the tissue as well as by the amount of blood in the veins and blood vessels, and it shows small fractional changes with respiration. Thus, to analyze the AC component, the DC component must be removed, and a small portion of the AC component needs to be filtered to obtain an accurate PPG signal. The fundamental frequency of the AC component is used to detect the heart rate.

Fig. 2. PPG signal [15].
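To make this decomposition concrete, the short sketch below (our own illustration, not code from the paper) removes the DC baseline from a raw PPG trace, assumed to be a 1-D NumPy array ppg, so that only the cardiac-synchronous AC part remains; the dominant frequency of that AC part then corresponds to the heart rate.

import numpy as np
from scipy.signal import detrend

def split_dc_ac(ppg):
    """Separate a raw PPG trace into its static DC baseline and the
    pulsatile AC component used for heart rate estimation."""
    dc = np.mean(ppg)          # baseline from tissue structure and venous blood
    ac = detrend(ppg - dc)     # also remove residual slow drift (e.g., respiration)
    return dc, ac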
3.2 Remote Photoplethysmography

Remote PPG is a technique for the remote measurement of human cardiac activity that detects small pulse-induced variations on the subject’s skin using a multi-wavelength RGB camera [16]. It shares the same principle as PPG, but the measurement is performed remotely: the variation of the reflected RGB light is measured as the brightness difference between specular and diffuse reflection. In recent years, researchers have proposed several remote PPG methods for computing the pulse signal from videos.

“Consider a light source illuminating a piece of human skin tissue containing pulsatile blood and a remote color camera recording this image, as illustrated” in Figure 3 [16]. When light falls on the skin surface, the specular and diffuse reflection that remains after scattering and absorption in the skin tissues varies with the change of blood volume. The observed intensity depends on the distance from the camera and the light source to the measurement point on the subject’s skin. Subtle changes in color can be observed over time depending on the blood circulation, movement, and specular variation of the body.

Fig. 3. “A skin reflection model that contains specular and diffuse reflections, where only the diffuse reflection contains pulsatile information” [17].

IV. METHODOLOGY

The proposed method takes the facial RGB color of the subjects’ videos as input and outputs the measured heart rate. The first step consists of identifying the subject’s face in the input frame and tracking it through each frame of the video. After this, temporal filtering is applied to the selected region of interest (ROI) to isolate the frequency band of interest. Next, the cardiovascular pulse signal is extracted from the face video using component analysis algorithms; our proposed method works with the FFT, PCA [18], and FastICA algorithms. Then, the FFT technique is applied to detect the maximum of the spectrum in each component. The peak of the spectrum indicates the heart rate of the subject.

4.1 Preprocessing Method

An initial preprocessing stage needs to be performed on the video before the component analysis. It includes face detection, ROI tracking, data interpolation, noise removal [19], and temporal filtering. The main steps to detect the heart rate from facial videos are as follows; a more detailed discussion of each can be found in the following sections. First, the face area must be identified in each frame, since it is the part of the video that contains the heart rate information. Second, the desired ROI is selected from the face bounding box. Third, the PPG signal is extracted from the variation over time of the color intensity in the skin ROI. Finally, the signal is processed with noise reduction methods and analyzed to determine the heart rate.

4.2 Face Detection and Tracking

Face detection is an essential step in many applications such as face authentication, recognition, tracking, and emotion recognition. The objective of a face detection algorithm is to determine whether a face appears in the image or not. Detecting faces is an easy task for humans, but for a computer it can be challenging; for this reason, the task has been an active research topic over the past decades.

Using a face detection algorithm, the subject’s face needs to be detected in the video frames, which are the basic units of a video. The heart rate monitoring system needs to process each image frame, one by one, at specific times in the video, and the facial region required by the proposed algorithm has to be tracked in every video frame. It must be noted that the subject needs to maintain a relatively consistent pose across the entire video for the required calculations to be possible. The proposed method needs a reliable face tracking method to perform face detection accurately. For this reason, the cascade object detection method [20], [21] from the Open Source Computer Vision library (OpenCV), based on the Viola-Jones framework [22], was used in this implementation.
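A minimal sketch of this detection step, assuming OpenCV’s Python bindings and the bundled frontal-face Haar cascade; the parameter values below are common defaults rather than settings reported in the paper.

import cv2

# Load the pretrained Viola-Jones (Haar cascade) frontal-face detector shipped with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(frame):
    """Return the (x, y, w, h) bounding box of the largest detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda box: box[2] * box[3])   # keep the largest face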

4.3 Region of Interest Selection

The ROI is selected from the video frames according to specific criteria, and the information contained in this region is then used in the computation process. Making use of a face detection algorithm, the facial pixels need to be identified and separated from the background pixels so that a bounding box can be placed around the face. The ROI is obtained from the area contained in the bounding box. The most suitable areas for observing the skin color intensity variations are the forehead and the cheeks: since these regions show more pronounced changes, the variations are easier to track. The dimensions of the rectangle placed on these areas are relative to the face detection box, and its size varies depending on the distance between the subject and the webcam.
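As an illustration, the sketch below crops forehead and cheek rectangles from the face bounding box; the fractional offsets are our own assumptions for demonstration, since the paper only states that the ROI size scales with the detected face box.

def select_rois(frame, face_box):
    """Crop forehead and cheek ROIs from a frame given an (x, y, w, h) face box."""
    x, y, w, h = face_box
    forehead = frame[y + int(0.08 * h): y + int(0.25 * h),
                     x + int(0.25 * w): x + int(0.75 * w)]
    left_cheek = frame[y + int(0.55 * h): y + int(0.75 * h),
                       x + int(0.15 * w): x + int(0.40 * w)]
    right_cheek = frame[y + int(0.55 * h): y + int(0.75 * h),
                        x + int(0.60 * w): x + int(0.85 * w)]
    return forehead, left_cheek, right_cheek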

4.4 RGB Signal Extraction

From the ROI, the red, green, and blue channel values are extracted to obtain the corresponding signal for each color [23]. Since every video frame consists of three color channels, every pixel is a 3×1 vector of color values. With this information, it is possible to convert the three color channel values into the desired color signals. For this, the three channels of the ROI are separated and the average pixel value is calculated for each row (y-line) of each channel. Each y-line value is normalized by subtracting the mean of the signal and dividing by its standard deviation, and the rows with the highest color intensity values are then selected [24]. The average values over all frames of the video form the red, green, and blue signals. A 3rd-order Butterworth bandpass filter covering the heart rate range from 40 bpm to 180 bpm is applied to these signals to reduce noise. Every step of the proposed method is shown in Figure 4.

Fig. 4. Proposed method sequence to estimate heart rate.
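A sketch of the per-channel spatial averaging and the 3rd-order Butterworth band-pass filter (40–180 bpm, i.e., roughly 0.67–3.0 Hz) described above, assuming NumPy/SciPy and a known video frame rate fs; the helper names are ours, not the authors’.

import numpy as np
from scipy.signal import butter, filtfilt

def rgb_signals(roi_frames):
    """Spatially average each colour channel of the ROI in every frame.
    Returns an array of shape (num_frames, 3) with per-channel means."""
    return np.array([frame.reshape(-1, 3).mean(axis=0) for frame in roi_frames])

def bandpass(signals, fs, low_bpm=40.0, high_bpm=180.0, order=3):
    """Zero-phase 3rd-order Butterworth band-pass over the heart rate band."""
    nyq = 0.5 * fs
    b, a = butter(order, [low_bpm / 60.0 / nyq, high_bpm / 60.0 / nyq], btype="band")
    return filtfilt(b, a, signals, axis=0)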

The most notable difference from the traditional method is the use of a pixel intensity changes method to extract the RGB signal. While ICA and PCA are used to remove noise from the time-series signal, the pixel intensity changes method tries to remove noise at the image level. As described in [24], the pixel intensity changes method takes each channel of the ROI and, for each frame f, calculates the intensity component I(x,y,c,f), where c represents each of the RGB channels. The intensity components of each row of pixels (y) of the ROI are averaged by Eq. (1):

I_B(y,f) = \frac{1}{x_{ROI}} \sum_{x=1}^{x_{ROI}} \left( \sum_{c=R,G,B} I(x,y,c,f) \right)
(1)

Among the I_B(y,f) values obtained, the highest 5% are selected for the estimation of the heart rate. Figure 5 illustrates the process for calculating I_B(y,f) from an image frame.

Fig. 5. Proposed method sequence to estimate heart rate.
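The sketch below implements Eq. (1) and the selection of the brightest 5% of rows for one frame; roi is assumed to be an H×W×3 array, and averaging the selected rows into a single sample per frame is our simplification of the procedure in [24].

import numpy as np

def row_intensity(roi):
    """I_B(y, f): sum R, G, B per pixel, then average over x for each row y."""
    summed = roi.astype(np.float64).sum(axis=2)   # sum over colour channels
    return summed.mean(axis=1)                    # average over the x direction

def brightest_rows_mean(roi, fraction=0.05):
    """Average of the highest 5% of I_B(y, f) values for this frame."""
    ib = row_intensity(roi)
    k = max(1, int(np.ceil(fraction * ib.size)))
    return np.sort(ib)[-k:].mean()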
4.5 Heart Rate Detection

The photoplethysmographic signal is obtained from the selected regions in each video frame by averaging pixel values. The FastICA and PCA signal processing techniques are used to reduce noise, eliminate motion artifacts, and obtain a higher-quality signal. The FFT is then used to compute the Discrete Fourier Transform (DFT) of the noise-free signal; in other words, the frequency content of the noise-free signal is extracted and the heart rate is finally estimated from its power spectrum. The results of the different processing stages can be seen in Figures 6 and 7.

Fig. 6. Recover the source signal to measure heart rate.
Fig. 7. Heart rate measurement sequence: (a) input raw signal, (b) filtered signal, (c) power spectrum, (d) heart rate.
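A possible implementation of this final stage, using scikit-learn’s FastICA as a stand-in for the ICA step and NumPy’s FFT for the power spectrum; the 40–180 bpm search band follows the filter range given earlier, and picking the component with the strongest in-band peak is our assumption.

import numpy as np
from sklearn.decomposition import FastICA

def heart_rate_bpm(filtered_rgb, fs):
    """filtered_rgb: (num_frames, 3) band-passed colour traces; fs: frame rate in Hz."""
    sources = FastICA(n_components=3, random_state=0).fit_transform(filtered_rgb)
    freqs = np.fft.rfftfreq(sources.shape[0], d=1.0 / fs)
    band = (freqs >= 40.0 / 60.0) & (freqs <= 180.0 / 60.0)
    best_bpm, best_power = None, -np.inf
    for component in sources.T:
        power = np.abs(np.fft.rfft(component)) ** 2
        idx = np.argmax(power * band)             # strongest peak inside the band
        if power[idx] > best_power:
            best_power, best_bpm = power[idx], freqs[idx] * 60.0
    return best_bpm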

V. EXPERIMENT RESULT

In this experiment, a web camera was used to record the videos in an indoor environment, and a pulse oximeter was used to record reference heart rate data to validate the findings.

5.1 Collection of Data

The data acquisition involved ten (10) male participants of different ages, ranging from 24 to 35 years. The subjects were from different regions of the world and had different skin colors. The recordings were taken in an indoor environment with sufficient artificial light; the windows of the laboratory room where the videos were recorded were covered during the data acquisition. The participants were informed about the purpose of the study and were asked to sit in front of a webcam placed 0.8 meters from their faces. All participants were asked to keep breathing normally and to maintain a stable position facing the webcam while their video was recorded. At the same time as the face video recording, a pulse oximeter was attached to the participant’s left index finger, and a second camera recorded the pulse oximeter bpm reading. The recording of the device measurements was processed with an optical character recognition (OCR) technique to generate a comma-separated values (CSV) file with the heart rate in bpm for every second.
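A hypothetical sketch of this OCR step, assuming pytesseract as the OCR engine and sampling one display frame per second; neither the tool choice nor the sampling strategy is stated in the paper.

import csv
import cv2
import pytesseract

def oximeter_video_to_csv(video_path, csv_path):
    """Read the oximeter display video and write one (second, bpm) row per second."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["second", "bpm"])
        second = 0
        while True:
            cap.set(cv2.CAP_PROP_POS_FRAMES, int(second * fps))
            ok, frame = cap.read()
            if not ok:
                break
            text = pytesseract.image_to_string(frame, config="--psm 7 digits")
            digits = "".join(ch for ch in text if ch.isdigit())
            if digits:
                writer.writerow([second, int(digits)])
            second += 1
    cap.release()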

Each participant was recorded under the described conditions, obtaining the participant’s face video and a CSV file with the ground truth, which made it possible to determine the heart rate at any specific time in the video. Figure 8 shows an example of the data collection setup.

Fig. 8. Setup for data collection.
5.2 Results and Analysis

The proposed method was evaluated on two regions of the subject’s face, the forehead and the cheeks, since these regions carry the most important plethysmographic signal values. Three algorithms were applied to the photoplethysmographic signal to extract the heart rate value. In the experiments, 60-second face videos were used to evaluate the proposed method. Based on the proposed method, the heart rate value was extracted from the selected region using the change of color intensity value. The FFT technique was applied to the filtered photoplethysmographic signal to obtain the power spectrum, and the heart rate was estimated in the frequency domain: the frequency at which the power spectrum reaches its maximum represents the heart rate. Table 1 presents the heart rate values measured with the proposed method. From this table, the accuracy of the three algorithms was calculated for the selected ROIs. Equations (2) and (3) were used for the accuracy calculation:

Table 1. Heart rate (forehead and cheek regions) in bpm using the proposed method.
Subj.  Forehead (bpm)             Cheek (bpm)                GT (bpm)
       FFT     FastICA  PCA       FFT     FastICA  PCA
P1     72.30   73.86    79.13     67.86   72.84    77.04     75
P2     74.36   75.19    74.10     83.42   73.38    88.17     71
P3     71.09   83.98    72.76     74.97   79.51    81.55     100
P4     70.66   72.56    71.30     71.62   71.89    70.64     83
P5     60.92   79.96    63.74     60.29   66.55    65.97     71
P6     74.94   70.51    71.19     77.30   73.98    73.84     66
P7     59.16   66.76    62.29     61.40   78.23    61.30     73
P8     62.45   68.55    75.08     66.99   74.19    66.87     69
P9     69.05   76.93    67.87     62.04   71.64    67.17     71
P10    63.45   71.77    66.03     57.98   70.07    61.82     72
PE = \frac{|GT - M|}{GT} \times 100,
(2)
Accuracy = 100\% - PE,
(3)

where PE is the percentage error, GT is the ground truth value, and M is the measured value.
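For example, for subject P1’s forehead FastICA estimate in Table 1 (M = 73.86 bpm, GT = 75 bpm), Eqs. (2) and (3) give PE = \frac{|75 - 73.86|}{75} \times 100 \approx 1.52\% and Accuracy \approx 98.48\%.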

The accuracy summary of the proposed method is presented in Table 2. For all algorithms, the proposed method gave an average accuracy of more than 85%. The FastICA algorithm provided the best average accuracy of all the algorithms for both the forehead and cheek regions, at more than 92% for both. The 10th subject achieved the best accuracy of all with the FastICA algorithm, at more than 99% for the forehead region. The 3rd subject gave the worst accuracy of all subjects with the FFT algorithm, at less than 72% for the forehead region.

Table 2. Summary of accuracy on proposed method measurement.
Algorithm   Forehead (Accuracy in %)   Cheek (Accuracy in %)
FFT         87.71                      85.11
FastICA     92.67                      92.33
PCA         89.45                      88.21

Table 1 presents the results obtained with the proposed method. Discrepancies between the predictions and the ground truth can be observed. Possible causes of these differences include movements of the subjects, illumination problems, and other noise present in the environment. To address these issues, further filtering algorithms can be explored in the future to improve the accuracy.

For comparison, the traditional method based on the average color value of the ROI was also used to extract the heart rate. The output results are shown in Table 3, and the corresponding average accuracy of the three algorithms is shown in Table 4. It can be observed that the average accuracy of the average-color-value method is more than 84%, with a maximum accuracy of 93.39% for the FastICA algorithm in the cheek region.

Table 3. Heart rate (forehead and cheek regions) in bpm using the traditional method.
Subj.  Forehead (bpm)             Cheek (bpm)                GT (bpm)
       FFT     FastICA  PCA       FFT     FastICA  PCA
P1     80.45   77.66    85.31     66.79   73.84    71.44     75
P2     77.77   75.93    73.49     63.50   68.28    68.69     71
P3     64.49   71.68    62.04     63.43   73.80    63.12     100
P4     64.80   70.47    68.18     74.84   75.95    73.84     83
P5     60.34   75.38    64.14     62.78   69.23    63.45     71
P6     64.19   63.76    60.19     74.92   65.21    74.03     66
P7     54.70   62.56    55.04     56.73   64.58    57.54     73
P8     75.55   66.84    74.94     74.31   68.95    72.40     69
P9     67.82   65.81    68.20     66.61   70.04    63.44     71
P10    53.65   65.24    50.79     56.39   65.24    59.87     72
Table 4. Summary of accuracy on traditional method measurement.
Algorithm   Forehead (Accuracy in %)   Cheek (Accuracy in %)
FFT         84.35                      84.91
FastICA     90.24                      93.39
PCA         84.19                      86.77

From the accuracies shown in Tables 2 and 4, it can be seen that the proposed method based on the color intensity value gives a higher accuracy for the forehead region with the FastICA algorithm.

Even though the proposed method provided a higher accuracy with all algorithms, in the cheek region the traditional method based on the average color value gave a higher accuracy with the FastICA algorithm. Figures 9 and 10 show a comparison of the accuracy obtained by the proposed and traditional methods on the forehead and cheek ROIs, respectively.

Fig. 9. Comparison of accuracy for the proposed and traditional method on the forehead ROI.
Fig. 10. Comparison of accuracy for the proposed and traditional method on the cheek ROI.

VI. CONCLUSION

Non-contact heart rate estimation methods are becoming more popular thanks to their greater versatility compared with the contact-based measurement methods used in clinical environments. In this paper, the proposed non-contact method estimates heart rate values from RGB face videos using the color intensity value. This method is non-invasive, easier to use in some situations, and has a wider range of applications than the traditional method. It relies on a physiological signal that contains the crucial heart rate information and extracts the heart rate value from the PPG signal. The proposed non-contact, or remote, heart rate monitoring method is easy to implement, low-cost, and comfortable.

Even though the PPG method produces a high accuracy when measuring heart rate, there are individual time windows in which the measurement accuracy decreases. This inconsistency can be due to face movement, light changes, or other kinds of environmental noise.

In this paper, the use of pixel intensity changes is proposed to improve on the traditional method that uses the average color value. The proposed method gives an accuracy higher than 85% when the three algorithms are averaged over the forehead and cheek regions. The FastICA technique gives the highest average accuracy among the three algorithms, with more than 92% in both regions. The results show that the proposed method based on color intensity obtained a higher accuracy in both regions with all the algorithms, except for the FastICA result on the cheek region, which was the only one outperformed by the traditional method. The pixel intensity changes method obtained a 1.94% higher average accuracy than the traditional method.

As follow-up research, improvements to the method focused on filtering out environmental noise to increase the accuracy can be explored. Additionally, an interesting application of the proposed method could be in the field of human expression detection: a non-contact heart rate measurement method such as the one presented in this paper could help state-of-the-art human emotion predictors to improve even further.

Acknowledgement

The authors would like to express their gratitude to the members of the VBLab research laboratory of the Department of Information Convergence Engineering at Pusan National University for reviewing an early version of the paper. Their corrections and suggestions helped to improve its quality. We would also like to thank the people who volunteered for the data collection.

This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2019S1A5A2A03045884).

REFERENCES

[1] M. A. Hassan, A. S. Malik, D. Fofi, N. Saad, B. Karasfi, Y. S. Ali, and F. Meriaudeau, “Heart rate estimation using facial video: A review,” Biomed. Signal Process. Control, vol. 38, pp. 346-360, 2017.
[2] D. S. Short, “The syndrome of alternating bradycardia and tachycardia,” British Heart Journal, vol. 16, no. 2, p. 208, 1954.
[3] S. Davis et al., “Continuous non-contact vital sign monitoring in neonatal intensive care unit,” Healthc. Technol. Lett., vol. 1, no. 3, pp. 87-91, 2014.
[4] X. Chen, J. Cheng, R. Song, Y. Liu, R. Ward, and Z. J. Wang, “Video-based heart rate measurement: Recent advances and future prospects,” IEEE Trans. Instrum. Meas., vol. 68, no. 10, pp. 3600-3615, 2019.
[5] P. K. Jain and A. K. Tiwari, “Heart monitoring systems - A review,” Comput. Biol. Med., vol. 54, pp. 1-13, 2014.
[6] G. Da Costa, “Optical remote sensing of heartbeats,” Opt. Commun., vol. 117, no. 5-6, pp. 395-398, 1995.
[7] E. F. Greneker, “Radar sensing of heartbeat and respiration at a distance with applications of the technology,” in Proceedings of Radar 97 (Conf. Publ. No. 449), pp. 150-154, 1997.
[8] W. Verkruysse, L. O. Svaasand, and J. S. Nelson, “Remote plethysmographic imaging using ambient light,” Opt. Express, vol. 16, no. 26, p. 21434, 2008.
[9] M. Elgendi, “On the analysis of photoplethysmogram signal,” Curr. Cardiol. Rev., vol. 8, pp. 14-25, 2012.
[10] H. Demirezen and C. E. Erdem, “An overview of non-contact photoplethysmography,” pp. 1-4, 2017.
[11] N. Saquib, M. T. I. Papon, I. Ahmad, and A. Rahman, “Measurement of heart rate using photoplethysmography,” in Proceedings of the 2015 Int. Conf. Netw. Syst. Secur. (NSysS), 2015.
[12] S. Hu, J. Zheng, V. Chouliaras, and R. Summers, “Feasibility of imaging photoplethysmography,” in Proceedings of the 1st Int. Conf. Biomed. Eng. Informatics (BMEI), vol. 2, pp. 72-75, 2008.
[13] A. Reisner, P. A. Shaltis, D. McCombie, and H. H. Asada, “Utility of the photoplethysmogram in circulatory monitoring,” Anesthesiology, vol. 108, no. 5, pp. 950-958, 2008.
[14] J. Zheng and S. Hu, “The preliminary investigation of imaging photoplethysmographic system,” J. Phys. Conf. Ser., vol. 85, no. 1, 2007.
[15] P. Y. Chiang et al., “Machine learning classification for assessing the degree of stenosis and blood flow volume at arteriovenous fistulas of hemodialysis patients using a new photoplethysmography sensor device,” Sensors, vol. 19, no. 15, pp. 1-16, 2019.
[16] W. Wang, A. C. den Brinker, S. Stuijk, and G. de Haan, “Algorithmic principles of remote PPG,” IEEE Trans. Biomed. Eng., vol. 64, no. 7, pp. 1479-1491, 2017.
[17] H. Abdi and L. J. Williams, “Principal component analysis,” Wiley Interdiscip. Rev. Comput. Stat., vol. 2, no. 4, pp. 433-459, 2010.
[18] H. Boubenna and D. Lee, “Feature selection for facial emotion recognition based on genetic algorithm,” in Proceedings of the 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), Changsha, China, pp. 511-517, 2016.
[19] H. Boubenna and D. Lee, “Image-based emotion recognition using evolutionary algorithms,” Biologically Inspired Cognitive Architectures, vol. 24, pp. 70-76, 2018.
[20] J.-H. Kim, B.-G. Kim, P. P. Roy, and D. Jeong, “Efficient facial expression recognition algorithm based on hierarchical deep neural network structure,” IEEE Access, vol. 7, pp. 41273-41285, 2019.
[21] D. Jeong, B.-G. Kim, and S.-Y. Dong, “Deep joint spatiotemporal network (DJSTN) for efficient facial expression recognition,” Sensors, vol. 20, no. 7, p. 1936, 2020.
[22] P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in Proceedings of the IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. (CVPR), vol. 1, 2001.
[23] G. Akti and D. Goularas, “Frequency component extraction from color images for specific sound transformation and analysis,” in Proceedings of the 3rd Int. Conf. Image Process. Theory, Tools Appl. (IPTA), pp. 253-258, 2012.
[24] C. Massaroni, E. Schena, S. Silvestri, and S. Maji, “Comparison of two methods for estimating respiratory waveforms from videos without contact,” in Proceedings of the 2019 IEEE Int. Symp. Med. Meas. Appl. (MeMeA), 2019.

Authors

Sarker Md Sahin


received his BS degree in Electrical and Electronic Engineering from the University of Asia Pacific, Bangladesh, in 2013, and his MS degree in Computer Engineering from Pusan National University, Korea, in 2021.

His research interests include image coding algorithms, video coding techniques, and heart rate monitoring in computer vision.

Qikang Deng


received his BS degree in Computer Science from South China University, China, in 2017. In 2020, he joined the VB Laboratory in the Department of Computer Engineering at Pusan National University to pursue his MS degree.

His research interests include autonomous vehicles, biomedical image analysis, and image generation.

José Cruz Castelo Beltrán


received a BS degree in Mechatronics Engineering at CETYS University in Mexico. In 2020 he started a Master’s Degree in Computer Engineering at Pusan National University, Korea.

His research interests include biomedical image processing, big data applications, and blockchain technologies.

DoHoon Lee


is a Professor at Pusan National University. He received his B.S., M.S., and Ph.D. degrees from Pusan National University, Korea.

His research interests are machine vision, data analysis, biometrics, and video prediction.