Journal of Multimedia Information System
Korea Multimedia Society
Section A

Lightweight Convolutional Neural Network (CNN) based COVID-19 Detection using X-ray Images

Muneeb A. Khan1, Heemin Park1,*
1Department of Software, Sangmyung University, Cheonan City, Rep. of Korea, muneebkhan046@gmail.com
*Corresponding Author: Heemin Park, Department of Software, Sangmyung University, Cheonan City, Rep. of Korea, heemin@smu.ac.kr

© Copyright 2021 Korea Multimedia Society. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Dec 07, 2021; Revised: Dec 25, 2021; Accepted: Dec 28, 2021

Published Online: Dec 31, 2021

Abstract

In 2019, a novel coronavirus (COVID-19) outbreak started in China and spread all over the world. Countries went into lockdown and closed their borders to minimize the spread of the virus. A shortage of testing kits and trained clinicians motivated researchers and computer scientists to look for ways to automatically diagnose COVID-19 patients from X-ray images and ease the burden on the healthcare system. In recent years, multiple frameworks have been presented, but most of them are trained on very small datasets, which makes clinicians reluctant to adopt them. In this paper, we present a lightweight deep learning-based automatic COVID-19 detection system. We trained our model on more than 22,000 X-ray samples. The proposed model achieved an overall accuracy of 96.88% with a sensitivity of 91.55%.

Keywords: Coronavirus; Computed Tomography; Convolutional Neural Network; X-ray

I. INTRODUCTION

Coronavirus is a member of a vast family of viruses that cause illnesses ranging from the common cold to life-threatening infections. The first case of Severe Acute Respiratory Syndrome Coronavirus 1 (SARS-COV1) was reported in 2002 in Guangdong, China; the virus spread to about 30 countries, with more than 8,000 diagnosed cases and an 11% fatality rate [22]. The second outbreak, the Middle East Respiratory Syndrome Coronavirus (MERS-CoV) epidemic, started in 2013 in Saudi Arabia with over 2,500 confirmed cases and a 35% fatality rate [15].

However, in 2019 a novel Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-COV2) outbreak originated in Wuhan, China and spread all over the world. In February 2020, the WHO named the disease caused by SARS-COV2 "COVID-19" and later declared a worldwide pandemic; to date there have been more than 263 million confirmed cases and 5.23 million deaths [7, 8]. A brief history of the SARS-COV outbreaks is given in Table 1.

Table 1. SARS-COV history overview.
Virus Name | Pandemic Year | Virus Origin | Fatality Rate
SARS-COV1 | 2002 | China | 11%
MERS-COV | 2013 | Saudi Arabia | 35%
SARS-COV2 | 2019 | China | 1.98%

Even before the COVID-19 pandemic, the density of doctors was insufficient in many countries. According to 2020 data, Austria ranks first with 5.36 doctors per 1,000 people, as shown in Figure 1, while the 2020 average across 9 countries was 4 doctors per 1,000 people [14].

Fig. 1. Number of Doctors per 1000 persons (Country Ranking).

Hence, the policies of nations all over the world revolve around controlling the spread of the coronavirus by implementing lockdowns and making sure that the healthcare system does not collapse in the fight against COVID-19.

Multiple tests, such as Polymerase Chain Reaction (PCR), X-rays and Computed Tomography (CT), are considered standard tests for COVID-19 diagnosis. However, PCR is very time consuming and sometimes fails to detect the viral agglomeration, which results in false negatives [5].

Hence, X-rays and Computed Tomography (CT) emerged as key tests for diagnosing the disease and assessing its severity; they are sometimes more successful at identifying COVID-19 than PCR, which can return false negatives. Apart from the worldwide effort to develop medicines and treatments, researchers have sought to exploit Artificial Intelligence (AI) and Deep Learning (DL) for the diagnosis of COVID-19.

In recent years, multiple AI and DL models have been presented for efficient image classification. As a result, these models are often used in Computer-Aided Diagnosis (CAD) of many diseases, such as lung nodules [6, 24], heart diseases [16, 17] and brain diseases [9, 15, 22], to assist clinicians. Machine learning techniques have also been designed to predict liver disease risks and outcomes, such as the risk of liver fibrosis, the prediction of liver cirrhosis, donor selection, and transplant survival and complications [23].

Furthermore, AI has significantly improved the overall accuracy of cancer diagnosis, medicine selection, monitoring of cancer tumors and related outcomes, and researchers have incorporated ethical and safety parameters to make CAD safer and build the trust of patients and clinicians [1, 14]. However, in obstetrics and gynaecology, where ultrasonography is one of the most used imaging procedures, AI has had little influence thus far [24].

The main contributions of this paper are as follows:

  1. A lightweight sequential classification model is presented to classify and diagnose COVID-19 patients using X-ray images.

  2. The study is performed on an extensive dataset of more than 22,000 X-ray images.

  3. The study is carried out on a binary-class dataset, i.e., COVID-19-infected and normal lung X-ray images.

  4. The performance of the proposed model in detecting COVID-19-infected and normal lungs is thoroughly evaluated on multiple metrics.

  5. An efficient, robust, and highly accurate system is presented to clinicians for COVID-19 patient diagnosis.

The rest of the paper is structured as follows. Section 2 provides an overview of current state-of-the-art frameworks. Section 3 explains the methodology, dataset and proposed system architecture. Experimental results and the performance of the proposed model are evaluated and discussed in Section 4. Finally, conclusions and future directions are given in Section 5.

II. RELATED WORK

In this section, we discuss some state-of-the-art CNN frameworks and their widespread applications.

In [4], the authors presented a framework to efficiently detect fights in videos using blur information by employing CNN and transfer learning. The proposed scheme accurately detects 56% and 75% of fights in hockey videos.

Face detection is used in many fields such as security, expression detection and person identification. In [19], the authors presented a novel framework for frontal face generation from multi-view face images using a Generative Adversarial Network (GAN). The proposed scheme considers various factors such as the beard, ears, nose and mouth. The authors of [16] presented facial expression detection and identification based on geometric and feature extraction information, employing a CNN for better results. The proposed scheme achieved an accuracy of 96.5% on the Extended Cohn-Kanade (CK+) dataset and 91.3% accuracy for facial expression detection on the JAFFE dataset.

The researchers in [3] presented a DL model to explore current and commercial medications for "drug repositioning", the process of designing a fast treatment approach by utilizing existing medicines that may be given to infected patients promptly, because it often takes years for new medicines to be successfully evaluated and approved before they are released to the market.

Singh et al. [10] presented a Convolutional Neural Network (CNN) based multi-objective differential evolution (MODE) framework to efficiently classify chest computed tomography (CT) images from persons with and without COVID-19 infection.

The authors of [14] presented a model to improve the performance of the Visual Geometry Group network (VGG19) and Google's MobileNet. They used a dataset of 50 X-ray images, 25 of which were COVID-19 positive. The proposed approach achieved a 90% overall accuracy in identifying patients.

X. Chen et al. [29] presented an automated framework, Residual Attention U-Net, to classify and diagnose COVID-19 patients using CT images. They investigated the link between COVID-19 and pneumonia and its effect on the lungs. Adhikari [21] proposed a framework called "Auto Diagnostic Medical Analysis (ADMA)" which searches for and identifies infected parts of the lungs to assist clinicians; the study used both X-ray and CT imaging for efficient diagnosis, and a DenseNet network can be adopted to delineate and label infected regions. Shuai et al. [26] proposed a hybrid transfer learning model with multiple (internal and external) validation to diagnose COVID-19 patients. They used only CT scan images for classification and achieved an overall accuracy of 89.5% with a sensitivity of 87%.

Narin et al. [2] proposed a hybrid DL model based on ResNet50, InceptionV3, and Inception-ResNetV2 to detect COVID-19-infected lungs from X-ray imaging. Ghoshal et al. [11] presented a Bayesian CNN and VGG16 based framework to diagnose COVID-19 using X-ray images, improving the interpretability of DL to diagnose COVID-19 patients more accurately and efficiently.

A comparative analysis of the related work in terms of accuracy and precision is given in Table 2.

Table 2. Comparison of AI-based COVID-19 systems.
Ref. | Modality | Dataset Samples | Number of Classes | Accuracy | Precision
[2] | X-ray | 8088 | 4 | 96.1% | 76.5%
[10] | CT scan | 40 | 2 | 93.4% | N/A
[14] | X-ray | 50 | 2 | 90% | 91.5%
[21] | X-ray and CT scan | 4344 | 2 | 89% | 90.1%
[26] | CT scan | 1065 | 3 | 89.5% | N/A
[29] | CT scan | 110 | 2 | 89% | 95%

III. METHODOLOGY

In this section, we explain the dataset, system architecture, proposed framework, and experimental environment.

3.1. Dataset

The dataset plays a pivotal role in the accuracy of the system, because a bigger dataset allows the system to be trained more thoroughly. Moreover, results acquired from systems trained on a very scarce dataset can hardly be trusted and inspire little confidence.

In this study, a dense chest X-ray image dataset was provided by Dr. Muhammad Haziq Khan1. The acquired dataset consists mainly of chest X-ray images of SARS-COV2 patients and healthy persons.

Overall, the dataset contains a total of 22,294 image samples, of which 6,276 are from COVID-19 patients while the rest are healthy chest X-rays. The collected samples range from 1112 x 624 to 2170 x 1953 pixels. The dataset is randomly divided into training and testing sets with a 70% to 30% ratio, respectively.

Our experimentation is based on the binary-class dataset, i.e., COVID-19-infected and normal lung X-ray images, as shown in Figure 2.

Fig. 2. Collected Chest X-ray Samples (a) Normal (b) COVID-19.
3.2. System Architecture

We present a lightweight CNN model for automatic and efficient COVID-19 image classification. Figure 3 displays the overall workflow of the proposed system, which can be divided into three parts.

Fig. 3. Workflow of Proposed System.
3.2.1. Preprocessing

The dataset is very diverse in terms of image sizes, ranging from 1112 x 624 to 2170 x 1953 pixels. To create uniformity and consistency, we rescale the sample images to 224 x 224 pixels. Furthermore, one-hot encoding is applied to the data labels to create a distinction between positive and negative COVID-19 X-ray images.
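As a rough illustration of this step only, the following sketch resizes a sample to 224 x 224 pixels and one-hot encodes the binary labels; the OpenCV/Keras utilities, file path, and label values are assumptions for illustration, not the paper's exact code.

```python
import cv2
import numpy as np
from tensorflow.keras.utils import to_categorical

IMG_SIZE = 224  # target resolution used in the paper


def preprocess_image(path):
    """Load an X-ray, convert it to greyscale, and rescale it to 224 x 224."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
    return img.astype("float32") / 255.0  # normalize pixel intensities to [0, 1]


# Hypothetical integer labels: 0 = normal, 1 = COVID-19 positive
labels = np.array([0, 1, 1, 0])
one_hot_labels = to_categorical(labels, num_classes=2)
```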

3.2.2. Model Training

Before training, we split the dataset into two parts (training and testing) with a 70 to 30 ratio. This means that out of 22,294 X-ray samples, we use 30% (6,687 samples) for testing and 70% (15,607 samples) for the training phase.
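A minimal sketch of such a 70/30 split is shown below; scikit-learn's splitter and the small placeholder arrays (standing in for the real 22,294 preprocessed images) are assumptions for illustration, since the paper does not specify the splitting utility.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Small placeholder arrays; in the paper the real data has 22,294 greyscale
# images of 224 x 224 pixels and their one-hot labels.
X = np.zeros((200, 224, 224, 1), dtype="float32")
y = np.zeros((200, 2), dtype="float32")
y[:60, 1] = 1.0   # placeholder COVID-19 samples
y[60:, 0] = 1.0   # placeholder normal samples

X_train, X_test, y_train, y_test = train_test_split(
    X, y,
    test_size=0.30,             # 30% held out for testing
    stratify=y.argmax(axis=1),  # preserve the COVID-19 / normal ratio in both splits
    random_state=42,            # arbitrary seed for reproducibility
)

print(X_train.shape[0], "training samples,", X_test.shape[0], "testing samples")
```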

In addition, image segmentation and feature extraction are performed, and the extracted features are forwarded to the proposed lightweight sequential model for training.

3.2.3. Classification and Diagnosis

After the training phase is completed, testing samples are forwarded to the trained model for evaluation. The trained model automatically classifies each sample as COVID-19 positive or negative, as shown in Figure 3. Furthermore, the performance of the overall system is evaluated based on multiple parameters such as F1-score, accuracy, and specificity.

3.3. Experimental Setup

We implemented the model in Keras on TensorFlow v2 with a learning rate of 0.01 using the Adam optimizer, and a total of 25 epochs were set for model training. The experiments were performed in Jupyter Notebook on a machine equipped with a GeForce RTX 2080 Ti GPU. The main code is written in Python; version details are shown in Table 3.

Table 3. Experimental environment details.
Component | Version/Details
CUDA | 11.2
Operating System | Ubuntu 18.04
GPU | GeForce RTX 2080 Ti
Python | 3.6.9

Each image sample is converted to greyscale. Furthermore, to minimize bias during model training, we apply shearing, shuffling, rotation, flipping, zooming and rescaling. We found that a shear range of 0.2, a zoom range of 0.3 and horizontal flipping are most apt for training.
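A sketch of this augmentation setup with Keras' ImageDataGenerator, using the shear, zoom, and flip values reported above; the directory layout, batch size, and rescaling factor are assumptions, not the paper's exact configuration.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,      # rescale pixel intensities to [0, 1]
    shear_range=0.2,        # shear range reported in the paper
    zoom_range=0.3,         # zoom range reported in the paper
    horizontal_flip=True,   # horizontal flipping
)

# Hypothetical directory layout: data/train/covid and data/train/normal sub-folders
train_generator = train_datagen.flow_from_directory(
    "data/train",
    target_size=(224, 224),
    color_mode="grayscale",    # images are converted to greyscale
    class_mode="categorical",  # matches the one-hot encoded labels
    batch_size=32,
    shuffle=True,
)
```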

The implemented model has multiple convolutional layers, each of which is followed by max-pooling and batch normalization. A detailed structure of the model is illustrated as a block diagram in Figure 4.

Fig. 4. Block Diagram of Proposed System.

IV. RESULTS AND DISCUSSION

The model has been thoroughly evaluated on multiple performance parameters, i.e., accuracy, precision, specificity, etc. Furthermore, we trained and tested our proposed model on a publicly available dataset to validate its performance.

4.1. Confusion Matrix

The confusion matrix is a salient method to analyze the performance of a model. It comprises four parameters: True Positive (TP), False Positive (FP), False Negative (FN), and True Negative (TN).
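For reference, the evaluation metrics reported below can all be derived from these four counts; a minimal, self-contained sketch follows, where scikit-learn's confusion_matrix and the tiny placeholder label vectors are illustrative assumptions rather than the paper's test results.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Placeholder label vectors standing in for real test-set outcomes
# (1 = COVID-19 positive, 0 = normal; values are illustrative only).
true_classes = np.array([1, 1, 1, 0, 0, 0, 0, 1])
pred_classes = np.array([1, 1, 0, 0, 0, 1, 0, 1])

tn, fp, fn, tp = confusion_matrix(true_classes, pred_classes).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)              # also called sensitivity
specificity = tn / (tn + fp)
f1_score = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.3f} precision={precision:.3f} recall={recall:.3f} "
      f"specificity={specificity:.3f} f1={f1_score:.3f}")
```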

We evaluate our proposed model on binary-class dataset samples (COVID-19 and normal X-ray images). A total of 6,687 samples are selected for testing, containing 1,760 COVID-19 and 4,927 healthy X-ray images. From the confusion matrix (shown in Figure 5), it is clear that we successfully diagnosed 1,723 out of 1,760 COVID-19 patients, thus achieving 97.38% accuracy. At the same time, we successfully detected 4,768 normal people out of 4,927 and achieved an accuracy of 96.77%.

Fig. 5. Confusion matrix.

This indicates that the presented model achieved an overall accuracy of 96.88% in diagnosing COVID-19 patients from the randomly chosen test set. In addition, a detailed analysis of the results is presented in Table 4.

Table 4. Performance parameter analysis.
Class | Precision | Recall | F1 Score
Overall | 95.5% | 97.5% | 96.5%
COVID-19 | 92% | 98% | 95%
Normal | 99% | 97% | 98%
4.2. Training Accuracy

We train our model for 25 epochs with a learning rate of 0.01. The average training accuracy is more than 94% with an error rate of 0.3%. During training, however, an accuracy of more than 98% with a 0.012% error rate was reached (as shown in Figure 6), which is quite good. Such accuracy strengthens confidence in the model, especially for a CAD system.

Fig. 6. Training and validation curves of the proposed model.
4.3. Validation on Public Dataset

The proposed model has been thoroughly validated using a publicly available chest X-ray dataset2. This dataset [11] contains both CT scan and chest X-ray images; however, we only use the X-ray images to validate the model and calculate its accuracy. It has a total of 9,546 X-ray images (5,501 normal and 4,045 COVID-19).

A total of 6,682 images are randomly selected for training while 2,864 images are selected for testing. Among the 2,864 testing X-ray samples, there are 1,145 healthy and 1,719 COVID-19 images. From Figure 7, it is clear that we successfully diagnosed 1,672 out of 1,719 COVID-19 patients, thus achieving 97.26% accuracy.

Fig. 7. Confusion matrix.

On the other hand, the model successfully detected 1,095 healthy people out of 1,145 random X-ray samples, achieving an accuracy of 95.63%. Overall, the proposed model achieved an accuracy of 96.61% on this dataset.

During training on this dataset, the model achieved an accuracy of more than 98% with an error of 0.061%, and in some epochs it achieved an accuracy of more than 99% with an error rate of 0.017%, as shown in Figure 8.

Fig. 8. Training and validation curves of the proposed model.

Although we trained and evaluated our model on an extensive dataset, it still misclassifies some COVID-19-positive patients. These misdiagnoses occur due to a hazy line of demarcation, shadow or blur effects, or a wrong X-ray imaging angle (as shown in Figure 9). Furthermore, the two classes (COVID-19 and normal X-ray images) should be balanced so that bias and variance between the training and test subsets can be reduced.

Fig. 9. (a) Blurred X-ray image (b) Side-view X-ray image.

V. CONCLUSION

An early diagnosis of COVID-19 is key to stopping the spread of the disease to other people. Recently, multiple frameworks have been presented to diagnose COVID-19 by exploiting AI and DL models. However, most of these frameworks are either trained on a very scarce dataset or take too much time. In this paper, we propose a lightweight CNN model trained on more than 22,000 X-ray samples. The proposed model achieved an overall accuracy of 96.88% with a sensitivity of 91.55%.

Notes

1. District Headquarter Hospital, Vehari, Pakistan

Acknowledgement

We are thankful to Dr. Muhammad Haziq Khan for his professional contributions and insights throughout the development of the study.

This research was funded by a 2020 research Grant from Sangmyung University, South Korea.

REFERENCES

[1]. Al-shamasneh and U. Obaidellah, "Artificial Intelligence Techniques for Cancer Detection and Classification: Review Study," European Scientific Journal, vol. 13, no. 3, 2017.

[2]. A. Narin, C. Kaya and Z. Pamuk, "Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks," Pattern Analysis and Applications, vol. 24, no. 3, pp. 1207-1220, 2021.

[3]. Beck, B. Shin, Y. Choi, S. Park and K. Kang, "Predicting commercially available antiviral drugs that may act on the novel coronavirus (SARS-CoV-2) through a drug-target interaction deep learning model," Computational and Structural Biotechnology Journal, vol. 18, pp. 784-790, 2020.

[4]. B. Mukherjee, "Fight Detection in Hockey Videos using Deep Network," Koreascience.or.kr, 2021. [Online]. Available: http://koreascience.or.kr/article/JAKO201707851608120.page.

[5]. Wang, Y. Sun, T. Duong, L. Nguyen and L. Hanzo, "Risk-Aware Identification of Highly Suspected COVID-19 Cases in Social IoT: A Joint Graph Theory and Reinforcement Learning Approach," IEEE Access, vol. 8, pp. 115655-115661, 2020.

[6]. Liang, Y. Liu, M. Wu, F. Garcia-Castro, A. Alberich-Bayarri and F. Wu, "Identifying pulmonary nodules or masses on chest radiography using deep learning: external validation and strategies to improve clinical practice," Clinical Radiology, vol. 75, no. 1, pp. 38-45, 2020.

[7]. "Coronavirus," Who.int, 2021. [Online]. Available: https://www.who.int/health-topics/coronavirus. [Accessed: 04-Dec-2021].

[8]. "Coronavirus disease (COVID-19) – World Health Organization," Who.int, 2021. [Online]. Available: https://www.who.int/emergencies/diseases/novel-coronavirus-2019. [Accessed: 04-Dec-2021].

[9]. Bangare, "Brain Tumor Detection Using Machine Learning Approach," Design Engineering, no. 7, pp. 7557-7566, 2021. [Online]. Available: http://www.thedesignengineering.com/index.php/DE/article/view/3264.

[10]. D. Singh, V. Kumar, Vaishali and M. Kaur, "Classification of COVID-19 patients from chest CT images using multi-objective differential evolution–based convolutional neural networks," European Journal of Clinical Microbiology & Infectious Diseases, vol. 39, no. 7, pp. 1379-1389, 2020.

[11]. W. El-Shafai and F. Abd El-Samie, "Extensive COVID-19 X-Ray and CT Chest Images Dataset," Mendeley Data, V3, 2020.

[12]. E. E.-D. Hemdan, M. Shouman and M. Karar, "COVIDX-Net: A Framework of Deep Learning Classifiers to Diagnose COVID-19 in X-Ray Images," arXiv.org, 2020. [Online]. Available: https://arxiv.org/abs/2003.11055.

[13]. B. Ghoshal and A. Tucker, "Estimating uncertainty and interpretability in deep learning for coronavirus (COVID-19) detection," arXiv preprint arXiv:2003.10769, 2020.

[14]. H.-E. Kim et al., "Changes in cancer detection and false-positive recall in mammography using artificial intelligence: a retrospective, multireader study," The Lancet Digital Health, vol. 2, no. 3, pp. e138-e148, 2020.

[15]. J.-H. Kim, B.-G. Kim, P. P. Roy and D.-M. Jeong, "Efficient Facial Expression Recognition Algorithm Based on Hierarchical Deep Neural Network Structure," IEEE Access, vol. 7, pp. 41273-41285, 2019.

[16]. H. Rathore, A. Al-Ali, A. Mohamed, X. Du and M. Guizani, "A Novel Deep Learning Strategy for Classifying Different Attack Patterns for Deep Brain Implants," IEEE Access, vol. 7, pp. 24154-24164, 2019.

[17]. "Human Resources - Doctors," Organization for Economic Co-operation and Development (OECD), 2021. [Online]. Available: https://data.oecd.org/healthres/doctors.html. [Accessed: 04-Dec-2021].

[18]. "Middle East respiratory syndrome coronavirus (MERS-CoV)," Who.int, 2019. [Online]. Available: https://www.who.int/health-topics/middle-east-respiratory-syndrome-coronavirus-mers#tab=tab_1. [Accessed: 04-Dec-2021].

[19]. M. Karar, S. El-Khafif and M. El-Brawany, "Automated Diagnosis of Heart Sounds Using Rule-Based Classification Tree," Journal of Medical Systems, vol. 41, no. 4, 2017.

[20]. M. Karar, D. Merk, C. Chalopin, T. Walther, V. Falk and O. Burgert, "Aortic valve prosthesis tracking for transapical aortic valve implantation," International Journal of Computer Assisted Radiology and Surgery, vol. 6, no. 5, pp. 583-590, 2010.

[21]. N. C. Das Adhikari, "Infection Severity Detection of CoVID19 from X-Rays and CT Scans Using Artificial Intelligence," International Journal of Computer, vol. 38, no. 1, pp. 73-92, May 2020.

[22]. N. Ghassemi, A. Shoeibi and M. Rouhani, "Deep neural network with generative adversarial networks pre-training for brain tumor classification based on MR images," Biomedical Signal Processing and Control, vol. 57, p. 101678, 2020.

[23]. P. Decharatanachart, R. Chaiteerakij, T. Tiyarattanachai and S. Treeprasertsuk, "Application of artificial intelligence in chronic liver diseases: a systematic review and meta-analysis," BMC Gastroenterology, vol. 21, no. 1, 2021.

[24]. S. Hussein, P. Kandel, C. Bolan, M. Wallace and U. Bagci, "Lung and Pancreatic Tumor Characterization in the Deep Learning Era: Novel Supervised and Unsupervised Learning Approaches," IEEE Transactions on Medical Imaging, vol. 38, no. 8, pp. 1777-1787, 2019.

[25]. "Summary of probable SARS cases with onset of illness from 1 November 2002 to 31 July 2003," Who.int, 2015. [Online]. Available: https://www.who.int/publications/m/item/summary-of-probable-sars-cases-with-onset-of-illness-from-1-november-2002-to-31-july-2003. [Accessed: 04-Dec-2021].

[26]. S. Wang et al., "A deep learning algorithm using CT images to screen for Corona virus disease (COVID-19)," European Radiology, 2021.

[27]. Y. Minoda et al., "Efficacy of endoscopic ultrasound with artificial intelligence for the diagnosis of gastrointestinal stromal tumors," Journal of Gastroenterology, vol. 55, no. 12, pp. 1119-1126, 2020.

[28]. Y.-H. Heo, B.-G. Kim and P. P. Roy, "Frontal Face Generation Algorithm from Multi-view Images Based on Generative Adversarial Network," Journal of Multimedia Information System, vol. 8, no. 2, pp. 85-92, 2021.

[29]. X. Chen, L. Yao and Y. Zhang, "Residual Attention U-Net for Automated Multi-Class Segmentation of COVID-19 Chest CT Images," arXiv.org, 2020. [Online]. Available: https://arxiv.org/abs/2004.05645.

Authors

Muneeb Ahmed Khan


received the B.S. degree in Computer Engineering from the Department of Electrical and Computer Engineering, COMSATS University, Pakistan, and the M.S. degree in Information Technology (IT) from the School of Electrical Engineering and Computer Science, National University of Sciences and Technology (NUST), Pakistan, in 2014 and 2019, respectively. He has served as a lecturer in the Department of Computer Science, Institute of Southern Punjab. Currently, he is pursuing his Ph.D. in the Department of Software, Sangmyung University, Cheonan, South Korea.

His research interests include Internet of Things, artificial intelligence, computer vision, reinforcement learning and pervasive computing.

Heemin Park


received the B.S. and M.S. degrees in computer science from Sogang University, South Korea, in 1993 and 1995, respectively, and the Ph.D. degree in electrical and computer engineering from the University of California, Los Angeles in 2006. He was with Yonsei University, Sookmyung Women’s University, and Samsung Electronics, South Korea. He is currently an Associate Professor in the Department of Software, Sangmyung University, Cheonan, South Korea.

His research interests include artificial intelligence, computer vision, cyber physical systems, and pervasive computing.