
Smoke Detection System Research using Fully Connected Method based on Adaboost

Yeunghak Lee1,*, Taesun Kim2, Jaechang Shim3
1Computer Engineering, Andong National Univ. Andong, Republic of Korea, annaturu@ikw.ac.kr
2Dept. of Avionics Eng. College of Aviation, Kyungwoon University, Kumi, Republic of Korea, tskim@ikw.ac.kr
3Computer Engineering, Andong National Univ. Andong, Republic of Korea, jcshim@andong.ac.kr
*Corresponding Author: Yeunghak Lee, Andong National Univ., 1375 Gyeongdong-ro, Andong-si, Republic of Korea, +82-54-820-5645, annaturu@ikw.ac.kr.

© Copyright 2017 Korea Multimedia Society. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Jun 28, 2017 ; Revised: Jun 29, 2017 ; Accepted: Jun 29, 2017

Published Online: Jun 30, 2017

Abstract

Smoke and fire vary in shape and colour. This article proposes a fully connected system that combines two features with the Adaboost algorithm, which constructs a strong classifier as a linear combination of weak classifiers. For each cell we calculate a local histogram feature from gradient magnitudes and orientation bins, the local binary pattern value, and projection vectors. An adaptive weighting value based on the histogram magnitude is applied to improve the recognition rate, and a normalization step is performed to preserve the local region and the shape feature carried by the edge intensity. The extracted features are then passed to the Adaboost algorithm, which builds a strong classifier to classify the objects. A smoke detection system based on the proposed approach achieves higher detection accuracy than other systems.

Keywords: Smoke Detection; Adaboost; Histogram

I. INTRODUCTION

Fire and smoke detection systems fall into two kinds: sensor-based and image-processing-based. Sensor-based systems are limited in detection performance by the surrounding environment. For example, smoke detection sensors have difficulty detecting a fire if ventilation diffuses the air around the sensor at the outbreak of the fire, and the sensitivity of ultraviolet detectors used in flame detection devices is reduced by smoke and other factors that absorb ultraviolet radiation.

Because smoke is a preliminary symptom of fire, early smoke detection helps prevent the fire flame from spreading. Many smoke and fire flame detection algorithms based on video images have been studied in the literature. The majority of these algorithms focus on colour and shape characteristics combined with the temporal behavior of smoke and fire flames [1]. Gubbi [2] used the discrete cosine transform and wavelet transform to extract features and a support vector machine to detect smoke. Ko [3] extracted several features, including colour, wavelet coefficients, motion vectors, and nine feature vectors, and used a random forest to classify block-wise smoke. Frizzi [1] used a deep convolutional neural network implemented with TensorFlow to detect smoke, or smoke and fire.

This paper focuses on the shape and texture of smoke. Characteristic extraction for smoke detection can be classified into single characteristics, multiple characteristics, whole-area characteristics, and local-area characteristics according to how the extracted characteristics are used [4]. In particular, HOG characteristics or improved HOG characteristics are widely utilized in methods for recognizing smoke in CCTV video. Zhu et al. [5] applied HOG characteristics based on a variable block size to improve detection speed.

This paper is organized as follows. Section II introduces the basic feature extraction methods: HOG, LBP, and projection vectors. Section III describes the framework and training of the proposed smoke detection system. The evaluation and detailed analysis of the experimental results are summarized in Section IV. Section V states the conclusion.

II. Feature Extraction

2.1 Histogram of Oriented Gradients (HOGs)

HOG [4] converts the distribution of brightness gradient directions in a local region into a histogram and expresses it as a feature vector, which is used to describe the shape characteristics of an object.

The magnitude m(x, y) and orientation θ(x, y) of a given image are computed using Eqs. (1) and (2). Since unsigned orientations are desired for this implementation, 180° is added to any orientation smaller than 0°.

m(x, y) = \sqrt{ f_x(x, y)^2 + f_y(x, y)^2 }
(1)
\theta(x, y) = \arctan\left( \frac{f_y(x, y)}{f_x(x, y)} \right)
(2)
f_x(x, y) = I(x+1, y) - I(x-1, y), \quad f_y(x, y) = I(x, y+1) - I(x, y-1)
(3)
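As a minimal illustration (not code from the paper) of how Eqs. (1)-(3) can be evaluated, the following Python/NumPy sketch computes the gradient magnitude and unsigned orientation of a grayscale image; the function name and the edge-padding choice are our own assumptions.

```python
import numpy as np

def gradient_magnitude_orientation(I):
    """Gradient magnitude m(x, y) and unsigned orientation theta(x, y), Eqs. (1)-(3)."""
    I = I.astype(np.float64)
    Ipad = np.pad(I, 1, mode="edge")          # replicate borders so every pixel has neighbours
    fx = Ipad[1:-1, 2:] - Ipad[1:-1, :-2]     # f_x(x, y) = I(x+1, y) - I(x-1, y)
    fy = Ipad[2:, 1:-1] - Ipad[:-2, 1:-1]     # f_y(x, y) = I(x, y+1) - I(x, y-1)
    m = np.sqrt(fx ** 2 + fy ** 2)            # Eq. (1)
    theta = np.degrees(np.arctan2(fy, fx)) % 180.0   # Eq. (2), folded into [0, 180)
    return m, theta
```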

In this paper, each cell, as shown in Figure 1 (c), is 8x8 pixels in size and has 9 bins covering the orientation interval [0°, 180°]. The orientation histograms computed in each cell are normalized over blocks of 3x3 cells. The normalization process is summarized in Figure 1 (d), where a block is moved to the right and downward by one cell at a time. Contrast normalization is applied to the local responses to obtain better invariance to illumination, shading, etc. To normalize the cells' orientation histograms, the cells are grouped into blocks (3x3 cells); a measure of the local histogram values is accumulated over each block, and the result is used to normalize the cells in the block. Although there are four different methods for block normalization, the L2-norm normalization Π is implemented using Eq. (4).

Fig. 1. Example of HOG normalization for a two-wheeler image: (a) original image, (b) calculated magnitude vector, (c) cells, (d) blocks.
\Pi = \frac{f}{\sqrt{ \| f \|_2^2 + \varepsilon^2 }}
(4)
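Building on the gradients above, here is a rough sketch of how the 9-bin, 8x8-pixel cell histograms and the L2-norm block normalization of Eq. (4) could be assembled; the cell, block, and ε parameters follow Section 2.1, while the function name and looping strategy are assumptions rather than the authors' implementation.

```python
import numpy as np

def hog_feature(m, theta, cell=8, bins=9, block=3, eps=1e-3):
    """9-bin cell histograms over [0, 180) followed by L2-norm block
    normalization (Eq. 4) with 3x3-cell blocks sliding one cell at a time."""
    cy, cx = m.shape[0] // cell, m.shape[1] // cell
    hist = np.zeros((cy, cx, bins))
    width = 180.0 / bins
    for i in range(cy):
        for j in range(cx):
            mag = m[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            ang = theta[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            idx = np.minimum((ang // width).astype(int), bins - 1)
            for b in range(bins):                     # magnitude-weighted voting per bin
                hist[i, j, b] = mag[idx == b].sum()
    feats = []
    for i in range(cy - block + 1):
        for j in range(cx - block + 1):
            f = hist[i:i + block, j:j + block].ravel()
            feats.append(f / np.sqrt((f ** 2).sum() + eps ** 2))   # Eq. (4)
    return np.concatenate(feats)
```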
2.2 Local Binary Pattern

The Local Binary Pattern (LBP) texture operator, initially proposed by Ojala et al. [6], has been used successfully in various computer vision applications, such as texture description and face recognition.

LBP_{P,R}(x_C, y_C) = \sum_{p=0}^{P-1} s(g_p - g_C) \, 2^p
(5)
s(z) = \begin{cases} 1, & z \geq 0 \\ 0, & \text{otherwise} \end{cases}

where g_C is the intensity value of the center pixel and g_p is the intensity value of a neighboring pixel, P is the total number of neighboring pixels, and R is the radius of the neighborhood. The calculation principle of LBP_{8,1} according to Eq. (5) is shown in Figure 2.

Fig. 2. Example of LBP calculation.
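A small sketch (ours, not the paper's code) of the basic LBP_{8,1} operator of Eq. (5), computing the code for every non-border pixel of a grayscale image:

```python
import numpy as np

def lbp_8_1(I):
    """Basic LBP_{8,1} code (Eq. 5) for every non-border pixel of a grayscale image."""
    I = I.astype(np.int32)
    gc = I[1:-1, 1:-1]                                   # centre pixels g_C
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),       # 8 neighbours, visited clockwise
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(gc)
    for p, (dy, dx) in enumerate(offsets):
        gp = I[1 + dy:I.shape[0] - 1 + dy, 1 + dx:I.shape[1] - 1 + dx]
        code += ((gp - gc) >= 0).astype(np.int32) << p   # s(g_p - g_C) * 2^p
    return code
```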
2.3 Projection Vectors

In general, a projection converts an N-dimensional coordinate system into one of fewer than N dimensions. Shape information can be obtained when a projection is applied to an image. In one dimension, the horizontal projection vector is obtained by a vertical projection, and the vertical projection vector is obtained by a horizontal projection. The binary image of a lizard with its vertical and horizontal projections is shown in Figure 3 [7]. Given a binary image I(i, j) of size M x N, the projection H(k) along a row and the projection V(k) along a column are given by

H(k) = \sum_{j=0}^{N-1} I(k, j)
(6)
V(k) = \sum_{i=0}^{M-1} I(i, k)
(7)
Fig. 3. Example of projection for a binary image.
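The projections of Eqs. (6)-(7) amount to row and column sums of the binary image; a minimal sketch under that reading:

```python
import numpy as np

def projections(B):
    """Row projection H (Eq. 6) and column projection V (Eq. 7) of a binary image B (M x N)."""
    B = (B > 0).astype(np.int32)
    H = B.sum(axis=1)   # H[k]: sum of row k over its N columns
    V = B.sum(axis=0)   # V[k]: sum of column k over its M rows
    return H, V
```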

III. Classifier [8]

Adaboost is a simple learning algorithm that selects a small set of weak classifiers from a large number of potential features and combines them by a weighted majority vote. The algorithm takes as input a training set (x1, y1), …, (xn, yn), where each xi belongs to some domain X and each label yi is in some label set Y.

The final hypothesis H is a weighted majority vote of the T weak hypotheses, where αt is the weight assigned to ht, as in the following procedure.

Given a training set (x_1, y_1), \ldots, (x_n, y_n),

where x_i \in X and y_i \in Y = \{+1, -1\}.

1. Initialize the weights w_{1,i} = \frac{1}{2m} for y_i = +1 and w_{1,i} = \frac{1}{2l} for y_i = -1, where

m: the number of positive images (+1)

l: the number of negative images (−1)

2. For t = 1, \ldots, T:

(a) Normalize the weights,

w_{t,i} \leftarrow \frac{w_{t,i}}{\sum_{j=1}^{n} w_{t,j}}

so that w_t is a probability distribution over the training images for the t-th weak classifier.

(b) For each feature j, train a classifier h_j which is restricted to using that single feature. The error is evaluated with respect to w_t:

\varepsilon_j = \sum_i w_{t,i} \left| h_j(x_i) - y_i \right| .

(c) Choose the classifier, ht, with the lowest error εt

(d) Update the weights:

w_{t+1,i} = w_{t,i} \, \beta_t^{1 - e_i}

where e_i = 0 if example x_i is classified correctly, e_i = 1 otherwise, and \beta_t = \frac{\varepsilon_t}{1 - \varepsilon_t}.

3. Output the final hypothesis:

H(x) = \mathrm{sign}\left( \sum_{t=1}^{T} \alpha_t h_t(x) \right),

where \alpha_t = \log(1 / \beta_t).
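To make the training loop concrete, the sketch below implements discrete AdaBoost with single-feature threshold (decision-stump) weak classifiers, following the steps listed above; the stump form, the function names, and the small constant guarding against a zero error are our assumptions and not necessarily the authors' implementation.

```python
import numpy as np

def train_adaboost(X, y, T):
    """X: (n, d) feature matrix, y: labels in {+1, -1}, T: number of boosting rounds."""
    n, d = X.shape
    m = np.sum(y == +1)                                   # number of positives
    l = np.sum(y == -1)                                   # number of negatives
    w = np.where(y == +1, 1.0 / (2 * m), 1.0 / (2 * l))   # step 1: initialize weights
    stumps = []
    for _ in range(T):
        w = w / w.sum()                                    # step 2(a): normalize weights
        best = None
        for j in range(d):                                 # step 2(b): one feature per weak classifier
            for thr in np.unique(X[:, j]):
                for polarity in (+1, -1):
                    h = np.where(polarity * (X[:, j] - thr) >= 0, 1, -1)
                    err = np.sum(w * (h != y))             # weighted error of this stump
                    if best is None or err < best[0]:
                        best = (err, j, thr, polarity)
        err, j, thr, polarity = best                       # step 2(c): lowest-error classifier h_t
        beta = err / (1.0 - err)
        alpha = np.log(1.0 / (beta + 1e-12))
        h = np.where(polarity * (X[:, j] - thr) >= 0, 1, -1)
        e = (h != y).astype(float)                         # e_i = 0 if correct, 1 otherwise
        w = w * beta ** (1.0 - e)                          # step 2(d): update weights
        stumps.append((alpha, j, thr, polarity))
    return stumps

def predict(stumps, X):
    """Final hypothesis H(x) = sign(sum_t alpha_t * h_t(x))."""
    score = np.zeros(X.shape[0])
    for alpha, j, thr, polarity in stumps:
        score += alpha * np.where(polarity * (X[:, j] - thr) >= 0, 1, -1)
    return np.sign(score)
```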

IV. Experimental Results

In this study, the experiment was carried out on an ordinary user computer (Pentium, 3.1 GHz) with the Visual C++ 6.0 program. An image of smoke can take various shapes depending on the environment. For our purposes, the experiment assumes two classes: smoke and non-smoke. 1,200 normalized smoke images with a size of 64x128 were used, collected from our own photographs and the internet, and divided into training images and test images. Non-smoke images were obtained by randomly extracting pictures from street photographs of ordinary cities. The number of non-smoke images used in training was equal to the number of smoke images, and 817 non-smoke images were used in the test. The experiment was carried out using the HOG method (the most widely utilized), LBP+HOG, and projection vectors. The highest accuracy of each method was calculated with Eq. (8); the results are 85.98%, 84.06%, and 86.19%.

\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}
(8)
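Eq. (8) is the usual accuracy over the confusion-matrix counts; for example:

```python
def accuracy(tp, fp, tn, fn):
    """Accuracy = (TP + TN) / (TP + FP + TN + FN), as in Eq. (8)."""
    return (tp + tn) / float(tp + fp + tn + fn)
```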

As shown in Table 1, the proposed method showed a higher detection rate than the ordinary method.

Table 1. The results of the experiment for the recognition rate (%)

Method           | Recognition Rate (%)
Ordinary HOG     | 85.98
HOG+LBP          | 84.04
Proposed Method  | 86.19

In the case of HOG+LBP, the recognition rate decreased because of the added complexity, but the proposed fully connected method showed an improved result.

V. CONCLUSION

This paper proposed a fully connected combination of two features to improve the detection rate, based on Adaboost, which builds a strong classifier. Accurate and efficient smoke detection in still images is one of the most difficult tasks because of the variety of smoke shapes and poses, as well as environmental conditions and cluttered backgrounds. In this study, we introduced a novel, practical implementation of the solution for weak objects. It has been experimentally demonstrated that the proposed method leads to better classification results than the ordinary method. Future work should consider fire flames, increasing the detection rate, various environments, video, and other algorithms.

ACKNOWLEDGEMENT

This work was supported by a grant from the 2017 Research Funds of Korea Western Power Co., Ltd. (2017-0142).

REFERENCES

[1] S. Frizzi, R. Kaabi, M. Bouchouicha, J. Ginoux, E. Moreau, and F. Fnaiech, "Convolutional Neural Network for Video Fire and Smoke Detection," IECON 2016 - 42nd Annual Conference of the IEEE, pp. 877-882, 2016.

[2] S. Verstockt, A. Vanoosthuyse, S. Van Hoecke, P. Lambert, and R. Walle, "Multi-sensor fire detection by fusing visual and non-visual flame features," in Proceedings of the International Conference on Image and Signal Processing, pp. 333-341, 2010.

[3] B. U. Toreyin, Y. Dedeoglu, U. Gudukbay, and A. E. Cetin, "Computer vision based method for real-time fire and flame detection," Pattern Recognition Letters, Vol. 27, No. 1, pp. 49-58, 2006.

[4] L. Yu, F. Zhao, and Z. An, "Locally Assembled Binary Feature with Feed-forward Cascade for Pedestrian Detection in Intelligent Vehicle," International Conference on Cognitive Informatics, pp. 458-463, Jul. 2010.

[5] Q. Zhu, M. C. Yeh, K. T. Cheng, and S. Avidan, "Fast human detection using a cascade of histograms of oriented gradients," IEEE Conference on Computer Vision and Pattern Recognition, pp. 1491-1498, Jun. 2006.

[6] T. Ojala, M. Pietikäinen, and D. Harwood, "A comparative study of texture measures with classification based on featured distributions," Pattern Recognition, Vol. 29, No. 1, pp. 51-59, Jan. 1996.

[7] R. Jain, Machine Vision, McGraw-Hill, 1995.

[8] Y. H. Lee, T. S. Kim, and J. C. Shim, "Two-wheeler Detection System using Histogram of Oriented Gradients based on Local Correlation Coefficients and Curvature," JMIS, Vol. 2, No. 4, pp. 303-310, 2015.