Section A

Implementation of Vehicle Plate Recognition Using Depth Camera

Eun-seok Choi1, Soon-kak Kwon2,*
1Dept. of Computer Software Engineering, Dong-eui University, Busan, Korea, 2883060@naver.com
2Dept. of Computer Software Engineering, Dong-eui University, Busan, Korea, skkwon@deu.ac.kr
*Corresponding Author : Soon-kak Kwon, Address: (47340) Eomgang-ro 176, Busanjin-gu, Busan, Korea, Tel: +82-51-890-1727, E-mail: skkwon@deu.ac.kr

© Copyright 2019 Korea Multimedia Society. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Sep 06, 2019; Revised: Sep 20, 2019; Accepted: Sep 23, 2019

Published Online: Sep 30, 2019

Abstract

In this paper, a method of detecting vehicle plates from depth pictures is proposed. A vehicle plate can be recognized by detecting planar areas. First, the plane coefficients of each square block are calculated. Then, blocks belonging to the same plane are grouped by comparing neighboring blocks to determine whether their planes are similar. The width and height of each detected plane area are obtained. If the height and width match those of an actual vehicle plate, the area is recognized as a vehicle plate. Simulation results show that the recognition rate of the proposed method is about 87.8%.

Keywords: Vehicle Plate Detection; Plane Detection; Depth Camera

I. INTRODUCTION

Recently, smart traffic has begun to take the spotlight, and technologies such as autonomous vehicles and smart traffic signals are becoming a foothold for building smart cities. Because of this, related automation technologies exist, and it is necessary to refine and miniaturize them so that they can be applied more and more in real life. Intelligent transportation systems are continuously being introduced, and the use of existing CCTV or black box systems to recognize vehicle numbers is also increasing. Research is being carried out on how to detect license plates quickly, and the results of these studies are being applied to actual products. Smart vehicle plate detection is an essential technology for obtaining the information of vehicles in operation. By detecting the vehicle plate area, the vehicle number in the plate area can be recognized, so that the vehicle information and the driving information can be acquired and utilized.

Methods of detecting features of the exterior of the license plate in color images have mainly been used to recognize a vehicle plate. R. Zunino [1] proposed a method of detecting a vehicle plate region using window segmentation and vector quantization. This method detects areas with large changes in brightness and divides the license plate area into blobs; its disadvantage is that it is difficult to construct a blob when the plate is far from the camera. A method of generating separate fuzzy maps for the edge and color components of the license plate area and detecting the license plate area through fuzzy inference was also studied [2]. A method of vehicle license plate detection using Haar-like features [3] constructed a chain of weak classifiers to learn the morphological arrangement of the characters in the license plate. A method of detecting license plate areas by learning local binary patterns of license plates [4] was also proposed. However, these methods using color videos have the disadvantage that they are difficult to apply in real life because of phenomena such as rotation and perspective projection distortion accompanying a change in camera attitude. There were also computational performance issues, so the methods were hard to apply to real situations.

In this paper, we provide a method of detecting a plate from a depth image in order to overcome the weaknesses of plate detection using color images with respect to illumination change and camera attitude change. When a depth camera is used to recognize an object, the infrared sensor of the depth camera provides the measured distance between the object and the camera as pixel information. Recently, methods of object recognition through depth videos [5-7] have been studied.

Because the plate is structurally flat, an area composed of similar planes can be regarded as a license plate candidate area of a vehicle. After dividing the image into square blocks for detection of planar areas, the depth information in each block can be used to fit the nearest plane to its surface. If the in-block plane error is less than a certain value, the block is considered to be planar. Thereafter, the normal vectors of the planes of adjacent blocks are compared to measure their degree of similarity. If the degree of similarity between two blocks is high enough, the two blocks can be considered to lie on the same plane. Therefore, the plane areas in the image can be obtained. The actual size of each area is then measured using the depth information, and if the actual height and width of the measured area are similar to the size of an actual plate, the area is considered to be a plate area. For each labeled plane area, the actual size of the plane can be measured through the depth image. The width of the plane can be obtained by converting the left-most and right-most pixels of each labeled area into camera coordinates and computing the distance between them. The height of the plane can be obtained in the same way from the top-most and bottom-most pixels of the area. If the measured width and height of the plane area match the width and height of an actual plate, the area is detected as the vehicle plate. The method proposed in this paper can therefore perform vehicle plate detection independently of the lighting environment.

II. Vehicle Plate Recognition Using Depth Camera

In the case of vehicle plate detection using RGB images, the contrast of the vehicle plate, the color difference information, and the morphological information of the edges are used, without the actual information of the vehicle plate surface or the actual length of the vehicle plate, as shown in Fig. 1. By using the depth information, which indicates the distance to the camera, it is possible to easily obtain information on the surface and the actual length between two points.

Fig. 1. Example of object recognition through camera.

A relationship between a pixel p ≡ (x, y) in the 2D image coordinate system and a point pc ≡ (xc, yc, zc) in the 3D camera coordinate system is represented as Eq. (1) [1].

$x_c = \dfrac{d}{f}x, \quad y_c = \dfrac{d}{f}y, \quad z_c = d,$
(1)

where f is the focal length, which is a camera parameter, and d is the depth pixel value of p.

In order to detect a plane area using the depth information, the 2D image coordinates of each depth pixel should be transformed to the 3D camera coordinate system.
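As an illustrative sketch of this transformation (our own Python/NumPy code, with names such as depth_to_camera assumed rather than taken from the paper), the whole depth picture can be converted at once:

```python
import numpy as np

def depth_to_camera(depth, f):
    """Transform a depth picture into 3D camera coordinates using Eq. (1).

    depth : (H, W) array of depth values d (assumed here to be in millimetres)
    f     : focal length of the depth camera in pixels (423.5 for the D435
            setup described in Section III)
    Returns an (H, W, 3) array of (x_c, y_c, z_c) points.
    """
    h, w = depth.shape
    y, x = np.mgrid[0:h, 0:w]            # image coordinates (x, y) of every pixel
    z_c = depth.astype(np.float64)       # z_c = d
    x_c = z_c * x / f                    # x_c = (d / f) * x
    y_c = z_c * y / f                    # y_c = (d / f) * y
    return np.dstack((x_c, y_c, z_c))
```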

A plane containing a point (xc, yc, zc) on a 3D camera coordinate system is represented as follows:

$a_p x_c + b_p y_c - z_c + c_p = 0,$
(2)

where ap, bp, cp are plane coefficients that determine the plane.

The following matrices are obtained by substituting points in an N×N block into Equation (2):

$AR = B, \quad A = \begin{bmatrix} x_{c1} & y_{c1} & 1 \\ x_{c2} & y_{c2} & 1 \\ \vdots & \vdots & \vdots \\ x_{cn} & y_{cn} & 1 \end{bmatrix}, \quad B = \begin{bmatrix} z_{c1} \\ z_{c2} \\ \vdots \\ z_{cn} \end{bmatrix}, \quad R = \begin{bmatrix} a_p \\ b_p \\ c_p \end{bmatrix},$
(3)

where (xci, yci, zci) are the coordinates of the i-th point in the block and n is the number of points in the block. If n is more than 3, the plane that has the smallest distance from the given points can be obtained by finding the plane coefficients through the following equation:

$R = A^{+}B,$
(4)

where A+ is the pseudo-inverse matrix of A, which is calculated by the following equation:

$A^{+} = (A^{T}A)^{-1}A^{T}.$
(5)

In Equation (2), the normal vector of the plane is (ap, bp, −1), and the distance between the origin of the camera coordinate system and the plane is cp, as shown in Fig. 2.

Fig. 2. Normal vector and distance from origin of modeled plane.
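The least-squares plane fit of Eqs. (3)-(5) can be sketched as below. This is our own illustration, assuming the block's points are already given in camera coordinates; np.linalg.pinv is used in place of the explicit (ATA)−1AT product:

```python
import numpy as np

def fit_block_plane(points):
    """Fit z_c = a_p*x_c + b_p*y_c + c_p to the points of one N x N block
    using the pseudo-inverse solution of Eqs. (3)-(5)."""
    points = np.asarray(points, dtype=np.float64)   # (n, 3) camera coordinates
    A = np.column_stack((points[:, 0], points[:, 1], np.ones(len(points))))
    B = points[:, 2]
    # R = A^+ B; np.linalg.pinv computes the pseudo-inverse (A^T A)^{-1} A^T
    # (via SVD, which also copes with nearly singular A).
    R = np.linalg.pinv(A) @ B
    error = np.mean(np.abs(A @ R - B))  # in-block plane fitting error
    return tuple(R), error
```

A block would then be accepted as planar only when the returned error is below a chosen threshold, as described in the introduction.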

If two planes are similar, their normal vectors and their distances from the origin should be similar. Therefore, the plane similarity between adjacent blocks can be measured by comparing the coefficients of the modeled planes of the blocks.

In order to measure the plane similarity, the angle between the normal vectors and the distance difference between the planes are calculated by the following equations:

$\delta_{12} = \cos\theta = \dfrac{n_1 \cdot n_2}{|n_1||n_2|} = \dfrac{a_{p1}a_{p2} + b_{p1}b_{p2} + 1}{\sqrt{a_{p1}^2 + b_{p1}^2 + 1}\,\sqrt{a_{p2}^2 + b_{p2}^2 + 1}},$
(6)
$\varepsilon_{12} = |c_{p1} - c_{p2}|.$
(7)

If δ12 and ε12 satisfy Equation (8), the planes can be considered to be similar planes.

$\delta_{12} > U, \quad \varepsilon_{12} < D,$
(8)

where U and D are the thresholds for the angle between the normal vectors and for the distance difference, respectively. If the pair of modeled planes for two adjacent blocks satisfies Equation (8), the blocks are grouped into the same plane area.
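As a minimal sketch (our own, not the authors' code), the similarity test of Eqs. (6)-(8) for two adjacent blocks can be written as follows; the default values of U and D are only placeholders taken from the range examined in Section III:

```python
import numpy as np

def similar_planes(plane1, plane2, U=0.95, D=50.0):
    """Plane similarity test of Eq. (8) for two adjacent blocks.

    plane1, plane2 : (a_p, b_p, c_p) coefficients of the modeled planes
    U : threshold on delta_12, the cosine of the angle between normals
    D : threshold on epsilon_12, the difference of the c_p coefficients
    """
    a1, b1, c1 = plane1
    a2, b2, c2 = plane2
    # Eq. (6): cosine of the angle between normals (a1, b1, -1) and (a2, b2, -1).
    delta = (a1 * a2 + b1 * b2 + 1.0) / (
        np.sqrt(a1 ** 2 + b1 ** 2 + 1.0) * np.sqrt(a2 ** 2 + b2 ** 2 + 1.0))
    # Eq. (7): distance difference between the two planes.
    epsilon = abs(c1 - c2)
    return delta > U and epsilon < D
```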

The outline shape of each grouped plane area is extracted in order to find rectangular objects. If the shape of the outline is similar to a polygon with four corners, the area becomes a vehicle plate candidate, as shown in Fig. 3.

Fig. 3. Detection of rectangle object with planar surface.
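The paper does not specify how the outline is extracted. One possible sketch, assuming OpenCV 4.x is available and each grouped plane area is given as a binary mask, approximates the outline by a polygon and keeps areas with four corners:

```python
import cv2

def is_plate_candidate(mask, eps_ratio=0.02):
    """Keep a grouped plane area only if its outline has roughly four corners.

    mask : uint8 binary image where the grouped plane area is 255
    eps_ratio : hypothetical tolerance for the polygon approximation
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    outline = max(contours, key=cv2.contourArea)            # outline of the area
    epsilon = eps_ratio * cv2.arcLength(outline, True)      # approximation tolerance
    approx = cv2.approxPolyDP(outline, epsilon, True)       # simplified polygon
    return len(approx) == 4 and cv2.isContourConvex(approx)
```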

The actual width and height of each vehicle plate candidate are measured through the depth information. The image coordinates of the four pixels at the top-most, bottom-most, left-most, and right-most positions of the candidate area are transformed to 3D camera coordinates through Equation (1). The width and height are measured through the following equation:

$\mathrm{height}^2 = (x_t - x_b)^2 + (y_t - y_b)^2 + (z_t - z_b)^2, \quad \mathrm{width}^2 = (x_r - x_l)^2 + (y_r - y_l)^2 + (z_r - z_l)^2,$
(9)

where (xt, yt, zt), (xb, yb, zb), (xl, yl, zl), and (xr, yr, zr) are the 3D coordinates of the top-most, bottom-most, left-most, and right-most pixels, respectively, as shown in Fig. 4. If the width and height match those of an actual vehicle plate, the area is detected as the vehicle plate area.

Fig. 4. Calculating width and height of plane area.
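The size check of Eq. (9) against the Korean plate dimensions listed later in Table 1 can be sketched as follows; the 15% tolerance is a hypothetical margin, not a value given in the paper:

```python
import numpy as np

# Korean vehicle plate sizes from Table 1, given as (width, height) in millimetres.
PLATE_SIZES_MM = [(335.0, 155.0),   # general vehicles produced before 2006
                  (520.0, 110.0)]   # general vehicles produced after 2006

def matches_plate_size(top, bottom, left, right, tolerance=0.15):
    """Check Eq. (9): does the candidate area have the real size of a plate?

    top, bottom, left, right : (x_c, y_c, z_c) camera coordinates of the
    top-most, bottom-most, left-most, and right-most pixels of the candidate.
    tolerance is a hypothetical relative margin, not a value from the paper.
    """
    height = np.linalg.norm(np.subtract(top, bottom))  # Eq. (9), height
    width = np.linalg.norm(np.subtract(right, left))   # Eq. (9), width
    return any(abs(width - w) <= tolerance * w and abs(height - h) <= tolerance * h
               for w, h in PLATE_SIZES_MM)
```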

III. Simulation Results

In order to measure the performance of vehicle plate recognition through the proposed method, we use a manufactured car model with a vehicle plate. The results of vehicle plate recognition for the manufactured car model are shown in Fig. 5.

Fig. 5. Results of vehicle plate recognition for the manufactured car model. (a) RGB picture of the manufactured car model, (b) plane surface detection, and (c) recognized vehicle plate area.

In order to measure the performance of the proposed method, 20 depth pictures including vehicles with Korean vehicle plates, such as those in Fig. 6, are captured for the simulation. The depth pictures are captured by an Intel RealSense D435 whose resolution is 320×240, and the focal length f used in Equation (1) is 423.5. The width and height specifications of Korean vehicle plates are shown in Table 1.

Fig. 6. Example pictures of vehicles used in simulations. (a) produced before 2006 and (b) produced after 2006.
Table 1. Specification of Korean vehicle plate.
Vehicle plate type Width Height
General vehicles produced before 2006 335mm 155mm
General vehicles produced after 2006 520mm 110mm

Table 2 shows the successful recognition rates of vehicle plate recognition according to D and U, as also shown in Fig. 7. When D and U are 50 and 0.98, respectively, the recognition rates are the best. In that case, the recognition rate is higher for the vehicle plates produced after 2006.

Table 2. Simulation results of vehicle plate detection.
Plate type D U Detection accuracy(%)
Before 2006 30 0.95 85
30 0.98 92.5
50 0.95 87.5
50 0.98 95
After 2006 30 0.95 75
30 0.98 92.5
50 0.95 77.5
50 0.98 97.5
Fig. 7. Results of license plate recognition simulation.

IV. Conclusion

In this paper, we proposed a method of recognizing vehicle plates by capturing depth pictures. The existing methods of recognizing plates through color pictures had the problem that they were affected by the lighting environment, but the proposed method can recognize vehicle plates more accurately. We expect that the proposed method will make it possible to improve intelligent traffic systems by recognizing vehicles and license plates more quickly and accurately.

Acknowledgement

This research was supported by the BB21+ Project in 2019.

REFERENCES

[1].

C. N. E. Anagnostopoulos, I. E. Anagnostopoulos, V. Loumos, and E. Kayafas, “A License Plate-Recognition Algorithm for Intelligent Transportation System Applications,” IEEE Transactions on Intelligent Transportation Systems, 7(3), 377-392.

[2].

S. Chang, L. Chen, Y. Chung, and S. Chen, “Automatic License Plate Recognition,” IEEE Transactions on Intelligent Transportation Systems, 5(1), 42-53.

[3].

H. Zhang, W. Jia, X. He, and Q. Wu, “Learning Based License Plate Detection Using Global and Local Features,” Proceedings of the 18th International Conference on Pattern Recognition, 1102-1105.

[4].

S. Du, M. Ibrahim, M. Shehata, and W. Badawy, “Automatic License Plate Recognition (ALPR): A State-of-the-art Review,” IEEE Transactions on Circuits and Systems for Video Technology, 23(2), 311-325.

[5].

D. S. Lee and S. K. Kwon, “Improvement of Depth Video Coding by Plane Modeling”, Journal of the Korea Industrial Information Society, 21(5), 11-17.

[6].

D. S. Lee, and S. K. Kwon, “Correction of Perspective Distortion Image Using Depth Information”, Journal of Multimedia, 18(2), 106-112.

[7].

D. M. Kim, H. J. Wi, J. H. Kim, and H. C. Shin, “Drowsy Driving Detection Using Facial Recognition System”, Proceeding of the Conference of Korea Information Science Society, pp. 2007-2009, 2015.

Authors

Eun-seok Choi


Eun-seok Choi is currently an undergraduate student in the Department of Computer Software Engineering at Dong-eui University. His research interest is in the areas of image recognition.

Soon-kak Kwon


Soon-kak Kwon received the B.S. degree in Electronic Engineering from Kyung-pook National University, in 1990, the M.S. and Ph.D. degrees in Electrical Engineering from Korea Advanced Institute of Science and Technology (KAIST), in 1992 and 1998, respectively. From 1998 to 2000, he was a team manager at Technology Appraisal Center of Korea Technology Guarantee Fund. Since 2001, he has been a faculty member of Dong-eui University, where he is now a professor in the Department of Computer Software Engineering. From 2003 to 2004, he was a visiting professor of the Department of Electrical Engineering in the University of Texas at Arlington. From 2010 to 2011, he was an international visiting research associate in the School of Engineering and Advanced Technology in Massey University. Prof. Kwon received the awards, Leading Engineers of the World 2008 and Foremost Engineers of the World 2008, from IBC, and best papers from Korea Multimedia Society, respectively. His biographical profile has been included in the 2008~2014, 2017~2020 Editions of Marquis Who’s Who in the World and the 2009/2010 Edition of IBC Outstanding 2000 Intellectuals of the 21st Century. His research interests are in the areas of image processing, video processing, and video transmission.