
Speed Sign Recognition Using Sequential Cascade AdaBoost Classifier with Color Features

Oh-Seol Kwon 1,*

1 School of Electrical, Electronics and Control Engineering, Changwon National University, Changwon, Gyeongnam, South Korea
* Corresponding Author: Oh-Seol Kwon, 20 Changwondaehak-ro, Uichang-gu, Changwon, Gyeongnam, South Korea, +82-55-213-3669,

© Copyright 2019 Korea Multimedia Society. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Dec 06, 2019; Revised: Dec 21, 2019; Accepted: Dec 21, 2019

Published Online: Dec 31, 2019


For future autonomous cars, it is necessary to recognize various surrounding environments such as lanes, traffic lights, and vehicles. This paper presents a method of speed sign recognition from a single image in automatic driving assistance systems. The detection step of the proposed method emphasizes the color attributes in a modified YUV color space, because the speed sign area is characterized by its color. The proposed method is further improved by extracting the digits from the highlighted circle region. A sequential cascade AdaBoost classifier is then used in the recognition step for real-time processing. Experimental results show that the performance of the proposed algorithm is superior to that of conventional algorithms for various speed signs and real-world conditions.

Keywords: Speed sign recognition; Color features; Sequential cascade Adaboost classifier


The recent advent of autonomous vehicles has sparked active research in many related areas, including Advanced Driver Assistance Systems (ADASs), a core technology of autonomous vehicles [1,2]. ADAS applications include lane detection, pedestrian detection, vehicle detection, traffic light recognition, and speed sign recognition, as shown in Fig. 1. Importantly, these functions provide information that improves safety.

Fig. 1. Example of autonomous vehicle with speed sign recognition system.

As an ADAS technique, speed sign recognition tracks information in the surrounding environment so that an autonomous vehicle can maintain the proper speed. Clearly, this requires the consistent recognition of speed signs, which generally consist of numbers inside a thick red circle. However, in real road images, speed signs can sometimes fail to exhibit their color characteristics or be distorted due to weather, lighting, and damage, thereby decreasing the accuracy of real-time speed sign recognition systems.

Various techniques have already been explored to solve such problems, including support vector machines (SVMs) [3], the modified census transform (MCT), convolutional neural networks (CNNs) [4], template matching [5], decision trees [6], and random forests [7].

Accurate performance was recently reported by Mathias [8] using HOG features and a multi-layer SVM based on the distribution of lightness, yet the learning and testing require too much time for real-time processing. A recognition method using cross correlation was also proposed by Barnes [9], yet its real-time performance is affected by distortion in side-view images. Froba [10] proposed an MCT detection method that expresses the logical magnitude relationship of pixels in binary form. However, while local binary patterns are robust to illumination changes, they are sensitive to noise, as they extract features from a small region of 3×3 pixels. Aoyagi and Asakura [11] also proposed a feature extraction filter using an artificial neural network over the whole image, yet this significantly increases the computational load. Moreover, deep learning improves on conventional machine learning by optimizing and applying biologically inspired models during training. Recently, a speed sign recognition algorithm [12] was developed that enables real-time processing based on improved GPU performance; it is designed as a hierarchical structure of SVM and CNN stages and implemented on a GPGPU for real-time processing.

There are also several studies on real-time embedded processing from the viewpoint of consumer electronics [13,14]. Recently, the centroid-to-contour (CtC) algorithm, which is resistant to translation, rotation, and scale changes and uses a support vector machine classifier, was proposed [15]. However, these methods have some limitations in performance owing to real-time processing constraints.

Accordingly, this paper proposes a method using color-based features and a sequential cascade AdaBoost classifier for speed sign recognition. The proposed method emphasizes the color characteristics of a speed sign circle using a modified YUV color space. The detected sign region is then recognized using an AdaBoost classifier. This paper aims to verify the detection and recognition performance of speed signs under real-time processing through experiments on various images and real-world roads.


Template matching is commonly used for speed sign recognition, as it has a high processing speed and high recognition rate. However, the recognition rate decreases when the position and angular rotation of the sign are not uniform. An AdaBoost classifier using Haar-like features is a conventional method that can rapidly detect and recognize an object. Haar-like features extract various characteristics of an image using the differences in brightness between rectangular masks.

The AdaBoost technique, whose name derives from 'adaptive boosting', combines weak and strong classifiers, which greatly increases the detection speed, as the time-consuming strong classifiers are only applied to candidates selected by the weak classifiers. AdaBoost binds weak classifiers h^(t): X → ℝ into a single ensemble by combining their responses into a sum

f_T(x) = Σ_{t=1}^{T} h^(t)(x).

The final class is obtained for binary classification as follows:

H_T(x) = sign(f_T(x)),

where the training set is defined as {(x1, y1),…,(xm, ym)}, and each yi is labeled as a positive or negative class. When the number of combined classifiers is increased, performance can be improved by weighting the data that are difficult to classify. However, while the AdaBoost and cascade techniques can accurately detect or recognize objects of interest in an image, the recognition accuracy for speed signs remains low due to ineffective extraction of color characteristics.
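A minimal sketch of the ensemble decision above, assuming the standard AdaBoost weighting alpha_t for each weak response (the decision stumps and weights below are hypothetical, for illustration only):

```python
def adaboost_predict(weak_classifiers, alphas, x):
    """Combine weak responses into the ensemble sum f_T(x),
    then return the final binary class H_T(x) = sign(f_T(x))."""
    f_T = sum(alpha * h(x) for h, alpha in zip(weak_classifiers, alphas))
    return 1 if f_T >= 0 else -1

# Hypothetical weak classifiers: decision stumps on a 1-D feature value
stumps = [lambda x: 1 if x > 0.3 else -1,
          lambda x: 1 if x > 0.5 else -1,
          lambda x: 1 if x > 0.7 else -1]
weights = [0.6, 0.3, 0.1]  # illustrative alpha_t values

print(adaboost_predict(stumps, weights, 0.6))  # stumps vote +1, +1, -1
```

Samples misclassified by the current ensemble would receive larger weights when training the next weak classifier, which is the "weighting of data that are difficult to classify" described above.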


Traffic signs are specifically designed to stand out from the background in terms of their shape and color. As such, these shape and color features are invariably used to detect or recognize traffic signs. However, the feature extraction process is also significantly affected by constantly varying outdoor conditions. In particular, the weather can produce drastic color changes. Therefore, this is the first study to emphasize color features for effective speed sign detection in a single image. The RGB values are modified to highlight the red circle in the detection step as follows:

[R′, G′, B′]ᵀ = [R, G, B]ᵀ + [α, β, γ]ᵀ,

where α > β > 0 and γ < 0. The modified RGB values of the input image are then converted to YUV color space as follows:

⎡Y⎤   ⎡ 0.30  0.59  0.11⎤ ⎡R′⎤
⎢U⎥ = ⎢−0.15 −0.29  0.44⎥ ⎢G′⎥
⎣V⎦   ⎣ 0.62 −0.52 −0.10⎦ ⎣B′⎦

In YUV color space, Y represents the image brightness, and U and V represent the chrominance, as shown in Fig. 2. In addition, the modified YUV color space is used to highlight the color-based features of a speed sign.

Fig. 2. Results after applying different weights for each channel: (a) input image, (b) Y channel, (c) U channel, and (d) V channel.
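The channel-offset and color-space conversion described above can be sketched as follows. The exact values of α, β, and γ are not given in the paper, so the constants below are illustrative assumptions only; the matrix entries follow the magnitudes stated in the conversion equation with the sign convention of the standard RGB-to-YUV transform:

```python
import numpy as np

# Illustrative channel offsets satisfying alpha > beta > 0 > gamma
ALPHA, BETA, GAMMA = 30.0, 10.0, -20.0

# RGB -> YUV conversion matrix (standard-YUV sign convention assumed)
RGB2YUV = np.array([[ 0.30,  0.59,  0.11],
                    [-0.15, -0.29,  0.44],
                    [ 0.62, -0.52, -0.10]])

def modified_yuv(rgb_image):
    """Add per-channel offsets to emphasize the red circle, then
    convert to YUV. rgb_image: H x W x 3 float array in [0, 255]."""
    shifted = rgb_image + np.array([ALPHA, BETA, GAMMA])
    shifted = np.clip(shifted, 0.0, 255.0)
    # Apply the 3x3 matrix to every pixel vector
    return shifted @ RGB2YUV.T
```

After this step, the U channel can be separated from the converted image for circle detection, as described below.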

Thereafter, the image is improved using histogram equalization as a normalization process for illumination compensation. Separating the U channel in the converted color space allows more efficient speed sign detection due to the highlighted red circle. The recognition accuracy is also enhanced by removing the area around the speed limit number expressed within the detected circle. As shown in Fig. 3, the proposed algorithm includes an Adaboost classifier with a sequential cascade based on a color-based Haar-like feature.

Fig. 3. Flowchart of proposed speed sign recognition algorithm.
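The illumination-compensation step can be illustrated with a minimal histogram equalization over a single 8-bit channel (e.g. the separated U channel); this is a generic sketch of the normalization, not the paper's exact implementation:

```python
import numpy as np

def equalize_histogram(channel):
    """Histogram equalization as illumination normalization for an
    8-bit single-channel image (e.g. the weighted U channel)."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    denom = cdf[-1] - cdf_min
    if denom == 0:
        return channel  # flat image: nothing to equalize
    # Map each gray level through the normalized cumulative histogram
    lut = np.round((cdf - cdf_min) / denom * 255).astype(np.uint8)
    return lut[channel]
```

Equalizing the channel spreads its gray levels over the full range, which stabilizes the subsequent Haar-feature responses under varying illumination.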

The proposed method assigns different weights to the RGB channels of an input image. YUV color space is then used for easy detection of the red circle of a speed sign. As the U channel saturates in red color regions, this highlights a speed-sign circle in an image. Next, to differentiate speed signs from other traffic signs, the digit region is extracted from the detected circle region. Thus, the sequential AdaBoost classifier is only applied to the extracted digit region.

Fig. 4 shows the result of enlarging the speed sign region using the YUV channels. While the Y and V channels do not reveal the shape of the speed sign, the U channel clearly shows the circular characteristic of the speed sign. Therefore, the proposed method uses a weighted U channel for detecting sign regions. As speed signs generally consist of two or three digits within a red circle, the boundary circle remains the same irrespective of the digits. Therefore, the proposed method uses a selective ROI of the digit area within a speed sign, as shown in Fig. 5.

Fig. 4. Zooming results for speed sign region in Fig. 4: (a) Y channel, (b) U channel, and (c) V channel.
Fig. 5. Proposed selective ROI detection process: (a) input, (b) reassigned ROI from detected region, and (c) extracted speed digits.

The digit area is detected within the red circle using four xy coordinates, and resized to uniform dimensions, as shown in Fig. 5(b). The extracted digit information is shown in Fig. 5(c). The final recognition process only focuses on the extracted digit area and is modified using trained sequential classifiers based on the cascade Adaboost classifier developed by Viola and Jones [16]. Importantly, this allows real-time processing (i.e. millions of classifications per second) due to the sequential design of the cascade classifier, efficiently evaluated Haar-like features, the measurement selection and combination of stage classifiers by the Adaboost algorithm, and bootstrapping technique used to explore a large training set. A cascade is an ordered set of classifiers of increasing complexity, where each classifier rejects a fraction of negative samples while retaining the positive ones. Thus, if an image sub-window is not rejected by the first stage classifier, it is passed to the second stage and so on. As a result, this cascade structure allows fast decisions via the quick rejection of dominating and simple background features, and by concentrating the computational power on more difficult and unusual features. Fig. 6 shows the speed information recognition process from a candidate sign region following the detection of a speed sign, where the recognition performance is improved using a sequential cascade Adaboost classifier.

Fig. 6. Sequential cascade Adaboost classifier.
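The stage-wise rejection logic of the cascade described above can be sketched as follows; the stage classifiers here are hypothetical stand-ins for trained Haar-feature stages of increasing complexity:

```python
def cascade_classify(stages, window):
    """Evaluate an image sub-window through an ordered set of stage
    classifiers; reject as soon as any stage votes negative."""
    for stage in stages:
        if not stage(window):
            return False   # early rejection of a background window
    return True            # survived every stage: candidate digit region

# Hypothetical stages of increasing cost and selectivity,
# operating on a toy feature vector instead of real Haar responses
stages = [lambda w: sum(w) > 10,   # cheap coarse filter
          lambda w: max(w) > 5,    # stricter intermediate check
          lambda w: w[0] == 1]     # most expensive, most selective

print(cascade_classify(stages, [1, 6, 7]))  # passes all three stages
```

Because most background windows are rejected by the first cheap stage, the expensive later stages run on only a small fraction of sub-windows, which is what enables the real-time throughput described above.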


To verify the performance of the proposed method, experiments were conducted using images from various environments, including cities, suburbs, and highways. The methods were implemented on a desktop with a 3.2 GHz CPU and 16 GB RAM. Table 1 shows the number of frames used for the experiments and learning, selected from over 144,000 frames. In the experiments, the image size was set to 1280×672 pixels.

Table 1. Number of data used for learning and experiments.

Speed limit | Training signs | Test signs
     30     |       127      |    291
     40     |       167      |    180
     50     |        83      |     32
     60     |       206      |    223
     70     |       158      |    117
     80     |       217      |    625
     90     |       382      |    569
    100     |      1538      |   1131

Table 2 compares the recognition rate and processing speed of the proposed method, the HOG-based SVM [8], the CtC-based SVM [15], and the improved CLAHE-based AdaBoost method [17], with a total of 3168 speed sign images. The recognition rates for the HOG-based SVM, CLAHE-based AdaBoost, and CtC-based SVM methods were 89.5%, 91.2%, and 86.7%, respectively; the proposed method yielded the highest recognition rate of 95.5%. While the CtC-based SVM method showed the best average processing speed of 35 ms, the proposed method also exhibited an acceptable real-time average processing speed of 46 ms. Fig. 7 shows the real-time speed sign recognition results, including the processing time for each frame. Fig. 8 shows the speed sign recognition results for the proposed method in various road environments.

Table 2. Comparison of conventional and proposed methods.

                  | SVM with HOG | AdaBoost with Haar | SVM with CtC | Proposed method
Number of signs   |     3168     |        3168        |     3168     |      3168
Detection         |     2968     |        3059        |     2809     |      3129
Recognition       |     2836     |        2890        |     2747     |      3027
False recognition |      332     |         188        |      421     |       141
Speed             |     48 ms    |        45 ms       |     35 ms    |      46 ms
Recognition rate  |     89.5%    |        91.2%       |     86.7%    |      95.5%
Fig. 7. Examples of speed sign recognition for real-time processing.
Fig. 8. Examples of speed sign recognition using proposed method.

As the proposed algorithm depends on color-based features, image saturation, especially in the red channel, is very important. Weather and lighting changes always affect the color attributes in real-world road images. In addition, recognition performance began to deteriorate when the sign was tilted 3° to the left or right, and continued to decrease as the tilt increased. However, this problem can be mitigated by adding positive training data or by applying a post-processing step to correct the slope of the detected region.

The speed sign recognition rate was also analyzed when the speed sign was tilted rather than facing the camera directly. As shown in Fig. 9, minimal light in the input image, such as on a dark night, can also affect the recognition step. Therefore, experiments were conducted to test the color-feature detection performance of the modified U channel under saturation changes.

Fig. 9. Examples of detection and recognition of speed signs with tilt changes.

The proposed method also showed robust performance in complicated environments, including downtown images and frames with shadows from an overpass that produced color changes. However, as shown in Fig. 10, some false recognitions occurred when two or more speed signs were adjacent horizontally or vertically, and when another sign containing a red circle was recognized as a speed sign.

Fig. 10. Examples of false recognition.


This paper presented a method of speed sign recognition for real-world scenes. The proposed method extracts ROIs based on the color features and shape of the sign, using the color-based features of a modified YUV space. The digit information is then extracted from the detected ROI, and speed recognition is finally performed using a sequential cascade AdaBoost classifier. The performance of the proposed method was confirmed in various real-world environments, including downtown areas, suburbs, and highways, with a real-time processing speed of 46 ms per frame. In the future, research is needed to improve performance in environments with insufficient lighting, such as tunnels.


This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (No. 2019R1F1A1058489).



S. Applin, “Autonomous vehicle ethics: stock or custom?” IEEE Consumer Electronics Magazine, vol. 6, no. 3, pp. 108-110, Jul. 2017.


P. Koopman and M. Wagner, “Autonomous vehicle safety: an interdisciplinary challenge,” IEEE Intell. Transp. Syst. Magazine, vol. 9, no. 1, pp. 90-96, Jan. 2017.


S. Bascon, S. Arroyo, P. Jimenez, H. Moreno, and F. Ferreras, “Road-sign detection and recognition based on support vector machines,” IEEE Tran. Intell. Transp. Syst., vol. 8, no. 2, pp. 264-278, Jun. 2007.


D. Ciresan, U. Meier, J. Masci, and J. Schmidhuber, “A committee of neural networks for traffic sign classification,” in Proc. of the IEEE International Joint Conf. on Neural Networks, pp. 1918-1921, July, 2011.


F. Ren, J. Huang, R. Jiang, and R. Klette, “General traffic sign recognition by feature matching,” in International Conference on Image and Vision Computing, pp. 409-414, Nov. 2009.


F. Zaklouta and B. Stanciulescu, “Real-time traffic-sign recognition using tree classifiers,” IEEE Trans. Intell. Transp. Syst., vol. 13, no. 4, pp. 1507-1514, Nov. 2012.


G. Won, H. Cheol, K. Chul, and N. Yeal, “Real-time speed-limit sign detection and recognition using spatial pyramid feature and boosted random forest,” in International Conference on Image Analysis and Recognition, pp. 437-445, Jul. 2015.


M. Mathias, R. Timofte, R. Benenson, and L. Gool, “Traffic sign recognition—How far are we from the solution?” in Proc. of the IEEE International Joint Conference on Neural Networks, pp. 1-8, Aug. 2013.


N. Barnes, A. Zelinsky, and L. Fletcher, “Real-time speed sign detection using the radial symmetry detector,” IEEE Trans. Intelligent Transportation Systems, pp. 322-332, Jun. 2008.


B. Froba and A. Ernst, “Face detection with the modified census transform,” in Proc. of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 91-96, May 2004.


Y. Aoyagi and T. Asakura, “A study on traffic sign recognition in scene image using genetic algorithms and neural networks,” in IEEE Conference IECON, pp. 1838-1843, Aug. 1996.


K. Lim, Y. Hong, Y. Choi, and H. Byun, “Real-time traffic sign recognition based on a general purpose GPU and deep-learning,” PLoS ONE, vol. 12, no. 3, 2017.


S. Lee, E. Lee, Y. Hwang, and S. Jang, “Low-complexity hardware architecture of traffic sign recognition with IHSL color space for advanced driver assistance systems,” in International Conference on Consumer Electronics, Asia, pp. 1-2, Oct. 2016.


K. Chang and P. Liu, “Design of real-time speed limit sign recognition and over-speed warning system on mobile device,” in International Conference on Consumer Electronics, Taiwan, pp. 43-44, Jun. 2015.


C. Tsai, H. Liao, and K. Hsu, “Real-time embedded implementation of robust speed-limit sign recognition using a novel centroid-to-contour description method,” IET Computer Vision, vol. 11, no. 6, pp. 407-414, Sep. 2017.


P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in IEEE Conference on Computer Vision and Pattern Recognition, pp. 511-518, Dec. 2001.


S. Kang and D. Han, “Robust vehicle detection in rainy situation with Adaboost using CLAHE,” The Journal of Korean Institute of Communications and Information Sciences, vol. 41, no. 12, pp. 1978-1984, Dec. 2016.


Oh-Seol Kwon


Oh-Seol Kwon (M’02) received his B.S. and M.S. degrees in Electrical Engineering & Computer Science from Kyungpook National University, Republic of Korea, in 2002 and 2004, respectively, and his Ph.D. degree in Electronics from the same university in 2008. From 2008 to 2010, he was a Postdoctoral Research Fellow at New York University, NY, USA. From 2010 to 2011, he was a Senior Researcher with the Visual Display Division, Samsung Electronics, Suwon, South Korea. He joined Changwon National University in 2011, where he is currently an Associate Professor. His research interests include color signal processing, imaging systems, computer vision, and the human visual system.