Section A

New Vehicle Verification Scheme for Blind Spot Area Based on Imaging Sensor System

Gwang-Soo Hong1, Jong-Hyeok Lee1, Young-Woon Lee1, Byung-Gyu Kim*,2
1Dept. of Computer Engineering, SunMoon University, Asan, Republic of Korea, E-mail: {gs.Hong@vicl.sookmyung.ac.kr, jh.Lee@vicl.sookmyung.ac.kr, yw.Lee@vicl.sookmyung.ac.kr}
2Dept. of IT Engineering, Sookmyung Women’s University, Seoul, Korea, E-mail: bg.kim@sm.ac.kr
*Corresponding Author: Byung-Gyu Kim, Dept. of IT Engineering, Sookmyung Women’s University, Seoul, Republic of Korea, Tel: +82-2-2077-7293, E-mail: bg.kim@sm.ac.kr.

© Copyright 2016 Korea Multimedia Society. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Mar 29, 2017 ; Revised: Apr 04, 2017 ; Accepted: Apr 09, 2017

Published Online: Mar 31, 2017

Abstract

Ubiquitous computing is a novel paradigm that is rapidly gaining ground in wireless communications and telecommunications for realizing a smart world. With the rapid development of sensor technology, smart sensor systems have become increasingly common in automobiles. In this study, a new real-time vehicle detection mechanism for the blind spot area is proposed based on imaging sensors. Determining the positions of other vehicles on the road is important for driver assistance systems (DASs) to increase driving safety. Accordingly, blind spot detection of vehicles is addressed using a vehicle detection algorithm for blind spots. The proposed vehicle verification utilizes the height and angle of a rear-looking vehicle-mounted camera. Candidate vehicle information is extracted using adaptive shadow detection based on the brightness values of the vehicle area in an image. The vehicle is verified using a training set with Haar-like features of candidate vehicles. Using these processes, moving vehicles can be detected in blind spots. Across various experiments, the detection ratio of true vehicles in blind spots was 91.1%.

Keywords: imaging sensor; smart car; blind spot detection (BSD); shadow detection; vehicle detection algorithm

I. INTRODUCTION

Ubiquitous computing (UC) provides useful context information for integrating capabilities at various scales. Such environments may consist of devices ranging from personal devices to supercomputers. Various kinds of sensors and communication networks have also been widely employed in smart car systems. Moreover, new IT technologies such as human sensing (smartphone sensing), car-equipped vision/voice sensors, and visible light communication (VLC) are being merged to assist drivers safely and to gather extensive information from the driving environment.

As one of the most promising applications of computer vision, vision-based vehicle detection for driver assistance has received considerable attention over many years [1]. Several reasons explain why this field is blooming: first, the startling losses, both in human lives and in finance, caused by vehicle accidents; second, the availability of feasible technologies accumulated over the last 30 years of computer vision research [2], [3]; and lastly, the exponential growth of embedded computing performance, which has paved the way for computation-intensive video-processing mechanisms even on a low-end PC [4].

Traffic accident rates in France, Germany, and the UK (among OECD members) have decreased slowly since the 1980s. In Korea, however, the accident rate increased substantially until the early 1990s and then declined after 1996. The recent rate has been reported as 12.1 accidents per 100,000 vehicles on the road, and approximately 9% of accidents originate from the blind spot problem, where a driver cannot observe an approaching vehicle [5]. This is a very high accident rate.

The blind spot refers to an area of the road that a driver cannot see using either the rear-view or side-view mirrors. Common blind spot areas are behind the vehicle on both sides, and a vehicle in a blind spot cannot be seen by the driver.

Many automobile companies have launched research programs to cope with the blind spot problem. Two major technologies exist: short-range radar and vision-based image processing. An image-based vehicle extraction system provides processed information that drivers can use intuitively [6]. A key challenge in image-based vehicle detection is coping with different driving environments, so candidate vehicles must be extracted in a way that is robust to variations in the driving situation. To define vehicle features, symmetry, color, shadow, geometrical characteristics, texture, and headlights are employed to extract candidate vehicles [1]. Learning methods are then used to decide whether detected candidates are true vehicles.

In this study, the feature extraction algorithm employs Haar wavelets [7], [8]. We employ the support vector machine (SVM), which is widely utilized as a verification method [7], [8], [9]. An efficient method based on lane information has been suggested by Sun Zehang et al. [10]. Wu, Yao-Jan, et al. (2007) [18] based blind spot vehicle detection on lane information and vehicle shadow area detection. Lin Che-Chung et al. (2006) [14] and Balcones, D. et al. (2009) [11] used vehicle-symmetry-based edge feature extraction. Alonso Daniel, Luis Salgado, et al. (2007) [12] used edge extraction to define areas of interest, along with an image-division structure based on a split-and-merge process. Song Gwang Yul et al. (2008) [13] proposed a method based on boundary extraction for the right and left sides of a vehicle and extraction of the contact region between the tires and the ground.

In this paper, we propose a new vehicle verification algorithm for the blind spot area using an imaging sensor. First, a region of interest (ROI) is set up to obtain candidate areas. Vehicles are trained using Haar-like features. Then, the shadow region is extracted to verify whether candidate areas correspond to approaching vehicles. This paper is organized as follows: Section 2 presents the overall system and related works. The proposed vehicle detection method is described in Section 3. Section 4 discusses simulation results, and concluding remarks are given in Section 5.

II. PREVIOUS RESEARCH

In this section, we define the blind spot area and review previous research according to the stage of processing.

2.1. Definition of Blind Spot Area

Figure 1 illustrates the problem of blind spot areas. A blind spot is an area that disappears from the driver's view when side and rear vehicles are in specific locations. To extract a vehicle in a blind spot, a recognition and verification scheme is needed for approaching vehicles (from far to close in position). The following subsections explain the detailed vehicle extraction and verification procedure.

Fig. 1. An illustration of the blind spot area.
2.2. Vehicle Data Learning

Table 1 summarizes reported studies on data learning. Data learning serves to verify true vehicles after candidate vehicles are detected. As mentioned, we employ Haar wavelets [7], [8] for feature extraction. As shown in Table 1, the support vector machine (SVM) is a popular verification mechanism [9], [7], [19].

Table 1. Learning approaches for vehicle verification.
Research | Feature Extraction | Training and Classification | Main Goal
Method [9] | SIFT + PCA | SVM | Various objects
Method [7] | Haar wavelets | SVM | Person
Method [8] | Haar wavelets | Online boosting | Face
Method [19] | Local features | SVM | Various objects
2.3. Detection and Verification

Reported studies on vehicle detection and tracking are summarized in Table 2. Detection approaches fall into two categories: schemes based on lane information and schemes without it. Edge components are usually good features for identifying vehicles [10], [15]-[18]. However, although edge features are powerful, edge information may be unavailable in the detection stage because illumination conditions (e.g., weather) change while driving [1]. Therefore, a robust algorithm that adapts to various driving conditions is essential.

Table 2. Previous research on detection and tracking methods.
Research | Detection of Candidate Vehicles | Tracking of Vehicles
Method [18] | Lane information; road and vehicle shadow area segmentation | -
Method [14] | Lane information; vehicle symmetry based on edge features | Motion vector estimation
Method [10] | Lane information; edge information | Joint detection and tracking, rather than the usual detect-then-track
Method [15] | Vehicle symmetry based on edge features | -
Method [16] | Edge image detection to define the area of interest; image division using split and merge | -
Method [17] | Boundary detection for the right and left sides of the vehicle; detection of the tire-road contact area | -
Method [7] | Normalization of video brightness values; investigation of the shadow area; vertical edge detection by Haar-like features | -

III. THE PROPOSED ALGORITHM

A blind spot is an area behind a driver's normal field of vision that cannot be seen using the side-view and rear-view mirrors. To find vehicles in a blind spot, a recognition and verification scheme is applied to approaching vehicles (from far to close in position). The procedure of the proposed verification mechanism is presented in Fig. 2. A candidate vehicle is detected and then confirmed using vehicle data trained in advance. The proposed vehicle verification method is robust across various driving conditions because it improves candidate vehicle verification through advanced training. Blind spot vehicle detection is based on shadows, Haar-like features, and AdaBoost classification [20].

Fig. 2. Overall flow of the blind spot vehicle detection algorithm.
Fig. 3. Side-view mirror camera used in the detection system.

To achieve blind spot vehicle extraction under all driving conditions, a single digital camera operating in the visible spectrum was mounted on the passenger side-view mirror (Fig. 3). The camera had a 5.0-megapixel CMOS sensor and operated at 30 frames per second with a 1920×1080 pixel resolution.

3.1. Setting the Region of Interest (ROI)

Vehicle extraction in the blind spot is only needed within the region of interest. Figure 4 shows the vehicle-mounted camera geometry: Hc is the height from the road surface to the camera (mounting height), and θ is the angle of the camera from the horizontal. Given the camera's mounting height and angle, together with the height and width of the input image, the region of interest (the blind spot) is fixed. The ROI can be expressed as follows:

Fig. 4. Camera coordinate system.
$ROI = \{R_x, R_y\}$, (1)

where

$R_x = \sin\theta \times image_h \times H_c, \qquad R_y = \sin\theta \times image_w \times H_c$. (2)

The region of interest (ROI) is defined by a set of four points. Eq. (1) is the region-of-interest formula for blind spot vehicle detection: $R_x$ and $R_y$ are the point sets derived for the ROI, $image_h$ and $image_w$ are the height and width of the input image, $H_c$ is the camera height, and θ is the camera angle. The product of these parameters defines the ROI, and the corresponding part of the input image shows the blind spot area.
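As a minimal sketch, Eq. (2) can be transcribed directly. The paper does not specify units or any implicit scaling constants, so the mounting height in meters and the angle in radians are assumptions here:

```python
import math

def compute_roi_extent(image_h, image_w, cam_height, cam_angle):
    # Literal transcription of Eq. (2): scale the image height and
    # width by sin(theta) and the mounting height H_c.
    r_x = math.sin(cam_angle) * image_h * cam_height
    r_y = math.sin(cam_angle) * image_w * cam_height
    return r_x, r_y  # Eq. (1): ROI = {R_x, R_y}

# Example with an assumed 1.0 m mounting height and a 30-degree tilt:
rx, ry = compute_roi_extent(1080, 1920, 1.0, math.radians(30.0))
```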

3.2. Vehicle Data Learning

The detector classifies any object in the image detection area as vehicle or non-vehicle, so the data training system needs to characterize only one class of object. Features of the front and sides of vehicles were learned before the detection mechanism was applied.

Figure 5 displays the Haar-like features employed for initial learning. In the initial data learning stage, we utilized 700 vehicle images and 1,400 non-vehicle images with Haar-like features. Figure 6 illustrates the cascade that classifies an area as vehicle or non-vehicle based on the defined Haar-like features. Each step reduces the number of candidate vehicles; if a candidate image area survives to the final step, it is verified as a valid vehicle. Using the AdaBoost classifier, the number of iterations was experimentally optimized at 13. The integral image given by Eq. (3) is used to speed up the feature calculation:

$S_{acc}(i,j) = \sum_{x=0}^{i} \sum_{y=0}^{j} I(x,y)$. (3)

Fig. 5. Haar-like features.
Fig. 6. A cascade-type classifier.

Here, $S_{acc}(i, j)$ is the summation of pixel values over the rectangle from the image origin to (i, j), and I(x, y) is the image intensity at pixel location (x, y) [21].
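A short numpy sketch of the integral image in Eq. (3), together with the four-lookup rectangle sum that makes Haar-like features cheap to evaluate [21]:

```python
import numpy as np

def integral_image(gray):
    # S_acc(i, j) = sum of I(x, y) over all x <= i, y <= j  (Eq. 3).
    return gray.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def box_sum(s_acc, top, left, bottom, right):
    # Sum over any rectangle with at most four table lookups.
    total = s_acc[bottom, right]
    if top > 0:
        total -= s_acc[top - 1, right]
    if left > 0:
        total -= s_acc[bottom, left - 1]
    if top > 0 and left > 0:
        total += s_acc[top - 1, left - 1]
    return total
```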

Figure 7 shows the Haar-like features of a vehicle during the candidate detection process. The red area is the blind spot area set as the region of interest (ROI); within this area, vehicle candidates are estimated iteratively.

Fig. 7. Iteration of the redundant Haar features.
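In OpenCV terms, the candidate search over the ROI could look like the sketch below. The cascade file name is a placeholder for a classifier trained on the vehicle/non-vehicle image sets described above; the size bounds follow the 70×70 minimum and 200×200 maximum quoted in Section 3.3, while the scale and neighbor parameters are assumptions:

```python
import cv2

# "vehicle_cascade.xml" is a hypothetical file name for a cascade
# trained on the 700 vehicle / 1,400 non-vehicle images.
vehicle_cascade = cv2.CascadeClassifier("vehicle_cascade.xml")

def detect_candidates(gray_roi):
    # Scan the blind spot ROI at multiple scales; returns (x, y, w, h) boxes.
    return vehicle_cascade.detectMultiScale(
        gray_roi,
        scaleFactor=1.1,   # assumed step between scan scales
        minNeighbors=3,    # assumed grouping threshold
        minSize=(70, 70),
        maxSize=(200, 200),
    )
```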
3.3. Shadow Extraction

After analysis of the vehicle candidate region, the process proceeds to shadow analysis. Figure 8 shows shadow extraction for vehicle verification. The shadow area is extracted within the existing region of interest (ROI) using the range of brightness values of the shadow component.

$b_{width} = \frac{R_x}{10}, \qquad b_{height} = \frac{R_y}{10}$. (4)

Fig. 8. Detection in the area of brightness.

Eq. (4) sets the size of the surrounding area used for brightness measurement, with $b_{width}$ and $b_{height}$ defining the detection region. Eq. (5) gives the threshold ($T_1$) used to detect shadows: the total brightness I of the surrounding region, averaged over the $b_{width} \times b_{height}$ area, is compared against $T_1$,

$\frac{I}{b_{width} \times b_{height}} < T_1$, (5)

where $b_{width}$ and $b_{height}$ are the dimensions of the brightness detection area. In this work, we defined $T_1$ as the 10% point of the gray-level histogram, as shown in Fig. 9.

Fig. 9. An adaptive thresholding scheme for shadow region extraction.
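A minimal sketch of the adaptive threshold of Eq. (5) and Fig. 9, assuming the 10% point is taken as a simple percentile of the ROI's gray-level histogram:

```python
import numpy as np

def shadow_threshold(gray_roi, percentile=10.0):
    # T1: the 10% point of the gray-level histogram (Fig. 9).
    return np.percentile(gray_roi, percentile)

def is_shadow(gray_block, t1):
    # Eq. (5): the block's total brightness I, averaged over its
    # b_width x b_height area, must fall below T1.
    return gray_block.mean() < t1
```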

The detection area of a vehicle varies with the distance between the vehicle and the camera: a distant vehicle yields a small detection area, whereas a nearby vehicle yields a large one. The detection area ranges from a 70×70 pixel minimum to a 200×200 pixel maximum. Detection is achieved by setting the threshold for the shadow area based on the detected brightness value and the size of the region.

To verify candidate regions, we utilize a histogram of the luminance values. To that end, we convert the RGB color space to the YCbCr color space using Eq. (6):

$\begin{bmatrix} Y \\ Cb \\ Cr \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.16874 & -0.3313 & 0.500 \\ 0.500 & -0.4187 & -0.0813 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} + \begin{bmatrix} 0 \\ 128 \\ 128 \end{bmatrix}$. (6)
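In practice, the conversion and the luminance histogram can be obtained directly; OpenCV's YCrCb conversion applies the same coefficients as Eq. (6), only with the Cb and Cr channels swapped:

```python
import cv2

def luminance_histogram(bgr_image):
    # Convert BGR -> YCrCb (same matrix as Eq. 6, Cb/Cr order swapped)
    # and build the 256-bin histogram of the Y (luminance) channel.
    y = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)[:, :, 0]
    hist = cv2.calcHist([y], [0], None, [256], [0, 256])
    return y, hist
```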

Candidate regions composed of a vehicle and a non-vehicle are illustrated in Fig. 10(a). A significant difference can be observed in the shadow-area components between the vehicle and the non-vehicle. The final result for a candidate vehicle can be disturbed by noise, as shown in Fig. 10(a). Figures 10(b) and (c) show the detected regions for a real vehicle and its shadow area within the candidate region, while Figs. 10(d) and (e) show noise areas on the road. Because the shadow properties of noise differ completely from those of a vehicle, shadow detection is useful for verifying true vehicles, as shown in Figs. 10(d) and (e).

$B'(x,y) = \sum_{i=0}^{4} \sum_{j=0}^{4} R(x+i, y+j)$, (7)

$B(x,y) = \begin{cases} 1, & \text{if } B'(x,y) > T_2, \\ 0, & \text{otherwise.} \end{cases}$ (8)

Fig. 10. Noise removal based on shadow extraction.

The vehicle detection area is defined as the ROI, and Eq. (7) is used to decide whether a block in the ROI belongs to a vehicle. If the summed response of a 5×5 block exceeds the defined threshold ($T_2$), Eq. (8) returns a value of 1; otherwise, it returns 0. Using these blocks, the position of the vehicle area is determined.

Since the candidate blocks of a vehicle appear at the bottom of the vehicle image, the position of a candidate block depends on the distance between the camera and the candidate vehicle. The nearest horizontally connected run of candidate blocks lies in the vehicle area. The vehicle-area block is first found starting from the bottom of the image; then the blocks connected to its right and left are traced. No more than one block may be connected vertically.

Figure 11 shows the process of tracing adjacent blocks. If a block is connected with more than nine other blocks, the region is designated as a candidate vehicle. The candidate vehicle height is determined from the ratio of height to horizontal length. Vertical and horizontal edge components are used to confirm the presence of a vehicle in the ROI: once the vehicle region is determined, the block-unit vehicle area is converted into a pixel-unit vehicle area and refined precisely using the vertical and horizontal edge components. In the case of a lane change, vehicle identification in the blind spot varies depending on the vehicle locations.
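The block decision of Eqs. (7)-(8) and the horizontal trace can be sketched as follows; the shadow response R is assumed here to be a 0/1 mask from the previous step, and $T_2$ is the block-sum threshold:

```python
import numpy as np

def binarize_blocks(shadow_mask, t2, block=5):
    # Eq. (7): sum each non-overlapping 5x5 block of the shadow
    # response R; Eq. (8): mark the block 1 when the sum exceeds T2.
    h, w = shadow_mask.shape
    bh, bw = h // block, w // block
    tiles = shadow_mask[: bh * block, : bw * block]
    sums = tiles.reshape(bh, block, bw, block).sum(axis=(1, 3))
    return (sums > t2).astype(np.uint8)

def horizontal_run_length(block_map, row, col):
    # Trace the blocks connected to the left and right of (row, col),
    # starting from the bottom-most vehicle block; a run of more than
    # nine blocks marks a candidate vehicle.
    run, c = 1, col - 1
    while c >= 0 and block_map[row, c]:
        run, c = run + 1, c - 1
    c = col + 1
    while c < block_map.shape[1] and block_map[row, c]:
        run, c = run + 1, c + 1
    return run
```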

Fig. 11. Shadow extraction process.

A driver must judge whether or not a situation is normal. Differences in both the color components and the region center points are used to confirm whether the vehicle in the previous frame of the blind spot and the vehicle in the current frame are the same vehicle:

$D_{r_{ab}} = \sqrt{(M_{h_a} - M_{h_b})^2 + (M_{s_a} - M_{s_b})^2 + (M_{v_a} - M_{v_b})^2}$. (9)

The distance ($D_{c_{ab}}$) between the center points at the two positions is also computed, as in the following equation:

$D_{c_{ab}} = \sqrt{(D_{c_a} - D_{c_b})^2}$. (10)

Using Eqs. (9) and (10), the candidate areas can be estimated as the weighted sum:

$D_{T_{ab}} = \omega \times D_{r_{ab}} + (1 - \omega) \times D_{c_{ab}}$, (11)

where ω is a weighting factor for considering the contribution of each term.

Eq. (9) is the formula for recognizing and matching a vehicle in the blind spot: $D_{r_{ab}}$ is the difference between the average values of the HSV components of the two regions, and $D_{c_{ab}}$ is the distance between their center points. The area with the smallest $D_{T_{ab}}$ is determined to represent the same vehicle in different locations, by comparing the front region of the candidate vehicle in the preceding frame with the same region in a later frame.
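A sketch of the frame-to-frame matching of Eqs. (9)-(11); the weight ω = 0.5 is an assumption, since the paper does not report the value used:

```python
import math

def color_distance(hsv_mean_a, hsv_mean_b):
    # Eq. (9): distance between the mean H, S, V components of two regions.
    return math.sqrt(sum((a - b) ** 2
                         for a, b in zip(hsv_mean_a, hsv_mean_b)))

def total_distance(hsv_a, hsv_b, center_a, center_b, w=0.5):
    # Eq. (10): distance between the region center points.
    d_c = math.dist(center_a, center_b)
    # Eq. (11): weighted sum; the smallest D_T across candidates
    # identifies the same vehicle in consecutive frames.
    return w * color_distance(hsv_a, hsv_b) + (1 - w) * d_c
```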

IV. SIMULATION RESULTS

To evaluate performance, various sequences captured by a backward-looking camera mounted on a side mirror were used, representing different situations including a highway, an urban road, and a rural road at different times of day. Figure 12 shows the tested input sequences. Figures 12(b), (h), and (d) show highway images; Figs. 12(a), (g), and (f) show dark driving environments; Fig. 12(c) shows normal city driving; and Fig. 12(e) shows a rainy driving environment. A test platform with a 3.4 GHz CPU and 8 GB of RAM was employed for the experiments. The training image resolution was 128×128, using 700 vehicle images and 1,400 non-vehicle images.

Fig. 12. Test video sequences in different environments.
Table 3. Recognition results for each test video.
Test Video | Frames | TP | FP | Recognition Rate (%)
(a) | 500 | 456 | 12 | 91.2
(b) | 500 | 487 | 58 | 97.4
(c) | 500 | 481 | 20 | 96.2
(d) | 500 | 491 | 11 | 98.2
(e) | 500 | 417 | 38 | 83.4
(f) | 500 | 401 | 42 | 80.2
(g) | 500 | 451 | 25 | 90.2
(h) | 500 | 479 | 13 | 95.8
Average | 500 | 452.8 | 27.37 | 91.5

Table 3 displays the recognition results for each test sequence. True positives (TP) denote correct detections of a true vehicle, and false positives (FP) denote incorrect extractions of a non-vehicle. Overall, we achieved a 91.5% recognition ratio of true vehicles, indicating that the suggested scheme detects vehicles in blind spot areas well. Under sunny driving conditions, a recognition rate as high as 98.2% was obtained. In rainy or dark driving environments, however, the recognition rate dropped by nearly 10 percentage points.

Figure 13 shows the false positives and true positives obtained with the proposed algorithm. For most sequences, we achieved very small false positive counts; for sequences (b), (e), and (f), the false positive counts were somewhat larger. Although these cases are caused by low illumination (cloudy days and night) and strong sunlight, very few false positives and very high true positive rates were observed overall. This suggests that the proposed scheme is reliable for detecting vehicles located in the blind spot area.

Fig. 13. Results of false positive/true positive detections.

Table 4 shows the error rates and the blind spot vehicle recognition rates. The error rate was 3.2% with a 91.1% true positive (TP) rate over all vehicles. The processing speed was 28 frames per second on 640×480 (VGA) video sequences, so the proposed algorithm operates in real time. In terms of error rate, a false positive rate of 3.25% indicates a very small error. From these results, we conclude that the proposed scheme is promising for alerting drivers to vehicles approaching in the blind spot area.

Table 4. Results for TP, FP, FN, and TN rates.
Detected \ Actual | Car | Background
Car | TP: 91.1% | FP: 3.4%
Background | FN: 6.4% | TN: 88.4%

V. CONCLUSION

An efficient algorithm for detecting vehicles located in a blind spot area has been proposed for a smart imaging sensor-based system. The proposed algorithm uses information obtained from a rear-looking vehicle-mounted camera. We developed a real-time scheme to verify vehicles in the blind area based on adaptive shadow detection techniques. The proposed algorithm adaptively copes with changes in illumination in the driving environment and performs well under varied driving conditions. It can be applied to reduce the number of traffic accidents caused by blind spots, and the proposed technique can be extended to smart sensor networks in vehicular systems.

REFERENCES

[1] Z. Sun, G. Bebis, and R. Miller, "On-road vehicle detection: A review," IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 5, pp. 694-711, May 2006.
[2] K. Goswami and B. G. Kim, "A novel mesh-based moving object detection technique in video sequence," Journal of Convergence, vol. 4, no. 1, pp. 20-24, 2013.
[3] C. Shahabi, et al., "Janus - multi source event detection and collection system for effective surveillance of criminal activity," Journal of Information Processing Systems, vol. 10, no. 1, pp. 1-22, 2014.
[4] M. Mohammad and O. Murad, "Artificial neuro fuzzy logic system for detecting human emotions," Human-centric Computing and Information Sciences, vol. 3, no. 1, pp. 1-13, 2013.
[6] M. M. Trivedi and S. Cheng, "Holistic sensing and active displays for intelligent driver support systems," Computer, vol. 40, no. 5, pp. 60-68, May 2007.
[7] M. Enzweiler and D. M. Gavrila, "A mixed generative-discriminative framework for pedestrian classification," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., pp. 1-8, 2008.
[8] P. M. Roth and H. Bischof, "Active sampling via tracking," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., pp. 1-8, June 2008.
[9] A. Kapoor, et al., "Active learning with Gaussian processes for object categorization," in Proc. IEEE 11th Int. Conf. Computer Vision (ICCV), pp. 1-8, October 2007.
[10] Z. Sun, G. Bebis, and R. Miller, "Monocular precrash vehicle detection: Features and classifiers," IEEE Transactions on Image Processing, vol. 15, no. 7, pp. 2019-2034, 2006.
[11] D. Balcones, et al., "Real-time vision-based vehicle detection for rear-end collision mitigation systems," in Computer Aided Systems Theory - EUROCAST 2009, Springer Berlin Heidelberg, pp. 320-325, 2009.
[12] D. Alonso, L. Salgado, and M. Nieto, "Robust vehicle detection through multidimensional classification for on board video based systems," in Proc. IEEE Int. Conf. Image Processing, vol. 4, pp. 321-324, September 2007.
[13] G. Song, K. Y. Lee, and J. W. Lee, "Vehicle detection by edge-based candidate generation and appearance-based classification," in IEEE Intelligent Vehicles Symposium, pp. 428-433, June 2008.
[14] C. C. Lin, et al., "Development of a multimedia-based vehicle lane departure warning, forward collision warning and event video recorder systems," in Ninth IEEE International Symposium on Multimedia Workshops, pp. 122-129, December 2007.
[15] D. Balcones, et al., "Real-time vision-based vehicle detection for rear-end collision mitigation systems," in Computer Aided Systems Theory - EUROCAST 2009, Springer Berlin Heidelberg, pp. 320-325, 2009.
[16] D. Alonso, L. Salgado, and M. Nieto, "Robust vehicle detection through multidimensional classification for on board video based systems," in Proc. IEEE Int. Conf. Image Processing, vol. 4, pp. 321-324, September 2007.
[17] G. Y. Song, K. Y. Lee, and J. W. Lee, "Vehicle detection by edge-based candidate generation and appearance-based classification," in IEEE Intelligent Vehicles Symposium, pp. 428-433, June 2008.
[18] Y.-J. Wu, et al., "Image processing techniques for lane-related information extraction and multi-vehicle detection in intelligent highway vehicles," Int. J. Automotive Technology, vol. 8, no. 4, pp. 513-520, 2007.
[19] S. Vijayanarasimhan and K. Grauman, "Multi-level active prediction of useful image annotations for recognition," in Proc. Neural Inf. Process. Syst. Conf., pp. 1705-1712, 2008.
[20] R. E. Schapire and Y. Singer, "Improved boosting using confidence rated predictions," Machine Learning, vol. 37, no. 3, pp. 297-336, 1999.
[21] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 1, pp. 511-518, 2001.