
Mobile Palmprint Segmentation Based on Improved Active Shape Model

Fumeng Gao1, Kuishun Cao1, Lu Leng1,*, Yue Yuan1
1Key Laboratory of Jiangxi Province for Image Processing and Pattern Recognition, School of Software, Nanchang Hangkong University, Nanchang, P. R. China
*Corresponding Author: Lu Leng, Nanchang Hangkong University, 696 Fenghe South Avenue, Nanchang City, 330063, P. R. China, 0086-791-86453251

© Copyright 2018 Korea Multimedia Society. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Dec 24, 2018 ; Accepted: Dec 27, 2018

Published Online: Dec 31, 2018


Skin-color information alone is not sufficient for palmprint segmentation in complex scenes, including mobile environments. The traditional active shape model (ASM) combines gray-level information with shape information, but its performance is poor in complex scenes. An improved ASM method is developed for palmprint segmentation: the Procrustes method first normalizes the shape of the palm; the shape model of the palm is then computed with principal component analysis; finally, skin-color likelihood replaces the gray-level information for target fitting. The improved ASM method reduces complexity while improving accuracy and robustness.

Keywords: Improved Active Shape Model; Mobile Palmprint Recognition; Palmprint Segmentation


1. Advantages of Palmprint Recognition on Mobile Devices

With the rapid development of telecommunication and network technologies, mobile devices have become integrated information-processing platforms with plenty of functions. Mobile devices thus play a significant role for almost everyone nowadays, so it is vital to protect users’ permissions and privacy on them [1]. Unfortunately, users are often troubled by the loss or theft of mobile devices, so accurate user authentication and authorization control are crucial functions of a mobile device. However, current authentication techniques are not as secure as claimed, especially when only one factor, i.e., possession, knowledge, or biometrics, is employed [2].

Biometrics refers to metrics related to human physiological and behavioral characteristics. Users do not need to memorize their biometric data. Furthermore, biometric data have high entropy, so they are not easy to crack [3].

Many authentication protocols have consequently been designed to guarantee security and privacy in mobile environments [4]. Some schemes combine multiple factors, namely possession, knowledge, and biometrics, to further enhance security and privacy [5].

Palmprint refers to the features on the palm region of the hand. It has several advantages over other biometric modalities for recognition on mobile devices, including plenty of discriminative features, few restrictive conditions, low cost, low leakage risk, high acceptance, and so on [6]. These advantages make palmprint a favorable authentication technique on mobile devices.

2. Modes of Palmprint Recognition

Palmprint recognition can be briefly categorized into contact and contactless modes [7].

In contact systems, users’ hands touch the equipment surface, and some systems even use pegs to fix the hand position. Both the background and the illumination are stable during contact acquisition, so it is easy to segment the hand region and locate the ROI. Although contact palmprint recognition systems can achieve high accuracy, their practical applications incur several problems, including infection risks, lack of acquisition flexibility, contamination of the equipment surface, resistance from traditional cultures in some countries and regions, etc.

Users’ hands do not need to touch any equipment surface in contactless systems, so acceptance is improved. Unfortunately, it is impractical to directly transplant contact preprocessing methods to contactless systems due to severe challenges, including uncontrolled hand poses and positions, interference from complex backgrounds, illumination variance, and so forth [8].

Since palmprint images can be captured by the built-in cameras of mobile devices, hands do not need to touch the equipment surface; accordingly, the mobile mode can be considered a special case of the contactless mode.

Palmprint recognition on mobile devices has tremendous economic and market potential; however, many technical challenges impede its development and promotion. Besides uncontrolled hand poses and positions, the interference of complex backgrounds and illumination variance is more severe. Furthermore, the computation power and storage capacity of mobile devices are not comparable to those of desktop computers, so both the computational complexity and the storage cost of palmprint recognition on mobile devices must be low.

3. Existing Technologies in Palmprint Recognition

Enrollment and authentication are the two stages in the framework of palmprint recognition. Four steps, namely acquisition, preprocessing, feature extraction, and storage (in enrollment) / matching (in authentication), are performed in sequence.

Preprocessing is critical to palmprint recognition; its aim is to accurately locate and crop the region of interest (ROI). It is relatively easy to segment the hand region from the background in contact mode because the background and illumination can be carefully and rigidly controlled. Contact palmprint preprocessing cannot be used directly in contactless mode, but it is very helpful to the design of practical preprocessing in contactless mode and mobile environments, where more restrictions and complexities have to be fully considered.

Due to imperfect segmentation in contactless mode, robust ROI localization is a knotty problem. The Gaussian skin-color model [9], 2D-Otsu [10], ellipse skin-color [11], twice-adaptive skin-color [12], traditional ASM [13], and active appearance model + nonlinear regression [14] are some frequently used palmprint preprocessing methods. Nevertheless, it is still difficult to accurately segment the hand from the quite complex backgrounds and illuminations of the mobile mode, so assistive acquisition techniques are sometimes significant for palmprint acquisition.

The root regions between three fingers were aligned with the top border of the frame in [15]. A red guide rectangle guided users to place their palms in [16]. Hand-shaped guide curves helped users place their four fingers, i.e., the index, middle, ring, and little fingers, in [17]. Hand locations are not strictly restricted in the above assistive acquisition techniques, so preprocessing is crucial to accurately localize the hand position, which requires additional computation. A vertical-peak + horizontal-peak scheme was designed in [18]. The large distance between the two peak points leads to low comfort; in other words, this scheme is not easy for users to manipulate. Besides, the accuracy of ROI localization depends on how the fingers are stretched, which is not easy to control.

Since binary texture codes need little storage capacity, have low matching complexity, and are free of training, they have become popular palmprint features for recognition [19]. The dissimilarity between two texture codes is commonly measured with the normalized Hamming distance.
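The normalized Hamming distance is straightforward to compute. The following sketch (not from the paper, using NumPy) illustrates it for two toy 8-bit codes:

```python
import numpy as np

def normalized_hamming(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of differing bits between two binary texture codes."""
    assert code_a.shape == code_b.shape
    return float(np.count_nonzero(code_a != code_b)) / code_a.size

# Toy 8-bit codes purely for illustration
a = np.array([1, 0, 1, 1, 0, 0, 1, 0])
b = np.array([1, 1, 1, 0, 0, 0, 1, 1])
dist = normalized_hamming(a, b)  # 3 of 8 bits differ -> 0.375
```

In practice, invalid (masked) pixels are usually excluded from the count; that refinement is omitted here.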

4. Motivations and Contributions

Skin-color information alone is not sufficient for palmprint segmentation in complex scenes, especially in mobile environments. The traditional active shape model (ASM) combines gray-level information with shape information, but its performance is poor in complex scenes. An improved ASM method is developed for palmprint segmentation: the Procrustes method first normalizes the shape of the palm; the shape model of the palm is then computed with principal component analysis (PCA); finally, skin-color likelihood replaces the gray-level information for target fitting. The improved ASM method reduces complexity while improving accuracy and robustness.


1. Shape Representation

ASM uses statistical principles to combine the shape information of the target with its texture information, so that the boundary contour of the target can be fitted for target localization. Its advantages include high localization accuracy, low sensitivity to illumination, and strong robustness.

The contour boundary points constitute the shape of the target. First, the boundary points on the target contour are manually marked; they are required to reflect the target shape effectively. The marked boundary points are represented by a shape vector of length 2n, i.e., $x_i = (x_{i1}, y_{i1}, x_{i2}, y_{i2}, \dots, x_{in}, y_{in})^T$.

2. Alignment

The above process is repeated to mark the contour boundary points of each sample in the training set, which yields the training sample set $\Omega = \{x_1, x_2, \dots, x_L\}$, where $x_i$ represents the shape vector of the object in the i-th training image.

The shape and position of the samples in the training set differ, so the training samples need to be aligned; the Procrustes method is used for alignment. The commonly used alignment operations are scaling, translation, and rotation. The specific steps are as follows.

Step 1: Calculate the coordinates of the center points of each sample contour.

Step 2: Use the zoom operation to unify the dimensions of the target.

Step 3: Align the center coordinates of the sample contour.

Step 4: Determine the direction of the palm in the sample and rotate the sample for alignment.

The center point of the sample contour and its scale are computed as:

$$(\bar{x}, \bar{y}) = \left( \frac{1}{n} \sum_{j=1}^{n} x_j,\; \frac{1}{n} \sum_{j=1}^{n} y_j \right) \tag{1}$$

$$R(s) = \sum_{j=1}^{n} \left[ (x_j - \bar{x})^2 + (y_j - \bar{y})^2 \right] \tag{2}$$
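Equations (1) and (2) can be sketched in a few lines of NumPy. This illustrative snippet (not part of the original paper) translates each shape to the origin and rescales it, covering the translation and scaling steps of the alignment; rotation is omitted:

```python
import numpy as np

def centroid_and_scale(shape: np.ndarray):
    """shape: (n, 2) array of boundary points. Returns Eq. (1) and Eq. (2)."""
    center = shape.mean(axis=0)               # (x_bar, y_bar), Eq. (1)
    R = float(np.sum((shape - center) ** 2))  # scale measure R(s), Eq. (2)
    return center, R

def normalize(shape: np.ndarray) -> np.ndarray:
    """Translate the shape to the origin and scale it to unit size."""
    center, R = centroid_and_scale(shape)
    return (shape - center) / np.sqrt(R)

# Toy contour: the corners of a square
square = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 2.0], [0.0, 2.0]])
norm = normalize(square)
```

After normalization the centroid is at the origin and the scale measure equals one, so shapes of different sizes become directly comparable.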
3. Dimension Reduction

Due to the large number of samples and of feature points on each contour, the complexity of direct processing is high, so the data dimensionality is usually reduced by PCA.

Assume that there are n training samples (x1, x2, …, xn). The model in (3) can be obtained by PCA, where the average shape is computed by (4), Φ is the matrix whose columns are the eigenvectors corresponding to the k largest eigenvalues, and b is the k-dimensional vector of shape parameters.

$$x \approx \bar{x} + \Phi b \tag{3}$$

$$\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i \tag{4}$$
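The PCA shape model of (3) and (4) can be sketched as follows; the training set here is synthetic, purely to show the mechanics of computing the mean shape, the eigenvector matrix Φ, and the shape parameters b:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic training set: 10 shape vectors of dimension 62 (31 points x 2),
# noisy copies of one base shape, purely for illustration.
base = rng.normal(size=62)
X = np.stack([base + 0.05 * rng.normal(size=62) for _ in range(10)])

x_bar = X.mean(axis=0)                     # average shape, Eq. (4)
D = X - x_bar
S = (D.T @ D) / len(X)                     # covariance matrix
eigvals, eigvecs = np.linalg.eigh(S)
order = np.argsort(eigvals)[::-1]          # eigenvalues, largest first
k = 3
Phi = eigvecs[:, order[:k]]                # first k modes of variation

# A sample is approximated by x ~ x_bar + Phi @ b with b = Phi^T (x - x_bar)
b = Phi.T @ (X[0] - x_bar)
x_rec = x_bar + Phi @ b
```

Because the columns of Φ are orthonormal, projecting onto them can only shrink the deviation from the mean, so the reconstruction error never exceeds the original deviation.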
4. Model Localization

The essence of ASM-based target contour search is an iterative process. The positions of the boundary points are adjusted according to the gray-level information around them, and the prior model gradually approaches the target object until the change between iterations falls below a certain threshold. When the contour no longer changes significantly, the final searched contour is obtained. The specific process is as follows.

Step 1: Place the average sample model obtained in the alignment operation on the test sample.

Step 2: Use the gray-level information as the search condition for the boundary points; look for the best matching position of each boundary point along its normal direction, and move the boundary point to the new position.

Step 3: Compute the displacement Δx at each boundary point.

Step 4: Compute Δb to satisfy

$$x + \Delta x = \bar{x} + \Phi (b + \Delta b)$$

Please note that the size of Δb needs to be limited by a certain threshold.

Step 5: Jump to Step 2 until Δb does not change significantly.
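The five steps above can be sketched as a generic fitting loop. The boundary-point search of Step 2 is abstracted into a callback `find_matches`, a hypothetical name introduced here for illustration; the thresholding of Δb mentioned in Step 4 is noted but omitted:

```python
import numpy as np

def fit_asm(x_bar, Phi, find_matches, max_iter=50, tol=1e-4):
    """Skeleton of the iterative ASM search (Steps 1-5).

    find_matches(x) stands in for Step 2: given the current model points,
    it must return the suggested new positions of the boundary points.
    """
    b = np.zeros(Phi.shape[1])             # Step 1: start from the mean shape
    for _ in range(max_iter):
        x = x_bar + Phi @ b                # current model instance
        x_new = find_matches(x)            # Step 2: best matching points
        db = Phi.T @ (x_new - x_bar) - b   # Steps 3-4: delta-b from delta-x
        b = b + db                         # (thresholding of delta-b omitted)
        if np.linalg.norm(db) < tol:       # Step 5: convergence check
            break
    return x_bar + Phi @ b

# Toy check: with a fixed target reachable by the model, the loop converges.
x_bar = np.zeros(4)
Phi = np.eye(4)[:, :2]
target = np.array([1.0, 2.0, 0.0, 0.0])
result = fit_asm(x_bar, Phi, lambda pts: target)
```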


1. Dual-restriction-box Assistance

To overcome the uncontrollable conditions of contactless mode, especially in mobile environments, “dual-restriction-box assistance” is designed for the acquisition and localization of palmprint images [12], as shown in Fig. 1.

Fig. 1. Dual-restriction-box assistance.

During the capture stage, the two bottom points of the valleys between the index and middle fingers and between the ring and little fingers are placed on the two “assistance points”, i.e., the centers of the two “restriction boxes”. “Dual-restriction-box assistance” can effectively restrain the location and posture of the acquired hand, reduce the complexity of the subsequent image preprocessing, and improve the execution efficiency and accuracy.

2. Model Localization

The sub-image in each restriction box is a sample. Ten samples, shown in Fig. 2, are selected, and the points on the contours are then marked with manual marking and interpolation.

Fig. 2. Samples for contour marking.

The boundary points of a selected sample are shown in Fig. 3. Eleven feature points on the valley boundary between two fingers are roughly marked by hand, and 31 feature points are then obtained with piecewise linear interpolation. The Euclidean distances between pairs of adjacent feature points are approximately equal.

Fig. 3. Boundary point set. (a) Sub-image in restriction box, (b) Manual marking (11 points), (c) Piece-wise linear interpolation (31 points).

There are 31 boundary points in each sample, and these boundary points are represented by two-dimensional coordinates. The boundary point set of each image is represented by $x_i = (x_{i1}, y_{i1}, x_{i2}, y_{i2}, \dots, x_{i31}, y_{i31})^T$.
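The step from 11 manually marked points to 31 approximately equidistant points can be sketched by resampling the polyline by arc length. This snippet is an illustrative reconstruction, not the authors' code:

```python
import numpy as np

def densify_contour(points: np.ndarray, n_out: int = 31) -> np.ndarray:
    """Resample a polyline of marked points, shape (n, 2), to n_out points
    spaced approximately equally along its arc length."""
    seg = np.diff(points, axis=0)
    seg_len = np.hypot(seg[:, 0], seg[:, 1])
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])  # cumulative arc length
    targets = np.linspace(0.0, cum[-1], n_out)
    x = np.interp(targets, cum, points[:, 0])
    y = np.interp(targets, cum, points[:, 1])
    return np.stack([x, y], axis=1)

# Toy example: 11 marked points on a horizontal segment -> 31 points
marked = np.stack([np.linspace(0.0, 10.0, 11), np.zeros(11)], axis=1)
dense = densify_contour(marked)
```

The endpoints are preserved, and on a straight segment the 31 output points are exactly equidistant; on a curved valley boundary they are equidistant along the polyline, matching the "approximately equal" spacing described above.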

3. Alignment

After marking the feature points of the 10 selected samples, the sample shapes are normalized using the Procrustes method described in Section 2.2.

4. Dimension Reduction

Each training sample xi is a feature vector of dimension 62. There is a high correlation among these vectors; for example, the shape of the valley between two fingers is always close to a U shape. PCA is used to capture this correlation and reduce the dimensionality. The specific steps are as follows.

Step 1: Use xi (i = 1, 2, …, 10) to denote the normalized shape vector of the valley between the two fingers in the restriction box, and compute the average shape vector $\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$ (here n = 10).

Step 2: Compute the deviation of each sample from the mean, $dx_i = x_i - \bar{x}$.

Step 3: Compute the covariance matrix $S = \frac{1}{n} \sum_{i=1}^{n} dx_i \, dx_i^T$ of the $dx_i$ (i = 1, 2, …, 10).

Step 4: Compute all the eigenvalues in the covariance matrix S and their corresponding eigenvectors.

The larger an eigenvalue is, the better its corresponding eigenvector reflects the main variation of the target shape. Therefore, after sorting, the eigenvectors corresponding to the first k largest eigenvalues are selected to form the matrix Φ = (p1, p2, …, pk), whose modes are weighted by the shape parameter vector b = (b1, b2, …, bk)T. Any shape vector in the final training sample can be expressed as:

$$X \approx \bar{X} + \Phi b$$

Please note that the criterion $\sum_{i=1}^{k} \lambda_i \geq V \sum_{i=1}^{10} \lambda_i$ should be satisfied; that is, the ratio of the target deformation captured by the first k eigenvalues to the total deformation of all modes is not less than V (V is generally 0.98).

Since the k eigenvectors in Φ are mutually orthogonal, b can be expressed as:

$$b = \Phi^T (X - \bar{X})$$

By adjusting b, new model instances can be generated, but the variation of b needs to be limited to a certain range; otherwise, the model generated during iteration is prone to deformation, causing the target fitting to fail. Each component of b is usually limited to:

$$-3\sqrt{\lambda_i} \leq b_i \leq 3\sqrt{\lambda_i}$$
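Constraining each bi to ±3√λi is a simple clipping operation; a minimal sketch, assuming b and the eigenvalues are already available as NumPy arrays:

```python
import numpy as np

def clamp_shape_params(b: np.ndarray, eigvals: np.ndarray) -> np.ndarray:
    """Constrain each shape parameter b_i to +/- 3*sqrt(lambda_i) so the
    generated model instance remains a plausible shape."""
    limit = 3.0 * np.sqrt(eigvals)
    return np.clip(b, -limit, limit)

eigvals = np.array([4.0, 1.0])          # toy lambda_1, lambda_2
b = np.array([10.0, -2.5])              # 10 exceeds the +/-6 limit for lambda_1
b_c = clamp_shape_params(b, eigvals)    # -> [6.0, -2.5]
```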
5. Searching Based on Skin-color Model

We use the twice adaptive skin-color model in [12] for target search. The specific steps are as follows.

Step 1: Place the initial shape model at the center of the test image as the initial position for the target search, and set the model parameter b to 0, so that the initial model is $x = \bar{x}$.

Step 2: According to the search criterion, look for the best matching position $(x_i', y_i')$ (i = 1, 2, …, 31) of each boundary point $(x_i, y_i)$ along its bidirectional normal direction.

Step 3: Update the position of each boundary point to get a new shape. Adjust b so that the newly generated model gets as close as possible to the actual shape, and set the newly generated model as the current model.

Step 4: Repeat Steps 2 and 3 until b does not significantly change.

In the above iterative process, Step 2 is significant. The bidirectional normal directions of each boundary point are its search directions. Once the search directions are determined, the search criterion directly affects the accuracy of the target localization. The method developed in this paper adopts a more accurate search scheme based on skin-color likelihood.

Based on the “twice-adaptive skin-color model” in [12], the skin-color likelihood is computed. Then, along the search directions of each boundary point, likelihood statistics are collected from the surrounding pixels. For the i-th boundary point, take k points (here k = 7) along each of the two normal directions, and use the skin-color likelihoods of the 2k + 1 points to form a vector $g_i = \{v_1, v_2, \dots, v_{2k+1}\}$. The same operation is performed for every boundary point to form the skin-color statistical model of the target contour $S = \{g_1, g_2, \dots, g_{31}\}$.

For a contour boundary point at the best split position during accurate segmentation, the first k points and the last k points of its profile are located in the palm area and the background area, respectively, so the matching function is defined as:

$$f(g_i) = \left| \sum_{j=1}^{7} v_j - \sum_{j=9}^{15} v_j \right|$$

f(gi) reaches its maximum at the best split position.
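The matching function can be sketched directly from its definition. The two toy profiles below (hypothetical likelihood values, with k = 7) show that f is large at an ideal palm/background split and zero inside a homogeneous region:

```python
import numpy as np

def boundary_score(g: np.ndarray) -> float:
    """f(g) = |sum of v_1..v_7 - sum of v_9..v_15| for a 15-value
    skin-color likelihood profile sampled along the normal (k = 7)."""
    assert g.size == 15
    return float(abs(g[:7].sum() - g[8:].sum()))

# Profile straddling an ideal split: palm side fully skin-like (1.0),
# background side fully non-skin (0.0).
ideal = np.array([1.0] * 7 + [0.5] + [0.0] * 7)
# Profile entirely inside one region: no split evidence.
flat = np.full(15, 0.5)
```

In the search, this score would be evaluated at each candidate position along a boundary point's normal, and the position with the maximum score taken as the new boundary point.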

6. Key Point Detection

After the shape of the valley between the fingers has been located with the above search scheme, the key point between the fingers is detected.

As shown in Fig. 4, the edge of the valley between the fingers is U-shaped. The first 1/3 and the last 1/3 of the feature points approximately form two non-parallel straight lines. The first 10 and the last 10 feature points are selected to fit two straight lines with the least-squares method. The intersection of the angle bisector of the two fitted lines with the inter-finger valley edge is detected as the key point.

Fig. 4. Key point detection. (a) Feature points, (b) Two fitted lines, (c) Angle bisector, (d) Key point.
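The key-point detection described above can be sketched as follows. Line fitting is done here with a total-least-squares (SVD) fit rather than the paper's plain least squares, to stay stable for near-vertical finger edges; the test contour is a synthetic U-shaped curve (a parabola), not real palm data:

```python
import numpy as np

def fit_direction(pts: np.ndarray):
    """Center and principal direction of a 2-D point set (SVD line fit)."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[0]

def key_point(contour: np.ndarray) -> np.ndarray:
    """Fit lines to the first and last 10 points of a U-shaped valley
    contour, bisect the angle at their intersection, and return the
    contour point closest to the bisector."""
    c1, d1 = fit_direction(contour[:10])
    c2, d2 = fit_direction(contour[-10:])
    # Intersection of the two fitted lines: c1 + t1*d1 = c2 + t2*d2
    t1, _ = np.linalg.solve(np.stack([d1, -d2], axis=1), c2 - c1)
    apex = c1 + t1 * d1
    # Orient both directions from the apex toward their own arm
    if np.dot(d1, c1 - apex) < 0:
        d1 = -d1
    if np.dot(d2, c2 - apex) < 0:
        d2 = -d2
    bis = d1 / np.linalg.norm(d1) + d2 / np.linalg.norm(d2)  # bisector
    bis /= np.linalg.norm(bis)
    # Contour point with the smallest perpendicular distance to the bisector
    rel = contour - apex
    perp = np.abs(rel[:, 0] * bis[1] - rel[:, 1] * bis[0])
    return contour[np.argmin(perp)]

# Synthetic U-shaped contour: 31 points on a parabola; by symmetry the
# key point should be its bottom, (0, 0).
xs = np.linspace(-1.5, 1.5, 31)
contour = np.stack([xs, xs ** 2], axis=1)
kp = key_point(contour)
```

Because the angle bisector construction depends only on directions, this localization is rotation-invariant, matching the property claimed for the method.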


The developed method is compared with existing methods in Table 1. The equal error rate (EER) and the d′ index are used as the evaluation indicators. The binary orientation co-occurrence vector is used as the palmprint template [20]. The improved ASM is superior to all the compared methods.

Table 1. Accuracy comparison.
Methods EER (%) d′
Traditional Gaussian skin-color [9] 9.0 2.95
2D-Otsu [10] 4.3 3.46
Ellipse skin-color [11] 11.1 2.71
Twice adaptive skin-color [12] 2.4 4.30
Traditional ASM [13] 4.6 3.37
Our method 1.9 4.60


An improved ASM method is developed for mobile palmprint segmentation. The advantages of our developed method include:

(1) Stronger robustness against interference

After skin-color-based segmentation, some interfering pixels may remain, which directly disturb the detection of the key points. The improved ASM method can resist these interference points during the target search.

(2) Accurate detection of key point

The improved ASM method exploits the U-shaped geometry between the fingers for key-point detection with rotational invariance, so the accuracy is improved.

(3) Avoidance of complex threshold selection

Skin-color models need a threshold, and an unreasonable threshold setting directly degrades the segmentation. In the improved ASM method, the differences in skin-color likelihood between each pixel and its adjacent pixels along the bidirectional normal directions are computed to update the positions of the boundary points, so no threshold is needed.


The authors thank the editors and anonymous reviewers for providing very thoughtful comments, which have led to an improved version of this paper. This work was supported by National Natural Science Foundation of China (61866028, 61881340421, 61741312, 61663031, 61763033, 61866025, 61662049), Key Program Project of Research and Development (Jiangxi Provincial Department of Science and Technology) (20171ACE50024, 20161BBE50085), Construction Project of Advantageous Science and Technology Innovation Team in Jiangxi Province (20165BCB19007), Application Innovation Plan (Ministry of Public Security of P. R. China) (2017YYCXJXST048), Open Foundation of Key Laboratory of Jiangxi Province for Image Processing and Pattern Recognition (ET201680245, TX201604002), Innovation Foundation for Postgraduate (YC2017067), and “Triple-little” Extracurricular Academic Projects (2017YBRJ034).



[1] M. Chen, Y. F. Qian, S. W. Mao, W. Tang, and X. M. Yang, "Software-defined mobile networks security," Mobile Networks and Applications, vol. 21, no. 5, pp. 729-743, Oct. 2016.

[2] L. Leng and A. B. J. Teoh, "Alignment-free row-co-occurrence cancelable palmprint fuzzy vault," Pattern Recognition, vol. 48, no. 7, pp. 2290-2303, Jul. 2015.

[3] H. Liu and E. E. Lazkani, "Biometric inspired mobile network authentication and protocol validation," Mobile Networks and Applications, vol. 21, no. 1, pp. 130-138, Feb. 2016.

[4] L. Leng, A. B. J. Teoh, M. Li, and M. K. Khan, "A remote cancelable palmprint authentication protocol based on multidirectional two-dimensional palmphasor-fusion," Security and Communication Networks, vol. 7, no. 11, pp. 1860-1871, Nov. 2014.

[5] R. Amin, S. H. Islam, G. Biswas, M. K. Khan, L. Leng, and N. Kumar, "Design of an anonymity-preserving three-factor authenticated key exchange protocol for wireless sensor networks," Computer Networks, vol. 101, pp. 42-62, Jun. 2016.

[6] A. Kumar, "Toward more accurate matching of contactless palmprint images under less constrained environments," IEEE Transactions on Information Forensics and Security, vol. 14, no. 1, pp. 34-47, Jan. 2019.

[7] W. Jia, B. Zhang, J. T. Lu, Y. H. Zhu, Y. Zhao, W. M. Zuo, and H. B. Ling, "Palmprint recognition based on complete direction representation," IEEE Transactions on Image Processing, vol. 26, no. 9, pp. 4483-4498, May 2017.

[8] L. Zhang, L. D. Li, A. Q. Yang, Y. Shen, and M. Yang, "Towards contactless palmprint recognition: A novel device, a new benchmark, and a collaborative representation based identification approach," Pattern Recognition, vol. 69, pp. 199-212, Sep. 2017.

[9] G. K. O. Michael, T. Connie, and A. B. J. Teoh, "Touchless palm print biometrics: Novel design and implementation," Image and Vision Computing, vol. 26, no. 12, pp. 1551-1560, Dec. 2008.

[10] Q. Li, H. Tang, J. N. Chi, Y. Y. Xing, and H. T. Li, "Gesture segmentation with improved maximum between-cluster variance algorithm," Acta Automatica Sinica, vol. 43, no. 4, pp. 528-537, Apr. 2017.

[11] J. Li and X. L. Hao, "Face detection using ellipse skin model," Computer Measurement & Control, vol. 14, no. 2, pp. 170-171, Feb. 2006.

[12] K. S. Cao and L. Leng, "Double-point auxiliary on valleys between fingers for palmprint preprocessing on mobile devices," Journal of Optoelectronics • Laser, vol. 29, no. 2, pp. 205-211, Feb. 2018.

[13] A. P. Liu, Y. Zhou, and X. P. Guan, "Application of improved active shape model in face positioning," Computer Engineering, vol. 33, no. 18, pp. 227-229, Sep. 2007.

[14] M. Aykut and M. Ekinci, "Developing a contactless palmprint authentication system by introducing a novel ROI extraction method," Image and Vision Computing, vol. 40, pp. 65-74, Aug. 2015.

[15] Y. F. Han, T. N. Tan, Z. N. Sun, and Y. Hao, "Embedded palmprint recognition system on mobile devices," in Proceedings of the International Conference on Biometrics, pp. 1184-1193, Aug. 2007.

[16] S. Aoyama, K. Ito, T. Aoki, and H. Ota, "A contactless palmprint recognition algorithm for mobile phones," in Proceedings of the International Workshop on Advanced Image Technology, pp. 409-413, Jan. 2013.

[17] J. S. Kim, G. Li, B. J. Son, and J. Kim, "An empirical study of palmprint recognition for mobile phones," IEEE Transactions on Consumer Electronics, vol. 61, no. 3, pp. 311-319, Aug. 2015.

[18] S. Ibrahim and D. A. Ramli, "Evaluation on palm-print ROI selection techniques for smart phone based touch-less biometric system," American Academic & Scholarly Research Journal, vol. 5, no. 5, pp. 205-211, Jul. 2013.

[19] L. K. Fei, G. M. Lu, W. Jia, S. H. Teng, and D. Zhang, "Feature extraction methods for palmprint recognition: A survey and evaluation," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2018, in press.

[20] Z. H. Guo, D. Zhang, L. Zhang, and W. M. Zuo, "Palmprint verification using binary orientation co-occurrence vector," Pattern Recognition Letters, vol. 30, no. 13, pp. 1219-1227, Oct. 2009.