Journal of Multimedia Information System
Korea Multimedia Society
Section A

Video Palmprint Recognition System Based on Modified Double-line-single-point Assisted Placement

Tengfei Wu1,2, Lu Leng1,2,*
1School of Software, Nanchang Hangkong University, Nanchang, 330063, P. R. China, 928993126@qq.com
2Key Laboratory of Jiangxi Province for Image Processing and Pattern Recognition, Nanchang Hangkong University, Nanchang, 330063, P. R. China
*Corresponding Author: Lu Leng, Nanchang Hangkong University, 696 Fenghe South Avenue, Nanchang, 330063, P. R. China, +86-791-86453251, leng@nchu.edu.cn.

© Copyright 2021 Korea Multimedia Society. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Mar 20, 2021; Accepted: Mar 24, 2021

Published Online: Mar 31, 2021

Abstract

Palmprint has become a popular biometric modality; however, palmprint recognition has not been conducted in video media. Video palmprint recognition (VPR) has some advantages that are absent in image palmprint recognition. In VPR, registration and recognition can be implemented automatically, without users' manual manipulation. A good-quality image can be selected from the video frames or generated from the fusion of multiple video frames. VPR in contactless mode overcomes several problems caused by contact mode; however, contactless mode, especially mobile mode, encounters several severe challenges. The double-line-single-point (DLSP) assisted placement technique can overcome these challenges and effectively reduce the localization error and computational complexity. This paper modifies the DLSP technique to reduce the invalid area in the frames. In addition, the valid frames, in which users place their hands correctly, are selected according to the finger gap judgement, and then some key frames of good quality are selected from the valid frames as the gallery samples, which are matched with the query samples for the authentication decision. The VPR algorithm is implemented in a system designed and developed for mobile devices.

Keywords: Mobile terminal; Video biometric recognition; Palmprint recognition; Assisted placement

I. INTRODUCTION

Accurate user authentication and authorization control are key functions on the Internet. The possessions used for authentication can be stolen, broken, or forged, and the passwords used for authentication can be forgotten or attacked. Biometric recognition realizes authentication based on the inherent physical or behavioral characteristics of human beings [1], which overcomes the drawbacks of the traditional authentication technologies [2].

Palmprint has rich discriminative features, including main lines, wrinkles, ridges, and minutiae points. In addition, palmprint recognition can have low equipment requirements, fast speed, and high accuracy; therefore, it is widely deemed an important biometric modality.

Video palmprint recognition (VPR) has some advantages that are absent in image palmprint recognition. In VPR, the registration and recognition can be automatically implemented without users’ manual manipulation. A good-quality image can be selected from the video frames or generated from the fusion of multiple video frames [3]. VPR in contactless mode overcomes the problems caused by contact mode.

  • Health risk: For reasons of health and personal safety, it is unhygienic for users to touch the same sensors or devices. Contact acquisition increases the risk of spreading infectious diseases, such as COVID-19.

  • Low acquisition flexibility: Contact acquisition reduces users' acceptance, flexibility, and comfort.

  • Surface contamination: The surface of contact sensors is easily contaminated, especially in harsh, dirty, and outdoor environments. The surface contamination of contact sensors is likely to degrade the quality of subsequently acquired biometric images. In addition, biometric features can be retained on the surface of contact sensors, which increases the risk of leaking biometric privacy.

  • Resistance from traditional customs: In some conservative nations/cultures, users of different genders are unwilling to touch the same devices.

Although VPR in contactless mode overcomes the aforementioned problems, it encounters several severe challenges, such as complex backgrounds, varying illumination, and uncontrollable hand placement (pose and location) [4].

This paper develops a practical VPR algorithm and system. The main contributions are summarized as follows.

  1. The double-line-single-point (DLSP) assisted placement technique can overcome the challenges and effectively reduce the localization error and computational complexity. A modified DLSP (MDLSP) technique is developed to reduce the invalid area in the frames.

  2. The valid frames, in which users place their hands correctly, are selected according to the finger gap judgement; then some key frames of good quality are selected from the valid frames as the gallery samples, which are matched with the query samples for the authentication decision.

  3. The VPR algorithm is implemented in a system designed and developed for mobile devices.

The rest of this paper is organized as follows. Section 2 revisits the related works. Section 3 specifies the methodology. Section 4 presents the experiments and discussions. Finally, the conclusions are drawn in Section 5.

II. RELATED WORKS

2.1. Preprocessing

The purpose of palmprint image preprocessing is typically to accurately segment, localize, and crop the region of interest (ROI) [5]. Palmprint recognition can be divided into contact mode, contactless mode, and mobile mode. Mobile mode is a special case of contactless mode and can be considered the most difficult contactless mode. Preprocessing in contact mode is easier than that in the other two modes. Table 1 summarizes the palmprint preprocessing methods in contactless mode.

Table 1. Contactless palmprint preprocessing.
| Reference | Background   | Segmentation                 | Description |
| [6]       | Controllable | —                            | It possibly fails in complex backgrounds. |
| [7]       | Controllable | Skin color model             | The uniformity of the intra-class features is poor, and it is unsuitable for complex backgrounds. |
| [8,9]     | Controllable | —                            | It possibly fails in complex backgrounds. |
| [10,11]   | Controllable | Shape model                  | The computational complexity is high. |
| [12]      | Controllable | —                            | Users' freedom is low, and the uniformity of the intra-class features is poor. |
| [13]      | Complex      | Assisted                     | The assisted techniques are complicated, which degrades the comfortability. |
| [14]      | Complex      | Assisted (DLSP)              | DLSP reduces the localization error and computation complexity, but the invalid area is large. |
| [15]      | Complex      | Improved active shape model  | The computational complexity is high. |
2.2. Feature Extraction

Palmprint recognition methods can be briefly divided into five categories [16], namely coding-based, structure-based, subspace-based [17], statistics-based [18], and deep-learning/machine-learning-based methods. Fusion technologies have also been used in palmprint recognition [19,20]. Because coding-based methods are free from training and have low storage and computational complexity, they are suitable for edge computing devices such as mobile phones with low-performance hardware. Therefore, coding-based methods, including PalmCode [21], OrdinalCode [22], FusionCode [23], CompCode [24], RLOC [25], BOCV [26], E-BOCV [27], DCC [28], DRCC [28], and DOC [29], are employed in our VPR system.
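To make this concrete, the following Python sketch (our illustration, not the exact implementation of any cited paper; OpenCV and NumPy are assumed, the Gabor parameters are arbitrary, and the cited methods additionally encode the codes as bit planes and compensate small translations during matching) shows the pattern shared by coding-based methods: per-pixel dominant-orientation coding followed by normalized Hamming distance matching.

import cv2
import numpy as np

def competitive_code(roi, num_orients=6):
    # Per-pixel dominant orientation: the index of the Gabor filter with
    # the strongest (most negative) response, as in CompCode-style coding.
    roi = roi.astype(np.float32)
    responses = []
    for k in range(num_orients):
        theta = k * np.pi / num_orients
        kernel = cv2.getGaborKernel((17, 17), 4.0, theta, 10.0, 1.0, 0)
        responses.append(cv2.filter2D(roi, -1, kernel))
    return np.argmin(np.stack(responses), axis=0)

def normalized_hamming(code_a, code_b):
    # Fraction of pixels whose orientation codes disagree.
    return float(np.mean(code_a != code_b))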

III. METHODOLOGY

The key frame selection in the VPR system is shown in Figure 1. The valid frames, in which users place their hands correctly, are first selected according to the finger gap judgement, and then the key frames, which have good quality, are selected from the valid frames according to the quality judgement. Some key frames at registration are used as the gallery samples, while each key frame at authentication is used as a query sample and matched with the gallery samples until the authentication request is approved.

Fig. 1. Key frame selection in VPR system.
3.1. Assisted Placement

The DLSP assisted placement technique can overcome the challenges and effectively reduce the localization error and computational complexity. However, the assisted lines in DLSP are oblique, so the invalid area is large. In addition, preprocessing is not easily conducted along oblique directions. Thus, the MDLSP technique is developed to reduce the invalid area in the frames and facilitate the computation. Figure 2 shows the assisted placement interface for the left hand in the VPR system; the interface can be mirror-flipped for the right hand. The invalid area is very small, and the computation can be easily conducted along the horizontal and vertical directions.

Fig. 2. MDLSP interface.

In MDLSP, there are two horizontal assisted lines (long line CD, short line AB) and one assisted point B (the right end point of line segment AB) on the screen. The point O in the upper left corner is the origin of the coordinate system. The positive directions of the X-axis and Y-axis are horizontally rightward and vertically downward, respectively. The positions of the assisted graphics are defined as follows.

The image is a landscape-mode screen preview on the mobile device; its width and height are W and H, respectively. When the palm surface is parallel to the interface surface, the upper and lower boundaries of the palm are approximately parallel to AB and CD, respectively. Let the two end points of AB be A(xA, yA) and B(xB, yB), respectively.

$$x_A = \frac{7}{15}H, \qquad y_A = \frac{1}{5}W. \tag{1}$$

$$x_B = \frac{9}{15}H, \qquad y_B = \frac{1}{15}W. \tag{2}$$

Let the two end points of CD be C(xC, yC) and D(xD, yD), respectively.

$$x_C = \frac{4}{15}H, \qquad y_C = \frac{14}{15}W. \tag{3}$$

$$x_D = \frac{9}{15}H, \qquad y_D = \frac{14}{15}W. \tag{4}$$

The distance between line segments AB and CD is L:

$$L = y_D - y_B. \tag{5}$$

L determines the distance from the palm surface to the camera. A user needs to keep his/her palm surface and the lens at a proper distance.
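As a minimal sketch, Eqs. (1)-(5) can be encoded directly as follows (Python; the function name mdlsp_layout is ours, and W and H are the preview width and height defined above):

def mdlsp_layout(W, H):
    # Positions of the MDLSP assisted graphics, Eqs. (1)-(5).
    A = (7 * H / 15, W / 5)         # left end of short line AB, Eq. (1)
    B = (9 * H / 15, W / 15)        # assisted point B, Eq. (2)
    C = (4 * H / 15, 14 * W / 15)   # left end of long line CD, Eq. (3)
    D = (9 * H / 15, 14 * W / 15)   # right end of long line CD, Eq. (4)
    L = D[1] - B[1]                 # distance between AB and CD, Eq. (5)
    return A, B, C, D, L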

3.2. Hand Placement

A user should place his/her hand according to the following rules.

  1. The four fingers (index, middle, ring and little fingers) are naturally brought together, and the thumb is spread out.

  2. The upper and lower boundary lines of the palm should be aligned with and tangent to the two assisted lines.

  3. Point B should be aligned with the intersection of the upper boundary of the index finger and the bottom line of the index finger.

3.3. Valid Frame Selection

The valid frames are selected according to the finger gap judgement [14]. The finger gap is a sub-region of the hand, i.e., the shaded rectangle in Figure 2. The location of the shaded rectangle is:

$$\begin{cases} x: \left[\,x_B,\ x_B + 2L/5\,\right] \\ y: \left[\,y_B + L/10,\ y_B + 4L/5\,\right] \end{cases} \tag{6}$$

If the finger gap appears inside the shaded rectangle, i.e., at the correct location in a frame, the user has placed his/her hand correctly according to the assisted placement, and the frame is considered valid. The finger gap processing is shown in Figure 3, and its horizontal integral projection is shown in Figure 4.

Fig. 3. Finger gap processing.
Fig. 4. Horizontal integral projection of finger gap region.
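A minimal Python sketch of the valid frame judgement is given below, assuming the frame is a grayscale NumPy array indexed as gray[y, x]; the binarization threshold and the row-dominance ratio are illustrative assumptions, not values specified in the text:

import numpy as np

def is_valid_frame(gray, x_B, y_B, L, skin_thresh=128, gap_ratio=0.5):
    # Crop the shaded rectangle of Eq. (6), to the right of point B.
    x0, x1 = int(x_B), int(x_B + 2 * L / 5)
    y0, y1 = int(y_B + L / 10), int(y_B + 4 * L / 5)
    region = gray[y0:y1, x0:x1]
    # Binarize: dark pixels are background seen through the finger gap.
    gap = region < skin_thresh
    # Horizontal integral projection: background pixels per row (cf. Fig. 4).
    projection = gap.sum(axis=1)
    # The frame is valid if some rows are dominated by background pixels,
    # i.e., the finger gap appears at the expected location.
    return bool(np.any(projection > gap_ratio * gap.shape[1]))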
3.4. ROI Localization and Cropping

The shaded area in Figure 5 is the ROI. The location of the ROI is:

$$\begin{cases} x: \left[\,x_B - 4L/5,\ x_B - L/10\,\right] \\ y: \left[\,y_B + L/10,\ y_B + 4L/5\,\right] \end{cases} \tag{7}$$
Fig. 5. ROI localization and cropping.
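Under the same assumptions as above, the ROI cropping of Eq. (7) can be sketched as:

def crop_roi(gray, x_B, y_B, L):
    # Square ROI of Eq. (7), located to the left of point B.
    x0, x1 = int(x_B - 4 * L / 5), int(x_B - L / 10)
    y0, y1 = int(y_B + L / 10), int(y_B + 4 * L / 5)
    return gray[y0:y1, x0:x1]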
3.5. Key Frame Selection

For no-reference image quality evaluation, the gray-scale variance product is used to evaluate the quality of the valid frames; it is defined as:

$$G(f) = \sum_{y}\sum_{x} \left|f(x,y)-f(x+1,y)\right| \times \left|f(x,y)-f(x,y+1)\right|. \tag{8}$$

In order to reduce the computational complexity, only the quality of the ROI is evaluated. Valid frames have different qualities, as shown in Figure 6. The valid frames whose gray-scale variance products are higher than a threshold β are selected as the key frames.

Fig. 6. The gray-scale variance product values of valid frames.
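Eq. (8) and the β-threshold key frame selection can be sketched in Python as follows (NumPy is assumed; the difference maps are truncated to their common support at the image borders):

import numpy as np

def gray_variance_product(roi):
    # Eq. (8): sum over pixels of |horizontal difference| * |vertical difference|.
    f = roi.astype(np.float64)
    dx = np.abs(f[:, :-1] - f[:, 1:])   # |f(x, y) - f(x + 1, y)|
    dy = np.abs(f[:-1, :] - f[1:, :])   # |f(x, y) - f(x, y + 1)|
    return float(np.sum(dx[:-1, :] * dy[:, :-1]))

def select_key_frames(valid_rois, beta):
    # Keep the valid frames whose quality score exceeds the threshold beta.
    return [roi for roi in valid_rois if gray_variance_product(roi) > beta]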
3.6. Registration

The system inputs the gallery video; the ROI set is Rr = {r1, r2, …, rn}, and the candidate template set generated from the ROI set is T = {t1, t2, …, tn}. Assume that the number of gallery templates is k and that the registration distance threshold is hr. The gallery template set Tr = {t1, t2, …, tk} is generated by the following registration algorithm.

Registration algorithm

Input: the number of gallery templates, k; the registration distance threshold, hr; the candidate template set T = {t1, t2, …, tn}, generated from the ROIs cropped from the gallery video frames.

Output: the gallery template set Tr = {t1, t2, …, tk}, or registration failure.

p = 0;
for i = 2; i <= n; i++ do
  d = HammingDistance(ti-1, ti);
  if d < hr then
    p++;
    add ti to Tr;
  else
    p = 0; clear Tr;
  end if
  if p >= k then
    break;
  end if
end for
if length(Tr) == k then
  return Tr;
else
  return registration failure;
end if
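A direct Python rendering of the registration algorithm above is sketched below; the distance argument stands for the normalized Hamming distance between two templates, and the function name register is ours:

def register(templates, k, h_r, distance):
    # Collect k consecutive, mutually consistent templates as the gallery.
    gallery = []
    for prev, curr in zip(templates, templates[1:]):
        if distance(prev, curr) < h_r:
            gallery.append(curr)   # one more consistent template
            if len(gallery) >= k:
                return gallery     # registration succeeded
        else:
            gallery = []           # inconsistent pair: start over
    return None                    # registration failed

A palm is enrolled only when k consecutive templates are mutually consistent, which filters out frames captured during unstable hand placement.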

3.7. Verification

During authentication, the system inputs the query video; the query template set generated from its ROIs is Tv = {t1, t2, …, tn}, and the authentication distance threshold is hv. The verification algorithm is:

Verification algorithm

Input: the query template set Tv = {t1, t2, …, tn}, generated from the ROIs cropped from the query video frames; the gallery template set Tr = {t1, t2, …, tk}; the authentication distance threshold, hv.

Output: authentication succeeded or failed.

for i = 1; i <= n; i++ do
  for j = 1; j <= k; j++ do
    d = HammingDistance(Tv(i), Tr(j));
    if d < hv then
      return authentication succeeded;
    end if
  end for
end for
return authentication failed;
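Similarly, a Python sketch of the verification algorithm (the names are again ours):

def verify(query_templates, gallery, h_v, distance):
    # Accept as soon as any query template matches any gallery template.
    for t_q in query_templates:
        for t_g in gallery:
            if distance(t_q, t_g) < h_v:
                return True        # authentication succeeded
    return False                   # authentication failed

Matching stops at the first template pair whose distance falls below hv, so a decision is usually reached within a few frames.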

IV. EXPERIMENTAL RESULTS

4.1. Database

The hand video capture application is shown in Figure 7. We captured the videos of 100 palms in total, i.e., the left and right hands of 50 persons; each palm has 5 videos. Each video lasts about 6 seconds, and its size is about 10 MB. The ROIs are cropped from the valid frames, as shown in Figure 8.

Fig. 7. Hand video capture app interface.
Fig. 8. ROI cropping.
4.2. Accuracy

Equal error rate (EER) and decidability index d’ are used to evaluate the accuracy. d’ is defined as:

$$d' = \frac{\left|\mu_1 - \mu_2\right|}{\sqrt{\dfrac{\sigma_1^2 + \sigma_2^2}{2}}} \tag{9}$$

where μ1 and μ2 are the mathematical expectations of intra-class and inter-class normalized Hamming distances, respectively. σ1 and σ2 are the standard deviations of intra-class and inter-class normalized Hamming distances, respectively.
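As a small sketch, d' can be computed as follows, assuming intra and inter are arrays of intra-class and inter-class normalized Hamming distances:

import numpy as np

def decidability_index(intra, inter):
    # Eq. (9): separation between the intra-class and inter-class
    # normalized Hamming distance distributions.
    mu1, mu2 = np.mean(intra), np.mean(inter)
    s1, s2 = np.std(intra), np.std(inter)
    return abs(mu1 - mu2) / np.sqrt((s1 ** 2 + s2 ** 2) / 2)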

Table 2 shows the EER and d’. Figure 9 shows the ROC curves. The experimental results demonstrate the effectiveness of our VPR algorithm and system.

Table 2. Verification accuracy.
| Method           | EER (%) | d’   |
| PalmCode [21]    | 0.63    | 5.48 |
| OrdinalCode [22] | 0.55    | 8.43 |
| FusionCode [23]  | 0.58    | 4.62 |
| CompCode [24]    | 0.66    | 5.25 |
| RLOC [25]        | 0.64    | 6.26 |
| BOCV [26]        | 0.45    | 7.28 |
| E-BOCV [27]      | 0.60    | 5.65 |
| DCC [28]         | 1.10    | 3.97 |
| DRCC [28]        | 0.74    | 3.89 |
| DOC [29]         | 0.97    | 5.26 |
Fig. 9. ROC curves.

V. CONCLUSIONS AND FUTURE WORKS

Palmprint recognition in video media is a novel technology. This paper modifies the DLSP technique to reduce the invalid area in the frames. In addition, the valid frames are selected according to the finger gap judgement, and then some key frames are selected from the valid frames according to image quality as the gallery samples, which are matched with the query samples for the authentication decision. The VPR algorithm is implemented in a system designed and developed for mobile devices. In the future, we will further improve the assisted technique or develop other state-of-the-art assisted techniques, and employ other recognition methods, such as deep learning, to improve the accuracy.

Acknowledgement

This research was supported by the National Natural Science Foundation of China (61866028, 61866025, 61763033), the Key Program Project of Research and Development (Jiangxi Provincial Department of Science and Technology) (20203BBGL73222), and the Innovation Foundation for Postgraduate (YC2019093).

REFERENCES

[1].

D. Jeong, B. G. Kim and S. Y. Dong, “Deep Joint Spatiotemporal Network (DJSTN) for Efficient Facial Expression Recognition,” Sensors, vol. 20, p. 1963, Mar. 2020.

[2].

J. H. Kim, B. G. Kim, P. P. Roy and D. M. Jeong, “Efficient Facial Expression Recognition Algorithm Based on Hierarchical Deep Neural Network Structure,” IEEE Access, vol. 7, pp. 41273-41285, Mar. 2019.

[3].

B. G. Kim and D. J. Park, “Unsupervised video object segmentation and tracking based on new edge features,” Pattern Recognition Letters, vol. 25, no. 15, pp. 1731-1742, Nov. 2004.

[4].

L. Leng and A. B. J. Teoh, “Alignment-free row-co-occurrence cancelable palmprint fuzzy vault,” Pattern Recognition, vol. 48, no. 7, pp. 2290-2303, Jul. 2015.

[5].

B. G. Kim, J. I. Shim and D. J. Park, “Fast image segmentation based on multi-resolution analysis and wavelets,” Pattern Recognition Letters, vol. 24, no. 15, pp. 2995–3006, Dec. 2003.

[6].

M. Franzgrote, C. Borg, B. J. T. Ries, S. Bussemaker, X. Jiang, M. Fieleser, and L. Zhang, “Palmprint verification on mobile phones using accelerated competitive code,” in Proceeding of 2011 International Conference on Hand-Based Biometrics, Hong Kong, pp. 1-6, Nov. 2011.

[7].

S. Aoyama, K. Ito, T. Aoki and H. Ota, “A contactless palmprint recognition algorithm for mobile phones,” in Proceeding of International Workshop on Advanced Image Technology, pp. 409-413, Jan. 2013.

[8].

H. Sang, Y. Ma and J. Huang, “Robust palmprint recognition base on touch-less color palmprint images acquired,” Journal of Signal and Information Processing, vol. 4, no. 2, pp. 134-139, Apr. 2013.

[9].

M. K. Balwant, A. Agarwal and C. R. Rao, “Online touchless palmprint registration system in a dynamic environment,” Procedia Computer Science, vol. 54, pp. 799-808, Jan. 2015.

[10].

M. Aykut and M. Ekinci, “AAM-based palm segmentation in unrestricted backgrounds and various postures for palmprint recognition,” Pattern Recognition Letters, vol. 34, no. 9, pp. 955-962, Jul. 2013.

[11].

M. Aykut and M. Ekinci, “Developing a contactless palmprint authentication system by introducing a novel ROI extraction method,” Image and Vision Computing, vol. 40, pp. 65-74, Aug. 2015.

[12].

M. Gomez-Barrero, J. Galbally, A. Morales, M. A. Ferrer, J. Fierrez and J. Ortega-Garcia, “A novel hand reconstruction approach and its application to vulnerability assessment,” Information Sciences, vol. 268, pp. 103-121, Jun. 2014.

[13].

J. S. Kim, G. Li, B. Son and J. Kim, “An empirical study of palmprint recognition for mobile phones,” IEEE Transactions on Consumer Electronics, vol. 61, no. 3, pp. 311-319, Oct. 2015.

[14].

L. Leng, F. Gao and Q. Chen, “Palmprint recognition system on mobile devices with double-line-single-point assistance,” Personal and Ubiquitous Computing, vol. 22, no. 1, pp. 93-104, Feb. 2018.

[15].

F. Gao, K. Cao, L. Leng and Y. Yuan, “Mobile palmprint segmentation based on improved active shape model,” Journal of Multimedia Information System, vol. 5, no. 4, pp. 221-228, Dec. 2018.

[16].

D. Zhong, X. Du and K. Zhong, “Decade progress of palmprint recognition: A brief survey,” Neurocomputing, vol. 328, pp. 16-28, 2019.

[17].

L. Leng, J. Zhang, G. Chen, M. K. Khan and K. Alghathbar, “Two-directional two-dimensional random projection and its variations for face and palmprint recognition,” in Proceeding of International conference on computational science and its applications, pp. 458-470, Jun. 2010.

[18].

L. Leng, J. Zhang, M. K. Khan, X. Chen and K. Alghathbar, “Dynamic weighted discrimination power analysis in DCT domain for face and palmprint recognition,” International Journal of the Physical Sciences, vol. 5, no. 17, pp. 2543-2554, 2010.

[19].

L. Leng, M. Li, C. Kim and X. Bi, “Dual-source discrimination power analysis for multi-instance contactless palmprint recognition,” Multimedia Tools and Applications, vol. 76, no. 1, pp. 333-354, Jan. 2017.

[20].

L. Leng and J. Zhang, “Palmhash code vs. palmphasor code,” Neurocomputing, vol. 108, pp. 1-12, May 2013.

[21].

D. Zhang, W. K. Kong, J. You and M. Wong, “Online palmprint identification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1041-1050, Sep. 2003.

[22].

Z. Sun, T. Tan and Y. Wang, “Ordinal palmprint representation for personal identification,” in Proceeding of 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, USA, pp. 279-284, Jun. 2005.

[23].

A. Kong, D. Zhang and M. Kamel, “Palmprint identification using feature-level fusion,” Pattern Recognition, vol. 39, no. 3, pp. 478-487, Mar. 2006.

[24].

A. K. Kong and D. Zhang, “Competitive coding scheme for palmprint verification,” in Proceeding of the 17th International Conference on Pattern Recognition, vol. 1, pp. 520-523, Aug. 2004.

[25].

W. Jia, D. S. Huang and D. Zhang, “Palmprint verification based on robust line orientation code,” Pattern Recognition, vol. 41, no. 5, pp. 1504-1513, May 2008.

[26].

Z. Guo, D. Zhang, L. Zhang and W. Zuo, “Palmprint verification using binary orientation co-occurrence vector,” Pattern Recognition Letters, vol. 30, no. 13, pp. 1219-1227, Oct. 2009.

[27].

L. Zhang, H. Li and J. Niu, “Fragile bits in palmprint recognition,” IEEE Signal Processing Letters, vol. 19, no. 10, pp. 663-666, Oct. 2012.

[28].

Y. Xu, L. Fei, J. Wen and D. Zhang, “Discriminative and robust competitive code for palmprint recognition,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 48, no. 2, pp. 232-241, Aug. 2018.

[29].

L. Fei, Y. Xu, W. Tang, and D. Zhang, “Double-orientation code and nonlinear matching scheme for palmprint recognition,” Pattern Recognition, vol. 49, pp. 89-101, Jan. 2016.

Authors

Tengfei Wu


received his B.S. degree from Hubei Engineering University, Wuhan, China, in 2018. He is now pursuing his M.S. degree at Nanchang Hangkong University, Nanchang, China.

His research interests include pattern recognition, image processing, and deep learning.

Lu Leng


received the Ph.D. degree from Southwest Jiaotong University, Chengdu, China, in 2012. He carried out postdoctoral research at Yonsei University, Seoul, South Korea, and at the Nanjing University of Aeronautics and Astronautics, Nanjing, China. From 2015 to 2016, he was a Visiting Scholar at West Virginia University, USA. From 2019 to 2020, he was a Visiting Scholar at Yonsei University. He is currently an Associate Professor at Nanchang Hangkong University.

He has published more than 90 international journal and conference papers and has been granted several scholarships and funding projects for his academic research. His research interests include computer vision, biometric template protection, and biometric recognition. He is a reviewer for several international journals and conferences.

Dr. Leng is a member of the Institute of Electrical and Electronics Engineers (IEEE), Association for Computing Machinery (ACM), the China Society of Image and Graphics (CSIG), the China Computer Federation (CCF), and the China Association of Artificial Intelligence (CAAI).