Section A

Brief Paper: A Computationally Efficient Retina Detection and Enhancement Image Processing Pipeline for Smartphone-Captured Fundus Images

Yaroub Elloumi1,2, Mohamed Akil1, Nasser Kehtarnavaz3
1Gaspard Monge Computer Science Laboratory, ESIEE-Paris, University Paris-Est Marne-la-Vallée, France,
2Medical Technology and Image Processing Laboratory, Faculty of Medicine, University of Monastir, Tunisia,
3Department of Electrical and Computer Engineering, University of Texas at Dallas, USA,
Corresponding Author: Nasser Kehtarnavaz, Department of Electrical and Computer Engineering, University of Texas at Dallas, Richardson, TX 75080, USA, +1-972-883-6838,

© Copyright 2018 Korea Multimedia Society. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Apr 28, 2018 ; Accepted: May 10, 2018

Published Online: Jun 30, 2018


Due to the handheld operation of smartphones and the presence of light leakage and non-balanced contrast, detecting the retina area in smartphone-captured fundus images is more challenging than in retinography-captured fundus images. This paper presents a computationally efficient image processing pipeline to detect and enhance the retina area in smartphone-captured fundus images. The developed pipeline consists of five image processing components, namely point spread function parameter estimation, deconvolution, contrast balancing, circular Hough transform, and retina area extraction. The results obtained indicate that a typical fundus image captured by a smartphone through a D-EYE lens is processed in 1 second.

Keywords: Biomedical image processing; smartphone-captured fundus images; retina detection


Fundus images have been extensively used to detect abnormal eye conditions such as diabetic retinopathy. For example, in [1], Kar et al. provided an approach to detect neovascularization, and in [2], Tan et al. presented an approach for detecting lesions that lead to diabetic retinopathy. Other examples include the work in [3], which addressed the Age-related Macular Degeneration (AMD) condition by examining the anatomy of fundus images, and the work in [4] by Prakash et al., which extracts the macula in order to detect diabetes. Also, the work in [5] discussed an approach for detecting glaucoma. Such works, which are aimed at detecting eye diseases from fundus images, are performed using retinography-captured fundus images, where the retina location and size are more or less the same and the contrast is uniform. As a result, the detection process is rather straightforward.

The use of mobile devices in ophthalmology has been growing, and several works have been reported in the literature for capturing fundus images using mobile devices. Optical lenses have been introduced to capture the retina via smartphones. Examples of these lenses are the condenser lens [6], the Welch Allyn PanOptic Ophthalmoscope [7], and the DEC200 portable fundus camera [8]. D-EYE is another optical lens that is designed to be snapped onto a smartphone in order to capture fundus images [9][10], see Fig. 1(a). In this work, this lens is used to capture fundus images.

Fig 1. (a) Smartphone with D-EYE lens snapped on, (b)-(c) two examples of smartphone-captured fundus images.

There are several differences between retinography-captured and smartphone-captured fundus images. First, there exists a non-uniformity of fundus distances when using a smartphone, leading to different sizes and locations of the retina; two such images are shown in Fig. 1. Second, the projection angle does not remain stationary, which causes a lack of brightness and a non-balanced contrast. Third, any motion between the D-EYE lens and a subject’s eye causes a blur in fundus images. Finally, the spacing between the D-EYE lens and the pupil generates a light leakage which produces noise in fundus images. Hence, in general, the image quality is lower when using smartphones as compared to retinography.

This paper presents a computationally efficient image processing pipeline in order to: (i) detect the retina area in smartphone-captured fundus images, and (ii) enhance the retina area for eye examination purposes. The rest of the paper is organized as follows. In Section II, we describe the developed image processing pipeline for detecting and enhancing the retina area in smartphone-captured fundus images. The experimental results are presented in Section III, followed by the conclusion in Section IV.


1. Pipeline components

The initial component of the pipeline consists of enhancing the retina by deblurring the smartphone-captured fundus image. We have used the image deblurring approach in [11][12] based on the degradation equation

O(x, y) = I(x, y) ∗ H(x, y) + N(x, y)

where O(x, y) denotes the observed image made up of the convolution of the original image I(x, y) with the degradation function H(x, y) and N(x, y) denotes noise. The function H(x, y), called the Point Spread Function (PSF), is given by [12] [13]

H(x, y, L, θ) = { 1/L   if √(x² + y²) ≤ L/2 and y/x = tan(θ)
               { 0     elsewhere

where L represents the blur length and θ the blur angle. Restoring the image from its blurred version is achieved by deconvolution. Once L and θ are estimated, a deblurred image is generated either by Richardson–Lucy (RL) deconvolution or by Wiener filtering.
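As a rough illustration of this deconvolution step, the Wiener filter can be sketched in Python with NumPy (the paper's implementation is in MATLAB; the helper names, the synthetic PSF construction, and the regularization constant k below are our own assumptions):

```python
import numpy as np

def motion_psf(shape, length, angle_deg):
    """Build a linear motion-blur PSF: value 1/L along a segment of the
    given length and angle, per the PSF formula above (hypothetical helper)."""
    psf = np.zeros(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2, length / 2, int(length) * 4):
        x = int(round(cx + t * np.cos(theta)))
        y = int(round(cy - t * np.sin(theta)))
        if 0 <= y < shape[0] and 0 <= x < shape[1]:
            psf[y, x] = 1.0
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, k=0.01):
    """Frequency-domain Wiener filter: I_hat = conj(H) * O / (|H|^2 + k)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    O = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * O / (np.abs(H) ** 2 + k)))

# Tiny demo: blur a synthetic image in the frequency domain, then restore it.
img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
psf = motion_psf(img.shape, length=9, angle_deg=30)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf, k=1e-3)
# the restored image should be closer to the original than the blurred one
```

The constant k stands in for the noise-to-signal ratio that a full Wiener filter would estimate from the data.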

The next component of the pipeline involves contrast balancing. Several works have previously addressed this problem; in this work, we have used the approach discussed in [14], which is found here to be effective for smartphone-captured fundus images.

The next component consists of extracting the retina area irrespective of its size and location. Since the retina has a circular shape, the circular Hough transform is employed here in order to detect circular patterns. A radius range is imposed to prevent other round objects, such as the optic disk, from being detected. The entire image processing pipeline is illustrated in Fig. 2.
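A minimal sketch of the circular Hough transform over a restricted radius range might look as follows (a simplified accumulator written in Python/NumPy; a production implementation would also exploit gradient directions to reduce voting):

```python
import numpy as np

def circular_hough(edges, radii):
    """Each edge pixel votes for all candidate circle centres lying at
    distance r from it; the accumulator peak gives the radius and centre."""
    h, w = edges.shape
    acc = np.zeros((len(radii), h, w))
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    for ri, r in enumerate(radii):
        cy = (ys[:, None] + r * np.sin(thetas)).round().astype(int)
        cx = (xs[:, None] + r * np.cos(thetas)).round().astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc[ri], (cy[ok], cx[ok]), 1)
    ri, y, x = np.unravel_index(acc.argmax(), acc.shape)
    return radii[ri], (y, x)

# Demo: an edge map containing a circle of radius 15 centred at (32, 32);
# the restricted radius range plays the role described in the text.
h = w = 64
yy, xx = np.mgrid[:h, :w]
d = np.sqrt((yy - 32) ** 2 + (xx - 32) ** 2)
edges = np.abs(d - 15) < 0.5
r, center = circular_hough(edges, radii=range(12, 19))
# expected: radius near 15, centre near (32, 32)
```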

Fig 2. Image processing components of smartphone-captured fundus images.
2. PSF parameter estimation

There are two kinds of deconvolution: non-blind and blind. Non-blind deconvolution uses a known PSF to provide a restored image by reversing the convolution operation. In blind deconvolution, the PSF is unknown, which requires estimating the length and angle parameters of the blurring process. The parameter estimation can be achieved either by using an image set or from a single image.

Due to the handheld aspect of smartphone-captured fundus images, we have considered the estimation from a single image as outlined in [11] by Dobes et al., where the PSF parameters are estimated in the frequency domain. This is done by exploiting the fact that edges appear blurred; as a result, the frequency domain of the image gradient exhibits a sinusoidal wave, and the power spectrum appears as parallel stripes. The stripe width corresponds to the length of the motion blur, and the direction of the blur is perpendicular to the stripe direction. The steps involved in the deblurring and denoising process are shown in Fig. 3. The power spectrum is computed by taking the Fast Fourier Transform (FFT). The Butterworth filtering shown in this figure is applied in order to remove the high-frequency noise.

Fig 3. Deblurring and denoising process of smartphone-captured fundus images.

Next, the Radon Transform (RT) is applied along different angles and the mean vector of each RT projection is obtained. Thereafter, the mean vector having the highest difference between its largest and smallest values is identified. The corresponding angle is perpendicular to the blur angle θ, and the width D of the centered valley of this mean vector is identified. For a fundus image with a resolution of M×M, the blur length L becomes equal to 2×M/D. Once the blur parameters are obtained, a Wiener deconvolution filter is applied to provide the deblurred fundus image.
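The angle-selection step above can be sketched in Python with NumPy/SciPy (our own simplified stand-in, not the authors' MATLAB code: rotating the log power spectrum and averaging columns substitutes for the Radon projections, and the function names are ours):

```python
import numpy as np
from scipy.ndimage import rotate, uniform_filter1d

def projection_spread(spec, angle):
    """Mean vector of the spectrum projected at `angle`, scored by the
    difference between its largest and smallest values, as in the text."""
    proj = rotate(spec, angle, reshape=False, order=1, mode='nearest').mean(axis=0)
    return proj.max() - proj.min()

def estimate_stripe_angle(image, angles=range(0, 180, 2)):
    """Return the projection angle with the highest spread; the blur
    direction is perpendicular to the spectral stripes found this way."""
    spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(image))))
    return max(angles, key=lambda ang: projection_spread(spec, ang))

# Demo: horizontal motion blur (theta = 0) on a noise image produces
# vertical stripes in the spectrum, so the winning angle should be
# near 0 (mod 180).
rng = np.random.default_rng(1)
blurred = uniform_filter1d(rng.standard_normal((64, 64)), size=9, axis=1)
a = estimate_stripe_angle(blurred)
# The blur length then follows from the width D of the central valley of
# the winning mean vector as L = 2*M/D, per the formula in the text.
```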

3. CLAHE equalization

In smartphone-captured fundus images, the retina and its background have similar contrasts, and the pixels appear with both high and low contrast. Thus, a regular histogram equalization is not adequate to rectify this contrast problem. Since the retina texture is not uniform and changes when moving away from the optic nerve head, the contrast is rectified by using sub-images. We have used the Contrast Limited Adaptive Histogram Equalization (CLAHE) described in [14], which operates on sub-images. This equalization is applied to the green component, which carries the highest contrast in a fundus image. Based on the average size of the retina in a smartphone-captured fundus image, the CLAHE equalization divides the image into M×N blocks or sub-images, and enhances the contrast in each block separately. Our experiments have shown that a block size of M=N=12 provides an effective contrast correction in smartphone-captured fundus images.

Furthermore, when the retina is not centered, some background structure may leak into the blocks or sub-images. Therefore, a clip limit parameter of 0.2 is imposed to avoid this effect in such blocks or sub-images. As a result, the CLAHE equalization allows distinguishing the retina area from the background area.
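The block-wise equalization can be sketched as follows (a deliberately simplified Python/NumPy version of CLAHE that omits the usual bilinear blending between neighbouring tiles; interpreting the clip limit 0.2 as a fraction of the tile's pixels per bin is our assumption, following MATLAB's convention):

```python
import numpy as np

def clahe_simplified(img, blocks=12, clip=0.2):
    """Clip-limited histogram equalization applied independently to each
    of blocks x blocks tiles; `img` is a float image in [0, 1], e.g. the
    green channel of a fundus image."""
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    bh, bw = h // blocks, w // blocks
    for i in range(blocks):
        for j in range(blocks):
            tile = img[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
            hist, _ = np.histogram(tile, bins=64, range=(0.0, 1.0))
            hist = hist.astype(float) / tile.size
            # clip the histogram and redistribute the excess uniformly
            excess = np.maximum(hist - clip, 0).sum()
            hist = np.minimum(hist, clip) + excess / len(hist)
            cdf = np.cumsum(hist)
            idx = np.minimum((tile * 64).astype(int), 63)
            out[i*bh:(i+1)*bh, j*bw:(j+1)*bw] = cdf[idx]
    return out

# Demo: a low-contrast noise image; block-wise equalization widens the
# intensity spread while the clip limit bounds the amplification.
rng = np.random.default_rng(0)
img = np.clip(0.5 + 0.02 * rng.standard_normal((144, 144)), 0.0, 1.0)
out = clahe_simplified(img)
```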


The developed pipeline was evaluated using the publicly available online database in [15]. This database consists of 14 fundus images captured by a smartphone via the D-EYE lens. The resolution of the images is 1200×1200. The image processing pipeline was applied to these smartphone-captured images on a desktop with the programming done in MATLAB. The desktop had an i5 processor running at 3.3 GHz with 8 GB of RAM.

Fig. 4(a) shows an example of a smartphone-captured fundus image via the D-EYE lens, while Fig. 4(b) shows its enhanced version after deconvolution. As shown in this figure, the blood vessels can be clearly seen, and the improvement in image quality allows the detection of lesions. The CLAHE-equalized image is shown in Fig. 4(c). Fig. 4(d) shows the circle generated by the circular Hough transform. Finally, Fig. 4(e) displays the final image, where the retina is separated from the background.

Fig 4. A smartphone-captured fundus image: (a) initial image, (b) deblurred and denoised image, (c) contrast balanced image, (d) binary mask, and (e) background removed.

The first experiment consisted of measuring the Euclidean distance D between the detected retina center position Pauto and the ground truth retina center Preal, and expressing this distance in terms of the fundus radius R, as proposed in [16]. The ground truth retina centers were identified by an ophthalmology specialist. The location ratios with D less than R, R/2 and R/4 were found to be 92.8%, 85.7% and 78.5%, respectively.
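For concreteness, the evaluation criterion can be expressed as a few lines of Python (the detections and ground-truth coordinates below are made-up numbers for illustration only, not the paper's data):

```python
import numpy as np

def location_ratio(pred, truth, R, t):
    """Fraction of images whose centre error D = ||Pauto - Preal|| is
    below the tolerance t*R, with R the fundus radius."""
    d = np.linalg.norm(np.asarray(pred) - np.asarray(truth), axis=1)
    return float(np.mean(d < t * R))

pred  = np.array([[610, 600], [600, 720], [600, 820]])   # hypothetical detections
truth = np.array([[600, 600], [600, 600], [600, 600]])   # hypothetical ground truth
R = 300
ratios = [location_ratio(pred, truth, R, t) for t in (1, 0.5, 0.25)]
# fraction of detections within each tolerance band, as reported in the text
```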

The second experiment involved evaluating the processing time of the pipeline on different smartphone-captured images through the D-EYE lens. The results of this experiment are displayed in Fig. 5. The average processing time per image was found to be 0.85 s, which makes the processing pipeline practical to use in eye examination.

Fig 5. Processing time of the developed image processing pipeline.


In this paper, a computationally efficient image processing pipeline has been developed to detect and enhance the retina area in fundus images that are captured by smartphones through a D-EYE lens. It is shown that it takes about 1 second to process a typical fundus image captured in this manner. This work enables retina examination to be performed easily by merely using a smartphone and a lens snapped onto the smartphone in front of its camera. This solution provides a cost-effective way to examine retina abnormalities such as diabetic retinopathy.



S. Kar, and S. Maity. Detection of neovascularization in retinal images using mutual information maximization. Computers and Electrical Engineering, 62:1–15, August 2017.


J. Tan, H. Fujita, S. Sivaprasad, S. Bhandary, A. Rao, K. Chua, and U. Acharya. Automated Segmentation of Exudates, Haemorrhages, Microaneurysms using Single Convolutional Neural Network. Information Sciences, 420:66-76, December 2017.


A. Floriano, Á. Santiago, O. Nieto, and C. Márquez. A machine learning approach to medical image classification: Detecting age-related macular degeneration in fundus images. Computers & Electrical Engineering, available online, November 2017.


J. Medhi and S. Dandapat. An effective fovea detection and automatic assessment of diabetic maculopathy in color fundus images. Computers in Biology and Medicine, 74:30–44, July 2016.


J. Cheng, J. Liu, Y. Xu, F. Yin, D. Wong, N. Tan, D. Tao, C. Cheng, T. Aung, and T. Wong. Superpixel Classification Based Optic Disc and Optic Cup Segmentation for Glaucoma Screening. IEEE Trans. on Medical Imaging, 32:1019-1032, June 2013.


S. Devi, K. Ramachandran, and A. Sharma. Retinal Vasculature Segmentation in Smartphone Ophthalmoscope Images. Proceedings of 7th WACBE World Congress on Bioengineering, 52:64-67, 2015.


M. Blanckenberg, C. Worst and C. Scheffer. Development of a Mobile Phone Based Ophthalmoscope for Telemedicine. Proceedings of the IEEE Engineering in Medicine and Biology Conference, Massachusetts, 5236-5239, 2011.


S. Wang, K. Jin, H. Lu, C. Cheng, J. Ye, and D. Qian. Human visual system-based fundus image quality assessment of portable fundus camera photographs. IEEE Trans. on Medical Imaging, 35:1046-1055, April 2016.


A. Russo, F. Morescalchi, C. Costagliola, L. Delcassi, and F. Semeraro. A Novel Device to Exploit the Smartphone Camera for Fundus Photography. Journal of Ophthalmology, Article ID 823139, 2015.


A. Russo, F. Morescalchi, C. Costagliola, L. Delcassi, and F. Semeraro. Comparison of Smartphone Ophthalmoscopy With Slit-Lamp Biomicroscopy for Grading Diabetic Retinopathy. American Journal of Ophthalmology, 159:360-364, February 2015.


M. Dobeš, L. Machala, and T. Fürst. Blurred image restoration: A fast method of finding the motion length and angle. Digital Signal Processing, 20:1677–1686, December 2010.


J. Cai, H. Ji, C. Liu, and Z. Shen. Blind motion deblurring using multiple images. Journal of Computational Physics, 228:5057–5071, August 2009.


A. Deshpande, and S. Patnaik. Single image motion deblurring: An accurate PSF estimation and ringing reduction. Optik, 125:3612–3618, July 2014.


H. Lidong, Z. Wei, W. Jun, and S. Zebin. Combination of contrast limited adaptive histogram equalization and discrete wavelet transform for image enhancement. IET Image Processing, 9:908-915, October 2015.


Smartphone-Captured Retinal Image Database, 2017.


M. Gegundez-Arias, D. Marin, J. Bravo, and A. Suero. Locating the fovea center position in digital fundus images using thresholding and feature extraction techniques. Computerized Medical Imaging and Graphics, 37:386-393, September 2013.