Journal of Multimedia Information System
Korea Multimedia Society
Section D

Brief Paper: Combining Object Detection and Hand Gesture Recognition for Automatic Lighting System Control

Giao N. Pham1,*, Phong H. Nguyen2, Ki-Ryong Kwon3
1Advanced Analytics Center, FPT Software Co., Ltd., Hanoi, Vietnam, giaopn@fsoft.com.vn
2Center of Machine Vision and Signal Analysis, University of Oulu, Finland, phong.nguyen@oulu.fi
3Dept. of IT Convergence and Application Eng., Pukyong National University, Busan, South Korea, krkwon@pknu.ac.kr
*Corresponding Author: Dr. Giao N. Pham, Advanced Analytics Center, FPT Software Co., Ltd., Hanoi, Vietnam, +84-(0)967711286, giaopn@fsoft.com.vn

© Copyright 2019 Korea Multimedia Society. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Dec 12, 2019; Revised: Dec 17, 2019; Accepted: Dec 19, 2019

Published Online: Dec 31, 2019

Abstract

Recently, smart lighting systems have combined sensors and lights. These systems turn lights on/off and adjust their brightness based on the motion of objects and the ambient brightness, and they are often deployed in buildings, rooms, garages, and parking lots. However, such systems are controlled by light sensors and motion sensors that measure the illumination of the environment and detect motion. In this paper, we propose an automatic lighting control system for buildings, rooms, and garages that uses only a single camera. The proposed system integrates digital image processing results, namely motion detection and hand gesture recognition, to switch and dim the lighting system. The experimental results show that the proposed system works well and can be considered for automatic lighting of such spaces.

Keywords: Light control system; light dimming; moving object detection; smart home system; dimming control

I. INTRODUCTION

In recent years, smart lighting systems have become increasingly common. LED lighting is widely used to replace fluorescent lamps, which have disadvantages such as high cost, instability, and difficulty of integration with other devices. However, general control systems are still operated manually or semi-automatically [1], [2], and the simplest lighting systems use a single light driver that only switches the lights on or off. In practice, these systems switch all lights together and cannot address each light individually. There is therefore a need for an intelligent control system that operates automatically under software control and that the user can reprogram when necessary.

Current smart lighting systems rely on motion sensors and light sensors. They switch the lights on/off and adjust their brightness based on the motion of objects and the ambient brightness [3], [4]. In the proposed system, we use only a single camera to replace the light sensors and motion sensors of previous systems. Our system automatically turns the lights on/off and dims them according to the position of the moving object [5], the object's hand gestures [6], and the ambient illumination, all obtained through video processing. Section II presents the proposed system, Section III describes the implementation and experimental results, and Section IV concludes the paper.

II. THE PROPOSED SYSTEM

The proposed system consists of a single camera, a computer, and an LED board comprising an LED array and an LED driver, as shown in Fig. 1. When an object appears in the camera zone, the camera sends the video sequence to the computer for video processing. The purpose of the video processing is to detect the position of moving objects, recognize hand gestures, and estimate the ambient brightness in order to switch the LEDs on/off and dim them.

Fig. 1. The proposed system.
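The paper does not detail the detection algorithm beyond citing [5]; as an illustration only, the following Python sketch uses OpenCV background subtraction (MOG2) to obtain the centroids of moving objects in each frame. The function names, noise filter, and minimum blob area are our assumptions, not the authors' implementation.

# Illustrative sketch of the moving-object detection step (assumed approach:
# background subtraction with MOG2; the paper itself only cites [5]).
import cv2

def detect_moving_objects(frame, bg_subtractor, min_area=500):
    """Return the centroids (x, y) of moving objects in one video frame."""
    fg_mask = bg_subtractor.apply(frame)                  # foreground mask
    fg_mask = cv2.medianBlur(fg_mask, 5)                  # suppress sensor noise
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) < min_area:                 # ignore small blobs
            continue
        m = cv2.moments(c)
        centroids.append((int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])))
    return centroids

# Example usage with the single camera of Fig. 1:
# cap = cv2.VideoCapture(0)
# subtractor = cv2.createBackgroundSubtractorMOG2()
# ok, frame = cap.read()
# print(detect_moving_objects(frame, subtractor))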

Our system operates on the computer and on the LED board. On the computer, the camera feed is processed to detect moving objects and recognize hand gestures. The system then finds the LEDs corresponding to the position of the moving object and its hand gesture, and calculates the LED brightness from the brightness of the received images. The positions and brightness values of the LEDs are transferred to the LED board. On the LED board, the data received from the computer is first checked; if it is not empty, it is decoded to determine the brightness and position of the LEDs. Finally, a control signal is sent to the de-multiplexer to dim the corresponding LEDs.
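The paper does not specify the LED-block layout or the serial data format; the sketch below shows one way the computer-side steps above could be wired together. The 4x4 grid, the dim-level mapping, the one-byte-checksum packet, and the port name are all illustrative assumptions, and pyserial stands in for the USB-RS232 link.

# Hypothetical computer-side control step: map an object centroid to an LED
# block, derive a dimming level from the image brightness, and send both to
# the LED board over the serial link.
import serial  # pyserial

GRID_COLS, GRID_ROWS = 4, 4            # assumed LED-block layout
FRAME_W, FRAME_H = 640, 480            # assumed camera resolution

def led_index(cx, cy):
    """Map an object centroid to the index of the LED block above it."""
    col = min(cx * GRID_COLS // FRAME_W, GRID_COLS - 1)
    row = min(cy * GRID_ROWS // FRAME_H, GRID_ROWS - 1)
    return row * GRID_COLS + col

def dim_level(mean_gray, levels=8):
    """Darker surroundings -> higher dimming level (assumed mapping)."""
    return levels - 1 - int(mean_gray * levels / 256)

def send_command(port, index, level):
    """Encode (LED index, dimming level) and transfer it to the LED board."""
    packet = bytes([0xAA, index, level, (index + level) & 0xFF])  # header, payload, checksum
    port.write(packet)

# Example usage (assumed port name and baud rate):
# port = serial.Serial("/dev/ttyUSB0", 9600)
# send_command(port, led_index(320, 240), dim_level(60))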

III. IMPLEMENTATION AND EXPERIMENTAL RESULTS

As described in Section II, the system was implemented and tested on a computer and an LED board. We designed and implemented the LED board as shown in Fig. 2. The hardware structure of the LED board consists of five parts: the LED array (1), the de-multiplexer (2), the AVR microcontroller (3), the power supply (4), and the USB-RS232 transceiver (5).

Fig. 2. LED board implementation.
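The firmware itself is not given in the paper; continuing the packet layout assumed above, the board-side check-and-decode step described in Section II can be sketched on the host as follows (the real code would run in C on the AVR microcontroller and drive the de-multiplexer).

# Hypothetical mirror of the board-side decoding: reject empty or corrupted
# packets, otherwise return the LED position and brightness level that would
# be forwarded to the de-multiplexer.
def decode_command(packet: bytes):
    """Return (led_index, dim_level) for a valid packet, or None."""
    if len(packet) != 4 or packet[0] != 0xAA:        # empty or malformed data
        return None
    index, level, checksum = packet[1], packet[2], packet[3]
    if (index + level) & 0xFF != checksum:           # corrupted in transit
        return None
    return index, level

assert decode_command(bytes([0xAA, 5, 3, 8])) == (5, 3)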

We propose a test scenario for the system as shown in Fig. 3. The experimental environment is a room or a classroom. The camera is connected to the computer, and the computer interfaces with the LED board through the USB-RS232 transceiver. The LED board is divided into blocks. When a person enters the room, the system continuously detects his motion and recognizes his hand gestures to control the LEDs. First, the person enters the room without making any hand gesture; the system only detects his motion and turns on and dims the LEDs following his position. When he then makes a hand gesture, the system recognizes it and dims the LEDs accordingly. The dimming level of each LED is set from the brightness of the received image: when the person moves to a position with low ambient brightness, the LED is driven at a higher dimming level, and vice versa. We also experimented with two people to show that the system works well with multiple moving objects. The experimental results are shown in Fig. 4.
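The gesture recognizer follows [6] and is not detailed in the paper; purely as an illustration of how a gesture could select a dimming command, the sketch below counts extended fingers from a skin-color mask using convexity defects. The HSV skin range and defect-depth threshold are assumptions and would need tuning for a real room.

# Illustrative hand-gesture step: count extended fingers in a frame; the count
# could then be mapped to a dimming command for the block the person stands in,
# e.g. send_command(port, led_index(cx, cy), count_fingers(frame)).
import cv2
import numpy as np

def count_fingers(frame_bgr):
    """Rough finger count from the largest skin-colored contour, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)         # assumed skin range
    upper = np.array([20, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    if hull is None or len(hull) < 4:
        return 0
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Deep convexity defects roughly correspond to the gaps between fingers.
    deep = sum(1 for i in range(defects.shape[0]) if defects[i, 0, 3] > 10000)
    return deep + 1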

Fig. 3. The scenario of the proposed system.
Fig. 4. Experimental results: (a) with a single object and (b) with multiple objects.

IV. CONCLUSION

In this paper, we designed and tested an automatic lighting control system that uses a single camera and is based on moving object detection, hand gesture recognition, and the brightness of the received image. The system was implemented and evaluated on an LED board. In future work, we will implement and test the system with a larger LED array, evaluate its energy consumption, and compare it with other current systems.

ACKNOWLEDGEMENT

This paper was supported by the AI Committee and the Advanced Analytics Center, FPT Software Co., Ltd., Hanoi, Vietnam.

REFERENCES

[1] J. Dongying and W. Wei, "The Intelligent System for LED Lighting Based on STCMCU," in Proceedings of the International Conference on Computer and Communication Technologies in Agriculture Engineering, Chengdu, pp. 445-447, June 2010.

[2] R. Singh, A. Bhardwaj, Y. Chauhan, and R. Jain, "PWM Strategies in 32-Bit Microcontroller for Interior White LED Down Panel," International Journal of Computer Applications, vol. 58, no. 22, pp. 25-32, Nov. 2012.

[3] I. Hong, J. Byun, and S. Park, "Intelligent LED Lighting System with Route Prediction Algorithm for Parking Garage," in Proceedings of the 1st International Conference on Intelligent Systems and Applications, Chamonix, pp. 54-59, May 2012.

[4] Z. Hwang, Y. Uhm, Y. Kim, G. Kim, and S. Park, "Development of LED Smart Switch with Light-weight Middleware for Location-aware Services in Smart Home," IEEE Transactions on Consumer Electronics, vol. 56, no. 3, pp. 1395-1402, Aug. 2010.

[5] J. Heikkila and O. Silven, "A Real-Time System for Monitoring of Cyclists and Pedestrians," in Proceedings of the 2nd IEEE Workshop on Visual Surveillance, Lausanne, pp. 74-81, June 1999.

[6] I. Steinberg, T.-M. London, and D. Castro, "Hand Gesture Recognition in Images and Video," Center for Communication and Information Technology, March 2010.