Section C

Real-time Implementation of Character Movement by Floating Hologram based on Depth Video

Kyoo-jin Oh1, Soon-kak Kwon2,*
Author Information & Copyright
1Department of Computer Software Engineering, Dongeui University, Busan, Korea, okj6012@gmail.com
2Department of Computer Software Engineering, Dongeui University, Busan, Korea, skkwon@deu.ac.kr
*Corresponding Author: Soon-kak Kwon, (47340) Eomgang-ro 176, Busanjin-gu, Busan, Korea, +82-51-890-1727, skkwon@deu.ac.kr.

© Copyright 2017 Korea Multimedia Society. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Dec 07, 2017 ; Revised: Dec 17, 2017 ; Accepted: Dec 18, 2017

Published Online: Dec 31, 2017

Abstract

In this paper, we implement character content with a floating hologram. The floating hologram is one of the hologram techniques; it projects a 2D image onto a glass panel so that the image appears as a 3D image in the air. The floating hologram technique is easy to apply and is used in exhibitions, corporate events, and advertising events. This paper uses both depth information and the Unreal Engine to produce the floating hologram. Simulation results show that this method can make the character content follow the movement of the user in real time by capturing depth video.

Keywords: Floating hologram; Depth information; Character movement

I. INTRODUCTION

In recent years, the consumption pattern of media contents has been changing from 2D contents to 3D contents [1]. Media contents are evolving to provide satisfaction and excitement through convergence with various technologies such as virtual reality, augmented reality, and hologram technology. Through this convergence, media contents are being created that let users experience a virtual reality similar to the real world [2].

The hologram technique records and reproduces a 3D stereoscopic image identical to the actual 3D object using holography, the interference effect of light caused by two laser beams [3]. The hologram technique can enhance realism and immersion by providing a realistic 3D stereoscopic view. The floating hologram is one of the hologram techniques; it projects a 2D image onto a glass panel to represent a 3D image in the air. The floating hologram is used in exhibitions, corporate events, advertising events, and so on.

In this paper, we propose a method for creating character content, one kind of media content, using the floating hologram technique. We capture depth video using a depth camera, obtain the motion of the user from the captured depth video, and connect the motion to a 3D character. Then, we output the 3D character through a floating hologram device.

We explain the related research in Chapter 2 and describe the implementation of the character floating hologram in Chapter 3. Then we discuss the expectations and conclusions.

II. RELATED RESEARCH

2.1. Floating hologram

The term ‘hologram’ is a combination of ‘holo’, which means perfect or whole, and ‘gram’, which means message or information. The hologram technique records a 3D stereoscopic image using holography, the interference effect of light caused by two laser beams. Holography was discovered in 1947 by the scientist Dennis Gabor, who recorded an interference pattern using electron beams and replayed it through diffraction of light [4-6]. Holograms can be classified into 3D holograms and floating holograms. The floating hologram projects the hologram through a transparent panel, using either one panel or four panels. The one-panel method installs a transparent panel at 45° as shown in Figure 1. The hologram is projected onto the transparent panel, and when the viewer sees the hologram reflected against the back of the screen, it seems to float in the air. The four-panel method installs four transparent panels at 45° as shown in Figure 2 [7-8]. The hologram appears in the air at the center of the panels, as shown in Figure 2.
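The 45° geometry can be verified with a little vector algebra: the floating image is simply the mirror reflection of each display pixel across the panel plane. The sketch below is our own illustration (the coordinate convention is an assumption, not taken from the paper): it reflects a point on a horizontal display across a 45° panel and shows that the virtual image lands behind the glass at panel height.

```python
import math

def reflect(p, n):
    """Mirror point p across a plane through the origin with unit normal n:
    p' = p - 2 (p . n) n"""
    d = sum(pi * ni for pi, ni in zip(p, n))
    return tuple(pi - 2.0 * d * ni for pi, ni in zip(p, n))

# Panel tilted 45 degrees through the origin; +y points toward the viewer,
# +z points up, so the unit panel normal lies halfway between +y and -z.
s = 1.0 / math.sqrt(2.0)
normal = (0.0, s, -s)

# A pixel on a horizontal display 0.2 m below the panel center...
pixel = (0.0, 0.0, -0.2)
virtual = reflect(pixel, normal)
print(virtual)  # approximately (0, -0.2, 0)
```

The reflected point sits 0.2 m behind the glass at the height of the panel center, which is why the viewer perceives the picture floating in the air rather than lying on the screen.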

Fig. 1. The floating hologram using one transparent panel.
Fig. 2. The floating hologram using 4 transparent panels.
2.2. Kinect

The depth information can be obtained using the Kinect. The Kinect was developed to detect the user's movement for the Xbox 360 home video game console. The Kinect consists of an RGB camera, an IR emitter, an IR depth sensor, a four-microphone array, and a tilt motor that moves the sensor up and down. These sensors provide the color view captured by the RGB camera, the depth view showing the depth information of the captured scene, and the skeleton view, which is information about the human body structure.

2.3. Unreal Engine

The Unreal Engine is an integrated game engine that provides an overall game development environment. Its benefits include outstanding computer graphics technology, steady updates, technical support, networking among Unreal developers, and excellent development tools. The greatest benefit of the Unreal Engine is its flexible configuration, which gives it a good structure for combining and extending various technologies. In this paper, we use the Unreal Engine to connect to the Kinect.

2.4. Skeleton data

The Kinect tracks 3D depth information and detects the motion of a human. The 3D skeleton information is processed into linear data through the Kinect's gesture recognition algorithm in order to control 3D content. The Kinect recognizes the main features of a person and tracks them to sense body movements. These main features are divided into 20 parts representing the skeleton. Figure 3 shows the 20 main bones represented in the skeleton information [9].
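As a rough illustration of this skeleton representation, the 20 joint names below follow the Kinect (v1) SDK, consistent with the 20 parts shown in Fig. 3; the flattening into "linear data" shown here is our own simplification, not the SDK's internal format.

```python
# The 20 joints of the Kinect (v1) skeleton, in SDK order.
KINECT_JOINTS = [
    "HipCenter", "Spine", "ShoulderCenter", "Head",
    "ShoulderLeft", "ElbowLeft", "WristLeft", "HandLeft",
    "ShoulderRight", "ElbowRight", "WristRight", "HandRight",
    "HipLeft", "KneeLeft", "AnkleLeft", "FootLeft",
    "HipRight", "KneeRight", "AnkleRight", "FootRight",
]

def to_linear(skeleton):
    """Flatten a {joint: (x, y, z)} frame into one 60-value vector."""
    return [c for name in KINECT_JOINTS for c in skeleton[name]]

# A dummy frame with every joint at the origin:
frame = {name: (0.0, 0.0, 0.0) for name in KINECT_JOINTS}
vec = to_linear(frame)
print(len(vec))  # 60 values: 20 joints x 3 coordinates
```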

Fig. 3. Skeleton data of the human body.
2.5. Hologram Devices

Typical current hologram devices are HAKO VISION developed by Bandai, the 3D Holocube, and Become Iron Man developed by Marvel. Table 1 summarizes these devices.

Table 1. Hologram devices.
Device          | Characteristics
HAKO VISION     | Mounted on a smartphone; plays only pre-created holograms
3D Holocube     | Visible from various directions; plays only pre-created holograms
Become Iron Man | Hologram follows the movement of a person; difficult to use
Fig. 4. Bandai's HAKO VISION.
Fig. 5. 3D Holocube.
Fig. 6. Marvel's Become Iron Man.

III. IMPLEMENTATION OF CHARACTER FLOATING HOLOGRAM

3.1. Implementation of character floating hologram

We implement a floating hologram that moves in real time using depth information and the Unreal Engine. We implement the character hologram with the same structure as a real human. The motion of the human is recognized from the depth information captured by the Kinect. Then, we connect the motion information to the character hologram. The hologram is projected onto the panel so that the user can view the motion of the character hologram in real time.

The hologram is projected onto the floating hologram device as shown in Figure 7. The floating hologram device consists of a display, frames, and four transparent panels. The display can be a 16:9 monitor or a tablet PC display. The frames consist of a bracket and four columns and are made of plastic. The panels are made of acrylic and are placed in the form of a pyramid.
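One common way to drive such a four-panel pyramid from a single 16:9 display is to render the character from four directions and place the four views around the center of the screen, one facing each panel. The helper below is a hypothetical illustration (the viewport names, sizes, and layout are our assumptions, not taken from the paper):

```python
def pyramid_viewports(width, height, view_size):
    """Top-left corners of four square viewports arranged around the display
    center, one per pyramid face (screen coordinates, origin at top-left)."""
    cx, cy = width // 2, height // 2
    half = view_size // 2
    return {
        "front": (cx - half, cy + half),              # below center
        "back":  (cx - half, cy - half - view_size),  # above center
        "left":  (cx - half - view_size, cy - half),  # left of center
        "right": (cx + half, cy - half),              # right of center
    }

# 300x300 views on a 1920x1080 (16:9) display:
vp = pyramid_viewports(1920, 1080, 300)
print(vp["front"])  # (810, 690)
```

Each view, reflected by its 45° panel, then appears to meet the others at the center of the pyramid, which is why the character can be seen from all four sides.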

Fig. 7. Floating hologram device.

First, the system checks whether the Kinect is connected. The Unreal Engine creates a character model that has the human structure. The Kinect captures the depth information and analyzes it. When a person is detected in the depth information, the skeleton information is extracted from it. The skeleton is divided into the head, the shoulders, the arms, the legs, and so on, and each part is connected to the character model. The character model is converted into the floating hologram by the Unreal Engine, and the hologram is projected onto the floating hologram device. The flowchart of this processing is shown in Figure 8.
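This processing flow can be sketched as a small loop. All calls below are hypothetical stand-ins for the Kinect SDK and Unreal Engine APIs (the actual implementation uses those frameworks, not these names); the stubs at the bottom exist only so the sketch runs end to end.

```python
def run_pipeline(kinect, engine):
    """One pass of the processing flow: check connection, build the character,
    capture depth, extract the skeleton, connect parts, project the hologram."""
    if not kinect.is_connected():              # 1. check the Kinect connection
        return None
    character = engine.create_character()      # 2. human-structured character model
    depth = kinect.capture_depth()             # 3. capture a depth frame
    skeleton = kinect.extract_skeleton(depth)  # 4. None unless a person is detected
    if skeleton is None:
        return None
    for part in ("head", "shoulder", "arm", "leg"):
        character[part] = skeleton.get(part)   # 5. connect each part to the model
    engine.project(character)                  # 6. render as the floating hologram
    return character

# Minimal stubs so the sketch is executable:
class FakeKinect:
    def is_connected(self): return True
    def capture_depth(self): return "depth-frame"
    def extract_skeleton(self, depth):
        return {"head": 1, "shoulder": 2, "arm": 3, "leg": 4}

class FakeEngine:
    def create_character(self): return {}
    def project(self, character): pass

result = run_pipeline(FakeKinect(), FakeEngine())
print(result)  # {'head': 1, 'shoulder': 2, 'arm': 3, 'leg': 4}
```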

Fig. 8. Flowchart of the character floating hologram.

In order to detect the motion of the person, the direction and rotation information of the shoulder and the arm is extracted from the skeleton information and connected to the character model, as shown in Figure 9.
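The direction of a bone such as the upper arm can be derived from the positions of two adjacent joints. The sketch below (our own simplification; the paper's implementation uses the Unreal Engine's rotation types) converts a parent-to-child joint vector into yaw and pitch angles that could drive a character bone:

```python
import math

def bone_rotation(parent, child):
    """Yaw and pitch (degrees) of the bone pointing from joint `parent`
    to joint `child`, where each joint is an (x, y, z) position with
    x right, y up, z forward (an assumed convention)."""
    dx, dy, dz = (c - p for c, p in zip(child, parent))
    yaw = math.degrees(math.atan2(dx, dz))                 # about the vertical axis
    pitch = math.degrees(math.atan2(dy, math.hypot(dx, dz)))  # up/down elevation
    return yaw, pitch

# Shoulder at the origin, elbow 0.3 m to the right: the upper arm
# points sideways (yaw ~ 90 deg) and level (pitch ~ 0 deg).
yaw, pitch = bone_rotation((0.0, 0.0, 0.0), (0.3, 0.0, 0.0))
print(yaw, pitch)
```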

Fig. 9. Connection between person motion and character model.
3.2. Simulation Results

We use Microsoft's Kinect v2. The resolution of the depth information captured by the Kinect v2 is 512×424. We also use Unreal Engine version 4.17.2. We place the Kinect at a height of 1.9 m, tilted −20° in the vertical direction, to recognize the whole body of a person, as shown in Fig. 10.
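A quick geometric check shows why this placement covers the whole body. Assuming the commonly quoted ~60° vertical field of view of the Kinect v2 depth sensor (a figure from Microsoft's sensor specifications, not from the paper), the lowest ray of the tilted view reaches the floor about 1.6 m in front of the camera, so a person standing beyond that distance is captured from head to feet:

```python
import math

MOUNT_HEIGHT = 1.9  # m, Kinect mounting height (from the paper)
TILT = -20.0        # degrees, downward tilt (from the paper)
HALF_VFOV = 30.0    # half of the ~60-degree vertical depth FOV (assumed spec)

# The lowest ray of the view points (|TILT| + HALF_VFOV) degrees below horizontal;
# it hits the floor where height / tan(angle) meters in front of the camera.
low_ray_deg = abs(TILT) + HALF_VFOV  # 50 degrees below horizontal
feet_visible_from = MOUNT_HEIGHT / math.tan(math.radians(low_ray_deg))
print(round(feet_visible_from, 2))  # 1.59 m
```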

Fig. 10. Simulation environment.

Figure 11 shows the skeleton information of a captured person and the hologram image created by the Unreal Engine. When the person moves, the character model also moves in real time.

Fig. 11. Skeleton information and hologram image.

Figure 12 shows the hologram projected onto the floating hologram device. The hologram can be viewed from every direction. Because the hologram is projected onto four panels, it looks as if the hologram is projected directly in the air.

Fig. 12. Character floating hologram.

IV. CONCLUSION

In this paper, we implement a floating hologram using depth information. The method extracts the skeleton information from the depth information, recognizes the motion information, and connects the motion information to the character model using the Unreal Engine. Previous floating hologram contents were delivered in only one direction, from the creator to the user. In contrast, the proposed method provides the content interactively. We expect the proposed method to broaden the use of the floating hologram.

Acknowledgement

This research was supported by The Leading Human Resource Training Program of Regional New Industry through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (No. 2017045434).

REFERENCES

[1] H. Jung, S. Jo, D. Sin, and J. Kim, "Implementation of Real-time Floating Holograms by Kinect," Proceedings of KIIT Summer Conference, pp. 252-253, 2016.

[2] M. Choi, D. Jeong, H. Jang, B. Kim, H. Jung, and H. Ko, "Development of Interactive Character Hologram Product, 'Moment'," Journal of Korea Multimedia Society, Vol. 20, No. 2, pp. 440-443, 2017.

[3] Korea Creative Contents Agency, "Recent Trends and Examples of 3D Hologram Technology," 2014.

[4] N. Kim and Y. Lim, "Recent Research Trend of Hologram Fusion Industry Technology," The Journal of The Korean Institute of Communication Sciences, Vol. 34, No. 2, pp. 35-41, 2017.

[5] D. Gabor, "A New Microscopic Principle," Nature, Vol. 161, pp. 777-778, 1948.

[6] D. Gabor, "Microscopy by Reconstructed Wave-fronts," Proceedings of the Royal Society, Vol. 197, pp. 454-487, 1949.

[7] R. Thange, P. Sugdare, V. Kaudare, and V. Jain, "Interactive Holograms using Pepper Ghost Pyramid," International Journal for Scientific Research & Development, Vol. 4, No. 1, pp. 1221-1224, 2016.

[8] J. Ryu and J. Lee, "A Study on The Design of Stage and Exhibition Space by Using Pseudo-hologram," Journal of Architectural Institute of Korea, Vol. 36, No. 2, pp. 1108-1109, 2016.

[9] X. Tong, P. Xu, and X. Yan, "Research on Skeleton Animation Motion Data Based on Kinect," Proceedings of the Fifth International Symposium on Computational Intelligence and Design, pp. 347-350, 2013.