Journal of Multimedia Information System
Korea Multimedia Society
Section A

Enriching Natural Monument with User-Generated Mobile Augmented Reality Mashup

Choonsung Shin1, Sung-Hee Hong2, Hyoseok Yoon3,*
1Graduate School of Culture, Chonnam National University, Gwangju, Korea, cshin@jnu.ac.kr
2Hologram Research Center, Korea Electronics Technology Institute, Seoul, Korea, shhong@keti.re.kr
3Division of Computer Engineering, Hanshin University, Osan-si, Gyeonggi-do, Korea, hyoon@hs.ac.kr
*Corresponding Author: Hyoseok Yoon, 137 Hanshindae-gil, Osan-si, Gyeonggi-do, 18101 Korea, +82-31-379-0645, hyoon@hs.ac.kr.

© Copyright 2020 Korea Multimedia Society. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Feb 25, 2020; Revised: Mar 02, 2020; Accepted: Mar 03, 2020

Published Online: Mar 31, 2020

Abstract

This paper proposes a mobile augmented reality mashup for cultural heritage sites such as natural monuments. Mobile augmented reality offers several benefits that make it well suited to preserving and protecting cultural heritage sites. By presenting mobile augmented reality mashup scenarios and a mobile mashup framework, we show how user-generated multimedia contents can be added. We present two scenarios: Mashup Maker and Mashup Viewer. In Mashup Maker mode, visitors can create new AR contents using mashup tools for memos, Twitter, images, and statistical graphs. In Mashup Viewer mode, other visitors can view the user-generated multimedia AR contents using QR codes as access points. To show the feasibility of our approach on mobile platforms, we compare several detection algorithms on PC and mobile platforms and report on the deployment of our approach at a natural monument museum. With our proposed mashup tools, visitors to cultural heritage sites can enjoy the default AR contents provided by the site administrators and also participate as active content producers and consumers.

Keywords: Cultural heritage; Mobile augmented reality; Mobile mashup; Natural monument

I. INTRODUCTION

Cultural Heritage (CH) sites such as archaeological remains, natural monuments and museums have multiple dimensions of historical, cultural, natural and academic value. Since CH sites should be preserved and protected, many studies have adopted augmented reality (AR) to supplement CH sites in non-invasive ways. Dieck and Jung identified economic, experiential, social, epistemic, cultural, historical and educational values of AR at CH sites [1]. They considered AR a promising technology that can preserve CH and enhance visitors’ satisfaction while strengthening visitors’ learning experience [1]. There are many examples of employing AR at CH sites [2][3][4][5][6][7]. However, even when AR contents are used, visitors are often offered only limited on-site explanations based on pre-defined static information about various assets. In this paper, we propose a mobile AR mashup to enrich AR contents with user-generated multimedia at CH sites such as natural monuments.

Our contributions in this paper are as follows.

  • We propose an extended framework for mobile AR mashup using various recognition/tracking algorithms, mashup content management and mashup UI.

  • We develop an AR mashup content generation method for end-users.

  • We demonstrate the feasibility of our approach by identifying a proper configuration for the mobile environment and deploying it at a real natural monument museum.

II. RELATED WORK

2.1. Augmented reality for cultural heritage

The preservation of CH sites including museums, galleries and archaeological remains is important for studying past cultures and civilizations. For this reason, AR technology has been applied to CH sites to preserve and protect them. Dieck and Jung used a stakeholder approach to explore the perceived value of AR implementation within the museum context [1]. They identified enhanced user satisfaction and user experience when AR is used at CH sites [1]. Pedersen et al. developed the TombSeer software prototype to support embodied interaction for visitors to the Egyptian Tomb of Kitines replica exhibit at the Royal Ontario Museum [2]. This prototype employed a head-mounted display (HMD) to present 3D interactive holographic images [2]. Martínez et al. presented TinajAR, a multi-marker video-based AR edutainment application for showing virtual ceramic pieces and explaining the pottery process through virtual avatars [3]. Boboc et al. proposed a mobile AR application that contains historical information related to the Roman poet Ovid [4]. Guimarães et al. applied AR technology as a form of digital media art to the Caloust Gulbenkian Foundation Garden in Lisbon, Portugal [5]. Voinea et al. developed an AR application to visualize and explore a 3D model of a digital replica of a recognized UNESCO monument [6]. Olesky and Wnuk developed an AR application to display historical photos in the former Jewish district in Warsaw, Poland [7]. They found that such an AR application helps facilitate positive attitudes towards a place and enhances multicultural place meaning [7].

2.2. Mobile mashup

Unlike traditional data mashup [8][9], AR technologies have been adopted in mashup tools to create and author user-generated and user-participated contents in the real world. Shin et al. developed a general framework for mobile AR mashup that consists of object tracking, context management, content management, visualization and interaction components [10]. Shin et al. also developed an RGB-D SLAM-based social spatial mashup to create a 3D feature map and link information to the 3D space [11]. Yoon and Woo introduced the concept of context-aware mobile augmented reality (CAMAR) mashup [12]. They later defined and solidified the concept of in-situ AR mashup as “seamlessly combining additional contextual information to a real-world object to enrich content in one or more senses, where mashup process and its outcome are enhanced with context awareness and visualized with augmented reality for intuitive UI/UX” [13].

Langlotz et al. developed an in-situ authoring solution to create 3D content using 3D primitives and 2D annotations [14]. Seo et al. proposed the concept of “webizing” AR mashup to connect legacy things with existing Web services [15]. Meawad presented InterAKT, a solution to enhance AR browsing in the real world with crowdsourced geo-social content [16].

2.3. Comparison to our approach

Previous AR systems can be categorized into fiducial marker-based and feature point-based (i.e., non-fiducial marker-based) AR systems. Feature point-based AR systems require pre-modelling of the real space to provide recognition and tracking capabilities. For a small space, feature point-based AR systems (e.g., ARCore, ARKit, Vuforia) can be applied to various applications. However, their object recognition and tracking performance is not suitable for a guidance application covering a large indoor space. On the other hand, since fiducial marker-based AR systems have been developed for a long time, they are fast and stable in terms of object recognition and tracking performance. However, these systems require specially designed frame-based markers, such as Vuforia’s VuMark, which can be visually distracting and incompatible across systems. Compared to previous approaches, we use widely accepted QR codes to recognize and track objects of interest. Our approach scales to many objects in a large indoor space. Furthermore, we can easily identify objects of interest and author/mashup user-generated multimedia contents with associated QR code identifiers in a mobile computing environment (e.g., smartphones and tablets).

III. MOBILE AUGMENTED REALITY MASHUP SCENARIOS

We first introduce our mobile AR mashup scenarios for natural monuments. As shown in Figure 1, our mobile AR mashup supports end-user mashup through the Mashup Maker and Mashup Viewer scenarios. The first scenario is Mashup Maker mode, which allows users to link web contents to objects in the real world. For this purpose, Mashup Maker mode is composed of target object selection, multimedia content category selection, multimedia content adjustment/modification, and mashup confirmation steps. The output of Mashup Maker mode is user-generated mobile AR content, which is stored in a cloud database. For example, consider a visitor at a natural monument such as an Asiatic black bear habitat. In the target object selection step, the visitor can select a QR code on a tree in the real world. Then the visitor can search for an appropriate multimedia content category in the multimedia content category selection step using various Web services. In the multimedia content adjustment and modification step, the visitor can select recently taken photos of Asiatic black bears to create new mobile AR content anchored to the QR code on the tree, which is then stored in the cloud database.

Fig. 1. Mashup Maker and Mashup Viewer scenarios.

The second scenario is Mashup Viewer mode, in which saved mobile AR mashup content is retrieved for further guidance and sharing at CH sites. For example, other visitors can view the same QR code on the tree with their mobile devices. Then, user-generated multimedia AR contents such as photos, videos and handwritten memos from other visitors are retrieved from the cloud database and presented as additional, enriched AR contents to consume.

IV. FRAMEWORK FOR MOBILE AR MASHUP

A general framework for mobile AR mashup was previously designed to include object tracking, context management, visualization, interaction and content management components [10]. Similar concepts of mobile AR mashup [11][12][13] and in-situ authoring [14] have also been introduced and developed. Extending this general framework as shown in Figure 2, we integrate object tracking and recognition modules covering various QR code-based detection algorithms. Furthermore, we present implementation details on the representational elements of the created mobile AR mashup contents.

Fig. 2. General framework for mobile AR mashup.
4.1. Elements of mobile AR mashup contents

We define the elements that constitute mobile AR mashup contents in detail. The basic elements are the mashup target object (MTO), mashup target contents (MTC), and the resulting mashup AR contents (MARC). An MTO represents an object targeted for mashup that is physically located in the real world. In our scenarios, we use QR codes attached to real-world objects as mashup targets. A real-world object has multiple attributes, including object name, keyword, corresponding QR code ID, location, and description. The MTC represent various types of multimedia information available from Web services and local devices. Each MTC is represented with attributes including multimedia category, time, title, raw data and its data type. The MARC are a set of collocated information stored in a cloud database that links to an MTO in the real world. Each resulting MARC includes a QR code ID referencing an MTO, a timestamp, MARC author information and the MTC to display when the QR code is recognized.
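For illustration, the following Python sketch models these elements as plain data classes; the field names follow the attributes listed above, while the exact types and serialization used in our implementation may differ.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class MashupTargetObject:          # MTO: a physical object tagged with a QR code
    qr_code_id: str                # identifier encoded in the attached QR code
    name: str
    keyword: str
    location: str
    description: str


@dataclass
class MashupTargetContent:         # MTC: a piece of multimedia from the Web or the device
    category: str                  # e.g., "memo", "twitter", "photo", "statistics"
    time: str                      # creation time of the content
    title: str
    data_type: str                 # e.g., "text", "image/jpeg"
    raw_data: bytes = b""


@dataclass
class MashupARContent:             # MARC: the mashup record stored in the cloud database
    qr_code_id: str                # references the MTO this content is anchored to
    timestamp: str
    author: str                    # may be "anonymous" if the visitor chooses so
    contents: List[MashupTargetContent] = field(default_factory=list)
```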

4.2. Mashup content management

The resulting mashup content is stored in a cloud database. On the cloud database, different pieces of information are distributed across visitor, exhibition, and mashup content tables and their corresponding manager components, as shown in Figure 3. The information is divided into two parts: basic information tables and mashup content tables. The basic information tables include a visitor table that describes visitor profiles and an exhibition table that describes exhibition items in a museum. The mashup content tables include memo, SNS, photo and visit history tables. These content tables are connected to the basic information tables and to the raw data.

Fig. 3. Mashup AR content information.
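A minimal sketch of how such tables could be laid out is shown below, using Python with SQLite purely for illustration; the deployed system uses an SQL database behind PHP scripts (Section 5.1), so the actual table and column names may differ.

```python
import sqlite3

conn = sqlite3.connect("mashup.db")              # stand-in for the cloud SQL database
cur = conn.cursor()

cur.executescript("""
-- Basic information tables: visitor profiles and exhibition items.
CREATE TABLE IF NOT EXISTS visitor (
    visitor_id  INTEGER PRIMARY KEY,
    name        TEXT,
    anonymous   INTEGER DEFAULT 1                -- 1: publish contents anonymously
);
CREATE TABLE IF NOT EXISTS exhibition (
    qr_code_id  TEXT PRIMARY KEY,                -- identifier encoded in the QR code
    name        TEXT,
    description TEXT
);
-- Mashup content tables: memo, SNS, photo, and visit history records
-- linked to the basic information tables and to the raw data.
CREATE TABLE IF NOT EXISTS mashup_content (
    content_id  INTEGER PRIMARY KEY,
    qr_code_id  TEXT REFERENCES exhibition(qr_code_id),
    visitor_id  INTEGER REFERENCES visitor(visitor_id),
    category    TEXT,                            -- 'memo', 'sns', 'photo', 'visit'
    created_at  TEXT,
    raw_data    BLOB
);
""")
conn.commit()
```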

Even though visitors as producers have explicitly chosen to participate in our AR application by authoring and sharing new AR contents, we share the concern for protecting visitors’ personal information. To address this issue, the AR application collects visitors’ information only through their accounts in our application, and this information is never shared. When a visitor creates a new mashup content, we provide options to publish new AR contents either publicly with creator information or anonymously. Only when the visitor chooses to reveal the creator’s information is it made public, under the user’s explicit consent. Since our application is only deployed to a site with a small number of visitors, we acknowledge that the topic of protecting visitors’ information deserves further study.

4.3. QR code recognition and tracking

Our mobile AR mashup tool mainly uses QR codes that contain identification information about objects in the real world. In our work, we use QR codes to extract identification and location information. As shown in Figure 4, a QR code is a 2D image pattern containing data, version, format and position information. Based on this structure and information, it is possible to identify the QR code and estimate its 3D position.

Fig. 4. QR code structure.
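As a minimal illustration of this step, OpenCV’s built-in QR detector can recover both the encoded object identifier and the four corner points needed later for pose estimation; this Python sketch is illustrative only, since our Android implementation uses its own detection pipeline, and the input file name is hypothetical.

```python
import cv2

detector = cv2.QRCodeDetector()

frame = cv2.imread("frame.jpg")                    # a single camera frame (hypothetical file)
data, points, _ = detector.detectAndDecode(frame)  # data: encoded string, points: 4 corners

if points is not None and data:
    qr_code_id = data                              # identifier of the mashup target object
    corners = points.reshape(-1, 2)                # 4x2 array of corner coordinates
    print("Recognized MTO:", qr_code_id, "at", corners.tolist())
else:
    print("No QR code recognized in this frame")
```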

A QR code is recognized and tracked following the flowchart shown in Figure 5. First, an input image is obtained from the camera and tested for QR code detection. The QR code recognition step starts when the QR code from the previous frame has not been tracked continuously. Once the position markers in the QR code are detected, the orientation of the QR code is determined. Otherwise, optical flow tracking recovers the 3D position of the marker based on the previous image and a homography. Finally, the 3D position of the QR code is estimated in the pose estimation step.

Fig. 5. QR code recognition and tracking.
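The following Python/OpenCV sketch mirrors the flow of Figure 5 under simplifying assumptions: a known physical QR code size, a hard-coded camera matrix, and direct corner tracking with Lucas-Kanade optical flow in place of the homography-based recovery used in our pipeline.

```python
import cv2
import numpy as np

QR_SIZE = 0.08                                   # assumed QR code side length in meters (8 cm)
# 3D model points of the QR code corners in its own coordinate frame.
obj_pts = np.array([[0, 0, 0], [QR_SIZE, 0, 0],
                    [QR_SIZE, QR_SIZE, 0], [0, QR_SIZE, 0]], dtype=np.float32)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)  # assumed intrinsics
dist = np.zeros(5)

detector = cv2.QRCodeDetector()
cap = cv2.VideoCapture(0)
prev_gray, prev_corners = None, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    corners = None
    found, pts = detector.detect(gray)           # try to detect the QR code in this frame
    if found:
        corners = pts.reshape(-1, 2).astype(np.float32)
    elif prev_corners is not None:
        # Detection failed: track the previous corners with Lucas-Kanade optical flow.
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                  prev_corners.reshape(-1, 1, 2), None)
        if status.sum() == 4:                    # all four corners tracked successfully
            corners = nxt.reshape(-1, 2)

    if corners is not None:
        # Pose estimation: recover the 3D position/orientation of the QR code.
        ok_pnp, rvec, tvec = cv2.solvePnP(obj_pts, corners, K, dist)
        if ok_pnp:
            print("QR code pose (tvec):", tvec.ravel())

    prev_gray, prev_corners = gray, corners

cap.release()
```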
4.4. AR UI for mashup

After an object is recognized via its corresponding QR code, our proposed mashup tool allows users to view AR contents related to the object and also to modify them. For this purpose, 2D and 3D rendering components are supported according to the user view for mobile mashup. The 2D renderer is used for presenting texts and images, while the 3D renderer is used for showing 3D contents. Furthermore, mashup-related event processing and UI presentation are included in our mashup tool. During mobile mashup, several events such as object detected, object lost, view mode changed and mashup category changed are registered with the related logic and GUI. Figure 6 shows our mobile AR mashup UI.

Fig. 6. Mobile AR mashup UI.
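A simplified sketch of this event handling is shown below; the event names follow the ones listed above, while the dispatch mechanism itself is illustrative, since the Android implementation wires these events to its own GUI components.

```python
from typing import Callable, Dict, List


class MashupEventBus:
    """Registers mashup UI logic against the events raised during mobile mashup."""

    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[..., None]]] = {}

    def register(self, event: str, handler: Callable[..., None]) -> None:
        self._handlers.setdefault(event, []).append(handler)

    def fire(self, event: str, **kwargs) -> None:
        for handler in self._handlers.get(event, []):
            handler(**kwargs)


bus = MashupEventBus()
# Events raised during mobile mashup, as listed above.
bus.register("object_detected", lambda qr_code_id: print("show AR contents for", qr_code_id))
bus.register("object_lost", lambda: print("hide AR overlay"))
bus.register("view_mode_changed", lambda mode: print("switch to", mode))       # Viewer <-> Maker
bus.register("mashup_category_changed", lambda category: print("open", category, "mashup UI"))

bus.fire("object_detected", qr_code_id="NM-TREE-001")   # hypothetical QR code identifier
bus.fire("mashup_category_changed", category="memo")
```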

V. IMPLEMENTATION AND EVALUATION

5.1. Implementation

We implemented the proposed mobile AR mashup for end-users on Android tablets using OpenCV. The mashup content server is composed of an Apache Web server, an SQL database and PHP scripts.
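For illustration, a newly created mashup record might be uploaded to the content server as in the following sketch; the endpoint URL and field names are hypothetical, since the actual PHP scripts and database schema are internal to our deployment.

```python
import json
import urllib.error
import urllib.request

# Hypothetical endpoint exposed by one of the PHP scripts on the content server.
ENDPOINT = "http://example.org/mashup/add_content.php"

record = {
    "qr_code_id": "NM-TREE-001",   # hypothetical mashup target object identifier
    "category": "memo",
    "author": "anonymous",
    "title": "Asiatic black bear sighting",
    "data": "Saw claw marks on this tree today!",
}

req = urllib.request.Request(ENDPOINT,
                             data=json.dumps(record).encode("utf-8"),
                             headers={"Content-Type": "application/json"})
try:
    with urllib.request.urlopen(req, timeout=5) as resp:   # server stores the record in the database
        print("server response:", resp.status)
except urllib.error.URLError as err:
    print("upload failed:", err)
```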

As shown in Figure 7, the Mashup Viewer mode provides guide information when a QR code is recognized from the camera image frame. The Mashup Viewer then provides additional information related to the detected object. The set of related information includes Twitter messages, memos, photos and visiting statistics. When a user clicks on an appropriate icon, the Mashup Viewer presents user-generated contents previously added by other users. Furthermore, the Mashup Maker mode allows the user to add more information about the object. When the user touches the button located at the bottom of the screen, the mashup UI for each mashup multimedia content is launched. The icons located on the right side represent the Memo, Twitter, Photo, Statistics Graph and Exit buttons, as shown in Figure 8. The user can add a memo on the screen by taking a note with a pen, or the user can search for and add pictures related to the exhibition object from Web services. Similarly, tweets can be selectively added by users from the Twitter service.

Fig. 7. Mashup Viewer mode where different monthly information is presented.
Fig. 8. Mashup Viewer: Memo service, image search service.
5.2. Evaluation

To show the feasibility of our mobile AR mashup using QR codes, we evaluated QR code detection combined with several well-known feature detection algorithms: Good Features to Track (GF), Speeded-Up Robust Features (SURF), Scale-Invariant Feature Transform (SIFT), Features from Accelerated Segment Test (FAST), and Oriented FAST and Rotated BRIEF (ORB). GF is fast and extracts features that are robust to image translation [17]. SURF is a speeded-up robust feature detection algorithm based on the Hessian matrix [18]. SIFT is a feature detection algorithm that is robust to image scale and rotation [19]. FAST is a fast corner detection algorithm based on a decision tree [20]. ORB improves on these by combining FAST detection with BRIEF description [21]. We used optical flow to detect motion information between two images [22]. For comparison, we measured performance on a PC (Intel Core i7 CPU 3.40 GHz, 8 GB RAM) and a mobile Android platform (Samsung Galaxy Tab 10.1, 1.4 GHz CPU).
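The per-frame cost of these detectors can be compared with a sketch like the one below; note that SURF requires an opencv-contrib build, the input file name is hypothetical, and absolute timings depend heavily on the device, so the numbers will not match Tables 1 and 2 exactly.

```python
import time
import cv2

# Load one camera frame (hypothetical file name) in grayscale for feature detection.
img = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)
assert img is not None, "frame.jpg not found"

def time_detector(name, detect):
    t0 = time.perf_counter()
    pts = detect(img)
    ms = (time.perf_counter() - t0) * 1000.0
    count = 0 if pts is None else len(pts)
    print(f"{name:5s}: {count:5d} features in {ms:7.1f} ms")

time_detector("GF", lambda im: cv2.goodFeaturesToTrack(im, 500, 0.01, 10))
time_detector("FAST", lambda im: cv2.FastFeatureDetector_create().detect(im))
time_detector("ORB", lambda im: cv2.ORB_create(500).detect(im))
time_detector("SIFT", lambda im: cv2.SIFT_create().detect(im))
try:
    # SURF is patented and only available in opencv-contrib builds.
    time_detector("SURF", lambda im: cv2.xfeatures2d.SURF_create(400).detect(im))
except AttributeError:
    print("SURF not available in this OpenCV build")
```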

Table 1 and Table 2 show the QR code recognition and tracking performance on the PC and Android platforms. QR code recognition took 27 to 57 ms on both platforms, so recognition performance was comparable. However, QR code detection took much more time on the Android platform than on the PC. The longest detection time on the PC was for the Moving Average + Optical Flow (SIFT) combination, which took 228 ms; in comparison, the same combination took over 27 seconds on the mobile platform. Therefore, on mobile platforms such as smartphones and tablets, we are restricted to a few selected algorithms. For example, Moving Average took 69 ms, Moving Average + Optical Flow (GF) took 584 ms and Moving Average + Optical Flow (FAST) took 1,571 ms, respectively. Our evaluation results suggest that these three algorithms are the most appropriate for mobile AR mashup tools.

Table 1. QR code recognition and tracking performance on PC.
Algorithms on PC Detection (ms) Recognition (ms)
Moving Average (MA) 7 27
MA + Optical Flow (GF) 26 29
MA + Optical Flow (SURF) 94 29
MA + Optical Flow (SIFT) 228 29
MA + Optical Flow (ORB) 36 30
MA + Optical Flow (FAST) 27 28
Table 2. QR code recognition and tracking performance on mobile.
Algorithms on Mobile Detection (ms) Recognition (ms)
Moving Average (MA) 69 35
MA + Optical Flow (GF) 584 37
MA + Optical Flow (SURF) 6,098 25
MA + Optical Flow (SIFT) 27,822 36
MA + Optical Flow (ORB) 2,686 57
MA + Optical Flow (FAST) 1,571 38

We also evaluated the relationship between QR code size and recognizable distance. For this purpose, we prepared QR codes of different sizes ranging from 3 cm to 10 cm; note that the QR codes are square, with equal width and height. We placed the QR codes at different distances ranging from 20 cm to 70 cm. As shown in Table 3, larger QR codes at shorter distances were recognized well, whereas smaller QR codes at farther distances were not. We found that QR codes for mobile AR mashup should be large enough to be recognized in indoor and outdoor environments. Our recommendation is that QR codes of at least 8 cm work well for distances from 20 cm to 70 cm.

Table 3. QR code recognition with different sizes and distances (O: recognized, X: not recognized).
Distance\Size 3 cm 5 cm 8 cm 10 cm
20 cm O O O O
30 cm O O O O
50 cm X O O O
70 cm X X O O

Based on these observations of the performance of the proposed mobile AR mashup, we deployed the mobile AR mashup service to a natural monument museum located in Daejeon, South Korea. In the museum, we observed that visitors were more interested in user-generated information than in the default contents linked to the exhibition objects. Since the proposed mashup tool allowed users to take memos, connect images related to an object or link tweets, visitors were actively engaged in their museum tours. From the museum manager’s point of view, our mashup tool was convenient for adding new information to exhibition objects. Usually, the default information on objects was fixed or infrequently updated. With end-user mobile AR mashup, additional user-generated information could be easily added by visitors. Furthermore, visitors’ experiences at the museum were continuously accumulated and shared among visitors.

VI. CONCLUSION

In this paper, we introduced a mobile AR mashup for CH sites. To support user-generated content mashup based on mobile AR, we presented how mashup elements, QR code recognition/tracking, content management and the AR mashup UI are utilized. Based on these components, users can not only see the default information related to exhibited objects but also add and connect external information sources using Web services. Through our evaluation, we found that the proposed mashup is fast enough to support content mashup on the mobile Android platform. Through deployment and observation at the natural monument museum, we found that the proposed mashup gave visitors engaging opportunities to add additional multimedia information to the museum.

Despite these outcomes, several limitations should be addressed in further studies. First, we want to improve the overall speed and quality of mobile AR tracking and recognition. Second, a longitudinal study on visitors’ behaviors and content updates should be considered. Nonetheless, our proposed mobile AR mashup provides end-users with the ability to create and author user-generated contents at CH sites. As mobile AR technology matures to cover digital twins in mixed reality [23] and Web AR [24], we believe our approach can help traditional AR evolve into user-participated and collaborative mobile AR.

Acknowledgement

This work was supported by “The Cross-Ministry Giga Korea Project” Grant funded by the Korea government (Ministry of Science and Information Technology) (No. GK19C0200, Development of full-3D mobile display terminal and its contents) and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2018R1D1A1B07043983). Icons in Figure 1 are Bear by Loren Klein, Bear by www.mindgraphy.com, forest by Nesdon Booth, QR Code by Vectors Point, QR Code by Yarden Gilboa, User by Tommy Lau, Holding Phone by Siddharth Dasari, user generated by Sharon Showalter, and Mashup by H Alberto Gongora, all from the Noun Project.

REFERENCES

[1].

M.C.t. Dieck and T. H. Jung, “Value of augmented reality at cultural heritage sites: A stakeholder approach,” Journal of Destination Marketing & Management, vol. 6, no. 2, pp. 110-117, Jun. 2017.

[2].

I. Pedersen, N. Gale, P. Mirza-Babaei, and S. Reid, “More than meets the eye: The benefits of augmented reality and holographic displays for digital cultural heritage,” Journal on Computing and Cultural Heritage (JOCCH), vol. 10, no. 2, Article 11, Mar. 2017.

[3].

B. Martínez, S. Casas, M. Vidal-González, L. Vera, and I. García-Pereira, “TinajAR: An edutainment augmented reality mirror for the dissemination and reinterpretation of cultural heritage,” Multimodal Technologies and Interaction, vol. 2, no. 2, Article 33, Jun. 2018.

[4].

R.G. Boboc, M. Duguleană, G.-D. Voinea, C.-C. Postelnicu, D.-M. Popovici, and M. Carrozzino, “Mobile augmented reality for cultural heritage: Following the footsteps of Ovid among different locations in Europe,” Sustainability, vol. 11, no. 4, Article 1167, Feb. 2019.

[5].

F. Guimarães, M. Figueiredo and J. Rodrigues, “Augmented Reality and Storytelling in heritage application in public gardens: Caloust Gulbenkian Foundation Garden,” in Proceedings of 2015 Digital Heritage, Granada, Spain, pp. 317-320, Sep. 2015.

[6].

G.-D. Voinea, F. Girbacia, C. C. Postelnicu, and A. Marto, “Exploring cultural heritage using augmented reality through Google’s project Tango and ARCore,” in Proceedings of the 1st International Conference on VR Technologies in Cultural Heritage, Brasov, Romania, pp. 93-106, May 2018.

[7].

T. Olesky and A. Wnuk, “Augmented places: An impact of embodied historical experience on attitudes towards places,” Computers in Human Behavior, vol. 57, pp. 11-16, Apr. 2016.

[8].

A. M. Elmisery and M. Sertovic, “Trusted fog based mashup service for multimedia IoT based smart environmental monitoring,” Journal of Multimedia Information System, vol. 4, no. 4, pp. 171-178, Dec. 2017.

[9].

A. M. Elmisery and M. Sertovic, “Environmental IoT-enabled multimodal mashup service for smart forest fires monitoring,” Journal of Multimedia Information System, vol. 4, no. 4, pp. 163-170, Dec. 2017.

[10].

C. Shin, B.-H. Park, G.-M. Jung, and S.-H. Hong, “Mobile augmented reality mashup for future IoT environment,” in Proceedings of the 2014 IEEE 11th International Conference on Ubiquitous Intelligence and Computing, Bali, Indonesia, pp. 888-891, Dec. 2014.

[11].

C. Shin, Y. Kim, J. Hong, S. Hong, and H. Kang, “Social spatial mashup for place and object-based information sharing,” in Proceedings of the 2016 Symposium on Spatial User Interaction, Tokyo, Japan, p. 185, Oct. 2016.

[12].

H. Yoon and W. Woo, “CAMAR mashup: Empowering end-user participation in U-VR environment,” in Proceedings of 2009 International Symposium on Ubiquitous Virtual Reality, Gwangju, Korea, pp. 33-36, Jul. 2009.

[13].

H. Yoon and W. Woo, “Concept and applications of in-situ AR mashup content,” in Proceedings of the 1st International Symposium on From Digital Footprints to Social and Community Intelligence, Beijing, China, pp. 25-30, Sep. 2011.

[14].

T. Langlotz, S. Mooslechner, S. Zollmann, C. Degendorfer, G. Reitmayr, and D. Schmalstieg, “Sketching up the world: in situ authoring for mobile augmented reality,” Personal and Ubiquitous Computing, vol. 16, pp. 623-630, Aug. 2012.

[15].

D. Seo, D. Kim, B. Yoo, and H. Ko, “Webized augmented reality mashup for legacy things,” in Proceedings of the 2017 IEEE International Conference on Consumer Electronics, Las Vegas, Nevada, USA, pp. 15-16, Jan. 2017.

[16].

F. Meawad, “InterAKT: A mobile augmented reality browser for geo-social mashups,” in Proceedings of the 4th International Conference on User Science and Engineering, Melaka, Malaysia, pp. 167-171, Aug. 2016.

[17].

J. Shi and C. Tomasi, “Good features to track,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, pp. 593-600, Jun. 1994.

[18].

H. Bay, T. Tuytelaars and L. Van Gool, “SURF: Speeded up robust features,” in Proceedings of European Conference on Computer Vision, Graz, Austria, pp. 404-417, May 2006.

[19].

D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, pp. 91-110, Nov. 2004.

[20].

E. Rosten and T. Drummond, “Machine learning for high-speed corner detection,” in Proceedings of European Conference on Computer Vision, Graz, Austria, pp. 430-443, May 2006.

[21].

E. Rublee, V. Rabaud, K. Konolige and G. Bradski, “ORB: an efficient alternative to SIFT or SURF,” in Proceedings of 2011 International Conference on Computer Vision, Barcelona, Spain, pp. 2564-2571, Nov. 2011.

[22].

B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Proceedings of DARPA Image Understanding Workshop, pp. 121-130, Apr. 1981.

[23].

A. Peuhkurinen and T. Mikkonen, “Embedding web apps in mixed reality,” in Proceedings of 3rd International Conference on Fog and Mobile Edge Computing, Barcelona, Spain, pp. 169-174, Apr. 2018.

[24].

X. Qiao, P. Ren, S. Dustdar, L. Liu, H. Ma and J. Chen, “Web AR: A promising future for mobile augmented reality—State of the art, challenges, and insights,” in Proceedings of the IEEE, vol. 107, no. 4, pp. 651-666, Apr. 2019.

Authors

Choonsung Shin


Choonsung Shin received his B.S. degree in Computer Science from Soongsil University in 2004. He received his M.S. and Ph.D. degrees in Information and Communication (Computer Science and Engineering) from the Gwangju Institute of Science and Technology, in 2006 and 2010, respectively. He was a Postdoctoral Fellow at the HCI Institute of Carnegie Mellon University from 2010 to 2012. He was a principal researcher at the Korea Electronics Technology Institute from 2013 to 2019 and a CT R&D Program Director of the Ministry of Culture, Sports and Tourism from 2018 to 2019. In September 2019, he joined the Graduate School of Culture, Chonnam National University, where he is currently an associate professor. His research interests include culture technology & contents, VR/AR and Human-Computer Interaction.

Sung-Hee Hong


Sung-Hee Hong received his B.S. degree from the Department of Electrical Engineering at Sungkyunkwan University in 1999. He received his M.S. and Ph.D. degrees in Information and Communication Engineering from Sungkyunkwan University in 2001 and 2016, respectively. He has been a Managerial Research Engineer at the Korea Electronics Technology Institute since 2001 and Director of the Hologram Research Center since 2019. His research interests include holography, multimedia and computer graphics.

Hyoseok Yoon


Hyoseok Yoon received his B.S. degree in Computer Science from Soongsil University in 2005. He received his M.S. and Ph.D. degrees in Information and Communication (Computer Science and Engineering) from the Gwangju Institute of Science and Technology (GIST), in 2007 and 2012, respectively. He was a researcher at the GIST Culture Technology Institute from 2012 to 2013 and was a research associate at the Korea Advanced Institute of Science and Technology, Culture Technology Research Institute in 2014. He was a senior researcher at Korea Electronics Technology Institute from 2014 to 2019. In September 2019, he joined the Division of Computer Engineering, Hanshin University where he is currently an assistant professor. His research interests include ubiquitous computing (context-awareness, wearable computing) and Human-Computer Interaction (mobile and wearable UI/UX, MR/AR/VR interaction).