
Affective Computing Among Individuals in Deep Learning

Seong-Kyu (Steve) Kim1,2,*
1Vice President of HackersLab Partners, Kuala Lumpur, Malaysia, +82-02-760-1171, guitara7@skku.edu
2Adjunct Professor of Dept. of IT, Sungkyunkwan University, Korea, +82-02-760-1171, guitara7@skku.edu
*Corresponding Author : Seong-Kyu (Steve) Kim, HackersLab Partners, Kuala Lumpur, Malaysia, +82-02-760-1171, guitara7@skku.edu

© Copyright 2020 Korea Multimedia Society. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Mar 11, 2020; Revised: Mar 22, 2020; Accepted: Apr 30, 2020

Published Online: Jun 30, 2020

Abstract

This paper is a study of deep learning, one of the artificial intelligence technologies that has advanced rapidly in recent years, and in particular of affective computing, which has drawn much attention within deep learning. Affective computing encompasses both a passive concept, in which people scientifically analyze human sensibility and reflect it in product development or system design, and a more active concept, in which devices and systems understand humans and communicate with people in various modes. Emotional-signal extraction and sensibility/psychology recognition technology is defined as the technology that senses, processes, analyzes, and recognizes the emotional signals and information produced by the activity of the autonomic nervous system as human emotions change in everyday life, based on micro-scale, high-sensitivity sensor technology. Section I gives an overview and Section II reviews related research. Section III presents the problems and models of practical affective computing, and Section IV concludes the paper.

Keywords: Artificial Intelligence; Big Data; Affective Computing; Deep Learning; Cyber Security

I. INTRODUCTION

Artificial intelligence (AI) is a field of computer science that focuses primarily on solving cognitive problems associated with human intelligence, such as learning, problem solving, and pattern recognition. While the term "AI" may evoke robotics or visions of the future, AI is becoming a reality of advanced computer science far beyond the small robots of science fiction. Professor Pedro Domingos, a renowned scientist in the field, describes the "five tribes" of machine learning [1-3]: symbolists grounded in logic and philosophy, connectionists derived from neuroscience, evolutionaries related to evolutionary biology, Bayesians dealing with statistics and probability, and analogizers rooted in psychology. Recent improvements in the efficiency of statistical computing have enabled the Bayesians to successfully advance several areas of machine learning. Similarly, the evolution of network computing has allowed the connectionists to further develop their subfield under the name of deep learning (DL). Machine learning (ML) and DL are both branches of computer science derived from artificial intelligence. This paper studies the problems and architecture of emotional learning within this artificial intelligence [4-6]. Affective measurement is a technique that quantifies hormone levels, neurotransmitter activity, and nervous system activity in order to gauge feelings, which are a continuous state of emotion. Although accurate implementation remains difficult, the current state of the art indirectly measures sensibility by building computing models from the sensibility or mood characteristics observed in human emotional systems. Research is thus at the stage of inferring emotion from events generated by agent-to-agent interaction in multi-agent systems, and this paper seeks to study artificial-intelligence affective computing by overcoming these shortcomings.

II. RELATED RESEARCH

This paper studies the concepts of artificial intelligence and the various problems and models of emotional learning, examining how weak artificial intelligence and strong artificial intelligence affect us. It also uses this emotional artificial intelligence to present better models for the future and to formulate improvement tasks for current problems. Finally, it deals with empathy, the most important issue among the many artificial intelligence systems.

2.1. Artificial Intelligence

Artificial intelligence is a set of algorithmic systems designed to think, sense, and act like humans. Artificial intelligence can thus be defined as the concept of an agent that achieves what a person intends without human intervention. For example, robot vacuum cleaners that act differently depending on the type and cleanliness of a room, or artificial intelligence washing machines that optimize the washing method according to the amount and type of laundry, can be seen as agents that achieve human goals on behalf of humans. Artificial intelligence as a field was founded in 1956 at the Dartmouth Conference of ten mathematicians, scientists, and others, and has reached its current stage through several cycles of advancement and decline. In the early days, attempts were made to translate human problem-solving logic into computer language. As artificial intelligence has drawn attention, machine learning and deep learning, its core technologies, have also emerged as important keywords [7-9].

Deep learning is a kind of machine learning. The most important factor in the evolution of artificial intelligence is "learning." Learning here is a series of processes that create a system that extracts and classifies characteristics in some way, and the choice of characteristics determines how well the learner recognizes patterns and reduces error values. Machine learning refers to improving the performance of a particular task through experience. Because the real world contains so many different factors, with exceptions that cannot be captured by general rules, classical artificial intelligence required endless modification and supplementation for the infinite cases arising in real problems, as shown in Fig. 1.

Fig. 1. Definition of Artificial Intelligence.
2.2. Types of Artificial Intelligence

The concept of artificial intelligence was first established by Alan Turing. Artificial intelligence can be divided into data-based artificial intelligence and self-learning artificial intelligence.

The biggest difference between deep learning and machine learning is that deep learning can learn, from the data itself, the features to be used for classification, while machine learning must be provided with hand-crafted features manually.

2.2.1. Machine Learning

The term machine learning has been around for a long time, but in the 2010s it became a popular label for algorithms that use artificial intelligence. Terms similar to machine learning, such as deep learning, data mining, and pattern recognition, are also used, and unless you work in a related field it is not easy to understand the differences, so it is easy to assume they mean something entirely different from machine learning. In fact there are slight differences, but the terms overlap heavily and carry very similar meanings. It cannot be denied that there is a strategic purpose in coining new terms to appear trendier and sell better, even when there is no big difference in the science or technology. As the name suggests, machine learning technology is based on a machine learning from data: it requires data to learn [10].

Machine-learning algorithms can be classified by several criteria depending on how they work, the biggest division being supervised learning and unsupervised learning. Supervised learning is the concept of learning from examples given together with an answer sheet. The existence of answers means the goal is to guess the answer for new inputs later on; supervised learning is therefore an algorithm used to find answers. An example of supervised learning is giving the machine multiple pictures and telling it to distinguish between a dog and a cat. The answers, indicating which picture shows a cat and which a dog, are given in advance. The data used for this learning is called training data, and the answers attached to training data are called labels. Such labeled training data allows the supervised-learning algorithm to learn in its own way to distinguish whether an incoming photo is a dog or a cat. The trained algorithm becomes a model that can distinguish a dog from a cat, and it can then properly classify cats and dogs even when unlabeled (unanswered) data arrives in the future, as shown in Fig. 2. Unlabeled data is called test data, and how accurately the model answers it determines the performance of the algorithm. If you search "dog muffin AI" on Google, there are many joke image grids mixing chihuahuas and muffins to challenge computers, but models these days are said to be extremely accurate on them.
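
As a minimal sketch of this supervised workflow (the 2-D feature vectors are hypothetical stand-ins for features extracted from dog and cat photos, and scikit-learn is assumed to be available), the following trains a classifier on labeled data and then predicts labels for unlabeled test data:

```python
# Minimal supervised-learning sketch: labeled training data -> model -> predictions.
# The 2-D feature vectors are hypothetical stand-ins for features extracted
# from dog/cat photos; a real system would use image features instead.
from sklearn.neighbors import KNeighborsClassifier

# Training data with labels ("answers"): 0 = dog, 1 = cat.
X_train = [[9.0, 2.0], [8.5, 1.5], [2.0, 9.0], [1.5, 8.5]]
y_train = [0, 0, 1, 1]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)      # learn from the labeled training data

# Unlabeled test data: the trained model guesses the answers itself.
X_test = [[8.8, 1.8], [1.8, 9.2]]
print(model.predict(X_test))     # expected: [0 1] -> dog, cat
```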

Fig. 2. Process of Machine Learning.

On the other hand, learning from unlabeled (unanswered) data is called unsupervised learning. Because no answers are given, unsupervised learning is not aimed at guessing an answer. Instead, it provides useful information for problems without answers, such as grouping similar data together or determining which properties describe the data well. Consider again the situation in which pictures are supplied with the goal of distinguishing dog and cat photos. Unsupervised learning is not told which picture is a dog or a cat; that is, the data is not labeled. However, the algorithm can identify which pictures are similar to each other, so the cat-like photos are gathered into one group and the dog-like photos into another.

The algorithm can make this distinction, but it cannot tell us what each group represents. It is up to a person to determine whether the distinction has been made well and what each group means. In the case above, the algorithm does not answer that the cat pictures are cats and the dog pictures are dogs, but since the cat pictures and dog pictures fall into different groups, a person can look at the results and judge one group to be cat pictures and the other dog pictures. Unsupervised learning cannot be used to give a final answer, but it is widely used because it helps people make decisions by giving them useful information about the characteristics of the data.
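
A corresponding unsupervised sketch (the same kind of hypothetical feature vectors as before, this time with no labels; scikit-learn assumed) groups the data into two clusters and leaves it to a person to decide which cluster is "dog" and which is "cat":

```python
# Minimal unsupervised-learning sketch: unlabeled data -> clusters.
# No answers are given; k-means only groups similar points together.
from sklearn.cluster import KMeans

# Hypothetical unlabeled feature vectors for a mix of dog and cat photos.
X = [[9.0, 2.0], [8.5, 1.5], [2.0, 9.0], [1.5, 8.5], [8.8, 1.8]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # e.g. [0 0 1 1 0]: two groups, but unnamed --
                       # a human must decide which group means "cat".
```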

2.2.2. Deep Learning

Deep learning is a machine-learning model inspired by biological brain circuits. How do humans think, reason, and store memories? The human brain consists of roughly 100 billion neurons and 100 trillion synapses. When a synapse delivers an electrical signal to a neuron, the neuron responds to this signal and passes it on to the next neuron. This network of neurons and synapses is what allows humans to understand, reason, and think in high dimensions. Deep learning was designed with inspiration from these brain signals. The imitation of a neuron in deep learning is called the perceptron, first proposed by Rosenblatt in 1958. A perceptron accepts inputs x1 to xn, calculates their weighted sum, and predicts the final output y by passing that sum through an activation function. This basic perceptron alone, however, can only solve problems that are linearly separable [11-12].
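
A minimal sketch of Rosenblatt's perceptron as just described: inputs x1..xn, a weighted sum, and a step activation function. The weights and data below are illustrative only (they implement a logical AND), not taken from the paper:

```python
import numpy as np

def perceptron(x, w, b):
    """Weighted sum of inputs followed by a step activation function."""
    z = np.dot(w, x) + b         # weighted sum of x1..xn plus bias
    return 1 if z >= 0 else 0    # step activation predicts the final y

# Illustrative weights implementing logical AND of two binary inputs.
w = np.array([1.0, 1.0])
b = -1.5
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, "->", perceptron(np.array(x), w, b))
```

A single perceptron of this kind can represent AND but not XOR, which is exactly why the hidden layers discussed next are needed.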

At this point, characteristics of a cat such as pointed ears and brown fur are referred to as representations. Adding a hidden layer to a deep learning model means adding one more layer that extracts such representations. The deeper the layers are stacked, the more capable the model is of combining basic features into higher-dimensional features one step above. This is the difference between machine learning and deep learning. A classic machine-learning algorithm tries to map directly from low-level input features (e.g., the raw pixel values of a cat image) to the decision of whether something is a cat. As the term "deep learning" suggests, the algorithm itself embeds a process of representation learning, which extracts useful information from very abstract raw inputs and converts it into high-dimensional features that can identify a cat. Deep learning is thus challenging the human-specific realm: tasks that are easy for people but difficult to formalize and explain, intuitive and high-dimensional tasks. Because deep learning is implemented purely from data, through numerous parameters that locate the rules within the data, it requires a lot of data.
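
To make the role of hidden layers concrete, here is a hedged sketch (weights are set by hand, not learned, and the feature names are illustrative) of a two-layer network in which the hidden layer extracts intermediate representations that let it solve XOR, a problem the single perceptron above cannot:

```python
import numpy as np

def step(z):
    return (z >= 0).astype(int)

# Hand-set weights for a tiny two-layer network solving XOR.
# The hidden layer extracts two intermediate representations:
#   h1 = OR(x1, x2), h2 = AND(x1, x2); output = h1 AND (NOT h2).
W1 = np.array([[1.0, 1.0],     # h1 fires if x1 + x2 >= 0.5  (OR)
               [1.0, 1.0]])    # h2 fires if x1 + x2 >= 1.5  (AND)
b1 = np.array([-0.5, -1.5])
W2 = np.array([1.0, -2.0])     # output fires if h1 - 2*h2 >= 0.5
b2 = -0.5

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    h = step(W1 @ np.array(x) + b1)   # hidden layer: extracted features
    y = step(W2 @ h + b2)             # output layer combines those features
    print(x, "-> XOR =", y)
```

In a real deep network these weights are learned from data, but the structure is the same: each added layer combines the features below it into higher-level ones.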

For example, suppose we create a deep learning system to find cats. The machine needs a great many cat pictures to learn what a cat is. It will not be easy for the machine to pick up the small details that distinguish a cat from a similar-looking fox. Therefore, no matter how well a deep-learning model is designed, it cannot perform well unless large datasets support it. On the other hand, while it is true that deep learning is inspired by brain science, it does not necessarily apply neuroscientific knowledge as-is [10]. Deep learning has a rather engineering character: in practice, a mechanism is useful if it helps improve performance, even without biological plausibility (Fig. 3).

Fig. 3. Deep Learning Process.

III. A MODEL FOR OVERCOMING THE LIMITS OF AFFECTIVE COMPUTING IN DEEP LEARNING

Affective computing is a field of artificial intelligence research and development concerned with designing systems and devices that can recognize, interpret, and process human sensibility; it spans computer science, psychology, and cognitive science. While its origins date back to early philosophical inquiries into emotion, the modern computer science field originates from Rosalind Picard's 1995 paper [13].

Affective Computing is computing that relates to, arises from, or deliberately influences emotion or other affective phenomena [13].

The motivation for this research is the ability to imitate empathy: the machine must interpret the human emotional state and adapt its behavior to respond appropriately to that state. Affective computing thus refers to artificial-intelligence-based systems that can study, analyze, and interpret human emotions. By recognizing the psychological responses people show to physical or sensory stimuli, such systems apply emotion to human-computer interaction. Emotional engineering is a passive concept in which the human, who scientifically analyzes human sensibility and reflects it in product development or system design, is the main actor. Affective computing is a more active concept in which machines and systems understand humans and communicate with people in various modes. To implement an emotional computer, Picard examines the emotions of animals and machines as well as humans, with the core content largely centered on the recognition, expression, and possession of emotion. The first area of study in affective computing is emotional cognition. In order for computers to recognize the emotions of other beings, they need information about expressive markers and appropriate contexts. Bodily signs such as facial expressions, voice intonation, and body movements, together with bodily changes that accompany emotional shifts such as skin conductance, blood pressure, and heart rate, are examples of expressive markers. To ensure that an emotional computer recognizes human emotions correctly, these bodily signals must be measured and interpreted reliably.
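
As a purely illustrative sketch of how such expressive markers might be summarized (the sensor traces, sampling rate, and the 0.05 threshold below are invented for the example, not taken from the paper), the following extracts a few simple features from heart-rate and skin-conductance recordings:

```python
import statistics

def marker_features(heart_rate_bpm, skin_conductance_uS):
    """Summarize physiological expressive markers from raw sensor traces."""
    return {
        "hr_mean": statistics.mean(heart_rate_bpm),
        "hr_variability": statistics.pstdev(heart_rate_bpm),
        "scl_mean": statistics.mean(skin_conductance_uS),
        # Count of sudden conductance rises, a crude arousal indicator.
        "scr_count": sum(
            1 for a, b in zip(skin_conductance_uS, skin_conductance_uS[1:])
            if b - a > 0.05
        ),
    }

# Hypothetical 10-second traces sampled once per second.
hr = [72, 74, 75, 80, 85, 88, 86, 84, 82, 80]
scl = [2.0, 2.01, 2.02, 2.10, 2.18, 2.20, 2.21, 2.21, 2.20, 2.19]
print(marker_features(hr, scl))
```

Features like these only become meaningful once paired with context, which is exactly the premise discussed next.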

The premise that bodily changes stand in a lawful relationship to particular emotional states must also be established. Because such a premise is difficult to establish on its own, information about contexts that can supplement the information from expressive markers is needed. In other words, information about the same marker can mean different things depending on the context in which the marker occurred. This has already emerged as an important issue that needs to be addressed urgently in cognitive science, as it has become clear that recognition is context dependent, and various solutions to the problem have been proposed. Affective computing is a solution proposed by the Media Lab of the Massachusetts Institute of Technology, closely related to robotics, along with Brooks's subsumption architecture. The premise that human emotions are universal is also necessary for computers to recognize human emotions.

If the human emotional system were not universal, it would be very difficult to theorize about human emotions and impossible for computers to recognize them. In this connection, Darwin (1872) [14] studied the emotional expressions of animals such as cats, dogs, and horses as well as humans, arguing that emotions are similar within the same species but differ between species. The psychologist Ekman argued, based on 1960s studies of primitive tribes in Papua New Guinea, that the facial expression of emotion is not culturally relative but universal, with biological origins, and can therefore safely be assumed to be shared across cultures. Ekman (1969) [15] presented a list of six basic human emotions: anger, disgust, fear, happiness, sadness, and surprise. Yet even if the universality of human emotions is granted, a simple classification of emotions will not help affective computing implement the intended emotional awareness; a much more sophisticated emotional classification system is required (which Picard has not presented). The problem here is that it is difficult to find a specific system among the existing candidates that fully reflects the universality of emotion.

Emotional computers are computers that recognize, express, and possess human emotions. If Picard is to claim all three functions for an emotional computer, she must show how a computer can have emotions.

Of course, she does not have to undertake the task herself and could rely on a theory that posits a "conscious machine," but that is essentially a philosophical task, and it will not be easy for Picard. It includes the very difficult question of whether such a system would be the same as the human emotional system, even if the emotional computer did have emotions.

3.1. Model of Emotional Computing

Affective computing can be divided into six application areas, including learning, robotics, customer-centered management, image recognition, and speech recognition (Fig. 4).

Fig. 4. Model of Emotional Computing.
  • - Learning Engine: Peppy Pals, an education-program startup, uses a system that recognizes users' emotions through machine-learning-based learning to develop social skills and sensibility.

  • - Robotics: SoftBank's emotion-recognition robot Pepper is the world's first robot to respond to user emotions.

  • - Customer-Centric Management: Cloverleaf, a marketing company, showed that affective computing can boost sales by installing "shelfPoint" LCD displays that recognize buyers' emotional patterns.

  • - Image Recognition: Unruly, a digital advertising company, analyzed the faces and reactions of 1,500 viewers watching the 2017 Super Bowl. At the time, the ad of the beer company Budweiser showed the highest advertising effect (22 percent).

  • - Speech Recognition: A system that analyzes human emotions in depth based on voice processing.

3.2. OCC Model Computing

The OCC model was established in the late 1980s at the University of Illinois to study sensibility through psychological methods. The OCC model is the emotional evaluation model of three researchers, Ortony, Clore, and Collins; rather than analyzing every emotion a person can express, it groups emotions into clusters arising from similar causes. Emotions are appraised and separated according to three components (events, objects, and agents) and classified into a final set of 22 emotional types. Robots have been developed that generate characteristics affecting the generation of emotions using information from internal and external sensors and, based on this, express emotions in various forms. Various emotional models have been used to generate robot sensibilities in response to internal and external stimuli. Emotional models can be divided into two main types [16-17].

First, there are continuous multi-dimensional emotional models, which define an emotional space in two or three dimensions and map emotional states into it. Representative examples are the 3D emotional models used by WE-4RII and Kismet. By placing characteristic axes in the 3D emotional vector space and calculating values along them, such models can maintain continuous changes of emotion and provide a variety of emotional expressions. Their computational cost, however, is high and somewhat complicated.
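
A hedged sketch of such a continuous model (the axis names and prototype coordinates below are illustrative assumptions, not the actual parameters of WE-4RII or Kismet): the emotional state is a point in a 3D space that can drift continuously, while the expressed emotion is the nearest labeled prototype.

```python
import math

# Illustrative prototypes in a 3-D (arousal, valence, stance) emotion space.
PROTOTYPES = {
    "joy":     ( 0.6,  0.8,  0.5),
    "anger":   ( 0.8, -0.7,  0.6),
    "sadness": (-0.6, -0.6, -0.3),
    "calm":    (-0.3,  0.4,  0.2),
}

def nearest_emotion(state):
    """Map a continuous 3-D emotional state to the closest emotion label."""
    return min(PROTOTYPES, key=lambda name: math.dist(state, PROTOTYPES[name]))

# The state vector changes continuously; the label changes discretely.
print(nearest_emotion((0.5, 0.6, 0.4)))     # -> joy
print(nearest_emotion((-0.5, -0.5, -0.2)))  # -> sadness
```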

Second, there are discrete emotional models with tree-shaped structures predetermined to satisfy given conditions. A representative example is the OCC model adopted in the Oz Project. While the OCC model is easy to implement, it is not easy to express continuous changes in sensibility with it. Emotions generated through these emotional models are expressed in the form of faces, gestures, emoticons, sounds, and text. This chapter explores the research trends for typical emotion-related robots and, for each robot, the generation of emotion and the expression of behavior. Emotional inference and expression is the recognition of emotion and the prediction and expression of emotional state through inference about the situation in which it occurs. To this end, the University of Illinois has run studies on sensibility based on the OCC model since the late 1980s. The OCC model is grounded in psychology and is the emotional evaluation model of Ortony, Clore, and Collins. Instead of trying to describe every emotion a person can express, it defines emotional clusters that are generated and distinguished by similar causes as emotional types. For example, agony encompasses emotions such as sadness, distress, and lovesickness, depending on differences in cause. The three factors that evaluate emotional types are events, objects, and agents: events are appraised in terms of their consequences for the agent's goals, objects in terms of their aspects, and agents in terms of their actions. Depending on these three factors, emotional types are divided into three categories and finally into 22 emotional types. The emotional evaluation process consists of evaluating the desirability of events relative to the agent's goals, the degree of approval of an agent's actions, and whether the agent's attitude toward the objects involved is one of liking. This process allows the OCC model to calculate the generation and intensity of sensibility and to generate specific sensibilities according to the interpretation of a given situation. Typical sensibilities defined in the OCC model include joy, distress, hope, fear, pride, shame, admiration, reproach, anger, gratification, and remorse.
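
The appraisal structure just described can be made concrete with a toy sketch (the rule table below covers only a handful of the model's emotion types, and the single numeric "evaluation" input is a deliberate simplification of the OCC appraisal variables):

```python
def occ_appraise(focus, evaluation):
    """Toy OCC-style appraisal: (focus, evaluation) -> emotion type.

    focus      -- what is appraised: "event", "agent_action", or "object"
    evaluation -- desirability / approval / liking, in [-1, 1]
    """
    if focus == "event":            # consequences of events for goals
        return "joy" if evaluation > 0 else "distress"
    if focus == "agent_action":     # approval of an agent's actions
        return "admiration" if evaluation > 0 else "reproach"
    if focus == "object":           # liking of aspects of objects
        return "love" if evaluation > 0 else "hate"
    raise ValueError("unknown appraisal focus")

print(occ_appraise("event", 0.7))          # desirable event -> joy
print(occ_appraise("agent_action", -0.4))  # blameworthy action -> reproach
```

The full model refines each branch further (e.g., by whose goals are affected and by intensity variables), yielding its complete set of emotion types.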

3.3. Affective Reasoner

Emotional inference here means inferring emotions from events that occur in a multi-agent system through inter-agent interaction. An agent's emotional reasoning depends on its interpretation of the events that have occurred, and this interpretation is shaped by each agent's goals, expectations, learning, and so on. Each agent has a different internal model depending on its own perspective, and the Affective Reasoner distinguishes a total of 24 emotional forms. The emotional/psychological cognitive skills mentioned above have recently been incorporated into facial expression recognition, a field belonging to bioinformatics. This technique recognizes a person's emotional state by reading facial expressions, and facial expression recognition accounts for the largest portion of the field: changes in facial expression represent different emotions through changes in the facial muscles (eyes, eyebrows, mouth, etc.) of the human face [18-19].

Sensibility is recognized from facial expressions through transformations of characteristic elements derived from basic emotional expressions such as anger, sadness, disgust, joy, and surprise, and studies are under way to account for the frequent variations in face shape and environment caused by glasses, hairstyle, expression changes, and the like. Two major technologies are developing here: Active Appearance Model (AAM) recognition, which models the shape and appearance of the face with principal component analysis (PCA) to automatically extract the positions of the eyes, nose, mouth, etc. from the input, and recognition of the input image through facial characteristics using Gabor wavelets (GW). The Facial Action Coding System (FACS) categorizes the measurement of emotional states by Action Units (AUs) in order to build a database of the emotional states underlying changes in facial expression. Facial-image-based expression recognition detects subtle facial changes, recognizes the current user's emotions, and has drawn attention in the human-computer interaction field [21], [22]. In addition, a database has been built to enable facial expression recognition specifically for Koreans, excluding racial effects, based on standard images of the Korean public obtained with facial expression training technology. As big data and artificial intelligence have recently become central issues, emotion-recognition technology along the lines of Seo et al. [20] is also being developed to build AI, create cognitive models, and recognize emotions by learning facial images of each emotional expression. Improvements in the performance and utilization of facial expression training technology are expected.
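
A hedged sketch of a PCA-based expression pipeline in the spirit of the techniques above (random arrays stand in for aligned grayscale face images, the six class labels are arbitrary, and scikit-learn is assumed; a real system would use a labeled facial-expression dataset):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_images, h, w = 60, 24, 24
faces = rng.random((n_images, h * w))       # stand-in flattened face images
labels = rng.integers(0, 6, size=n_images)  # stand-in basic-emotion labels

# PCA extracts the principal components of face appearance, giving a
# low-dimensional representation of each face.
pca = PCA(n_components=10).fit(faces)
features = pca.transform(faces)

clf = SVC().fit(features, labels)           # expression classifier on PCA features
probe = pca.transform(faces[:1])
print("predicted expression class:", clf.predict(probe)[0])
```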

3.4. Tactile Sensitizer

With pet robots and companion robots in the spotlight, interest in human-robot interaction through contact is growing. In robots, contact sensors have traditionally been used for purposes such as measuring location for autonomous driving, avoiding obstacles, balancing the body of two-legged walking robots, or controlling movement when a robot manipulates objects with its hands. The Communication Robot project aimed to measure human sensibility through physical interaction and to develop a robot that can interact naturally with people through contact; as a result, it proposed a way to recognize hitting, light flicking, and stroking behaviors using I-SCAN contact sensors developed by NITTA Corp., which can measure contact pressure. The Huggable project, aiming to develop a companion robot usable for emotional therapy, announced a Sensitive Skin so that the robot could feel contact over its entire body. The Sensitive Skin consists of QTC force sensors and heat sensors, and these contact sensors yielded an initial study recognizing nine different contact behaviors that the robot can feel from its users. Shigeki Sugano defined PIFACT items, arguing that robots must determine the contact conditions they feel in order to coexist with humans in daily life. He developed a human-robot PIFACT definition system and evaluated it by applying it to WENDY; to detect contact, FSR force sensors are attached to WENDY's shoulder and arm areas.
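
As a rough illustration of distinguishing the hit, flick, and stroke contact behaviors mentioned above (the pressure traces, units, and thresholds are invented for this sketch and are not taken from the I-SCAN or FSR studies):

```python
def classify_touch(pressure, dt=0.01, contact_threshold=0.1):
    """Crudely classify a contact from a pressure time series (arbitrary units).

    A hit is short and strong, a flick short and weak, a stroke long and
    gentle. All thresholds are illustrative assumptions only.
    """
    samples = [p for p in pressure if p > contact_threshold]
    duration = len(samples) * dt       # seconds of actual contact
    peak = max(samples, default=0.0)
    if duration < 0.15:
        return "hit" if peak > 5.0 else "flick"
    return "stroke"

print(classify_touch([0, 8.0, 9.5, 7.0, 0]))                # short, strong -> hit
print(classify_touch([0, 1.2, 1.5, 0.8, 0]))                # short, weak  -> flick
print(classify_touch([0.5 + 0.01 * i for i in range(40)]))  # long, gentle -> stroke
```

A real recognizer would learn such boundaries from labeled sensor data rather than hand-set thresholds.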

3.5. Ethics and Emotion Generation in Artificial Intelligence

Humans like and dislike others and can create and conceive new things. By comparison, artificial intelligence learns, acts, and decides based on vast amounts of data. With artificial intelligence now entering deep into human life as a key technology of our society, social and ethical value judgments about artificial intelligence have become an important issue. Recently, the problem of choosing between greater and lesser harms has been considered for autonomous cars, nursing robots, and combat robots or drones. U.S. government-affiliated research institutes have supported various research projects related to artificial intelligence ethics. Ronald Arkin of the Georgia Institute of Technology has studied the ethics of war robots, with the aims of reducing noncombatant casualties in war and giving robots the ability to make ethical judgments in assessing an adversary's culpability. However, robots lack emotions, so it is difficult for them to properly judge human dignity, which is a matter of comparative value judgment in which mercy may be involved depending on circumstances. Fig. 5 illustrates this affective computing architecture: it starts from an emotion engine, from which emotions and moods can be sensitively predicted. Likewise, UC Berkeley's Center for Human-Compatible AI is conducting research to enable AI systems to provide essential services to society in a manner consistent with human values.

Fig. 5. Tactile Sensitizer Architecture.

In other words, using the inverse reinforcement learning approach, researchers are developing artificial-intelligence services that observe human behavior and learn human values. They are studying whether self-driving cars have enough capacity to detect surrounding road conditions, whether they can make the right decision when faced with unexpected events, unfamiliar objects, or novel situations, and whether they are safer than human-driven vehicles. The field studied in this paper must therefore produce immediate, automatic emotional responses to certain patterns of external stimuli. Emotion generation is akin to synthesizing emotions in a machine. To synthesize emotions is to make it seem as though a machine has real feelings, or to give it internal mechanisms similar to those of humans or animals. Emotion generation requires, for example, synthesizing various tones and speaking rates depending on the emotion or situation, or synthesizing the actions needed to represent an emotional state. In an emotional agent, for instance, synthesis determines what state the machine should be in. Such studies have been carried out for a long time; in particular, emotional synthesis of speech and facial expressions has been under way for almost 30 years and is still being studied.

IV. CONCLUSION

This paper examined emotional-signal extraction and sensibility/psychology recognition technology, a base technology that will lead the coming Fourth Industrial Revolution. Emotional-signal extraction and emotional/psychological recognition technology, a human-centered technology that will shape the quality of our lives in the future, ultimately aims to provide products and services based on individual emotional information. Gradually converging this technology will require technical and legislative overhaul in some areas.

This requires careful consideration and a measured approach. In addition, development of relevant technologies such as standards, emotional-information exchange formats, emotional service platforms, and interfaces should be verified, and further national-led standardization should be carried out. Policies should be implemented to preemptively research and standardize the sensibility-interaction communication network and the sensibility-interaction protocol technology that communicate perceived emotional information and the emotional environment and situation, something not yet attempted even in advanced countries. Artificial intelligence also needs to be understood as a new paradigm that will go beyond mere technological change and create changes across other industries and society. Deep learning technology is self-evolving and growing smarter, leading toward an intelligence explosion. Such an intelligence explosion, in which smart machines are used to create smarter machines that in turn lead to yet smarter ones, was already predicted in 1965.

Artificial intelligence systems already have great influence on many aspects of life. Accordingly, this paper presented the problems of affective computing, which is still somewhat immature, and the models most important to it. Human emotions are accompanied by changes in heart rate, sweat secretion, and body temperature; these physical changes motivate the exploration and study of basic models using autonomic nervous system responses consisting of the sympathetic and parasympathetic nerves. Going forward, issues of artificial intelligence, big data, and security are expected to arise continuously and to be applied across many industries.

Affective computing is also expected to create high added value in various markets, including advertising, online education, and games, as well as in the medical sector, by understanding, researching, and analyzing human behavior. As artificial intelligence has recently become a core social issue, interest in human emotions, including the connection between mental health and physical illness, has been growing alongside economic development, advances in medical technology, and so on. Attempts to study human sensibility are expected to expand further. Because affective computing research handles private personal information, the accompanying problems of personal-information leakage and hacking must be resolved. This paper has therefore studied how such affective computing can be applied.

REFERENCES

[1] A. Doan, P. Domingos, and A. Y. Halevy, "Reconciling schemas of disparate data sources: A machine-learning approach," in Proceedings of the 2001 ACM SIGMOD International Conference on Management of Data, 2001.

[2] B.-G. Kim and D.-J. Park, "Novel target segmentation and tracking based on fuzzy membership distribution for vision-based target tracking system," Image and Vision Computing, vol. 24, no. 12, pp. 1319-1331, 2006.

[3] J.-H. Huh and Y.-S. Seo, "Understanding edge computing: Engineering evolution with artificial intelligence," IEEE Access, vol. 7, pp. 164229-164245, 2019.

[4] M. Chen, S. Mao, and Y. Liu, "Big data: A survey," Mobile Networks and Applications, vol. 19, no. 2, pp. 171-209, 2014.

[5] S. Lee, H. Woo, and Y. Shin, "Study on personal information leak detection based on machine learning," Advanced Science Letters, vol. 23, no. 12, pp. 12818-12821, 2017.

[6] J.-H. Huh, S. Otgonchimeg, and K. Seo, "Advanced metering infrastructure design and test bed experiment using intelligent agents: Focusing on the PLC network base technology for Smart Grid system," The Journal of Supercomputing, vol. 72, no. 5, pp. 1862-1877, 2016.

[7] Q.-T. Tran and H.-S. Le, "An empirical study on continuance using intention of OTT apps with young generation," Advanced Multimedia and Ubiquitous Engineering, vol. 2019, pp. 219-229, 2019.

[8] Y. Kim and M. Chung, "An approach to hyperparameter optimization for the objective function in machine learning," Electronics, vol. 8, no. 11, p. 1267, 2019.

[9] J.-H. Huh, Smart Grid Test Bed Using OPNET and Power Line Communication, IGI Global, 2017.

[10] S.-K. Kim, H. T. Kwon, Y. K. Kim, Y. P. Park, D. W. Keum, and U. M. Kim, "A study on application method for automation solution using blockchain dApp platform," in Proceedings of the International Conference on Parallel and Distributed Computing: Applications and Technologies, Singapore, pp. 444-458, 2018.

[11] Y. Bengio, A. Courville, and P. Vincent, "Representation learning: A review and new perspectives," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798-1828, 2013.

[12] I. Wachsmuth, M. Lenzen, and G. Knoblich, Embodied Communication in Humans and Machines, Oxford University Press, 2008.

[13] R. W. Picard, "Computer learning of subjectivity," ACM Computing Surveys, vol. 27, no. 4, pp. 621-623, 1995.

[14] C. Darwin, The Expression of the Emotions in Man and Animals, London: Murray, 1872.

[15] J. Eckman, J. D. Meltzer, and B. Latane, "Gregariousness in rats as a function of familiarity of environment," Journal of Personality and Social Psychology, vol. 11, no. 2, p. 107, 1969.

[16] A. H. Sodhro, S. Pirbhulal, and V. H. C. de Albuquerque, "Artificial intelligence-driven mechanism for edge computing-based industrial applications," IEEE Transactions on Industrial Informatics, vol. 15, no. 7, pp. 4235-4243, 2019.

[17] Z. Wang, J. L. Peterson, C. Rea, and D. Humphreys, "Special issue on machine learning, data science, and artificial intelligence in plasma research," IEEE Transactions on Plasma Science, vol. 48, no. 1, pp. 1-2, 2020.

[18] Y. Wang, S. Kwong, H. Leung, J. Lu, M. H. Smith, and L. Trajkovic, "Brain-inspired systems: A transdisciplinary exploration on cognitive cybernetics, humanity, and systems science toward autonomous artificial intelligence," IEEE Systems, Man, and Cybernetics Magazine, vol. 6, no. 1, pp. 6-13, 2020.

[19] T. Watkins, "Cosmology of artificial intelligence project: Libraries, makerspaces, community and AI literacy," ACM AI Matters, vol. 4, no. 5, pp. 134-140, 2019.

[20] Y.-S. Seo and J.-H. Huh, "Automatic emotion-based music classification for supporting intelligent IoT applications," Electronics, vol. 8, no. 2, pp. 1-20, 2019.

[21] J.-H. Kim, B.-G. Kim, P. P. Roy, and D.-M. Jeong, "Efficient facial expression recognition algorithm based on hierarchical deep neural network structure," IEEE Access, vol. 7, pp. 41273-41285, 2019.

[22] D. Jeong, B.-G. Kim, and S.-Y. Dong, "Deep joint spatio-temporal network (DJSTN) for efficient facial expression recognition," Sensors, vol. 20, no. 6, pp. 1-23, 2020.

Authors

Seong-Kyu (Steve) Kim


was born in Seoul, Republic of Korea. In February 2006, he received his master's degree from the Department of Information Communication Engineering, Sungkyunkwan University, Seoul, Korea. In August 2019, he received his Ph.D. (Blockchain and AI) from the Department of Electronic, Electrical and Computer Engineering, Sungkyunkwan University, Suwon.

He started his career in ICT in 1999 at Hyundai Information Technology, where he worked on Hyundai Motor IT R&D research, Hyundai Construction IT R&D research, the Korea Railroad IT project, the Korea Highway Corporation IT project, and the Ministry of Public Administration and Security IT project.

He worked at Samsung from 1999 to 2017, where he was responsible for the Saudi Aramco security (physical and information protection) projects, the Kuwait KNPC security (physical and information protection) projects, and the Singapore Changi Airport security (physical and information protection) projects.

He also lectured on "Introduction to Public Computers" at Songdam University, Yongin (2010 ~ 2011), and on "Security System" at the Sungkyunkwan University Graduate School of Information and Communication (2015).

He conducted CISA, PMP, CISSP, and CPPG lectures at Wise Road, an institution accredited by the Ministry of Employment and Labor (2010 ~ 2016), and lectured on computer engineering at Hackers Lab, also accredited by the Ministry of Employment and Labor (2016).

He lectured on industrial security management at "HackersLab Edu," an educational institution certified by the Ministry of Employment and Labor (2010 ~ 2016).

In addition, he received the Best Paper Award at the Korea Multimedia Society (MITA) in 2019 and the Best Paper Award at the Vietnam-Korea Joint Workshops in 2020.

He holds international certifications including CISA (Certified Information Systems Auditor), CISSP (Certified Information Systems Security Professional), PMP (Project Management Professional), ITIL Foundation, CCNP, SCJP, ISE, CPPG, ISO 27001, ISO 19011, ISO 20000, ISO 9000, and ISO 22301.

Currently, he is Vice President of HackersLab Partners in Kuala Lumpur, Malaysia, Adjunct Professor in the Dept. of IT, Sungkyunkwan University, Seoul, Republic of Korea, and Adjunct Professor in the Dept. of Smart IT, Hanyang Women's University, Seoul, Republic of Korea. His research interests include Blockchain, AI, Big Data, Smart Grid, Network Security, IoT, and System Architecture.