Journal of Multimedia Information System
Korea Multimedia Society
Section D

An Evaluation Framework for Defense Informatization Policy

Hosang Jung1, Sangho Lee2,*
1Asia Pacific School of Logistics, Inha University, Incheon, Republic of Korea, hjung@inha.ac.kr
2Department of IT Management, Sun Moon University, Asan-si, Republic of Korea, slee@sunmoon.ac.kr
*Corresponding Author: Sangho Lee, 70 Sunmoon-ro 221beon-gil, Asan-si, Chungcheongnam-do, 31460, Republic of Korea, +82-41-530-2515, slee@sunmoon.ac.kr

© Copyright 2020 Korea Multimedia Society. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Mar 03, 2020; Revised: Mar 24, 2020; Accepted: Mar 26, 2020

Published Online: Mar 31, 2020

Abstract

The well-known sentence, “You can’t manage what you don’t measure,” suggests the importance of measurement. To obtain positive and significant effects from informatization, the Ministry of National Defense (MND) in Korea measures its informatization efforts along various dimensions, such as validity, adequacy, and effectiveness, using the MND evaluation system. The MND divides the defense informatization domain into the defense informatization policy, the defense informatization project, and the defense informatization level, the last of which measures the informatization capability of the MND and the armed forces or their organizations. However, the existing system has limitations, such as ambiguity and low reliability. To overcome the limitations of the existing system for evaluating the defense informatization policy, this study proposes a revised evaluation framework for the policy of defense informatization, together with its indicators and measurement methods.

Keywords: Defense Informatization Policy; Evaluation System; Evaluation Indicator; Informatization Performance

I. INTRODUCTION

The sentence “You can’t manage what you don’t measure,” often attributed to Peter Drucker or W. Edwards Deming [1], suggests the importance of measurement. An evaluation is “an assessment, as systematic and objective as possible, of an on-going or completed project, program or policy, its design, implementation and results” [2], or an assessment of policy effectiveness, efficiency, relevance, and coherence during and after implementation. It seeks to measure outcomes and impacts to determine whether the anticipated benefits of a policy have been realized [3]. It “should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision-making process of both recipients and donors” [2]. An evaluation also refers to the process of judging the value and merits of an object against certain criteria and procedures. It is an important part of the logical process by which public and private organizations shape policy: they make a plan, implement it, evaluate the outcomes and processes, and take follow-up action based on the evaluation results [4].

This study proposes a framework that assesses the defense informatization policy (DIP) at three stages: at the policy-making stage, the validity of policy-making, the appropriateness of the policy-making process, and the adequacy of the performance expected from the policy; at the policy implementation stage, the properness of policy implementation; and at the outcome/performance stage, the achievement of performance objectives, the adequacy of the performance analysis process, and the utilization of analysis results. It also describes quantitative evaluation indicators for each evaluation item.

The remainder of this paper is organized as follows. Section 2 reviews existing work related to the evaluation of the DIP. Section 3 proposes a framework for evaluating the DIP and describes the evaluation indicators and their measurement methods. The last section presents a summary, limitations, and directions for future work.

II. RELATED WORKS

2.1 Korean Government’s Evaluation of Policies

In Korea, the Framework Act on the Evaluation of Government Services is the legal basis for government evaluation [5]. Under the Act, evaluation refers to checking, analyzing, and assessing the establishment, implementation, and results of plans with respect to the policies, projects, and duties carried out by a given institution, corporation, or organization [5]. Government service evaluation refers to the evaluation of policies carried out by the government, public organizations, or corporations to ensure the efficiency, effectiveness, and accountability of government operations. Government evaluation is divided into self-assessment and specific evaluation. Self-assessment is the evaluation by a central administrative agency or local government of the policies under its own jurisdiction. Specific evaluation means that the Prime Minister evaluates the policies of central administrative agencies that need to be managed in an integrated manner.

Another evaluation effort related to informatization policy in the Government of the Republic of Korea is the performance management component of the evaluation of administrative management capability [6]. The Ministry of the Interior and Safety manages this evaluation, which is based on the Framework Act on Public Service Evaluation [7]. Forty-four central government departments, including the Ministry of National Defense (MND), were evaluated in 2019. The evaluation item related to informatization policy is the performance management indicator, whose weight is just seven percent.

2.2 Informatization Evaluation in MND

The MND performs various measurements to obtain significant, realized effects from informatization. The term “defense information” refers to any type of material or knowledge processed by optical or electronic means for defense purposes and expressed in code, letters, voice, sound, or video [8]. These optical or electronic means naturally use and depend on multimedia, which is “a technique (such as the combining of sound, video, and text) for expressing ideas (as in communication, entertainment, or art) in which several media are employed” [9]. The term “defense informatization” refers to the production, distribution, or utilization of defense information to enable activities in the defense sector or to promote their efficiency. The DIP is the policy for defense informatization and follows four principles: strategic informatization for national security in the information society, economic informatization through efficient management of defense information resources, technical informatization to secure excellent defense information technology, and integrated informatization to maximize the utility of defense power [8].

Evaluation in the defense informatization domain is divided into the evaluation of the DIP, the evaluation of defense informatization projects under the Act on Defense Informatization [8], [10], [11], and the evaluation of the defense informatization level [12], [13].

The evaluation of defense informatization projects assesses the establishment, implementation process, and results of project plans for specific defense informatization projects, such as IT procurement projects, information system (IS) development projects, and IS maintenance and operation projects, carried out by defense organizations. The project evaluation consists of three stages: the ex ante project stage, the project progression stage, and the ex post project stage [14]. It should focus on defining the performance indicators when the operational concept of the informatization project is established, reviewing their progress during the progression stage, and evaluating whether they achieved their target values in the ex post stage.

The evaluation of the defense informatization level measures the informatization capacity and readiness of defense organizations [12], [15]. The level evaluation should focus on measuring an organization’s informatization mindset and informatization infrastructure (facilities, equipment, budget, etc.), along with the utilization of the IS operated as a result of informatization projects [16].

The evaluation of the DIP is an annual evaluation of the implementation direction, results, and performance of the policy for all agencies and units of the MND, the Army, the Navy, and the Air Force that promote defense informatization. It should focus on evaluating whether the policy was implemented in accordance with the policy direction for the DIP items included in the Defense Informatization Policy Statement (DIPS) and the Defense Informatization Basic Plan [12], [13], [16]. As an assessment of the adequacy of policy-making and implementation, it checks compliance with the procedures and standards to be considered at each stage of policy-making, implementation, and result measurement; as an evaluation of policy implementation and performance, it checks whether performance indicators were set according to the characteristics of the policy and examines the results of the implemented policy.

2.3 Current Evaluation Method of MND for DIP

The MND’s current evaluation method for the DIP uses evaluation indicators organized by stage, i.e., policy planning, policy implementation, and output/performance of policy, from a systematic perspective [12]. It uses eleven indicators. The policy planning stage comprises two items: the adequacy of planning and the adequacy of the performance plan. The adequacy of planning item uses four indicators: conformity with the DIPS (<a-1> Has the policy been adequately analyzed in accordance with the policy contents in the DIPS?), adequacy of policy analysis (Were the policy measures for achieving the policy objectives prepared appropriately?), fidelity of opinion collection (Did the organization faithfully collect expert opinions when planning?), and sufficiency of the preliminary validity review of the plan (Did the organization fully conduct a preliminary survey when planning? Were the anticipated side effects and their alternatives fully reviewed?). The adequacy of the performance plan item uses two indicators: specificity of performance goal setting (Are the objectives the organization is trying to achieve through the policy sufficiently specific? Has the organization specified concrete targets for evaluating the outcome of the policy? Is there a concrete way to evaluate the effectiveness of the policy?) and relevance of performance indicators (Were the performance indicators and their performance targets set appropriately?).

In the policy implementation stage, three indicators are used to measure the relevance of the implementation process: fidelity to the implementation schedule (Has the organization faithfully implemented the policy in accordance with the schedule?), responsiveness to changes in administrative conditions and circumstances (Has the organization responded appropriately to changes in administrative conditions and circumstances?), and connectivity with relevant institutions and policies (Did the organization establish a proper connectivity and cooperation system with relevant institutions and policies during implementation?).

The output/performance stage comprises two items: achievement of the performance objective and feedback of evaluation results. The achievement of the performance objective item uses the achievement of performance indicator targets (Did the organization achieve the objectives originally set in the policy planning? Did the organization identify strengths and weaknesses through performance analysis? Did the organization suggest appropriate implications from the performance analysis?). The feedback of evaluation results item uses the evaluation result utilization indicator (Are the results of the performance analysis properly reflected in the next plan? Were the results of the performance analysis fully utilized through knowledge management?).

The current method has some limitations. For example, the conformity-with-DIPS indicator (Has the policy been adequately analyzed in accordance with the policy contents in the DIPS?) uses a 5-point Likert scale (Very poor – Poor – Acceptable – Good – Very good) with the check criteria below [12]:

  • “Very good (5 points),” when the policy content in the policy plan matches the policy direction in the DIPS.

  • “Acceptable (3 points),” when the policy does not exactly match the direction in the DIPS but is related to the direction of informatization in the DIPS.

  • “Very poor (1 point),” when the policy is not related to the direction of informatization in the DIPS.

However, the <a-1> evaluation indicator (Has the policy been adequately analyzed in accordance with the policy contents in the DIPS?) may not be meaningful, because the policy to be evaluated cannot be made completely apart from the DIP. Moreover, assigning different points according to the level of conformity may be artificial, since the subjective judgment of the evaluator is involved [16].

To overcome the limitations of the current method, it is necessary to reconstruct the evaluation system for the DIP based on clear, quantitative evaluation indicators that can guarantee objectivity.

III. EVALUATION FRAMEWORK FOR DEFENSE INFORMATIZATION POLICY

The proposed evaluation framework for DIP consists of three stages: policy-making, policy implementation, and outcome/performance of policy, as in the existing system.

Fig. 1 shows the policy-making process. Table 1 presents the evaluation items, their evaluation indicators, and their descriptions in the framework. Seventeen indicators are used.

Fig. 1. Policy-making process.
Table 1. Evaluation indicators for defense informatization policy.

Policy-making stage
- Validity of policy-making (A)
  <A-1> Necessity of policy: Check whether the necessity of the policy was reviewed when making the policy
  <A-2> Timeliness of policy: Check whether the timeliness of the policy was reviewed when making the policy
- Appropriateness of policy-making process (B)
  <B-1> Fidelity of collecting opinions: Check whether the opinions of internal and external experts were collected when making the policy
  <B-2> Fidelity of study in advance: Check whether enough study necessary for making the policy was performed in advance
  <B-3> Fidelity of policy analysis: Check whether a systematic analysis was performed when making the policy
  <B-4> Fidelity of post preparation: Check whether the side effects predicted for the policy were reviewed well
- Adequacy of performance by policy (C)
  <C-1> Representativeness of performance indicators: Ensure that the performance indicators represent the objectives that the organization wants to achieve through the policy
  <C-2> Objectivity of performance indicators: Ensure that performance indicators are set up so that they can be measured or calculated
  <C-3> Redundancy of performance indicators: Check that there is no overlap between performance indicators

Policy implementation stage
- Properness of policy implementation (D)
  <D-1> Compliance with plan: Ensure that the policy proceeded as planned
  <D-2> Responsiveness to change of circumstance: Check whether the organization responded appropriately to changes in administrative conditions or circumstances identified through monitoring
  <D-3> Connectivity with relevant organizations or policies: Confirm the establishment and operation of a connectivity and cooperation system with relevant organizations and policies

Outcome/performance of policy stage
- Achievement of performance objective (E)
  <E-1> Achievement of performance objective: Check the achievement level of the performance objective set when making the policy
- Adequacy of performance analysis process (F)
  <F-1> Concreteness of performance analysis: Check whether the performance analysis specifies the problems of the policy and their causes in detail
  <F-2> Reliability of performance analysis: Check whether the performance analysis was conducted with the participation of internal and external experts
- Utilization of analysis results (G)
  <G-1> Sharing and learning level of analysis result: Ensure that analysis results were shared and learned by the policy-making and implementation organizations
  <G-2> Intellectualization level of analysis result: Check whether the analysis results were systematically accumulated and managed

In the policy-making stage, the validity of policy-making (A), the appropriateness of the policy-making process (B), and the adequacy of performance by the policy (C) are evaluated. The validity of policy-making uses the necessity and timeliness of the policy as evaluation indicators. The appropriateness of the policy-making process is evaluated using the fidelity of collecting opinions, the fidelity of study in advance, the fidelity of policy analysis, and the fidelity of post preparation. The adequacy of performance by the policy is evaluated by three indicators: the representativeness, objectivity, and redundancy of the performance indicators.

In the policy implementation stage, the properness of policy implementation (D) is reviewed with three indicators: compliance with the plan, responsiveness to changes of circumstance, and connectivity with relevant organizations or policies.

In the output/performance stage, the achievement of the performance objective (E), the adequacy of the performance analysis process (F), and the utilization of the analysis results (G) are evaluated. Two indicators, the concreteness and reliability of the performance analysis, are used to evaluate the adequacy of the performance analysis process, while the utilization of the analysis results is based on the sharing and learning level and the intellectualization level of the analysis results.
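To make the hierarchy concrete, the sketch below (ours, for illustration only, and not part of the MND system) encodes the stages, items, and indicators of Table 1 as a Python data structure; the indicator names are abbreviated from the table.

```python
# Illustrative sketch of the proposed framework's hierarchy (Table 1).
# The stages, item codes, and indicator codes follow the paper; the
# encoding itself is ours.
FRAMEWORK = {
    "Policy-making": {
        "(A) Validity of policy-making": ["A-1 Necessity", "A-2 Timeliness"],
        "(B) Appropriateness of policy-making process": [
            "B-1 Fidelity of collecting opinions",
            "B-2 Fidelity of study in advance",
            "B-3 Fidelity of policy analysis",
            "B-4 Fidelity of post preparation",
        ],
        "(C) Adequacy of performance by policy": [
            "C-1 Representativeness",
            "C-2 Objectivity",
            "C-3 Redundancy",
        ],
    },
    "Policy implementation": {
        "(D) Properness of policy implementation": [
            "D-1 Compliance with plan",
            "D-2 Responsiveness to change of circumstance",
            "D-3 Connectivity with relevant organizations or policies",
        ],
    },
    "Outcome/performance": {
        "(E) Achievement of performance objective": ["E-1 Achievement"],
        "(F) Adequacy of performance analysis process": [
            "F-1 Concreteness",
            "F-2 Reliability",
        ],
        "(G) Utilization of analysis results": [
            "G-1 Sharing and learning level",
            "G-2 Intellectualization level",
        ],
    },
}

# Sanity check: the framework defines seventeen indicators in total.
assert sum(len(ind) for item in FRAMEWORK.values() for ind in item.values()) == 17
```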

For the evaluation framework to work well, specific measures, descriptions, and criteria should be provided for each evaluation indicator. The explanation of the evaluation indicator <A-1>, the necessity of policy, is as follows:

  • Indicator: Policy-making >> Validity of policy-making > Necessity of policy

  • Description: Check whether the necessity of the policy was reviewed when making the policy

  • Question: Was the policy fully reviewed in accordance with the policy contents in the DIPS and the National Informatization Basic Plan?

  • Check criteria: See Table 2

  • Source of data: DIPS; Framework Act on National Informatization [17]; National Informatization Basic Plan [17], [18]; formal informatization policy reports published by other private or public research institutes, universities, etc. within the last two years

Table 2. Check criteria for <A-1> necessity of policy indicator.

Measure (score): consistency with the defense informatization policy direction (based on the DIPS) / with the national informatization policy direction (based on the National Informatization Basic Plan) / with the direction of other private or public informatization policies
- Very good (4): matched / - / -
- Good (3): partially matched / - / -
- Acceptable (2): almost not matched / matched / -
- Poor (1): almost not matched / partially matched / -; or almost not matched / almost not matched / matched
- Very poor (0): almost not matched / almost not matched / partially matched or below

* Note. The direction of the informatization policy of the private or public sectors is based on official reports published by private or public research institutes, universities, etc. in the past two years.


Table 2 shows the check criteria for the necessity of policy indicator. This indicator checks whether the policy is consistent with the direction of defense informatization, national informatization, and other public or private informatization. The criteria use a 5-point scale (Very poor (0) – Poor (1) – Acceptable (2) – Good (3) – Very good (4)). If the policy is consistent with the direction of defense informatization in the DIPS, the evaluator marks “Very good.” Even if the policy is not consistent with the direction of defense informatization, the evaluator marks “Acceptable” when it is fully consistent with the direction of national informatization, and “Poor” when it matches only the direction of other informatization policies besides the defense and national ones.
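As a minimal sketch of how the Table 2 criteria could be applied mechanically, the hypothetical function below (ours, not part of the MND method) assumes the three consistency judgments have already been reduced to the labels "matched", "partial", or "none".

```python
def score_a1(defense: str, national: str, other: str) -> int:
    """Hypothetical encoding of the <A-1> check criteria in Table 2.

    Each argument is "matched", "partial", or "none": consistency with the
    DIPS, the National Informatization Basic Plan, and other private or
    public informatization policies, respectively.
    """
    if defense == "matched":
        return 4          # Very good
    if defense == "partial":
        return 3          # Good
    # From here on, the policy is almost not matched with the DIPS.
    if national == "matched":
        return 2          # Acceptable
    if national == "partial" or other == "matched":
        return 1          # Poor
    return 0              # Very poor: partially matched with others, or below

# Example: not matched with the DIPS but matched with the national plan.
assert score_a1("none", "matched", "none") == 2
```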

All evaluation indicators are tabulated in Tables 3 to 19.

Table 3. Indicator <A-1> Necessity of policy.

Indicator: Policy-making >> Validity of policy-making > Necessity of policy
Description: Check whether the necessity of the policy was reviewed when making the policy
Question: Was the policy fully reviewed in accordance with the policy contents of the Defense Informatization Policy Statement (DIPS) and the National Informatization Basic Plan?
Check criteria: consistency with the defense informatization policy direction (based on the DIPS) / with the national informatization policy direction (based on the National Informatization Basic Plan) / with the direction of other private or public informatization policies
- Very good (4): matched / - / -
- Good (3): partially matched / - / -
- Acceptable (2): almost not matched / matched / -
- Poor (1): almost not matched / partially matched / -; or almost not matched / almost not matched / matched
- Very poor (0): almost not matched / almost not matched / partially matched or below
* Note. The direction of the informatization policy of the private or public sectors is based on official reports published by private or public research institutes, universities, etc. in the past two years.
Source of data:
- Defense Informatization Policy Statement (DIPS)
- National Informatization Basic Plan in the Framework Act on National Informatization [17]
- Formal informatization policy reports published by other private or public research institutes, universities, etc. within the last two years

Table 4. Indicator <A-2> Timeliness of policy.

Indicator: Policy-making >> Validity of policy-making > Timeliness of policy
Description: Check whether the timeliness of the policy was reviewed when making the policy
Question: Was the plan adequately reviewed for timely policy planning against the Defense Informatization Policy Statement (DIPS)?
Check criteria: consistency with the priority in the defense informatization policy plan (based on the DIPS) / with the priority in the national informatization policy plan (based on the National Informatization Basic Plan) / with the priority in other private or public informatization policies
- Very good (4): matched / - / -
- Good (3): partially matched / - / -
- Acceptable (2): almost not matched / matched / -
- Poor (1): almost not matched / partially matched / -; or almost not matched / almost not matched / matched
- Very poor (0): almost not matched / almost not matched / partially matched or below
* Note. The direction of the informatization policy of the private or public sectors is based on official reports published by private or public research institutes, universities, etc. in the past two years.
Source of data:
- Defense Informatization Policy Statement (DIPS)
- National Informatization Basic Plan in the Framework Act on National Informatization [17]
- Formal informatization policy reports published by other private or public research institutes, universities, etc. within the last two years

Table 5. Indicator <B-1> Fidelity of collecting opinions.

Indicator: Policy-making >> Appropriateness of policy-making process > Fidelity of collecting opinions
Description: Check whether the opinions of internal and external experts were collected when making the policy
Question: Were the opinions of internal and external experts collected for policy-making and reflected in the policy?
Check criteria: opinion gathering method / expert participation / level of policy reflection
- Very good (4): at least one of the five methods (public hearing, debate, meeting, council, survey) was conducted / both internal and external experts participated / the collected opinions are reflected in policy-making
- Good (3): at least one of the five methods was conducted / both internal and external experts participated / the collected opinions are NOT reflected in policy-making
- Acceptable (2): at least one of the five methods was conducted / only internal or only external experts participated / the collected opinions are reflected in policy-making
- Poor (1): at least one of the five methods was conducted / only internal or only external experts participated / the collected opinions are NOT reflected in policy-making
- Very poor (0): none of the five methods was conducted / - / -
Source of data:
- Public hearing, debate, meeting, council, and survey materials created during policy-making
- List of experts who participated in policy-making (including profiles) and documented expert opinions
- Result report showing the items reflected in policy-making (e.g., project closure report)

Table 6. Indicator <B-2> Fidelity of study in advance.

Indicator: Policy-making >> Appropriateness of policy-making process > Fidelity of study in advance
Description: Check whether enough study necessary for making the policy was performed in advance
Question: Was a preliminary study fully conducted when planning?
Check criteria: level of preliminary study / level of policy reflection
- Very good (4): both quantitative (e.g., statistical survey) and qualitative (e.g., case study) preliminary studies were conducted / the study results were used as an important basis for the necessity of policy-making
- Good (3): both quantitative and qualitative preliminary studies were conducted / the study results were used as an auxiliary basis for the necessity of policy-making
- Acceptable (2): both quantitative and qualitative preliminary studies were conducted / the study results were hardly used as a basis for the necessity of policy-making; or at least one quantitative or qualitative preliminary study was conducted / the study results were used as an important basis for the necessity of policy-making
- Poor (1): at least one quantitative or qualitative preliminary study was conducted / the study results were used as an auxiliary basis for the necessity of policy-making
- Very poor (0): NO preliminary study was conducted / -
Source of data:
- Statistical survey data necessary for policy-making
- Case study data (interview data/documents and recorded files) necessary for policy-making
- Result report showing the items reflected in policy-making (e.g., project closure report)

Table 7. Indicator <B-3> Fidelity of policy analysis.

Indicator: Policy-making >> Appropriateness of policy-making process > Fidelity of policy analysis
Description: Check whether a systematic analysis was performed when making the policy
Question: Were the core analyses (effect analysis, pros and cons analysis) and additional analyses (analyses other than the core analyses) conducted to make the policy?
Check criteria: core analysis (effect analysis, e.g., benefit-cost analysis) / core analysis (pros and cons analysis, e.g., SWOT analysis) / additional analysis
- Very good (4): an effect analysis was conducted / a pros and cons analysis was conducted / additional quantitative and qualitative analyses were conducted
- Good (3): an effect analysis was conducted / a pros and cons analysis was conducted / NO additional quantitative and qualitative analysis was conducted
- Acceptable (2): either an effect analysis or a pros and cons analysis was conducted / additional quantitative and qualitative analyses were conducted
- Poor (1): either an effect analysis or a pros and cons analysis was conducted / NO additional quantitative and qualitative analysis was conducted; or neither an effect analysis nor a pros and cons analysis was conducted / additional quantitative and qualitative analyses were conducted
- Very poor (0): neither an effect analysis nor a pros and cons analysis was conducted / NO additional quantitative and qualitative analysis was conducted
* Note. A simple expert survey or interview is counted as an additional analysis.
Source of data:
- Results of the effect analysis (benefit-cost analysis), pros and cons analysis, etc.
- Expert survey/interview output (documents and recorded files) for policy-making
- Requirement institution document for the informatization project

Table 8. Indicator <B-4> Fidelity of post preparation.

Indicator: Policy-making >> Appropriateness of policy-making process > Fidelity of post preparation
Description: Check whether the side effects predicted for the policy were reviewed well when making the policy
Question: Were the expected side effects and their solutions fully reviewed?
Check criteria: level of side effect review (four perspectives: required personnel, budget, necessary system, and conflict among stakeholders) / level of alternative review
- Very good (4): side effects reviewed from all four perspectives / alternatives suggested for all four perspectives
- Good (3): side effects reviewed from all four perspectives / alternatives suggested for some of the four perspectives; or side effects reviewed from three of the four perspectives / alternatives suggested for all three
- Acceptable (2): side effects reviewed from three of the four perspectives / alternatives suggested for some of the three; or side effects reviewed from two of the four perspectives / alternatives suggested for both
- Poor (1): side effects reviewed from two of the four perspectives / alternatives suggested for some of the two; or side effects reviewed from one of the four perspectives / an alternative suggested for it
- Very poor (0): side effects reviewed from one of the four perspectives / NO alternative suggested; or NO side effects reviewed / -
Source of data:
- List of required personnel, labor costs, and budget statement needed for policy-making
- Survey data on the necessity of policy-making studied before policy-making
- Formal document containing the reasons for the rejection of the proposal
- Report on alternatives against the four expected side effects

Table 9. Indicator <C-1> Representativeness of performance indicators.

Indicator: Policy-making >> Adequacy of performance by policy > Representativeness of performance indicators
Description: Ensure that the performance indicators represent the objectives that the organization wants to achieve through the policy
Question: Are the objectives to be achieved through the policy sufficiently detailed and expressed in performance indicators?
Check criteria: clarity of objectives / connectivity of performance indicators
- Very good (4): the objective of the policy is specifically set for each period (short-term, long-term) / the performance indicators are clearly aligned with the objective of the policy
- Good (3): the objective is specifically set for each period / the performance indicators are NOT clearly aligned with the objective
- Acceptable (2): the objective is roughly set for each period / the performance indicators are clearly aligned with the objective
- Poor (1): the objective is roughly set for each period / the performance indicators are NOT clearly aligned with the objective
- Very poor (0): the objective is NOT defined over time or is absent / -; or - / the performance indicators are NOT defined
Source of data:
- Proposal report showing the policy objectives
- Report showing the connectivity of the policy objectives with the performance indicators
- Report on last year's performance and this year's plan for informatization in the Act on Defense Informatization [8]
- Defense informatization performance evaluation report in the Act on Defense Informatization [8]

Table 10. Indicator <C-2> Objectivity of performance indicators.

Indicator: Policy-making >> Adequacy of performance by policy > Objectivity of performance indicators
Description: Ensure that performance indicators are set up so that they can be measured or calculated
Question: Are there data from which the performance indicators can be measured or calculated, and are the measurement criteria or calculation methods provided?
Check criteria: share of performance indicators with specific measurement criteria or calculation methods
- Very good (4): all performance indicators
- Good (3): two-thirds (2/3) or more
- Acceptable (2): a half or more but less than two-thirds (2/3)
- Poor (1): one-third (1/3) or more but less than a half
- Very poor (0): less than one-third (1/3)
* Note. The performance indicators here refer to the indicators defined in connection with the objectives identified by the <C-1> indicator (representativeness of performance indicators), excluding performance indicators not linked to the performance objectives.
Source of data:
- Report showing the measurement or calculation methods of the performance indicators
- Report on last year's performance and this year's plan for informatization in the Act on Defense Informatization [8]
- Defense informatization performance evaluation report in the Act on Defense Informatization [8]

Table 11. Indicator <C-3> Redundancy of performance indicators.

Indicator: Policy-making >> Adequacy of performance by policy > Redundancy of performance indicators
Description: Check that there is no overlap between performance indicators
Question: Is there no overlap between performance indicators?
Check criteria: share of performance indicators involved in redundancy
- Very good (4): NO redundancy between any performance indicators
- Good (3): redundancy between less than one-third (1/3) of the performance indicators
- Acceptable (2): redundancy between one-third (1/3) or more but less than a half of the indicators
- Poor (1): redundancy between a half or more but less than two-thirds (2/3) of the indicators
- Very poor (0): redundancy between two-thirds (2/3) or more of the performance indicators
* Note. 1. Redundancy of performance indicators means that some indicators measure similar performance. The level of redundancy is determined by the number of indicators involved in the redundancy. For example, if four (A, B, C, D) of ten performance indicators (A-J) are involved in overlapping, regardless of the pattern of duplication (A similar to B and C similar to D, or A, B, C, and D all similar), the level of redundancy is judged to be four-tenths.
2. Whether indicators are duplicates is determined by the expertise of the individual evaluator.
Source of data:
- Checklist that checks the redundancy of the performance indicators
- Performance indicator items

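The redundancy banding above, together with the four-of-ten worked example in the note, can be written as a small scoring function. This is a sketch under our own naming, not MND code:

```python
def score_c3(n_indicators: int, n_overlapping: int) -> int:
    """Sketch of <C-3>: share of indicators involved in redundancy."""
    if n_overlapping == 0:
        return 4                  # Very good: no redundancy at all
    ratio = n_overlapping / n_indicators
    if ratio < 1 / 3:
        return 3                  # Good
    if ratio < 1 / 2:
        return 2                  # Acceptable
    if ratio < 2 / 3:
        return 1                  # Poor
    return 0                      # Very poor

# Worked example from the note: 4 of 10 indicators overlap, so the level of
# redundancy is 4/10 = 0.4, which falls in the "Acceptable" band (score 2).
assert score_c3(10, 4) == 2
```
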
Table 12. Indicator <D-1> Compliance with plan.

Indicator: Policy implementation >> Properness of policy implementation > Compliance with plan
Description: Ensure that the policy proceeded as planned
Question: Did the policy proceed faithfully in accordance with the schedule?
Check criteria: progress of the schedule against the plan
- Very good (4): two-thirds (2/3) or more of the schedule against the plan was completed
- Good (3): a half or more but less than two-thirds (2/3) was completed
- Acceptable (2): one-third (1/3) or more but less than a half was completed
- Poor (1): less than one-third (1/3) was completed
- Very poor (0): the policy was NOT driven at all
* Note. 1. If less than two-thirds of the schedule against the plan was completed but objective evidence is provided that the schedule was delayed due to unavoidable external circumstances (budget change, an order from higher institutions or organizations, etc.), judge it as "Good (3 points)."
2. The schedule against the plan is calculated as the maximum of the ratio of elapsed time to the total schedule and the ratio of input cost to the total budget. For example, if the policy has advanced about four months into a 12-month schedule and 1.2 billion of the total 2 billion budget has been used, the schedule against the plan is max [4/12, 12/20] = 0.6 and the judgment is "Good (3 points)."
Source of data:
- Gantt chart of the policy implementation plan and current progress; documents showing current progress
- Report on last year's performance and this year's plan for informatization in the Act on Defense Informatization [8]

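The progress rule in note 2 (the larger of the elapsed-time ratio and the spent-budget ratio) can be sketched as below; the worked example from the note, max [4/12, 12/20] = 0.6, lands in the "Good (3 points)" band. The function and argument names are ours, and the unavoidable-delay exception in note 1 is not modeled:

```python
def score_d1(elapsed_time: float, total_time: float,
             spent_budget: float, total_budget: float) -> int:
    """Sketch of <D-1>: progress is the larger of the time and cost ratios."""
    progress = max(elapsed_time / total_time, spent_budget / total_budget)
    if progress >= 2 / 3:
        return 4          # Very good
    if progress >= 1 / 2:
        return 3          # Good
    if progress >= 1 / 3:
        return 2          # Acceptable
    if progress > 0:
        return 1          # Poor
    return 0              # Very poor: the policy was not driven at all

# Worked example from note 2: 4 of 12 months elapsed and 1.2 of 2.0 billion
# spent give max(4/12, 12/20) = 0.6, i.e., "Good (3 points)".
assert score_d1(4, 12, 1.2, 2.0) == 3
```
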
Table 13. Indicator <D-2> Responsiveness to change of circumstance.

Indicator: Policy implementation >> Properness of policy implementation > Responsiveness to change of circumstance
Description: Check whether the organization responded appropriately to changes in administrative conditions or circumstances identified through monitoring
Question: Has the organization responded appropriately to changes in administrative conditions or circumstances?
Check criteria: monitoring level / responding level
- Very good (4): regular monitoring data are presented / the organization prepared specific measures after understanding the situation
- Good (3): regular monitoring data are presented / the organization prepared rough measures after understanding the situation
- Acceptable (2): regular monitoring data are presented / the organization did NOT prepare any measure after understanding the situation; or irregular monitoring data are presented / the organization prepared specific measures after understanding the situation
- Poor (1): irregular monitoring data are presented / the organization prepared rough measures after understanding the situation
- Very poor (0): irregular monitoring data are presented / the organization did NOT prepare any measure after understanding the situation; or NO monitoring data / -
Source of data:
- Monitoring data periodically collected during policy implementation
- Checklist for responding to changes in administrative conditions or circumstances
- Result report revised due to changes in administrative conditions or circumstances

Table 14. Indicator <D-3> Connectivity with relevant organizations or policies.

Indicator: Policy implementation >> Properness of policy implementation > Connectivity with relevant organizations or policies
Description: Confirm the establishment and operation of a connectivity and cooperation system with relevant organizations and policies
Question: Has the organization established and operated a connectivity and cooperation system with relevant organizations and policies during implementation?
Check criteria: level of establishment of a cooperation system with relevant organizations / level of connectivity with relevant policies
- Very good (4): the establishment of a cooperative system with relevant organizations was reviewed and the organizations met continuously (more than once) / the relevant policy is identified, the connectivity is reviewed, and an actual connectivity case is presented in detail
- Good (3): the establishment of a cooperative system was reviewed and the organizations met once / the relevant policy is identified and the connectivity is reviewed, but the actual connectivity case is NOT specific
- Acceptable (2): the establishment of a cooperative system was reviewed and the organizations met once / the relevant policy is identified and the connectivity is reviewed, but there is NO case
- Poor (1): the establishment of a cooperative system was only reviewed and the organizations did NOT meet / the relevant policy is identified and the connectivity is reviewed, but there is NO case, or the relevant policy is only identified
- Very poor (0): the establishment of a cooperative system was NOT reviewed / -; or - / the relevant policy is NOT identified
Source of data:
- Memorandum requesting cooperation from relevant institutions
- Report showing connectivity cases with relevant policies
- Joint review plan in the Act on Defense Informatization [8]

Table 15. Indicator <E-1> Achievement of performance objective.

Indicator: Output/performance >> Achievement of performance objective > Achievement of performance objective
Description: Check the achievement level of the performance objective set when making the policy
Question: Did the organization achieve the objectives originally set in the policy?
Check criteria: objective achievement ratio
- Very good (4): two-thirds (2/3) or more
- Good (3): a half or more but less than two-thirds (2/3)
- Acceptable (2): one-third (1/3) or more but less than a half
- Poor (1): above zero but less than one-third (1/3)
- Very poor (0): zero
* Note. If there are multiple performance objectives, the weighted average is used.
Source of data:
- Performance objective achievement ratio report; documents showing current progress
- Report on last year's performance and this year's plan for informatization in the Act on Defense Informatization [8]
- Defense informatization performance evaluation report in the Act on Defense Informatization [8]

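Assuming the weighted average mentioned in the note, the <E-1> banding might be computed as in the sketch below; the function name, arguments, and the example weights are ours, not part of the MND method:

```python
def score_e1(ratios: list[float], weights: list[float]) -> int:
    """Sketch of <E-1>: weighted-average achievement ratio across objectives."""
    total = sum(weights)
    achieved = sum(r * w for r, w in zip(ratios, weights)) / total
    if achieved >= 2 / 3:
        return 4          # Very good
    if achieved >= 1 / 2:
        return 3          # Good
    if achieved >= 1 / 3:
        return 2          # Acceptable
    if achieved > 0:
        return 1          # Poor
    return 0              # Very poor

# Example: objectives achieved at 80% and 40% with weights 0.7 and 0.3 give
# a weighted average of 0.68, which reaches the "Very good" band (score 4).
assert score_e1([0.8, 0.4], [0.7, 0.3]) == 4
```
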
Table 16. Indicator <F-1> Concreteness of performance analysis.

Indicator: Output/performance >> Adequacy of performance analysis process > Concreteness of performance analysis
Description: Check whether the performance analysis specifies the problems of the policy and their causes in detail
Question: Are the problems of the policy and their causes specified?
Check criteria: presentation of the problems of the policy / identification of the causes of the problems
- Very good (4): the problems of the policy were systematically analyzed and presented / the causes of the problems were systematically analyzed and presented
- Good (3): the problems were systematically analyzed and presented / the causes were NOT systematically analyzed and presented; or the problems were generally analyzed and presented / the causes were systematically analyzed and presented
- Acceptable (2): the problems were generally analyzed and presented / the causes were NOT systematically analyzed and presented
- Poor (1): the problems were generally analyzed and presented / the causes were NOT analyzed
- Very poor (0): the problems of the policy were NOT analyzed / -
Source of data:
- Performance analysis report, policy problem analysis report
- Project closure report in the Act on Defense Informatization [8]

Table 17. Indicator <F-2> Reliability of performance analysis.

Indicator: Output/performance >> Adequacy of performance analysis process > Reliability of performance analysis
Description: Check whether the performance analysis was conducted with the participation of internal and external experts
Question: Did the internal and external experts related to the policy actively participate in the performance analysis?
Check criteria: level of internal expert participation / level of external expert participation
- Very good (4): multiple (two or more) internal experts participated in the analysis more than once / multiple (two or more) external experts participated in the analysis more than once
- Good (3): multiple internal experts participated once / multiple external experts participated once
- Acceptable (2): only one internal expert participated once or more / only one external expert participated once or more
- Poor (1): only one internal expert participated once / -; or - / only one external expert participated once
- Very poor (0): NO internal expert participated in the analysis / NO external expert participated in the analysis
* Note. Internal experts are skilled workers in the policy-making and implementation organizations, while external experts are professionals belonging to other organizations.
Source of data:
- List of internal and external experts (including profiles) and their documented opinions related to the policy
- Minutes (or photos of meetings), confirmation of participation, etc.
- Joint review plan in the Act on Defense Informatization [8]

Table 18. Indicator <G-1> Sharing and learning level of analysis result.

Indicator: Output/performance >> Utilization of analysis results > Sharing and learning level of analysis result
Description: Ensure that analysis results were shared and learned by the policy-making and implementation organizations
Question: Were the results of the performance analysis shared and learned by the policy-making and implementation organizations?
Check criteria: level of sharing of analysis results / level of learning of analysis results
- Very good (4): the analysis results are systematically shared within the policy-making and implementation organization / evidence is presented that the analysis results were systematically learned and discussed in the organization
- Good (3): the analysis results are systematically shared within the organization / NO evidence is presented that the analysis results were learned and discussed
- Acceptable (2): the analysis results are non-systematically shared within the organization / evidence is presented that the analysis results were learned and discussed
- Poor (1): the analysis results are non-systematically shared within the organization / NO evidence is presented that the analysis results were learned and discussed
- Very poor (0): the analysis results are NOT shared within the organization / -
* Note. In the sharing of analysis results, "systematically" means sharing through formal meetings, workshops, seminars, etc. with documented data, not verbal sharing.
Source of data:
- Records of sharing the analysis results (e-mail captures, internal system sharing records, etc.)
- Minutes (workshop and seminar materials, etc.) showing that the analysis results were learned and discussed within the organization

Table 19. Indicator <G-2> Intellectualization level of analysis result.

Indicator: Output/performance >> Utilization of analysis results > Intellectualization level of analysis result
Description: Check whether the analysis results were systematically accumulated and managed
Question: Did the organization accumulate and manage the results of the performance analysis using a database system?
Check criteria: accumulation and management level of analysis results
- Very good (4): the results of the performance analysis were systematically accumulated and managed online using relevant specialized solutions
- Good (3): the results were written in specialized documents and accumulated and managed only offline
- Acceptable (2): the results were accumulated and managed in parallel with minutes and other documents
- Poor (1): documents recording the results were presented but were one-time and not accumulated and managed
- Very poor (0): the results were NOT accumulated and managed
* Note. The relevant specialized solutions for the accumulation and management of analysis results are online tools that provide functions for accumulating and managing data, such as databases, data warehouses, and data marts.
Source of data:
- Screen captures of the online database in which the analysis results are saved
- Report on the analysis results
- Defense informatization performance evaluation report in the Act on Defense Informatization [8]

IV. CONCLUSION

This study describes an improved evaluation framework for the DIP, revised from the current defense informatization evaluation method [12]. The proposed framework evaluates the policy at each of the policy-making, policy implementation, and outcome/performance stages. Where possible, it relies not on surveys but on direct evaluation of the policy by evaluators. The evaluation requires measurement effort. For an efficient evaluation that reduces the burden on defense organizations of overlapping national and defense evaluations, the proposed method adopts and stays consistent with the national evaluation method [5], [6], [7] as much as possible. The framework proposed in this study can be applied to assess various other policies, such as multimedia broadcasting policy, ICT convergence policy, and multimedia policy, as well as the DIP.

This study has some limitations, as is the case with most research and methodologies. The performance objective for each policy must be set in advance, yet most policies do not have a clear, quantitative performance objective, indicator, or target [19]. If a policy has no quantitative performance indicators tied to an objective and a target value, the evaluation framework cannot work. Moreover, the proposed evaluation framework is a revision of an existing study [12], not a theory-based construction.

The simple is more beautiful and better than the complex: an evaluation framework that most users can intuitively understand and easily use is more valuable, since low acceptance weakens effectiveness. It is better to evolve an imperfect evaluation framework by repeatedly evaluating the informatization policy than to wait for the development of a fully reasonable, theoretically perfect one. In addition, the framework must be as open as possible, with its methods and results made widely available.

Repeated use of an evaluation framework accumulates experience, which leads to lessons learned and modification requirements that make the framework more useful, so that users can accept it more easily. Through such a virtuous cycle, the evaluation framework for the DIP can gain acceptance among its users and aid in generating effective policies.

Acknowledgements

This manuscript is based on Research Report [16]. The authors wish to thank the editors and the anonymous reviewers for their careful reviews and constructive suggestions. Their suggestions helped strengthen the manuscript. All errors are the sole responsibility of the authors.

REFERENCES

[1].

A. McAfee and E. Brynjolfsson, “Big data: The management revolution,” Harvard Business Review, Vol. 90, No. 10, October 2012, pp. 60-68.

[2].

Development Assistance Committee, “Principles for Evaluation of Development Assistance,” OECD, 1991. https://www.oecd.org/dac/evaluation/2755284.pdf

[3].

Policy and Operations Evaluation Department (IOB) of the Dutch Ministry of Foreign Affairs, “Evaluation Policy and Guidelines for Evaluations,” The Netherlands, October 2009. https://www.oecd.org/dac/evaluation/iob-evaluation-policy-and-guidelines-for-evaluations.pdf

[4].

J. H. Kim, “Seeking ways to improve the use of policy evaluation,” Korean Journal of Policy Analysis and Evaluation, Vol. 26, No. 3, 2016, pp. 205-222. (In Korean)

[5].

Korea Prime Minister, Framework Act on the Evaluation of Government Services, Act No. 14118, Mar. 2016. (In Korean) http://www.law.go.kr/법령/정부업무평가기본법/(14118)

[6].

Korea Ministry of the Interior and Safety, “2019 Central Government Department Self-evaluation (Public Administration Capability Part) Plan (Draft),” May 17, 2019. (In Korean)

[7].

Korea Office for Government Policy Coordination, Framework Act on Public Service Evaluation, Act No. 14839, July 26, 2017. (In Korean) http://www.law.go.kr/%EB%B2%95%EB%A0%B9/%EC%A0%95%EB%B6%80%EC%97%85%EB%AC%B4%ED%8F%89%EA%B0%80%EA%B8%B0%EB%B3%B8%EB%B2%95

[8].

Korea Ministry of National Defense (MND), Act on Establishment of Infrastructure for Informatization of National Defense and Management of Informational Resources for National Defense (Abbreviation: Act on Defense Informatization), Act No. 12553, May 9, 2014. http://elaw.klri.re.kr/kor_service/lawView.do?hseq=32670&lang=ENG

[9].

Merriam-Webster, retrieved Mar. 24, 2020. https://www.merriam-webster.com/dictionary/multimedia

[10].

Korea Ministry of National Defense (MND), Enforcement Decree of Act on Establishment of Infrastructure for Informatization of National Defense and Management of Informational Resources for National Defense (Abbreviation: Act on Defense Informatization), Presidential Decree No. 25906, Dec. 30, 2014. (In Korean) http://www.law.go.kr/%EB%B2%95%EB%A0%B9/%EA%B5%AD%EB%B0%A9%EC%A0%95%EB%B3%B4%ED%99%94%20%EA%B8%B0%EB%B0%98%EC%A1%B0%EC%84%B1%20%EB%B0%8F%20%EA%B5%AD%EB%B0%A9%EC%A0%95%EB%B3%B4%EC%9E%90%EC%9B%90%EA%B4%80%EB%A6%AC%EC%97%90%20%EA%B4%80%ED%95%9C%20%EB%B2%95%EB%A5%A0%20%EC%8B%9C%ED%96%89%EB%A0%B9

[11].

Korea Ministry of National Defense (MND), Defense Informatization Task Directive, MND Directive No. 2129, Feb. 5, 2018. (In Korean) http://www.law.go.kr/행정규칙/국방정보화업무훈령

[12].

H. J. Kwon, J. S. Choi, S. T. Kim, H. J. Lee, and Y. P. Sung, “A study for improving an evaluation systems of defense informatization,” Korea Institute for Defense Analyses, Seoul, Republic of Korea, Research Report, Feb. 2012. (In Korean)

[13].

H. J. Kwon, J. K. Hong, S. T. Kim, and H. J. Lee, “A study on development of defense informatization evaluation and management plan,” Korea Institute for Defense Analyses, Seoul, Republic of Korea, Research Report, Sep. 2016. (In Korean)

[14].

S. Lee, and C.-H. Song, “Evaluation system for defense IT project in Korea: Post-implementation stage,” Journal of Multimedia Information System, Vol. 5, No. 4, 2018, pp. 291-297.

[15].

S. Sim, and S. Lee, “Development of evaluation system for defense informatization level,” Journal of Multimedia Information System, Vol. 6, No. 4, 2019, pp. 271-282.

[16].

S. Lee, H. S. Jung, and S. J. Yoon, “An application with an evaluation methodology for defense informatization and validating the methodology,” Sun Moon University, Asan, Republic of Korea, Research Report, Nov. 2012. (In Korean)

[17].

Korea Ministry of Science and ICT, Framework Act on National Informatization, Act No. 15786, Oct. 16, 2018. http://elaw.klri.re.kr/kor_service/lawView.do?hseq=50665&lang=ENG

[18].

Korea Ministry of Science and ICT, Enforcement Decree Framework Act on National Informatization, Presidential Decree No. 28264, Sep. 5, 2017. http://elaw.klri.re.kr/kor_service/lawView.do?hseq=45333&lang=ENG

[19].

H. J. Lee, S. T. Kim, and H. J. Kwon, “A case study on performance management of public informatization: Based on evaluation of defense informatization policy,” in Proceedings of the 2019 Fall Conference on the Korea Society of Management Information Systems, Seoul, Nov. 2019, pp. 224-231. (In Korean)

Authors

Hosang Jung


Hosang Jung is a professor in the Asia Pacific School of Logistics at Inha University, Republic of Korea. He received the B.S. and Ph.D. degrees in Industrial Systems Engineering from Yonsei University. His research interests include logistics, supply chain management, asset management, and data analytics.

Sangho Lee


Sangho Lee is an associate professor in the Department of IT Management at Sun Moon University, Republic of Korea. He received his Ph.D. from the Korea Advanced Institute of Science and Technology (KAIST) and his master's and bachelor's degrees from Sungkyunkwan University. His research interests are the business value of IT, the evaluation of IT investment projects, software engineering, and the causality between IT investment and performance.