Journal of Multimedia Information System
Korea Multimedia Society
Section C

Algorithms of the Parametric Adaptation of Models of Complex Systems by Discrete Observations

Bakhtiyor Radjabov1, Charos Khidirova1,*
1Tashkent University of Information Technologies, Tashkent, Uzbekistan
*Corresponding Author: Ch.M. Khidirova, Tashkent University of Information Technologies, Tashkent, Uzbekistan, khcharos@gmail.com.

© Copyright 2017 Korea Multimedia Society. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Nov 28, 2017 ; Revised: Dec 06, 2017 ; Accepted: Dec 09, 2017

Published Online: Dec 31, 2017

Abstract

This paper examines approaches to the development of algorithms for the parametric identification of models of complex systems from discrete observations. A modification of the well-known Kaczmarz algorithm, designed for closed systems with perturbations, is constructed on the basis of random search methods, and its statistical properties are investigated.

Keywords: Kaczmarz algorithm; adaptation of complex system; parametric adaptation; discrete observation

I. INTRODUCTION

We formulate the problem of parametric adaptation of models of complex systems (CS) whose observation data arrive in discrete form through the “input-output” channel. Let the CS be monitored on the “input-output” channel with a constant or variable measurement frequency. When the errors caused by disturbances cannot be predicted, the values of the input vector $\bar{x}_{0}(N)$ and of the output $\bar{y}_{0}(N)$ can differ significantly from the values of the model adapted at the previous measurement moments $i=1,2,3,\ldots,N-1$. Therefore, it becomes necessary to determine such values of the model parameters for which the observed output $\bar{y}_{0}(N)$ differs little from the model values $\bar{y}_{M}(N)$.

The purpose of the work is the development of algorithms for the parametric identification of models of complex systems from discrete observations.

II. THE MAIN PART

We consider this problem for the case when the CS is formalized by a linear model with additive interference $\bar{\eta}(K)$:

$\bar{y}(K)=A(K)\,\bar{x}^{T}(K)+\bar{\eta}(K),$
(1)
where $A(K)$ is the matrix of the estimated parameters of dimension $(m\times n)$, $\bar{x}(K)=[x_{1}(K),\ldots,x_{n}(K)]$ is the input vector of the CS, and $\bar{y}(K)=[y_{1}(K),\ldots,y_{m}(K)]$ is the output vector of the CS.

We assume that the CS is defined as a “black box” and its parameters are determined according to (1). The mathematical formulation of the problem of parametric adaptation of a CS from discrete observations is defined as follows:

$J[A(N)]=J\left[A(N),\,\bar{x}(N+1),\,\bar{y}_{0}(N)\right]\rightarrow\min\;\Rightarrow\;A^{*}(N+1),$
(2)
where $A(N)\in\Omega$,
$\bar{y}(N+1)=A^{*}(N+1)\,\bar{x}^{T}(N+1)\approx\bar{y}_{0}(N+1),$
(3)
where $A^{*}(N+1)$ is the adapted parameter matrix of the model at moment $N+1$; $\bar{y}(N+1)$ is the value of the output parameters obtained from the adapted model at moment $N+1$; $\bar{y}_{0}(N+1)$ is the value of the observed output of the object; $J$ is the criterion of model adaptation.

A block diagram of the formalization of problems (2) and (3) is presented in Fig. 1. Here $\varepsilon$ in (4) is the adaptation sensitivity threshold, according to which the adequacy of the model to the object is judged. Obviously, the adaptation procedure is triggered if the following condition is satisfied:

$J[A(N)]>\varepsilon.$
(4)

Fig. 1. Block diagram of the formalization of problems (2) and (3).

Otherwise, the initial (adequate) model, whose parameters were estimated at the previous measurement points, is retained as the adapted model. Algorithms and methods for solving the problem of parametric adaptation of models fall into two groups according to their characteristics. The first group comprises step-by-step algorithms, which are mostly recurrent [1,6]. Within this group, the basic Kaczmarz algorithm [2,5,6], developed for closed systems with perturbations, is the most studied and widely used.
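
For illustration, the decision rule (4) can be sketched in Python as follows (a minimal sketch: the quadratic form of the criterion, the function names, and the way the adaptation step is passed in are assumptions made for the example, not part of the formulation above):

```python
import numpy as np

def adaptation_criterion(A, x, y_obs):
    """Mismatch J[A(N)] between the model output A x and the observed output.
    A quadratic form is assumed here as one concrete choice of the criterion J."""
    return float(np.sum((A @ x - y_obs) ** 2))

def maybe_adapt(A, x_new, y_obs_new, eps, adapt_step):
    """Criterion (4): the adaptation procedure is triggered only when J exceeds
    the sensitivity threshold eps; otherwise the current (adequate) model is kept."""
    if adaptation_criterion(A, x_new, y_obs_new) > eps:
        return adapt_step(A, x_new, y_obs_new)   # re-estimate the model parameters
    return A                                     # retain the previously adapted model
```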

A typical example of a step-by-step algorithm is

$\bar{a}(N+1)=\bar{a}(N)+\dfrac{1}{N+1}\left[y_{0}(N+1)-\bar{a}^{T}(N)\,\bar{x}(N+1)\right]\bar{x}(N+1),$
(5)
where $y_{0}(N+1)$ is the value of the observed output of the object at moment $N+1$.
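
A minimal Python sketch of one step of (5) is given below (the function name and argument layout are assumptions introduced for the example; the update itself follows the formula directly):

```python
import numpy as np

def stepwise_update(a, x, y_obs, N):
    """One step of the recurrent algorithm (5): the estimates a(N) are corrected by the
    output mismatch y0(N+1) - a^T(N) x(N+1), scaled by the diminishing gain 1/(N + 1)."""
    return a + (y_obs - a @ x) / (N + 1) * x
```

The diminishing gain $1/(N+1)$ averages the corrections over the accumulated measurements, which attenuates the influence of random perturbations as the number of observations grows.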

Algorithms of the type (5) can be used both for linear models

$y(N)=\sum_{j=0}^{n}a_{j}(N)\,x_{j}(N);\quad x_{0}=1$
(6)
and for nonlinear models of CS
$y(N)=\sum_{i=0}^{n}\sum_{j=0}^{n}a_{ij}(N)\,x_{i}(N)\,x_{j}(N),\quad x_{00}=1$
(7)
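
Both model forms are straightforward to evaluate; the sketch below (the function names and the explicit extended-input convention $x_{0}=1$ are assumptions made for the example) shows the linear form (6) and the bilinear form (7):

```python
import numpy as np

def linear_model(a, x):
    """Linear model (6): y = sum_j a_j x_j with the free term x_0 = 1."""
    x_ext = np.concatenate(([1.0], x))       # extended input (x_0 = 1, x_1, ..., x_n)
    return float(a @ x_ext)                  # a holds the n + 1 estimates a_0 ... a_n

def bilinear_model(A, x):
    """Nonlinear model (7): y = sum_i sum_j a_ij x_i x_j with the constant term x_00 = 1."""
    x_ext = np.concatenate(([1.0], x))
    return float(x_ext @ A @ x_ext)          # A is the (n+1) x (n+1) matrix of a_ij
```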

Specifying (5), we consider the fundamental properties of the Kaczmarz algorithm and define its modification with reference to the adaptation of CS models of the form (6) or (7). To do this, the CS model or its local stage will be sought in the following scalar form [2,3,7], assuming the presence of several inputs and one output:

$y_{M}(N+1)=\sum_{j=0}^{n}a_{j}(N)\,x_{j}(N+1),$
(8)
where $y_{M}(N+1)$ is the model value of the CS output and $a_{j}(N)$ are the estimates of the model parameters at the $N$-th adaptation step. Assume that at the $N$-th step the model
$y(N)\approx y_{M}(N)=\sum_{i=0}^{n}a_{i}(N)\,x_{i}(N)$
(9)
is adapted. We define the adaptation conditions in such a way that (9) is also satisfied at step $N+1$. To do this, choosing arbitrary initial values of the estimates $a_{i}(0)$ in (8), at each step we refine them toward (9) by the following recurrence formula:
$a_{j}(N+1)=a_{j}(N)+\dfrac{y(N+1)-\sum_{i=0}^{n}a_{i}(N)\,x_{i}(N+1)}{\xi+\sum_{i=0}^{n}x_{i}^{2}(N+1)}\;x_{j}(N+1),$
(10)
where ξ is a parameter characterizing the level of random interference affecting the object.

As can be seen, the numerator of the second term of relation (10) contains the difference between the observed and model values of the outputs, which in general is a random variable with unknown statistical characteristics [1,6]. Obviously, in the absence of interference this difference ceases to be a random variable. In this case the object is considered stationary, and in (10) we can set $\xi=0$.

Introducing the notation:

$\Delta y(N+1)=y(N+1)-\sum_{i=0}^{n}a_{i}(N)\,x_{i}(N+1),\qquad \Delta(N+1)=\dfrac{\Delta y(N+1)}{\xi+\sum_{i=0}^{n}x_{i}^{2}(N+1)},$
the adaptation algorithm (10) can be written in a more compact form:
$a_{i}(N+1)=a_{i}(N)+\Delta(N+1)\,x_{i}(N+1).$
(11)

In vector form, (11) is written as follows:

$\bar{a}(N+1)=\bar{a}(N)+\Delta(N+1)\,\bar{x}^{T}(N+1),$
(12)
where Δ(N+1) is a scalar quantity.
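
A numerical sketch of the update (10)-(12) follows (the function name, the simulated noise-free data, and the convergence check are assumptions introduced for the example):

```python
import numpy as np

def kaczmarz_step(a, x, y_obs, xi=0.0):
    """One adaptation step (10)-(12) of the modified Kaczmarz algorithm.

    a     : current estimates a(N), including the free term (x[0] = 1)
    x     : extended input vector x(N+1)
    y_obs : observed output y(N+1)
    xi    : parameter characterizing the level of random interference
            (xi = 0 for a stationary, noise-free object)
    """
    dy = y_obs - a @ x                   # mismatch Delta_y(N+1)
    delta = dy / (xi + x @ x)            # scalar Delta(N+1)
    return a + delta * x                 # a(N+1) = a(N) + Delta(N+1) x(N+1)

# Example: adapt a three-parameter linear model over a stream of observations.
rng = np.random.default_rng(0)
a_true = np.array([0.5, 2.0, -1.0])
a = np.zeros(3)                          # arbitrary initial estimates a(0)
for _ in range(50):
    x = np.concatenate(([1.0], rng.normal(size=2)))
    a = kaczmarz_step(a, x, a_true @ x, xi=0.0)
print(np.round(a, 3))                    # approaches a_true = [0.5, 2.0, -1.0]
```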

Obviously, the structure of $\Delta(N+1)$ involves the parameter estimates $a_{i}(N)$, whose limiting values are the adapted parameters $a_{i}^{*}(N+1)$ at step $N+1$. Here $a_{i}^{*}(N+1)$ is taken as the solution of the adaptation problem (2) or (3). The error vector is introduced as follows:

$\bar{H}(N+1)=\left[H_{1}(N+1),\,H_{2}(N+1),\,\ldots,\,H_{n}(N+1)\right],$
(13)
where $H_{i}(N+1)=a_{i}^{*}(N+1)-a_{i}(N+1)$.

Obviously, the vectors $\bar{H}(N+1)$ and $\bar{x}(N+1)$ are orthogonal, since

$(\bar{H}(N+1),\,\bar{x}(N+1))=\bar{H}^{T}(N)\,\bar{x}(N+1)-\dfrac{\bar{H}^{T}(N)\,\bar{x}(N+1)}{\bar{x}^{T}(N+1)\,\bar{x}(N+1)}\;\bar{x}^{T}(N+1)\,\bar{x}(N+1)=\bar{H}^{T}(N)\,\bar{x}(N+1)-\bar{H}^{T}(N)\,\bar{x}(N+1)=0,$
(14)
where $(\bar{H}(N+1),\,\bar{x}(N+1))$ denotes the scalar product of the vectors $\bar{H}(N+1)$ and $\bar{x}(N+1)$.

It follows from (14) that, even though the total error (13) decreases in the absence of the external perturbation $\xi$, the errors of the individual parameters $\{H_{i}(N+1)\}$ can take arbitrary values. Therefore, the model values of the parameters adapted at step $N+1$ by algorithm (11) can differ significantly from the true values. This fact is illustrated for the two-dimensional case in Fig. 2.

Fig. 2. Illustration for the two-dimensional case.
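
The orthogonality relation (14) is easy to verify numerically in the noise-free case $\xi=0$; the sketch below (the synthetic data are an assumption for the example) checks that the scalar product of the error vector $\bar{H}(N+1)$ and the input $\bar{x}(N+1)$ vanishes after one update step:

```python
import numpy as np

rng = np.random.default_rng(1)
a_true = rng.normal(size=4)              # limiting (true) parameters a*(N+1)
a = rng.normal(size=4)                   # current estimates a(N)

x = rng.normal(size=4)                   # input x(N+1)
y = a_true @ x                           # noise-free observation, so xi = 0

delta = (y - a @ x) / (x @ x)            # scalar Delta(N+1)
a_next = a + delta * x                   # one step of algorithm (11)

H_next = a_true - a_next                 # error vector H(N+1), cf. (13)
print(abs(H_next @ x))                   # ~1e-16, i.e. H(N+1) is orthogonal to x(N+1)
```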

To analyze the convergence of the adaptation algorithm (10), we consider the squared norm of the error vector $\bar{H}(N+1)$:

$\bar{H}^{T}(N+1)\,\bar{H}(N+1)=\bar{H}^{T}(N)\,\bar{H}(N)\left\{1-\dfrac{\left[\bar{H}^{T}(N)\,\bar{x}(N+1)\right]^{2}}{\bar{H}^{T}(N)\,\bar{H}(N)\;\bar{x}^{T}(N+1)\,\bar{x}(N+1)}\right\}.$
(15)

It is obvious that

$\sin^{2}\left[\bar{H}(N),\,\bar{x}(N+1)\right]=\dfrac{\left[\bar{H}^{T}(N)\,\bar{x}(N+1)\right]^{2}}{\bar{H}^{T}(N)\,\bar{H}(N)\;\bar{x}^{T}(N+1)\,\bar{x}(N+1)}.$
(16)

By virtue of (16), it can be assumed that the maximum convergence rate of the algorithm is attained when the vectors $\bar{H}(N)$ and $\bar{x}(N+1)$ are orthogonal. The convergence rate becomes smaller the less the directions of the vectors $\bar{H}$ and $\bar{x}$ differ in neighboring measurement cycles. The rate of convergence is of second order, i.e., $V=V\left(\sum_{i}H_{i}^{2}\right)$ [4,7]. This regularity for a two-dimensional object is shown in Fig. 3. If $\alpha$ is the angle between adjacent directions, then the adaptation error after $N+1$ cycles is easily determined from the initial state:

$\bar{H}^{T}(N+1)\,\bar{H}(N+1)=\bar{H}^{T}(0)\,\bar{H}(0)\cos^{N+1}\alpha.$

Fig. 3. Convergence regularity for a two-dimensional object.

With zero mathematical expectations and identical variances of the components of the error vector (13), the mathematical expectation of the error after $N+1$ adaptation cycles can be calculated as [1]

$M\left\{\bar{H}^{T}(N+1)\,\bar{H}(N+1)\right\}=\left(1-\frac{1}{n}\right)^{N}\bar{H}^{T}(0)\,\bar{H}(0),$
where $n$ is the dimension of the input parameter vector $\bar{x}$.
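
This decay law can be checked by simulation; the sketch below (the choice of zero-mean Gaussian inputs and all numerical parameters are assumptions made for the example) compares a Monte Carlo estimate of $M\{\bar{H}^{T}(N)\bar{H}(N)\}$ with the factor $(1-1/n)^{N}$:

```python
import numpy as np

def simulate_error_decay(n=5, steps=30, trials=2000, seed=2):
    """Monte Carlo estimate of M{H^T(N) H(N)} for the Kaczmarz adaptation (11)
    with zero-mean, isotropic random inputs and a noise-free object (xi = 0)."""
    rng = np.random.default_rng(seed)
    errors = np.zeros(steps + 1)
    for _ in range(trials):
        a_true = rng.normal(size=n)
        a = np.zeros(n)                          # so H(0) = a_true
        errors[0] += a_true @ a_true
        for k in range(1, steps + 1):
            x = rng.normal(size=n)
            y = a_true @ x                       # noise-free observation
            a = a + (y - a @ x) / (x @ x) * x    # one Kaczmarz step
            H = a_true - a
            errors[k] += H @ H
    return errors / trials

empirical = simulate_error_decay()
theoretical = empirical[0] * (1 - 1.0 / 5) ** np.arange(len(empirical))
print(np.round(empirical[:6], 3))                # measured mean squared error
print(np.round(theoretical[:6], 3))              # (1 - 1/n)^N times the initial error
```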

III. CONCLUSION

By specifying (5), the principal properties of the Kaczmarz algorithm have been considered, and its modification has been proposed with reference to the adaptation of the linear and nonlinear models of the CS:

$y(N)=\sum_{j=0}^{n}a_{j}(N)\,x_{j}(N),\quad x_{0}=1,$
$y(N)=\sum_{i=0}^{n}\sum_{j=0}^{n}a_{ij}(N)\,x_{i}(N)\,x_{j}(N),\quad x_{00}=1.$

For this, the CS model or its local stage is sought in the scalar form (8), assuming multiple inputs and a single output.

By virtue of (16), it can be assumed that the maximum convergence rate of the algorithm is attained when the vectors $\bar{H}(N)$ and $\bar{x}(N+1)$ are orthogonal. The convergence rate becomes smaller the less the directions of the vectors $\bar{H}$ and $\bar{x}$ differ in neighboring measurement cycles. The rate of convergence is of second order, i.e., $V=V\left(\sum_{i}H_{i}^{2}\right)$.

REFERENCES

[1].

L. Ljung, System Identification: Theory for the User, 2nd ed. Upper Saddle River, NJ: Prentice Hall PTR, 1999.

[2].

A. L. Fradkov, I. V. Miroshnik, and V. O. Nikiforov, Nonlinear and Adaptive Control of Complex Systems (Mathematics and Its Applications, Vol. 491). Dordrecht: Kluwer, 1999.

[3].

I. Tyukin, Adaptation in Dynamical Systems. Cambridge University Press, 2011.

[4].

W. M. Haddad and V. S. Chellaboina, Nonlinear Dynamical Systems and Control: A Lyapunov-Based Approach. Princeton, NJ: Princeton University Press, 2008.

[5].

R. M. Gower and P. Richtárik, “Randomized iterative methods for linear systems,” SIAM Journal on Matrix Analysis and Applications, Vol. 36, No. 4, pp. 1660-1690, 2015.

[6].

Y. Sunahara, A. Ohsumi, and N. Imamura, “A method of parameter identification for linear distributed parameter system,” Automatica, Vol. 12, No. 3, pp. 245-256, 1976.

[7].

B. Sh. Radjabov and Ch. M. Khidirova, “Algorithms for continuous and discrete models of adaptation under conditions of incomplete information,” The 4th International Conference on Big Data Applications and Services (BIGDAS2017). Tashkent, Uzbekistan. pp. 47-52. August 15-18, 2017.