BASIS OF NONLINEAR CONTROL WITH PIECEWISE AFFINE NEURAL NETWORKS

Charles-Albert Lehalle∗† and Robert Azencott‡

† CMLA (DIAM), École Normale Supérieure de Cachan, France, [email protected]
‡ CMLA (DIAM), École Normale Supérieure de Cachan, France, [email protected]

Abstract: Piecewise affine neural networks can be constructed to emulate any continuous piecewise affine function on any hypercube of their input space. This property can be used to initialize such a network with a set of linear controllers, each of which is known to be efficient locally. This paper exposes and illustrates these properties of piecewise affine neural networks.

INTRODUCTION

Linear control of systems governed by an equation such as x_{n+1} = A x_n + B u_n (where x and u belong to finite-dimensional vector spaces) is well known [3]. Control design is more complex for nonlinear systems (see for instance [1] or [2]). A well-known method consists in linearizing the system at the positions that occur most often, then solving this series of linear control problems, and finally patching the local controls together [6].

Piecewise affine neural networks are identical to feed-forward neural networks except that their activation function is continuous piecewise affine rather than sigmoidal. These networks can implement quite generic continuous piecewise affine functions on polyhedral cells. They therefore stand between a collection of local linear controls and a nonlinear control derived directly from the nonlinear system (the latter is not always possible, especially when the exact dynamics of the model are unknown; see [4] and [9]). Because the learning phase of a neural network by gradient backpropagation in a closed loop is difficult [5], initializing it from a set of local linear controllers is a real benefit; moreover, this initialization suggests a suitable number of hidden units. Since piecewise affine neural networks can be considered approximations of neural networks with sigmoidal activation, some of the results obtained for them can be extended to more general perceptrons.

∗ In the framework of a thesis directed by Pr. R. Azencott at the CMLA (DIAM research group), École Normale Supérieure de Cachan; the thesis is carried out at the Research Department of Renault.

The aim of this paper is to show how the training of these networks can be initialized from linear functions, following the properties established in [7], in particular in a control setting. After a short presentation of the piecewise affine neural network paradigm and its properties (mainly the capability to emulate any piecewise affine function on any given hypercube), a methodology to initialize piecewise affine neural networks for the control of nonlinear systems will be presented, together with an illustration.

I. PROPERTIES OF PIECEWISE AFFINE PERCEPTRONS

Definition 1 (Piecewise affine perceptron) A piecewise affine perceptron (PAP) from ℝ^d to ℝ^a with one hidden layer of N neurons is a function of the form (for X in ℝ^d):

    Ψ(X) = Φ( W^{(2)} · Φ( W^{(1)} · X + b^{(1)} ) + b^{(2)} )        (1)

It is completely specified by the set of weights W = [W^{(1)}, b^{(1)}, W^{(2)}, b^{(2)}], where W^{(1)} is a matrix in M_{N,d}(ℝ), W^{(2)} a matrix in M_{a,N}(ℝ), b^{(1)} a vector in ℝ^N, and b^{(2)} a vector in ℝ^a. Φ applies the response function g to each coordinate of a given vector; for piecewise affine neural networks, g(x) = x for −1 ≤ x ≤ 1, g(x) = 1 for x > 1, and g(x) = −1 for x < −1.
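To make the definition concrete, here is a minimal sketch of equation (1) in Python. It is not from the paper; the array shapes follow the definition above (W1 is N×d, W2 is a×N):

```python
import numpy as np

def g(x):
    """Piecewise affine activation: identity on [-1, 1], saturating outside."""
    return np.clip(x, -1.0, 1.0)

def pap(X, W1, b1, W2, b2):
    """One-hidden-layer piecewise affine perceptron, equation (1):
    Psi(X) = Phi(W2 . Phi(W1 . X + b1) + b2)."""
    return g(W2 @ g(W1 @ X + b1) + b2)
```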

Figure 1: The activation function of a piecewise affine neural network and a sigmoidal one.

Such neural networks can be considered approximations of perceptrons with a sigmoidal, rather than piecewise affine, activation function. This allows some results on PAPs to be extended to more standard perceptrons.

A. Partition of space into polyhedral cells

The initial partition of a neural network is the set C of polyhedral cells generated by the intersections of the N parallel hyperplane pairs (H−(i), H+(i)) which are normal to the i-th row vector of W^{(1)} and contain respectively the points x̃_i^+ and x̃_i^− defined by:

    ⟨W_i^{(1)}, x̃_i^+⟩ = 1 − b_i^{(1)}
    ⟨W_i^{(1)}, x̃_i^−⟩ = −1 − b_i^{(1)}        (2)

The terminal partition is the partition generated by the intersections of parallel hyperplane pairs (H′−(c, j), H′+(c, j)) depending on the second-layer weights and on the value of the hidden-layer output h(X) on each cell of the initial partition. Adjacent cells of this partition on which the PAP behaves identically are merged.

Proposition 1 (PAP specification) A PAP with N hidden neurons has a continuous piecewise affine behavior on each polyhedral cell of its terminal partition. Moreover, the number of cells in its terminal partition is less than or equal to the number of cells in its initial partition.
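As a concrete illustration of the initial partition (a sketch under the definitions above, not from the paper): each hidden unit i contributes the hyperplane pair on which its pre-activation ⟨W_i^{(1)}, X⟩ + b_i^{(1)} crosses ±1, so the cell containing a point X is identified by one ternary flag per unit:

```python
import numpy as np

def initial_cell_signature(X, W1, b1):
    """Cell of the initial partition containing X, one flag per hidden unit i:
    -1 if <W1_i, X> + b1_i < -1 (beyond H-(i)),
     0 if it lies between the hyperplane pair (affine regime),
    +1 if it is > 1 (beyond H+(i))."""
    h = W1 @ X + b1
    return np.where(h < -1.0, -1, np.where(h > 1.0, 1, 0))
```

Two points lie in the same cell of the initial partition exactly when their signatures coincide.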

B. PAPs and continuous piecewise affine functions

While Proposition 1 asserts that a PAP is a continuous piecewise affine function, conversely:

Theorem 1 (Continuous piecewise affine function representation) Given a hypercube K of ℝ^d and a continuous piecewise affine function f on a polyhedral partition of K, there is at least one PAP coinciding with f on the whole hypercube.

There are in fact infinitely many PAPs coinciding with f on the given hypercube. The idea of the proof is given in [7]. This theorem will be used to initialize a PAP with a set of affine functions on different polyhedral regions of the space.

II. NONLINEAR CONTROL WITH PIECEWISE AFFINE PERCEPTRONS

A. Methodology

Given two vector spaces X (the state space) and U (the control space), and a norm I(x, u) on X × U, the purpose is to find a function u = g(x, x∗) (from X² into U) such that the trajectories of the state variable x with dynamics

    x_{n+1} = f(x_n, u_n)
    u_n = g(x_n, x∗)        (3)

(where f is differentiable on X × U) verify lim_{n→∞} I(x_n − x∗, u_n) = 0 for any initial state x_0 and any given target state x∗ in a fixed set Ω. The function g here is a PAP; the automatic learning will be a gradient backpropagation of the cost function.

To control (3), one can first select some linear approximations (f̂_i) of f at chosen points (x_i), then design linear controllers (K_i), each of them being optimal around x_i for (3) with f replaced by its linear version f̂_i, and finally initialize a PAP with a continuous piecewise affine function K constructed to be equal to each K_i around x_i. The main PAP property (that one can construct at least one PAP coinciding with a given continuous piecewise affine function) is used here. The automatic learning, achieved by gradient backpropagation on the PAP, then causes it to emulate a nonlinear function that fixes the deficiencies of K. This makes the construction an efficient initialization for a PAP before the learning phase.

B. Illustration of the initialization for control

The purpose here is to control automobile engine combustion: the engine torque has to be driven rapidly to a desired value. A PAP determines the command to apply to the actuators (controlling the fuel and air quantity entering the combustion chamber), the state variables being given. A simple nonlinear model of engine combustion with a two-dimensional state space (the torque and the speed of the engine) and a two-dimensional control space (the air volume injected into the chamber and the spark ignition timing) has been used.

Two linear quadratic controllers K1 and K2 have been constructed to control the linearized dynamics around the points X1 and X2 by minimizing the cost function I = X′QX + U′RU, where Q and R are diagonal matrices. Each of the obtained controllers is locally efficient but performs poorly in other areas of the input space. Figures 2.a and 2.b show that where one of the controllers is efficient, the other one has difficulty controlling the system.

The PAP has been constructed to be equal to K_i on a hypercube containing X_i for each i. To do this without inflating the number of hidden units, d hidden neurons have been assigned to each area (d = 6 is the dimension of the input space of K1 and K2: 2 for the state space of the model, 2 for the desired values, and 2 for the cumulative deviation between the state variables and their desired values), plus one hidden neuron to ensure that the resulting "patched" piecewise affine function is continuous. A polyhedral partition of the space has been chosen so that the initial and terminal partitions coincide. This involves some combinatorial geometry.
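The paper does not spell out how the local gains K_i (such as K1 and K2 above) are computed from the linearizations; a standard route, sketched here under that assumption, is the discrete-time linear quadratic regulator obtained from the algebraic Riccati equation, with A, B the Jacobians of f at the linearization point x_i:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def local_lqr_gain(A, B, Q, R):
    """LQR gain for the linearized dynamics x_{n+1} = A x_n + B u_n under the
    quadratic cost sum_n (x_n' Q x_n + u_n' R u_n); the control is u_n = -K x_n."""
    P = solve_discrete_are(A, B, Q, R)  # discrete algebraic Riccati equation
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
```

Each gain K_i is then used on a hypercube around x_i, and Theorem 1 guarantees that at least one PAP coincides with the patched piecewise affine controller there.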

Figure 2: Two different controllers in two different situations; the thin line shows the controller designed near an engine speed of 3000 and a resulting torque of 0, the other line the controller designed around 1200 and 0. Each controller is good locally but cannot control in a more general way.
Finally, the obtained PAP can control near X1 and X2; it is an accurate initialization before a learning phase. The learning phase itself is the object of an ongoing study on more complex nonlinear models. The following subsection shows its effect on a simple textbook case.

C. A simple learning example: control of a standard spring

Here are the results of the initialization and learning phases in the following textbook case: controlling the position and speed of a standard spring by applying a force to it (ẍ = −kx + u − f x², [8]). The cost function I in u and x is u² + x². The linear controller used to initialize the PAP has been designed by linearizing the spring dynamics, and a sigmoidal version of the network has been used. Figure 3 shows the evolution of the cost function in two situations: a linear control (a) and a neural network control during its learning phase (b). During the learning phase, the PAP quickly emulates a nonlinear controller.

Figure 3: Evolution of the quadratic cost function through time for standard spring trajectories: for a linear control (a) and for a PAP (b) initialized with it, during its learning phase.
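For the flavor of this school case, here is a minimal simulation sketch in Python; all numerical values (k, the quadratic coefficient, the feedback gains, the horizon) are hypothetical, since the paper does not report them:

```python
import numpy as np

# Hypothetical constants: the paper gives no numerical values.
k, f_nl, dt = 1.0, 0.5, 0.01

def step(x, v, u):
    """One Euler step of the spring dynamics x'' = -k x + u - f x^2."""
    a = -k * x + u - f_nl * x ** 2
    return x + dt * v, v + dt * a

def total_cost(controller, x0=1.0, v0=0.0, n=2000):
    """Simulate a trajectory and accumulate the quadratic cost I = u^2 + x^2."""
    x, v, cost = x0, v0, 0.0
    for _ in range(n):
        u = controller(x, v)
        cost += (u ** 2 + x ** 2) * dt
        x, v = step(x, v, u)
    return cost

# A linear state-feedback law designed on the linearized dynamics x'' = -k x + u
# (hypothetical gains); a learned PAP controller would be compared the same way.
linear_ctrl = lambda x, v: -0.5 * x - 1.0 * v
print(total_cost(linear_ctrl))
```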

CONCLUSION AND FUTURE APPLICATIONS

One of the main purposes of this study is to use piecewise affine perceptrons (PAPs) to tune a neural network designed for adaptive control of automotive engine combustion. To use a neural network for control, one first has to initialize it near an optimal position with an adequate number of hidden units, and then to determine cost functions that make gradient backpropagation efficient; this second part is an ongoing collaboration with the Research Center of Renault. The initialization methodology is to construct the PAP so that it locally emulates a given set of linear controls deduced from linearizations of a strongly nonlinear engine model, and then to let automatic learning drive it towards a nonlinear control that fixes the deficiencies of the linear controls. This appears to be an efficient way to initialize such a network.

References

[1] B. Bonnard. Contrôlabilité des systèmes non linéaires. C. R. Acad. Sci. Paris, Ser. I(292):535–537, 1981.
[2] C. Byrnes. Control theory, inverse spectral problem and real algebraic geometry. Proceedings of the conference held at Michigan Technology, 1983.
[3] d'Azzo. Linear Control System Analysis and Design, Conventional and Modern. McGraw-Hill, 1988.
[4] L. Hunt and R. Su. Design for multi-input nonlinear systems. Proceedings of the conference held at Michigan Technology, pages 268–297, 1983.
[5] S. Jagannathan. Multilayer discrete-time neural-net controller with guaranteed performance. IEEE Transactions on Neural Networks, 7(1):107–130, January 1996.
[6] B. Jakubczyk. On linearization of control systems. Bulletin de l'Académie Polonaise des Sciences, Ser. Sci. Math., XXVIII(9-10):517–522, 1980.
[7] C. A. Lehalle and R. Azencott. Piecewise affine neural networks and nonlinear control. Proceedings of ICANN'98, to be published, 1998.
[8] E. Sontag. Mathematical Control Theory. Springer-Verlag, 1990.
[9] H. Sussmann and R. Brockett. Differential Geometric Control Theory. Birkhäuser, 1982.