
Int. J. Appl. Math. Comput. Sci., 2004, Vol. 14, No. 2, 201-209


A NUMERICAL PROCEDURE FOR FILTERING AND EFFICIENT HIGH-ORDER SIGNAL DIFFERENTIATION

SALIM IBRIR*, SETTE DIOP**

* Department of Automated Production, École de Technologie Supérieure, 1100, rue Notre Dame Ouest, Montreal, Québec, H3C 1K3, Canada, e-mail: s-ibrir@gpa.etsmtl.ca

** Laboratoire des Signaux et Systèmes, CNRS, Supélec, 3 rue Joliot-Curie, 91190 Gif-sur-Yvette, France, e-mail: sette.diop@lss.supelec.fr

In this paper, we propose a numerical algorithm for filtering and robust signal differentiation. The numerical procedure is based on the solution of a simplified linear optimization problem. A compromise between smoothing and fidelity with respect to the measurable data is achieved by the computation of an optimal regularization parameter that minimizes the Generalized Cross-Validation (GCV) criterion. Simulation results are given to highlight the effectiveness of the proposed procedure.

Keywords: generalized cross-validation, smoothing, differentiation, spline functions, optimization

1. Introduction

In many estimation and observation problems, estimating the unmeasured system dynamics turns on estimating the derivatives of the measured system outputs from discrete samples of measurements (Diop et al., 1993; Gauthier et al., 1992; Ibrir, 1999). A model of the signal dynamics may be of crucial help to achieve the desired objective. This has been magnificently demonstrated in pioneering works by R.E. Kalman (1960) and D.G. Luenberger (1971) for signals generated by known linear dynamical systems. Roughly speaking, if a signal model is known, then the resulting smooth signal can be differentiated with respect to time in order to have estimates of higher derivatives of the system output. For example, consider the problem of estimating the \nu - 1 first derivatives y^{(i)}, i = 0, 1, \dots, \nu - 1, of the output of a dynamic system, say, y^{(\nu)} = f(y, \dot y, \ddot y, \dots, y^{(\nu - 1)}), where y may be a vector, and f may contain input derivatives. But we choose not to go into technical details. If the nonlinear function f is known accurately enough, then asymptotic nonlinear observers can be designed using the results from (Ciccarella et al., 1993; Gauthier et al., 1992; Misawa and Hedrick, 1989; Rajamani, 1998; Tornambè, 1992; Xia and Gao, 1989). The proof of the asymptotic convergence of those observers requires various restrictive assumptions on the nonlinear function f. If f is not known accurately enough, then estimators for the derivatives of y may still be obtained via the theory of stabilization

of uncertain systems, see, e.g., (Barmish and Leitmann, 1982; Chen, 1990; Chen and Leitmann, 1987; Dawson et al., 1992; Leitmann, 1981). The practical convergence that is reached by the latter approach needs some matching conditions. We shall also mention the approach via sliding modes as in (Slotine et al., 1987). However, there are at least two practical situations where the available model is not of great help. First, the system model may be too poorly known. Second, it may be too complex for an extension of linear observer design theory. In those situations, and as long as practical (in lieu of asymptotic) convergence is enough for the specific application at hand, we may consider using differentiation estimators which merely ignore the nonlinear function f in their design. Differentiation estimators may be realized in both continuous time and discrete time, as suggested in (Ibrir, 2001; 2003). This motivates enough the study, by observer design theorists, of more sophisticated numerical differentiation techniques for use in more involved control design problems. The numerical analysis literature is where to find the main contributions in the area; see (Anderson and Bloomfield, 1974; Craven and Wahba, 1979; De Boor, 1978; Eubank, 1988; Gasser et al., 1985; Georgiev, 1984; Härdle, 1984; 1985; Ibrir, 1999; 2000; 2003; Müller, 1984; Reinsch, 1967; 1971) for more motivations and basic references. But these results have to be adapted to observer design problems since they were often envisioned so as to be used on an off-line basis.


The main difficulty that we face while designing differentiation observers without any a-priori knowledge of the system dynamics is noise filtering. For this reason, robust signal differentiation can be classified as an ill-posed problem due to the conflicting goals that we aim to realize. Generally, noise filtering, precision, and the peaking phenomenon are three contradictory performances that characterize the robustness of any differentiation system. The field of ill-posed problems has certainly been one of the fastest growing areas in signal processing and applied mathematics. This growth has largely been driven by the needs of applications both in other sciences and in industry. A problem is mathematically ill-posed if its solution does not exist, is not unique, or does not depend continuously on the data. A typical example is the combined interpolation and differentiation problem of noisy data. A problem therein is that there are infinitely many ways to determine the interpolated function values if only the constraint from the data is used. Additional constraints are needed to guarantee the uniqueness of the solution and make the problem well posed. An important constraint in this context is smoothness. By imposing a smoothness constraint, the analytic regularization method converts an ill-posed problem into a well-posed one. This has been used in solving numerous practical problems such as estimating higher derivatives of a signal through potentially noisy data. As will be shown, inverse problems typically lead to mathematical models that are not well posed in Hadamard's sense, i.e., to ill-posed problems. Specifically, this means that their solutions are unstable under data perturbations. Numerical methods that can cope with this problem are the so-called regularization methods.
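The effect of a smoothness constraint can be made concrete with a few lines of NumPy. The sketch below (the helper name, penalty order, and test signal are our illustrative choices, not the paper's) evaluates a regularized fitting criterion of the kind discussed here: a data-fidelity term plus a squared m-th-difference penalty, so that a larger regularization weight favours smoother estimates:

```python
import numpy as np

def regularized_criterion(y_hat, y, lam, m=2, dt=0.1):
    """Data-fidelity term plus a smoothness penalty: the m-th finite
    difference of y_hat, scaled by dt**m, stands in for y_hat^(m)."""
    n = len(y)
    fidelity = np.sum((y_hat - y) ** 2) / n
    deriv = np.diff(y_hat, n=m) / dt ** m      # discrete m-th derivative
    smoothness = np.sum(deriv ** 2) * dt       # Riemann sum of (y^(m))^2
    return fidelity + lam * smoothness

t = np.linspace(0.0, 1.0, 11)
y = t ** 2                       # samples of a smooth signal
# A straight line has zero second difference: only the fidelity term is paid
J_line = regularized_criterion(t, y, lam=1.0)
# The parabola fits the data exactly but pays the curvature penalty
J_data = regularized_criterion(y, y, lam=1.0)
```

For this toy data the parabola pays only the curvature penalty while the straight line pays only the fidelity term; the regularization weight arbitrates between the two, which is exactly the role played by the optimal parameter computed in the sequel.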
These methods have been quite successfully used in the numerical analysis literature in approaches to the ill-posed problem of smoothing a signal from its discrete, potentially uncertain, samples (Anderson and Bloomfield, 1974; Craven and Wahba, 1979; Eubank, 1988; De Boor, 1978). One of these approaches proposed an algorithm for the computation of an optimal spline whose first derivatives are estimates of the first derivatives of the signal. These algorithms suffer from the large amount of computation they imply. One of the famous regularization criteria which have been extensively considered in numerical analysis and statistics (De Boor, 1978) is

\frac{1}{n} \sum_{\ell=1}^{n} \bigl( \hat y(t_\ell) - y_\ell \bigr)^2 + \lambda \int \bigl( \hat y^{(m)}(s) \bigr)^2 \, \mathrm{d}s,    (1)

which ensures a compromise between the closeness to the measured data and the smoothness of the estimate. The balance between the two distances is mastered by a particular choice of the parameter \lambda. It was shown that the minimum of the performance index (1) is a spline function of order 2m, see (De Boor, 1978). Recall that spline functions are smooth piecewise functions. Since their introduction, splines have proved to be very popular in interpolation, smoothing and approximation, and in computational mathematics in general.

In this paper we present the steps of a new discrete-time algorithm which smooths signals from their uncertain discrete samples. The proposed algorithm does not require any knowledge of the statistics of the measurement uncertainties and is based on the minimization of a criterion equivalent to (1). The new discrete-time smoothing criterion is inspired by finite-difference schemes. In this algorithm the regularization parameter is obtained from the optimality condition of the Generalized Cross-Validation criterion as earlier introduced in (Craven and Wahba, 1979). We show that the smooth solution can be given as discrete samples or as a continuous-time spline function defined over the observation interval. Consequently, the regularized solution can be differentiated as many times as needed to estimate smooth higher derivatives of the measured signal.

2. Problem Statement and Solution of the Optimization Problem

Here, we consider the problem of smoothing noisy data with possibly estimating the higher derivatives \hat y^{(\beta)}(t), \beta = 0, 1, \dots, \nu - 1, from the discrete, potentially uncertain, samples y_\ell = y(t_\ell) + e(t_\ell), \ell = i-n+1, \dots, i, measured with an error e(t_\ell) at n distinct instants, by minimizing the cost function

J = \frac{1}{n} \sum_{\ell=i-n+1}^{i} \bigl| \hat y(t_\ell) - y(t_\ell) \bigr|^2 + \lambda \sum_{\ell=i-n+m}^{i} \bigl| \hat y_\ell^{(m)}(\Delta t) \bigr|^2, \qquad i \in V_{\geq n},    (2)

where V_{\geq n} is the set of positive integers greater than or equal to n. For each moving window [t_{i-n+1}, \dots, t_i] of length n, we minimize (2) with respect to \hat y. The first term in the criterion is the well-known least-squares criterion, and the second term represents an equivalent functional to the continuous integral

\int_{t_{i-n+1}}^{t_i} \bigl( \hat y^{(m)}(t) \bigr)^2 \, \mathrm{d}t,

such that \hat y^{(m)}(t) is the continuous m-th derivative of the function \hat y(t). Here \hat y_j^{(m)} denotes the finite-difference scheme of the m-th derivative of the continuous function \hat y(t) at time t = t_j. In order to compute the m-th derivative of y(t) at time t = t_\ell, we will only use the samples \hat y_{\ell-m}, \dots, \hat y_\ell.

Then the last cost function is written in the matrix form as

J = \frac{1}{n} \| Y - \hat Y \|^2 + \lambda \| H \hat Y \|^2,

where

Y = [y_{i-n+1}, \dots, y_i]', \qquad \hat Y = [\hat y_{i-n+1}, \dots, \hat y_i]',

and H is an (n - m) \times n matrix whose general rows are built from the coefficients (-1)^{m-j+1} C_m^{j-1}, j = 1, \dots, m+1, where C_m^j is the standard binomial coefficient. These coefficients come from solving the set of Taylor expansions

\hat y_{\ell-1} = \hat y_\ell - \delta \hat y_\ell^{(1)} + \frac{\delta^2}{2!} \hat y_\ell^{(2)} - \dots + \frac{(-\delta)^m}{m!} \hat y_\ell^{(m)},

\hat y_{\ell-2} = \hat y_\ell - 2\delta \hat y_\ell^{(1)} + \frac{(2\delta)^2}{2!} \hat y_\ell^{(2)} - \dots + \frac{(-2\delta)^m}{m!} \hat y_\ell^{(m)},    (3)

\vdots

\hat y_{\ell-m} = \hat y_\ell - m\delta \hat y_\ell^{(1)} + \frac{(m\delta)^2}{2!} \hat y_\ell^{(2)} - \dots + \frac{(-m\delta)^m}{m!} \hat y_\ell^{(m)},

with respect to the derivatives \hat y_\ell^{(1)}, \hat y_\ell^{(2)}, \dots, \hat y_\ell^{(m)}, where \delta = t_\ell - t_{\ell-1} is the sampling period. This yields the approximation of the m-th derivative of y by the backward finite-difference scheme

\hat y_\ell^{(m)} = (\Delta t)^{-m} \sum_{j=1}^{m+1} (-1)^{m-j+1} C_m^{j-1} \, y_{\ell-m+j-1}.    (4)

We have selected this finite-difference scheme in order to force the matrix H'H to be positive definite. For m = 2, 3, 4, and 5, the smoothness conditions are

\sum_{\ell=2}^{n-1} \bigl[ \hat y_{\ell-1} - 2\hat y_\ell + \hat y_{\ell+1} \bigr]^2,

\sum_{\ell=3}^{n-1} \bigl[ -\hat y_{\ell-2} + 3\hat y_{\ell-1} - 3\hat y_\ell + \hat y_{\ell+1} \bigr]^2,

\sum_{\ell=4}^{n-1} \bigl[ \hat y_{\ell-3} - 4\hat y_{\ell-2} + 6\hat y_{\ell-1} - 4\hat y_\ell + \hat y_{\ell+1} \bigr]^2,

\sum_{\ell=5}^{n-1} \bigl[ -\hat y_{\ell-4} + 5\hat y_{\ell-3} - 10\hat y_{\ell-2} + 10\hat y_{\ell-1} - 5\hat y_\ell + \hat y_{\ell+1} \bigr]^2,

respectively. Consequently, the corresponding matrices for m = 2, 3, 4 are

H_{(n-2) \times n} =
[ 1 -2  1  0 ...       0 ]
[ 0  1 -2  1 ...       0 ]
[        ...             ]
[ 0 ...       1 -2  1    ],

H_{(n-3) \times n} =
[ -1  3 -3  1  0 ...      0 ]
[  0 -1  3 -3  1 ...      0 ]
[         ...               ]
[  0 ...      -1  3 -3  1   ],

H_{(n-4) \times n} =
[ 1 -4  6 -4  1  0 ...       0 ]
[ 0  1 -4  6 -4  1 ...       0 ]
[          ...                 ]
[ 0 ...       1 -4  6 -4  1    ].

The symbol \| \cdot \| denotes the Euclidean norm, and \lambda is a smoothing parameter chosen in the interval [0, \infty[. We look for a solution of the last functional in the space of B-spline functions of order k = 2m. An interpretation of minimizing such a functional concerns the trade-off between the smoothing and the closeness to the data. If \lambda is set to zero, the minimization of (2) leads to a classical problem of least-squares approximation by a B-spline function of degree 2m - 1. We shall use splines because they often exhibit some optimal properties in interpolation and smoothing; in other words, they can often be characterized as solutions to variational problems. Roughly speaking, splines minimize some sort of "energy" functional. This variational characterization leads to a generalized notion of splines, namely, variational splines. For each fixed measurement window, we seek the solution of (2) as

\hat y(t) = \sum_{j=i-n+1}^{i} \alpha_j \, b_{j,2m}(t), \qquad t_{i-n+1} \leq t \leq t_i, \quad i \in V_{\geq n},    (5)

where \alpha \in \mathbb{R}^n, and b_{j,2m}(t) is the j-th B-spline basis function of order 2m. For notational simplicity, \hat y(t) and \alpha are not indexed with respect to the moving window. We assume that the conditions of the optimization problem are the same for each moving window. Thus, the cost function (2) becomes

J = \frac{1}{n} (Y - B\alpha)'(Y - B\alpha) + \lambda \, \alpha' B' R B \alpha,    (6)

such that

R := H'H, \qquad B_{\ell,j} := b_{j,2m}(t_\ell), \quad \ell = i-n+1, \dots, i.

The optimum value of the control vector \alpha is obtained via the optimality condition \mathrm{d}J/\mathrm{d}\alpha = 0. Then we get

-\frac{2}{n} B'(Y - B\alpha) + 2\lambda B' R B \alpha = 0,    (7)

or

\alpha = (n\lambda B' R B + B' B)^{-1} B' Y.    (8)

Consequently,

Y - B\alpha = n\lambda R B (n\lambda B' R B + B' B)^{-1} B' Y.    (9)

From (8), the continuous spline is fully determined. Hence the discrete samples of the regularized solution are computed from

\hat Y = Y - n\lambda R B (n\lambda B' R B + B' B)^{-1} B' Y = \bigl( I - n\lambda R (I + n\lambda R)^{-1} \bigr) Y.    (10)

As for the last equation, note that the discrete regularized samples are given as the output of an FIR filter whose coefficients are functions of the regularization parameter \lambda. The sensitivity of the solution to this parameter is quite important, so the next section is devoted to the optimal calculation of the regularization parameter through the cross-validation criterion.
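For a fixed \lambda, (10) amounts to a single linear solve, since I - n\lambda R(I + n\lambda R)^{-1} = (I + n\lambda R)^{-1}. The sketch below is our illustration, not the paper's code: it builds H directly from the binomial stencil rather than through the B-spline matrices, and returns the regularized samples of one window.

```python
import numpy as np
from math import comb

def smooth_window(y, lam, m=2):
    """Regularized samples of eq. (10): y_hat = (I + n*lam*R)^(-1) y,
    with R = H'H and H the m-th finite-difference matrix."""
    n = len(y)
    stencil = [(-1) ** (m - j) * comb(m, j) for j in range(m + 1)]
    H = np.zeros((n - m, n))
    for r in range(n - m):
        H[r, r:r + m + 1] = stencil
    R = H.T @ H
    # Solve (I + n*lam*R) y_hat = y instead of forming any explicit inverse
    return np.linalg.solve(np.eye(n) + n * lam * R, y)

y_lin = np.linspace(1.0, 3.0, 10)    # affine data: H @ y_lin = 0 for m = 2
y_s = smooth_window(y_lin, lam=5.0)  # smoothing leaves affine data intact
```

Because H annihilates affine data for m = 2, the filter reproduces straight lines exactly for any \lambda, which is the discrete counterpart of the null space of the smoothness penalty.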

3. Computing the Regularization Parameter

In this section we shall present details of a computational method for estimating the optimal regularization parameter in terms of the criterion matrices. We have seen that the spline vector \alpha depends upon the smoothing parameter \lambda. In (Craven and Wahba, 1979), two ways of estimating the smoothing parameter \lambda were given. The first method is called ordinary cross-validation (OCV), which consists in finding the value of \lambda that minimizes the OCV criterion

V_0(\lambda) = \frac{1}{n} \sum_{\ell=i-n+1}^{i} \bigl| \hat y(t_\ell) - y(t_\ell) \bigr|^2, \qquad i = n, n+1, \dots,    (11)

where \hat y(t) is a smooth polynomial spline of degree 2m - 1. Reinsch (1967) suggests, roughly speaking, that if the variance of the noise \sigma^2 is known, then \lambda should be chosen so that

\sum_{\ell=i-n+1}^{i} \bigl| \hat y(t_\ell) - y(t_\ell) \bigr|^2 = n \sigma^2, \qquad i \in V_{\geq n}.    (12)

Let d(\lambda) be the n \times n matrix depending on t_{i-n+1}, t_{i-n+2}, \dots, t_i and \lambda such that

[\hat y(t_{i-n+1}), \hat y(t_{i-n+2}), \dots, \hat y(t_i)]' = d(\lambda) \, [y(t_{i-n+1}), y(t_{i-n+2}), \dots, y(t_i)]'.    (13)

The main result of (Craven and Wahba, 1979) shows that a good estimate of the smoothing parameter \lambda (also called the generalized cross-validation parameter) is the minimizer of the GCV criterion

V(\lambda) = \frac{ \frac{1}{n} \| (I - d(\lambda)) Y \|^2 }{ \bigl[ \frac{1}{n} \mathrm{trace}(I - d(\lambda)) \bigr]^2 }.    (14)

This estimate has the advantage of being free from the knowledge of the statistical properties of the noise. Further, if the minimizer of V(\lambda) is obtained, then the estimates of higher derivatives of the function y(t) can be obtained by differentiating the smooth function \hat y(t).

Now, we outline a computational method to determine the smoothing parameter which minimizes the cross-validation criterion V(\lambda), where the polynomial smoothing spline \hat y(t) is supposed to be a B-spline of degree 2m - 1. Using the definition of d(\lambda), we write

Y - \hat Y = Y - d(\lambda) Y = (I - d(\lambda)) Y.    (15)

From (7), we obtain

Y - \hat Y = n\lambda R B \alpha.    (16)

Substituting (8) in (16), we get

Y - \hat Y = n\lambda R B (n\lambda B' R B + B' B)^{-1} B' Y = n\lambda R (I + n\lambda R)^{-1} Y.    (17)

By comparison with (15), we deduce that

I - d(\lambda) = n\lambda R (I + n\lambda R)^{-1}.    (18)

The GCV criterion becomes

V(\lambda) = \frac{ \frac{1}{n} \| n\lambda R (I + n\lambda R)^{-1} Y \|^2 }{ \bigl[ \frac{1}{n} \mathrm{trace} \bigl( n\lambda R (I + n\lambda R)^{-1} \bigr) \bigr]^2 }.    (19)

We propose the classical Newton method to compute the minimum of V(\lambda). This yields the following iterations:

\lambda_{k+1} = \lambda_k - \frac{ \dot V(\lambda_k) }{ \ddot V(\lambda_k) },    (20)

where \dot V and \ddot V are the first and second derivatives of V with respect to \lambda, respectively. Let

\mu = n\lambda, \qquad u = (\mu R + I)^{-1} Y, \qquad W = (\mu R + I)^{-1}.

Then the criterion V becomes

V(\mu) = \frac{ \frac{1}{n} \| \mu R u \|^2 }{ \bigl[ \frac{1}{n} \mathrm{trace}(\mu R W) \bigr]^2 } =: \frac{ \mathcal{N}(\mu) }{ \mathcal{D}(\mu) },    (21)

with

\mathcal{N}(\mu) = \frac{1}{n} \mu^2 u' R' R u, \qquad \mathcal{D}(\mu) = \Bigl[ \frac{1}{n} \mathrm{trace}(\mu R W) \Bigr]^2.    (22)

Differentiating with respect to \mu, and using

\frac{\mathrm{d}W}{\mathrm{d}\mu} = -W R W, \qquad \frac{\mathrm{d}u}{\mathrm{d}\mu} = \mu R W R u - R u,    (23)

we obtain

\frac{\mathrm{d}\mathcal{N}}{\mathrm{d}\mu} = \frac{2}{n} \mu \, u' R' R (I + S) u, \qquad S = \mu^2 R W R - \mu R,    (24)

and

\frac{\mathrm{d}\mathcal{D}}{\mathrm{d}\mu} = \frac{2}{n^2} \mathrm{trace}(\mu R W) \, \mathrm{trace} \bigl( R W - \mu (R W)^2 \bigr).    (25)

The second derivatives of \mathcal{N} and \mathcal{D} follow by differentiating (24) and (25) once more, using

\frac{\mathrm{d}S}{\mathrm{d}\mu} = 2\mu R \Bigl( W + \frac{\mu}{2} \frac{\mathrm{d}W}{\mathrm{d}\mu} \Bigr) R - R.

Finally, the derivatives

\dot V = \frac{\mathrm{d}}{\mathrm{d}\mu} \Bigl( \frac{\mathcal{N}}{\mathcal{D}} \Bigr), \qquad \ddot V = \frac{\mathrm{d}^2}{\mathrm{d}\mu^2} \Bigl( \frac{\mathcal{N}}{\mathcal{D}} \Bigr)

can be easily computed in terms of the first and second derivatives of \mathcal{N} and \mathcal{D}.

Remark 1. It is possible to use the last algorithm recursively if we take the values of the obtained spline as noisy data for another iteration. In this case the amount of noise in the data is reduced at each step by choosing a new smoothing parameter. The user could fix a priori a limited number of iterations according to the specified application and the time allowed to run the algorithm.
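In place of the Newton iteration (20), whose second derivative requires the machinery above, a crude logarithmic grid search over \lambda already locates the GCV minimum and can seed any further refinement. The sketch below is our code with an illustrative test signal, evaluating (19) directly:

```python
import numpy as np
from math import comb

def gcv_score(lam, y, m=2):
    """GCV criterion (19): V = (1/n)||A y||^2 / [(1/n) trace A]^2,
    where A = n*lam*R (I + n*lam*R)^(-1) = I - d(lam)."""
    n = len(y)
    stencil = [(-1) ** (m - j) * comb(m, j) for j in range(m + 1)]
    H = np.zeros((n - m, n))
    for r in range(n - m):
        H[r, r:r + m + 1] = stencil
    R = H.T @ H
    A = n * lam * R @ np.linalg.inv(np.eye(n) + n * lam * R)
    resid = A @ y
    return (resid @ resid / n) / (np.trace(A) / n) ** 2

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 30)
y = np.sin(2.0 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
grid = np.logspace(-8.0, 0.0, 60)          # candidate smoothing parameters
scores = [gcv_score(l, y) for l in grid]
lam_star = grid[int(np.argmin(scores))]    # starting point for refinement
```

The grid minimizer is coarse but noise-blind, exactly the property claimed for the GCV estimate; Newton's method as in (20) can then polish it.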

4. Connection with Adaptive Filtering

From (10), we have

\hat Y = d(\lambda) Y,    (32)

where d(\lambda) = I - n\lambda R (I + n\lambda R)^{-1}. If we write d(\lambda) = (a_{i,j}(\lambda))_{1 \leq i,j \leq n}, then

\hat y_i = a_{n,1}(\lambda) y_{i-n+1} + a_{n,2}(\lambda) y_{i-n+2} + \dots + a_{n,n}(\lambda) y_i.    (33)

Let \hat y(z) and y(z) be the z-transforms of the discrete signals \hat y_i and y_i, respectively. Then, by taking the z-transform of (33), we obtain

\frac{\hat y(z)}{y(z)} = a_{n,1}(\lambda) z^{-n+1} + a_{n,2}(\lambda) z^{-n+2} + \dots + a_{n,n}(\lambda).    (34)

The resulting system (34) takes the form of an adaptive FIR filter whose coefficients (a_{n,j}(\lambda))_{1 \leq j \leq n} are updated by computing a new \lambda at each iteration i \in V_{\geq n}. The updating law in our case is based on the minimization of the generalized cross-validation criterion V(\lambda).
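The taps a_{n,1}(\lambda), \dots, a_{n,n}(\lambda) in (33) are just the last row of d(\lambda). The helper below (ours, not the paper's) extracts them and illustrates one structural property: since R annihilates constants, the taps sum to one, so a constant input passes through the filter unchanged.

```python
import numpy as np
from math import comb

def fir_taps(lam, n, m=2):
    """Last row of d(lam) = I - n*lam*R (I + n*lam*R)^(-1),
    i.e. the FIR coefficients a_{n,1}, ..., a_{n,n} of eq. (33)."""
    stencil = [(-1) ** (m - j) * comb(m, j) for j in range(m + 1)]
    H = np.zeros((n - m, n))
    for r in range(n - m):
        H[r, r:r + m + 1] = stencil
    R = H.T @ H
    d = np.eye(n) - n * lam * R @ np.linalg.inv(np.eye(n) + n * lam * R)
    return d[-1]

taps = fir_taps(lam=0.01, n=10)
# Since H (and hence R) annihilates constants, the taps sum to one
```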

If we look attentively at the formulation of the generalized cross-validation criterion given by (19), we realize that this criterion is simply a weighted least-squares (LS) performance index. The LS part is given by the numerator term \| n\lambda R (I + n\lambda R)^{-1} Y \|^2, which is exactly the error between the smoothed discrete samples and the noisy discrete data. The weighting parameter is given by the term \frac{1}{n} \bigl[ \frac{1}{n} \mathrm{trace}(I - d(\lambda)) \bigr]^{-2}. Consequently, the filter (34) can be seen as a weighted least-squares (WLS) adaptive FIR filter.

The smoothing strategy given in this paper has a relationship with the classical LMS (Least Mean Squares) adaptive filtering discussed in the signal processing literature. Although our method of updating the filter coefficients is not quite identical to the principle of LMS adaptive filtering, the philosophy of smoothing remains the same. To highlight this fact, let us recall the principle of LMS adaptive filtering. In such a filtering strategy, the time invariance of the filter coefficients is removed. This is done by allowing the filter to change its coefficients according to some prescribed optimization criterion. At each instant, the desired discrete samples are compared with the instantaneous filter output. On the basis of this measure, the adaptive filter changes its coefficients in an attempt to reduce the error. The coefficient update relation is a function of the error signal.
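The LMS principle recalled here can be written in a few lines. This toy sketch (the step size, filter order, and signals are our illustrative choices, not taken from the paper) adapts the coefficients by a stochastic-gradient step on the instantaneous squared error:

```python
import numpy as np

def lms_filter(x, d, order=7, mu=0.05):
    """Toy LMS adaptive FIR filter: w <- w + 2*mu*e*x_window, where e is
    the instantaneous error between the desired sample and the output."""
    w = np.zeros(order)
    y_out = np.zeros(x.size)
    for i in range(order, x.size):
        window = x[i - order:i][::-1]      # most recent sample first
        y_out[i] = w @ window
        e = d[i] - y_out[i]
        w += 2.0 * mu * e * window         # coefficient update law
    return y_out, w

rng = np.random.default_rng(1)
t = np.arange(0.0, 2.0, 0.01)
d = np.sin(2.0 * np.pi * t)                    # desired samples
x = d + 0.05 * rng.standard_normal(t.size)     # noisy filter input
y_f, w = lms_filter(x, d)
early = np.mean((d[7:57] - y_f[7:57]) ** 2)    # error just after start-up
late = np.mean((d[-50:] - y_f[-50:]) ** 2)     # error after adaptation
```

The transient visible in `early` versus `late` is exactly the start-up behaviour against which the GCV-based filter is compared in the simulations below.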

5. Numerical Algorithm

Here, we summarize the regularization procedure in the following steps:

Step 1. Specify the desired spline of order k = 2m and construct the optimal knot sequence which corresponds to the breakpoints t_{i-n+1}, t_{i-n+2}, \dots, t_i. See (De Boor, 1978) for more details on optimal knot computing.

Step 2. Construct the B-spline basis functions that correspond to the optimal knots calculated in Step 1.

Step 3. Construct the matrices H, R = H'H, and B.

Step 4. Compute the optimal value of the smoothing parameter \lambda using (20)-(25).

Step 5. Compute the spline vector \alpha.

Step 6. Compute the derivatives of the spline using the formulae

D^\ell \hat y(t) = \sum_j \alpha_j^{(\ell)} \, b_{j,2m-\ell}(t),

where D^\ell is the \ell-th derivative with respect to time, and

\alpha_j^{(\ell)} = \alpha_j \quad \text{for } \ell = 0, \qquad \alpha_j^{(\ell)} = \frac{ \alpha_j^{(\ell-1)} - \alpha_{j-1}^{(\ell-1)} }{ (t_{j+2m-\ell} - t_j) / (2m - \ell) } \quad \text{for } \ell > 0.    (35)

Step 7. In order to gradually reduce the amount of noise in the obtained smooth spline, the user has to repeat all the steps from the beginning, taking the values of the spline at t_{i-n+2}, \dots, t_{i-1} as noisy data for the next iteration.
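Step 6 differentiates the spline through the coefficient recursion (35), which is the standard B-spline derivative rule also implemented in SciPy. As a quick sanity check (a library substitute for the paper's own construction, not the authors' code), differentiating an interpolating cubic spline of sin recovers cos:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

t = np.linspace(0.0, 2.0 * np.pi, 50)
spl = make_interp_spline(t, np.sin(t), k=3)   # cubic B-spline through samples
dspl = spl.derivative()                       # B-spline coefficient recursion
err = float(np.max(np.abs(dspl(t) - np.cos(t))))
```

On noise-free data the derivative error is limited only by the interpolation step; on noisy data the smoothing of Steps 1-5 must come first, which is the point of the whole procedure.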

Fig. 1. Scheme of the LMS adaptive filter.

By comparison with the algorithm presented in this paper, the imposed signal \hat y_i is not known a priori, but its formulation in terms of the noisy samples and the smoothing parameter \lambda is known. The main advantage of the GCV-based filter is that the minimum of the GCV performance index is computed independently of the knowledge of the statistical properties of the noise. In addition, the information on the smoothing degree m is incorporated in the quadratic performance index (2), which makes the algorithm not only capable of filtering the discrete samples of the noisy signal but also capable of reliably reproducing the continuous higher derivatives of the signal considered.

6. Simulations

In the following simulations we suppose that we measure the noisy signal

y(t) = \cos(30 t) \sin(t) + e(t)    (36)

with the sampling period \delta = 0.01 s. We assume that e(t) is a norm-bounded noise of unknown variance. The exact first and second derivatives of the signal y are

\dot y(t) = -30 \sin(30 t) \sin(t) + \cos(30 t) \cos(t),    (37)

\ddot y(t) = -901 \cos(30 t) \sin(t) - 60 \sin(30 t) \cos(t).    (38)

In Fig. 2 we show the noisy signal (36). In Fig. 3 we plot the exact signal (the signal without noise) together with the



continuous-time spline function, the solution to the minimization problem (2). In the whole simulation the moving window of observation is supposed to be constant, of length 10. In Figs. 4 and 5 we depict the exact derivatives of the original signal together with their estimated values given by the differentiation of the optimal continuous spline with respect to time. In Fig. 6, we compare the output of an LMS adaptive FIR filter of order 7 with the exact samples of the signal y(t). We see clearly the superiority of the GCV-based filter in the first instants of the filtering process in comparison with the transient behaviour of the adaptive FIR filter presented in Fig. 6.
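The closed-form derivatives (37) and (38) are easy to verify numerically. The snippet below (our check, not part of the paper) compares (37) with a centered difference of the noise-free signal at the simulated rate \delta = 0.01 s:

```python
import numpy as np

t = np.arange(0.0, 3.0, 0.01)                  # delta = 0.01 s
y = np.cos(30.0 * t) * np.sin(t)               # noise-free part of (36)
dy = -30.0 * np.sin(30.0 * t) * np.sin(t) + np.cos(30.0 * t) * np.cos(t)   # (37)
ddy = -901.0 * np.cos(30.0 * t) * np.sin(t) - 60.0 * np.sin(30.0 * t) * np.cos(t)  # (38)

dy_num = np.gradient(y, t)                     # centered differences, O(delta^2)
err = float(np.max(np.abs(dy_num[1:-1] - dy[1:-1])))
```

The residual `err` is dominated by the O(\delta^2) truncation of the centered difference, which the fast cos(30t) factor makes non-negligible even at this rate; this is why naive differencing of the noisy samples is not an option and a smoothing spline is differentiated instead.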

Fig. 2. Noisy signal.

Fig. 3. Optimal spline vs. the exact signal.

Fig. 4. First derivative of the optimal spline vs. the exact one.

Fig. 5. Second derivative of the optimal spline and the exact second derivative of the signal.

Fig. 6. Output of the adaptive FIR filter vs. the exact signal.

7. Conclusion

In this paper we have presented a new numerical procedure for reliable filtering and high-order signal differentiation. The design strategy consists in determining the continuous spline signal which minimizes a discrete functional given by the sum of a least-squares criterion and a discrete smoothing term inspired by finite-difference schemes. The control of smoothing and the fidelity to the measurable data is ensured by the computation of one optimal regularization parameter that minimizes the generalized cross-validation criterion. The developed algorithm is able to estimate higher derivatives of a smooth signal only by differentiating its basis functions with respect to time. Satisfactory simulation results were obtained which prove the efficiency of the developed algorithm.

References

Anderson R.S. and Bloomfield P. (1974): A time series approach to numerical differentiation. - Technometrics, Vol. 16, No. 1, pp. 69-75.

Barmish B.R. and Leitmann G. (1982): On ultimate boundedness control of uncertain systems in the absence of matching assumptions. - IEEE Trans. Automat. Contr., Vol. AC-27, No. 1, pp. 153-158.

Chen Y.H. (1990): State estimation for non-linear uncertain systems: A design based on properties related to the uncertainty bound. - Int. J. Contr., Vol. 52, No. 5, pp. 1131-1146.

Chen Y.H. and Leitmann G. (1987): Robustness of uncertain systems in the absence of matching assumptions. - Int. J. Contr., Vol. 45, No. 5, pp. 1527-1542.

Ciccarella G., Mora M.D. and Germani A. (1993): A Luenberger-like observer for nonlinear systems. - Int. J. Contr., Vol. 57, No. 3, pp. 537-556.

Craven P. and Wahba G. (1979): Smoothing noisy data with spline functions: Estimating the correct degree of smoothing by the method of generalized cross-validation. - Numer. Math., Vol. 31, No. 4, pp. 377-403.

Dawson D.M., Qu Z. and Caroll J.C. (1992): On the state observation and output feedback problems for nonlinear uncertain dynamic systems. - Syst. Contr. Lett., Vol. 18, No. 3, pp. 217-222.

De Boor C. (1978): A Practical Guide to Splines. - New York: Springer.

Diop S., Grizzle J.W., Morral P.E. and Stefanopoulou A.G. (1993): Interpolation and numerical differentiation for observer design. - Proc. Amer. Contr. Conf., Evanston, IL, pp. 1329-1333.

Eubank R.L. (1988): Spline Smoothing and Nonparametric Regression. - New York: Marcel Dekker.

Gasser T., Müller H.G. and Mammitzsch V. (1985): Kernels for nonparametric curve estimation. - J. Roy. Statist. Soc., Vol. B47, pp. 238-252.

Gauthier J.P., Hammouri H. and Othman S. (1992): A simple observer for nonlinear systems: Application to bioreactors. - IEEE Trans. Automat. Contr., Vol. 37, No. 6, pp. 875-880.

Georgiev A.A. (1984): Kernel estimates of functions and their derivatives with applications. - Statist. Prob. Lett., Vol. 2, pp. 45-50.

Härdle W. (1984): Robust regression function estimation. - J. Multivar. Anal., Vol. 14, pp. 169-180.

Härdle W. (1985): On robust kernel estimation of derivatives of regression functions. - Scand. J. Statist., Vol. 12, pp. 233-240.

Ibrir S. (1999): Numerical algorithm for filtering and state observation. - Int. J. Appl. Math. Comput. Sci., Vol. 9, No. 4, pp. 855-869.

Ibrir S. (2000): Méthodes numériques pour la commande et l'observation des systèmes non linéaires. - Ph.D. thesis, Laboratoire des Signaux et Systèmes, Univ. Paris-Sud.

Ibrir S. (2001): New differentiators for control and observation applications. - Proc. Amer. Contr. Conf., Arlington, pp. 2522-2527.

Ibrir S. (2003): Algebraic Riccati equation based differentiation trackers. - AIAA J. Guid. Contr. Dynam., Vol. 26, No. 3, pp. 502-505.

Kalman R.E. (1960): A new approach to linear filtering and prediction problems. - Trans. ASME J. Basic Eng., Vol. 82, No. D, pp. 35-45.

Leitmann G. (1981): On the efficacy of nonlinear control in uncertain linear systems. - J. Dynam. Syst. Meas. Contr., Vol. 102, No. 2, pp. 95-102.

Luenberger D.G. (1971): An introduction to observers. - IEEE Trans. Automat. Contr., Vol. 16, No. 6, pp. 596-602.

Misawa E.A. and Hedrick J.K. (1989): Nonlinear observers: A state of the art survey. - J. Dynam. Syst. Meas. Contr., Vol. 111, No. 3, pp. 344-351.

Müller H.G. (1984): Smooth optimum kernel estimators of densities, regression curves and modes. - Ann. Statist., Vol. 12, pp. 766-774.

Rajamani R. (1998): Observers for Lipschitz nonlinear systems. - IEEE Trans. Automat. Contr., Vol. 43, No. 3, pp. 397-400.

Reinsch C.H. (1967): Smoothing by spline functions. - Numer. Math., Vol. 10, pp. 177-183.

Reinsch C.H. (1971): Smoothing by spline functions II. - Numer. Math., Vol. 16, No. 5, pp. 451-454.

Slotine J.J.E., Hedrick J.K. and Misawa E.A. (1987): On sliding observers for nonlinear systems. - J. Dynam. Syst. Meas. Contr., Vol. 109, No. 3, pp. 245-252.

Tornambè A. (1992): High-gain observers for nonlinear systems. - Int. J. Syst. Sci., Vol. 23, No. 9, pp. 1475-1489.

Xia X.-H. and Gao W.-B. (1989): Nonlinear observer design by observer error linearization. - SIAM J. Contr. Optim., Vol. 27, No. 1, pp. 199-216.

Received: 26 January 2004
Revised: 28 May 2004