State observer

System in control theory

In control theory, a state observer or state estimator is a system that provides an estimate of the internal state of a given real system, from measurements of the input and output of the real system. It is typically computer-implemented, and provides the basis of many practical applications.

Knowing the system state is necessary to solve many control theory problems; for example, stabilizing a system using state feedback. In most practical cases, the physical state of the system cannot be determined by direct observation. Instead, indirect effects of the internal state are observed by way of the system outputs. A simple example is that of vehicles in a tunnel: the rates and velocities at which vehicles enter and leave the tunnel can be observed directly, but the exact state inside the tunnel can only be estimated. If a system is observable, it is possible to fully reconstruct the system state from its output measurements using the state observer.

Typical observer model

Block diagram of a Luenberger observer. The input to the observer gain $L$ is $y - \hat{y}$.

Linear, delayed, sliding mode, high gain, Tau, homogeneity-based, extended and cubic observers are among several observer structures used for state estimation of linear and nonlinear systems. A linear observer structure is described in the following sections.

Discrete-time case

The state of a linear, time-invariant discrete-time system is assumed to satisfy

$$x(k+1) = A x(k) + B u(k)$$
$$y(k) = C x(k) + D u(k)$$

where, at time $k$, $x(k)$ is the plant's state, $u(k)$ is its input, and $y(k)$ is its output. These equations simply say that the plant's current output and its future state are both determined solely by its current state and current input. (Although these equations are expressed in terms of discrete time steps, very similar equations hold for continuous-time systems.) If this system is observable, then the output of the plant, $y(k)$, can be used to steer the state of the state observer.

The observer model of the physical system is then typically derived from the above equations. Additional terms may be included in order to ensure that, on receiving successive measured values of the plant's inputs and outputs, the model's state converges to that of the plant. In particular, the output of the observer may be subtracted from the output of the plant and then multiplied by a matrix $L$; this is then added to the equations for the state of the observer to produce a so-called Luenberger observer, defined by the equations below. Note that the variables of a state observer are commonly denoted by a "hat", $\hat{x}(k)$ and $\hat{y}(k)$, to distinguish them from the variables of the equations satisfied by the physical system.

$$\hat{x}(k+1) = A\hat{x}(k) + L\left[y(k) - \hat{y}(k)\right] + B u(k)$$
$$\hat{y}(k) = C\hat{x}(k) + D u(k)$$

The observer is called asymptotically stable if the observer error $e(k) = \hat{x}(k) - x(k)$ converges to zero as $k \to \infty$. For a Luenberger observer, the observer error satisfies $e(k+1) = (A - LC)e(k)$. The Luenberger observer for this discrete-time system is therefore asymptotically stable when the matrix $A - LC$ has all its eigenvalues inside the unit circle.
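The following is a minimal numerical sketch of the Luenberger observer defined above; the plant matrices, the gain $L$, and the input signal are illustrative assumptions, not values taken from the text.

```python
import numpy as np

# Sketch of a discrete-time Luenberger observer for a hypothetical two-state plant.
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))

# Observer gain chosen so that A - LC has all eigenvalues inside the unit circle.
L = np.array([[0.5],
              [1.0]])
assert np.all(np.abs(np.linalg.eigvals(A - L @ C)) < 1.0)

x = np.array([[1.0], [-1.0]])   # true (unknown) plant state
x_hat = np.zeros((2, 1))        # observer state, deliberately wrong initially

for k in range(50):
    u = np.array([[np.sin(0.1 * k)]])   # arbitrary known input
    y = C @ x + D @ u                   # measured plant output
    y_hat = C @ x_hat + D @ u           # predicted output
    # Luenberger update: copy of the plant model plus output-error correction.
    x_hat = A @ x_hat + B @ u + L @ (y - y_hat)
    x = A @ x + B @ u                   # plant update

print("final estimation error:", (x_hat - x).ravel())   # close to zero
```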

For control purposes, the output of the observer system is fed back to the input of both the observer and the plant through the gain matrix $K$:

$$u(k) = -K\hat{x}(k)$$

The observer equations then become:

$$\hat{x}(k+1) = A\hat{x}(k) + L\left(y(k) - \hat{y}(k)\right) - BK\hat{x}(k)$$
$$\hat{y}(k) = C\hat{x}(k) - DK\hat{x}(k)$$

or, more simply,

$$\hat{x}(k+1) = \left(A - BK\right)\hat{x}(k) + L\left(y(k) - \hat{y}(k)\right)$$
$$\hat{y}(k) = \left(C - DK\right)\hat{x}(k)$$

Due to the separation principle, $K$ and $L$ can be chosen independently without harming the overall stability of the system. As a rule of thumb, the poles of the observer $A - LC$ are usually chosen to converge 10 times faster than the poles of the system $A - BK$.
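As a sketch of this design recipe, the snippet below uses pole placement to pick $K$ and $L$ separately; the plant matrices and the chosen pole locations are assumptions made for illustration only.

```python
import numpy as np
from scipy.signal import place_poles

# Pick the controller poles first, then place the observer poles roughly
# 10x faster (i.e. closer to the origin in discrete time).
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])

controller_poles = np.array([0.8, 0.7])    # desired eigenvalues of A - BK
observer_poles = np.array([0.08, 0.07])    # much faster decay for A - LC

K = place_poles(A, B, controller_poles).gain_matrix
# Observer design uses duality: eig(A - LC) = eig(A.T - C.T L.T).
L = place_poles(A.T, C.T, observer_poles).gain_matrix.T

print("eig(A - BK):", np.linalg.eigvals(A - B @ K))
print("eig(A - LC):", np.linalg.eigvals(A - L @ C))
```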

Continuous-time case

The previous example was for an observer implemented in a discrete-time LTI system. However, the process is similar for the continuous-time case; the observer gain $L$ is chosen so that the continuous-time error dynamics converge to zero asymptotically (i.e., so that $A - LC$ is a Hurwitz matrix).

For a continuous-time linear system

$$\dot{x} = Ax + Bu,$$
$$y = Cx + Du,$$

where $x \in \mathbb{R}^n$, $u \in \mathbb{R}^m$, $y \in \mathbb{R}^r$, the observer looks similar to the discrete-time case described above:

$$\dot{\hat{x}} = A\hat{x} + Bu + L\left(y - \hat{y}\right),$$
$$\hat{y} = C\hat{x} + Du.$$

The observer error $e = x - \hat{x}$ satisfies the equation

$$\dot{e} = (A - LC)e.$$

The eigenvalues of the matrix $A - LC$ can be chosen arbitrarily by appropriate choice of the observer gain $L$ when the pair $[A, C]$ is observable, i.e. when the observability condition holds. In particular, $A - LC$ can be made Hurwitz, so the observer error $e(t) \to 0$ as $t \to \infty$.
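A brief continuous-time sketch, under assumed system matrices and pole locations, is:

```python
import numpy as np
from scipy.signal import place_poles
from scipy.integrate import solve_ivp

# Any observable pair (A, C) would do; the matrices here are illustrative.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

# Choose L so that A - LC is Hurwitz (eigenvalues in the open left half-plane).
L = place_poles(A.T, C.T, np.array([-8.0, -9.0])).gain_matrix.T

# The observer error obeys de/dt = (A - LC) e, independently of the input.
def error_dynamics(t, e):
    return (A - L @ C) @ e

sol = solve_ivp(error_dynamics, (0.0, 2.0), [1.0, -1.0])
print("error at t = 2:", sol.y[:, -1])   # should be close to zero
```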

Peaking and other observer methods

When the observer gain $L$ is high, the linear Luenberger observer converges to the system states very quickly. However, high observer gain leads to a peaking phenomenon in which the initial estimator error can be prohibitively large (i.e., impractical or unsafe to use).[1] As a consequence, nonlinear high-gain observer methods are available that converge quickly without the peaking phenomenon. For example, sliding mode control can be used to design an observer that brings one estimated state's error to zero in finite time even in the presence of measurement error; the other states have error that behaves similarly to the error in a Luenberger observer after peaking has subsided. Sliding mode observers also have attractive noise-resilience properties similar to a Kalman filter.[2][3] Another approach is to apply a multi-observer, which significantly improves transients and reduces observer overshoot. The multi-observer can be adapted to every system where a high-gain observer is applicable.[4]
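The following small experiment (my own illustration, not taken from the cited reference) shows the peaking phenomenon numerically for a double integrator observed through its first state: placing both observer poles at $-a$ makes the error decay very fast, but the transient error in the second state grows roughly linearly with $a$ even though it starts at zero.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])

for a in (10.0, 100.0):
    L = np.array([[2 * a], [a ** 2]])      # places both poles of A - LC at -a
    e0 = np.array([1.0, 0.0])              # initial estimation error
    t_grid = np.linspace(0.0, 5.0 / a, 500)
    e2_peak = max(abs((expm((A - L @ C) * t) @ e0)[1]) for t in t_grid)
    print(f"a = {a:6.1f}  peak |e2| = {e2_peak:8.2f}")   # grows with a
```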

State observers for nonlinear systems

High gain, sliding mode and extended observers are the most common observers for nonlinear systems. To illustrate the application of sliding mode observers for nonlinear systems, first consider the no-input non-linear system:

$$\dot{x} = f(x)$$

where $x \in \mathbb{R}^n$. Also assume that there is a measurable output $y \in \mathbb{R}$ given by

$$y = h(x).$$

There are several non-approximate approaches for designing an observer. The two observers given below also apply to the case when the system has an input. That is,

$$\dot{x} = f(x) + B(x)u$$
$$y = h(x).$$

Linearizable error dynamics

One suggestion by Krener and Isidori[5] and Krener and Respondek[6] can be applied in a situation when there exists a linearizing transformation (i.e., a diffeomorphism, like the one used in feedback linearization) $z = \Phi(x)$ such that in the new variables the system equations read

$$\dot{z} = Az + \phi(y),$$
$$y = Cz.$$

The Luenberger observer is then designed as

$$\dot{\hat{z}} = A\hat{z} + \phi(y) - L\left(C\hat{z} - y\right).$$

The observer error for the transformed variable $e = \hat{z} - z$ satisfies the same equation as in the classical linear case:

$$\dot{e} = (A - LC)e.$$
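As a hedged illustration of this idea (the pendulum example is mine, not from the cited papers), an undamped pendulum measured through its angle, $\dot{x}_1 = x_2$, $\dot{x}_2 = -\sin(x_1)$, $y = x_1$, is already in output-injection form with $A = \begin{bmatrix}0&1\\0&0\end{bmatrix}$, $\phi(y) = \begin{bmatrix}0\\-\sin y\end{bmatrix}$, so the observer above has exactly linear error dynamics:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[4.0],
              [4.0]])          # places eig(A - LC) at -2, -2 (Hurwitz)

def phi(y):
    return np.array([0.0, -np.sin(y)])

dt = 1e-3
z = np.array([1.0, 0.0])        # true state
z_hat = np.array([0.0, 0.0])    # observer state

for _ in range(10_000):         # simple forward-Euler integration over 10 s
    y = C @ z                   # measured output (1-element array)
    z_dot = A @ z + phi(y[0])
    z_hat_dot = A @ z_hat + phi(y[0]) - (L @ (C @ z_hat - y)).ravel()
    z = z + dt * z_dot
    z_hat = z_hat + dt * z_hat_dot

print("estimation error:", z_hat - z)   # both components near zero
```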

As shown by Gauthier, Hammouri, and Othman[7] and Hammouri and Kinnaert,[8] if there exists a transformation $z = \Phi(x)$ such that the system can be transformed into the form

$$\dot{z} = A(u(t))z + \phi(y, u(t)),$$
$$y = Cz,$$

then the observer is designed as

$$\dot{\hat{z}} = A(u(t))\hat{z} + \phi(y, u(t)) - L(t)\left(C\hat{z} - y\right),$$

where $L(t)$ is a time-varying observer gain.

Ciccarella, Dalla Mora, and Germani[9] obtained more advanced and general results, removing the need for a nonlinear transform and proving global asymptotic convergence of the estimated state to the true state using only simple assumptions on regularity.

Switched observers

As discussed for the linear case above, the peaking phenomenon present in Luenberger observers justifies the use of switched observers. A switched observer encompasses a relay or binary switch that acts upon detecting minute changes in the measured output. Some common types of switched observers include the sliding mode observer, nonlinear extended state observer,[10] fixed time observer,[11] switched high gain observer[12] and uniting observer.[13] The sliding mode observer uses non-linear high-gain feedback to drive estimated states to a hypersurface where there is no difference between the estimated output and the measured output. The non-linear gain used in the observer is typically implemented with a scaled switching function, like the signum (i.e., sgn) of the estimated – measured output error. Hence, due to this high-gain feedback, the vector field of the observer has a crease in it so that observer trajectories slide along a curve where the estimated output matches the measured output exactly. So, if the system is observable from its output, the observer states will all be driven to the actual system states. Additionally, by using the sign of the error to drive the sliding mode observer, the observer trajectories become insensitive to many forms of noise. Hence, some sliding mode observers have attractive properties similar to the Kalman filter but with simpler implementation.[2][3]

As suggested by Drakunov,[14] a sliding mode observer can also be designed for a class of non-linear systems. Such an observer can be written in terms of the original variable estimate $\hat{x}$ and has the form

$$\dot{\hat{x}} = \left[\frac{\partial H(\hat{x})}{\partial x}\right]^{-1} M(\hat{x})\operatorname{sgn}(V(t) - H(\hat{x}))$$

where:

  • The $\operatorname{sgn}(\cdot)$ vector extends the scalar signum function to $n$ dimensions. That is,
    $$\operatorname{sgn}(z) = \begin{bmatrix}\operatorname{sgn}(z_1)\\\operatorname{sgn}(z_2)\\\vdots\\\operatorname{sgn}(z_i)\\\vdots\\\operatorname{sgn}(z_n)\end{bmatrix}$$
    for the vector $z \in \mathbb{R}^n$.
  • The vector $H(x)$ has components that are the output function $h(x)$ and its repeated Lie derivatives. In particular,
    $$H(x) \triangleq \begin{bmatrix}h_1(x)\\h_2(x)\\h_3(x)\\\vdots\\h_n(x)\end{bmatrix} \triangleq \begin{bmatrix}h(x)\\L_f h(x)\\L_f^2 h(x)\\\vdots\\L_f^{n-1} h(x)\end{bmatrix}$$
    where $L_f^i h$ is the $i$th Lie derivative of the output function $h$ along the vector field $f$ (i.e., along the trajectories $x$ of the non-linear system). In the special case where the system has no input or has relative degree $n$, $H(x(t))$ is a collection of the output $y(t) = h(x(t))$ and its first $n-1$ derivatives. Because the inverse of the Jacobian linearization of $H(x)$ must exist for this observer to be well defined, the transformation $H(x)$ is guaranteed to be a local diffeomorphism.
  • The diagonal matrix $M(\hat{x})$ of gains is such that
    $$M(\hat{x}) \triangleq \operatorname{diag}(m_1(\hat{x}), m_2(\hat{x}), \ldots, m_n(\hat{x}))$$
    where, for each $i \in \{1, 2, \dots, n\}$, the element $m_i(\hat{x}) > 0$ is chosen suitably large to ensure reachability of the sliding mode.
  • The observer vector $V(t)$ is such that
    $$V(t) \triangleq \begin{bmatrix}v_1(t)\\v_2(t)\\v_3(t)\\\vdots\\v_i(t)\\\vdots\\v_n(t)\end{bmatrix} \triangleq \begin{bmatrix}y(t)\\\{m_1(\hat{x})\operatorname{sgn}(v_1(t)-h_1(\hat{x}(t)))\}_{\text{eq}}\\\{m_2(\hat{x})\operatorname{sgn}(v_2(t)-h_2(\hat{x}(t)))\}_{\text{eq}}\\\vdots\\\{m_{i-1}(\hat{x})\operatorname{sgn}(v_{i-1}(t)-h_{i-1}(\hat{x}(t)))\}_{\text{eq}}\\\vdots\\\{m_{n-1}(\hat{x})\operatorname{sgn}(v_{n-1}(t)-h_{n-1}(\hat{x}(t)))\}_{\text{eq}}\end{bmatrix}$$
    where $\operatorname{sgn}(\cdot)$ here is the ordinary scalar signum function, and $\{\ldots\}_{\text{eq}}$ denotes the "equivalent value operator" of a discontinuous function in sliding mode.

The idea can be briefly explained as follows. According to the theory of sliding modes, once a sliding mode starts, the function $\operatorname{sgn}(v_i(t) - h_i(\hat{x}(t)))$ should be replaced by its equivalent value in order to describe the system behavior (see equivalent control in the theory of sliding modes). In practice, it switches (chatters) with high frequency, with its slow component equal to the equivalent value. Applying an appropriate lowpass filter to remove the high-frequency component, one can obtain the value of the equivalent control, which contains more information about the state of the estimated system. The observer described above uses this method several times to obtain the state of the nonlinear system, ideally in finite time.

The modified observation error can be written in the transformed states $e = H(x) - H(\hat{x})$. In particular,

$$\dot{e} = \frac{\mathrm{d}}{\mathrm{d}t}H(x) - \frac{\mathrm{d}}{\mathrm{d}t}H(\hat{x}) = \frac{\mathrm{d}}{\mathrm{d}t}H(x) - M(\hat{x})\,\operatorname{sgn}(V(t) - H(\hat{x}(t))),$$

and so

$$\begin{bmatrix}\dot{e}_1\\\dot{e}_2\\\vdots\\\dot{e}_i\\\vdots\\\dot{e}_{n-1}\\\dot{e}_n\end{bmatrix} = \underbrace{\begin{bmatrix}h_2(x)\\h_3(x)\\\vdots\\h_{i+1}(x)\\\vdots\\h_n(x)\\L_f^n h(x)\end{bmatrix}}_{\tfrac{\mathrm{d}}{\mathrm{d}t}H(x)} - \underbrace{\begin{bmatrix}m_1(\hat{x})\operatorname{sgn}(v_1(t)-h_1(\hat{x}(t)))\\m_2(\hat{x})\operatorname{sgn}(v_2(t)-h_2(\hat{x}(t)))\\\vdots\\m_i(\hat{x})\operatorname{sgn}(v_i(t)-h_i(\hat{x}(t)))\\\vdots\\m_{n-1}(\hat{x})\operatorname{sgn}(v_{n-1}(t)-h_{n-1}(\hat{x}(t)))\\m_n(\hat{x})\operatorname{sgn}(v_n(t)-h_n(\hat{x}(t)))\end{bmatrix}}_{\tfrac{\mathrm{d}}{\mathrm{d}t}H(\hat{x})}$$

where, in the first row, $v_1(t) = y(t) = h_1(x(t))$, so that $v_1(t) - h_1(\hat{x}(t)) = e_1$.

So:

  1. As long as $m_1(\hat{x}) \geq |h_2(x(t))|$, the first row of the error dynamics, $\dot{e}_1 = h_2(x) - m_1(\hat{x})\operatorname{sgn}(e_1)$, will meet sufficient conditions to enter the $e_1 = 0$ sliding mode in finite time.
  2. Along the $e_1 = 0$ surface, the corresponding equivalent control $v_2(t) = \{m_1(\hat{x})\operatorname{sgn}(e_1)\}_{\text{eq}}$ will be equal to $h_2(x)$, and so $v_2(t) - h_2(\hat{x}) = h_2(x) - h_2(\hat{x}) = e_2$. Hence, so long as $m_2(\hat{x}) \geq |h_3(x(t))|$, the second row of the error dynamics, $\dot{e}_2 = h_3(x) - m_2(\hat{x})\operatorname{sgn}(e_2)$, will enter the $e_2 = 0$ sliding mode in finite time.
  3. Along the $e_i = 0$ surface, the corresponding equivalent control $v_{i+1}(t) = \{\ldots\}_{\text{eq}}$ will be equal to $h_{i+1}(x)$. Hence, so long as $m_{i+1}(\hat{x}) \geq |h_{i+2}(x(t))|$, the $(i+1)$th row of the error dynamics, $\dot{e}_{i+1} = h_{i+2}(x) - m_{i+1}(\hat{x})\operatorname{sgn}(e_{i+1})$, will enter the $e_{i+1} = 0$ sliding mode in finite time.

So, for sufficiently large gains $m_i$, all observer estimated states reach the actual states in finite time. In fact, increasing $m_i$ allows for convergence in any desired finite time so long as each $|h_i(x(0))|$ can be bounded with certainty. Hence, the requirement that the map $H : \mathbb{R}^n \to \mathbb{R}^n$ is a diffeomorphism (i.e., that its Jacobian linearization is invertible) asserts that convergence of the estimated output implies convergence of the estimated state. That is, the requirement is an observability condition.
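The sketch below illustrates this construction for an assumed second-order system with $H(x) = (x_1, x_2)$, so that the Jacobian of $H$ is the identity and the observer reduces to two cascaded sign injections. The equivalent value $\{\cdot\}_{\text{eq}}$ is approximated by a first-order lowpass filter, one common practical choice; the model, gains, and filter time constant are illustrative assumptions, not values from the source.

```python
import numpy as np

# System: x1' = x2, x2' = -x1 - 0.5*x2, measurement y = x1.
def f(x):
    return np.array([x[1], -x[0] - 0.5 * x[1]])

m1, m2 = 3.0, 5.0       # gains large enough to dominate |x2| and |x2'| here
tau = 0.005             # lowpass time constant used to extract the equivalent value
dt = 1e-4

x = np.array([1.0, 0.0])          # true state
x_hat = np.array([0.0, 0.0])      # observer state
v2 = 0.0                          # filtered value of m1*sgn(y - x1_hat)

for _ in range(int(5.0 / dt)):    # simulate 5 seconds
    y = x[0]
    s1 = m1 * np.sign(y - x_hat[0])
    v2 += dt * (s1 - v2) / tau            # lowpass filter -> equivalent control
    x_hat_dot = np.array([s1, m2 * np.sign(v2 - x_hat[1])])
    x = x + dt * f(x)
    x_hat = x_hat + dt * x_hat_dot

print("estimation error:", x_hat - x)     # both components small
```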

In the case of the sliding mode observer for a system with an input, additional conditions are needed for the observation error to be independent of the input. For example, that

$$\frac{\partial H(x)}{\partial x} B(x)$$

does not depend on time. The observer is then

$$\dot{\hat{x}} = \left[\frac{\partial H(\hat{x})}{\partial x}\right]^{-1} M(\hat{x})\operatorname{sgn}(V(t) - H(\hat{x})) + B(\hat{x})u.$$

Multi-observer

The multi-observer extends the high-gain observer structure from a single observer to many models working simultaneously. It has two layers: the first consists of multiple high-gain observers with different estimated states, and the second determines the importance weights of the first-layer observers. The algorithm is simple to implement and does not contain any risky operations like differentiation.[4] The idea of multiple models was previously applied to obtain information in adaptive control.[15]

  • Multi-observer schema (figure)

Assuming that the number of high-gain observers equals $n+1$,

$$\dot{\hat{x}}_k(t) = A\hat{x}_k(t) + B\phi_0(\hat{x}(t), u(t)) - L(\hat{y}_k(t) - y(t))$$
$$\hat{y}_k(t) = C\hat{x}_k(t)$$

where $k = 1, \dots, n+1$ is the observer index. The first-layer observers share the same gain $L$ but differ in their initial states $x_k(0)$. In the second layer, the estimates $\hat{x}_k(t)$ from all $k = 1, \dots, n+1$ observers are combined into a single state vector estimate

$$\hat{x}(t) = \sum_{k=1}^{n+1}\alpha_k(t)\hat{x}_k(t)$$

where $\alpha_k \in \mathbb{R}$ are weight factors. These factors are adjusted to provide the estimation in the second layer and to improve the observation process.

Assume that

$$\sum_{k=1}^{n+1}\alpha_k(t)\xi_k(t) = 0$$

and

$$\sum_{k=1}^{n+1}\alpha_k(t) = 1$$

where $\xi_k \in \mathbb{R}^{n\times 1}$ is some vector that depends on the $k$th observer error $e_k(t)$.

Some transformation yields the linear regression problem

$$[-\xi_{n+1}(t)] = [\xi_1(t) - \xi_{n+1}(t)\ \dots\ \xi_k(t) - \xi_{n+1}(t)\ \dots\ \xi_n(t) - \xi_{n+1}(t)]^{T}\begin{bmatrix}\alpha_1(t)\\\vdots\\\alpha_k(t)\\\vdots\\\alpha_n(t)\end{bmatrix}$$

This formula makes it possible to estimate $\alpha_k(t)$. To construct the manifold, we need a mapping $m : \mathbb{R}^n \to \mathbb{R}^n$ such that $\xi_k(t) = m(e_k(t))$ and a guarantee that $\xi_k(t)$ can be calculated from measurable signals. The first step is to eliminate the peaking phenomenon for $\alpha_k(t)$ from the observer error

$$e_\sigma(t) = \sum_{k=1}^{n+1}\alpha_k(t)e_k(t).$$

Calculating the derivative $n$ times on $\eta_k(t) = \hat{y}_k(t) - y(t)$ to find the mapping $m$ leads to $\xi_k(t)$ defined as

$$\xi_k(t) = \begin{bmatrix}1&0&0&\cdots&0\\CL&1&0&\cdots&0\\CAL&CL&1&\cdots&0\\CA^2L&CAL&CL&\cdots&0\\\vdots&\vdots&\vdots&\ddots&\\CA^{n-2}L&CA^{n-3}L&CA^{n-4}L&\cdots&1\end{bmatrix}\begin{bmatrix}\int_{t-t_d}^{t}{\scriptstyle{n-1}\atop\cdots}\int_{t-t_d}^{t}\eta_k(\tau)\,d\tau\\\vdots\\\eta(t)-\eta(t-(n-1)t_d)\end{bmatrix}$$

where $t_d > 0$ is some time constant. Note that $\xi_k(t)$ relies on both $\eta_k(t)$ and its integrals, hence it is readily available in the control system. Further, $\alpha_k(t)$ is specified by an estimation law, and thus the manifold is measurable. In the second layer, $\hat{\alpha}_k(t)$ for $k = 1, \dots, n+1$ are introduced as estimates of the $\alpha_k(t)$ coefficients. The mapping error is specified as

$$e_\xi(t) = \sum_{k=1}^{n+1}\hat{\alpha}_k(t)\xi_k(t)$$

where $e_\xi(t) \in \mathbb{R}^{n\times 1}$ and $\hat{\alpha}_k(t) \in \mathbb{R}$. If the coefficients $\hat{\alpha}_k(t)$ are equal to $\alpha_k(t)$, then the mapping error $e_\xi(t) = 0$. It is then possible to calculate $\hat{x}$ from the equation above, and the peaking phenomenon is reduced thanks to the properties of the manifold. The created mapping gives a lot of flexibility in the estimation process. It is even possible to estimate the value of $x(t)$ in the second layer and to use it to calculate the state $x$.[4]

Bounding observers

Bounding[16] or interval observers[17][18] constitute a class of observers that provide two estimates of the state simultaneously: one estimate is an upper bound on the real value of the state, and the other is a lower bound. The real value of the state is then known to always lie between these two estimates.

These bounds are very important in practical applications,[19][20] as they make it possible to know at each time the precision of the estimation.

Mathematically, two Luenberger observers can be used if $L$ is properly selected, using, for example, positive systems properties:[21] one for the upper bound $\hat{x}_U(k)$ (which ensures that $e(k) = \hat{x}_U(k) - x(k)$ converges to zero from above as $k \to \infty$, in the absence of noise and uncertainty), and one for the lower bound $\hat{x}_L(k)$ (which ensures that $e(k) = \hat{x}_L(k) - x(k)$ converges to zero from below). That is, $\hat{x}_U(k) \geq x(k) \geq \hat{x}_L(k)$ always holds.
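A minimal sketch of this idea for an assumed discrete-time example follows; the matrices and gain are illustrative and chosen so that $A - LC$ is elementwise nonnegative, which (with noise-free measurements) keeps an initially nonnegative upper error and nonpositive lower error sign-definite, so the two Luenberger observers bracket the true state.

```python
import numpy as np

A = np.array([[0.5, 0.2],
              [0.1, 0.6]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.4],
              [0.1]])
# A - LC is elementwise nonnegative and Schur stable.
assert np.all(A - L @ C >= 0) and np.all(np.abs(np.linalg.eigvals(A - L @ C)) < 1)

x = np.array([[0.3], [0.7]])          # true state (unknown to the observers)
x_up = np.array([[2.0], [2.0]])       # upper estimate, initialized above x
x_lo = np.array([[-1.0], [-1.0]])     # lower estimate, initialized below x

for k in range(30):
    u = np.array([[1.0]])
    y = C @ x
    x = A @ x + B @ u
    x_up = A @ x_up + B @ u + L @ (y - C @ x_up)
    x_lo = A @ x_lo + B @ u + L @ (y - C @ x_lo)
    assert np.all(x_lo <= x) and np.all(x <= x_up)   # bounds are preserved

print("final interval width:", (x_up - x_lo).ravel())
```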

References

In-line references
  1. ^ Khalil, H.K. (2002), Nonlinear Systems (3rd ed.), Upper Saddle River, NJ: Prentice Hall, ISBN 978-0-13-067389-3
  2. ^ a b Utkin, Vadim; Guldner, Jürgen; Shi, Jingxin (1999), Sliding Mode Control in Electromechanical Systems, Philadelphia, PA: Taylor & Francis, Inc., ISBN 978-0-7484-0116-1
  3. ^ a b Drakunov, S.V. (1983), "An adaptive quasioptimal filter with discontinuous parameters", Automation and Remote Control, 44 (9): 1167–1175
  4. ^ a b c Bernat, J.; Stepien, S. (2015), "Multi modelling as new estimation schema for High Gain Observers", International Journal of Control, 88 (6): 1209–1222, Bibcode:2015IJC....88.1209B, doi:10.1080/00207179.2014.1000380, S2CID 8599596
  5. ^ Krener, A.J.; Isidori, Alberto (1983), "Linearization by output injection and nonlinear observers", System and Control Letters, 3: 47–52, doi:10.1016/0167-6911(83)90037-3
  6. ^ Krener, A.J.; Respondek, W. (1985), "Nonlinear observers with linearizable error dynamics", SIAM Journal on Control and Optimization, 23 (2): 197–216, doi:10.1137/0323016
  7. ^ Gauthier, J.P.; Hammouri, H.; Othman, S. (1992), "A simple observer for nonlinear systems applications to bioreactors", IEEE Transactions on Automatic Control, 37 (6): 875–880, doi:10.1109/9.256352
  8. ^ Hammouri, H.; Kinnaert, M. (1996), "A New Procedure for Time-Varying Linearization up to Output Injection", System and Control Letters, 28 (3): 151–157, doi:10.1016/0167-6911(96)00022-9
  9. ^ Ciccarella, G.; Dalla Mora, M.; Germani, A. (1993), "A Luenberger-like observer for nonlinear systems", International Journal of Control, 57 (3): 537–556, doi:10.1080/00207179308934406
  10. ^ Guo, Bao-Zhu; Zhao, Zhi-Liang (January 2011). "Extended State Observer for Nonlinear Systems with Uncertainty". IFAC Proceedings Volumes. 44 (1). International Federation of Automatic Control: 1855–1860. doi:10.3182/20110828-6-IT-1002.00399. Retrieved 8 August 2023.
  11. ^ "The Wayback Machine has not archived that URL". Retrieved 8 August 2023.[dead link]
  12. ^ Kumar, Sunil; Kumar Pal, Anil; Kamal, Shyam; Xiong, Xiaogang (19 May 2023). "Design of switched high-gain observer for nonlinear systems". International Journal of Systems Science. 54 (7). Science Publishing Group: 1471–1483. Bibcode:2023IJSS...54.1471K. doi:10.1080/00207721.2023.2178863. S2CID 257145897. Retrieved 8 August 2023.
  13. ^ "Registration". IEEE Xplore. Retrieved 8 August 2023.
  14. ^ Drakunov, S.V. (1992). "Sliding-mode observers based on equivalent control method". [1992] Proceedings of the 31st IEEE Conference on Decision and Control. pp. 2368–2370. doi:10.1109/CDC.1992.371368. ISBN 978-0-7803-0872-5. S2CID 120072463.
  15. ^ Narendra, K.S.; Han, Z. (August 2012). "A new approach to adaptive control using multiple models". International Journal of Adaptive Control and Signal Processing. 26 (8): 778–799. doi:10.1002/acs.2269. ISSN 1099-1115. S2CID 60482210.
  16. ^ Combastel, C. (2003). "A state bounding observer based on zonotopes" (PDF). 2003 European Control Conference (ECC). pp. 2589–2594. doi:10.23919/ECC.2003.7085991. ISBN 978-3-9524173-7-9. S2CID 13790057.
  17. ^ Rami, M. Ait; Cheng, C. H.; De Prada, C. (2008). "Tight robust interval observers: An LP approach" (PDF). 2008 47th IEEE Conference on Decision and Control. pp. 2967–2972. doi:10.1109/CDC.2008.4739280. ISBN 978-1-4244-3123-6. S2CID 288928.
  18. ^ Efimov, D.; Raïssi, T. (2016). "Design of interval observers for uncertain dynamical systems". Automation and Remote Control. 77 (2): 191–225. doi:10.1134/S0005117916020016. hdl:20.500.12210/25069. S2CID 49322177.
  19. ^ http://www.iaeng.org/publication/WCE2010/WCE2010_pp656-661.pdf [bare URL PDF]
  20. ^ Hadj-Sadok, M.Z.; Gouzé, J.L. (2001). "Estimation of uncertain models of activated sludge processes with interval observers". Journal of Process Control. 11 (3): 299–310. doi:10.1016/S0959-1524(99)00074-8.
  21. ^ Rami, Mustapha Ait; Tadeo, Fernando; Helmke, Uwe (2011). "Positive observers for linear positive systems, and their implications". International Journal of Control. 84 (4): 716–725. Bibcode:2011IJC....84..716A. doi:10.1080/00207179.2011.573000. S2CID 21211012.
General references
  • Sontag, Eduardo (1998), Mathematical Control Theory: Deterministic Finite Dimensional Systems. Second Edition, Springer, ISBN 978-0-387-98489-6

External links

  • Kalman Filter Explained Simply, Step-by-Step Tutorial of the Kalman Filter with Equations