A Quick Recap

So far, we have focused on solving the following linear state-space equation [REF]: \[ \begin{equation} \mathbf{\dot{x}} = \mathbf{Ax+Bu}\tag{1} \end{equation} \] We set the initial time \(t_0\) to be zero, i.e., \(t_0 = 0\), and with initial condition \(\mathbf{x}(0)\), the solution of \(\mathbf{x}(t)\) is [1]: \[ \begin{equation} \mathbf{x}(t) = \exp({\mathbf{A}t})\mathbf{x}(0) + \int_{0}^{t} \exp\{\mathbf{A}(t-\tau)\} \mathbf{B} \mathbf{u}(\tau) d\tau \tag{2} \end{equation} \] where \(\exp(\cdot):\mathbb{R}^{n\times n} \rightarrow \mathbb{R}^{n\times n}\) is an operator defined as: \[ \begin{equation} \exp({\mathbf{A}t}) = \mathbf{I} + \mathbf{A}t + \frac{1}{2!}\mathbf{A^2}t^2 + \cdots + \frac{1}{k!}\mathbf{A^k}t^k + \cdots = \sum_{k=0}^{\infty}\frac{1}{k!}(\mathbf{A}t)^{k} \tag{3} \end{equation} \] To further simplify equation (3), which is an infinite power series of matrices, it is necessary to conduct an eigenvalue/eigenvector analysis of the state matrix \(\mathbf{A}\).
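As a quick numerical sanity check of equation (3), the following Python sketch (assuming NumPy and SciPy are available) approximates \(\exp(\mathbf{A}t)\) with a truncated power series and compares it against scipy.linalg.expm; the specific matrix \(\mathbf{A}\) and time \(t\) are illustrative choices only.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-7.0, -3.0],
              [-1.0, -5.0]])   # an example 2x2 state matrix
t = 0.1

# Truncated power series: sum_{k=0}^{19} (A t)^k / k!
approx = np.zeros_like(A)
term = np.eye(2)                      # the k = 0 term, (A t)^0 / 0! = I
for k in range(20):
    approx += term
    term = term @ (A * t) / (k + 1)   # next term: (A t)^{k+1} / (k+1)!

print(np.allclose(approx, expm(A * t)))   # True: the series matches exp(At)
```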



Diagonalizable Matrix

To ease our analysis, we initially neglect the input \(\mathbf{u}(t)\) term, i.e., \(\mathbf{u}(t)=\mathbf{0}\), and solve the following equation: \[ \mathbf{\dot{x}} = \mathbf{Ax} \tag{4} \]

If \(\mathbf{A}\) is a diagonalizable matrix, i.e., there exist \(n\) linearly independent eigenvectors of matrix \(\mathbf{A}\) [REF], matrix \(\mathbf{A}\) can be described as:

\[ \mathbf{A} \begin{bmatrix} \vert & \vert & & \vert \\ \mathbf{v_1} & \mathbf{v_2} & \cdots & \mathbf{v_n} \\ \vert & \vert & & \vert \end{bmatrix} = \begin{bmatrix} \vert & \vert & & \vert \\ \mathbf{v_1} & \mathbf{v_2} & \cdots & \mathbf{v_n} \\ \vert & \vert & & \vert \end{bmatrix} \begin{bmatrix} \lambda_{1} & & \\ & \lambda_{2} & \\ & & \ddots & \\ & & & \lambda_{n} \end{bmatrix} \;\;\; \Longrightarrow \;\;\; \mathbf{AV}=\mathbf{V\Lambda} \] where \(\mathbf{v}_i\) and \(\lambda_i\), \(i \in \{1,2,\cdots, n\}\), are the \(n\) linearly independent eigenvectors of \(\mathbf{A}\) and their corresponding eigenvalues, respectively. \(\mathbf{\Lambda}\) is a diagonal matrix whose diagonal entries are the eigenvalues of matrix \(\mathbf{A}\) and whose other entries are all zero. Note that the eigenvalues may be repeated, and the eigenvalues/eigenvectors may even be complex; what matters for our purposes is that there are \(n\) linearly independent eigenvectors.
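If one wants to verify the decomposition \(\mathbf{AV}=\mathbf{V\Lambda}\) numerically, a minimal NumPy sketch (using the matrix that appears in the example later in this post) could look like:

```python
import numpy as np

A = np.array([[-7.0, -3.0],
              [-1.0, -5.0]])

# numpy.linalg.eig returns the eigenvalues and the eigenvectors as columns of V
eigvals, V = np.linalg.eig(A)
Lam = np.diag(eigvals)

print(np.allclose(A @ V, V @ Lam))   # True: A V = V Λ
```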

As \(\mathbf{A}\) can be described as \(\mathbf{A}=\mathbf{V\Lambda V^{-1}}\): \[ \mathbf{A}^n = (\mathbf{V\Lambda V^{-1}})^n = \underbrace{\mathbf{V\Lambda V^{-1}V\Lambda V^{-1}\cdots V\Lambda V^{-1}}}_{\text{n Terms}}= \mathbf{V\Lambda^n V^{-1}} \] and equation (3) can be further simplified as: \[ \begin{align*} \exp({\mathbf{A}t}) &= \mathbf{I} + \mathbf{A}t + \frac{1}{2!}\mathbf{A^2}t^2 + \cdots + \frac{1}{k!}\mathbf{A^k}t^k + \cdots \\ &= \mathbf{V}\Big( \mathbf{I} + \mathbf{\Lambda} t + \frac{1}{2!}\mathbf{\Lambda^2}t^2 + \cdots + \frac{1}{k!}\mathbf{\Lambda^k}t^k + \cdots \Big) \mathbf{V^{-1}} \\ &= \mathbf{V} \exp({ \mathbf{\Lambda}t}) \mathbf{V^{-1}} \tag{5} \end{align*} \] where \(\exp({\mathbf{\Lambda}}t)\) is a diagonal matrix defined by: \[ \exp({\mathbf{\Lambda}}t) = \begin{bmatrix} \exp(\lambda_{1}t) & & \\ & \exp(\lambda_{2}t) & \\ & & \ddots & \\ & & & \exp(\lambda_{n}t) \end{bmatrix} \] Note that \(\lambda_i\) and \(t\) are scalar values, hence it is straightforward to calculate \(\exp(\lambda_i t) \triangleq e^{\lambda_i t}\).
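Equation (5) is also easy to check numerically. The sketch below (again assuming NumPy/SciPy) builds \(\exp(\mathbf{\Lambda}t)\) from element-wise scalar exponentials and compares \(\mathbf{V}\exp(\mathbf{\Lambda}t)\mathbf{V}^{-1}\) against scipy.linalg.expm:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-7.0, -3.0],
              [-1.0, -5.0]])
t = 0.5

eigvals, V = np.linalg.eig(A)
exp_Lam_t = np.diag(np.exp(eigvals * t))   # diag(e^{λ1 t}, ..., e^{λn t})

print(np.allclose(expm(A * t), V @ exp_Lam_t @ np.linalg.inv(V)))   # True
```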

Substituting equation (5) into the solution of equation (4), \(\mathbf{x}(t)=\exp(\mathbf{A}t)\mathbf{x}(0)\), results in: \[ \mathbf{x}(t) = \mathbf{V} \exp({ \mathbf{\Lambda}t}) \mathbf{V^{-1}} \mathbf{x}(0) \;\;\; \Longrightarrow \;\;\; \mathbf{V^{-1}}\mathbf{x}(t) = \exp({ \mathbf{\Lambda}t}) \mathbf{V^{-1}} \mathbf{x}(0) \] Defining a new coordinate \(\mathbf{z}(t) = \mathbf{V^{-1}}\mathbf{x}(t)\) leads us to: \[ \mathbf{z}(t) = \exp({ \mathbf{\Lambda}t}) \mathbf{z}(0) \;\;\; \Longrightarrow \;\;\; z_i(t) = \exp({\lambda_i t}) z_i(0) \tag{6} \] where \(z_i\) is the \(i\)-th component of vector \(\mathbf{z}\).


This result shows that if \(\mathbf{A}\) is diagonalizable, we can always find a coordinate transformation that results in \(n\) decoupled scalar equations, and that transformation matrix is \(\mathbf{V}\), a column-wise collection of the eigenvectors of matrix \(\mathbf{A}\). This greatly simplifies equation (1): once we solve the \(n\) scalar equations, i.e., equation (6), we can map back to the original state vector \(\mathbf{x}(t)\) by multiplying by matrix \(\mathbf{V}\).

Note that \(\mathbf{z = V^{-1}x}\) simply transforms the components of the vector with respect to the original (standard) basis, \(\mathbf{x}\), into its components with respect to the eigenvector basis, \(\mathbf{z}\):1 2.

\[ \begin{bmatrix} \vert & \vert & & \vert \\ \mathbf{v_1} & \mathbf{v_2} & \cdots & \mathbf{v_n} \\ \vert & \vert & & \vert \end{bmatrix} \begin{bmatrix} \vert \\ \mathbf{z} \\ \vert \end{bmatrix} = \begin{bmatrix} \vert & \vert & & \vert \\ \mathbf{e_1} & \mathbf{e_2} & \cdots & \mathbf{e_n} \\ \vert & \vert & & \vert \end{bmatrix} \begin{bmatrix} \vert \\ \mathbf{x} \\ \vert \end{bmatrix} \;\; \Longrightarrow \;\; \mathbf{z} = \mathbf{V^{-1}I_nx} = \mathbf{V^{-1}x} \] where \(\mathbf{e}_i\), \(i \in \{1,2,\cdots, n\}\), denote the standard basis vectors and \(\mathbf{I_n}\in\mathbb{R}^{n\times n}\) is the identity matrix.

The key observation is that the eigenvalues of the state matrix \(\mathbf{A}\) fully determine the system's behavior. If all the eigenvalues have strictly negative real parts, then for any initial condition the system converges to \(\mathbf{0}\). If at least one eigenvalue has a strictly positive real part, the system response is unbounded. We discuss this in detail in this post.
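To make this concrete, here is a small, hypothetical helper that classifies a state matrix by inspecting the real parts of its eigenvalues; the two matrices used in the demo are arbitrary examples:

```python
import numpy as np

def classify(A):
    re = np.linalg.eigvals(A).real
    if np.all(re < 0):
        return "converges to 0 for any initial condition"
    if np.any(re > 0):
        return "unbounded response"
    return "marginal case (eigenvalue(s) on the imaginary axis)"

print(classify(np.array([[-7.0, -3.0], [-1.0, -5.0]])))   # eigenvalues -4, -8
print(classify(np.array([[ 0.0,  1.0], [ 4.0,  0.0]])))   # eigenvalues +2, -2
```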


A Quick Example

Consider the following state matrix \(\mathbf{A}\), which is diagonalizable: \[ \mathbf{A} = \begin{bmatrix} -7 & -3 \\ -1 & -5 \end{bmatrix} \;\;\; \Longrightarrow \;\;\; \mathbf{V} = \begin{bmatrix} -1 & 3 \\ 1 & 1 \end{bmatrix}, \;\; \mathbf{\Lambda} = \begin{bmatrix} -4 & 0 \\ 0 & -8 \end{bmatrix} \] Since \(\mathbf{A}\) is diagonalizable, the two eigenvectors are linearly independent. Hence, the two eigenvectors of matrix \(\mathbf{A}\) form a basis of \(\mathbb{R}^2\). In other words, any vector in \(\mathbb{R}^2\) can be uniquely described as a linear combination of the two eigenvectors.3

Let the initial condition of the system be \(\mathbf{x}(0)=[1,3]^\mathrm{T}\). Using the eigenvectors of matrix \(\mathbf{A}\) as the basis, vector \(\mathbf{x}(0)\) can be described as: \[ \mathbf{x}(0) = \begin{bmatrix} 1 \\ 3 \end{bmatrix} = 2 \cdot \begin{bmatrix} -1 \\ 1 \end{bmatrix} + 1 \cdot \begin{bmatrix} 3 \\ 1 \end{bmatrix} = \begin{bmatrix} -1 & 3 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 2 \\ 1 \end{bmatrix} = \mathbf{Vz}(0) \] Hence, the components of the initial condition \(\mathbf{x}(0)\) with respect to basis \(\mathbf{V}\) are \(\mathbf{z}(0)=[2,1]^\mathrm{T}\). Equation (6) provides us with two decoupled equations: \[ \begin{align*} z_1(t) &= z_1(0) \exp(-4t ) = 2e^{-4t} \\ z_2(t) &= z_2(0) \exp(-8t ) = 1e^{-8t} \end{align*} \]

Therefore, the system response is: \[ \mathbf{x}(t) = \mathbf{Vz} = \begin{bmatrix} -1 & 3 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} z_1(t) \\ z_2(t) \end{bmatrix} = 2 e^{-4t} \cdot \begin{bmatrix} -1 \\ 1 \end{bmatrix} + 1 e^{-8t} \cdot \begin{bmatrix} 3 \\ 1 \end{bmatrix} = \begin{bmatrix} -2e^{-4t} + 3 e^{-8t} \\ \phantom{-}2e^{-4t} + 1 e^{-8t} \end{bmatrix} \]
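The worked example can be double-checked with a few lines of NumPy: compute \(\mathbf{z}(0)=\mathbf{V}^{-1}\mathbf{x}(0)\), propagate each component with its own scalar exponential, and compare against the closed-form expression above (the time instant chosen below is arbitrary):

```python
import numpy as np

V   = np.array([[-1.0, 3.0],
                [ 1.0, 1.0]])
lam = np.array([-4.0, -8.0])
x0  = np.array([1.0, 3.0])

z0 = np.linalg.solve(V, x0)             # z(0) = V^{-1} x(0) = [2, 1]
print(z0)

t = 0.3
x_t      = V @ (np.exp(lam * t) * z0)   # x(t) = V exp(Λt) z(0)
x_closed = np.array([-2*np.exp(-4*t) + 3*np.exp(-8*t),
                      2*np.exp(-4*t) + 1*np.exp(-8*t)])
print(np.allclose(x_t, x_closed))       # True
```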



What about Non-diagonalizable Matrices?

While diagonalization of \(\mathbf{A}\) guarantees a much simpler form of the solution of equation (4), not all matrices are diagonalizable. Consider an example matrix \(\mathbf{K}\): \[ \mathbf{K} = \begin{bmatrix} 3 & 1 \\ 0 & 3 \end{bmatrix} \;\; \Longrightarrow \;\; \det( \mathbf{K} - \lambda \mathbf{I} ) = (\lambda - 3)^2 = 0 \] The eigenvalue of matrix \(\mathbf{K}\) is repeated, with value \(\lambda = 3\), and the dimension of the null space of \(\mathbf{K}-3\mathbf{I}\) is 1: \[ N(\mathbf{K} - 3\mathbf{I}) = \bigg\{ \mathbf{v} \in \mathbb{R}^2 \; \big| \; \mathbf{v} = \begin{bmatrix} a \\ 0 \end{bmatrix} \text{ for $a\in \mathbb{R}$} \bigg\} \] This means we cannot find two linearly independent eigenvectors of \(\mathbf{K}\), so the diagonalization is impossible.
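The shortage of eigenvectors is easy to see numerically: the eigenvalue 3 appears twice, but \(\mathbf{K}-3\mathbf{I}\) has rank 1, so its null space (the eigenspace) is only one-dimensional. A minimal NumPy check:

```python
import numpy as np

K = np.array([[3.0, 1.0],
              [0.0, 3.0]])

print(np.linalg.eigvals(K))                    # [3., 3.]: repeated eigenvalue
print(np.linalg.matrix_rank(K - 3*np.eye(2)))  # 1 -> one-dimensional eigenspace
```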

We call such a matrix a defective matrix, and deriving the solution of equation (4) is trickier. This case appears, for instance, in a critically damped mass-spring-damper system, i.e., damping ratio \(\zeta = 1\): \[ \ddot{x} + 2\zeta \omega_0 \dot{x} + \omega_0^2 x = 0 \;\;\; \Longrightarrow \;\;\; \ddot{x} + 2 \omega_0 \dot{x} + \omega_0^2 x = 0 \] By choosing the state vector as \(\mathbf{x}=[x,\dot{x}]^\mathrm{T}\), it is easy to show that the eigenvalues are repeated with value \(-\omega_0\), and there exists only one independent eigenvector. For this case, bringing \(\mathbf{A}\) to its Jordan form \(\mathbf{J}\) and calculating the matrix exponential gives us: \[ \exp({\mathbf{J}t}) = \begin{bmatrix} e^{\lambda t} & te^{\lambda t} \\ 0 & e^{\lambda t} \end{bmatrix} \] where \(\lambda = -\omega_0\). Still, the eigenvalues of the state matrix \(\mathbf{A}\) fully determine the system's behavior. To understand this in greater detail, please take a look at this post.
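A quick numerical check of this closed form (assuming SciPy is available): compare scipy.linalg.expm applied to the 2x2 Jordan block against the matrix above; the values of \(\lambda\) and \(t\) below are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

lam, t = -2.0, 0.7                  # e.g., λ = -ω0 with ω0 = 2
J = np.array([[lam, 1.0],
              [0.0, lam]])

closed_form = np.array([[np.exp(lam*t), t*np.exp(lam*t)],
                        [0.0,           np.exp(lam*t)]])
print(np.allclose(expm(J*t), closed_form))   # True
```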



Full-state Feedback

One might be curious about cases where the input \(\mathbf{u}(t)\) is non-zero. In the field of control, we often use the whole state vector \(\mathbf{x}\) to define our input \(\mathbf{u}\). This method is called full-state feedback. Full-state feedback is crucial in feedback control theory for placing the eigenvalues of the closed-loop system at desired locations4, thereby controlling the characteristics of the system's response.

The method defines the system input using the full state vector \(\mathbf{x}\): \[ \mathbf{u} = -\mathbf{K}\mathbf{x} \] where matrix \(\mathbf{K}\in\mathbb{R}^{m\times n}\) is the feedback gain that we can choose. Using full-state feedback, the system's state equation simplifies to: \[ \mathbf{\dot{x}} = \mathbf{Ax-BKx} = (\mathbf{A-BK})\mathbf{x} \equiv \mathbf{A_{cl}}\mathbf{x} \] In effect, the system acts as if there is no input and the state matrix is \(\mathbf{A_{cl}}\).
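As a rough illustration of how the gain \(\mathbf{K}\) is chosen in practice, the sketch below uses scipy.signal.place_poles to move the eigenvalues of an unstable system to desired locations and then checks the eigenvalues of \(\mathbf{A_{cl}}=\mathbf{A}-\mathbf{BK}\); the matrices and desired pole locations are assumptions for the example, not values from this post.

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0,  1.0],
              [2.0, -1.0]])        # open-loop eigenvalues: +1 and -2 (unstable)
B = np.array([[0.0],
              [1.0]])
desired = np.array([-4.0, -8.0])   # where we want the closed-loop eigenvalues

K = place_poles(A, B, desired).gain_matrix   # K has shape (m, n) = (1, 2)
A_cl = A - B @ K

print(np.linalg.eigvals(A_cl))     # approximately [-4., -8.]
```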



References

[1]
B. Friedland, Control system design: An introduction to state-space methods. Courier Corporation, 2012, pp. 59–62.

  1. As explained in the great lectures by Prof. Frederic Schuller, the components of a vector change with respect to a change of coordinates; the vector itself does not [REF1] [REF2].↩︎

  2. One might find the multiplication by \(\mathbf{V}^{-1}\) rather than \(\mathbf{V}\) odd and confusing. This is actually the starting point of tensor analysis: a vector is also called a contravariant vector, since its components transform contravariantly with respect to the coordinate transformation. There is also a covariant vector, which we discuss in detail later [REF1] [REF2].↩︎

  3. Quizzes and exams often ask students to calculate the eigenvalues of a matrix with pencil and paper. You can quickly double-check your calculation using the trace and the determinant of the matrix: the trace is the sum of the eigenvalues, and the determinant is their product. For our example, \(\mathrm{tr}(\mathbf{A}) = -12 = (-4)+(-8)\) and \(\det(\mathbf{A}) = 32 = (-4)\cdot(-8)\), so our calculation checks out!↩︎

  4. Strictly speaking, we first need to check whether the system is controllable. Details are in this post.↩︎