Linear Maps

Let \((V,\oplus_V,\odot_V)\) and \((W,\oplus_W,\odot_W)\) be \(K\)-vector spaces. A map \(\phi:V\rightarrow W\) is called a linear map (also called a linear mapping, linear transformation, or homomorphism) if \(\phi\) satisfies: \[ \begin{align*} \forall v_1, v_2 \in V&: \phi(v_1 \oplus_V v_2)= \phi(v_1) \oplus_W \phi(v_2) \\ \forall c \in K, \forall v \in V &: \phi(c\odot_V v) = c\odot_W\phi(v) \end{align*} \] Although the subscripts make the notation heavy, we include them here for clarity. When a map is linear, we add a tilde on top of the arrow, e.g., \(\phi: V \stackrel{\sim}{\smash{\mathcal{\longrightarrow}}\rule{0pt}{0.7ex}}W\).


A Quick Example

Consider the vector space \((P_3(\mathbb{R}), +, \cdot)\), where \(P_3(\mathbb{R})\) is the set of polynomials of degree less than 3 with real coefficients: \[ P_3(\mathbb{R}) = \{ p: \mathbb{R} \rightarrow \mathbb{R} \; | \; p(x)=a_2x^2 + a_1x + a_0, \;\; a_0, a_1, a_2 \in \mathbb{R} \} \] We define the differentiation operator \(\delta\): \[ \begin{align*} \delta : P_3(\mathbb{R}) & \longrightarrow P_3(\mathbb{R}) \\ p & \longmapsto p' \end{align*} \] We show that the differentiation operator is a linear map. Choose two polynomials \(p_1, p_2 \in P_3(\mathbb{R})\). For any \(\lambda \in \mathbb{R}\), it is straightforward to show that: \[ \begin{align*} \delta(p_1 + p_2) &= (p_1+p_2)' = p_1' + p_2' = \delta( p_1 ) + \delta( p_2 ) \\ \delta( \lambda p_1 ) &= (\lambda p_1)' = \lambda p_1' = \lambda \delta( p_1 ) \end{align*} \]

Hence, \(\delta\) is a linear map.
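For a concrete check, take \(p_1(x) = x^2 + 1\), \(p_2(x) = 2x\), and \(\lambda = 3\): \[ \begin{align*} \delta(p_1 + p_2) &= (x^2 + 2x + 1)' = 2x + 2 = \delta(p_1) + \delta(p_2) \\ \delta(3 p_1) &= (3x^2 + 3)' = 6x = 3\,\delta(p_1) \end{align*} \]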

Since the map is linear, we add a tilde on top of the arrow: \(\delta: P_3(\mathbb{R}) \stackrel{\sim}{\smash{\mathcal{\longrightarrow}}\rule{0pt}{0.7ex}}P_3(\mathbb{R})\).



Linear Maps are Matrices!
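Once we fix bases, every linear map between finite-dimensional vector spaces can be represented by a matrix acting on coordinate vectors. As a minimal sketch using the example above: with respect to the basis \((1, x, x^2)\) of \(P_3(\mathbb{R})\), a polynomial \(p(x) = a_0 + a_1x + a_2x^2\) has the coordinate vector \((a_0, a_1, a_2)^\top\), and the differentiation operator \(\delta\) acts as the matrix \(\mathbf{D}\): \[ \mathbf{D} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{pmatrix}, \qquad \mathbf{D} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} a_1 \\ 2a_2 \\ 0 \end{pmatrix}, \] which is exactly the coordinate vector of \(p'(x) = a_1 + 2a_2x\).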



Subspaces

Let \((V,+,\cdot)\) be a \(K\)-vector space [REF]. Then a subset \(W\subseteq V\) is a subspace if: \[ \begin{align*} \forall w_1, w_2 \in W&: w_1 + w_2 \in W \\ \forall w \in W, \forall c \in K &: cw \in W \end{align*} \] In other words, a subspace \(W\) is a subset of the vector space \((V,+,\cdot)\) that is a vector space in its own right. Note that the addition and s-multiplication on \(W\) are inherited from \((V,+,\cdot)\) by restricting the operations to the elements of \(W\).


A Quick Example

Consider the real vector space \((\mathbb{R}^{3}, +, \cdot)\). Then the following subset \(W\): \[ W = \{ (a,b,0)\in\mathbb{R}^{3} \; | \; a,b\in \mathbb{R} \} \] is a subspace of \((\mathbb{R}^{3}, +, \cdot)\). It is straightforward to show that for \(w_1=(a_1, b_1, 0)\in W\), \(w_2=(a_2, b_2, 0)\in W\), and \(\lambda \in \mathbb{R}\): \[ \begin{align*} w_1 + w_2 &= (a_1+a_2, b_1+b_2, 0) \in W \\ \lambda \cdot w_1 &= (\lambda a_1, \lambda b_1, 0 ) \in W \end{align*} \]
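By contrast, the translated plane \(W' = \{ (a,b,1)\in\mathbb{R}^{3} \; | \; a,b\in \mathbb{R} \}\) is not a subspace: for \(w_1=(a_1,b_1,1), w_2=(a_2,b_2,1) \in W'\), the sum \(w_1 + w_2 = (a_1+a_2, b_1+b_2, 2) \notin W'\).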


Invariant Subspaces

Consider a linear map \(T: V\stackrel{\sim}{\smash{\mathcal{\longrightarrow}}\rule{0pt}{0.7ex}}V\), where \(T\) maps the vector space \(V\) to itself.
A subspace \(W \subseteq V\) is called a T-invariant subspace of \(V\) if all vectors in \(W\) are transformed by \(T\) into vectors that are also contained in \(W\): \[ \forall w \in W: \;\; T(w) \in W \]
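As a quick illustration with the differentiation operator \(\delta\) from above: the subspace of constant polynomials \(W = \{ p \in P_3(\mathbb{R}) \; | \; p(x) = a_0 \}\) is a \(\delta\)-invariant subspace, since \(\delta(p) = 0 \in W\) for every constant \(p\); likewise, the subspace of polynomials of degree at most 1 is \(\delta\)-invariant, since differentiation can only lower the degree.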


Null-space (Kernel) and Range Space (Image)

Consider a linear map \(T: V\stackrel{\sim}{\smash{\mathcal{\longrightarrow}}\rule{0pt}{0.7ex}}W\), where \(V\) and \(W\) are vector spaces. The null-space (or kernel) of the linear map \(T\), denoted \(N(T)\), is the set of elements \(v\in V\) that satisfy \(T(v) = 0\): \[ N(T) = \{ v\in V \; | \; T(v) = 0 \} \] Moreover, the range space (or image) of the linear map \(T\), denoted \(R(T)\), is defined to be the set: \[ R(T) = \{ T(v) \in W \; | \; v\in V \} \] It is straightforward to show that, for a given linear map \(T\), \(N(T)\) and \(R(T)\) are subspaces of \(V\) and \(W\), respectively.

In detail, for \(N(T) \subseteq V\): \[ \begin{align*} \forall v_1, v_2 \in N(T): T(v_1 + v_2) = T(v_1) + T(v_2) = 0 \;\; &\Longrightarrow \;\; v_1 + v_2 \in N(T) \\ \forall \lambda \in K, \forall v \in N(T): T(\lambda v) = \lambda T(v) = 0\;\; &\Longrightarrow \;\; \lambda v \in N(T) \end{align*} \] and for \(R(T) \subseteq W\), consider two vectors \(w_1, w_2 \in R(T)\). Then, \(\exists v_1, v_2 \in V: T(v_1)=w_1, T(v_2)=w_2\). Hence, \(\forall \lambda \in K\): \[ \begin{align*} w_1 + w_2 = T(v_1) + T(v_2) = T(v_1+v_2): v_1 + v_2 \in V &\Longrightarrow w_1 + w_2 \in R(T) \\ \lambda w_1 = \lambda T(v_1) = T(\lambda v_1 ): \lambda v_1 \in V &\Longrightarrow \lambda w_1 \in R(T) \end{align*} \]
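Returning to the differentiation operator \(\delta: P_3(\mathbb{R}) \longrightarrow P_3(\mathbb{R})\): its null-space is the set of constant polynomials, \(N(\delta) = \{ p \; | \; p(x) = a_0 \}\), since \(p' = 0\) exactly when \(p\) is constant, and its range space is the set of polynomials of degree at most 1, \(R(\delta) = \{ q \; | \; q(x) = b_1x + b_0 \}\), since every such \(q\) is the derivative of some \(p \in P_3(\mathbb{R})\), e.g., \(p(x) = \tfrac{b_1}{2}x^2 + b_0x\).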



Eigenvalues and Eigenvectors

Consider a linear map \(T: V\stackrel{\sim}{\smash{\mathcal{\longrightarrow}}\rule{0pt}{0.7ex}}V\), where \(T\) maps the \(K\)-vector space \(V\) to itself. Then a non-zero vector \(v \in V\) is an eigenvector of \(T\) if \(T(v)\) is a scalar multiple of \(v\): \[ T(v) = \lambda v \] where \(\lambda\) is a scalar in \(K\). We call \(\lambda\) an eigenvalue of \(T\).

The eigenspace of \(T\) associated with an eigenvalue \(\lambda\), which we denote as \(E_\lambda(T)\), is the set of all eigenvectors with eigenvalue \(\lambda\) together with the zero vector, \(E_\lambda(T) = \{ v \in V \; | \; T(v) = \lambda v \}\). It is straightforward to show that \(E_\lambda(T)\) is a \(T\)-invariant subspace, since \(T\) maps each \(v \in E_\lambda(T)\) to its scalar multiple \(\lambda v\):

\[ \forall v \in E_\lambda(T): T(v) = \lambda v \in E_\lambda(T) \]


A Quick Example

Only high-school mathematics is required for the calculation. Consider a linear map \(\mathbf{A}: \mathbb{R}^3 \stackrel{\sim}{\smash{\mathcal{\longrightarrow}}\rule{0pt}{0.7ex}}\mathbb{R}^3\).
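As an illustrative choice (any concrete matrix is handled in the same way), take \[ \mathbf{A} = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{pmatrix} \] The eigenvalues are the roots of the characteristic polynomial: \[ \det(\mathbf{A} - \lambda \mathbf{I}) = (2-\lambda)(3-\lambda)^2 = 0 \;\; \Longrightarrow \;\; \lambda_1 = 2, \; \lambda_2 = \lambda_3 = 3 \] and the corresponding eigenspaces are \(E_2(\mathbf{A}) = \mathrm{span}\{(1,0,0)^\top\}\) and \(E_3(\mathbf{A}) = \mathrm{span}\{(0,1,0)^\top, (0,0,1)^\top\}\), both of which are \(\mathbf{A}\)-invariant subspaces of \(\mathbb{R}^3\).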



References