Linear Combination

Consider a \(K\)-vector space \((V,+,\cdot)\). Given a finite set of vectors \(\{v_1, v_2, \cdots, v_n\}\) and a corresponding set of elements \(\{\alpha_1, \alpha_2, \cdots, \alpha_n \}\) of the field \(K\), the linear combination of the vectors \(\{v_1, v_2, \cdots, v_n\}\) with the elements \(\{\alpha_1, \alpha_2, \cdots, \alpha_n \}\) as coefficients is: \[ \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n = \sum_{i=1}^{n} \alpha_i v_i, \] which is again an element of the vector space \(V\).
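For a concrete illustration, take the \(\mathbb{R}\)-vector space \(\mathbb{R}^2\) with \(v_1=(1,0)\), \(v_2=(0,1)\) and coefficients \(\alpha_1=2\), \(\alpha_2=-3\): \[ 2\,(1,0) + (-3)\,(0,1) = (2,-3) \in \mathbb{R}^2. \]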

Having introduced the definition of linear combination, consider the set of all linear combinations of a given finite set \(S=\{v_1, v_2, \cdots, v_n\}\). This set is called the span of \(S\), denoted \(\text{span}(S)\): \[ \text{span}(S) = \big\{ v \in V \; | \; v=\sum_{i=1}^{n}\alpha_iv_i, \;\; v_i \in S, \;\; \alpha_i \in K \big\} \]
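For example, in \(\mathbb{R}^3\) the span of the single vector \((1,0,0)\) is the line \(\{(\alpha, 0, 0) \; | \; \alpha \in \mathbb{R}\}\), while \[ \text{span}(\{(1,0,0),(0,1,0)\}) = \{(\alpha_1, \alpha_2, 0) \; | \; \alpha_1, \alpha_2 \in \mathbb{R}\}, \] the \(xy\)-plane.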



Linear Independence and Dependence

Consider a \(K\)-vector space \((V,+,\cdot)\). A finite set of vectors \(\{v_1, v_2, \cdots, v_n\}\) is called linearly independent if: \[ \sum_{i=1}^{n} \alpha_i v_i = 0 \;\; \Longrightarrow \;\; \alpha_i = 0, \;\; i \in \{1,2,\cdots, n\}. \] That is, the only linear combination of the vectors equal to the zero vector is the one in which every coefficient \(\alpha_i\) is zero. If the set \(\{v_1, v_2, \cdots, v_n\}\) is not linearly independent (i.e., if there is a nontrivial linear combination that equals the zero vector), the set is called linearly dependent [1].
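For example, in \(\mathbb{R}^2\) the set \(\{(1,0),(0,1)\}\) is linearly independent, since \(\alpha_1(1,0) + \alpha_2(0,1) = (\alpha_1, \alpha_2) = 0\) forces \(\alpha_1 = \alpha_2 = 0\), whereas the set \(\{(1,0),(2,0)\}\) is linearly dependent, since \[ 2\,(1,0) + (-1)\,(2,0) = 0 \] is a nontrivial linear combination equal to the zero vector.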



Bases

Consider a \(K\)-vector space \((V,+,\cdot)\). A basis of \(V\) is a subset \(B \subseteq V\) of linearly independent vectors such that every vector in \(V\) is a linear combination of elements of \(B\). A vector space \(V\) is called finite-dimensional if it has a finite basis, and the number of elements in a basis is called the dimension of the vector space [2].
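For example, the standard basis vectors \(e_1, e_2, \cdots, e_n\) of \(\mathbb{R}^n\), where \(e_i\) has a \(1\) in the \(i\)-th position and \(0\) elsewhere, form a basis of \(\mathbb{R}^n\): they are linearly independent, and every \(x = (x_1, x_2, \cdots, x_n)\) satisfies \[ x = \sum_{i=1}^{n} x_i e_i, \] so \(\mathbb{R}^n\) has dimension \(n\).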

We now introduce some crucial theorems regarding bases of vector spaces.


Theorem 1

Consider a finite-dimensional \(K\)-vector space \((V,+,\cdot)\) and a basis \(B=\{v_1, v_2, \cdots, v_n\}\) of \(V\). For any vector \(v \in V\), the representation of \(v\) as a linear combination of the elements of \(B\) is unique.

Proof

Assume, to the contrary, that \(v\) has two representations: \[ v = \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n = \beta_1 v_1 + \beta_2 v_2 + \cdots + \beta_n v_n \]

Subtracting the two representations gives: \[ (\alpha_1 - \beta_1) v_1 + (\alpha_2 - \beta_2) v_2 + \cdots + (\alpha_n - \beta_n) v_n = 0. \] Since the basis \(B\) is linearly independent, every coefficient must vanish, i.e., \(\alpha_i - \beta_i = 0\) and hence \(\alpha_i = \beta_i\) for each \(i\). The two representations therefore coincide, so the representation of \(v\) is unique.

The significance of this theorem is that, given a basis \(B\) for an \(n\)-dimensional vector space \(V\), any element of \(V\) is uniquely determined by its \(n\) coefficients. This is why a vector is often described as an \(n\)-tuple of numbers.
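For example, in \(\mathbb{R}^2\) the vector \(v = (3,5)\) has coefficients \((3,5)\) relative to the standard basis \(\{(1,0),(0,1)\}\), while relative to the basis \(\{(1,1),(1,-1)\}\) its unique representation is \[ v = 4\,(1,1) + (-1)\,(1,-1), \] with coefficients \((4,-1)\); each basis determines its own unique tuple of coefficients.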



References

[1]
P. R. Halmos, Finite-Dimensional Vector Spaces. 1958, p. 7.
[2]
P. R. Halmos, Finite-Dimensional Vector Spaces. 1958, p. 10.