A robotic manipulator is said to be kinematically redundant when it possesses more degrees of freedom than the minimum number required to execute a given task [1], [2].
In mathematical terms, let \(\mathbf{q}\in\mathbb{R}^{n}\) denote the joint configuration of an \(n\)-DOF open-chain robotic manipulator, and let \(\mathbf{x}\in\mathbb{R}^{m}\) be an array of task variables, e.g., the 3D Cartesian position of the end-effector or of any other point on the robot. The kinematic relation between \(\mathbf{x}\) and \(\mathbf{q}\) is given by the Forward Kinematics map \(\mathbf{f}(\cdot):\mathbb{R}^{n}\rightarrow \mathbb{R}^{m}\) of the robot: \[ \mathbf{x} = \mathbf{f(q)} \]
Assuming that the Forward Kinematics map \(\mathbf{f}(\cdot)\) is smooth, one can define the Jacobian matrix \(\mathbf{J}(\mathbf{q})\in\mathbb{R}^{m\times n}\) by taking the partial derivatives of \(\mathbf{f}\) with respect to \(\mathbf{q}\). This results in the following kinematic relation: \[ \dot{\mathbf{x}} = \mathbf{J(q) \dot{q}} \] That is, for a given posture \(\mathbf{q}\), the relation between \(\dot{\mathbf{x}}\) and \(\dot{\mathbf{q}}\) is linear, no matter how complex or nonlinear the Forward Kinematics map \(\mathbf{f}(\cdot)\) is.
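To make this concrete, here is a minimal numerical sketch assuming a hypothetical planar 3R arm (the link lengths 1.0, 0.8, 0.5 are arbitrary choices); `fk` and `jacobian` are illustrative helper names, and the Jacobian is approximated by finite differences rather than derived analytically:

```python
import numpy as np

# A hypothetical planar 3R arm; the link lengths are assumed values.
L = np.array([1.0, 0.8, 0.5])

def fk(q):
    """Forward Kinematics f(q): joint angles (n = 3) -> end-effector position (m = 2)."""
    a = np.cumsum(q)  # absolute link angles: q1, q1+q2, q1+q2+q3
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def jacobian(q, eps=1e-6):
    """J(q) = df/dq, approximated column-by-column via central finite differences."""
    return np.column_stack([
        (fk(q + eps * e) - fk(q - eps * e)) / (2.0 * eps) for e in np.eye(len(q))
    ])

q = np.array([0.3, 0.7, -0.4])    # a fixed posture
dq = np.array([0.1, -0.2, 0.05])  # some joint velocity

# For a fixed posture q, the map dq -> dx = J(q) dq is linear,
# even though f itself is nonlinear:
J = jacobian(q)
print(np.allclose(J @ (2.0 * dq), 2.0 * (J @ dq)))  # True
```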
If the robotic manipulator is kinematically redundant, i.e., \(m<n\), then for any \(\mathbf{q}\) the Jacobian matrix \(\mathbf{J(q)}\) has a nontrivial null space. In other words, given \(\mathbf{\dot{x}}\), there exist infinitely many solutions \(\mathbf{\dot{q}}\) that map to \(\mathbf{\dot{x}}\).
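Numerically, these redundant directions span the null space of \(\mathbf{J(q)}\). A small sketch using SciPy, with the Jacobian of the hypothetical 3R arm above evaluated at \(\mathbf{q}=(0.3, 0.7, -0.4)\) and rounded:

```python
import numpy as np
from scipy.linalg import null_space

# J(q) of the 3R example above at q = (0.3, 0.7, -0.4), rounded (n = 3, m = 2).
J = np.array([[-1.251, -0.956, -0.282],
              [ 1.800,  0.845,  0.413]])

N = null_space(J)               # orthonormal basis of the null space of J(q)
print(N.shape)                  # (3, 1): one redundant direction since n - m = 1
print(np.allclose(J @ N, 0.0))  # True: joint motion along N leaves x unchanged
```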
To select a single solution from the infinitely many candidates, one can formulate the following constrained optimization problem: \[ \min_{\mathbf{\dot{q}}\in\mathbb{R}^n} \frac{1}{2}\mathbf{\dot{q}^TW\dot{q}} ~~~ \text{s.t.} ~~ \mathbf{\dot{x}=J(q)\dot{q}} \] In this equation, \(\mathbf{W}\in\mathbb{R}^{n\times n}\) is a weighting matrix which is chosen to be positive-definite. The constrained optimization problem can be reformulated as an unconstrained one by introducing the Lagrange multiplier \(\boldsymbol{\lambda}\in\mathbb{R}^{m}\): \[ \min_{\mathbf{\dot{q}}, \boldsymbol{\lambda}} L(\mathbf{\dot{q}},\boldsymbol{\lambda}) = \min_{\mathbf{\dot{q}}, \boldsymbol{\lambda}} \frac{1}{2}\mathbf{\dot{q}^TW\dot{q}}+ \boldsymbol{\lambda}\mathbf{^T}\{\mathbf{\dot{x}-J(q)\dot{q}} \} \] Setting the partial derivatives of \(L\) to zero gives the stationarity conditions: \[ \begin{aligned} \frac{\partial L(\mathbf{\dot{q}},\boldsymbol{\lambda})}{\partial \mathbf{\dot{q}}} &= \mathbf{W\dot{q}-J^T(q)}\boldsymbol{\lambda} = \mathbf{0} \\ \frac{\partial L(\mathbf{\dot{q}},\boldsymbol{\lambda})}{\partial \boldsymbol{\lambda}} &= \mathbf{\dot{x}-J(q)\dot{q}} = \mathbf{0} \end{aligned} \] Solving the first condition for \(\mathbf{\dot{q}}=\mathbf{W^{-1}J^T(q)}\boldsymbol{\lambda}\) and substituting it into the second yields: \[ \boldsymbol{\lambda} = \big( \mathbf{J(q)W^{-1}J^T(q)} \big)^{-1} \mathbf{\dot{x}}, ~~~~~~~~ \mathbf{\dot{q}} = \mathbf{W^{-1}J^{T}(q)} \big( \mathbf{J(q)W^{-1}J^T(q)} \big)^{-1} \mathbf{\dot{x}} \equiv \mathbf{J(q)^{W^{+}}}\mathbf{\dot{x}} \] Hence, the map from \(\mathbf{\dot{x}}\) to \(\mathbf{\dot{q}}\) is defined via \(\mathbf{J(q)^{W^{+}}}\). While several choices of \(\mathbf{W}\) are possible, for \(\mathbf{W}=\mathbf{I}_n\), \(\mathbf{J(q)^{W^{+}}}=\mathbf{J(q)^{\dagger}}\) is the pseudo-inverse (or Moore-Penrose inverse) matrix. The physical meaning of using \(\mathbf{J(q)^{\dagger}}\) is that it chooses the \(\mathbf{\dot{q}}\) with minimal Euclidean norm that still satisfies \(\mathbf{\dot{x}=J(q)\dot{q}}\) [3].
One can quickly check the following properties (the inverse in \(\mathbf{J(q)^{W^{+}}}\) exists whenever \(\mathbf{J(q)}\) has full row rank): \[ \mathbf{J(q)J(q)^{W^{+}}} = \mathbf{I}_m ~~~~~~ \text{and hence} ~~~~~~ \mathbf{J(q)J(q)^{W^{+}}J(q)} = \mathbf{J(q)} \]
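As a sanity check, \(\mathbf{J(q)^{W^{+}}}\) and both properties can be verified numerically. The sketch below reuses the same rounded \(2\times 3\) Jacobian with an arbitrary diagonal \(\mathbf{W}\); `weighted_pinv` is an illustrative helper, not a library routine:

```python
import numpy as np

# Same rounded 2x3 Jacobian as above; it has full row rank.
J = np.array([[-1.251, -0.956, -0.282],
              [ 1.800,  0.845,  0.413]])
dx = np.array([0.05, -0.02])  # desired task velocity

def weighted_pinv(J, W):
    """J^{W+} = W^{-1} J^T (J W^{-1} J^T)^{-1}, for positive-definite W."""
    Winv = np.linalg.inv(W)
    return Winv @ J.T @ np.linalg.inv(J @ Winv @ J.T)

W = np.diag([1.0, 2.0, 4.0])  # an assumed positive-definite weighting
Jw = weighted_pinv(J, W)
dq = Jw @ dx

print(np.allclose(J @ dq, dx))         # the constraint dx = J dq holds
print(np.allclose(J @ Jw, np.eye(2)))  # J J^{W+} = I_m
print(np.allclose(weighted_pinv(J, np.eye(3)),
                  np.linalg.pinv(J)))  # W = I_n recovers the Moore-Penrose inverse
```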
Since the Jacobian matrix \(\mathbf{J(q)}\) has a nontrivial null space, one can project any vector \(\boldsymbol{\xi}\in\mathbb{R}^{n}\) onto the null space of \(\mathbf{J(q)}\). The Null-space projector \(\mathbf{P_W}\) is defined by \(\mathbf{I}_n-\mathbf{J(q)^{W^{+}}J(q)}\), and therefore, the general solution of \(\mathbf{\dot{q}}\) given \(\mathbf{\dot{x}=J(q)\dot{q}}\) is [4], [5]:
\[
\mathbf{\dot{q}} = \mathbf{J(q)^{W^{+}}}\mathbf{\dot{x}} + \big\{
\mathbf{I}_n-\mathbf{J(q)^{W^{+}}J(q)} \big\}\boldsymbol{\xi}
\]
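A minimal sketch of this general solution, again with the assumed Jacobian from above and \(\mathbf{W}=\mathbf{I}_n\); whatever \(\boldsymbol{\xi}\) one picks, the projected term never disturbs the task:

```python
import numpy as np

J = np.array([[-1.251, -0.956, -0.282],
              [ 1.800,  0.845,  0.413]])
dx = np.array([0.05, -0.02])

Jpinv = np.linalg.pinv(J)        # W = I_n, so J^{W+} is the Moore-Penrose inverse
P = np.eye(3) - Jpinv @ J        # null-space projector P_W

xi = np.array([0.3, -0.1, 0.5])  # an arbitrary vector in R^n
dq = Jpinv @ dx + P @ xi         # general solution

print(np.allclose(J @ dq, dx))         # True for any choice of xi
print(np.allclose(J @ (P @ xi), 0.0))  # the projected term produces no task motion
```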
A remark on the choice of \(\boldsymbol{\xi}\) is given below:
Several different choices of \(\boldsymbol{\xi}\) are possible. One can define \(\boldsymbol{\xi}\) as the gradient of some potential function \(\phi(\mathbf{q})\), so that the second term can be used to maximize (or minimize, depending on the sign of \(\alpha\)) a secondary task quantified by the function \(\phi(\cdot)\), without disturbing the primary task: \[ \mathbf{\dot{q}} = \mathbf{J(q)^{W^{+}}} \mathbf{\dot{x}} + \alpha \big\{ \mathbf{I}_n-\mathbf{J(q)^{W^{+}}J(q)} \big\} \nabla \phi(\mathbf{q}) \]
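As an example of this gradient-projection scheme, the sketch below takes \(\phi(\mathbf{q}) = -\tfrac{1}{2}\lVert\mathbf{q}-\mathbf{q}_{\text{mid}}\rVert^2\), a common joint-limit-avoidance potential, for the hypothetical planar 3R arm used throughout; \(\mathbf{q}_{\text{mid}}\) and the gain \(\alpha\) are assumed values:

```python
import numpy as np

def jacobian(q, L=(1.0, 0.8, 0.5)):
    """Analytic 2x3 Jacobian of the hypothetical planar 3R arm used above."""
    a1, a2, a3 = q[0], q[0] + q[1], q[0] + q[1] + q[2]
    s = L[0]*np.sin(a1), L[1]*np.sin(a2), L[2]*np.sin(a3)
    c = L[0]*np.cos(a1), L[1]*np.cos(a2), L[2]*np.cos(a3)
    return np.array([[-(s[0]+s[1]+s[2]), -(s[1]+s[2]), -s[2]],
                     [  c[0]+c[1]+c[2],   c[1]+c[2],   c[2]]])

# Secondary task: stay near an assumed mid-range posture q_mid,
# i.e., maximize phi(q) = -0.5 * ||q - q_mid||^2 with gradient -(q - q_mid).
q_mid = np.zeros(3)
grad_phi = lambda q: -(q - q_mid)

q = np.array([0.3, 0.7, -0.4])
dx = np.array([0.05, -0.02])  # primary task velocity
alpha = 0.5                   # secondary-task gain (an assumed value)

J = jacobian(q)
Jpinv = np.linalg.pinv(J)
P = np.eye(3) - Jpinv @ J
dq = Jpinv @ dx + alpha * P @ grad_phi(q)

print(np.allclose(J @ dq, dx))  # the secondary term never disturbs the primary task
```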