written by Antonius Ringlayer

https://ringlayer.github.io – http://www.ringlayer.com

In order to calculate an inverse kinematics solution for a robotic arm, we can use several methods. One of my favourite methods is using the pseudo inverse of a Jacobian matrix.

So what is the pseudo inverse of a matrix, and how does it differ from the regular inverse of a matrix?

**Regular Inverse of a Square Matrix**

Inverting a matrix means creating the reciprocal version of that matrix.

In math, the reciprocal of a number is its inverse: for a number b, the inverse of b is written b^{-1} and comes from 1/b (the reciprocal of b). Likewise for a matrix, e.g. matrix A, the inverse of A is written A^{-1}.

In order to be inverted, a matrix must meet 2 conditions :

- the matrix has the same number of rows and columns (square matrix)
- determinant of the matrix is not zero.
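Both conditions are easy to check with numpy before attempting an inversion. This is just an illustrative sketch; the 3×3 matrix below is a hypothetical example, not the article's matrix:

```python
import numpy as np

# Hypothetical 3x3 matrix used only for illustration.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])

rows, cols = A.shape
is_square = rows == cols            # condition 1: square matrix
det = np.linalg.det(A)              # condition 2: non-zero determinant

if is_square and not np.isclose(det, 0.0):
    A_inv = np.linalg.inv(A)
    print(A_inv)
```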

**Pseudo Inverse of a Matrix**

The pseudo inverse of matrix A is symbolized as A^{+} (A dagger).

However, some matrices do not meet those 2 requirements and thus cannot be inverted. When the system is overdetermined, we can use a method called pseudo inversion to create a pseudo inverse version of our original matrix.

We are going to use the SVD (Singular Value Decomposition) trick to decompose a non-square matrix, which doesn't have eigenvalues and eigenvectors.

**Inversing an Invertible Square Matrix**

Inverting a square matrix is easy. For example, we have a matrix A:

In order to get the inverse of A, we are going to use the adjoint method :

The inverse of A can be acquired from these steps:

- Step 1. find the determinant of A
- Step 2. find the cofactor matrix of A
- Step 3. find the adjoint of A
- Step 4. find the inverse

Based on https://ringlayer.wordpress.com/2018/07/03/matrix-determinant-laplace-and-sarrus-method , we found that the determinant of our matrix is -6.

So we can go straight ahead to step 2 and find the cofactor matrix of A.

First of all, we need to get the minors of A. The minor for each entry of A can be obtained by taking the determinant of the corresponding 2×2 submatrix of A :

Visually, we can make the trick a lot easier:

Next:

hence we got :

|a| = 2 , |b| = 1 , |c| = -3, |d| = 2, |e| = -1, |f| = -3, |g| = -1, |h| = -2 , |i|= -3

Thus we got minor matrix:

The cofactor matrix can be obtained easily:

Hence we got cofactor matrix:

Step 3, find the adjoint of A :

The adjoint can be obtained by transposing the cofactor matrix, i.e. turning its columns into rows. Adjoint :

Finally based on our formula:

A^{-1} = (1/-6) * adjoint matrix. Hence we got our inverse matrix :
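The four steps above can be sketched in numpy. This is a generic cofactor/adjoint implementation; since the article's matrix is shown as an image, a hypothetical 3×3 matrix is used here instead:

```python
import numpy as np

def adjoint_inverse(A):
    """Invert a square matrix via the cofactor/adjoint method."""
    n = A.shape[0]
    cof = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            # minor: determinant of A with row i and column j removed
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    adj = cof.T                        # adjoint = transpose of cofactor matrix
    return adj / np.linalg.det(A)      # A^{-1} = adjoint / det(A)

# Hypothetical example matrix (not the article's matrix):
A = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 2.0],
              [0.0, 1.0, 1.0]])
print(adjoint_inverse(A))
```

Multiplying the result with A should give the identity matrix, which is a quick way to check the steps.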

**Pseudo Inverse Matrix using SVD**

Sometimes we find a matrix that doesn't meet our previous requirements (it doesn't have an exact inverse); such a matrix doesn't have eigenvectors and eigenvalues. In order to find its pseudo inverse matrix, we are going to use the SVD (Singular Value Decomposition) method.

The pseudo inverse of matrix A is symbolized as A^{+}.

When the matrix is a square matrix :

A^{+} = A^{-1}

When the matrix is overdetermined :

A^{+} = VΣ^{-1}U^{t}

V, Σ and U are matrices from the SVD of A, where the SVD of A :

UΣV^{t}

can also be notated :

**UsV^{t}**, **USV^{t}** or **UDV^{t}**

**Example of Pseudo Inverse Calculation using SVD Method**

For example, we have a non-square matrix called “A” as follows :

U, s and VT of this matrix can be acquired manually, or automatically using this simple Python numpy script:

#!/usr/bin/env python3

import numpy as np

# example taken from Video Tutorials – All in One
# https://www.youtube.com/watch?v=P5mlg91as1c

a = np.array([[1, 2, 1], [2, 1, -1]])

np.set_printoptions(suppress=True)
np.set_printoptions(precision=3)

U, s, VT = np.linalg.svd(a, full_matrices=True)

print("U:\n {}".format(U))
print("s:\n {}".format(s))
print("VT:\n {}".format(VT))

or we can calculate the SVD manually using the following steps:

Step 1. Finding the singular vectors and singular values of U & V

In order to find the singular vectors and singular values of U & V, we have to find the eigenvectors and eigenvalues of kU & kV, where :

kU = A . A^{t}

kV = A^{t} . A
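The kU and kV products are easy to compute with numpy. The matrices in this manual example are shown as images in the original post, so the matrix A below is an assumption, chosen to be consistent with the derivation that follows (it gives kU = [[6, 5], [5, 6]], hence the characteristic equation (6 – λ)^2 – 25 = 0):

```python
import numpy as np

# Assumed example matrix (the post's own matrix is an image and is not
# reproduced here); chosen so kU matches the derivation below.
A = np.array([[1, 2, 1],
              [2, 1, 1]])

kU = A @ A.T   # 2x2 matrix; its eigenvectors build U
kV = A.T @ A   # 3x3 matrix; its eigenvectors build V
print(kU)      # [[6 5]
               #  [5 6]]
print(kV)
```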

Finding the eigenvalues and eigenvectors of kU:

| kU – λ I | = 0

λ = scalar (eigenvalue corresponding to u)

I = identity matrix

So :

(6 – λ)^{2} – 25 = 0

we got 2 eigenvalues:

λ_{1} = 11

and

λ_{2} = 1

next, we are going to find the eigenvectors of kU.

let x be an eigenvector of kU, then:

(kU – λ.I) * x = 0

for λ = 11 :

(-5 * x1) + (5 * x2) = 0

hence:

x1 = x2

we can pick x1 = -1, x2 = -1

for λ = 1 :

(5 * x1 ) + (5 * x2) = 0

hence:

x1 + x2 = 0

x1 = -1

x2 = 1

Finally we have collected enough information for U:

λ_{1} = 11 -> x1 = -1 and x2 = -1

λ_{2} = 1 -> x1 = -1 and x2 = 1
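The eigenpairs above can be cross-checked with numpy, again assuming the illustrative matrix whose kU is [[6, 5], [5, 6]]:

```python
import numpy as np

# kU = A . A^t for the assumed example matrix.
kU = np.array([[6, 5],
               [5, 6]])

eigvals, eigvecs = np.linalg.eig(kU)
print(eigvals)   # the eigenvalues 11 and 1 (order may vary)
print(eigvecs)   # columns are unit-length eigenvectors
```

Note that numpy already returns unit-length eigenvectors, which is exactly the normalization performed in the next step of the manual calculation.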

Du =

u =

in order to get U, we need to normalize u (make its columns orthonormal) using the following steps:

We have our u matrix:

to normalize u :

hence:

we got:

2^{1/2} means the square root of 2.

we got:

using the same method to normalize the other column, we got:

We got our normalized matrix:

so U =

in order to find V, we can do the same steps as for U, and we got :

By transposing matrix V, we got:
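The normalization step can be sketched as follows, using the eigenvector columns (-1, -1) and (-1, 1) found above:

```python
import numpy as np

# Unnormalized eigenvector columns for lambda = 11 and lambda = 1:
u = np.array([[-1.0, -1.0],
              [-1.0,  1.0]])

# Divide each column by its Euclidean length to make it a unit vector:
U = u / np.linalg.norm(u, axis=0)
print(U)   # every entry is +/- 1/sqrt(2), about 0.707
```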

Step 2. Find D.

We can acquire D matrix by multiplying Du with Dv:

D = Du * Dv


Finally we have collected enough information to calculate the pseudo inverse matrix using the SVD method. We got this information:

U =

Based on our pseudo inverse matrix formula:

A^{+} = VΣ^{-1}U^{t}

or

A^{+} = VD^{-1}U^{t}

We can get our pseudo inverse matrix easily:

* U^{t}

We got our pseudo inverse matrix (A dagger):
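As a final sanity check, the whole A^{+} = VD^{-1}U^{t} recipe can be run end to end with numpy and compared against np.linalg.pinv. The matrix A here is again the assumed illustrative matrix, since the post's own matrices are images:

```python
import numpy as np

# Assumed illustrative matrix (not reproduced from the post's images):
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 1.0, 1.0]])

U, s, VT = np.linalg.svd(A, full_matrices=True)

# Build the (3, 2) pseudo inverse of Sigma: reciprocals of the non-zero
# singular values on the diagonal, zeros elsewhere.
S_inv = np.zeros((A.shape[1], A.shape[0]))
S_inv[:len(s), :len(s)] = np.diag(1.0 / s)

A_plus = VT.T @ S_inv @ U.T      # A^{+} = V . Sigma^{-1} . U^t
print(A_plus)
print(np.allclose(A_plus, np.linalg.pinv(A)))  # True
```

If the manual calculation was carried out correctly, its result should match numpy's built-in pseudo inverse to floating-point precision.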