# Regular Inverse & Pseudo Inverse Matrix Calculation using Singular Value Decomposition

written by Antonius Ringlayer

• In order to calculate inverse kinematic solutions for a robotic arm, we can use several methods. One of my favorite methods is using the pseudo inverse of the Jacobian matrix.

So what is the pseudo inverse of a matrix, and how does it differ from the regular inverse of a matrix?

## Regular Inverse of a Square Matrix

Inverting a matrix means creating a reciprocal version of the matrix.

In math, the reciprocal of a number is its inverse: for a number b, the inverse is written b⁻¹ and comes from 1/b (the reciprocal of b). Likewise, for a matrix A, the inverse of A is written A⁻¹.

In order to be invertible, a matrix must meet 2 conditions:

1. the matrix has the same number of rows and columns (square matrix)
2. determinant of the matrix is not zero.
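These two conditions can be checked mechanically. Here is a minimal sketch with numpy; `try_invert` is a hypothetical helper name (not from the article):

```python
import numpy as np

# A matrix has a regular inverse only if it is square and det != 0.
def try_invert(m):
    m = np.asarray(m, dtype=float)
    if m.shape[0] != m.shape[1]:
        return None          # not square: no regular inverse
    if np.isclose(np.linalg.det(m), 0.0):
        return None          # determinant is zero: no regular inverse
    return np.linalg.inv(m)

print(try_invert([[1, 2], [2, 4]]))   # singular (det = 0), prints None
print(try_invert([[2, 1], [3, 6]]))   # invertible (det = 9)
```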

## The Pseudo Inverse of a Matrix

However, some matrices do not meet those 2 requirements and therefore cannot be inverted. In such cases, for example when the system is overdetermined, we can use a method called pseudo inversion to create a pseudo inverse version of our original matrix. The pseudo inverse matrix is symbolized as A† (A dagger).

We are going to use the SVD (Singular Value Decomposition) trick to decompose a non-square matrix, which has no eigenvalues and eigenvectors.

## Inverting an Invertible Square Matrix

Inverting a square matrix is easy. For example, we have the matrix A (the same 3×3 matrix used in my determinant post):

A =
| 1  2  1 |
| 3  3  1 |
| 2  1  2 |

In order to get the inverse of A, we are going to use the adjoint method. The inverse of A can be acquired from these steps:

• Step 1. calculate the determinant of A
• Step 2. find the cofactor matrix of A
• Step 3. find the adjoint of A
• Step 4. find the inverse

Based on https://ringlayer.wordpress.com/2018/07/03/matrix-determinant-laplace-and-sarrus-method , we found the determinant of our matrix = -6.

So we can go straight ahead to step 2: "find the cofactor matrix of A".

First of all, we need the minors of A. Each minor is the determinant of the 2×2 matrix that remains after deleting one row and one column of A. Hence we got the minor matrix:

|  5   4  -3 |
|  3   0  -3 |
| -1  -2  -3 |

The cofactor matrix can be obtained easily by flipping the sign of every other minor (the + − + checkerboard pattern). Hence we got the cofactor matrix:

|  5  -4  -3 |
| -3   0   3 |
| -1   2  -3 |

Step 3, find the adjoint of A. The adjoint can be obtained by transforming the cofactor matrix columns into rows (transposing it):

adjoint =
|  5  -3  -1 |
| -4   0   2 |
| -3   3  -3 |

Finally, based on our formula A⁻¹ = 1/-6 * adjoint matrix, we got our inverse matrix:

A⁻¹ =
| -5/6   1/2   1/6 |
|  2/3    0   -1/3 |
|  1/2  -1/2   1/2 |

## Pseudo Inverse Matrix using SVD
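The four steps above can be sketched in Python with numpy. `inverse_via_adjoint` is a hypothetical helper name, and the example matrix is my reconstruction of the 3×3 matrix from the linked determinant post:

```python
import numpy as np

# Adjoint-method inverse: cofactors -> adjoint (transpose) -> divide by det.
def inverse_via_adjoint(m):
    m = np.asarray(m, dtype=float)
    n = m.shape[0]
    det = np.linalg.det(m)
    cof = np.zeros_like(m)
    for i in range(n):
        for j in range(n):
            # minor: determinant after deleting row i and column j
            minor = np.delete(np.delete(m, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    adjoint = cof.T          # adjoint = transpose of the cofactor matrix
    return adjoint / det

A = np.array([[1, 2, 1], [3, 3, 1], [2, 1, 2]])
print(inverse_via_adjoint(A))
```

Multiplying the result by A gives the identity matrix, which confirms the steps.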

Sometimes we find a matrix that doesn't meet the previous requirements (it has no exact inverse); such a matrix has no eigenvectors and eigenvalues. In order to find its pseudo inverse matrix, we are going to use the SVD (Singular Value Decomposition) method.

The pseudo inverse of matrix A is symbolized as A⁺.

When the matrix is a square invertible matrix:

A⁺ = A⁻¹

When the matrix is overdetermined:

A⁺ = VΣ⁻¹Uᵀ

V, Σ and U are matrices from the SVD of A, where the SVD of A:

A = UΣVᵀ

can also be notated:

UsVᵀ or USVᵀ or UDVᵀ

## Example of Pseudo Inverse Calculation using SVD Method

For example, we have a non-square matrix called "A" as follows:

A =
| 1  2  1 |
| 2  1 -1 |

U, s, VT of this matrix can be acquired manually or automatically using this simple python numpy script:

```python
#!/usr/bin/env python3
import numpy as np

# example taken from Video Tutorials - All in One
# https://www.youtube.com/watch?v=P5mlg91as1c
a = np.array([[1, 2, 1], [2, 1, -1]])
np.set_printoptions(suppress=True, precision=3)
U, s, VT = np.linalg.svd(a, full_matrices=True)
print("U:\n{}".format(U))
print("s:\n{}".format(s))
print("VT:\n{}".format(VT))
```

Or we can calculate the SVD manually using the following steps:
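Before doing it manually, a quick numpy sanity check (on the same matrix as the script) confirms that the three factors multiply back to A, and that V·Σ⁺·Uᵀ matches numpy's built-in `np.linalg.pinv`:

```python
import numpy as np

a = np.array([[1, 2, 1], [2, 1, -1]])
U, s, VT = np.linalg.svd(a, full_matrices=True)

# Sigma must have the same shape as a (2x3), singular values on the diagonal.
sigma = np.zeros(a.shape)
sigma[:2, :2] = np.diag(s)
print(np.allclose(U @ sigma @ VT, a))          # True: factors rebuild a

# Sigma's pseudo inverse is 3x2 with 1/sigma_i on the diagonal.
sigma_pinv = np.zeros((a.shape[1], a.shape[0]))
sigma_pinv[:2, :2] = np.diag(1.0 / s)
a_pinv = VT.T @ sigma_pinv @ U.T
print(np.allclose(a_pinv, np.linalg.pinv(a)))  # True: matches built-in pinv
```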

Step 1. Finding the singular vectors and singular values for U & V

In order to find the singular vectors and singular values of U & V, we have to find the eigenvectors and eigenvalues of kU & kV, where:

kU = A · Aᵀ =
| 6  3 |
| 3  6 |

kV = Aᵀ · A =
|  5  4  -1 |
|  4  5   1 |
| -1  1   2 |

Finding the eigenvalues and eigenvectors of kU:

| kU – λI | = 0

λ = scalar (eigenvalue)

I = identity matrix

So:

(6 – λ)² – 9 = 0

we got 2 eigenvalues:

λ1 = 9

and

λ2 = 3

Next, we are going to find the eigenvectors of kU. Let x be an eigenvector of kU:

(kU – λI) · x = 0

for λ = 9: (-3 · x1) + (3 · x2) = 0

hence:

x1 = x2

x1 = -1

x2 = -1

for λ = 3: (3 · x1) + (3 · x2) = 0

hence:

x1 + x2 = 0

x1 = -1

x2 = 1

Finally we have collected enough information for U:

λ1 = 9 -> x1 = -1 and x2 = -1

λ2 = 3 -> x1 = -1 and x2 = 1

In order to get U, we need to normalize each eigenvector (make the columns orthonormal) by dividing it by its length. The eigenvector (-1, -1) has length √((-1)² + (-1)²) = √2, so the normalized column is (-1/√2, -1/√2). Using the same method on the other column (-1, 1), we got (-1/√2, 1/√2). So:

U =
| -1/√2  -1/√2 |
| -1/√2   1/√2 |

In order to find V, we can do the same steps with kV (its eigenvalues are 9, 3 and 0); we got:

V =
| -1/√2   1/√6   1/√3 |
| -1/√2  -1/√6  -1/√3 |
|   0    -2/√6   1/√3 |

By transposing matrix V, we got Vᵀ.

Step 2. Find D.

The singular values are the square roots of the eigenvalues of kU: σ1 = √9 = 3 and σ2 = √3. D (our Σ) has the singular values on its diagonal and the same shape as A:

D =
| 3   0   0 |
| 0  √3   0 |

Finally we have collected enough information to calculate the pseudo inverse matrix using the SVD method.
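The singular values are the square roots of the eigenvalues of kU = A·Aᵀ, and this relation can be verified numerically for the matrix from the numpy script:

```python
import numpy as np

a = np.array([[1, 2, 1], [2, 1, -1]])
eigvals = np.linalg.eigvalsh(a @ a.T)    # eigenvalues of kU, ascending order
s = np.linalg.svd(a, compute_uv=False)   # singular values, descending order

print(np.sqrt(eigvals[::-1]))            # sqrt of eigenvalues, descending
print(s)                                 # identical to the line above
```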

Based on our pseudo inverse matrix formula:

A⁺ = VΣ⁻¹Uᵀ

or

A⁺ = VD⁻¹Uᵀ

Since D is not square, D⁻¹ here means the pseudo inverse of D: the 3×2 matrix with 1/σ1 = 1/3 and 1/σ2 = 1/√3 on its diagonal. With V, D and U from step 1 and step 2, we can get our pseudo inverse matrix easily by multiplying V · D⁻¹ · Uᵀ.

We got our pseudo inverse matrix (A dagger):

A⁺ =
|  0     1/3 |
|  1/3    0  |
|  1/3  -1/3 |

# Calculus 3 – First Order Partial Derivatives

Recently, I encountered a matrix called the Jacobian matrix. The Jacobian matrix is a matrix that consists of the first order partial derivatives of a vector-valued function. The pseudo inverse of the Jacobian matrix can be used to solve the inverse kinematics problem in the robotics field. So what is a partial derivative? The partial derivative is symbolized as ∂ (partial dee) and is a part of calculus. Based on literature:

“a derivative of a function of two or more variables with respect to one variable, the other(s) being treated as constant.”

example function:

f(x,y) = y³x + 4x + 5y

∂f/∂x  means partial derivative of f(x,y) in respect to x. where we treat y as constant.

∂f/∂x = y³ + 4

steps:

• partial derivative (∂f/∂x) of y³x = y³ (the factor y³ is treated as a constant)
• partial derivative (∂f/∂x) of 4x = 4
• partial derivative (∂f/∂x) of the constant term 5y = 0
• so ∂f/∂x = y³ + 4

∂f/∂y  means partial derivative of f(x,y) in respect to y. where we treat x as constant.

f(x,y) = y³x + 4x + 5y

∂f/∂y = 3y²x + 5

steps :

• partial derivative (∂f/∂y) of y³x = 3y²x
• partial derivative (∂f/∂y) of the constant term 4x = 0
• partial derivative (∂f/∂y) of 5y = 5
• so ∂f/∂y = 3y²x + 5
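Symbolic partials like these can be spot-checked numerically with central finite differences. Below, `partial` is a hypothetical helper evaluated at the sample point (x, y) = (2, 3):

```python
# Central finite difference: nudge one coordinate by +/- h while
# holding the others constant, exactly as a partial derivative demands.
def f(x, y):
    return y**3 * x + 4*x + 5*y

def partial(g, point, idx, h=1e-6):
    lo, hi = list(point), list(point)
    lo[idx] -= h
    hi[idx] += h
    return (g(*hi) - g(*lo)) / (2 * h)

x, y = 2.0, 3.0
print(round(partial(f, (x, y), 0), 3))   # df/dx = y**3 + 4     -> 31.0
print(round(partial(f, (x, y), 1), 3))   # df/dy = 3*y**2*x + 5 -> 59.0
```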

Another example, the partial derivative of this explicit function:

y = 3x² – 5z² + 2x²z – 4xz² – 9

∂y/∂x = ?

steps :

• ∂y/∂x of 3x² = 2 · 3x²⁻¹ = 6x
• ∂y/∂x of -5z² = 0
• ∂y/∂x of 2x²z = 2 · 2x²⁻¹z = 4xz
• ∂y/∂x of -4xz² = -4z²
• ∂y/∂x of -9 = 0
• So ∂y/∂x = 6x + 4xz – 4z²

∂y/∂z = ?

steps:

• ∂y/∂z of 3x² = 0
• ∂y/∂z of -5z² = -2 · 5z²⁻¹ = -10z
• ∂y/∂z of 2x²z = 2x²
• ∂y/∂z of -4xz² = -2 · 4xz²⁻¹ = -8xz
• ∂y/∂z of -9 = 0
• So ∂y/∂z = -10z + 2x² – 8xz

What about the first order partial derivatives of a function of 3 variables?

Example: f(x,y,z) = z⁴ − x⁶y² + 5

f’x = ?

steps :

• ∂f/∂x of z⁴ = 0
• ∂f/∂x of -x⁶y² = -6x⁶⁻¹y² = -6x⁵y²
• ∂f/∂x of 5 = 0
• thus ∂f/∂x = -6x⁵y²

f’y = ?

• ∂f/∂y of z⁴ = 0
• ∂f/∂y of -x⁶y² = -x⁶ · 2y²⁻¹ = -2x⁶y
• ∂f/∂y of 5 = 0
• thus ∂f/∂y = -2x⁶y

f’z = ?

• ∂f/∂z of z⁴ = 4z⁴⁻¹ = 4z³
• ∂f/∂z of -x⁶y² = 0
• ∂f/∂z of 5 = 0
• thus ∂f/∂z = 4z³

www.ringlayer.com

https://ringlayer.github.io

# Matrix Determinant – Laplace and Sarrus Method

N.B. Some of the matrix pictures were generated from https://www.symbolab.com/solver/matrix-vector-calculator , while some of them are my handwriting.

It has been a long time since I last played with matrices. The last time I used a matrix was when I was studying input-output optimization using the Leontief matrix. Since it has been so long, I decided to write about an interesting matrix topic: the determinant.

What is a matrix determinant? Here is a short description from Wikipedia:

In linear algebra, the determinant is a value that can be computed from the elements of a square matrix. The determinant of a matrix A is denoted det(A), det A, or |A|.

First of all, it's only possible to find the determinant of a matrix when the matrix has the same number of columns and rows. There are many methods to calculate a determinant; the 2 methods I use most frequently are the Laplace method and the Sarrus method.

The simplest case is a matrix that consists of 2 rows and 2 columns only, where the determinant is the product of the main diagonal minus the product of the anti-diagonal (ad – bc).

For example, we have this matrix called U:

U =
| 2  1 |
| 3  6 |

|U| = (2 * 6) – (1 * 3) = 9

## Sarrus Method

Based on the Sarrus method, a 3×3 matrix determinant is the sum of the products of the three diagonal north-west to south-east lines of matrix elements, minus the sum of the products of the three diagonal south-west to north-east lines of elements. Suppose we have a 3×3 matrix called A:

A =
| 1  2  1 |
| 3  3  1 |
| 2  1  2 |

To calculate |A| using the Sarrus method, we add copies of the first 2 columns beside our original matrix:

| 1  2  1 | 1  2
| 3  3  1 | 3  3
| 2  1  2 | 2  1

so we get the sum of the products of the three diagonal north-west to south-east lines of matrix elements:

(1 * 3 * 2) + (2 * 1 * 2) + (1 * 3 * 1)

Then we need the sum of the products of the three diagonal south-west to north-east lines of elements:

(2 * 3 * 1) + (1 * 1 * 1) + (2 * 3 * 2)

Based on the above information, we got this:

|A| = ( (1 * 3 * 2) + (2 * 1 * 2) + (1 * 3 * 1) ) – ( (2 * 3 * 1) + (1 * 1 * 1) + (2 * 3 * 2) )

|A| = (6 + 4 + 3) – (6 + 1 + 12) = 13 – 19 = -6

## Laplace Method

Calculating the determinant of a 3×3 matrix using Laplace is simple.

The matrix is expanded along its first row and split into 3 small 2×2 matrices, where a, b and c are the first-row constants:

|A| = a * |M1| – b * |M2| + c * |M3|

Each 2×2 matrix is obtained by deleting the row and the column of the corresponding constant. Back again to the A matrix: a = 1, b = 2, c = 1, and we got these 2×2 matrices:

M1 =
| 3  1 |
| 1  2 |

M2 =
| 3  1 |
| 2  2 |

M3 =
| 3  3 |
| 2  1 |

determinant of the first matrix = 5

determinant of the second matrix = 4

determinant of the third matrix = -3

so we got: (1 * 5) – (2 * 4) + (1 * -3) = 5 – 8 – 3 = -6
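The first-row expansion above generalizes to any matrix size; here is a minimal recursive sketch in plain Python (`det_laplace` is a hypothetical name):

```python
# Recursive Laplace (cofactor) expansion along the first row,
# for an n x n matrix given as nested lists.
def det_laplace(m):
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j, a in enumerate(m[0]):
        # minor: drop row 0 and column j
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += (-1) ** j * a * det_laplace(minor)
    return total

A = [[1, 2, 1], [3, 3, 1], [2, 1, 2]]
print(det_laplace(A))   # -6, matching the Sarrus result above
```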

# Forward Kinematics Calculation for Robotic Arm

written by Antonius Ringlayer

www.ringlayer.com

https://github.com/ringlayer?tab=repositories

Kinematics is a branch of mathematics, physics and classical mechanical engineering. The 2 most commonly used kinds of kinematics in the robotics field are forward kinematics and inverse kinematics. Forward kinematics is used to calculate the position of the end effector when we know the angle of each joint, while inverse kinematics is used to compute the angle of each joint when we know the position of the end effector.

To calculate forward kinematics, we can use simple trigonometry, Denavit-Hartenberg parameters, or screw theory. In this example we are going to use simple trigonometry to calculate 2D forward kinematics for a 1 DOF arm and 3D forward kinematics for a 3 DOF robotic arm.

## Calculating 2D Forward Kinematics for 1 DOF robot arm

For example, here we have a 1 DOF robotic arm. The link length l is 10 cm and θ is 45°. The position of the end effector on our cartesian coordinate (x, y) can be calculated easily using simple trigonometry.

cos θ = x / l

x = cos θ * l  = cos 45° * 10 cm

sin θ = y / l

y = sin θ * l = sin 45°  * 10 cm

Here is python 2d 1 dof forward kinematic solver:

```python
#!/usr/bin/env python3
'''
robotic forward kinematics solver
coded by Antonius Ringlayer
'''
import math


def solve_1dof_fk(link_length, theta):
    # convert degrees to radians, then project the link onto x and y
    theta_rad = math.radians(theta)
    x = math.cos(theta_rad) * link_length
    y = math.sin(theta_rad) * link_length
    return x, y


x, y = solve_1dof_fk(10, 45)
print("x=" + str(x) + " y=" + str(y))
```

Result:

$ ./2d_1dof_forward_kinematic_solver.py
x=7.0710678118654755 y=7.071067811865475

Position of end effector = p(7.07 cm, 7.07 cm)

## Calculating 3D Forward Kinematics for 3 DOF robot arm

Calculating the position of the end effector in 3 dimensional space using trigonometry is not so hard.

For example, here we have a 3 DOF robot arm.

Side view of the robot arm, where: d2 is the height of the second joint from the floor, z is the height of the end effector from the floor (another dimension that we add to our cartesian geometry), l1 = length of link 1, l2 = length of link 2, θ2 is the d2 joint value, and θ3 is the d3 joint value.

Top view of the robot arm: the cartesian coordinate system represented from the top view of our robotic arm. θ1 is the d1 joint value.

We are going to calculate the position of the end effector (E) in 3 dimensional space (x, y, z).

Known variables:

l1 = l2 = 10 cm

d2 = 7 cm

θ1 =  60°

θ2 = 30°

θ3 = 140°

We mark some more degrees and lengths. Steps:

Since z = d2 + d3 – d6, the first step is finding d3.

step 1. finding d3

sin θ2 = d3 / l1

sin 30° = d3 / 10

d3 = sin 30° * 10 = 5 cm

The next step is finding the length of d6. In order to get d6, we need to collect more information first.

step 2. find θ4

180° = θ2 + 90° + θ4

θ4 = 180° –  (30° + 90°) = 60°

step 3. find θ5

θ3 = θ4 + θ5

140° = 60° + θ5 , hence θ5 = 80°

Based on all the information we obtained, we can redraw our picture as follows.

Step 4. finding d6 and z

Since we have θ5, finding d6 is child's play.

cos θ5 = d6 / l2

cos 80° = d6 / 10

d6 = cos 80° * 10 = 1.7 cm

So we have the information below:

z = 7 cm + 5 cm – 1.7 cm = 10.3 cm

Next we need to find x and y. We figure out that x and y can be obtained once we have the hypotenuse length (d1).

We figure out from the side view that d1 = d4 + d5.

step 5. finding d4

cos θ2 = d4 / l1

d4 = cos 30° * 10 = 8.66 cm

step 6. finding d5 and d1

since we have θ5 = 80°

sin  80° = d5 / 10, so d5 = sin  80° * 10 = 9.85 cm.

d1 = d4 + d5 = 8.66 +  9.85 = 18.51 cm

Back again to our top view: we have collected enough information to find x and y.

cos θ1 = x / d1

cos 60° = x / 18.51

x = cos 60° * 18.51  = 9.25 cm

sin 60° = y / 18.51 , y = sin 60°  *  18.51 = 16.03 cm

Finally we find that p(x, y, z) = p(9.25, 16.03, 10.3)
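The whole 3 DOF walkthrough can be condensed into one solver sketch. `solve_3dof_fk` is a hypothetical name; the variable names mirror the figure labels, and the angle relations are the ones derived in the steps above:

```python
import math


def solve_3dof_fk(l1, l2, d2, theta1, theta2, theta3):
    t1, t2, t3 = map(math.radians, (theta1, theta2, theta3))
    d3 = math.sin(t2) * l1            # step 1
    theta4 = math.pi / 2 - t2         # step 2: 180 = theta2 + 90 + theta4
    theta5 = t3 - theta4              # step 3: theta3 = theta4 + theta5
    d6 = math.cos(theta5) * l2        # step 4
    z = d2 + d3 - d6
    d4 = math.cos(t2) * l1            # step 5
    d5 = math.sin(theta5) * l2        # step 6
    d1 = d4 + d5                      # hypotenuse seen in the top view
    x = math.cos(t1) * d1
    y = math.sin(t1) * d1
    return x, y, z


x, y, z = solve_3dof_fk(10, 10, 7, 60, 30, 140)
print(round(x, 2), round(y, 2), round(z, 2))   # 9.25 16.03 10.26
```

The z value comes out as 10.26 cm rather than 10.3 cm because the hand calculation rounded d6 to 1.7 cm.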