Jacobi
The Jacobi method is the simplest iterative method for solving systems of linear equations, and it applies only to square systems, i.e., systems with as many unknown variables as equations.
The general iterative method for solving Ax = b is defined in terms of the following iterative formula:

S x^{(k+1)} = T x^{(k)} + b,

where A = S − T and it is fairly easy to solve systems of the form Sx = b.
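As a purely illustrative sketch of this splitting framework (not from the original text), the iteration can be written in Python/NumPy as follows; the function name splitting_iteration, the fixed iteration count, and the use of a dense solver for S are assumptions made here for brevity:

import numpy as np

def splitting_iteration(S, T, b, x0, num_iters=50):
    # Iterate S x^{k+1} = T x^k + b for a splitting A = S - T.
    # Sketch only: a real implementation would exploit the structure of S
    # instead of calling a dense solver at every step.
    x = np.array(x0, dtype=float)
    for _ in range(num_iters):
        x = np.linalg.solve(S, T @ x + b)
    return x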
The Jacobi method exploits the fact that diagonal systems can be solved with one division per unknown,
i.e., in O(n) flops. To implement Jacobi’s method, write A = L+D+U where D is the n×n matrix
containing the diagonal of A, L is the n × n matrix containing the lower triangular part of A, and
U is the n × n matrix containing the upper triangular part of A. (Note that this is not the L–D–U
factorization of A.) Let S be the diagonal part of A, S = D. Then, T = S − A = −(L + U).
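For the Jacobi splitting S = D, T = S − A = −(L + U), the diagonal solve reduces to one elementwise division per sweep. A minimal Python/NumPy sketch (the helper name and the fixed iteration count are illustrative assumptions; convergence is not guaranteed for arbitrary A):

import numpy as np

def jacobi_matrix_form(A, b, x0, num_iters=50):
    # Jacobi iteration via the splitting A = S - T with S = diag(A).
    # Each step solves S x^{k+1} = T x^k + b; since S is diagonal,
    # the solve costs one division per unknown (O(n) flops).
    d = np.diag(A)            # diagonal entries of A (S = diag(d))
    T = np.diag(d) - A        # T = S - A = -(L + U)
    x = np.array(x0, dtype=float)
    for _ in range(num_iters):
        x = (T @ x + b) / d   # diagonal solve: elementwise division
    return x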
This method can also be derived by examining each of the n equations in the linear system Ax = b in isolation. If in the i-th equation

\sum_{j=1}^{n} a_{ij} x_j = b_i

we solve for the value of x_i while assuming the other entries of x remain fixed, we obtain

x_i = \frac{1}{a_{ii}} \Big( b_i - \sum_{j \ne i} a_{ij} x_j \Big).

This suggests an iterative method defined by

x_i^{(k+1)} = \frac{1}{a_{ii}} \Big( b_i - \sum_{j \ne i} a_{ij} x_j^{(k)} \Big), \quad i = 1, \dots, n,
which is the Jacobi method. Note that the order in which the equations are examined is irrelevant, since the Jacobi method treats them independently. For this reason, the Jacobi method is also known as the method of simultaneous displacements, since the updates could in principle be done simultaneously.
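The componentwise form can also be coded directly; the sketch below (again a hypothetical helper, not from the text) makes the "simultaneous displacements" property explicit, since every update on the right-hand side reads only the previous iterate:

import numpy as np

def jacobi_componentwise(A, b, x0, num_iters=50):
    # Componentwise Jacobi: x_i^{k+1} = (b_i - sum_{j != i} a_ij x_j^k) / a_ii.
    n = len(b)
    x = np.array(x0, dtype=float)
    for _ in range(num_iters):
        x_new = np.empty_like(x)
        for i in range(n):
            s = A[i, :] @ x - A[i, i] * x[i]   # sum over j != i, old iterate only
            x_new[i] = (b[i] - s) / A[i, i]
        x = x_new                              # all components updated at once
    return x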
Gauss-Seidel
The Gauss-Seidel method is an iterative method for solving linear systems of the form
Ax = b.
For this, we use a sequence x^k that converges to the fixed point (solution) x. Given x^0, we build a sequence x^k such that

x^{k+1} = f(x^k), \quad k \in \mathbb{N},

where f is an affine function. Writing A = M − N, where M is an invertible matrix, we can take f(x) = M^{-1}(N x + b), so that the solution of Ax = b is a fixed point of f.
We decompose A in the following way: A = D − E − F, with
D the diagonal of A,
−E the strictly lower triangular part of A,
−F the strictly upper triangular part of A.
For Gauss-Seidel we take M = D − E and N = F (in the Jacobi method, M = D and N = E + F).
We obtain:

x^{k+1} = (D - E)^{-1} (F x^k + b).
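A componentwise Gauss-Seidel sweep in the same illustrative Python/NumPy style (the helper name and iteration count are assumptions); unlike Jacobi, each component update immediately reuses the components already updated in the current sweep:

import numpy as np

def gauss_seidel(A, b, x0, num_iters=50):
    # Gauss-Seidel sweep: x is updated in place, so component i already
    # sees the new values of components 0..i-1 from the current sweep.
    n = len(b)
    x = np.array(x0, dtype=float)
    for _ in range(num_iters):
        for i in range(n):
            s = A[i, :] @ x - A[i, i] * x[i]   # mix of new and old entries
            x[i] = (b[i] - s) / A[i, i]
    return x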

SOR
The successive overrelaxation method (SOR) is a method of solving a linear system of equations
derived by extrapolating the Gauss-Seidel method. This extrapolation takes the form of a weighted average between the previous iterate and the computed Gauss-Seidel iterate, applied successively to each component:

x_i^{(k)} = \omega \bar{x}_i^{(k)} + (1 - \omega) x_i^{(k-1)},

where x̄_i^{(k)} denotes a Gauss-Seidel iterate and ω is the extrapolation factor. The idea is to choose a value of ω that will accelerate the rate of convergence of the iterates to the solution.
In matrix terms, the SOR algorithm can be written as

x^{(k)} = (D - \omega L)^{-1} \big( \omega U + (1 - \omega) D \big) x^{(k-1)} + \omega (D - \omega L)^{-1} b,
where the matrices D, -L, and -U represent the diagonal, strictly lower-triangular, and strictly upper-triangular parts of A, respectively.
If ω = 1, the SOR method simplifies to the Gauss-Seidel method. A theorem due to Kahan (1958) shows that SOR fails to converge if ω is outside the interval (0, 2).
In general, it is not possible to compute in advance the value of ω that will maximize the rate of convergence of SOR. Frequently, some heuristic estimate is used, such as ω = 2 − O(h), where h is the mesh spacing of the discretization of the underlying physical domain.
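A componentwise SOR sweep in the same illustrative Python/NumPy style (helper name, iteration count, and the default ω are assumptions, with ω expected to lie in (0, 2)); setting omega = 1 reduces it to the Gauss-Seidel sweep above:

import numpy as np

def sor(A, b, x0, omega=1.5, num_iters=50):
    # SOR sweep: each component is a weighted average of its previous value
    # and the corresponding Gauss-Seidel update,
    #   x_i <- omega * x_gs + (1 - omega) * x_i.
    n = len(b)
    x = np.array(x0, dtype=float)
    for _ in range(num_iters):
        for i in range(n):
            s = A[i, :] @ x - A[i, i] * x[i]
            x_gs = (b[i] - s) / A[i, i]        # Gauss-Seidel value for component i
            x[i] = omega * x_gs + (1.0 - omega) * x[i]
    return x

As a quick check, on a strictly diagonally dominant matrix a call such as sor(A, b, np.zeros_like(b), omega=1.25) should approach np.linalg.solve(A, b) as the number of sweeps grows.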