If the answer is "little or nothing," I invite you to begin learning about numerical methods.

This space was created for you as a learning tool...

I hope the blog becomes something useful. Enjoy it.

Tuesday, July 27, 2010

Cholesky Decomposition



While symmetric, positive definite matrices are rather special, they occur quite frequently in some applications, so their special factorization, called Cholesky decomposition, is good to know about. When you can use it, Cholesky decomposition is about a factor of two faster than alternative methods for solving linear equations. Instead of seeking arbitrary lower and upper triangular factors L and U, Cholesky decomposition constructs a lower triangular matrix L whose transpose L^T can itself serve as the upper triangular part. In other words we replace the equation A = LU by:

A = L \cdot L^T

This factorization is sometimes referred to as "taking the square root" of the matrix A. The
components of L^T are of course related to those of L by:

(L^T)_{ij} = L_{ji}


Writing out this equation in components, one readily obtains the analogs of the LU equations:

L_{ii} = \left( a_{ii} - \sum_{k=1}^{i-1} L_{ik}^2 \right)^{1/2}

and

L_{ji} = \frac{1}{L_{ii}} \left( a_{ji} - \sum_{k=1}^{i-1} L_{ik} L_{jk} \right), \qquad j = i+1, i+2, \ldots, N
If you apply the last two equations in the order i = 1, 2, . . . ,N, you will see
that the L’s that occur on the right-hand side are already determined by the time they are
needed.
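The component formulas above translate almost directly into code. Here is a minimal Python sketch (not from the original post; the function name `cholesky` and the list-of-rows representation are my own choices):

```python
import math

def cholesky(A):
    # Lower triangular L with A = L * L^T, for a symmetric
    # positive definite matrix A given as a list of row lists.
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                # Diagonal entry: L_ii = sqrt(a_ii - sum_k L_ik^2)
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                # Off-diagonal: L_ij = (a_ij - sum_k L_ik * L_jk) / L_jj
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L
```

For example, `cholesky([[4.0, 2.0], [2.0, 3.0]])` gives L with L[0][0] = 2, L[1][0] = 1 and L[1][1] = sqrt(2). Note that every L entry on the right-hand side is already computed by the time it is needed, just as the text says.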
Source:
http://www.mpi-hd.mpg.de/astrophysik/HEA/internal/nuermical_Recipes/f2-9.pdf

Monday, July 26, 2010

Gauss-Seidel






The Jacobi, Gauss-Seidel, and SOR Methods

Jacobi
The Jacobi method is the simplest iterative method for solving systems of linear equations, and it applies only to square systems, i.e., systems with as many unknown variables as equations.


The general iterative method for solving Ax = b is defined in terms of the following iterative formula:

x^{(k+1)} = S^{-1} \left( T x^{(k)} + b \right)

where A = S − T and it is fairly easy to solve systems of the form Sx = b.
The Jacobi method exploits the fact that diagonal systems can be solved with one division per unknown,
i.e., in O(n) flops. To implement Jacobi’s method, write A = L+D+U where D is the n×n matrix
containing the diagonal of A, L is the n × n matrix containing the lower triangular part of A, and
U is the n × n matrix containing the upper triangular part of A. (Note that this is not the L–D–U
factorization of A.) Let S be the diagonal part of A, S = D. Then, T = S − A = −(L + U).

Also, this method is easily derived by examining each of the n equations in the linear system
Ax = b in isolation. If the i-th equation is:

\sum_{j=1}^{n} a_{ij} x_j = b_i

we solve for the value of x_i while assuming the other entries of x remain fixed, and we obtain:

x_i = \frac{b_i - \displaystyle\sum_{j \neq i} a_{ij} x_j}{a_{ii}}

This suggests an iterative method defined by

x^{(k+1)}_i = \frac{b_i - \displaystyle\sum_{j \neq i} a_{ij} x^{(k)}_j}{a_{ii}}
which is the Jacobi method. Note that the order in which the equations are examined is irrelevant, since the Jacobi method treats them independently. For this reason, the Jacobi method is also known as the method of simultaneous displacements, since the updates could in principle be done simultaneously.
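The component-wise formula above can be sketched in a few lines of Python. This is my own illustration, not code from the post; the function name `jacobi` and the stopping tolerance are arbitrary choices:

```python
def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    # Jacobi iteration for Ax = b; A is a list of row lists.
    # Every component is updated from the *previous* iterate,
    # hence the name "method of simultaneous displacements".
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x
```

For the diagonally dominant system 4x + y = 1, 2x + 3y = −1, the iterates converge to the exact solution (0.4, −0.6).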

Gauss Seidel

The Gauss-Seidel method is an iterative method for solving linear systems such as

Ax = b

For this, we use a sequence x^{(k)} which converges to the fixed point (solution) x. Given x^{(0)}, we build a sequence x^{(k)} such that

x^{(k+1)} = F(x^{(k)}) with k \in \mathbb{N}

and A = M − N,
where M is an invertible matrix.

\begin{array}{cccc} Ax=b \Leftrightarrow  Mx=Nx+b \Leftrightarrow & x &=& M^{-1}Nx+M^{-1}b \\ & &=& F(x) \end{array}

where F is an affine function.

We decompose A in the following way: A = D − E − F, with

- D the diagonal of A
- −E the strictly lower triangular part of A
- −F the strictly upper triangular part of A.

For Gauss-Seidel, M = D − E and N = F (in the Jacobi method, M = D and N = E + F).

We obtain:

x^{(k+1)}_i = \displaystyle\frac{b_i - \displaystyle\sum_{j=1}^{i-1} a_{ij} x^{(k+1)}_j - \displaystyle\sum_{j=i+1}^{n} a_{ij} x^{(k)}_j }{a_{ii}}
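The formula above uses the already-updated components x_j^{(k+1)} for j < i and the old components for j > i, which is easy to express by updating x in place. A minimal Python sketch (my own illustration; the name `gauss_seidel` and defaults are assumptions):

```python
def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    # Gauss-Seidel iteration for Ax = b: each new value x_j
    # (j < i) is used as soon as it is available, so x can
    # simply be overwritten in place.
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            s1 = sum(A[i][j] * x[j] for j in range(i))         # updated entries
            s2 = sum(A[i][j] * x[j] for j in range(i + 1, n))  # old entries
            new = (b[i] - s1 - s2) / A[i][i]
            diff = max(diff, abs(new - x[i]))
            x[i] = new
        if diff < tol:
            break
    return x
```

On the same system 4x + y = 1, 2x + 3y = −1, Gauss-Seidel reaches the solution (0.4, −0.6) in fewer iterations than Jacobi, since each sweep already uses the freshest values.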

SOR

The successive overrelaxation method (SOR) is a method of solving a linear system of equations Ax=b derived by extrapolating the Gauss-Seidel method. This extrapolation takes the form of a weighted average between the previous iterate and the computed Gauss-Seidel iterate successively for each component,

x_i^{(k)} = \omega \bar{x}_i^{(k)} + (1-\omega) x_i^{(k-1)}

where \bar{x} denotes a Gauss-Seidel iterate and ω is the extrapolation factor. The idea is to choose a value for ω that will accelerate the rate of convergence of the iterates to the solution.

In matrix terms, the SOR algorithm can be written as

x^{(k)} = (D - \omega L)^{-1} \left[ \omega U + (1-\omega) D \right] x^{(k-1)} + \omega (D - \omega L)^{-1} b

where the matrices D, -L, and -U represent the diagonal, strictly lower-triangular, and strictly upper-triangular parts of A, respectively.

If ω = 1, the SOR method simplifies to the Gauss-Seidel method. A theorem due to Kahan (1958) shows that SOR fails to converge if ω is outside the interval (0, 2).

In general, it is not possible to compute in advance the value of ω that will maximize the rate of convergence of SOR. Frequently, some heuristic estimate is used, such as ω = 2 − O(h), where h is the mesh spacing of the discretization of the underlying physical domain.
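The component-wise weighted average is a small change to the Gauss-Seidel sweep. A Python sketch (my own illustration; the name `sor` and the default ω = 1.25 are arbitrary, not values from the post):

```python
def sor(A, b, omega=1.25, x0=None, tol=1e-10, max_iter=500):
    # SOR iteration for Ax = b: each component is a weighted
    # average of the Gauss-Seidel iterate and the previous
    # value. omega = 1 reduces to plain Gauss-Seidel.
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            gs = (b[i] - s) / A[i][i]              # Gauss-Seidel iterate x-bar
            new = omega * gs + (1.0 - omega) * x[i]  # weighted average
            diff = max(diff, abs(new - x[i]))
            x[i] = new
        if diff < tol:
            break
    return x
```

For a symmetric positive definite matrix such as [[4, 1], [1, 3]], SOR converges for any ω in (0, 2), consistent with Kahan's theorem quoted above.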

Wednesday, July 21, 2010

LU decomposition

Check out this SlideShare Presentation:

Gauss elimination method

Check out this SlideShare Presentation:

Linear Algebraic Equations

Check out this SlideShare Presentation: