# Eli Simhayev - 2017
# Question 1
# I implemented the general iterative method once and derived the
# three stationary methods from it. Each method builds its
# preconditioner M differently (as required by the method).
### Note: r⁽ᴷ⁾ = b - A*x⁽ᴷ⁾
using LinearAlgebra, SparseArrays
"""
The general iterative method - page 59.
x⁽ᴷ⁺¹⁾ = x⁽ᴷ⁾ + ω*M⁻¹*r⁽ᴷ⁾
M - preconditioner matrix.
"""
function general(A, M, b, x, maxIter, omega=1)
    r = b - A*x; # r⁽⁰⁾
    # Residual norms & convergence factors for task b
    residual_norms, conv_factors = zeros(maxIter+1), zeros(maxIter);
    residual_norms[1] = norm(r);
    for k = 1:maxIter
        x = x + omega*(M\r);
        r = b - A*x;
        residual_norms[k+1] = norm(r); # save residual norm history
        conv_factors[k] = residual_norms[k+1]/residual_norms[k];
    end
    return x, residual_norms, conv_factors;
end
"""
The Gauss-Seidel method.
x⁽ᴷ⁺¹⁾ = x⁽ᴷ⁾ + (L+D)⁻¹*r⁽ᴷ⁾
M = L+D
"""
function gs(A, b, x, maxIter)
    M = tril(A); # M = L+D
    return general(A, M, b, x, maxIter);
end
"""
The Jacobi method.
x⁽ᴷ⁺¹⁾ = x⁽ᴷ⁾ + ω*D⁻¹*r⁽ᴷ⁾
M = D = diag(A)
"""
function jacobi(A, b, x, maxIter, omega=1)
    M = Diagonal(diag(A)); # replaces the deprecated diagm(diag(A))
    return general(A, M, b, x, maxIter, omega);
end
"""
The SOR method.
x⁽ᴷ⁺¹⁾ = x⁽ᴷ⁾ + ω(ωL+D)⁻¹*r⁽ᴷ⁾
M = ωL+D
"""
function sor(A, b, x, maxIter, omega=1)
    D = Diagonal(diag(A)); # replaces the deprecated diagm(diag(A))
    LD = tril(A);
    L = LD - D;
    M = omega*L + D; # M = ωL+D
    return general(A, M, b, x, maxIter, omega);
end
using PyPlot;
"""
Plot the residual norms and convergence factors.
"""
function plot_norms_factors()
    figure();
    # plot norms
    norms_range = 0:maxIter;
    subplot(1,2,1);
    semilogy(norms_range, gs_norms, "-m");
    semilogy(norms_range, jacobi_norms1, "-y");
    semilogy(norms_range, jacobi_norms2, "-g");
    semilogy(norms_range, sor_norms1, "-b");
    semilogy(norms_range, sor_norms2, "-r");
    legend(("Gauss-Seidel", "Jacobi (ω = 0.75)", "Jacobi (ω = 0.5)", "SOR (ω = 1.25)", "SOR (ω = 1.5)"));
    title("Residual Norms");
    xlabel("iteration k (0:$maxIter)");
    ylabel("residual norm");
    # plot factors
    factors_range = 1:maxIter;
    subplot(1,2,2);
    semilogy(factors_range, gs_factors, "-m");
    semilogy(factors_range, jacobi_factors1, "-y");
    semilogy(factors_range, jacobi_factors2, "-g");
    semilogy(factors_range, sor_factors1, "-b");
    semilogy(factors_range, sor_factors2, "-r");
    legend(("Gauss-Seidel", "Jacobi (ω = 0.75)", "Jacobi (ω = 0.5)", "SOR (ω = 1.25)", "SOR (ω = 1.5)"));
    title("Convergence Factors");
    xlabel("iteration k (1:$maxIter)");
    ylabel("convergence factor");
    show();
end
# ==== task b ==== #
n = 256;
A = sprandn(n, n, 2/n); A = transpose(A)*A + 0.2I; # 0.2I replaces the deprecated 0.2*speye(n)
A = Matrix(A); # replaces the deprecated full(A)
x = zeros(n); # x⁽⁰⁾
b = rand(n);
maxIter = 100;
# run Gauss-Seidel
gs_ans, gs_norms, gs_factors = gs(A, b, x, maxIter);
# run Jacobi
jacobi_ans1, jacobi_norms1, jacobi_factors1 = jacobi(A, b, x, maxIter, 0.75); # ω = 0.75
jacobi_ans2, jacobi_norms2, jacobi_factors2 = jacobi(A, b, x, maxIter, 0.5);  # ω = 0.5
# run SOR
sor_ans1, sor_norms1, sor_factors1 = sor(A, b, x, maxIter, 1.25); # ω = 1.25
sor_ans2, sor_norms2, sor_factors2 = sor(A, b, x, maxIter, 1.5);  # ω = 1.5
plot_norms_factors();
### Conclusions:
# The Jacobi method with ω = 0.75 gives the worst results and does not
# converge to the solution, because its residual norm increases rather
# than decreases.
# For the other methods, the residual norm decreases monotonically to
# zero, so they converge to the solution. Note that the SOR method with
# ω = 1.25 gives the best results, as can be seen from the residual.
# Moreover, as the convergence factors graph shows, |C| < 1 indeed
# holds, as we required in class for convergence.
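# The |C| < 1 requirement can also be checked directly, without running
# iterations: the general method x⁽ᴷ⁺¹⁾ = x⁽ᴷ⁾ + ω*M⁻¹*r⁽ᴷ⁾ has iteration
# matrix T = I - ω*M⁻¹*A and converges iff ρ(T) < 1. The sketch below is
# not part of the original submission; `iteration_spectral_radius` is a
# helper name introduced here for illustration.

```julia
using LinearAlgebra

"""
Spectral radius of the iteration matrix T = I - ω*M⁻¹*A
of the general stationary method with preconditioner M.
The method converges (for any x⁽⁰⁾) iff the result is < 1.
"""
function iteration_spectral_radius(A, M, omega=1)
    T = I - omega * (M \ A);            # iteration matrix
    return maximum(abs.(eigvals(Matrix(T))));
end

# e.g. for Gauss-Seidel (M = tril(A)):
#   iteration_spectral_radius(A, tril(A))
# and for Jacobi with ω = 0.75:
#   iteration_spectral_radius(A, Diagonal(diag(A)), 0.75)
```

# Computing eigenvalues is O(n³), so this is only a diagnostic for small
# test matrices, not something to do inside the solver itself.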
# ==== task c ====
# As we learned in class, SOR is the weighted version of the Gauss-Seidel
# method. In the general case it is difficult to determine an optimal
# (or even a good) SOR parameter ω, because the spectral radius of the
# iteration matrix is hard to determine. Therefore the SOR method is not
# guaranteed to be more efficient than Gauss-Seidel in general. However,
# if some information about the iteration matrix is known, ω can be
# chosen wisely to make SOR more efficient.
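# One classical example of such "information about the iteration matrix":
# for consistently ordered matrices (Young's theory), the optimal SOR
# parameter is ω_opt = 2 / (1 + sqrt(1 - ρ_J²)), where ρ_J is the spectral
# radius of the Jacobi iteration matrix B_J = I - D⁻¹A. The sketch below
# is an illustration added here, not part of the original submission, and
# `sor_omega_opt` is a name introduced for it.

```julia
using LinearAlgebra

"""
Classical optimal SOR parameter ω_opt = 2 / (1 + sqrt(1 - ρ_J²)),
valid for consistently ordered matrices whose Jacobi iteration
matrix B_J = I - D⁻¹A has spectral radius ρ_J < 1.
"""
function sor_omega_opt(A)
    D = Diagonal(diag(A));
    B_J = I - D \ A;                     # Jacobi iteration matrix
    rho_J = maximum(abs.(eigvals(Matrix(B_J))));
    @assert rho_J < 1 "Jacobi must converge for this formula to apply"
    return 2 / (1 + sqrt(1 - rho_J^2));
end
```

# Note that ω_opt always lies in (1, 2); this is why over-relaxation
# (ω > 1, as with ω = 1.25 in task b) can beat plain Gauss-Seidel.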