# Eli Simhayev - 2017
# Question 1

# I implemented the general iterative method and used it for
# the three stationary methods. In each method, the preconditioner M
# is built differently (as the method requires).
### Note: r⁽ᴷ⁾ = b - Ax⁽ᴷ⁾

"""
The general iterative method - page 59.
   x⁽ᴷ⁺¹⁾ = x⁽ᴷ⁾ + ω*M⁻¹*r⁽ᴷ⁾
   M - the preconditioner matrix.
"""
function general(A,M,b,x,maxIter,omega=1)
    r = b-A*x; # r⁽⁰⁾

    # Residual norms & convergence factors for task b
    residual_norms, conv_factors = zeros(maxIter+1), zeros(maxIter);
    residual_norms[1] = norm(r);

    for k=1:maxIter
        x = x+omega*(M\r);
        r = b-A*x;
        residual_norms[k+1] = norm(r); # save residual norm history
        conv_factors[k] = residual_norms[k+1]/residual_norms[k];
    end

    return x, residual_norms, conv_factors;
end

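# Added sanity check (illustration only, not part of the assignment): with
# the ideal preconditioner M = A, a single step is exact, since
# x + A⁻¹*(b - A*x) = A⁻¹*b. The names Atest/btest are made up here.
Atest = [4.0 1.0; 1.0 3.0]; btest = [1.0, 2.0];
xtest, = general(Atest, Atest, btest, zeros(2), 1);
@assert norm(btest - Atest*xtest) < 1e-12
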
"""
The Gauss-Seidel method.
   x⁽ᴷ⁺¹⁾ = x⁽ᴷ⁾ + (L+D)⁻¹*r⁽ᴷ⁾
   M = L+D
"""
function gs(A,b,x,maxIter)
    M = tril(A); # M = L+D (tril keeps the diagonal, so this is exactly L+D)

    return general(A,M,b,x,maxIter);
end

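# Note (added): M = tril(A) is lower triangular, and Julia's \ detects this,
# so M\r inside general is a forward substitution costing O(n²) per
# iteration rather than a full factorization.
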
"""
The Jacobi method.
   x⁽ᴷ⁺¹⁾ = x⁽ᴷ⁾ + ω*D⁻¹*r⁽ᴷ⁾
   M = D = diag(A)
"""
function jacobi(A,b,x,maxIter,omega=1)
    M = diagm(diag(A));

    return general(A,M,b,x,maxIter,omega);
end

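# Note (added): M is diagonal here, so M\r reduces to an elementwise
# division; an equivalent step that avoids forming the dense matrix is
#   x = x + omega*(r./diag(A))
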
"""
The SOR method.
   x⁽ᴷ⁺¹⁾ = x⁽ᴷ⁾ + ω(ωL+D)⁻¹*r⁽ᴷ⁾
   M = ωL+D
"""
function sor(A,b,x,maxIter,omega=1)
    D = diagm(diag(A));
    LD = tril(A);
    L = LD-D; # strictly lower-triangular part
    M = omega*L+D; # M = ωL+D

    return general(A,M,b,x,maxIter,omega);
end

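# Consistency check (added; the names Achk/bchk are made up): with ω = 1 the
# SOR preconditioner is M = L+D = tril(A), so SOR must match Gauss-Seidel.
Achk = [4.0 1.0; 1.0 3.0]; bchk = [1.0, 2.0];
@assert sor(Achk,bchk,zeros(2),10,1)[1] ≈ gs(Achk,bchk,zeros(2),10)[1]
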
using PyPlot;
"""
Plot the residual norms and convergence factors
(uses the globals defined in task b below).
"""
function plot_norms_factors()
    figure();

    # plot norms
    norms_range = 0:maxIter;
    subplot(1,2,1);
    semilogy(norms_range,gs_norms,"-m");
    semilogy(norms_range,jacobi_norms1,"-y");
    semilogy(norms_range,jacobi_norms2,"-g");
    semilogy(norms_range,sor_norms1,"-b");
    semilogy(norms_range,sor_norms2,"-r");
    legend(("Gauss-Seidel", "Jacobi (ω = 0.75)", "Jacobi (ω = 0.5)", "SOR (ω = 1.25)", "SOR (ω = 1.5)"));
    title("Residual Norms");
    xlabel("iteration (0:$maxIter)");
    ylabel("residual norm");

    # plot factors
    factors_range = 1:maxIter;
    subplot(1,2,2);
    semilogy(factors_range,gs_factors,"-m");
    semilogy(factors_range,jacobi_factors1,"-y");
    semilogy(factors_range,jacobi_factors2,"-g");
    semilogy(factors_range,sor_factors1,"-b");
    semilogy(factors_range,sor_factors2,"-r");
    legend(("Gauss-Seidel", "Jacobi (ω = 0.75)", "Jacobi (ω = 0.5)", "SOR (ω = 1.25)", "SOR (ω = 1.5)"));
    title("Convergence Factors");
    xlabel("iteration (1:$maxIter)");
    ylabel("convergence factor");

    show();
end

# ==== task b ==== #
n = 256;
A = sprandn(n,n,2/n); A = transpose(A)*A+0.2*speye(n); # AᵀA + 0.2I is symmetric positive definite
A = full(A);
x = zeros(n); # x⁽⁰⁾
b = rand(n);
maxIter = 100;

# run Gauss-Seidel
gs_ans, gs_norms, gs_factors = gs(A,b,x,maxIter);

# run Jacobi
jacobi_ans1, jacobi_norms1, jacobi_factors1 = jacobi(A,b,x,maxIter,0.75); # ω = 0.75
jacobi_ans2, jacobi_norms2, jacobi_factors2 = jacobi(A,b,x,maxIter,0.5);  # ω = 0.5

# run SOR
sor_ans1, sor_norms1, sor_factors1 = sor(A,b,x,maxIter,1.25); # ω = 1.25
sor_ans2, sor_norms2, sor_factors2 = sor(A,b,x,maxIter,1.5);  # ω = 1.5

plot_norms_factors();

### Conclusions:
# The Jacobi method with ω = 0.75 performs worst: its residual norm
# increases rather than decreases, so it does not converge to the solution.
# For the other methods the residual norm decreases monotonically toward
# zero, so they converge. SOR with ω = 1.25 performs best, with the fastest
# residual decrease. Moreover, the convergence-factors graph shows that for
# the converging methods the factor indeed stays below 1 (|C| < 1), as we
# required in class for convergence.

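# Direct check of the convergence condition (added sketch): a stationary
# method converges iff ρ(T) < 1 for its iteration matrix T = I - ω*M⁻¹*A.
# For example, for Gauss-Seidel on the A built above:
T_gs = eye(n) - (tril(A)\A);
println("ρ(T_gs) = ", maximum(abs.(eigvals(T_gs))));
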
# ==== task c ====
# As we learned in class, SOR is the weighted version of the Gauss-Seidel
# method. In the general case it is difficult to determine an optimal (or
# even a good) SOR parameter ω, because the spectral radius of the iteration
# matrix is hard to determine. Therefore SOR is not guaranteed to be more
# efficient than Gauss-Seidel in general. However, if some information about
# the iteration matrix is known, ω can be chosen wisely to make SOR more
# efficient; see the sketch below.
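
# Sketch (added, not part of the assignment): when the iteration matrix can
# be formed explicitly, a reasonable ω can be found by scanning the spectral
# radius ρ(T(ω)) of T(ω) = I - ω*(ωL+D)⁻¹*A and keeping the minimizer.
# The helper name best_sor_omega is made up for illustration.
function best_sor_omega(A, omegas=0.1:0.1:1.9)
    D = diagm(diag(A)); L = tril(A)-D;
    rhos = [maximum(abs.(eigvals(eye(size(A,1)) - w*((w*L+D)\A)))) for w in omegas];
    return omegas[indmin(rhos)], minimum(rhos);
end
# Example usage: omega_opt, rho_opt = best_sor_omega(A);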