# Eli Simhayev - 2017
# Question 3

#==== task b ====
Let Ar := A*r⁽ᵏ⁾. Then:

α = ⟨r⁽ᵏ⁾, A*r⁽ᵏ⁾⟩/⟨A*r⁽ᵏ⁾, A*r⁽ᵏ⁾⟩ = ⟨r⁽ᵏ⁾, Ar⟩/⟨Ar, Ar⟩

so a single matrix-vector product A*r⁽ᵏ⁾ per iteration suffices, as desired. =#
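A minimal numerical sketch (using a hypothetical 2×2 system, not the assignment's matrix) checking that this α minimizes ‖r − αAr‖ over α while computing only one matrix-vector product:

```julia
using LinearAlgebra

# Hypothetical small SPD system, for illustration only.
A = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]
x = zeros(2)

r  = b - A*x
Ar = A*r                          # the only matrix-vector product
α  = dot(r, Ar) / dot(Ar, Ar)

# α should minimize f(a) = ‖r - a*Ar‖, so nudging it either way
# must not decrease the residual norm.
f(a) = norm(r - a*Ar)
@assert f(α) <= f(α + 1e-6)
@assert f(α) <= f(α - 1e-6)
```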
# ==== task c ====
function miners(A, b, x, maxIter)
    residual_norms = zeros(maxIter+1)
    r = b - A*x;                      # r⁽⁰⁾
    residual_norms[1] = norm(r);
    for k = 1:maxIter
        Ar = A*r;                     # the only matvec; reused below
        alpha = dot(r, Ar)/dot(Ar, Ar);
        x = x + alpha*r;
        r = r - alpha*Ar;             # reuse Ar instead of recomputing A*r
        residual_norms[k+1] = norm(r);  # save residual norm history
    end
    return x, residual_norms;
end
A = [ 5  4  4  -1   0;
      3 12  4  -5  -5;
     -4  2  6   0   3;
      4  5 -7  10   2;
      1  2  5   3  10]
b = ones(5);   # b = [1,1,1,1,1]ᵀ
x = zeros(5);  # x⁽⁰⁾ = [0,0,0,0,0]ᵀ
maxIter = 50;

miners_ans, miners_norms = miners(A, b, x, maxIter);
println("A*miners_ans-b: ", A*miners_ans-b);
# A*miners_ans-b: [3.29778e-9,-4.38476e-9,-2.78555e-9,-8.85434e-9,7.79554e-9]
# The residual is ~1e-9 after 50 iterations, so the method indeed converges.
### plot
using PyPlot;
figure();
y = miners_norms;
semilogy(0:maxIter, y, "-r");
title("Residual norms");
xlabel("Iteration k");
ylabel("Residual norm");
show();
#==== task d ====
The residual is defined as r⁽ᵏ⁾ = b - Ax⁽ᵏ⁾.
In each iteration, α is chosen to minimize ‖r⁽ᵏ⁾ - αAr⁽ᵏ⁾‖ over all α,
and α = 0 would leave the previous residual unchanged, so ‖r⁽ᵏ⁺¹⁾‖ ≤ ‖r⁽ᵏ⁾‖.
Combined with convergence (lim k→∞ x⁽ᵏ⁾ = x* with Ax* = b), the residual norms
decrease monotonically to zero, which explains why the graph is monotone. =#
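The monotonicity claim can also be checked numerically. A self-contained sketch, re-implementing one minimal-residual step on a hypothetical SPD matrix (names `mr_norms` and the test matrix are illustrative, not from the assignment):

```julia
using LinearAlgebra

# Each step picks α minimizing ‖r - α*Ar‖ over α, so the residual
# norm can never increase (α = 0 would recover the old residual).
function mr_norms(A, b, x, maxIter)
    r = b - A*x
    norms = [norm(r)]
    for _ = 1:maxIter
        Ar = A*r
        dot(Ar, Ar) == 0 && break   # exact solution reached
        α  = dot(r, Ar) / dot(Ar, Ar)
        x += α*r
        r -= α*Ar
        push!(norms, norm(r))
    end
    return norms
end

# Hypothetical SPD test matrix, for illustration only.
norms = mr_norms([5.0 2.0; 2.0 4.0], ones(2), zeros(2), 20)
@assert all(diff(norms) .<= 1e-12)   # non-increasing residual norms
```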