# Eli Simhayev - 2017
# Question 4

I = eye(10);

# ==== task a ====
rowIdx = [1,1,1,2,2,2,3,3,3,3,4,4,4,4,4,5,5,5,5,5,6,6,6,6,7,7,7,7,7,7,7,8,8,8,8,8,9,9,9,10,10,10,10,10];
colIdx = [1,2,3,1,2,3,1,2,3,4,3,4,5,7,10,4,5,6,7,8,5,6,7,8,4,5,6,7,8,9,10,5,6,7,8,10,7,9,10,4,7,8,9,10];
vals = [2,-1,-1,-1,2,-1,-1,-1,3,-1,-1,4,-1,-1,-1,-1,4,-1,-1,-1,-1,3,-1,-1,-1,-1,-1,6,-1,-1,-1,-1,-1,-1,4,-1,-1,2,-1,-1,-1,-1,-1,4];
A = sparse(rowIdx,colIdx,vals);
A = full(A);
#=
10×10 Array{Int64,2}:
  2  -1  -1   0   0   0   0   0   0   0
 -1   2  -1   0   0   0   0   0   0   0
 -1  -1   3  -1   0   0   0   0   0   0
  0   0  -1   4  -1   0  -1   0   0  -1
  0   0   0  -1   4  -1  -1  -1   0   0
  0   0   0   0  -1   3  -1  -1   0   0
  0   0   0  -1  -1  -1   6  -1  -1  -1
  0   0   0   0  -1  -1  -1   4   0  -1
  0   0   0   0   0   0  -1   0   2  -1
  0   0   0  -1   0   0  -1  -1  -1   4
as we desired.
=#
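# Sanity check -- a minimal sketch, not part of the original assignment:
# a graph Laplacian must be symmetric, so verify the assembled matrix is.
println("A symmetric: ", issymmetric(A)); # expected: true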

# ==== task b ====
#=
Claim: ρ(A) <= ||A|| for any matrix A and any induced norm || * ||.
Proof: proved in question 2 - Lemma 1.
||A||₁ is the maximum absolute column sum norm; here it is attained
by column 7 and equals 12.
=#
p = norm(A,1); # induced 1-norm = 12
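# Numerical sanity check -- a sketch using Base's dense eigvals:
# the spectral radius should indeed be bounded by the induced 1-norm.
specrad = maximum(abs.(eigvals(A)));
println("ρ(A) = ", specrad, " <= ||A||₁ = ", p);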

# ==== task c ====
#=
Claim: v is an eigenvector for λmax of ρI-A  <=>  v is an eigenvector for λmin of A.
Proof: Let (v,λ) be an eigenvector/eigenvalue pair of A such that
       v is the eigenvector with the maximal eigenvalue of ρI-A. Then:
       (ρI-A)v = (ρ-λ)v
       ρv - Av = ρv - λv
       Av = λv

       λ is the minimal eigenvalue of A:
       Assume γ is an eigenvalue of A with γ < λ.
       Then ρ-γ > ρ-λ, contradicting the maximality of ρ-λ
       among the eigenvalues of ρI-A.
=#
max_val, max_vec = eigs(p*I-A, which=:LM, nev=1); # maximal eigenvalue of ρI-A
min_val, min_vec = eigs(A, which=:SM, nev=1);     # minimal eigenvalue of A

println("max_vec: ", max_vec); # max_vec: [0.316228; ... ;0.316228; 0.316228]
println("min_vec: ", min_vec); # min_vec: [0.316228; ... ;0.316228; 0.316228]
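# Sanity check -- a sketch verifying the relation from the proof:
# the minimal eigenvalue of A should equal ρ minus the maximal eigenvalue of ρI-A.
println("ρ - λmax(ρI-A): ", p - max_val[1]); # should match min_val[1], i.e. ~0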

# ==== task d ====
#=
I will show that A*[1,1,...,1]ᵀ = λmin*[1,1,...,1]ᵀ.
Denote the i-th row of A as Rᵢ and let v = [1,1,...,1]ᵀ.
     (Av)ᵢ = Σⱼ Aᵢⱼ  (the i-th row sum)
A is a Laplacian matrix, so every row sum and column sum of A is zero:
in each row sum, the degree of the vertex cancels against a "-1" for each neighbor.
Hence:
     Av = (0,0,...,0)ᵀ = 0v.
So v is an eigenvector corresponding to 0.
A is positive semidefinite, and therefore every eigenvalue satisfies λ >= 0.
Conclusion: v is an eigenvector of A with the smallest eigenvalue, as we desired.
=#
println("Av: ", A*ones(10)); # Av = [0.0,0.0,...,0.0]
println("λmin*v: ", min_val[1]*ones(10)); # λmin*v = [1.77636e-16,1.77636e-16,...,1.77636e-16]
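# Row-sum check -- a sketch (Julia 0.6-style reduction along dims):
# every row of a Laplacian sums to zero, which is exactly why A*ones(10) = 0.
println("row sums: ", sum(A,2)); # expected: all zeros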

# ==== task e ====
#=
Power iteration: x⁽ᵏ⁺¹⁾ = (ρI-A)x⁽ᵏ⁾, normalized each step.
=#
maxIter = 100;
x = 0.25*ones(10);
for k = 1:maxIter
    x = (p*I-A)*x;
    x = x/norm(x,1);
end
println("x: ", x); # x: [0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1]
# x = 0.1*[1,1,...,1], i.e. a scalar multiple of the constant vector.
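# Convergence check -- a sketch: x should be (numerically) the dominant
# eigenvector of ρI-A, i.e. the constant vector from tasks c & d.
println("||x - 0.1*ones||₁: ", norm(x - 0.1*ones(10), 1)); # expected ~0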

# ==== task f ====
#=
Deflated power iteration: x⁽ᵏ⁺¹⁾ = (ρ(I-vvᵀ/vᵀv)-A)x⁽ᵏ⁾, normalized each step.
=#
x = 0.25*ones(10);
v = ones(10); vvT = v*transpose(v);
for k = 1:maxIter
    x = (p*(I-vvT/dot(v,v))-A)*x;
    x = x/norm(x,1);
end
println("Fiedler Vector: ", x);
# Fiedler Vector: [0.185656,0.185656,0.128687,-0.0247382,-0.0724213,-0.0853326,-0.0747033,-0.0826887,-0.0871913,-0.0729246]
# In other words, the partition is {1,2,3} and {4,5,6,7,8,9,10}.
# This result makes sense, since these two groups are connected by only one edge,
# unlike any other pair of subgroups in the given graph.
# Why the method converges to the eigenvector with the
# second smallest eigenvalue of A:
#     ρI-A and ρ(I-vvᵀ/vᵀv)-A have the same eigenvectors, except that in
#     ρ(I-vvᵀ/vᵀv)-A the vector v corresponds to the eigenvalue 0 (because
#     I-vvᵀ/vᵀv projects v out), so v is no longer the eigenvector with the
#     maximal eigenvalue. Hence the iteration converges to the eigenvector with
#     the *second largest* eigenvalue of ρI-A, which is the eigenvector with the
#     second smallest eigenvalue of A (by tasks c & d).
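# Cross-check -- a sketch: compare the deflated iteration against eigs directly.
# Ask for the two smallest-magnitude eigenpairs of A; the second is the Fiedler pair.
vals2, vecs2 = eigs(A, which=:SM, nev=2);
i2 = sortperm(real(vals2))[2];                 # index of the second-smallest eigenvalue
fiedler = vecs2[:,i2] / norm(vecs2[:,i2], 1);  # 1-normalized, like x above
println("eigs Fiedler: ", fiedler);            # should match x up to sign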


#==== task g ====
Let B := ρ(I-vvᵀ/vᵀv)-A and define:
   x⁽ᵏ⁺¹⁾ = (B-ρI)⁻¹x⁽ᵏ⁾
As we learned in class, the inverse power method finds an approximate eigenvector
whose eigenvalue is closest to a given shift μ. In our case, μ = ρ and the matrix is B.
Since ρ is greater than every eigenvalue of B, the eigenvalue of B closest to ρ
is its maximal one, ρ-λ₂ (where λ₂ is the second smallest eigenvalue of A):
its distance from ρ is λ₂, smaller than the distance λᵢ of any other ρ-λᵢ,
and smaller than the distance ρ of the eigenvalue 0.
So the method returns the maximal eigenvector of B, i.e. the eigenvector with the
second smallest eigenvalue of A, as we desired.
=#
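# A minimal sketch of the iteration described above (not part of the original
# submission): shifted inverse power iteration on B with shift ρ.
B = p*(I - vvT/dot(v,v)) - A;
x = 0.25*ones(10);
F = lufact(B - p*I);   # factor once, reuse in every iteration
for k = 1:maxIter
    x = F \ x;         # x ← (B - ρI)⁻¹x
    x = x/norm(x,1);
end
println("Inverse power Fiedler: ", x); # should match task f up to sign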