# Eli Simhayev - 2017
# Question 4
I = eye(10);
# ==== task a ====
rowIdx = [1,1,1,2,2,2,3,3,3,3,4,4,4,4,4,5,5,5,5,5,6,6,6,6,7,7,7,7,7,7,7,8,8,8,8,8,9,9,9,10,10,10,10,10];
colIdx = [1,2,3,1,2,3,1,2,3,4,3,4,5,7,10,4,5,6,7,8,5,6,7,8,4,5,6,7,8,9,10,5,6,7,8,10,7,9,10,4,7,8,9,10];
vals = [2,-1,-1,-1,2,-1,-1,-1,3,-1,-1,4,-1,-1,-1,-1,4,-1,-1,-1,-1,3,-1,-1,-1,-1,-1,6,-1,-1,-1,-1,-1,-1,4,-1,-1,2,-1,-1,-1,-1,-1,4];
A = sparse(rowIdx,colIdx,vals);
A = full(A);
#=
10×10 Array{Int64,2}:
  2 -1 -1  0  0  0  0  0  0  0
 -1  2 -1  0  0  0  0  0  0  0
 -1 -1  3 -1  0  0  0  0  0  0
  0  0 -1  4 -1  0 -1  0  0 -1
  0  0  0 -1  4 -1 -1 -1  0  0
  0  0  0  0 -1  3 -1 -1  0  0
  0  0  0 -1 -1 -1  6 -1 -1 -1
  0  0  0  0 -1 -1 -1  4  0 -1
  0  0  0  0  0  0 -1  0  2 -1
  0  0  0 -1  0  0 -1 -1 -1  4
as we desired...
=#
# ==== task b ====
#=
Claim: ρ(A) <= ||A|| for any matrix A and any norm || * ||
Proof: proved in question 2 - Lemma 1.
||A||₁ is the maximum absolute column sum norm; here it is
attained by column 7 and equals 12.
=#
p = norm(A,1); # induced 1-norm = 12
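As a quick numerical sanity check of the claim, the inequality ρ(M) <= ||M||₁ can be verified on any matrix. This is a sketch in current Julia syntax (`opnorm` from LinearAlgebra gives the induced 1-norm that the older `norm(A,1)` computed for matrices); the small symmetric matrix `M` below is illustrative, not the A above:

```julia
using LinearAlgebra

# Spectral radius ρ(M) = max |λᵢ|; the claim is ρ(M) <= ||M|| for any induced norm.
M = [2.0 -1.0 0.0; -1.0 3.0 -1.0; 0.0 -1.0 2.0]  # small symmetric example
specrad = maximum(abs.(eigvals(M)))              # here: eigenvalues are 1, 2, 4
@assert specrad <= opnorm(M, 1)                  # ||M||₁ = max absolute column sum = 5
```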
# ==== task c ====
#=
Claim: v is an eigenvector for λmax of ρI-A  <=>  v is an eigenvector for λmin of A
Proof: Let (v,λ) be an eigenpair of A such that
v is the eigenvector with the maximal eigenvalue of ρI-A. Then:
(ρI-A)v = (ρ-λ)v
ρv - Av = ρv - λv
Av = λv
λ is the minimal eigenvalue of A:
Assume γ is an eigenvalue of A such that γ < λ.
Then ρ-λ < ρ-γ because 0 <= λ <= ρ, which is a contradiction, since
ρ-λ is the maximal eigenvalue of ρI-A.
=#
max_val, max_vec = eigs(p*I-A, which=:LM, nev=1); # maximal eigenvalue of ρI-A
min_val, min_vec = eigs(A, which=:SM, nev=1); # minimal eigenvalue of A
println("max_vec: ", max_vec); # max_vec: [0.316228; ... ;0.316228; 0.316228]
println("min_vec: ", min_vec); # min_vec: [0.316228; ... ;0.316228; 0.316228]
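The identity behind this claim, λmax(ρI−M) = ρ − λmin(M), can also be checked directly on a small symmetric example. A sketch in current Julia (the 2×2 matrix `M` is illustrative):

```julia
using LinearAlgebra

# For a symmetric M, (ρI - M)v = (ρ - λ)v for every eigenpair (λ, v) of M,
# so the largest eigenvalue of ρI - M is exactly ρ - λmin(M).
M = [2.0 -1.0; -1.0 2.0]           # illustrative symmetric matrix, eigenvalues 1 and 3
ρ = opnorm(M, 1)                   # induced 1-norm, an upper bound on the spectral radius
λmin = minimum(eigvals(M))
λmax_shifted = maximum(eigvals(ρ*I - M))
@assert isapprox(λmax_shifted, ρ - λmin; atol=1e-10)
```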
# ==== task d ====
#=
I will show that A*[1,1,...,1]ᵀ = λmin*[1,1,...,1]ᵀ.
Denote the i-th row of A by Rᵢ and let v = [1,1,...,1]ᵀ.
(Av)ᵢ = Rᵢ·v = the sum of the entries in row i.
A is a Laplacian matrix, so every row sum and column sum of A is zero:
in each row, the degree of the vertex is summed with a "−1" for each neighbor.
Hence:
Av = [0,0,...,0]ᵀ = 0v,
so v is an eigenvector corresponding to the eigenvalue 0.
A is positive semidefinite, so every eigenvalue satisfies λ >= 0.
Conclusion: v is an eigenvector of A with the smallest eigenvalue, as we desired.
=#
println("Av: ", A*ones(10)); # Av = [0.0,0.0,...,0.0]
println("λmin*v: ", min_val[1]*ones(10)); # λmin*v = [1.77636e-16,1.77636e-16,...,1.77636e-16]
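The zero-row-sum argument holds for any graph Laplacian L = D − W, which can be sketched independently of the A above (the triangle adjacency matrix `W` below is purely illustrative):

```julia
using LinearAlgebra

# Laplacian L = D - W: each diagonal entry is the vertex degree,
# each off-diagonal -1 marks a neighbor, so every row sums to zero.
W = [0 1 1; 1 0 1; 1 1 0]                    # adjacency matrix of a triangle
L = Diagonal(vec(sum(W, dims=2))) - W        # degree matrix minus adjacency
@assert L * ones(3) == zeros(3)              # the ones vector is in the nullspace
```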
# ==== task e ====
#=
x⁽ᴷ⁺¹⁾ = (ρI-A)*x⁽ᴷ⁾
=#
maxIter = 100;
x = 0.25*ones(10);
for k=1:maxIter
    x = (p*I-A)*x;
    x = x/norm(x,1);
end
println("x: ", x); # x: [0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1]
# x = 0.1*[1,1,...,1], i.e. the constant vector up to scaling.
# ==== task f ====
#=
x⁽ᴷ⁺¹⁾ = (ρ(I-vvᵀ/vᵀv)-A)*x⁽ᴷ⁾
=#
x = 0.25*ones(10);
v = ones(10); vvT = v*transpose(v);
for k=1:maxIter
    x = (p*(I-vvT/dot(v,v))-A)*x;
    x = x/norm(x,1);
end
println("Fiedler Vector: ", x);
# Fiedler Vector: [0.185656,0.185656,0.128687,-0.0247382,-0.0724213,-0.0853326,-0.0747033,-0.0826887,-0.0871913,-0.0729246]
# In other words, the partition is {1,2,3} and {4,5,6,7,8,9,10}.
# This result makes sense, since these two groups are connected by only one edge,
# unlike any other pair of subgroups in the given graph.
# Why the method converges to the eigenvector with the
# second smallest eigenvalue of A:
# Intuitively, ρI-A and ρ(I-vvᵀ/vᵀv)-A have the same eigenvectors except for v.
# In ρ(I-vvᵀ/vᵀv)-A, v corresponds to the eigenvalue 0 (because of the projection I-vvᵀ/vᵀv),
# so it is no longer the eigenvector with the maximal eigenvalue.
# Hence the method converges to the eigenvector with the *second largest* eigenvalue of ρI-A,
# which is the eigenvector with the second smallest eigenvalue of A (by tasks c & d).
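The deflation claim — that B := ρ(I−vvᵀ/vᵀv)−A sends v to the eigenvalue 0 while shifting every other eigenvalue λᵢ of A to ρ−λᵢ — can be verified on a small Laplacian. A sketch in current Julia (the 3-vertex path Laplacian `M` is illustrative, not the A above):

```julia
using LinearAlgebra

# Deflation: B*v = ρ(v - v) - M*v = 0, while any eigenvector u ⟂ v of M with
# eigenvalue λ satisfies B*u = (ρ - λ)u, so the max eigenvalue of B is ρ - λ₂.
M = [1.0 -1.0 0.0; -1.0 2.0 -1.0; 0.0 -1.0 1.0]   # Laplacian of the path 1-2-3
v = ones(3)
ρ = opnorm(M, 1)
B = ρ*(I - v*v'/dot(v, v)) - M
@assert norm(B * v) < 1e-10                        # v now has eigenvalue 0
λ = sort(eigvals(M))                               # λ = [0, 1, 3] for this path
@assert isapprox(maximum(eigvals(Symmetric(B))), ρ - λ[2]; atol=1e-8)
```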
# ==== task g ====
#=
Let B := ρ(I-vvᵀ/vᵀv)-A and define:
x⁽ᴷ⁺¹⁾ = (B-ρI)⁻¹*x⁽ᴷ⁾
As we learned in class, the inverse power method finds an approximate eigenvector
corresponding to a given shift μ. In our case, μ = ρ and the matrix is B.
ρ is closest to the maximal eigenvalue of B, since it is
bigger than every eigenvalue of B.
So the method returns the eigenvector of B with the maximal eigenvalue, i.e. the
eigenvector with the second smallest eigenvalue of A, as we desired.
=#
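The iteration described above can be sketched in current Julia on the same kind of small illustrative Laplacian. The function name `fiedler_inverse_power` and the normalization step are additions (the update formula above omits normalization, which is needed in practice to avoid overflow); the matrix is factored once and reused each iteration:

```julia
using LinearAlgebra

# Shifted inverse iteration: x ← (B - ρI)⁻¹x converges to the eigenvector of B
# whose eigenvalue is closest to the shift ρ — here B's maximal one, ρ - λ₂.
function fiedler_inverse_power(M, v; iters=50)
    ρ = opnorm(M, 1)
    B = ρ*(I - v*v'/dot(v, v)) - M       # deflated matrix from task f
    F = lu(B - ρ*I)                      # factor once, reuse every iteration
    x = normalize(rand(length(v)))
    for k in 1:iters
        x = F \ x
        x = x / norm(x)                  # normalize to avoid under/overflow
    end
    return x
end

M = [1.0 -1.0 0.0; -1.0 2.0 -1.0; 0.0 -1.0 1.0]  # Laplacian of the path 1-2-3
x = fiedler_inverse_power(M, ones(3))
λ = sort(eigvals(M))
@assert norm(M*x - λ[2]*x) < 1e-8        # x is (up to sign) the Fiedler vector
```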