pszemraj

K-means: the elbow letter - summarized by ChatGPT

Jan 20th, 2023
The paper discusses the problem of choosing the parameter k, the number of clusters, in k-means clustering. The author argues that the commonly used "elbow method" for determining the number of clusters lacks theoretical support and that better alternatives exist, and briefly describes the k-means algorithm, its complexity, and some proposed improvements. The author encourages educators to teach alternatives to the elbow method, and researchers and reviewers to reject conclusions drawn from it.
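
To make the critique concrete, here is a minimal sketch of the elbow heuristic itself, using scikit-learn's KMeans on synthetic data. The dataset, the range of k, and the random seeds are illustrative assumptions, not taken from the paper.

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Toy data with a known number of clusters (illustrative only).
X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

# Fit k-means for a range of k and record the SSE (scikit-learn's inertia_).
sse = {}
for k in range(1, 11):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    sse[k] = km.inertia_  # sum of squared distances to the nearest center

for k, v in sse.items():
    print(f"k={k:2d}  SSE={v:10.1f}")

# The "elbow" is then read off this curve by eye -- precisely the step
# the paper argues is ill-defined and lacks theoretical support.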

The paper then critiques the elbow method in more detail, arguing that it is not useful and that better alternatives should be used. Instead of plotting the raw sum of squared errors (SSE), the author suggests plotting the square root of the SSE, normalized by dividing by the number of points, or using a variation of the SSE that is more appropriate in this context, such as SSE/(N-k).
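
The following sketch computes these rescalings alongside the raw SSE, under the assumption that "the square root of the SSE, normalized by the number of points" means sqrt(SSE/n); the data and the range of k are again placeholders.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)
n = X.shape[0]

for k in range(1, 11):
    sse = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
    rmsd = np.sqrt(sse / n)   # one reading of "sqrt of SSE, divided by the number of points"
    adjusted = sse / (n - k)  # the SSE/(N-k) variant, penalizing larger k
    print(f"k={k:2d}  SSE={sse:9.1f}  sqrt(SSE/n)={rmsd:6.3f}  SSE/(n-k)={adjusted:8.3f}")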

The paper also presents a table that compares the "optimal" k chosen by different heuristics on different datasets. The results obtained by the elbow-based methods are poor, while other methods, such as the L-Method, Kneedle, Curvature, Pyclustering, and the Silhouette criterion, perform better.

The paper also discusses alternative methods for choosing the number of clusters in k-means clustering. The author introduces distance-based criteria, simulation-based criteria, and information-theoretic criteria. The distance-based criteria include the Dunn index, the Davies-Bouldin index, and the silhouette width measure; the simulation-based criteria include the Gap statistic; and the information-theoretic criteria include the Bayesian Information Criterion (BIC). The Variance Ratio Criterion (VRC, also known as the Calinski-Harabasz index) is discussed as well. The author suggests that instead of the elbow method one should use one of these alternatives to choose k, and pay attention to how the data is prepared before clustering, as "garbage in, garbage out" applies. The author also notes that k-means is closely related to k-medoids clustering and the PAM algorithm, and that k-means assumes that errors are invariant across the data space, which is not always the case.
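
As one example of the recommended alternatives, the following sketch picks k by maximizing the mean silhouette width via scikit-learn's silhouette_score; the data and the candidate range of k are assumptions for illustration.

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

best_k, best_score = None, -1.0
for k in range(2, 11):  # the silhouette is undefined for k=1
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    print(f"k={k:2d}  mean silhouette={score:.3f}")
    if score > best_score:
        best_k, best_score = k, score

print(f"chosen k={best_k} (mean silhouette={best_score:.3f})")

Criteria such as BIC or the Gap statistic can be slotted into the same loop; the point is that the selection rule becomes explicit rather than being read off a curve by eye.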