12  Summarising and comparing clustering results

12.1 Summarising results

The key elements for summarising cluster results are the centres of the clusters and the within-cluster variability of the observations. Adding cluster means to any plot, including tour plots, is easy: you add the means as additional rows, or as a separate data set, and give them a distinct point shape.
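For example, with the Ward's linkage clusters of the penguins data computed in the code below, a minimal sketch might look like this (it assumes the code below has been run, so that penguins_cl exists and tourr is loaded; the object names p_means, p_aug and p_pch are ours):

# Sketch only: append the cluster means as extra rows and mark them
# with a different plotting symbol in a grand tour
p_means <- penguins_cl %>%
  group_by(cl_w) %>%
  summarise(across(bl:bm, mean))
p_aug <- bind_rows(penguins_cl %>% select(bl:bm, cl_w), p_means)
p_pch <- c(rep(20, nrow(penguins_cl)),  # small dots for observations
           rep(3, nrow(p_means)))       # crosses for the cluster means
animate_xy(p_aug[,1:4], col = p_aug$cl_w, pch = p_pch)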

Summarising the variability is more difficult. For model-based clustering, the shape of the clusters is assumed to be elliptical, so \(p\)-dimensional ellipses can be used to show the solution, as done in Chapter 10. More generally, it is common to plot the convex hull of each cluster, as in Figure 12.1. This can also be done in high dimensions, using the R package cxhull to compute the \(p\)-D convex hull.

Code to do clustering
library(mclust) 
library(tidyr)
library(dplyr)
library(gt)
library(cxhull)
library(ggplot2)
library(colorspace)
library(tourr)    # animate_xy() and render_gif() used below
load("data/penguins_sub.rda")
p_dist <- dist(penguins_sub[,1:4])
p_hcw <- hclust(p_dist, method="ward.D2")

p_cl <- data.frame(cl_w = cutree(p_hcw, 3))

penguins_mc <- Mclust(penguins_sub[,1:4], 
                      G=3, 
                      modelNames = "EEE")
p_cl <- p_cl %>% 
  mutate(cl_mc = penguins_mc$classification)

p_cl <- p_cl %>% 
  mutate(cl_w_j = jitter(cl_w),
         cl_mc_j = jitter(cl_mc))

# Arranging by cluster id is important to define edges 
penguins_cl <- penguins_sub %>%
  mutate(cl_w = p_cl$cl_w,
         cl_mc = p_cl$cl_mc) %>%
  arrange(cl_w)
Code for convex hulls in 2D
# Penguins in 2D
# Duplicate observations need to be removed for the convex hull calculation
psub <- penguins_cl %>%
  select(bl, bd) 
dup <- duplicated(psub)
psub <- penguins_cl %>%
  select(bl, bd, cl_w) %>%
  filter(!dup) %>%
  arrange(cl_w)

ncl <- psub %>%
  count(cl_w) %>%
  arrange(cl_w) %>%
  mutate(cumn = cumsum(n))
phull <- NULL
for (i in unique(psub$cl_w)) {
  x <- psub %>%
    dplyr::filter(cl_w == i) %>%
    select(bl, bd) 
  ph <- cxhull(as.matrix(x))$edges
  if (i > 1) {
    ph <- ph + ncl$cumn[i-1]
  }
  ph <- cbind(ph, rep(i, nrow(ph)))
  phull <- rbind(phull, ph)
}
phull <- as.data.frame(phull)
colnames(phull) <- c("from", "to", "cl_w") 
phull_segs <- data.frame(x = psub$bl[phull$from],
                         y = psub$bd[phull$from],
                         xend = psub$bl[phull$to],
                         yend = psub$bd[phull$to],
                         cl_w = phull$cl_w)
phull_segs$cl_w <- factor(phull$cl_w) 
psub$cl_w <- factor(psub$cl_w)
p_chull2D <- ggplot() +
  geom_point(data=psub, aes(x=bl, y=bd, 
                            colour=cl_w)) + 
  geom_segment(data=phull_segs, aes(x=x, xend=xend,
                                    y=y, yend=yend,
                                    colour=cl_w)) +
  scale_colour_discrete_divergingx(palette = "Zissou 1") +
  theme_minimal() +
  theme(aspect.ratio = 1)
Code to generate p-D convex hull and view in tour
ncl <- penguins_cl %>%
  count(cl_w) %>%
  arrange(cl_w) %>%
  mutate(cumn = cumsum(n))
phull <- NULL
for (i in unique(penguins_cl$cl_w)) {
  x <- penguins_cl %>%
    dplyr::filter(cl_w == i) 
  ph <- cxhull(as.matrix(x[,1:4]))$edges
  if (i > 1) {
    ph <- ph + ncl$cumn[i-1]
  }
  ph <- cbind(ph, rep(i, nrow(ph)))
  phull <- rbind(phull, ph)
}
phull <- as.data.frame(phull)
colnames(phull) <- c("from", "to", "cl_w") 
phull$cl_w <- factor(phull$cl_w)
penguins_cl$cl_w <- factor(penguins_cl$cl_w)

animate_xy(penguins_cl[,1:4], col=penguins_cl$cl_w,
           edges=as.matrix(phull[,1:2]), edges.col=phull$cl_w)
render_gif(penguins_cl[,1:4], 
           tour_path = grand_tour(),
           display = display_xy(col=penguins_cl$cl_w,
                                edges=as.matrix(phull[,1:2]),
                                edges.col=phull$cl_w),
           gif_file = "gifs/penguins_chull.gif",
           frames = 500, 
           width = 400,
           height = 400)
Figure 12.1: Convex hulls summarising the extent of the Ward's linkage clusters, in (a) 2D and (b) 4D.

12.2 Comparing two clusterings

Each cluster analysis results in a vector of class labels for the data. To compare two results we tabulate the pair of labels, and plot them as a pair of (jittered) integer variables. The labels assigned to the clusters will likely differ between methods, even when the groupings are essentially the same. If the two methods agree, there will be just a few cells with large counts among mostly empty cells.
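A quick way to show the table graphically is to scatterplot the two label vectors after jittering them, so that observations falling in the same cell do not overplot. A minimal sketch, using the p_cl data frame with jittered labels (cl_w_j, cl_mc_j) created in the clustering code above:

# Sketch only: plot the pair of (jittered) cluster labels
ggplot(p_cl, aes(x = cl_w_j, y = cl_mc_j)) +
  geom_point(alpha = 0.5) +
  labs(x = "Ward's linkage", y = "model-based") +
  theme_minimal() +
  theme(aspect.ratio = 1)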

Below is a comparison of the three-cluster results of Ward's linkage hierarchical clustering (rows) and model-based clustering (columns). The two methods mostly agree, as seen from the three cells with large counts and the mostly zero counts elsewhere. They disagree on only eight penguins: Ward's linkage places these eight in its cluster 1, while model-based clustering places them in its cluster 2.

The two methods also label the clusters differently: what Ward's linkage calls cluster 3, model-based clustering calls cluster 2, and vice versa. The labels given by any algorithm are arbitrary, and can easily be recoded to match between methods.
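For example, the model-based labels could be recoded to match the Ward's linkage labels. A minimal sketch (the column name cl_mc_matched is ours):

# Sketch only: swap model-based labels 2 and 3 so that agreeing
# clusters share the same label as Ward's linkage
p_cl <- p_cl %>%
  mutate(cl_mc_matched = case_when(cl_mc == 2 ~ 3,
                                   cl_mc == 3 ~ 2,
                                   TRUE ~ cl_mc))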

Code for confusion table
p_cl %>% 
  count(cl_w, cl_mc) %>% 
  pivot_wider(names_from = cl_mc, 
              values_from = n, 
              values_fill = 0) %>%
  gt() %>%
  tab_spanner(label = "cl_mc", columns=c(`2`, `3`, `1`)) %>%
  cols_width(everything() ~ px(60))
              cl_mc
cl_w      2      3      1
1         8      0    149
2         0    119      0
3        57      0      0

We can examine the disagreement by linking a plot of the table with a tour plot. Here is how to do this with liminal. Figure 12.2 and Figure 12.3 show screenshots of the exploration of the eight penguins on which the methods disagree. It makes sense that there is some confusion: these penguins are part of the large clump of observations that does not separate cleanly into two clusters, and the eight penguins are in the middle of this clump. Realistically, both methods produce a plausible clustering, and it is not clear how these penguins should be grouped.

Code to do linked brushing with liminal
library(liminal)
limn_tour_link(
  p_cl[,3:4],
  penguins_cl,
  cols = bl:bm,
  color = cl_w
)
Figure 12.2: Linking the confusion table with a tour using liminal. Points are coloured according to the Ward's linkage clusters. The disagreement is on eight penguins, assigned to cluster 1 by Ward's linkage and cluster 2 by model-based clustering.
Figure 12.3: Highlighting the penguins where the methods disagree so we can see where these observations are located relative to the two clusters.
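If running the interactive tool is not convenient, a rough non-interactive alternative is to flag the disagreeing penguins and view them with tourr. A minimal sketch (the flag variable p_disagree is ours):

# Sketch only: highlight the eight penguins where the methods disagree
p_disagree <- penguins_cl$cl_w == 1 & penguins_cl$cl_mc == 2
animate_xy(penguins_cl[,1:4],
           col = factor(ifelse(p_disagree, "disagree", "agree")),
           pch = ifelse(p_disagree, 17, 20))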

Exercises

  1. Compare the results of the four-cluster model-based clustering with those of the four-cluster Ward's linkage clustering of the penguins data. (A starting sketch is given after this list.)
  2. Compare the results from clustering of the fake_trees data for two different choices of \(k\). (This follows from the exercise in Chapter 9.) Which choice of \(k\) is best? And what choice of \(k\) best captures the 10 known branches?
  3. Compare and contrast the cluster solutions for the first four PCs of the aflw data, conducted in Chapter 8 and Chapter 9. Which provides the most useful clustering of this data?
  4. Pick two clusterings of one of the challenge data sets (c1-c7 from the mulgar package) that give very different results. Compare and contrast the two solutions, and decide which is the better one.
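A starting sketch for the first exercise, reusing p_hcw and penguins_sub from this chapter (the object names p_cl4 and p_mc4 are ours):

# Sketch only: four-cluster solutions and their confusion table
p_cl4 <- data.frame(cl_w = cutree(p_hcw, 4))
p_mc4 <- Mclust(penguins_sub[,1:4], G = 4)
p_cl4 <- p_cl4 %>%
  mutate(cl_mc = p_mc4$classification)
p_cl4 %>%
  count(cl_w, cl_mc) %>%
  pivot_wider(names_from = cl_mc,
              values_from = n,
              values_fill = 0)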

Project

Most of the time your data will not neatly separate into clusters, but partitioning it into groups of similar observations can still be useful. In this case our toolbox is useful for comparing and contrasting different methods, understanding to what extent a cluster mean can describe the observations in the cluster, and seeing how the boundaries between clusters have been drawn. To explore this we will use survey data that examines the risk-taking behaviour of tourists. The data was collected in Australia in 2015 (Hajibaba et al., 2016) and includes six types of risks (recreational, health, career, financial, safety and social) with responses on a scale from 1 (never) to 5 (very often). You can download the data from risk_MSA.rds.
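A minimal sketch for reading the data and taking a first look in a tour (assuming the file has been saved to the working directory and the tourr package is loaded; the object name risk is ours):

# Sketch only: read the survey responses and view them in a grand tour
# (the six risk variables are assumed to be the only columns)
risk <- readRDS("risk_MSA.rds")
animate_xy(risk)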

  1. We first examine the data in a grand tour. Do you notice that each variable was measured on a discrete scale?
  2. Next we explore different solutions from hierarchical clustering of the data. For comparison we will keep the number of clusters fixed at 6, and perform the hierarchical clustering with different combinations of distance function (Manhattan and Euclidean) and linkage (single, complete and Ward). Which combinations make sense based on what we know about the method and the data? (A starting sketch is given after this list.)
  3. For each of the hierarchical clustering solutions draw the dendrogram in 2D and also in the data space. You can also map the grouping into 6 clusters to different colours. How would you describe the different solutions?
  4. Using the method introduced in this chapter, compare the solution using Manhattan distance and complete linkage to the one using Euclidean distance and Ward linkage. First compute a confusion table, and then use liminal to explore some of the differences. For example, you should be able to see how small subsets on which the two clustering solutions disagree can be outlying, and are grouped differently depending on the choices we make.
  5. Selecting your preferred solution from hierarchical clustering, we will now compare it to what is found using \(k\)-means clustering with \(k=6\). Use a tour to show the cluster means together with the data points (make sure to pick an appropriate symbol for the data points to avoid too much overplotting). What can you say about the variation within the clusters? Can you match some of the clusters with the most relevant variables from following the movement of the cluster means during the tour?
  6. Use a projection pursuit guided tour to best separate the clusters identified with \(k\)-means clustering. How are the clusters related to the different types of risk?
  7. Use the approaches from this chapter to summarize and compare the \(k\)-means solution to your selected hierarchical clustering results. Are the groupings mostly similar? You can also use convex hulls to better compare what part of the space is occupied. Either look at subsets (selected from the liminal display) or you could facet the display using tourr::animate_groupxy.
  8. Some other possible activities include examining how model-based methods would cluster the data. We expect the result to be similar to Ward's hierarchical clustering or \(k\)-means, in that it will partition the data into roughly equal chunks, with an EII variance-covariance model being optimal. Another option is to examine an SOM fit. SOM is not ideal for this data because the observations fill the space. If the SOM model is fitted properly, it should be a tangled net with the nodes (cluster means) spread fairly evenly through the data, so the result should again be similar to Ward's hierarchical clustering or \(k\)-means. A common problem when fitting an SOM is that the optimisation stops early, before the net fully captures the data; this is the reason to use the tour to check an SOM fit. If the net is bunched in one part of the data space, the optimisation was not successful.
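A starting sketch for project item 2, using the risk data read in the sketch above (the object names are ours; the remaining distance/linkage combinations follow the same pattern):

# Sketch only: six-cluster hierarchical solutions for two
# distance/linkage combinations
r_dist_man <- dist(risk, method = "manhattan")
r_dist_euc <- dist(risk, method = "euclidean")
r_hc_cman <- hclust(r_dist_man, method = "complete")
r_hc_weuc <- hclust(r_dist_euc, method = "ward.D2")
r_cl <- data.frame(cl_cman = cutree(r_hc_cman, 6),
                   cl_weuc = cutree(r_hc_weuc, 6))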