Deep Learning for Clustering

Model

FCN

CNN

AutoEncoder

GAN/VAE

Loss functions

Principal Clustering Loss

Auxiliary Clustering Loss

Metrics

ACC (unsupervised clustering accuracy)

Normalized Mutual Information (NMI)
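Both metrics compare cluster assignments against ground-truth labels; ACC first finds the best one-to-one mapping between cluster indices and label indices (Hungarian algorithm). A minimal sketch using scipy and scikit-learn:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """ACC: accuracy under the best one-to-one relabeling of clusters."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    k = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                       # co-occurrence counts
    row, col = linear_sum_assignment(-cost)   # maximize matched pairs
    return cost[row, col].sum() / y_true.size

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [1, 1, 0, 0, 2, 2]                   # cluster ids permuted but correct
acc = clustering_accuracy(y_true, y_pred)     # 1.0 after relabeling
nmi = normalized_mutual_info_score(y_true, y_pred)
```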

Training strategy

Loss = network (e.g. reconstruction) loss + λ · clustering loss

Minimize reconstruction loss

Robustness: Adding noise

Restrictions on latent features
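The strategy above can be sketched in numpy, assuming a linear autoencoder, Gaussian input noise for robustness, a k-means-style distance term on the latent features, and an illustrative weight λ = 0.1 (all hypothetical choices, not the settings of any specific paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))            # data
W_enc = rng.normal(size=(20, 5)) * 0.1    # encoder weights (linear AE sketch)
W_dec = rng.normal(size=(5, 20)) * 0.1    # decoder weights
mu = rng.normal(size=(3, 5))              # K=3 cluster centers in latent space

X_noisy = X + 0.1 * rng.normal(size=X.shape)    # robustness: add noise to input
Z = X_noisy @ W_enc                             # latent features
X_hat = Z @ W_dec
rec_loss = np.mean((X - X_hat) ** 2)            # reconstruct the clean input

d = ((Z[:, None, :] - mu[None]) ** 2).sum(-1)   # squared distances to centers
clust_loss = d.min(axis=1).mean()               # restriction on latent features

lam = 0.1
total = rec_loss + lam * clust_loss             # joint network + clustering loss
```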

Architecture

DCN: AutoEncoder + KMeans (for clustering loss)

DEN

Deep Subspace Clustering
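DCN alternates network updates with k-means-style updates on the latent codes. A sketch of just the assignment and center updates on fixed codes Z (synthetic two-cluster data, K = 2; the autoencoder update step is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic latent codes: two well-separated groups of 50 points in 5-D
Z = np.vstack([rng.normal(c, 0.3, size=(50, 5)) for c in (0.0, 3.0)])
mu = Z[rng.choice(len(Z), 2, replace=False)]    # init K=2 centers from data

for _ in range(10):  # alternating k-means step on the latent space
    d = ((Z[:, None, :] - mu[None]) ** 2).sum(-1)
    s = d.argmin(axis=1)                        # assignment update
    mu = np.stack([Z[s == k].mean(axis=0) if (s == k).any() else mu[k]
                   for k in range(2)])          # center update

# k-means clustering loss on the latent codes
clust_loss = ((Z[:, None, :] - mu[None]) ** 2).sum(-1).min(axis=1).mean()
```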

CDNN

Core concept: only clustering loss for training

Initialization methods to avoid overfitting

Architectures

Deep Embedded Clustering

Pretrain an autoencoder using reconstruction loss

Train the encoder using the cluster assignment hardening loss
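DEC's cluster assignment hardening loss can be sketched in numpy: soft assignments q from a Student's t kernel (α = 1), a sharpened target distribution p, and a KL loss between them. Here Z and μ are random stand-ins for the pretrained encoder outputs and initialized centers:

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.normal(size=(8, 4))     # encoder outputs (pretrained AE assumed)
mu = rng.normal(size=(3, 4))    # K=3 cluster centers

# Soft assignment: Student's t similarity between codes and centers
d2 = ((Z[:, None, :] - mu[None]) ** 2).sum(-1)
q = 1.0 / (1.0 + d2)
q /= q.sum(axis=1, keepdims=True)

# Target distribution sharpens q: p_ik ∝ q_ik^2 / sum_i q_ik
w = q ** 2 / q.sum(axis=0)
p = w / w.sum(axis=1, keepdims=True)

# Hardening loss: KL(p || q), minimized w.r.t. encoder params and mu
kl = (p * np.log(p / q)).sum(axis=1).mean()
```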

VAE-based Deep Clustering

Unsupervised Fine-tuning for Text Clustering

Settings

Loss = masked language model loss (Lm) + clustering loss (Lc)

Text dataset X with n samples

Number of clusters K; cluster centers μ

Encoder f(θ): maps X to embeddings Z with better clustering properties

Trainable parameters: θ and μ

Lm: BERT masked language model loss

Lc: KL divergence

Average pooling of all hidden states

More details in section 2.2
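The joint objective can be sketched as below. The random hidden_states array and the fixed mlm_loss value are stand-ins for real BERT outputs and the actual masked-language-model loss, the shapes are hypothetical, and λ = 1 is an illustrative weight:

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-in for BERT outputs on a batch of n=4 texts:
# 13 hidden-state layers (embeddings + 12 blocks), 16 tokens, hidden size 32
hidden_states = rng.normal(size=(13, 4, 16, 32))
mlm_loss = 2.5                          # placeholder for Lm from the model

# Embeddings: average pooling over all hidden states (layers and tokens)
z = hidden_states.mean(axis=(0, 2))     # -> (4, 32)

# Clustering loss Lc: DEC-style soft assignment q, sharpened target p, KL
mu = z[:3].copy()                       # K=3 centers (e.g. k-means-initialized)
d2 = ((z[:, None, :] - mu[None]) ** 2).sum(-1)
q = 1.0 / (1.0 + d2)
q /= q.sum(axis=1, keepdims=True)
w = q ** 2 / q.sum(axis=0)
p = w / w.sum(axis=1, keepdims=True)
lc = (p * np.log(p / q)).sum(axis=1).mean()

lam = 1.0
loss = mlm_loss + lam * lc              # joint objective Lm + λ·Lc
```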

Training

Metrics: clustering purity (Manning et al., 2008)

Hyperparameters: section 3.2