Experiment
- Ablation study
  - Benefit of the aggregator (fixed features and scale)
    - On WSI datasets
      - May need to use contrastive-learning features, since end-to-end training is prohibitive
    - On classical MIL datasets
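The aggregator ablation above compares pooling strategies over fixed patch features. As a concrete reference point, a minimal attention-based MIL aggregator (ABMIL-style) can be sketched in NumPy; the function name and weight shapes here are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def attention_aggregate(instance_feats, W, v):
    """ABMIL-style attention pooling over a bag of fixed instance features.

    instance_feats: (n, d) patch features from a frozen encoder
    W: (d, h) hidden projection, v: (h,) attention vector (illustrative shapes)
    Returns a (d,) bag-level embedding.
    """
    scores = np.tanh(instance_feats @ W) @ v        # (n,) raw attention logits
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                            # softmax over instances
    return alpha @ instance_feats                   # attention-weighted mean

# Usage: aggregate a toy bag of 5 patches with 8-d features
rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))
bag = attention_aggregate(feats, rng.normal(size=(8, 4)), rng.normal(size=4))
```

Because the encoder is frozen, only `W` and `v` would be trained, which is what makes this ablation feasible on WSI data.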
  - Benefit of contrastive learning
    - Decide whether to include all models; these results should stress that contrastive learning is generally beneficial to MIL models
    - The model with and without contrastive learning
  - Our model with and without multiscale
    - Compare with a baseline multiscale method?
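For the contrastive-learning ablation, one common choice of objective is a SimCLR-style NT-Xent loss over two augmented views of each patch; this is an assumption for illustration, since the outline does not name the specific contrastive objective:

```python
import numpy as np

def nt_xent_loss(z1, z2, tau=0.5):
    """SimCLR-style NT-Xent loss for two views of the same n patches.

    z1, z2: (n, d) embeddings of two augmentations; tau: temperature.
    """
    z = np.concatenate([z1, z2])                        # (2n, d)
    z /= np.linalg.norm(z, axis=1, keepdims=True)       # cosine similarity
    sim = z @ z.T / tau
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                      # drop self-pairs
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_denom = np.log(np.exp(sim).sum(axis=1))
    return float((log_denom - sim[np.arange(2 * n), pos]).mean())

# Aligned views should score a lower loss than unrelated embeddings
rng = np.random.default_rng(1)
z = rng.normal(size=(16, 32))
aligned = nt_xent_loss(z, z + 0.01 * rng.normal(size=z.shape))
unrelated = nt_xent_loss(z, rng.normal(size=z.shape))
```

The pretrained encoder minimizing such a loss is then frozen, and only the MIL head is trained on bags.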
- Results on two WSI datasets
  - Results on Camelyon16
    - Dataset description
      - Unbalanced class distribution
    - Baseline
      - May need to include contrastive-learning features, since end-to-end training is prohibitive
      - Traditional operators + NN-based operators
      - Whether or not to control the multiscale representation
    - Classification results
    - Localization results
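The "traditional operators" in the baseline list above usually refer to the two classical MIL pooling rules, max and mean over per-instance scores, which an NN-based aggregator would replace with a learned operator. A minimal sketch (the function names are ours):

```python
import numpy as np

def mil_max(instance_scores):
    """Classical MIL max operator: the bag is positive iff any instance is."""
    return float(np.max(instance_scores))

def mil_mean(instance_scores):
    """Mean operator: smooth aggregate of instance-level evidence."""
    return float(np.mean(instance_scores))

# Toy bag: one strongly positive patch among mostly negative ones
scores = np.array([0.05, 0.10, 0.92, 0.08])
bag_max, bag_mean = mil_max(scores), mil_mean(scores)
# bag_max = 0.92, bag_mean = 0.2875
```

On an unbalanced dataset like Camelyon16, max pooling tends to favor localization of the single most suspicious patch, while mean pooling dilutes rare tumor evidence, which motivates comparing both against learned operators.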