Human vs ML -- Medical Brainstorming
Goal definition
Overall idea of what we want to show, or questions that could be answered
Are ML methods better than radiologists?
How do ML methods and humans perform under domain shifts?
Basically, testing robustness.
What are the error modes of humans vs. the error modes of ML methods?
Do they overlap, or are they disjoint?
Variance between humans and ML methods (see the sketch below)
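A minimal sketch (an assumed setup, not from the source) of how this variance comparison could be quantified: mean pairwise Dice agreement among radiologists' segmentations vs. among predictions of an ML ensemble, where lower agreement means higher variance. The array shapes and random data are hypothetical placeholders.

```python
# Hypothetical sketch: compare rater variance vs. model variance via
# mean pairwise Dice agreement on binary segmentation masks.
import itertools
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total > 0 else 1.0

def mean_pairwise_dice(masks: np.ndarray) -> float:
    """Mean Dice over all unordered pairs; lower = more variance."""
    pairs = itertools.combinations(range(len(masks)), 2)
    return float(np.mean([dice(masks[i], masks[j]) for i, j in pairs]))

# Toy data: 3 "radiologists" and 3 "models" on one 64x64 case.
rng = np.random.default_rng(0)
human_masks = rng.random((3, 64, 64)) > 0.5   # placeholder annotations
model_masks = rng.random((3, 64, 64)) > 0.5   # placeholder predictions
print("human agreement:", mean_pairwise_dice(human_masks))
print("model agreement:", mean_pairwise_dice(model_masks))
```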
quality assurance of challenge data sets
Participation Incentives
How do we get people to participate?
Challenge + Prizes + conferences?
reputation
training
Overcoming Data Issues
Issue of "ground truths" in real medical settings
MiBi's master's student is working on something that evaluates the importance of accurate data labeling -- might be interesting for inspiration.
ToDo: Ask MiBi about datasets that are "difficult" and on which physicians show larger variance.
Missing baseline performance in most ML applications (the reference standard is often assumed to be 100% correct); see the sketch below.
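A small simulated illustration of this point (all numbers are made up, not from any dataset): when the reference standard itself contains label noise, the measured accuracy no longer reflects a model's true accuracy.

```python
# Hypothetical simulation: a model with 95% true accuracy is scored
# against a reference standard that is itself only 90% correct.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
truth = rng.integers(0, 2, n)                  # latent ground truth
noisy = rng.random(n) < 0.10                   # 10% reference label noise
reference = np.where(noisy, 1 - truth, truth)  # imperfect "ground truth"
model = np.where(rng.random(n) < 0.95, truth, 1 - truth)  # 95% true accuracy

print("true accuracy:    ", (model == truth).mean())      # ~0.95
print("measured accuracy:", (model == reference).mean())  # ~0.86
```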
Experiment Design
First ideas
Which task setting to choose? (see the sketch below)
Global (e.g., accuracy)
Local (e.g., detection)
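A hedged sketch contrasting the two settings: a global, image-level metric (accuracy) vs. a local, lesion-level detection metric (recall under a centre-distance hit criterion). The distance threshold and the toy data are illustrative assumptions, not fixed choices.

```python
# Hypothetical sketch: global (image-level) vs. local (lesion-level) scoring.
import numpy as np

def global_accuracy(y_true, y_pred) -> float:
    """Fraction of images classified correctly."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def detection_recall(gt_centres, pred_centres, max_dist=5.0) -> float:
    """A ground-truth lesion counts as detected if any predicted centre
    lies within max_dist pixels of it."""
    hits = sum(
        any(np.linalg.norm(np.subtract(g, p)) <= max_dist for p in pred_centres)
        for g in gt_centres
    )
    return hits / len(gt_centres) if gt_centres else 1.0

print(global_accuracy([1, 0, 1, 1], [1, 0, 0, 1]))        # 0.75
print(detection_recall([(10, 10), (40, 40)], [(12, 9)]))  # 0.5: one of two found
```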
Render shapes into the images?
Simulated Dataset?
ToDo: Meeting with Joerg Peter (with Peter Full?) to see whether the simulated phantoms are interesting for the medical people.
Idea:
If using a simulated dataset: give some radiologists (maybe from the HD university clinic) real samples and artificial samples and have them guess which ones come from a real setting.
Any existing (challenging) detection datasets?
ToDo: Ask Tim Rätsch about the influence of instruction precision when labeling segmentations.
use past challenges that have their test sets publicly available
work with current challenge organizers
Data holder incentives
how to get data holders to make their data publicly available?
visibility / citations
quality assurance
distribution of annotations
force publication of data sets in challenges?