Interpretable ML
A Survey of Interpretability
Methods for interpretable ML :check:
Post-modeling (post-hoc) methods (key focus) :star:
Hidden-layer analysis
Understand the neural network layer by layer
Zeiler M D, Fergus R. Visualizing and Understanding Convolutional Networks[C]. ECCV 2014, LNCS 8689: 818-833.
Uses network dissection to extract the concept representations learned by a CNN.
Bau D, Zhou B, Khosla A, et al. Network Dissection: Quantifying Interpretability of Deep Visual Representations[C]. CVPR, 2017: 3319-3327.
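A minimal sketch of the core measurement in Network Dissection, under simplifying assumptions: one convolutional unit's activation maps are thresholded at a top-quantile value computed over the dataset, and the binarized maps are scored against a concept's segmentation masks with IoU. The array names and toy data below are illustrative, not from the paper's released code.

```python
import numpy as np

def unit_concept_iou(activation_maps, concept_masks, quantile=0.995):
    """IoU between a unit's thresholded activations and a concept's masks.

    activation_maps: (N, H, W) activations of one conv unit on N images
                     (assumed already upsampled to the mask resolution).
    concept_masks:   (N, H, W) binary masks for one concept on the same images.
    """
    # Dataset-wide threshold so the unit "fires" on roughly the top 0.5% of pixels.
    threshold = np.quantile(activation_maps, quantile)
    fired = activation_maps > threshold

    intersection = np.logical_and(fired, concept_masks).sum()
    union = np.logical_or(fired, concept_masks).sum()
    return intersection / union if union > 0 else 0.0

# Toy example with random data, just to show the call shape.
rng = np.random.default_rng(0)
acts = rng.random((16, 112, 112))          # one unit, 16 images
masks = rng.random((16, 112, 112)) > 0.9   # fake "concept" segmentation
print("IoU:", unit_concept_iou(acts, masks))
```

The paper then labels each unit with the concept that maximizes this IoU and counts the units whose best IoU clears a small cutoff.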
How the generality vs. specificity of features changes with layer depth
Yosinski J, Clune J, Bengio Y, et al. How transferable are features in deep neural networks?[C]. NIPS, 2014, 27: 3320-3328.
How representational quality relates to hidden-layer depth
Alain G, Bengio Y. Understanding intermediate layers using linear classifier probes[J]. arXiv preprint arXiv:1610.01644, 2016.
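A minimal sketch of a linear classifier probe, assuming a toy PyTorch MLP and random data: intermediate activations are captured with a forward hook and a logistic-regression probe is fit on them; repeating this at different depths traces how linearly decodable the target is across layers.

```python
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

torch.manual_seed(0)

# Stand-in for a trained network; in practice load the model under study.
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
X = torch.randn(512, 20)
y = (X[:, 0] > 0).long()  # toy labels

# Capture activations after the second ReLU (index 3) with a forward hook.
captured = {}
model[3].register_forward_hook(lambda m, i, o: captured.update(acts=o.detach()))

with torch.no_grad():
    model(X)

# The probe: a plain linear classifier fit on the frozen activations.
# Its accuracy measures how linearly decodable the label is at this depth.
acts = captured["acts"].numpy()
probe = LogisticRegression(max_iter=1000).fit(acts, y.numpy())
print("linear probe accuracy at layer index 3:", probe.score(acts, y.numpy()))
```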
Mimic-model (surrogate) methods :star:
Mimic the black-box model with a more interpretable traditional model
Schwartz R, Thomson S, Smith N A. SoPa: Bridging CNNs, RNNs, and weighted finite-state machines[J]. arXiv preprint arXiv:1805.06061, 2018.
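SoPa distills networks into weighted finite-state machines; a much simpler illustration of the same mimic idea is to fit an interpretable model to the black box's own predictions and report fidelity. The sketch below uses a shallow decision tree as the surrogate; the black-box model and dataset are stand-ins, not the paper's method.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# "Black box" to be explained.
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                          random_state=0).fit(X, y)

# The surrogate is trained on the black box's *predictions*, not the true
# labels, so its accuracy here measures fidelity to the black box.
y_bb = black_box.predict(X)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_bb)

fidelity = accuracy_score(y_bb, surrogate.predict(X))
print(f"surrogate fidelity to black box: {fidelity:.3f}")
print(export_text(surrogate))  # human-readable decision rules
```

Low fidelity is a warning that the surrogate's rules should not be read as an explanation of the black box.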
Sensitivity analysis
Based on connection weights
Garson G D. Interpreting neural-network connection weights[J]. AI Expert, 1991, 6(4): 46-51.
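A minimal sketch of Garson's connection-weight algorithm for a single-hidden-layer network with one output: each hidden-to-output weight is distributed over the absolute input-to-hidden weights feeding that hidden unit, and the shares are summed per input. The matrix names and random weights are illustrative.

```python
import numpy as np

def garson_importance(W_ih, w_ho):
    """Relative importance of each input variable (Garson, 1991).

    W_ih: (n_inputs, n_hidden) input-to-hidden weights.
    w_ho: (n_hidden,) hidden-to-output weights (single output unit).
    """
    # Contribution of input i through hidden unit j: |W_ih[i, j]| * |w_ho[j]|,
    # normalized over inputs within each hidden unit.
    contrib = np.abs(W_ih) * np.abs(w_ho)              # (n_inputs, n_hidden)
    contrib = contrib / contrib.sum(axis=0, keepdims=True)
    # Sum over hidden units, then normalize to relative importances.
    importance = contrib.sum(axis=1)
    return importance / importance.sum()

rng = np.random.default_rng(0)
W_ih = rng.normal(size=(5, 8))   # 5 inputs, 8 hidden units
w_ho = rng.normal(size=8)
print(garson_importance(W_ih, w_ho))  # sums to 1 across the 5 inputs
```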
Based on statistical methods (randomization)
Olden J D, Jackson D A. Illuminating the “black box”: a randomization approach for understanding variable contributions in artificial neural networks[J]. Ecological Modelling, 2002, 154(1-2): 135-150.
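A minimal sketch of the randomization idea, under simplifying assumptions: the test statistic is the connection-weight product per input (summed over hidden units), and the null distribution comes from refitting the network on permuted responses; scikit-learn's MLPRegressor stands in for the networks used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def connection_weight_product(net):
    """Per-input statistic: sum over hidden units of w_input->hidden * w_hidden->output."""
    W_ih, w_ho = net.coefs_[0], net.coefs_[1][:, 0]
    return (W_ih * w_ho).sum(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.1 * rng.normal(size=300)  # only inputs 0 and 2 matter

def fit(target):
    return MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                        random_state=0).fit(X, target)

observed = connection_weight_product(fit(y))

# Null distribution: refit on permuted responses, breaking any X-y relation.
null = np.array([connection_weight_product(fit(rng.permutation(y)))
                 for _ in range(20)])

# Crude two-sided permutation p-value per input variable.
p = (np.abs(null) >= np.abs(observed)).mean(axis=0)
print("observed products:", np.round(observed, 2))
print("p-values:", p)
```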
Based on partial derivatives
Dimopoulos Y, Bourret P, Lek S. Use of some sensitivity criteria for choosing networks with good generalization ability[J]. Neural Processing Letters, 1995, 2(6): 1-4.
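A minimal sketch of partial-derivative sensitivity using autograd, with a toy regression network: the sensitivity score of each input is the mean absolute gradient of the output with respect to that input over a batch of samples.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

net = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 1))
X = torch.randn(256, 4, requires_grad=True)

# Summing the outputs yields per-sample input gradients in one backward pass,
# since each output depends only on its own input row.
net(X).sum().backward()

# Mean absolute partial derivative per input variable = sensitivity score.
sensitivity = X.grad.abs().mean(dim=0)
print("per-input sensitivity:", sensitivity)
```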
Based on training samples (influence functions)
Koh P W, Liang P. Understanding Black-box Predictions via Influence Functions[C]. ICML, 2017.
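A minimal sketch of the influence-function idea, under strong simplifications: the model is an L2-regularized logistic regression, so the Hessian can be formed explicitly, and the influence of upweighting a training point z on a test point's loss is approximated by -∇L(z_test)ᵀ H⁻¹ ∇L(z). The dataset and hyperparameters are placeholders; the paper uses Hessian-vector products to avoid forming H.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, y_train, x_test, y_test = X[:-1], y[:-1], X[-1], y[-1]

C = 1.0
clf = LogisticRegression(C=C, fit_intercept=False).fit(X_train, y_train)
theta = clf.coef_.ravel()

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
p = sigmoid(X_train @ theta)

# Per-example gradient of the log loss: (p_i - y_i) * x_i.
grads = (p - y_train)[:, None] * X_train                        # (n, d)

# Hessian of the regularized training objective:
# sum_i p_i (1 - p_i) x_i x_iᵀ + (1/C) I.
H = (X_train * (p * (1 - p))[:, None]).T @ X_train \
    + np.eye(X_train.shape[1]) / C

# Influence of upweighting each training point on the test loss.
g_test = (sigmoid(x_test @ theta) - y_test) * x_test
influence = -grads @ np.linalg.solve(H, g_test)                 # (n,)

harmful = np.argsort(influence)[-3:]   # upweighting these raises test loss most
print("most harmful training points:", harmful, influence[harmful])
```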