Interpretability of machine learning models
In this research, we investigate the effect of cognitive biases on human understanding of machine learning models, focusing on inductively learnt rules. The interim results are presented in:
- Johannes Fürnkranz, and Tomáš Kliegr. "The Need for Interpretability Biases." International Symposium on Intelligent Data Analysis. Springer, Cham, 2018. https://link.springer.com/chapter/10.1007/978-3-030-01768-2_2.
- Johannes Fürnkranz, Tomáš Kliegr, and Heiko Paulheim. "On Cognitive Preferences and the Interpretability of Rule-based Models." Accepted for publication in Machine Learning journal (Springer). arXiv preprint arXiv:1803.01316 (2018). https://arxiv.org/pdf/1803.01316.pdf
- Tomáš Kliegr, Štěpán Bahník, and Johannes Fürnkranz. "A review of possible effects of cognitive biases on interpretation of rule-based machine learning models." arXiv preprint arXiv:1804.02969 (2018). https://arxiv.org/pdf/1804.02969.pdf
- Tomáš Kliegr. "Quantitative CBA: Small and Comprehensible Association Rule Classification Models." arXiv preprint arXiv:1711.10166 (2017). https://arxiv.org/pdf/1711.10166.pdf
Cognitive biases that were demonstrated in our crowdsourcing experiments with association rules include base rate neglect (Kahneman and Tversky, 1973) and insensitivity to sample size (Tversky and Kahneman, 1974). These biases make the user focus on the confidence of the rule and neglect its support. As follows from our review of 20 cognitive biases possibly affecting the interpretability of rules, a number of debiasing techniques have been proposed in psychology.
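To make the confidence/support distinction concrete, the following is an illustrative sketch (not code from the papers) of how the two metrics are computed for an association rule, and why attending only to confidence can mislead. The function name and the toy data are assumptions for the example:

```python
# Illustrative sketch: support and confidence of an association rule
# "antecedent => consequent" over a list of transactions (sets of items).

def rule_metrics(transactions, antecedent, consequent):
    n = len(transactions)
    ante = sum(1 for t in transactions if antecedent <= t)
    both = sum(1 for t in transactions if antecedent <= t and consequent <= t)
    support = both / n                          # how often the whole rule holds
    confidence = both / ante if ante else 0.0   # estimate of P(consequent | antecedent)
    return support, confidence

# A rule can have perfect confidence yet negligible support: a user who
# attends only to confidence (base rate neglect, insensitivity to sample
# size) may over-trust a rule backed by a single observation.
transactions = [{"a", "y"}] + [{"b", "y"}] * 50 + [{"b"}] * 49
print(rule_metrics(transactions, {"a"}, {"y"}))  # (0.01, 1.0)
```

Here the rule {a} => {y} has confidence 1.0 but is supported by only 1 of 100 transactions.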
Some of these, such as frequency formats (Gigerenzer and Hoffrage, 1995), require only changes to the user interfaces that present machine learning results; others imply the introduction of "interpretability biases" into the learning algorithms themselves. A commonly adopted assumption is that shorter models are more interpretable. We review evidence for and against the use of Occam's razor as an optimization criterion in machine learning algorithms.
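As a minimal sketch of the frequency-format idea, a rule's confidence can be restated as a natural frequency rather than a probability; Gigerenzer and Hoffrage (1995) found that such presentations reduce base rate neglect. The function name and wording below are assumptions for illustration:

```python
# Sketch of a frequency-format presentation of rule confidence:
# instead of "confidence = 0.90", show the underlying counts.

def as_frequency_format(support_count, antecedent_count):
    """Render confidence = support_count / antecedent_count as a natural frequency."""
    return (f"{support_count} out of {antecedent_count} cases matching the "
            f"rule's conditions also have its predicted class")

print(as_frequency_format(45, 50))
# presented instead of the bare probability "confidence = 0.90"
```

Note that the counts also convey sample size, which the bare probability hides.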
With the QCBA algorithm, we attempt to improve the interpretability of models generated by CBA by reducing model size both in terms of the number of rules and the number of conditions per rule.
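One standard way CBA-style learners shrink a rule list is data-coverage pruning: a candidate rule is kept only if it correctly classifies at least one training instance not already covered by a higher-priority rule. The sketch below is an illustrative simplification, not the actual QCBA implementation; the data structures and names are assumptions:

```python
# Illustrative data-coverage pruning for an ordered rule list.
# Rules are (predicate, label) pairs in priority order; instances are
# (features, label) pairs. A rule survives only if it correctly covers
# at least one instance left uncovered by earlier rules.

def coverage_prune(rules, instances):
    kept, remaining = [], list(instances)
    for predicate, label in rules:
        covered = [(x, y) for x, y in remaining if predicate(x)]
        if any(y == label for _, y in covered):          # rule helps at least once
            kept.append((predicate, label))
            remaining = [(x, y) for x, y in remaining if not predicate(x)]
    return kept

data = [({"color": "red"}, "stop"), ({"color": "green"}, "go")]
rules = [
    (lambda x: x["color"] == "red", "stop"),
    (lambda x: x["color"] == "red", "stop"),   # redundant: covers nothing new, pruned
    (lambda x: x["color"] == "green", "go"),
]
print(len(coverage_prune(rules, data)))  # 2
```

The pruned list classifies the training data identically while containing fewer rules.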