Explainable machine learning models
This page provides a list of my papers (and other contributions) related to explainable machine learning:
Software for explainable machine learning
EasyMiner.eu (project lead)
Association rule classification: rCBA R package (contributor)
Explanation of association rule models (contributor)
Software created within master's theses that I supervise or have supervised: action rule discovery in Python (ActionRules, RandomForestRules), association rule classification in Python (pyARC), Interpretable Decision Sets in Python (pyIDS).
Tutorial on explainable machine learning for the FORTISS research center (Research Institute of the Free State of Bavaria for Software-Intensive Systems and Services), Munich, 2020
Articles in scientific journals
Stanislav Vojíř, Tomáš Kliegr. Editable Machine Learning Models? A rule-based framework for user studies of explainability. Advances in Data Analysis and Classification, Springer. Preprint: https://nb.vse.cz/~klit01/papers/RuleEditor.pdf
Václav Zeman, Tomáš Kliegr and Vojtěch Svátek. RDFRules: Making RDF Rule Mining Easier and Even More Efficient. Accepted in Semantic Web Journal. http://www.semantic-web-journal.net/system/files/swj2398.pdf
Kliegr, Tomáš, Štěpán Bahník, and Johannes Fürnkranz. "Advances in Machine Learning for the Behavioral Sciences." American Behavioral Scientist 64.2 (2020): 145-175.
Fürnkranz, J., Kliegr, T., & Paulheim, H. (2019). On cognitive preferences and the plausibility of rule-based models. Machine Learning, Springer, 1-46.
Vojíř, S., Zeman, V., Kuchař, J., & Kliegr, T. (2018). EasyMiner.eu: Web framework for interpretable machine learning based on rules and frequent itemsets. Knowledge-Based Systems, 150, 111-115.
Kliegr, T., Svátek, V., Ralbovský, M., & Šimůnek, M. (2011). SEWEBAR-CMS: semantic analytical report authoring for data mining results. Journal of Intelligent Information Systems, 37(3), 371-395.
Papers in conference proceedings
Filip, Jiri, and Tomáš Kliegr. "PyIDS – Python Implementation of Interpretable Decision Sets Algorithm by Lakkaraju et al., 2016." RuleML Challenge, CEUR-WS (2019). Best paper award at RuleML Challenge 2019.
Johannes Fürnkranz, and Tomáš Kliegr. "The Need for Interpretability Biases." International Symposium on Intelligent Data Analysis. Springer, Cham, 2018.
Genský, Oliver, Žárský, Jiří, Kliegr, Tomáš. Empirical Evaluation of Explainability of Topic Modelling and Clustering Visualizations. Znalosti 2019.
Preprints and theses
Tomáš Kliegr, Štěpán Bahník, and Johannes Fürnkranz. "A review of possible effects of cognitive biases on interpretation of rule-based machine learning models." arXiv preprint arXiv:1804.02969 (2018). Under revision in Artificial Intelligence (Elsevier).
Tomáš Kliegr. "QCBA: Postoptimization of Quantitative Attributes in Classifiers based on Association Rules." arXiv preprint arXiv:1711.10166 (2017).
Kliegr, Tomáš. Effect of cognitive biases on human understanding of rule-based machine learning models. Ph.D. dissertation, Queen Mary University of London, 2017.
Other activities
"Can AI be free of bias?", interview featured in an article by the German broadcaster Deutsche Welle.
I serve as a program co-chair of RuleML+RR 2020@DeclarativeAI (originally to be held in Oslo). The theme of the conference is "Explainable algorithmic decision-making".
I serve or have served as a reviewer specializing in topics related to explainable machine learning at multiple artificial intelligence and semantic web conferences, such as AAAI, ECAI, IJCAI, ECML/PKDD, ISWC, ... (cf. academic service for a list).