Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
💡 Adversarial attacks on explanations and how to defend against them
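To make the threat concrete, here is a toy sketch in the spirit of gradient-based explanation attacks (Dombrowski et al., 2019), not this repository's code: the input is nudged so that a vanilla-gradient saliency map moves toward an attacker-chosen map while the model's prediction is held roughly fixed. The architecture, sizes, and loss weights are arbitrary stand-ins; smooth Softplus activations keep the objective twice differentiable.

```python
# Illustrative only: a toy gradient-based attack on a saliency explanation.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 8), nn.Softplus(), nn.Linear(8, 3))
x0 = torch.rand(1, 16)
target_map = torch.zeros(16)
target_map[0] = 1.0  # attacker wants all saliency on feature 0

def saliency(x):
    score = model(x)[0].max()
    # create_graph=True lets us differentiate through the explanation itself
    (g,) = torch.autograd.grad(score, x, create_graph=True)
    return g.abs().flatten()

x = x0.clone().requires_grad_(True)
opt = torch.optim.Adam([x], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    expl_loss = ((saliency(x) - target_map) ** 2).sum()  # move the explanation
    pred_loss = ((model(x) - model(x0)) ** 2).sum()      # keep the prediction
    (expl_loss + 10.0 * pred_loss).backward()
    opt.step()
```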
📍 Interactive Studio for Explanatory Model Analysis
Implements the Tsetlin Machine, Convolutional Tsetlin Machine, Regression Tsetlin Machine, Weighted Tsetlin Machine, and Embedding Tsetlin Machine, with support for continuous features, multigranularity, clause indexing, and literal budget
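For orientation, here is a minimal usage sketch assuming the pyTsetlinMachine package (class name and constructor arguments follow its documented API; the hyperparameter values are illustrative). Tsetlin Machines learn over Boolean literals, so real-valued inputs must be binarized first:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from pyTsetlinMachine.tm import MultiClassTsetlinMachine

X, y = load_breast_cancer(return_X_y=True)

# Threshold each feature at its median to obtain Boolean literals.
X_bin = (X > np.median(X, axis=0)).astype(np.uint32)
y = y.astype(np.uint32)

# 200 clauses, threshold T=15, specificity s=3.9 (illustrative settings).
tm = MultiClassTsetlinMachine(200, 15, 3.9)
tm.fit(X_bin, y, epochs=50)
print("train accuracy:", (tm.predict(X_bin) == y).mean())
```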
Boosting AI research efficiency
PyTorch implementations of various neural network interpretability methods
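For readers new to the area, the simplest such method, vanilla gradient saliency (Simonyan et al., 2014), fits in a few lines of PyTorch; the model below is an arbitrary stand-in:

```python
import torch
import torch.nn as nn

# Toy classifier; any differentiable model works the same way.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in for an input image
logits = model(x)
target = logits.argmax(dim=1).item()

# Backpropagate the predicted-class score to the input; the gradient's
# magnitude says how sensitive that score is to each pixel.
logits[0, target].backward()
saliency = x.grad.abs().squeeze()  # (28, 28) importance map
```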
Summarization of static graphs using the Minimum Description Length principle
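The shared objective behind MDL-based graph summarization (a standard two-part formulation, e.g. Navlakha et al., 2008; this repository's exact cost model may differ) is to pick the summary $S$ that minimizes the total description length:

$$ S^{*} \;=\; \arg\min_{S}\; L(S) + L(G \mid S) $$

where $L(S)$ counts the bits needed to encode the summary graph and $L(G \mid S)$ the bits needed to encode the corrections that reconstruct the original graph $G$ exactly.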
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop Learning, and Visual Analytics.
SurvSHAP(t): Time-dependent explanations of machine learning survival models
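A conceptual sketch of the idea (not the authors' implementation) is to explain the predicted survival probability S(t | x) on a grid of time points with a model-agnostic attribution method. The example below assumes the lifelines and shap packages and applies KernelSHAP to a Cox model; the data, column names, and time grid are made up:

```python
import numpy as np
import pandas as pd
import shap
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.normal(60, 10, 200),
    "biomarker": rng.normal(0, 1, 200),
    "time": rng.exponential(10, 200),
    "event": rng.integers(0, 2, 200),
})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
features = ["age", "biomarker"]
times = [2.0, 5.0, 10.0]  # explain S(t|x) on this grid

def surv_at_times(X):
    Xdf = pd.DataFrame(X, columns=features)
    # rows = times, columns = subjects -> transpose to (n_subjects, n_times)
    return cph.predict_survival_function(Xdf, times=times).T.values

explainer = shap.KernelExplainer(surv_at_times, df[features].iloc[:50].values)
# One attribution matrix per time point: how each feature shifts S(t|x)
# (shap returns a list or a stacked array depending on version).
shap_per_time = explainer.shap_values(df[features].iloc[:5].values)
```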
Code for Interpretable Adversarial Perturbation in Input Embedding Space for Text, IJCAI 2018.
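The paper's key move, restricting the perturbation direction so it points toward an actual word vector (making the attack readable as "token i drifting toward word w"), can be sketched as follows; the toy model, vocabulary, and epsilon are illustrative, not the authors' code:

```python
import torch
import torch.nn.functional as F

vocab, dim = 1000, 50
emb = torch.nn.Embedding(vocab, dim)
clf = torch.nn.Linear(dim, 2)  # toy classifier over mean-pooled embeddings

tokens = torch.tensor([[3, 17, 42]])
label = torch.tensor([1])

e = emb(tokens).detach().requires_grad_(True)      # (1, seq, dim)
loss = F.cross_entropy(clf(e.mean(dim=1)), label)
loss.backward()
g = e.grad[0]                                      # loss gradient per token

# Candidate directions: from each token's vector toward every vocabulary vector.
dirs = emb.weight.unsqueeze(0) - e.detach()[0].unsqueeze(1)  # (seq, vocab, dim)
dirs = F.normalize(dirs, dim=-1)
scores = (dirs * g.unsqueeze(1)).sum(-1)   # alignment with the loss gradient
best = scores.argmax(dim=1)                # most adversarial word per token

eps = 0.5
e_adv = e.detach()[0] + eps * dirs[torch.arange(tokens.size(1)), best]
print("interpretable perturbation targets:", best.tolist())
```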
Code for the paper "A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography," Nature Machine Intelligence, 2021. https://www.nature.com/articles/s42256-021-00423-x
Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Mechanistically interpretable neurosymbolic AI (Nature Comput Sci 2024): losslessly compressing neural networks into computer code and discovering new algorithms that generalize out-of-distribution and outperform human-designed algorithms.
Code for the paper "Invariant Grounding for Video Question Answering".
Graph neural networks, information theory, and AI for science
Trustworthy Length-of-Stay (LoS) Prediction Based on Multi-modal Data (AIME 2023)
Visual explanations of supervised classification models
Framework for material structure exploration
Maximal Linkability metric to evaluate the linkability of (protected) biometric templates. Paper: "Measuring Linkability of Protected Biometric Templates using Maximal Leakage", IEEE-TIFS, 2023.
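For context, the information measure this metric builds on is maximal leakage (Issa et al., 2020); how it is specialized to pairs of protected biometric templates is defined in the paper:

$$ \mathcal{L}(X \to Y) \;=\; \log \sum_{y \in \mathcal{Y}} \max_{x:\, P_X(x) > 0} P_{Y \mid X}(y \mid x) $$

Intuitively, it bounds the multiplicative gain an adversary obtains for guessing any function of the template $X$ after observing $Y$.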