ReX is a causal explainability tool for image classifiers. ReX is black-box, that is, agnostic to the internal structure of the classifier: we only assume that we can modify the inputs, send them to the classifier, and observe the outputs. ReX outperforms other tools on single explanations, non-contiguous explanations (for partially obscured images), and multiple explanations.
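The black-box setting above only requires a classifier that maps an image to scores, plus the ability to occlude parts of the input and re-query. A minimal sketch of that interface (hypothetical names and a toy classifier, not ReX's actual API):

```python
# Sketch of the black-box setting ReX assumes (hypothetical names, not
# ReX's actual API): a function from image to class scores, plus the
# ability to occlude a region of the input and observe the new output.

def classify(image):
    """Stand-in black-box classifier: scores the image by mean brightness."""
    flat = [p for row in image for p in row]
    brightness = sum(flat) / len(flat)
    # Toy two-class output: [dark, bright]
    return [1.0 - brightness, brightness]

def occlude(image, rows, cols):
    """Return a copy of `image` with the given region set to a baseline (0)."""
    out = [list(r) for r in image]
    for i in rows:
        for j in cols:
            out[i][j] = 0.0
    return out

image = [[1.0, 1.0], [0.0, 0.0]]
original = classify(image)
mutated = classify(occlude(image, rows=[0], cols=[0, 1]))
# Comparing `original` and `mutated` indicates how much the occluded
# region contributed to the classifier's output.
```

Any model exposing this query-only interface can be explained, regardless of its internals.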
Variations of ReX
- ReX for explaining the output of image classifiers
- Multi-ReX for extracting multiple explanations
- Med-ReX for medical image classifiers
- YO-ReX for YOLO and other object detectors
Publications
- Explaining Image Classifiers using Statistical Fault Localization. In ECCV’20. The first paper on ReX. Note: the tool is called DeepCover in this paper.
- Explanations for Occluded Images. In ICCV’21. This paper introduces causality to the tool. Note: the tool is called DC-Causal in this paper.
- Multiple Different Explanations for Image Classifiers. Under review. This paper introduces Multi-ReX for multiple explanations.
Contact us to get a working version.
- Daniel Kroening, AWS
- Youcheng Sun, University of Manchester