CNN image classifiers often suffer from biases that impede their practical application. Most existing bias investigation techniques either are inapplicable to general image classification tasks or require significant user effort to peruse all data subgroups and manually specify which data attributes to inspect.
We present VISCUIT, an interactive visualization system that reveals how and why a CNN classifier is biased.
We implemented VISCUIT using the standard HTML/CSS/JavaScript web technology stack and the D3.js visualization library. CNN model training and inference are implemented with PyTorch.
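As a rough illustration of the PyTorch side, a minimal inference sketch for collecting the per-image predictions that a bias auditor like VISCUIT visualizes might look like the following. The model choice (ResNet-50), the preprocessing pipeline, and the file name `example.jpg` are illustrative assumptions, not VISCUIT's actual code:

```python
# Minimal sketch (not VISCUIT's pipeline): run a pretrained CNN with
# PyTorch and collect per-image predictions for downstream auditing.
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

# Load a pretrained CNN classifier; ResNet-50 is an illustrative choice.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg").convert("RGB")  # hypothetical input image
batch = preprocess(image).unsqueeze(0)

# Inference only: no gradients needed.
with torch.no_grad():
    logits = model(batch)
    probs = torch.softmax(logits, dim=1)

top_prob, top_class = probs.max(dim=1)
print(f"predicted class index: {top_class.item()}, "
      f"confidence: {top_prob.item():.3f}")
```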
VISCUIT was created by machine learning and human-computer interaction researchers at Georgia Tech, led by Seongmin Lee. The team includes Seongmin Lee, Jay Wang, Judy Hoffman, and Polo Chau.
If you have any questions or feedback, feel free to open an issue or contact Seongmin Lee. We'd love to hear about your experience with VISCUIT! If you'd like to share how you use VISCUIT or which features you find most helpful, please reach out to us. VISCUIT is an open-source project, and we welcome pull requests for new features, bug fixes, and more.