About

NN Inspector is an interactive web application for visualizing and exploring neural network architectures. It lets users adjust parameters, inspect model behaviour, and gain insight into how neural networks learn and make predictions. It supports several types of neural networks and provides real-time feedback on how changes to a network affect its performance. Unlike other platforms, it also lets users upload and analyze their own operational models. For physical models, where we often hypothesize about key features but rarely visualize or confirm the learned embeddings, the tool offers a way to interpret simplified versions of a model, bringing us closer to a clear understanding of its learning behaviour.

Video Demo

Why is Model Interpretability Important?

As machine learning models become increasingly complex, understanding how they make decisions becomes more challenging. Model interpretability is crucial for ensuring transparency, building trust, and facilitating debugging in AI systems. Interpretable models help us understand why a network produces the predictions it does, rather than treating it as a black box.

However, achieving interpretability is not straightforward given the complexity of modern models. The idea here is to zoom in: start simple and small, and watch how individual neurons behave in physical models.

What Can It Be Used For?

NN Inspector is designed for students, educators, and researchers who are interested in understanding the inner workings of neural networks. It can be used as a teaching aid, a learning tool, or a platform for experimenting with different neural network configurations.

Who is Developing It?

This project is developed by Fraser King, a researcher at the University of Michigan.

Inspiration & Resources

This project draws inspiration from the work of Daniel Smilkov and Shan Carter, particularly their interactive tools that make complex machine learning concepts accessible to a wider audience (https://playground.tensorflow.org/). Researchers such as Chris Olah and his colleagues at Anthropic are also pioneering interpretability techniques to tackle these kinds of challenges, notably in their work on Toy Models of Superposition.

If you are interested in learning more about the fundamental structure of neural networks, I'd recommend Grant Sanderson's deep learning series on YouTube. For hands-on practice with TensorFlow, the interactive notebook tutorials developed by the team at Google are also an excellent starting point.

Last Updated

This application was last updated in November 2024.