Visualize predictions with tables
This tutorial covers how to track, visualize, and compare model predictions over the course of training, using PyTorch on MNIST data.
You will learn how to:
- Log metrics, images, text, and more to a wandb.Table() during model training or evaluation
- View, sort, filter, group, join, interactively query, and explore these tables
- Compare model predictions or results dynamically: across specific images, hyperparameters/model versions, or time steps
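As a quick preview of the logging pattern, here is a minimal sketch of creating and logging a wandb.Table. The project name, column names, and image path are illustrative, not part of the tutorial's actual code:

```python
import wandb

run = wandb.init(project="table-demo")  # project name is an assumption

# Columns are illustrative; wandb.Image accepts file paths, numpy arrays, or PIL images
table = wandb.Table(columns=["id", "image", "guess", "truth"])
table.add_data(0, wandb.Image("example.png"), 7, 2)  # "example.png" is a placeholder path

run.log({"predictions": table})
run.finish()
```

Each logged table can then be viewed, sorted, filtered, and queried in the W&B workspace for that run.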
Examples
Compare predicted scores for specific images
Live example: compare predictions after 1 vs 5 epochs of training →

The histograms compare per-class scores between the two models. The top green bar in each histogram represents model “CNN-2, 1 epoch” (id 0), which only trained for 1 epoch. The bottom purple bar represents model “CNN-2, 5 epochs” (id 1), which trained for 5 epochs. The images are filtered to cases where the models disagree. For example, in the first row, the “4” gets high scores across all the possible digits after 1 epoch, but after 5 epochs it scores highest on the correct label and very low on the rest.
Focus on top errors over time
See incorrect predictions (filter to rows where “guess” != “truth”) on the full test data. Note that there are 229 wrong guesses after 1 training epoch, but only 98 after 5 epochs.

Compare model performance and find patterns
See full detail in a live example →
Filter out correct answers, then group by the guess to see examples of misclassified images and the underlying distribution of true labels—for two models side-by-side. A model variant with 2X the layer sizes and learning rate is on the left, and the baseline is on the right. Note that the baseline makes slightly more mistakes for each guessed class.

Sign up or log in
Sign up or log in to W&B to see and interact with your experiments in the browser.
In this example we’re using Google Colab as a convenient hosted environment, but you can run your own training scripts from anywhere and visualize metrics with W&B’s experiment tracking tool.
Then log in to your account.
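A minimal sketch of authenticating from a notebook or script (the install command is what you would typically run in a notebook cell):

```python
# In a notebook, install the client first, e.g.: !pip install wandb -qqq
import wandb

wandb.login()  # prompts for an API key on first use
```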
0. Setup
Install dependencies, download MNIST, and create train and test datasets using PyTorch.
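A sketch of this setup step, assuming the standard torchvision MNIST dataset and the usual normalization constants:

```python
import torch
import torchvision
import torchvision.transforms as T

# Download MNIST and build the train and test datasets
transform = T.Compose([
    T.ToTensor(),
    T.Normalize((0.1307,), (0.3081,)),  # standard MNIST mean/std
])
train_set = torchvision.datasets.MNIST(
    root="./data", train=True, download=True, transform=transform)
test_set = torchvision.datasets.MNIST(
    root="./data", train=False, download=True, transform=transform)
```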
1. Define the model and training schedule
- Set the number of epochs to run, where each epoch consists of a training step and a validation (test) step. Optionally configure the amount of data to log per test step. Here the number of batches and number of images per batch to visualize are set low to simplify the demo.
- Define a simple convolutional neural net (following pytorch-tutorial code).
- Load in the train and test sets using PyTorch. A sketch of these pieces follows below.
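The following sketch puts these pieces together; the epoch count, logging limits, layer sizes, and batch size are illustrative values rather than the exact settings from the live example:

```python
import torch.nn as nn
from torch.utils.data import DataLoader

# Training schedule and per-test-step logging limits (values are illustrative)
EPOCHS = 5
NUM_BATCHES_TO_LOG = 10      # how many test batches to visualize per epoch
NUM_IMAGES_PER_BATCH = 32    # how many images to visualize per logged batch
BATCH_SIZE = 100

class ConvNet(nn.Module):
    """Simple CNN in the spirit of the pytorch-tutorial reference code."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2),
            nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2))
        self.fc = nn.Linear(7 * 7 * 32, num_classes)  # 28x28 input pooled twice -> 7x7

    def forward(self, x):
        out = self.layer2(self.layer1(x))
        return self.fc(out.reshape(out.size(0), -1))

# DataLoaders for the datasets created in the setup step
train_loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(test_set, batch_size=BATCH_SIZE, shuffle=False)
```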
2. Run training and log test predictions
For every epoch, run a training step and a test step. For each test step, create a wandb.Table() in which to store test predictions. These can be visualized, dynamically queried, and compared side by side in your browser.
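A sketch of that loop, building on the earlier snippets; the project name, table columns, and logging limits are illustrative, and a fresh table is created and logged at every test step:

```python
import torch.nn.functional as F

model = ConvNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

run = wandb.init(project="table-quickstart")  # project name is an assumption

for epoch in range(EPOCHS):
    # --- training step ---
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

    # --- test step: store predictions in a wandb.Table ---
    model.eval()
    columns = ["image", "guess", "truth"] + [f"score_{d}" for d in range(10)]
    table = wandb.Table(columns=columns)
    correct = 0
    with torch.no_grad():
        for batch_idx, (images, labels) in enumerate(test_loader):
            logits = model(images)
            scores = F.softmax(logits, dim=1)
            guesses = scores.argmax(dim=1)
            correct += (guesses == labels).sum().item()
            if batch_idx < NUM_BATCHES_TO_LOG:
                for img, guess, truth, score in zip(
                        images[:NUM_IMAGES_PER_BATCH], guesses, labels, scores):
                    # pixel values are normalized; good enough for a quick visual check
                    table.add_data(wandb.Image(img.squeeze().numpy()),
                                   guess.item(), truth.item(), *score.tolist())
    run.log({"test_predictions": table,
             "epoch": epoch,
             "test_accuracy": correct / len(test_set)})

run.finish()
```

Logging the table under the same key at each epoch lets you open any step's table in the UI and filter, group, or join across runs, which is how the comparisons in the examples above were built.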
What’s next?
In the next tutorial, you will learn how to optimize hyperparameters using W&B Sweeps.