PyTorch Training & Debugging

Evaluate your PyTorch model training and debugging skills.

1. In a PyTorch training loop, which step typically follows loss.backward()?
2. Which of the following are common debugging techniques in PyTorch?
3. The torch.no_grad() context manager is used to disable gradient computation during inference.
4. What PyTorch class is used to wrap datasets for batching, shuffling, and parallel loading? (abbreviation)
5. What is the primary purpose of calling loss.backward() in a training loop?
6. Which method zeros out gradients before the backward pass to prevent accumulation?
7. Which are valid PyTorch optimizer classes?
8. Calling model.train() puts dropout and batch normalization layers into their training-mode behavior.
9. What PyTorch function saves a model's state dictionary to a file? (full name)
10. Which can cause NaN gradients during training?
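The questions above all touch on pieces of the canonical PyTorch training loop. As a study aid, here is a minimal sketch tying them together; the model, data, learning rate, and file name are illustrative, not prescribed by the quiz.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy model and data, purely illustrative.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()
x, y = torch.randn(32, 4), torch.randn(32, 1)

# DataLoader wraps a dataset for batching, shuffling, and parallel loading.
loader = DataLoader(TensorDataset(x, y), batch_size=8, shuffle=True)

model.train()  # training mode: dropout active, batchnorm uses batch statistics
for epoch in range(5):
    for xb, yb in loader:
        optimizer.zero_grad()    # clear gradients so they don't accumulate
        loss = loss_fn(model(xb), yb)
        loss.backward()          # compute gradients via autograd
        # Gradient clipping is one common guard against exploding/NaN gradients;
        # torch.autograd.set_detect_anomaly(True) helps locate the op producing them.
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        optimizer.step()         # parameter update: the step that follows backward()

model.eval()                     # inference mode for dropout/batchnorm
with torch.no_grad():            # disable gradient tracking during inference
    preds = model(x)

torch.save(model.state_dict(), "model.pt")  # persist the state dict
```

Note the ordering inside the inner loop: zero the gradients, run the forward pass, call `loss.backward()`, then `optimizer.step()`.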