PyTorch Training & Debugging
Evaluate your PyTorch model-training and performance-tuning skills.
1. In a PyTorch training loop, which step typically follows loss.backward()?
optimizer.step()
model.forward()
loss.item()
torch.save()
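A minimal sketch of the canonical training-step order (the model, optimizer, and data here are illustrative placeholders): gradients are cleared, the forward pass and loss come next, `loss.backward()` computes gradients, and `optimizer.step()` follows it to apply the update.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.MSELoss()

x = torch.randn(8, 4)
y = torch.randn(8, 1)

optimizer.zero_grad()          # clear gradients from the previous step
loss = criterion(model(x), y)  # forward pass and loss
loss.backward()                # compute gradients
optimizer.step()               # update weights -- the step after loss.backward()
```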
2. Which of the following are common debugging techniques in PyTorch?
Printing tensor values with print()
Using torch.set_printoptions() to adjust output precision
Enabling autograd anomaly detection with torch.autograd.detect_anomaly()
Setting numpy.random.seed() for reproducibility
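Two of the techniques above can be sketched in a few lines: `torch.set_printoptions()` trims printed tensors, and wrapping the backward pass in `torch.autograd.detect_anomaly()` makes autograd flag NaN/inf values produced during it.

```python
import torch

torch.set_printoptions(precision=2)        # shorter tensor printouts while debugging

x = torch.tensor([1.0, 2.0], requires_grad=True)
with torch.autograd.detect_anomaly():      # raises an error if backward produces NaN
    y = (x * 3).sum()
    y.backward()
print(x.grad)                              # gradient is 3 for each element
```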
3. The torch.no_grad() context manager is used to disable gradient computation during inference.
True
False
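This can be verified directly: tensors produced inside a `torch.no_grad()` block carry no autograd history, which is why the context manager is standard for inference.

```python
import torch

x = torch.ones(3, requires_grad=True)
with torch.no_grad():        # gradient tracking disabled inside this block
    y = x * 2
print(y.requires_grad)       # False: no computation graph was built for y
```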
4. What PyTorch class is used to wrap datasets for batching, shuffling, and parallel loading? (abbreviation)
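The class in question wraps any `Dataset` and handles batching, shuffling, and (via `num_workers`) parallel loading. A minimal sketch using a small `TensorDataset`:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

data = TensorDataset(torch.arange(10.0).unsqueeze(1))   # 10 samples of shape (1,)
loader = DataLoader(data, batch_size=4, shuffle=False)  # wraps the dataset for batching
sizes = [batch.shape[0] for (batch,) in loader]
print(sizes)  # [4, 4, 2] -- the last batch holds the remainder
```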
5. What is the primary purpose of calling loss.backward() in a training loop?
Updates model weights
Computes gradients of loss w.r.t. parameters
Initializes model parameters
Resets optimizer state
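The distinction the question draws can be seen on a single scalar parameter: `backward()` fills `.grad` but leaves the parameter value itself untouched (updating it is the optimizer's job).

```python
import torch

w = torch.tensor(2.0, requires_grad=True)
loss = (w * 3) ** 2          # loss = 9 * w**2
loss.backward()              # computes d(loss)/dw = 18 * w; does NOT change w
print(w.grad)                # 36.0
print(w)                     # still 2.0 -- backward never updates weights
```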
6. Which method zeros out gradients before the backward pass to prevent accumulation?
optimizer.zero_grad()
model.zero_grad()
torch.zero_grad()
loss.zero_grad()
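The accumulation behavior the question alludes to is easy to demonstrate: successive `backward()` calls add into `.grad` unless it is cleared first. (The sketch zeroes the tensor directly with `zero_()`; in a real loop `optimizer.zero_grad()` or `model.zero_grad()` does this for every parameter.)

```python
import torch

w = torch.tensor(1.0, requires_grad=True)
(w * 2).backward()
(w * 2).backward()           # gradients accumulate without zeroing
print(w.grad)                # 4.0, not 2.0

w.grad.zero_()               # clear the accumulated gradient
(w * 2).backward()
print(w.grad)                # 2.0 after zeroing
```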
7. Which are valid PyTorch optimizer classes?
torch.optim.SGD
torch.optim.Adam
torch.optim.RMSprop
torch.optim.GradientDescent
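A quick way to check the options above against the library itself: three of the names exist as attributes of `torch.optim`, one does not.

```python
import torch

print(hasattr(torch.optim, "SGD"))              # True
print(hasattr(torch.optim, "Adam"))             # True
print(hasattr(torch.optim, "RMSprop"))          # True
print(hasattr(torch.optim, "GradientDescent"))  # False -- no such class
```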
8. Calling model.train() enables dropout and batch normalization layers during training.
True
False
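The train/eval switch can be made visible on a single `Dropout` layer; `p=1.0` is an extreme choice used here only so the two modes produce clearly different outputs.

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=1.0)     # extreme p makes the mode difference obvious
x = torch.ones(5)

drop.train()                 # model.train() sets this flag on every submodule
train_out = drop(x)          # dropout active: all elements zeroed at p=1.0

drop.eval()                  # model.eval() disables dropout
eval_out = drop(x)           # identity: input passes through unchanged
```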
9. What PyTorch function saves a model's state dictionary to a file? (full name)
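The function in question serializes a `state_dict` to any file path or file-like object; an in-memory buffer is used below so the sketch leaves no file behind.

```python
import io
import torch
import torch.nn as nn

model = nn.Linear(2, 2)
buffer = io.BytesIO()                      # a file path string works the same way
torch.save(model.state_dict(), buffer)     # save only the parameters, not the class

buffer.seek(0)
restored = nn.Linear(2, 2)
restored.load_state_dict(torch.load(buffer))
print(torch.equal(model.weight, restored.weight))  # True
```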
10. Which can cause NaN gradients during training?
Learning rate set too high
Division by zero in the forward pass
Using ReLU activation with negative inputs
Unnormalized input data with large values
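One of the causes above, division by zero in the forward pass, can be reproduced in two lines: the inf produced in the forward pass yields inf/NaN gradients on backward, which is exactly what `torch.autograd.detect_anomaly()` (question 2) is designed to catch.

```python
import torch

x = torch.tensor([0.0, 1.0], requires_grad=True)
y = (1.0 / x).sum()          # 1/0 -> inf in the forward pass
y.backward()                 # d/dx (1/x) = -1/x**2 -> -inf at x = 0
bad = torch.isinf(x.grad).any() or torch.isnan(x.grad).any()
print(bad)                   # True: the zero poisoned the gradients
```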