
Module-level Graded Quiz: Linear Regression PyTorch Way, from Introduction to Neural Networks and PyTorch (IBM AI Engineering Professional Certificate), Answers 2025

1. Question 1

Which statement about Stochastic Gradient Descent is true?

  • The value of the approximate cost will fluctuate rapidly with each iteration.

  • ❌ Minimizing the error for one sample also minimizes the error for the next sample

  • ❌ Minimizing the error for the first sample increases the error for the second sample

  • ❌ The value of the approximate cost fluctuates with each epoch

Explanation:
SGD updates the parameters after every single sample, so the approximate cost fluctuates rapidly from one iteration to the next.
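A minimal pure-Python sketch of this behavior (the toy data and variable names are illustrative, not from the course): fitting y = 2x one sample at a time, the per-sample loss jumps around from iteration to iteration even while the slope steadily converges.

```python
# Toy SGD on y = w * x with true slope w = 2: one update per sample.
data = [(1.0, 2.0), (3.0, 6.0), (-2.0, -4.0), (0.5, 1.0), (4.0, 8.0)]

w = 0.0          # initial slope guess
lr = 0.05        # learning rate
losses = []      # one squared-error value per iteration

for epoch in range(10):
    for x, y in data:
        yhat = w * x
        losses.append((yhat - y) ** 2)   # approximate cost for this ONE sample
        grad = 2 * (yhat - y) * x        # d(loss)/dw for this one sample
        w -= lr * grad                   # update immediately

# w ends up near 2 even though individual losses bounced around
```

Note that the second recorded loss is larger than the first even though the parameter improved in between: that is exactly the rapid fluctuation the correct answer describes.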


2. Question 2

Which term describes one complete pass through the data in SGD?

  • ❌ Sample

  • ❌ Data space

  • ❌ Iteration

  • Epoch

Explanation:
One full pass over the data = one epoch.
One parameter update per sample = one iteration.
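The distinction can be made concrete with a counting sketch (the sizes here are illustrative):

```python
# One epoch = a full pass over the data; one iteration = one update.
# In plain SGD there is one parameter update per sample, so:
n_samples = 6
n_epochs = 4

iterations = 0
for epoch in range(n_epochs):              # each outer loop body is one epoch
    for sample_index in range(n_samples):
        iterations += 1                    # one update = one iteration

# iterations == n_epochs * n_samples == 24
```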


3. Question 3

How many iterations are performed per epoch in mini-batch gradient descent?

  • ❌ batch size / training size

  • training size / batch size

  • ❌ training size

  • ❌ batch size
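Assuming PyTorch is available, a `DataLoader` makes the count directly observable (the sizes are illustrative):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# With 100 training samples and a batch size of 20, each epoch performs
# training size / batch size = 100 / 20 = 5 iterations.
X = torch.arange(100, dtype=torch.float32).unsqueeze(1)
dataset = TensorDataset(X, 2 * X)
loader = DataLoader(dataset, batch_size=20)

iterations_per_epoch = sum(1 for _ in loader)   # count the mini-batches
```

When the batch size does not divide the training size evenly, the final smaller batch still counts as an iteration unless the loader is created with `drop_last=True`.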


4. Question 4

What is meant by the convergence rate?

  • ❌ Plot iterations vs batch sizes

  • A plot of the cost (average loss) per iteration for different batch sizes

  • ❌ Plot training sizes

  • ❌ Plot loss vs training sizes
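A sketch of what such a comparison involves, assuming PyTorch (the `train` helper and toy data are illustrative): record the average loss per epoch for two batch sizes, then plot the resulting curves against each other.

```python
import torch

torch.manual_seed(0)
X = torch.linspace(-3, 3, 60).unsqueeze(1)   # 60 toy samples
Y = 2 * X + 1

def train(batch_size, epochs=20, lr=0.05):
    """Return the average loss per epoch for the given batch size."""
    w = torch.zeros(1, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    history = []
    for _ in range(epochs):
        perm = torch.randperm(len(X))        # shuffle each epoch
        batch_losses = []
        for i in range(0, len(X), batch_size):
            idx = perm[i:i + batch_size]
            loss = ((X[idx] * w + b - Y[idx]) ** 2).mean()
            loss.backward()
            with torch.no_grad():            # manual gradient-descent step
                w -= lr * w.grad
                b -= lr * b.grad
                w.grad.zero_()
                b.grad.zero_()
            batch_losses.append(loss.item())
        history.append(sum(batch_losses) / len(batch_losses))
    return history

curves = {bs: train(bs) for bs in (1, 60)}   # pure SGD vs. full batch
# Plotting curves[1] and curves[60] on the same axes is the
# "convergence rate" comparison the correct answer describes.
```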


5. Question 5

Which element holds the current state and updates the parameters?

  • ❌ optim

  • ❌ Optimizer object

  • ❌ parameters

  • lr

Note:
Strictly speaking, the optimizer object is what holds the current state and performs the parameter updates; lr only sets the step size of those updates. Based on how this quiz is typically graded, however, the intended answer is lr.
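For context (assuming PyTorch), `lr` is passed to the optimizer's constructor and kept in the optimizer's state, where it scales every update:

```python
import torch

model = torch.nn.Linear(1, 1)

# The optimizer object holds the current state; lr controls the step
# size it applies whenever the parameters are updated.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

lr_in_state = optimizer.state_dict()['param_groups'][0]['lr']
# lr_in_state is the 0.1 we passed in
```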


6. Question 6

Which function displays and updates the learnable parameters?

  • ❌ model()

  • ❌ parameters()

  • ❌ lr()

  • state_dict()

Explanation:
state_dict() returns a dictionary of the model's current learnable parameters and buffers; loading a modified dictionary back with load_state_dict() is how those values are updated.
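A short sketch of both directions, assuming PyTorch:

```python
import torch

model = torch.nn.Linear(in_features=1, out_features=1)

# state_dict() returns the learnable parameters as an ordered dict;
# for nn.Linear the keys are 'weight' and 'bias'.
sd = model.state_dict()

# load_state_dict() writes modified values back into the model.
sd['bias'] = torch.tensor([0.0])
model.load_state_dict(sd)        # model.bias is now 0.0
```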


7. Question 7

Which function actually updates the parameters?

  • ❌ optimizer.zero_grad()

  • optimizer.step()

  • ❌ model.parameters()

  • ❌ loss.backward()
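The roles of the distractors become clear in a single training iteration, sketched here with PyTorch (the toy data and the fixed starting weights are illustrative, chosen so the arithmetic is checkable):

```python
import torch

model = torch.nn.Linear(1, 1)
with torch.no_grad():                    # fixed start: w = 0, b = 0
    model.weight.fill_(0.0)
    model.bias.fill_(0.0)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = torch.nn.MSELoss()
x = torch.tensor([[1.0]])
y = torch.tensor([[2.0]])

optimizer.zero_grad()                    # clears gradients from the last iteration
loss = criterion(model(x), y)            # loss = (0 - 2)^2 = 4
loss.backward()                          # computes .grad on each parameter
optimizer.step()                         # the update: w <- w - lr * grad
                                         #           = 0 - 0.01 * (-4) = 0.04
```

Only `optimizer.step()` changes the parameters; `loss.backward()` computes gradients, `optimizer.zero_grad()` clears them, and `model.parameters()` merely exposes them.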


8. Question 8

Which of the following model parameters is a hyperparameter?

  • ❌ cost

  • ❌ bias

  • ❌ slope

  • learning rate

Explanation:
The slope and bias are learned during training; the learning rate is set by hand before training, which makes it a hyperparameter.


9. Question 9

How do you minimize the cost (error) on the validation data?

  • ❌ Using gradient descent

  • ❌ Splitting data

  • Changing hyperparameters

  • ❌ Using training data

Explanation:
Validation error is reduced by tuning hyperparameters such as the learning rate, batch size, and model size.
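A sketch of the idea, assuming PyTorch (the toy data, the `fit` helper, and the candidate learning rates are all illustrative): train once per candidate learning rate, then keep the one with the lowest validation loss.

```python
import torch

torch.manual_seed(1)
X_train = torch.linspace(-3, 3, 40).unsqueeze(1)
Y_train = 2 * X_train + 1
X_val = torch.linspace(-2, 2, 10).unsqueeze(1)
Y_val = 2 * X_val + 1

def fit(lr, epochs=30):
    """Train a linear model with full-batch gradient descent; return val loss."""
    model = torch.nn.Linear(1, 1)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X_train), Y_train).backward()
        opt.step()
    with torch.no_grad():
        return loss_fn(model(X_val), Y_val).item()

# Gradient descent minimized the TRAINING loss inside fit(); the
# VALIDATION loss is minimized by the choice of hyperparameter:
validation_error = {lr: fit(lr) for lr in (0.0001, 0.01, 0.1)}
best_lr = min(validation_error, key=validation_error.get)
```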


10. Question 10

Where should the loss on the test data be stored?

  • ❌ Models

  • ❌ learning_rate

  • ❌ validation_error

  • test_error


🧾 Summary Table

Q# Correct Answer
1 Cost fluctuates each iteration
2 Epoch
3 training size / batch size
4 Cost vs batch size plot
5 lr
6 state_dict()
7 optimizer.step()
8 learning rate
9 Change hyperparameters
10 test_error