Module-Level Graded Quiz: Multiple Input/Output Linear Regression (Introduction to Neural Networks and PyTorch, IBM AI Engineering Professional Certificate) Answers 2025
1. Question 1
Each sample in predictor matrix X represents:
- ❌ A single feature
- ✅ One row of predictor variables
- ❌ Bias term
- ❌ All weights
Explanation:
Each row = one data sample containing all its features.
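A minimal sketch of this layout (the sizes and values are illustrative, not from the quiz):

```python
import torch

# Hypothetical predictor matrix: 3 samples, each with 2 features.
X = torch.tensor([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])

# Each row is one sample; each column is one feature.
n_samples, n_features = X.shape
```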
2. Question 2
How is the prediction ŷ computed?

- ❌ Sum of features
- ❌ Sigmoid
- ❌ Multiply bias
- ✅ Dot product x · w + bias
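The dot-product-plus-bias prediction can be sketched directly (values are illustrative):

```python
import torch

# One sample x, weight vector w, scalar bias b (illustrative values).
x = torch.tensor([1.0, 2.0, 3.0])
w = torch.tensor([0.5, -1.0, 2.0])
b = 0.25

# yhat = x . w + b  (dot product of features and weights, plus bias)
yhat = torch.dot(x, w) + b   # 0.5 - 2.0 + 6.0 + 0.25 = 4.75
```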
3. Question 3
Correct dimension relationship:
-
✅ #columns in X must equal #rows in w
-
❌ Rows = columns
-
❌ Dimensions don’t matter
-
❌ Rows must match
Explanation:
Matrix multiplication rule:
(n_samples × n_features) · (n_features × 1) → (n_samples × 1)
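The rule is easy to check with a small example (shapes are illustrative):

```python
import torch

X = torch.randn(4, 3)   # 4 samples, 3 features
w = torch.randn(3, 1)   # columns of X (3) must equal rows of w (3)

yhat = X @ w            # (4, 3) @ (3, 1) -> one prediction per sample
```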
4. Question 4
Role of criterion:
- ❌ Forward pass
- ✅ Compute loss between prediction and targets
- ❌ Initialize parameters
- ❌ Update weights
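In PyTorch the criterion is typically a loss object such as `nn.MSELoss`; a minimal sketch with hand-picked values:

```python
import torch
import torch.nn as nn

criterion = nn.MSELoss()

yhat = torch.tensor([2.0, 4.0])   # predictions (illustrative)
y    = torch.tensor([1.0, 3.0])   # targets (illustrative)

# Mean of squared differences: ((2-1)^2 + (4-3)^2) / 2 = 1.0
loss = criterion(yhat, y)
```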
5. Question 5
Gradient descent weight update:
- ❌ Multiply by gradient
- ❌ Set weights to gradient
- ❌ Add gradient
- ✅ Subtract gradient × learning rate
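One update step can be sketched with autograd (single weight, illustrative numbers):

```python
import torch

lr = 0.1
w = torch.tensor([1.0], requires_grad=True)
x, y = torch.tensor([2.0]), torch.tensor([5.0])

loss = ((x * w - y) ** 2).mean()   # (2*1 - 5)^2 = 9
loss.backward()                    # dloss/dw = 2*(xw - y)*x = -12

with torch.no_grad():
    w -= lr * w.grad               # w = 1 - 0.1 * (-12) = 2.2
```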
6. Question 6
Main difference in multi-output regression:
- ❌ No bias
- ❌ More features
- ❌ Simpler cost
- ✅ Weights become a matrix instead of a vector
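With `nn.Linear`, multiple outputs give a weight matrix of shape (out_features, in_features) and a bias vector (sizes below are illustrative):

```python
import torch.nn as nn

# Hypothetical sizes: 3 input features, 2 outputs.
model = nn.Linear(in_features=3, out_features=2)

# With multiple outputs the weight is a matrix, not a vector,
# and the bias has one entry per output.
```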
7. Question 7
Purpose of creating a custom PyTorch module:
- ✅ Customize forward pass, add layers, add logic
- ❌ Simpler loss
- ❌ Increase features
- ❌ Manual backprop
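A minimal custom module (class name and sizes are illustrative):

```python
import torch
import torch.nn as nn

class LR(nn.Module):
    """A minimal custom module wrapping one linear layer."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        # Custom forward pass: extra layers or logic could go here.
        return self.linear(x)

model = LR(2, 1)
out = model(torch.randn(5, 2))   # 5 samples in, 5 predictions out
```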
8. Question 8
Cost function measures:
- ❌ Average distance
- ❌ Number of samples
- ✅ Sum of squared distances (MSE)
- ❌ Parameter count
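The squared-distance cost can be sketched by hand; note that `nn.MSELoss` averages the squared distances (values below are illustrative):

```python
import torch

yhat = torch.tensor([2.0, 0.0, 3.0])   # predictions (illustrative)
y    = torch.tensor([1.0, 1.0, 1.0])   # targets (illustrative)

squared_distances = (yhat - y) ** 2    # [1, 1, 4]
cost = squared_distances.mean()        # (1 + 1 + 4) / 3 = 2.0
```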
9. Question 9
How are weights/bias updated?
- ❌ Random values
- ❌ Averaging
- ❌ Multiply by fixed factor
- ✅ Using gradients of cost w.r.t. each weight & bias
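A sketch of how autograd produces a separate gradient for the weight and for the bias (illustrative values):

```python
import torch

w = torch.tensor([1.0], requires_grad=True)
b = torch.tensor([0.0], requires_grad=True)
x, y = torch.tensor([2.0]), torch.tensor([5.0])

cost = ((x * w + b - y) ** 2).mean()
cost.backward()   # fills w.grad and b.grad with dcost/dw and dcost/db
# dcost/dw = 2*(xw + b - y)*x = -12;  dcost/db = 2*(xw + b - y) = -6
```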
10. Question 10
Key difference in training multi-output regression:
- ❌ Optimizer changes
- ❌ Fewer epochs
- ❌ Single-output cost
- ✅ Adjust prediction matrix & weight matrix dimensions
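A compact multi-output training loop illustrating the matrix dimensions involved (data, sizes, and learning rate are illustrative):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical data: 8 samples, 3 input features, 2 output targets.
X = torch.randn(8, 3)
Y = torch.randn(8, 2)

model = nn.Linear(3, 2)                  # weight matrix (2, 3), bias (2,)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

losses = []
for epoch in range(50):
    yhat = model(X)                      # prediction matrix has shape (8, 2)
    loss = criterion(yhat, Y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```

The loop is identical to the single-output case; only the shapes of the prediction and weight matrices change.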
🧾 Summary Table
| Q# | Correct Answer |
|---|---|
| 1 | One row of predictor variables |
| 2 | Dot product + bias |
| 3 | Columns(X) = rows(w) |
| 4 | Compute loss |
| 5 | Subtract gradient |
| 6 | Weights become a matrix |
| 7 | Customize forward pass |
| 8 | Sum of squared distances |
| 9 | Use gradients for updates |
| 10 | Adjust matrix dimensions |