Module-level Graded Quiz: Linear Classifiers — Introduction to Neural Networks and PyTorch (IBM AI Engineering Professional Certificate) Answers 2025
1. Question 1
Logistic regression predicts:
- ❌ Price
- ❌ Age
- ❌ Weight
- ✅ The class a sample belongs to
Explanation:
Logistic regression is a classification model.
2. Question 2
Class vector y represents:
- ❌ Features
- ✅ Discrete class labels
- ❌ Continuous values
- ❌ Bias terms
Explanation:
For binary logistic regression, y ∈ {0, 1}.
3. Question 3
If a dataset can be separated by a line, it is:
- ❌ Nonlinear
- ❌ Multiclass
- ❌ Unclassifiable
- ✅ Linearly separable
4. Question 4
In w · x + b, the term b is:
- ❌ Feature
- ❌ Weight
- ✅ Bias term
- ❌ Sample value
5. Question 5
Function used in logistic regression:
- ❌ ReLU
- ❌ Tanh
- ✅ Sigmoid function
- ❌ Linear function
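For reference, the sigmoid maps any real input to a value in (0, 1), which is what lets logistic regression interpret its output as a class probability. A minimal pure-Python sketch:

```python
import math

def sigmoid(z):
    """Logistic sigmoid: squashes any real z into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Near 0 for large negative z, near 1 for large positive z, 0.5 at z = 0.
print(sigmoid(0.0))  # 0.5
```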
6. Question 6
PyTorch package for quickly building logistic regression models:
- ❌ torch.optim
- ❌ torch.autograd
- ❌ torch.nn.functional
- ✅ torch.nn.Sequential
Explanation:
nn.Sequential lets you stack a linear layer and a sigmoid activation quickly.
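As a sketch (assuming PyTorch is installed), a one-feature logistic regression built this way is just a linear layer followed by nn.Sigmoid:

```python
import torch
import torch.nn as nn

# Logistic regression for a single input feature:
# linear transformation w*x + b, then sigmoid to get a probability.
model = nn.Sequential(
    nn.Linear(in_features=1, out_features=1),
    nn.Sigmoid(),
)

x = torch.tensor([[2.0]])  # one sample, one feature
yhat = model(x)            # predicted probability of class 1, in (0, 1)
```

The output `yhat` has shape (1, 1); thresholding it at 0.5 yields the predicted class.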
7. Question 7
Function of nn.Sigmoid():
- ❌ Initialize parameters
- ❌ Create linear model
- ✅ Apply sigmoid activation function
- ❌ Linear transformation
8. Question 8
θ in a Bernoulli distribution represents:
- ❌ Probability of failure
- ❌ Variance
- ❌ Standard deviation
- ✅ Probability of success
9. Question 9
Likelihood of a sequence of Bernoulli events:
- ❌ Add
- ❌ Divide
- ✅ Multiply probabilities
- ❌ Subtract
Explanation:
Independence assumption → likelihood = product of probabilities.
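A small pure-Python sketch of this: for independent Bernoulli trials with success probability θ, each success contributes θ and each failure contributes (1 − θ) to the product.

```python
def bernoulli_likelihood(theta, outcomes):
    """Likelihood of i.i.d. Bernoulli outcomes (1 = success, 0 = failure)."""
    likelihood = 1.0
    for y in outcomes:
        likelihood *= theta if y == 1 else (1.0 - theta)
    return likelihood

# Three successes and one failure with theta = 0.5: 0.5 ** 4
print(bernoulli_likelihood(0.5, [1, 1, 1, 0]))  # 0.0625
```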
10. Question 10
Purpose of cross-entropy loss:
- ❌ Increase learning rate
- ❌ Maximize misclassification
- ❌ Regularize parameters
- ✅ Minimize misclassified samples
Explanation:
Cross-entropy penalizes wrong predictions strongly → improves classification.
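To illustrate the penalty, here is a minimal pure-Python binary cross-entropy (PyTorch provides this as nn.BCELoss): the loss is small for a confident correct prediction and large for a confident wrong one.

```python
import math

def binary_cross_entropy(y_true, y_pred):
    """Mean binary cross-entropy over labels in {0, 1} and probabilities in (0, 1)."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# Confident and correct: small loss. Confident and wrong: large loss.
print(binary_cross_entropy([1], [0.9]))  # ~0.105
print(binary_cross_entropy([1], [0.1]))  # ~2.303
```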
🧾 Summary Table
| Q# | Correct Answer |
|---|---|
| 1 | Class a sample belongs to |
| 2 | Discrete class labels |
| 3 | Linearly separable |
| 4 | Bias term |
| 5 | Sigmoid function |
| 6 | torch.nn.Sequential |
| 7 | Applies sigmoid activation |
| 8 | Probability of success |
| 9 | Multiply probabilities |
| 10 | Minimize misclassified samples |