Graded Quiz: Checklist: Pre-trained Model Loading and Evaluation: AI Capstone Project with Deep Learning (IBM AI Engineering Professional Certificate) Answers 2025

1. Did the notebook define and apply data transformations (e.g., normalization, resizing) for both pipelines?

✔️ Yes
❌ No

Explanation:
Consistent transformations ensure that both the Keras and PyTorch models receive inputs on the same scale, making the comparison fair and unbiased.
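As a sketch, the arithmetic both pipelines must agree on looks like this in plain NumPy (the mean/std below are single-channel placeholders, not the notebook's actual ImageNet statistics):

```python
import numpy as np

# Hypothetical pixel values in [0, 255]; both pipelines must map them identically.
batch = np.array([[0.0, 127.5, 255.0]])

mean, std = 0.5, 0.5  # placeholder stats for illustration

# Keras side: ImageDataGenerator(rescale=1/255) followed by normalization.
keras_norm = (batch / 255.0 - mean) / std

# PyTorch side: transforms.ToTensor() scales to [0, 1], then
# transforms.Normalize(mean, std) applies the same arithmetic.
pytorch_norm = (batch / 255.0 - mean) / std

assert np.allclose(keras_norm, pytorch_norm)  # identical model inputs
print(keras_norm)  # [[-1.  0.  1.]]
```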


2. Did the notebook define a custom PyTorch nn.Module class for the pretrained model architecture?

✔️ Yes
❌ No

Explanation:
PyTorch requires a subclass of nn.Module to structure the pretrained model for forward propagation.
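A minimal sketch of such a wrapper (the backbone below is a stand-in Sequential so the example runs without downloading weights; layer sizes are assumptions, not the notebook's architecture):

```python
import torch
import torch.nn as nn

class PretrainedClassifier(nn.Module):
    """Wraps a pretrained backbone and adds a new classification head."""
    def __init__(self, backbone: nn.Module, num_classes: int = 2):
        super().__init__()
        self.backbone = backbone                     # e.g. a frozen feature extractor
        self.classifier = nn.Linear(8, num_classes)  # assumed feature size

    def forward(self, x):
        return self.classifier(self.backbone(x))

# Stand-in backbone in place of a real pretrained network.
backbone = nn.Sequential(nn.Linear(4, 8), nn.ReLU())
model = PretrainedClassifier(backbone, num_classes=2)
out = model(torch.randn(3, 4))
print(out.shape)  # torch.Size([3, 2])
```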


3. Did you correctly handle device placement (CPU/GPU) in PyTorch?

✔️ Yes
❌ No

Explanation:
Using .to(device) ensures the model and tensors run on GPU when available, maximizing performance.
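The common device-placement pattern, sketched on a toy model:

```python
import torch
import torch.nn as nn

# Select the GPU when available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(4, 2).to(device)      # move parameters to the chosen device
inputs = torch.randn(3, 4).to(device)   # inputs must live on the same device

outputs = model(inputs)
print(outputs.device)
```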


4. Did you calculate the number of samples evaluated using the Keras model?

✔️ Yes
❌ No

Explanation:
The sample count is read from datagen.flow_from_directory().samples, which is required to compute metrics over the full validation set.
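The relationship between the sample count and the number of evaluation steps can be sketched with stand-in numbers (600 and 32 are illustrative, not the notebook's values):

```python
import math

num_samples = 600   # stand-in for datagen.flow_from_directory().samples
batch_size = 32

# Steps needed so evaluation sees every sample exactly once.
steps = math.ceil(num_samples / batch_size)
print(steps)  # 19
```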


5. Did the notebook skip dropout layers during PyTorch evaluation?

✔️ Yes
❌ No

Explanation:
model.eval() disables dropout and batchnorm updates, ensuring deterministic predictions.
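The effect is easy to verify on a toy model: in eval mode, repeated forward passes through a dropout layer give identical outputs.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))

model.eval()  # dropout becomes a pass-through; batchnorm stats are frozen
x = torch.ones(1, 4)
out1 = model(x)
out2 = model(x)

assert torch.equal(out1, out2)  # deterministic in eval mode
```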


6. Did the notebook evaluate both models on a validation set without gradient updates?

✔️ Yes
❌ No

Explanation:
Disabling gradients using torch.no_grad() avoids unnecessary memory usage and keeps evaluation unbiased.
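A minimal gradient-free evaluation pattern on a toy model:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
model.eval()
x = torch.randn(5, 4)

with torch.no_grad():   # no computation graph is built
    preds = model(x)

assert preds.requires_grad is False  # nothing to backpropagate through
```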


7. Did the notebook compute and display accuracy metrics for both models?

✔️ Yes
❌ No

Explanation:
Accuracy provides a direct, comparable measure of model performance across frameworks.
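With illustrative predictions (not the notebook's results), accuracy reduces to the fraction of matching labels:

```python
# All values below are illustrative, not actual model outputs.
y_true       = [0, 1, 1, 0, 1]
keras_pred   = [0, 1, 0, 0, 1]
pytorch_pred = [0, 1, 1, 0, 0]

def accuracy(truth, pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(truth, pred)) / len(truth)

print(f"Keras accuracy:   {accuracy(y_true, keras_pred):.2f}")    # 0.80
print(f"PyTorch accuracy: {accuracy(y_true, pytorch_pred):.2f}")  # 0.80
```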


8. Did the notebook calculate and plot a confusion matrix for both models?

✔️ Yes
❌ No

Explanation:
Confusion matrices highlight class-wise misclassifications and provide deeper diagnostic insight.
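A sketch using scikit-learn's confusion_matrix on made-up labels (rows are true classes, columns are predicted classes):

```python
from sklearn.metrics import confusion_matrix

# Illustrative labels, not actual model outputs.
y_true = [0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1]

cm = confusion_matrix(y_true, y_pred)
print(cm)
# [[2 1]
#  [1 2]]
```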


9. Did you define the model_metrics function returning all key metrics?

✔️ Yes
❌ No

Explanation:
This function centralizes all metric calculations: Accuracy, Precision, Recall, F1, Log Loss, ROC-AUC, Confusion Matrix, and Classification Report.
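One way such a helper might look, sketched with scikit-learn (the signature and return structure are assumptions, not the notebook's exact code):

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, log_loss, roc_auc_score,
                             confusion_matrix, classification_report)

def model_metrics(y_true, y_pred, y_prob):
    """Return all key evaluation metrics in one dictionary."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "log_loss": log_loss(y_true, y_prob),
        "roc_auc": roc_auc_score(y_true, y_prob),
        "confusion_matrix": confusion_matrix(y_true, y_pred),
        "report": classification_report(y_true, y_pred),
    }

# Illustrative labels and probabilities.
metrics = model_metrics([0, 1, 1, 0], [0, 1, 0, 0], [0.2, 0.9, 0.4, 0.1])
print(metrics["accuracy"])  # 0.75
```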


10. Is a reusable plot_roc function defined to create ROC curves?

✔️ Yes
❌ No

Explanation:
A dedicated ROC plotting function makes it easy to compare binary or multi-class ROC curves across models.
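One possible shape for such a helper, sketched with scikit-learn and Matplotlib (the function name and signature are assumptions):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

def plot_roc(y_true, y_prob, label, ax=None):
    """Plot one ROC curve and return its AUC, so several models can share axes."""
    fpr, tpr, _ = roc_curve(y_true, y_prob)
    roc_auc = auc(fpr, tpr)
    ax = ax or plt.gca()
    ax.plot(fpr, tpr, label=f"{label} (AUC = {roc_auc:.2f})")
    ax.plot([0, 1], [0, 1], linestyle="--", color="grey")  # chance line
    ax.set_xlabel("False Positive Rate")
    ax.set_ylabel("True Positive Rate")
    ax.legend()
    return roc_auc

# Illustrative labels and scores.
score = plot_roc([0, 1, 1, 0, 1], [0.1, 0.8, 0.6, 0.3, 0.9], "Keras model")
```

Calling it once per model on the same axes overlays the curves for a direct comparison.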


11. Did you complete all tasks and download the Jupyter notebook?

✔️ Yes
❌ No

Explanation:
The downloaded notebook is required for submission and evaluation.


🧾 Summary Table

| Q# | Correct Answer | Key Concept |
|----|----------------|-------------|
| 1  | Yes | Consistent transformations across frameworks |
| 2  | Yes | Custom PyTorch model definition |
| 3  | Yes | Device management |
| 4  | Yes | Sample count for Keras metrics |
| 5  | Yes | Evaluation mode disables dropout |
| 6  | Yes | Gradient-free validation |
| 7  | Yes | Accuracy comparison |
| 8  | Yes | Confusion matrix analysis |
| 9  | Yes | Unified metrics function |
| 10 | Yes | ROC curve plotting |
| 11 | Yes | Notebook completion & download |