Module-level Graded Quiz: Convolutional Neural Networks, Deep Learning with PyTorch (IBM AI Engineering Professional Certificate) Answers 2025

1. Question 1 — Purpose of convolution

  • ❌ To increase channels

  • ✅ To detect local patterns in the input image

  • ❌ To apply activation

  • ❌ To reduce image size
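
A minimal PyTorch sketch of this idea: a hand-crafted vertical-edge kernel (hypothetical values, chosen purely for illustration) responds strongly only where that local pattern appears in the input.

```python
import torch
import torch.nn.functional as F

# A 3x3 vertical-edge kernel: the convolution responds wherever the
# local pattern (a left-to-right intensity change) appears.
kernel = torch.tensor([[[[-1., 0., 1.],
                         [-1., 0., 1.],
                         [-1., 0., 1.]]]])  # shape: (out_ch=1, in_ch=1, 3, 3)

# Toy 1-channel image: dark left half, bright right half.
image = torch.zeros(1, 1, 6, 6)
image[..., 3:] = 1.0

response = F.conv2d(image, kernel)
print(response.squeeze())  # peaks along the vertical edge in the middle
```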


2. Question 2 — Zero padding effect

  • ❌ Decreases size

  • ✅ Increases the size of the activation map

  • ❌ Doubles size

  • ❌ No effect
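
A quick sketch of the effect, assuming a 3×3 kernel with stride 1; the output size follows (in + 2·padding − kernel) / stride + 1, so padding makes the activation map larger than it would be without it.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 8, 8)

conv_no_pad = nn.Conv2d(1, 1, kernel_size=3, padding=0)
conv_padded = nn.Conv2d(1, 1, kernel_size=3, padding=1)

print(conv_no_pad(x).shape)  # torch.Size([1, 1, 6, 6]) -- map shrinks
print(conv_padded(x).shape)  # torch.Size([1, 1, 8, 8]) -- input size preserved
```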


3. Question 3 — Max pooling

  • ❌ Adds non-linearity

  • ❌ Enhances contrast

  • ✅ Reduces spatial dimensions

  • ❌ Increases channels
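
A sketch with nn.MaxPool2d: a 2×2 window with stride 2 halves the height and width while leaving the channel count alone.

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2)
x = torch.randn(1, 16, 32, 32)

# Height and width are halved; the 16 channels are untouched.
print(pool(x).shape)  # torch.Size([1, 16, 16, 16])
```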


4. Question 4 — Activation setting negatives to 0

  • ✅ ReLU

  • ❌ Tanh

  • ❌ Sigmoid

  • ❌ Softmax
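
A one-line demonstration of the behavior:

```python
import torch

x = torch.tensor([-2.0, -0.5, 0.0, 1.5])
print(torch.relu(x))  # negatives clamped to 0, positives pass through
```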


5. Question 5 — Activation with multiple channels

  • ❌ Only last channel

  • ❌ Only first channel

  • ✅ Applied individually to each element in every channel

  • ❌ Applied to sum of channels
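
A sketch confirming the element-wise behavior on a multi-channel tensor (random values, for illustration only):

```python
import torch

x = torch.randn(1, 3, 4, 4)   # a 3-channel activation map
y = torch.relu(x)             # applied element-wise, channel by channel

print((y >= 0).all())         # True: every element in every channel transformed
print(y.shape)                # shape unchanged: torch.Size([1, 3, 4, 4])
```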


6. Question 6 — Purpose of flattening

  • ❌ Reduce channels

  • ❌ Increase spatial dimensions

  • ✅ Convert 2D feature map into 1D vector

  • ❌ Apply pooling
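
A sketch of flattening ahead of a fully connected layer; the 16×5×5 feature-map size here is a hypothetical example:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 16, 5, 5)   # 2D feature maps coming out of the conv stack
flat = nn.Flatten()(x)         # -> (1, 16*5*5) = (1, 400)

fc = nn.Linear(400, 10)        # a fully connected layer expects a 1D vector per sample
print(fc(flat).shape)          # torch.Size([1, 10])
```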


7. Question 7 — Output channels

  • ❌ Image width

  • ✅ Number of feature maps

  • ❌ Number of input images

  • ❌ Image height
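
A sketch making the correspondence concrete: out_channels=16 means 16 filters, and hence 16 feature maps in the output.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
x = torch.randn(1, 3, 32, 32)

print(conv(x).shape)      # torch.Size([1, 16, 32, 32]) -- one feature map per filter
print(conv.weight.shape)  # torch.Size([16, 3, 3, 3])   -- 16 filters of shape 3x3x3
```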


8. Question 8 — Benefit of pre-trained models

  • ❌ Auto fine-tuning

  • ❌ No dataset needed

  • ✅ Provides a strong starting point

  • ❌ Optimized only for speed
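
A sketch of loading pre-trained weights, assuming torchvision's weights API (0.13+); older releases used `pretrained=True` instead.

```python
import torchvision.models as models

# Weights learned on ImageNet give the network useful general-purpose
# features from the start, so far less data and training are needed downstream.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
```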


9. Question 9 — Why requires_grad=False

  • ❌ Save memory

  • ❌ Speed up forward pass

  • ❌ Auto LR adjust

  • ✅ To prevent modifying pre-trained weights
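
A common freezing pattern, sketched with a torchvision ResNet-18 and a hypothetical 10-class head:

```python
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze every pre-trained parameter: no gradients are computed for them,
# so the optimizer never updates these weights.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer; only its fresh parameters remain trainable.
model.fc = nn.Linear(model.fc.in_features, 10)
```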


10. Question 10 — Why use a GPU

  • ❌ Simplify code

  • ❌ Improve visualization

  • ❌ Reduce model size

  • ✅ Accelerate matrix operations
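
A sketch of the usual device pattern; the layer and batch sizes here are arbitrary, and the code falls back to the CPU when no GPU is available.

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Conv2d(3, 16, kernel_size=3).to(device)  # parameters live on the GPU
x = torch.randn(8, 3, 32, 32, device=device)        # move each batch to the same device

y = model(x)      # the convolution's matrix math runs in parallel on the GPU
print(y.device)
```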


🧾 Summary Table

| Q# | Correct Answer | Key Concept |
|----|----------------|-------------|
| 1 | Detect local patterns | Convolutions learn features |
| 2 | Increases size | Zero padding preserves spatial dims |
| 3 | Reduce spatial dimensions | Pooling downsamples |
| 4 | ReLU | Removes negatives |
| 5 | Applied per-element, per-channel | Channel-wise activation |
| 6 | Convert to 1D | Required for fully connected layers |
| 7 | Feature maps | Output channels = number of filters |
| 8 | Strong starting point | Transfer learning |
| 9 | Prevent weight updates | Freeze layers |
| 10 | Accelerate matrix ops | GPU parallel computing |