
Graded Quiz: Advanced CNNs in Keras - Deep Learning with Keras and TensorFlow (IBM AI Engineering Professional Certificate) Answers 2025

1. Question 1

Which architecture uses small 3×3 filters and increases network depth?

  • ❌ GRU

  • ❌ RNN

  • ❌ LSTM

  • VGG

Explanation:
VGG16/VGG19 use repeated blocks of 3×3 convolutions to form deep CNNs.


2. Question 2

Purpose of MaxPooling2D((2,2))?

  • Reduces dimensionality

  • ❌ Flattens feature maps

  • ❌ Final classification

  • ❌ Extracts features

Explanation:
MaxPooling reduces the spatial size of feature maps, lowering computation while retaining the dominant activations in each window.
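A minimal NumPy sketch of what a 2×2 max-pooling window with stride 2 does to a single feature map (the helper name `max_pool_2x2` is ours, not a Keras API):

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on a (H, W) feature map."""
    h, w = x.shape
    # Trim odd edges, group into 2x2 blocks, take the max of each block.
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.arange(16, dtype=float).reshape(4, 4)
pooled = max_pool_2x2(fmap)
print(pooled.shape)  # (2, 2) -- each spatial dimension is halved
print(pooled)        # the max of each 2x2 block survives
```

In Keras, `MaxPooling2D((2, 2))` applies the same operation per channel across the whole batch.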


3. Question 3

How does ImageDataGenerator help with augmentation?

  • ❌ Reducing resolution

  • ❌ Cropping only

  • ❌ Adding noise

  • Rotating, shifting, flipping images

Explanation:
ImageDataGenerator applies random transformations (rotations, shifts, flips) to training images on the fly, creating varied inputs that reduce overfitting.
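A short sketch of the augmentation setup, using a random dummy batch in place of real data (the specific transformation values are illustrative, not from the quiz):

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Random rotations, shifts, and horizontal flips (values are illustrative).
datagen = ImageDataGenerator(
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
)

# A dummy batch of four 32x32 RGB images stands in for real training data.
images = np.random.rand(4, 32, 32, 3).astype("float32")
batch = next(datagen.flow(images, batch_size=4, shuffle=False))
print(batch.shape)  # same shape as the input, contents randomly transformed
```

Each epoch sees a differently transformed version of every image, which is what makes the augmentation effective against overfitting.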


4. Question 4

Purpose of featurewise_center?

  • ❌ Normalize each sample individually

  • Set the dataset’s mean to 0

  • ❌ Rotate images

  • ❌ Add noise

Explanation:
featurewise_center subtracts the global dataset mean from all images.
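A NumPy sketch of the arithmetic behind `featurewise_center` (Keras computes this mean when you call `datagen.fit(x)`; here we do it by hand on a toy dataset):

```python
import numpy as np

# Toy "dataset": 10 grayscale 8x8 images.
images = np.random.rand(10, 8, 8, 1)

# featurewise_center computes the mean over the whole dataset
# (per channel) and subtracts it from every image.
dataset_mean = images.mean(axis=(0, 1, 2), keepdims=True)
centered = images - dataset_mean

print(abs(centered.mean()) < 1e-9)  # the global mean is now ~0
```

Contrast this with `samplewise_center`, which subtracts each image's own mean instead of the dataset-wide mean.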


5. Question 5

Common ImageNet pre-trained model for transfer learning?

  • ❌ RNN

  • ❌ GRU

  • VGG16

  • ❌ LSTM

Explanation:
VGG16 is a classic CNN widely used for transfer learning.


6. Question 6

Meaning of include_top=False?

  • ❌ Exclude batch norm

  • ❌ Exclude pooling layers

  • ❌ Exclude convolution layers

  • Exclude fully connected (classification) layers

Explanation:
This keeps the convolutional base but removes the dense classifier.


7. Question 7

Why freeze pre-trained model layers initially?

  • ❌ Speed up training (secondary effect)

  • Keep pre-trained weights unchanged

  • ❌ Reduce memory

  • ❌ Prevent overfitting

Explanation:
Freezing preserves learned features from ImageNet.


8. Question 8

Fine-tuning a pre-trained model means:

  • ❌ Freeze all layers

  • ❌ Train only top layers

  • ❌ Change architecture

  • Unfreeze and retrain some deeper layers

Explanation:
Unfreezing some of the deeper layers and retraining them at a low learning rate refines the higher-level feature representations for the new task.
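Questions 5-8 fit together in one workflow. A hedged sketch, assuming a 10-class problem and a 224×224 input (both illustrative); `weights=None` keeps the example offline, whereas real transfer learning would use `weights="imagenet"`:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models
from tensorflow.keras.optimizers import Adam

# include_top=False keeps the convolutional base, drops the dense classifier.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))

# Freeze the base so its (pre-trained) weights stay unchanged at first.
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(10, activation="softmax"),  # 10 classes is an assumption
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
# ...train the new head here...

# Fine-tuning: unfreeze only the deepest convolutional block (block5)
# and recompile with a low learning rate before retraining.
base.trainable = True
for layer in base.layers:
    if not layer.name.startswith("block5"):
        layer.trainable = False
model.compile(optimizer=Adam(1e-5), loss="categorical_crossentropy")
```

Recompiling after changing `trainable` is required for the change to take effect during training.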


9. Question 9

Role of flow_from_directory?

  • Load images + apply augmentation from directory

  • ❌ Generate synthetic images

  • ❌ Convert to grayscale

  • ❌ Compile model

Explanation:
It loads batches of images directly from folders.
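A sketch of the expected directory layout and call, using a temporary folder of random images (the class names `cats`/`dogs` and all sizes are invented for illustration):

```python
import os
import tempfile
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator, save_img

# flow_from_directory expects one subfolder per class:
#   root/cats/*.png, root/dogs/*.png
root = tempfile.mkdtemp()
for cls in ("cats", "dogs"):
    os.makedirs(os.path.join(root, cls))
    for i in range(3):
        save_img(os.path.join(root, cls, f"{i}.png"),
                 np.random.rand(32, 32, 3) * 255)

datagen = ImageDataGenerator(rescale=1.0 / 255)
gen = datagen.flow_from_directory(root, target_size=(32, 32),
                                  batch_size=6, class_mode="categorical")
x, y = next(gen)
print(x.shape, y.shape)  # image batch plus one-hot labels inferred from folders
```

Labels come for free from the folder names, and any augmentation configured on the generator is applied to each batch as it is loaded.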


10. Question 10

How does transpose convolution upsample images?

  • ❌ Compile with Adam

  • ❌ Downsampling

  • ❌ Simplify architecture

  • Inserting zeros between elements of the feature map

Explanation:
Transpose convolution performs learnable upsampling via zero-insertion + convolution.
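The zero-insertion step can be shown in a few lines of NumPy (the helper name `zero_insert` is ours; Keras's `Conv2DTranspose` fuses this with a learned convolution):

```python
import numpy as np

def zero_insert(x, stride=2):
    """Insert (stride - 1) zeros between elements: the upsampling step
    of a strided transpose convolution, before the learned kernel."""
    h, w = x.shape
    out = np.zeros((h * stride - (stride - 1), w * stride - (stride - 1)))
    out[::stride, ::stride] = x
    return out

x = np.array([[1.0, 2.0], [3.0, 4.0]])
up = zero_insert(x)
print(up)
# [[1. 0. 2.]
#  [0. 0. 0.]
#  [3. 0. 4.]]
```

A regular convolution applied to this zero-filled grid then fills in the gaps with learned values, which is why transpose convolution is a *learnable* upsampling rather than fixed interpolation.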


🧾 Summary Table

| Q#  | Correct Answer                     |
|-----|------------------------------------|
| 1   | VGG                                |
| 2   | Reduces dimensionality             |
| 3   | Rotating, shifting, flipping       |
| 4   | Set dataset mean to 0              |
| 5   | VGG16                              |
| 6   | Exclude top fully connected layers |
| 7   | Keep pretrained weights unchanged  |
| 8   | Unfreeze and retrain layers        |
| 9   | Load images & augment              |
| 10  | Insert zeros (transpose conv)      |