
Module 4 Graded Quiz: Deep Learning Models — not really, avoid that: Introduction to Deep Learning & Neural Networks with Keras (IBM AI Engineering Professional Certificate) Answers 2025

1. Question 1

Which layer performs flattening and connects feature maps to outputs?

  • ❌ Input layer

  • ❌ Pooling layer

  • ✅ Fully connected layer

  • ❌ ReLU layer

Explanation:
Fully connected (Dense) layers require flattened input and map extracted features to class probabilities.
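A quick NumPy sketch (illustrative, not from the quiz) of why the fully connected layer needs flattening: a dense layer is just a matrix multiply over a 1-D vector, so 2-D feature maps must be reshaped first.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend output of a conv stack: 4 channels of 5x5 activations.
feature_maps = rng.standard_normal((4, 5, 5))

# Flatten: the dense layer's weight matrix expects a 1-D vector.
flat = feature_maps.reshape(-1)      # shape (100,)

# Fully connected layer mapping 100 features to 3 class scores.
W = rng.standard_normal((3, 100))
b = np.zeros(3)
logits = W @ flat + b

# Softmax turns the scores into class probabilities.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(flat.shape, probs.shape)  # (100,) (3,)
```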


2. Question 2

What enables RNNs to retain sequential dependencies?

  • ❌ Attention weights

  • ❌ Parallel processing

  • ✅ Hidden state propagation

  • ❌ Convolutional filters

Explanation:
RNNs pass a hidden state from one timestep to the next to capture context.
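A minimal vanilla-RNN cell in NumPy (a toy sketch, not the course's Keras layer) makes the hidden-state propagation concrete: each timestep's state is a function of the current input and the previous state.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy vanilla RNN cell: hidden size 4, input size 3.
W_xh = rng.standard_normal((4, 3)) * 0.1
W_hh = rng.standard_normal((4, 4)) * 0.1
b_h = np.zeros(4)

def rnn_step(h_prev, x_t):
    # The new hidden state mixes the current input with the
    # previous hidden state, carrying context forward in time.
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

h = np.zeros(4)                         # initial hidden state
sequence = rng.standard_normal((6, 3))  # 6 timesteps
for x_t in sequence:
    h = rnn_step(h, x_t)                # hidden state propagation

print(h.shape)  # (4,)
```

Because `h` is threaded through every step, the final state depends on the whole sequence — exactly the sequential dependency the question asks about.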


3. Question 3

Key advantage of deep networks for image classification?

  • ❌ Faster convergence

  • ❌ Lower memory

  • ❌ Reduced gradient issues

  • ✅ Learn hierarchical features (edges → shapes → objects)

Explanation:
Deep layers allow building complex feature hierarchies, essential for vision.


4. Question 4

Primary transformer architectural components?

  • ✅ Self-attention + positional encoding

  • ❌ Convolution + pooling

  • ❌ Recurrent layers

  • ❌ Hidden + cell states

Explanation:
Transformers process sequences in parallel using attention + positional information.
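The positional-encoding half of the answer can be sketched in a few lines of NumPy (an illustrative implementation of the standard sinusoidal scheme, not code from the quiz):

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding: sin on even dims, cos on odd dims."""
    pos = np.arange(seq_len)[:, None]        # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]     # (1, d_model/2)
    angles = pos / (10000 ** (2 * i / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = positional_encoding(10, 8)
print(pe.shape)  # (10, 8)
# Position 0 encodes as interleaved sin(0)=0 and cos(0)=1.
print(pe[0])     # [0. 1. 0. 1. 0. 1. 0. 1.]
```

These vectors are added to the token embeddings, so self-attention (which is otherwise order-agnostic) can see where each token sits in the sequence.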


5. Question 5

Principle enabling autoencoder-based anomaly detection?

  • ✅ Learn normal patterns → high reconstruction error = anomaly

  • ❌ Reinforcement learning

  • ❌ Labeled data

  • ❌ Generating synthetic anomalies

Explanation:
Trained on normal data only, an autoencoder learns to reconstruct normal patterns well; anomalous inputs therefore produce high reconstruction error, which flags them without any labeled anomalies.
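A small sketch of the idea (assumptions: a *linear* autoencoder with a 2-unit bottleneck, which is equivalent to PCA and so can be "trained" with an SVD — a stand-in for a real Keras autoencoder):

```python
import numpy as np

rng = np.random.default_rng(2)

# "Normal" data varies only along the first two of five dimensions,
# plus a little noise in the rest.
normal = np.zeros((200, 5))
normal[:, :2] = rng.standard_normal((200, 2))
normal += 0.05 * rng.standard_normal((200, 5))

# Linear autoencoder with a 2-unit bottleneck == PCA on the normal data.
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
decode = Vt[:2]                       # (2, 5) decoder; encoder is its transpose

def reconstruction_error(x):
    z = (x - mean) @ decode.T         # encode into the bottleneck
    x_hat = z @ decode + mean         # decode back to input space
    return np.sum((x - x_hat) ** 2, axis=-1)

# Threshold from errors on normal data only — no anomaly labels needed.
threshold = np.percentile(reconstruction_error(normal), 99)

anomaly = np.array([[0.0, 0.0, 0.0, 0.0, 10.0]])  # far off the normal subspace
print(reconstruction_error(anomaly)[0] > threshold)  # True
```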


6. Question 6

Architecture optimized for visual recognition?

  • ❌ Autoencoders

  • ❌ RBMs

  • ✅ Convolutional Neural Networks (CNNs)

  • ❌ RNNs

Explanation:
CNNs excel at spatial pattern extraction.


7. Question 7 — Select all that apply

What benefits do pooling layers provide?

  • ❌ Feature flattening

  • ✅ Spatial resolution reduction → lower computation

  • ❌ Negative value elimination

  • ✅ Translation & scaling invariance

Explanation:
Pooling reduces spatial dimensionality (lowering computation downstream) and makes features more robust to small translations and scaling.
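The resolution-reduction benefit is easy to see with a toy 2×2 max-pooling implementation in NumPy (illustrative, not Keras's `MaxPooling2D`):

```python
import numpy as np

def max_pool_2x2(fmap):
    """2x2 max pooling with stride 2 on a 2-D feature map."""
    h, w = fmap.shape
    # Trim odd edges, split into 2x2 blocks, take each block's max.
    return fmap[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.arange(16, dtype=float).reshape(4, 4)
pooled = max_pool_2x2(fmap)
print(pooled.shape)  # (2, 2): spatial resolution halved in each dimension
print(pooled)        # [[ 5.  7.]
                     #  [13. 15.]]
```

Each output keeps only the strongest activation in its 2×2 window, which is why a feature shifted by a pixel within the window still produces the same response — the source of the (approximate) translation invariance.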


8. Question 8

Optimal CNN layer configuration?

  • ❌ GlobalAveragePooling2D → Dense → Activation

  • ❌ Flatten → Dense → Dropout → Softmax

  • ✅ Conv2D → BatchNormalization → ReLU → MaxPooling2D

  • ❌ MaxPooling2D → Conv2D → Dense → ReLU

Explanation:
Correct ordering: Conv → Norm → Activation → Pool.
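To make the ordering concrete, here is a toy NumPy pipeline that mimics the four Keras layers in that sequence (a sketch of the stages, not the real layer implementations — e.g. batch norm here is a simple per-map normalization with no learned scale/shift):

```python
import numpy as np

rng = np.random.default_rng(3)

def conv2d(x, kernel):
    """Valid 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def batch_norm(x, eps=1e-5):
    # Simplified normalization over the feature map (no learned gamma/beta).
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x):
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# Conv2D -> BatchNormalization -> ReLU -> MaxPooling2D
image = rng.standard_normal((9, 9))
kernel = rng.standard_normal((3, 3))
out = max_pool(relu(batch_norm(conv2d(image, kernel))))
print(out.shape)  # conv gives (7, 7); pooling trims and halves to (3, 3)
```

Normalizing before the activation keeps ReLU's inputs centered, and pooling last shrinks the already-activated map — the rationale behind the Conv → Norm → Activation → Pool convention.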


9. Question 9

Essential for transformers to understand word order?

  • ❌ Convolution

  • ❌ Pooling

  • ✅ Positional encoding

  • ❌ RNN layers

Explanation:
Self-attention is order-agnostic on its own, so transformers need explicit positional information to recover word order.


10. Question 10

Key limitation of transfer learning?

  • ❌ Pretrained models can’t be modified

  • ❌ Requires larger datasets

  • ❌ Always needs more compute

  • ✅ Domain mismatch may limit performance

Explanation:
If pretrained domain ≠ target domain, performance may drop.


🧾 Summary Table

| Q# | Correct Answer |
|----|----------------|
| 1 | Fully connected layer |
| 2 | Hidden state propagation |
| 3 | Hierarchical feature learning |
| 4 | Self-attention + positional encoding |
| 5 | Reconstruction error detects anomalies |
| 6 | CNN |
| 7 | Spatial reduction, translation/scaling invariance |
| 8 | Conv2D → BatchNorm → ReLU → MaxPool |
| 9 | Positional encoding |
| 10 | Domain mismatch risk |