Module 1 Graded Quiz Answers (2025): Introduction to Neural Networks and Deep Learning, from Introduction to Deep Learning & Neural Networks with Keras (IBM AI Engineering Professional Certificate)
1. Question 1 — Applications of Deep Learning (Select all that apply)
- ❌ Speech enactment
- ✅ Color restoration in grayscale images
- ❌ Automatic coding
- ✅ Automatic handwriting generation
Explanation:
Deep learning is widely used for image-to-image tasks, such as restoring color to grayscale images, and for generative tasks, such as automatic handwriting generation.
2. Question 2 — Components of a Neural Network (Select all that apply)
- ✅ Input layer
- ✅ Hidden layer
- ❌ Sparse layer
- ✅ Output layer
- ❌ Intermediate layer
Explanation:
Standard network components: Input → Hidden → Output.
3. Question 3 — Lip-syncing for dubbed videos
- ❌ Color restoration
- ✅ Speech enactment
- ❌ Automatic sound generation
- ❌ Text-to-image
Explanation:
Speech enactment aligns mouth movements with speech — ideal for lip-sync automation.
4. Question 4 — Weighted Sum Calculation
Given:
x1 = 0.3, x2 = 0.7
w1 = 0.2, w2 = 0.4
b = 0.1
Compute:
z = x1·w1 + x2·w2 + b
  = (0.3 × 0.2) + (0.7 × 0.4) + 0.1
  = 0.06 + 0.28 + 0.1
  = 0.44
- ✅ 0.440
- ❌ 0.340
- ❌ 0.520
- ❌ 0.380
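The calculation above can be checked with a short Python sketch (the variable names mirror the question; nothing here is from the course materials):

```python
# Weighted sum of a single neuron: z = x1*w1 + x2*w2 + b
x = [0.3, 0.7]   # inputs x1, x2
w = [0.2, 0.4]   # weights w1, w2
b = 0.1          # bias

# Dot product of inputs and weights, plus the bias
z = sum(xi * wi for xi, wi in zip(x, w)) + b
print(round(z, 3))  # 0.44
```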
5. Question 5 — Sigmoid Activation
z = -0.4
σ(z) = 1 / (1 + e^(-z)) = 1 / (1 + e^(0.4)) ≈ 1 / (1 + 1.4918) ≈ 0.401
- ❌ 0.350
- ✅ 0.401
- ❌ 0.376
- ❌ 0.450
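The sigmoid value can be reproduced with Python's standard library (a minimal sketch, not course code):

```python
import math

def sigmoid(z):
    """Logistic sigmoid: sigma(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

print(round(sigmoid(-0.4), 3))  # 0.401
```

Note that a negative z gives an output below 0.5, and a positive z gives one above 0.5, which is a quick sanity check for sigmoid questions.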
6. Question 6 — Where are signals processed?
- ❌ Synapse
- ❌ Axon
- ❌ Dendrite
- ✅ Soma
Explanation:
The soma (cell body) is where the biological neuron processes the signals collected by its dendrites.
7. Question 7 — Correct Biological Neural Flow
- ✅ Dendrites → Soma → Axon → Synapse
- ❌ Other options
Explanation:
This is the natural order of information flow: dendrites receive signals, the soma processes them, the axon carries the result, and the synapse passes it to the next neuron.
8. Question 8 — Primary Purpose of Deep Learning
- ❌ Rule-based systems
- ❌ Only image recognition
- ❌ Replace programming
- ✅ Learn complex patterns from large data automatically
9. Question 9 — Correct Layer Connectivity
- ❌ Bidirectional all layers
- ❌ Hidden layers independent
- ❌ Random connectivity
- ✅ Information flows forward: Input → Hidden → Output
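The forward flow Input → Hidden → Output can be sketched as repeated weighted sums passed through a sigmoid. All weights, biases, and layer sizes below are made up purely for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dense(inputs, weights, biases):
    """One fully connected layer: each neuron computes sigmoid(w . x + b)."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

x = [0.3, 0.7]                                        # input layer
h = dense(x, [[0.2, 0.4], [0.5, -0.1]], [0.1, 0.0])   # hidden layer (2 neurons)
y = dense(h, [[0.3, 0.6]], [-0.2])                    # output layer (1 neuron)
print(y)
```

Information only moves forward here: the output of each layer becomes the input of the next, which is exactly the connectivity the correct answer describes.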
10. Question 10 — Role of Weights and Biases
- ❌ Control learning rate
- ✅ Weights = connection strength; Biases = shift activation for flexibility
- ❌ Only used for error correction
- ❌ Store final outputs
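To see why biases "shift activation for flexibility": for the same input and weight, changing only the bias moves the neuron's output up or down the sigmoid curve. The numbers below are illustrative, not from the quiz:

```python
import math

def neuron(x, w, b):
    # Single neuron: sigmoid of the weighted input plus bias
    return 1.0 / (1.0 + math.exp(-(x * w + b)))

# Same input (0.5) and weight (0.8), three different biases:
# a larger bias shifts the activation toward 1, a smaller one toward 0.
for b in (-1.0, 0.0, 1.0):
    print(b, round(neuron(0.5, 0.8, b), 3))
```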
🧾 Summary Table
| Q# | Correct Answer |
|---|---|
| 1 | Color restoration, Automatic handwriting generation |
| 2 | Input layer, Hidden layer, Output layer |
| 3 | Speech enactment |
| 4 | 0.440 |
| 5 | 0.401 |
| 6 | Soma |
| 7 | Dendrites → Soma → Axon → Synapse |
| 8 | Learn complex patterns from large data automatically |
| 9 | Forward flow: Input → Hidden → Output |
| 10 | Weights = connection strength; Bias = activation shift |