Advanced Keras Techniques: Deep Learning with Keras and TensorFlow (IBM AI Engineering Professional Certificate) Answers 2025
1. Question 1
Primary benefit of using a custom training loop:
- ❌ Faster training
- ❌ Less validation data
- ❌ Auto-handles training
- ✅ Greater control over the training process
Explanation:
Custom loops give you full control over the forward pass, backward pass, loss computation, and metric updates.
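For instance, a minimal custom loop makes each of those pieces explicit (a sketch; the tiny regression model, data shapes, and step count are illustrative, not from the quiz):

```python
import tensorflow as tf

# Hypothetical tiny regression model and data, just to drive the loop.
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
loss_fn = tf.keras.losses.MeanSquaredError()

x = tf.random.normal((32, 4))
y = tf.random.normal((32, 1))

for step in range(5):
    with tf.GradientTape() as tape:          # record the forward pass
        predictions = model(x, training=True)
        loss = loss_fn(y, predictions)       # compute the loss yourself
    # backward pass: gradients w.r.t. every trainable variable
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```

Nothing here is handled automatically, which is exactly the point: you decide when and how each step runs.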
2. Question 2
Key component of a custom training loop:
- ❌ Model regularization
- ❌ Data augmentation
- ❌ Model callbacks
- ✅ Dataset
Explanation:
Custom loops manually iterate over tf.data.Dataset batches.
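A typical pipeline looks like this (a sketch; the feature/label shapes and batch size are assumptions):

```python
import tensorflow as tf

# Illustrative in-memory features and labels.
features = tf.random.normal((100, 4))
labels = tf.random.uniform((100,), maxval=2, dtype=tf.int32)

# Build a shuffled, batched pipeline a custom loop can iterate over.
dataset = (tf.data.Dataset.from_tensor_slices((features, labels))
           .shuffle(buffer_size=100)
           .batch(32))

n_batches = 0
for batch_x, batch_y in dataset:  # manual iteration, as in a custom loop
    n_batches += 1                # training step would go here
```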
3. Question 3
Purpose of HyperParameters object in Keras Tuner:
- ❌ Store training data
- ✅ Define ranges/values for hyperparameters
- ❌ Compile model
- ❌ Save architecture
Explanation:
HyperParameters lets you define search spaces, such as learning-rate ranges, layer sizes, and dropout rates.
4. Question 4
Main benefit of hyperparameter tuning:
- ✅ Find best hyperparameters for highest performance
- ❌ Simplify architecture
- ❌ Reduce parameters
- ❌ Increase dataset size
Explanation:
Tuning maximizes accuracy/score by testing different configurations.
5. Question 5
Search algorithm provided in Keras Tuner:
- ❌ Simulated Annealing
- ❌ Particle Swarm Optimization
- ✅ Hyperband
- ❌ Genetic Algorithm
Explanation:
Built-in options include Hyperband, RandomSearch, BayesianOptimization.
6. Question 6
Best initialization for ReLU layers:
- ✅ He Initialization
- ❌ Zero
- ❌ Xavier
- ❌ Random
Explanation:
He initialization scales initial weights to the fan-in of each layer, preventing vanishing/exploding activations in ReLU networks.
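In Keras this is a one-line choice of `kernel_initializer` (layer sizes here are illustrative):

```python
import tensorflow as tf

# A ReLU layer paired with He (a.k.a. Kaiming) normal initialization.
layer = tf.keras.layers.Dense(
    64, activation="relu",
    kernel_initializer=tf.keras.initializers.HeNormal(seed=0))

x = tf.random.normal((8, 16))
out = layer(x)  # first call builds the layer and samples He-scaled weights
```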
7. Question 7
Purpose of learning rate scheduling:
- ❌ Reduce dataset
- ✅ Adjust learning rate during training for better convergence
- ❌ Increase epochs
- ❌ Decrease model complexity
Explanation:
Schedulers gradually reduce or modify learning rate to stabilize training.
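Exponential decay is one built-in example, following lr(step) = initial_rate · decay_rate^(step / decay_steps) (the particular rates below are illustrative):

```python
import tensorflow as tf

# Halve the learning rate every 100 steps (smoothly, not stepwise).
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1, decay_steps=100, decay_rate=0.5)

lr_start = float(schedule(0))    # 0.1 at step 0
lr_later = float(schedule(100))  # 0.05 after one decay period
```

A schedule object can be passed directly as the `learning_rate` of an optimizer such as `tf.keras.optimizers.Adam`.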
8. Question 8
Main benefit of batch normalization:
- ❌ Increases parameters
- ✅ Normalizes layer inputs to improve training stability
- ❌ Removes activation
- ❌ Eliminates dropout
Explanation:
Batch norm reduces internal covariate shift → smoother, faster training.
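The normalization is easy to see directly: feed in a batch with a shifted, stretched distribution and the layer's training-mode output is re-centered near zero (batch size and input statistics here are illustrative):

```python
import tensorflow as tf

bn = tf.keras.layers.BatchNormalization()

# Inputs deliberately far from zero mean / unit variance.
x = tf.random.normal((64, 10), mean=5.0, stddev=3.0)
y = bn(x, training=True)  # normalizes each feature over the batch

batch_mean = float(tf.reduce_mean(y))  # approximately 0 after normalization
```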
9. Question 9
Primary purpose of mixed precision training:
- ✅ Speed up training & reduce memory usage
- ❌ Remove validation
- ❌ Improve accuracy
- ❌ Simplify architecture
Explanation:
Mixed precision computes in float16 while keeping variables in float32, giving faster GPU compute and lower memory use without sacrificing numeric stability.
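Enabling it is a single global policy switch; the dtype split is then visible on every layer (a sketch; the layer and input shapes are illustrative, and the policy is reset at the end on the assumption that `float32` is the default):

```python
import tensorflow as tf

# Compute in float16, keep trainable variables in float32.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

layer = tf.keras.layers.Dense(8)
x = tf.random.normal((4, 4))
y = layer(x)

compute_dtype = layer.compute_dtype    # "float16"
variable_dtype = layer.variable_dtype  # "float32"

# Restore the default policy so later code is unaffected.
tf.keras.mixed_precision.set_global_policy("float32")
```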
10. Question 10
How quantization improves TensorFlow model performance:
- ❌ Removes optimizer
- ❌ Improves input quality
- ❌ Adds more layers
- ✅ Reduces model size + increases inference speed
Explanation:
Quantizing weights and ops to lower precision (e.g. int8) → faster & smaller models, especially on edge devices.
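A common route is TensorFlow Lite's post-training dynamic-range quantization (a sketch; the tiny model being converted is illustrative):

```python
import tensorflow as tf

# Hypothetical tiny model to convert.
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(1)])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Optimize.DEFAULT enables dynamic-range quantization of the weights.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()  # serialized, quantized model
```

The resulting bytes can be written to a `.tflite` file and run with the TFLite interpreter on mobile or embedded hardware.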
🧾 Summary Table
| Q# | Correct Answer |
|---|---|
| 1 | Greater control over training |
| 2 | Dataset |
| 3 | Define hyperparameter ranges |
| 4 | Find optimal hyperparameters |
| 5 | Hyperband |
| 6 | He Initialization |
| 7 | Adjust learning rate |
| 8 | Improve training stability |
| 9 | Speed + memory efficiency |
| 10 | Reduce model size + faster inference |