
Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization (Deep Learning Specialization)

Course Assignments

Practical Aspects of Deep Learning: Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization (Deep Learning Specialization) Answers: 2025

Question 1: If you have 10,000 examples, how would you split the train/dev/test set? Choose the best option.

❌ 98% train, 1% dev, 1% test.
✅ 60% train, 20% dev, 20% test.
❌ 33% train, 33% dev, 33% test.

Explanation: A common, balanced split is 60/20/20 (or 70/15/15). 60/20/20 gives enough training data while keeping sizeable dev/test sets…

Full answers: https://codeshala.io/platform/coursera/course/improving-deep-neural-networks-hyperparameter-tuning-regularization-and-optimizationdeep-learning-specialization/assignment/practical-aspects-of-deep-learningimproving-deep-neural-networks-hyperparameter-tuning-regularization-and-optimizationdeep-learning-specialization-answers2025/
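Not part of the original quiz, but a minimal sketch of what a 60/20/20 split of 10,000 examples might look like, assuming the data sits in NumPy arrays X and Y (the array names, feature count, and labels below are illustrative):

```python
import numpy as np

m = 10_000
rng = np.random.default_rng(0)
X = rng.standard_normal((m, 20))          # hypothetical feature matrix (10,000 x 20)
Y = rng.integers(0, 2, size=m)            # hypothetical binary labels

perm = rng.permutation(m)                 # shuffle before splitting
train_idx = perm[:6_000]                  # 60% train
dev_idx = perm[6_000:8_000]               # 20% dev
test_idx = perm[8_000:]                   # 20% test

X_train, Y_train = X[train_idx], Y[train_idx]
X_dev, Y_dev = X[dev_idx], Y[dev_idx]
X_test, Y_test = X[test_idx], Y[test_idx]

print(X_train.shape, X_dev.shape, X_test.shape)   # (6000, 20) (2000, 20) (2000, 20)
```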

Optimization Algorithms: Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization (Deep Learning Specialization) Answers: 2025

Question 1: Which notation would you use to denote the 3rd layer’s activations when the input is the 7th example from the 8th minibatch?

✅ a[3]{8}(7)
❌ a[8]{7}(3)
❌ a[3]{7}(8)
❌ a[8]{3}(7)

Explanation: The notation format is:
[l] → layer number
{k} → minibatch index
(i) → training example index
Hence, the activations for layer 3, 7th example, 8th minibatch → a[3]{8}(7).…

Full answers: https://codeshala.io/platform/coursera/course/improving-deep-neural-networks-hyperparameter-tuning-regularization-and-optimizationdeep-learning-specialization/assignment/optimization-algorithmsimproving-deep-neural-networks-hyperparameter-tuning-regularization-and-optimizationdeep-learning-specialization-answers2025/
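As a rough illustration of the a[l]{k}(i) convention, here is a small sketch, assuming the course layout where each minibatch stacks examples as columns; the layer sizes, weights, and minibatch size below are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
m, batch_size = 1_000, 64
X = rng.standard_normal((10, m))                  # features stacked as columns

# Partition the (already shuffled) data into minibatches X{1}, X{2}, ...
minibatches = [X[:, k:k + batch_size] for k in range(0, m, batch_size)]

# Toy weights for a 3-layer network (layer sizes are illustrative)
W1 = rng.standard_normal((5, 10))
W2 = rng.standard_normal((4, 5))
W3 = rng.standard_normal((3, 4))

X_8 = minibatches[7]                              # {8}: the 8th minibatch (0-indexed as 7)
A1 = np.tanh(W1 @ X_8)                            # a[1]{8}
A2 = np.tanh(W2 @ A1)                             # a[2]{8}
A3 = np.tanh(W3 @ A2)                             # a[3]{8} for every example in the batch
a_3_8_7 = A3[:, 6]                                # (7): 7th example → a[3]{8}(7)
print(a_3_8_7.shape)                              # (3,)
```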

Hyperparameter Tuning, Batch Normalization, Programming Frameworks: Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization (Deep Learning Specialization) Answers: 2025

Question 1: Which of the following are true about hyperparameter search?

✅ Choosing random values for hyperparameters is convenient since we might not know which are most important.
❌ When using random values, they must always be uniformly distributed.
❌ Choosing grid values is better when the number of hyperparameters is high.
✅ When sampling from a grid, the…

Full answers: https://codeshala.io/platform/coursera/course/improving-deep-neural-networks-hyperparameter-tuning-regularization-and-optimizationdeep-learning-specialization/assignment/hyperparameter-tuning-batch-normalization-programming-frameworksimproving-deep-neural-networks-hyperparameter-tuning-regularization-and-optimizationdeep-learning-specialization-answers2025/
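To illustrate why random hyperparameter values need not be uniformly distributed, here is a minimal random-search sketch, assuming a hypothetical train_and_evaluate stub in place of real training; the search ranges are illustrative, not from the assignment:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_and_evaluate(learning_rate, beta, hidden_units):
    """Hypothetical stand-in for training a model and returning dev-set accuracy."""
    return rng.random()

best = None
for _ in range(25):                                   # 25 random trials
    lr = 10 ** rng.uniform(-4, -1)                    # log-uniform over [1e-4, 1e-1]
    beta = 1 - 10 ** rng.uniform(-3, -1)              # momentum beta roughly in [0.9, 0.999]
    units = int(rng.integers(50, 200))                # a uniform draw is fine for this one
    score = train_and_evaluate(lr, beta, units)
    if best is None or score > best[0]:
        best = (score, lr, beta, units)

print("best dev accuracy %.3f with lr=%.5f, beta=%.4f, units=%d" % best)
```

Sampling the learning rate and beta on a log scale spends the trial budget evenly across orders of magnitude, which is the point the ❌ "must always be uniformly distributed" option gets wrong.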