
Graded Quiz: Building Supervised Learning Models (Machine Learning with Python, IBM AI Engineering Professional Certificate) Answers 2025

1. Question 1

A telecom company wants to predict which customers will cancel their service, which is a classification problem. Which model should it use?

  • ❌ Decision trees

  • ❌ Neural networks

  • ❌ Naïve Bayes

  • ✅ K-nearest neighbors

Explanation:
All of these models can perform classification, but KNN is the intended answer: it predicts a customer's outcome from the outcomes of the most similar past customers.
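For reference, a minimal scikit-learn sketch of KNN on a churn-style problem. The feature names (tenure_months, monthly_charge) and the values are made up for illustration, not taken from the course labs:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Hypothetical features: [tenure_months, monthly_charge]; label 1 = cancelled
X = np.array([[2, 80], [40, 60], [3, 95], [50, 55], [1, 100], [36, 70]])
y = np.array([1, 0, 1, 0, 1, 0])

# KNN is distance based, so put the features on a comparable scale first
scaler = StandardScaler().fit(X)
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(scaler.transform(X), y)

# Predict churn for a new short-tenure, high-charge customer
print(knn.predict(scaler.transform([[4, 90]])))
```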


2. Question 2

In the One-vs-One classification strategy, which method is used to decide the final class?

  • ❌ Maximal margin vote

  • ✅ Popularity vote

  • ❌ Confidence-based ranking

  • ❌ Probability average

Explanation:
One-vs-One trains one binary classifier for every pair of classes. Each classifier "votes" for one of its two classes, and the class with the most votes wins.
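A quick sketch with scikit-learn's OneVsOneClassifier; the choice of LogisticRegression as the base estimator and the iris data are just for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsOneClassifier

X, y = load_iris(return_X_y=True)

# 3 classes -> 3 * (3 - 1) / 2 = 3 pairwise binary classifiers
ovo = OneVsOneClassifier(LogisticRegression(max_iter=1000))
ovo.fit(X, y)

print(len(ovo.estimators_))   # 3 pairwise classifiers
print(ovo.predict(X[:5]))     # each prediction is the class with the most votes
```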


3. Question 3

In a decision tree, entropy measures:

  • ✅ The level of disorder or randomness in a node

  • ❌ Count of final nodes

  • ❌ Average feature value

  • ❌ Depth of the tree

Explanation:
Entropy quantifies impurity, i.e., how mixed the class labels are at a node: it is 0 for a pure node and highest when the classes are evenly mixed.
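Concretely, entropy is H = -sum(p_i * log2(p_i)) over the class proportions p_i in the node. A tiny sketch (the entropy helper below is just for illustration):

```python
import numpy as np

def entropy(labels):
    """H = -sum(p_i * log2(p_i)) over the class proportions in the node."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return max(0.0, float(np.sum(-p * np.log2(p))))  # max() avoids printing -0.0

print(entropy([1, 1, 1, 1]))  # 0.0   -> pure node, no disorder
print(entropy([1, 1, 0, 0]))  # 1.0   -> maximally mixed for two classes
print(entropy([1, 1, 1, 0]))  # ~0.81 -> partially mixed
```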


4. Question 4

When a regression tree splits on continuous features, which splitting method does not scale well?

  • ❌ Midpoints method

  • ❌ MSE method

  • ❌ Entropy reduction method

  • ✅ Exhaustive search method

Explanation:
Exhaustive search evaluates every possible threshold for every feature, which becomes extremely slow on large datasets.
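A toy sketch of what an exhaustive search does on a single continuous feature; the data and the best_split_exhaustive helper are made up for illustration:

```python
import numpy as np

def best_split_exhaustive(x, y):
    """Return (threshold, weighted_mse) of the best split on one feature."""
    best = (None, np.inf)
    for t in np.unique(x)[:-1]:                 # try every candidate threshold
        left, right = y[x <= t], y[x > t]
        mse = (len(left) * left.var() + len(right) * right.var()) / len(y)
        if mse < best[1]:
            best = (t, mse)
    return best

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = np.where(x < 4, 1.0, 5.0) + rng.normal(0, 0.3, 200)
print(best_split_exhaustive(x, y))              # threshold should land near 4
```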


5. Question 5

Why does KNN accuracy drop when K increases?

  • ❌ Scaling errors

  • ✅ Too much smoothing of patterns

  • ❌ Small training data

  • ❌ Many irrelevant features

Explanation:
A large K averages over too many neighbors, smoothing away local patterns and blurring class boundaries, which leads to underfitting.
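A quick sketch of the effect using synthetic data from make_classification; the exact accuracies will vary, but very large K typically hurts:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=5, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

for k in (1, 5, 25, 100, 250):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    # accuracy typically drops at large K as the decision boundary over-smooths
    print(k, round(knn.score(X_te, y_te), 3))
```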


6. Question 6

Adjusting epsilon (ε) in Support Vector Regression (SVR) controls:

  • ❌ Number of support vectors

  • ✅ Maximum allowed error within the margin (epsilon tube)

  • ❌ Kernel choice

  • ❌ Decision boundary complexity

Explanation:
Epsilon sets the width of the tube around the fitted function; errors smaller than epsilon are ignored.
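A minimal sketch on synthetic sine data: as epsilon grows, more points fall inside the tube and fewer end up as support vectors (exact counts will vary):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, 100)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.1, 100)

for eps in (0.01, 0.1, 0.5):
    svr = SVR(kernel="rbf", C=1.0, epsilon=eps).fit(X, y)
    # points inside the epsilon tube do not become support vectors
    print(eps, len(svr.support_vectors_))
```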


7. Question 7

What is the primary goal of AdaBoost?

  • ❌ Reduce overfitting with deep trees

  • ✅ Create a strong learner from weak learners by reducing bias

  • ❌ Combine models in parallel

  • ❌ Dimensionality reduction

Explanation:
AdaBoost trains weak learners sequentially, each focusing on correcting prior errors.
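A minimal sketch comparing a single decision stump with an AdaBoost ensemble of stumps on synthetic data; exact scores will vary, but the boosted ensemble is typically far stronger:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

stump = DecisionTreeClassifier(max_depth=1)     # a single weak learner
print(stump.fit(X_tr, y_tr).score(X_te, y_te))  # weak accuracy on its own

ada = AdaBoostClassifier(n_estimators=100, random_state=1)  # sequential stumps
print(ada.fit(X_tr, y_tr).score(X_te, y_te))    # boosted ensemble accuracy
```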


🧾 Summary Table

| Q# | Correct Answer | Key Concept |
|----|----------------|-------------|
| 1 | K-nearest neighbors | Classification using similarity |
| 2 | Popularity vote | One-vs-One strategy |
| 3 | Disorder/randomness in a node | Entropy in decision trees |
| 4 | Exhaustive search | Splitting continuous features |
| 5 | Too much smoothing | High K leads to underfitting |
| 6 | Max allowed error (epsilon tube) | SVR epsilon parameter |
| 7 | Strong learner from weak learners | AdaBoost goal |