Neural Network Basics: Neural Networks and Deep Learning (Deep Learning Specialization) Answers: 2025

Question 1

What does a neuron compute?

❌ A neuron computes an activation function followed by a linear function z = Wx + b
❌ A neuron computes the mean of all features before applying the output to an activation function
❌ A neuron computes a function g that scales the input x linearly (Wx + b)
✅ A neuron computes a linear function z = Wx + b followed by an activation function

Explanation:
A neuron first computes a linear combination of inputs (Wx + b) and then applies a non-linear activation function g(z) to introduce non-linearity.
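A minimal NumPy sketch of this two-step computation (the sizes and the sigmoid choice here are illustrative, not from the quiz):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

W = np.random.randn(1, 3)   # weights for 1 neuron with 3 input features
b = np.zeros((1, 1))        # bias
x = np.random.randn(3, 1)   # one input example

z = np.dot(W, x) + b        # linear step: z = Wx + b
a = sigmoid(z)              # non-linear step: a = g(z)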


Question 2

Which of these is the “Logistic Loss”?

❌ |y(i) − ŷ(i)|
❌ |y(i) − ŷ(i)|²
❌ max(0, y(i) − ŷ(i))
✅ −[y(i)log(ŷ(i)) + (1 − y(i))log(1 − ŷ(i))]

Explanation:
The logistic loss (binary cross-entropy) measures how well predicted probabilities match actual binary labels (0 or 1).
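As a quick sketch in NumPy (the labels and predictions below are made-up values):

import numpy as np

def logistic_loss(y, y_hat, eps=1e-12):
    y_hat = np.clip(y_hat, eps, 1 - eps)   # avoid log(0)
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

y = np.array([1, 0, 1])            # true binary labels
y_hat = np.array([0.9, 0.2, 0.6])  # predicted probabilities
print(logistic_loss(y, y_hat))         # per-example losses
print(logistic_loss(y, y_hat).mean())  # cost J: mean over all examples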


Question 3

Suppose x is an (8,1) array. Which of the following is a valid reshape?

✅ x.reshape(2,2,2)
❌ x.reshape(-1,3)
❌ x.reshape(2,4,4)
❌ x.reshape(1,4,3)

Explanation:
An array with 8 elements can be reshaped to (2,2,2), since 2×2×2 = 8. The other options do not preserve the total number of elements.
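You can verify this directly; a minimal sketch:

import numpy as np

x = np.random.randn(8, 1)        # 8 elements in total

print(x.reshape(2, 2, 2).shape)  # (2, 2, 2) -- valid, since 2*2*2 = 8
# x.reshape(-1, 3)   would raise ValueError: 8 is not divisible by 3
# x.reshape(2, 4, 4) would raise ValueError: needs 2*4*4 = 32 elements
# x.reshape(1, 4, 3) would raise ValueError: needs 1*4*3 = 12 elements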


Question 4

Given a.shape = (3,4), b.shape = (1,4), then c = a + b → what’s c.shape?

✅ (3,4)
❌ (1,4)
❌ Error
❌ (3,1)

Explanation:
Broadcasting stretches the (1,4) row vector across all 3 rows of a, producing (3,4).
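A quick check (the array contents are placeholders; only the shapes matter):

import numpy as np

a = np.ones((3, 4))
b = np.arange(4).reshape(1, 4)  # row vector

c = a + b        # b is stretched across the 3 rows of a
print(c.shape)   # (3, 4)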


Question 5

Given a.shape = (4,3), b.shape = (1,3), c = a * b → what’s c.shape?

✅ (4,3)
❌ (1,3)
❌ Not possible
❌ Size mismatch

Explanation:
The (1,3) array broadcasts along the first dimension, producing (4,3).
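The same broadcasting rule, now with element-wise multiplication (placeholder values again):

import numpy as np

a = np.ones((4, 3))
b = np.arange(3).reshape(1, 3)  # row vector

c = a * b        # b is repeated along the first dimension
print(c.shape)   # (4, 3)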


Question 6

What is the dimension of X = [x(1), x(2), …, x(m)]?

❌ (m,1)
❌ (m, nₓ)
❌ (1,m)
✅ (nₓ, m)

Explanation:
Each column is one training example, so nₓ features × m examples → shape = (nₓ, m).
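A sketch of building X by stacking examples as columns (nₓ = 5 and m = 100 are arbitrary choices for illustration):

import numpy as np

n_x, m = 5, 100
examples = [np.random.randn(n_x, 1) for _ in range(m)]  # m column vectors

X = np.hstack(examples)  # column i holds training example x(i)
print(X.shape)           # (5, 100), i.e. (n_x, m)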


Question 7

Given a = [[2,1],[1,3]], what is a * a ?

❌ Error
❌ [[5,5],[5,10]]
❌ [[4,2],[2,6]]
✅ [[4,1],[1,9]]

Explanation:
The * operator in NumPy performs element-wise multiplication:
2×2 = 4, 1×1 = 1, 1×1 = 1, 3×3 = 9.
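A sketch contrasting element-wise and matrix multiplication:

import numpy as np

a = np.array([[2, 1], [1, 3]])

print(a * a)         # element-wise: [[4, 1], [1, 9]]
print(np.dot(a, a))  # matrix product, by contrast: [[5, 5], [5, 10]]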


Question 8

Vectorize this loop:

for i in range(3):
    for j in range(4):
        c[i][j] = a[i][j] * b[j]

✅ c = a * b
❌ c = a.T * b
❌ c = np.dot(a,b)
❌ c = a * b.T

Explanation:
In the loop, a.shape = (3,4) and b holds one entry per column of a, i.e. shape (4,) (or (1,4) as a row). NumPy broadcasting then multiplies b across every row of a, so the double loop collapses to c = a * b. Note that a (4,1) column would not broadcast against (3,4); it would have to be transposed first.
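A sketch verifying that the loop and the vectorized form agree, assuming b is stored as a (4,) vector:

import numpy as np

a = np.random.randn(3, 4)
b = np.random.randn(4)      # one entry per column of a

c_loop = np.zeros((3, 4))
for i in range(3):
    for j in range(4):
        c_loop[i][j] = a[i][j] * b[j]

c_vec = a * b               # broadcasting replaces both loops
print(np.allclose(c_loop, c_vec))  # True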


Question 9

a = [[1,1],[1,−1]], b = [[2],[3]], c = a + b → what is c?

✅ [[3,3],[4,2]]
❌ Error
❌ [[3,3],[3,1],[4,4],[5,2]]
❌ [[3,4],[3,2]]

Explanation:
The (2,1) column vector b broadcasts across the columns of a, adding its entries row by row:
→ 1st row: [1+2, 1+2] = [3,3]
→ 2nd row: [1+3, −1+3] = [4,2]
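A one-line check:

import numpy as np

a = np.array([[1, 1], [1, -1]])
b = np.array([[2], [3]])  # (2,1) column vector

print(a + b)  # [[3 3]
              #  [4 2]]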


Question 10

What is the output J of the computational graph?

✅ (a − b) * (a − c)
❌ a² − c²
❌ a² − b²
❌ a² + b² − c²

Explanation:
By combining the intermediate nodes of the computational graph, J simplifies to (a − b)(a − c). One graph consistent with this answer (the quiz figure is not reproduced here) computes u = a², v = bc, and w = a(b + c), giving J = u + v − w = a² − a(b + c) + bc = (a − b)(a − c).
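A numeric spot-check of that factorization, using the intermediates assumed above (u, v, w are our reconstruction, not given in the quiz):

a, b, c = 5.0, 2.0, 3.0   # arbitrary test values

u = a * a                 # assumed intermediate: a^2
v = b * c                 # assumed intermediate: bc
w = a * (b + c)           # assumed intermediate: a(b + c)

J = u + v - w
print(J, (a - b) * (a - c))  # both print 6.0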


🧾 Summary Table

Q#   Correct Answer                    Key Concept
1    z = Wx + b → activation           Neuron computation
2    −[y log(ŷ) + (1−y) log(1−ŷ)]      Logistic loss
3    (2,2,2)                           Valid reshape (8 elements)
4    (3,4)                             Broadcasting rule
5    (4,3)                             Element-wise broadcast
6    (nₓ, m)                           Training matrix dimension
7    [[4,1],[1,9]]                     Element-wise multiply
8    c = a * b                         Vectorization
9    [[3,3],[4,2]]                     Broadcasting addition
10   (a − b)(a − c)                    Computational graph