Forward & Backpropagation
In the previous lesson, we learned how activation functions allow neural networks to learn complex, non-linear patterns.
Now we reach the heart of Deep Learning — how a neural network actually learns from mistakes.
This learning process happens through two tightly connected steps: forward propagation and backpropagation.
The Learning Loop in Neural Networks
Every neural network learns by repeating the same loop:
1️⃣ Make a prediction
2️⃣ Measure how wrong the prediction is
3️⃣ Adjust internal parameters to reduce the error
Forward propagation handles step 1, the loss function handles step 2, and backpropagation (together with the weight update) handles step 3.
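To make the loop concrete before we unpack each step, here is a minimal Python sketch using a hypothetical one-weight model (prediction = w * x). The data point, starting weight, and learning rate are made-up values for illustration:

x, y_true = 2.0, 10.0   # one training example: input and target
w = 1.0                 # initial weight (a guess)
learning_rate = 0.1

for step in range(5):
    y_pred = w * x                         # 1. make a prediction (forward pass)
    loss = (y_true - y_pred) ** 2          # 2. measure how wrong it is (squared error)
    gradient = -2 * x * (y_true - y_pred)  # 3a. how the loss changes as w changes
    w = w - learning_rate * gradient       # 3b. adjust w to reduce the error
    print(f"step {step}: w={w:.2f}, loss={loss:.2f}")

Each pass through the loop nudges w toward the value that makes the prediction match the target. The rest of this lesson unpacks the three steps one at a time.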
Forward Propagation (Making Predictions)
Forward propagation is the process where input data moves forward through the network layer by layer.
Each neuron performs two operations:
• Compute a weighted sum of inputs
• Apply an activation function
z = w1*x1 + w2*x2 + ... + wn*xn + b
a = activation(z)
This continues across all layers until the network produces a final output.
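As a sketch of what this looks like in code, here is one forward pass through a tiny two-layer network in Python with NumPy. The layer sizes, weights, biases, and the ReLU activation are arbitrary example choices, not values from the lesson:

import numpy as np

def relu(z):
    return np.maximum(0, z)   # a common activation function

x = np.array([0.5, -1.0, 2.0])           # 3 input features

# Layer 1: 3 inputs -> 2 neurons
W1 = np.array([[0.1, 0.4, -0.2],
               [0.3, -0.1, 0.5]])
b1 = np.array([0.1, -0.2])
a1 = relu(W1 @ x + b1)                   # z = weighted sum, a = activation(z)

# Layer 2 (output): 2 inputs -> 1 neuron
W2 = np.array([[0.7, -0.3]])
b2 = np.array([0.05])
output = W2 @ a1 + b2                    # final prediction (no activation here)
print(output)

Each layer repeats the same two operations: a weighted sum followed by an activation, with the output of one layer becoming the input of the next.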
Real-World Intuition
Imagine predicting house prices.
Input features (size, location, age) flow through multiple decision stages before producing a final predicted price.
Forward propagation is simply the network making its best guess based on current knowledge.
Loss Function – Measuring the Mistake
Once a prediction is made, we need a way to measure how wrong it is.
This is done using a loss function.
The loss function outputs a single number that represents prediction error.
Loss = (Actual Output - Predicted Output)²
A raw difference would be a poor measure on its own, because positive and negative errors could cancel out; squaring (as in squared error, one common choice) keeps every error positive.
Lower loss means better predictions. The goal of training is to minimize this value.
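For example, here is a small Python sketch of mean squared error, one widely used loss function. The prices below are made-up numbers:

def mse_loss(y_true, y_pred):
    # Average of squared differences: one number summarizing the error.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

actual    = [300.0, 450.0, 200.0]   # e.g. true house prices (in $1000s)
predicted = [280.0, 500.0, 210.0]   # the network's current guesses

print(mse_loss(actual, predicted))  # 1000.0 -- lower is better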
Backpropagation (Learning from Errors)
Backpropagation is the process of sending the error backward through the network.
It answers a critical question:
Which weights contributed most to the error?
Using calculus (the chain rule), the network computes gradients, which tell us how much, and in which direction, each weight should change.
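As an illustration, here is the chain rule worked out step by step in Python for a hypothetical single-neuron model with an identity activation and squared loss. All the numbers are illustrative; the point is how the error flows backward, factor by factor:

x, y = 2.0, 10.0
w, b = 3.0, 1.0

# Forward pass
z = w * x + b            # z = 7.0 (weighted sum)
a = z                    # prediction = 7.0 (identity activation)
loss = (y - a) ** 2      # loss = 9.0

# Backward pass (chain rule)
dloss_da = -2 * (y - a)  # how loss changes with the prediction: -6.0
da_dz = 1.0              # derivative of the identity activation
dz_dw = x                # derivative of the weighted sum w.r.t. w
dz_db = 1.0              # derivative of the weighted sum w.r.t. b

dloss_dw = dloss_da * da_dz * dz_dw   # -12.0
dloss_db = dloss_da * da_dz * dz_db   # -6.0
print(dloss_dw, dloss_db)

The negative gradients say that increasing w or b would decrease the loss, which makes sense: the prediction (7.0) is below the target (10.0).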
Gradient Intuition (Without Heavy Math)
A gradient tells us the direction in which loss increases fastest.
To reduce loss, we move in the opposite direction.
This idea powers gradient descent, which we will explore in more depth in later lessons.
new_weight = old_weight - learning_rate * gradient
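Plugging in the gradient from the single-neuron sketch above (dloss_dw = -12.0), one update step looks like this; the learning rate of 0.01 is just an example value:

old_weight = 3.0
gradient = -12.0          # dloss/dw from the backward pass above
learning_rate = 0.01      # an illustrative choice; tuning this matters

new_weight = old_weight - learning_rate * gradient
print(new_weight)         # 3.12: moved opposite the gradient

Because the gradient is negative, subtracting it increases the weight, which in this example moves the prediction closer to the target.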
Why Backpropagation Is So Powerful
Backpropagation allows deep networks with millions of parameters to learn efficiently: a single backward pass computes the gradient for every weight at once.
Without backpropagation, we would have to estimate each weight's effect on the loss separately (for example, by nudging one weight at a time and rerunning the network), which is computationally infeasible at scale.
This single idea is what enabled modern AI breakthroughs.
Mini Practice
Think carefully:
If the learning rate is too large, what might happen during training?
Exercises
Exercise 1:
What is the purpose of forward propagation?
Exercise 2:
Why do we need backpropagation?
Exercise 3:
What role does the loss function play?
Quick Quiz
Q1. Which process moves error backward through the network?
Q2. What happens if gradients are ignored?
In the next lesson, we will dive deeper into loss functions and understand how different problems require different ways of measuring error.