DL Lesson 29 – Build NN from Scratch | Dataplexa

Build a Neural Network from Scratch

So far, we have used high-level frameworks like TensorFlow and PyTorch to build neural networks quickly.

While this is practical, it hides what is really happening inside.

In this lesson, we slow things down and build a neural network step by step to deeply understand how learning happens.


Why Build from Scratch?

Frameworks are powerful, but they abstract away the core logic.

If you do not understand:

• How weights are updated
• How errors flow backward
• Why gradients matter

you will struggle when models fail.

Building from scratch removes the magic.


Neural Network Structure

We start with the simplest possible network:

• One input layer
• One hidden layer
• One output neuron

Each neuron performs two steps:

1. Compute a weighted sum of its inputs
2. Apply an activation function


Step 1: Define Inputs and Weights

Assume we have two input features and one hidden neuron.

import numpy as np

# inputs
X = np.array([0.5, 0.8])

# weights
W = np.array([0.4, 0.7])

# bias
b = 0.2

At this stage, the weights are arbitrary starting values. In a real network they would be initialized randomly; we fix them here so the arithmetic is easy to follow.
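In practice, weights usually start as small random numbers rather than hand-picked values. A minimal sketch, assuming NumPy's default random generator (the seed and scale are arbitrary choices for illustration):

```python
import numpy as np

# Hypothetical random initialization for two input features
rng = np.random.default_rng(seed=42)
W_random = rng.normal(loc=0.0, scale=0.1, size=2)  # small random weights
b_random = 0.0                                     # bias often starts at zero
print(W_random.shape)  # (2,)
```

Seeding the generator makes the "random" starting point reproducible, which is useful when debugging training runs.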


Step 2: Forward Pass

The neuron computes a weighted sum:

z = np.dot(X, W) + b
z

This value alone is not enough. We need non-linearity.


Step 3: Activation Function

We apply a sigmoid activation function to squash the output between 0 and 1.

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

output = sigmoid(z)
output

This is the predicted output of the neuron.
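Steps 1 to 3 can be bundled into a single forward-pass function. A minimal sketch using the values defined earlier:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward(X, W, b):
    # weighted sum of inputs, then squash with sigmoid
    z = np.dot(X, W) + b
    return sigmoid(z)

X = np.array([0.5, 0.8])
W = np.array([0.4, 0.7])
b = 0.2

output = forward(X, W, b)  # sigmoid(0.96) ≈ 0.723
```

Wrapping the computation in a function makes it easy to re-run the forward pass after every weight update.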


Step 4: Loss Calculation

Learning happens by measuring error.

Assume the true label is:

y_true = 1

We use squared error (Mean Squared Error reduced to a single example) for simplicity.

loss = (y_true - output) ** 2
loss

This loss tells us how wrong the prediction is.
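For a single example, the loss is just one squared error. Over a batch, MSE averages the squared errors. A small sketch with made-up predictions:

```python
import numpy as np

# Mean Squared Error over a batch: average of the squared errors
y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.7, 0.2, 0.9])

mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # ≈ 0.0467
```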


Step 5: Why Backpropagation Is Needed

To improve the model, we must adjust weights in the direction that reduces loss.

Backpropagation applies calculus to compute how much each weight contributes to the error.

This is where gradients come from.
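For our single neuron, the gradients can be written out by hand with the chain rule: the loss depends on the output, the output depends on z through the sigmoid, and z depends on the weights and bias. A sketch reusing the values from the steps above:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

X = np.array([0.5, 0.8])
W = np.array([0.4, 0.7])
b = 0.2
y_true = 1

# forward pass
z = np.dot(X, W) + b
output = sigmoid(z)
loss = (y_true - output) ** 2

# backward pass via the chain rule
dloss_doutput = -2 * (y_true - output)  # derivative of (y - a)^2 w.r.t. a
doutput_dz = output * (1 - output)      # sigmoid derivative
dz = dloss_doutput * doutput_dz
dW = dz * X                             # dz/dW = X
db = dz                                 # dz/db = 1
```

Here dz is negative, which tells us the weights should increase to push the output toward the true label of 1.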


Conceptual Gradient Intuition

Think of loss as height on a hill.

Weights are your position.

Gradient tells you:

“Which direction should I move to go downhill fastest?”
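This downhill move is a gradient descent update: subtract the gradient, scaled by a learning rate, from each weight. A sketch for our single neuron (the learning rate of 0.5 and the 20 iterations are assumed values chosen for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

X = np.array([0.5, 0.8])
W = np.array([0.4, 0.7])
b = 0.2
y_true = 1
lr = 0.5  # learning rate (assumed value)

losses = []
for _ in range(20):
    # forward pass
    z = np.dot(X, W) + b
    output = sigmoid(z)
    losses.append((y_true - output) ** 2)
    # chain-rule gradient for this single neuron
    dz = -2 * (y_true - output) * output * (1 - output)
    # step downhill: move weights against the gradient
    W = W - lr * dz * X
    b = b - lr * dz

print(losses[0], losses[-1])  # loss shrinks over iterations
```

Each iteration moves the weights a small step in the direction that reduces the loss, which is exactly what "walking downhill" means.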


Why Frameworks Matter After This

For large networks, computing gradients by hand is impractical.

Frameworks automate this process, but internally, they do exactly what you saw here — just at scale.


Mini Practice

Try changing:

• Input values
• Weights
• Bias

Observe how output and loss change.
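One systematic way to do this is to sweep a single value, such as the bias, and watch how the output and loss respond (the sweep values below are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

X = np.array([0.5, 0.8])
W = np.array([0.4, 0.7])
y_true = 1

# sweep the bias and record (bias, output, loss)
results = []
for b in [-1.0, 0.0, 0.2, 1.0]:
    output = sigmoid(np.dot(X, W) + b)
    loss = (y_true - output) ** 2
    results.append((b, output, loss))
    print(f"b={b:+.1f}  output={output:.3f}  loss={loss:.3f}")
```

Since the true label is 1, a larger bias pushes the output up and the loss down.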


Exercises

Exercise 1:
Why do neural networks need activation functions?

They introduce non-linearity, allowing networks to learn complex patterns.

Exercise 2:
What happens if weights never change?

The model never learns and keeps making the same predictions.

Quick Quiz

Q1. What does the forward pass compute?

Predicted output from inputs, weights, and bias.

Q2. What guides weight updates?

Gradients of the loss function.

In the next lesson, we will verify our gradients using gradient checking and understand why numerical stability matters in deep learning.