DL Lesson 27 – PyTorch Basics | Dataplexa

PyTorch Basics

In the previous lesson, we learned how to build neural networks using the Keras Sequential API.

Keras focuses on simplicity and speed. PyTorch, on the other hand, focuses on flexibility, control, and research-grade modeling.

Many cutting-edge deep learning models today are built using PyTorch.


What Is PyTorch?

PyTorch is an open-source deep learning framework developed by Meta AI (formerly Facebook AI Research).

Unlike Keras, which hides many low-level details, PyTorch exposes the internal mechanics of neural networks.

This makes PyTorch especially popular in research, experimentation, and advanced architectures.


Dynamic Computation Graphs (Key Difference)

The most important difference between PyTorch and Keras is how they handle computation graphs.

PyTorch builds computation graphs dynamically, meaning the graph is created as the code runs.

This feels very similar to writing normal Python code, which makes debugging and experimentation much easier.
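To see what "the graph is created as the code runs" means in practice, here is a minimal sketch: ordinary Python control flow inside a function decides which operations are traced, so each call can build a different graph. The function name and the threshold of 0 are arbitrary choices for illustration.

```python
import torch

def dynamic_step(x):
    # The branch taken depends on the runtime value of x,
    # so a different computation graph is built on each call.
    if x.sum() > 0:
        return x * 2
    return x - 1

print(dynamic_step(torch.tensor([1.0, 2.0])))    # takes the first branch
print(dynamic_step(torch.tensor([-1.0, -2.0])))  # takes the second branch
```

A static-graph framework would require special graph-level operations to express this kind of branching; in PyTorch it is just an `if` statement.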


Tensors – The Core Building Block

In PyTorch, everything starts with a tensor.

A tensor is similar to a NumPy array, but with additional capabilities like GPU acceleration and automatic differentiation.

import torch

x = torch.tensor([1.0, 2.0, 3.0])
print(x)

Tensors can represent:

• Input data
• Model weights
• Intermediate activations
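The two capabilities that set tensors apart from NumPy arrays can be seen directly. A small sketch of automatic differentiation (the function y is chosen arbitrarily for illustration):

```python
import torch

# requires_grad=True tells PyTorch to record operations on x
# so gradients can be computed later.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()   # y = 1 + 4 + 9 = 14

y.backward()         # compute dy/dx automatically
print(x.grad)        # dy/dx = 2x -> tensor([2., 4., 6.])
```

GPU acceleration works the same way: calling `x.to("cuda")` moves the tensor to a GPU (when one is available), and all subsequent operations run there.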


Creating a Simple Neural Network in PyTorch

Unlike the Keras Sequential API, PyTorch models are not typically assembled from a predefined stack of layers.

Instead, you explicitly define the structure by writing a class that subclasses nn.Module.

import torch.nn as nn

class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 16)
        self.fc2 = nn.Linear(16, 1)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.sigmoid(self.fc2(x))
        return x

Here, you manually define how data flows through the network.

This gives you full control over every operation.


Instantiating the Model

Once the model class is defined, you create an instance.

model = SimpleNet()

At this point, the model has structure, but it has not learned anything yet.
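Even before training, the model can already run a forward pass using its randomly initialized weights. A minimal sketch with a dummy batch (the batch size of 4 is an arbitrary choice; SimpleNet is repeated here so the snippet is self-contained):

```python
import torch
import torch.nn as nn

class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 16)
        self.fc2 = nn.Linear(16, 1)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        return torch.sigmoid(self.fc2(x))

model = SimpleNet()
dummy = torch.randn(4, 10)   # batch of 4 samples, 10 features each
out = model(dummy)           # calls forward() under the hood
print(out.shape)             # torch.Size([4, 1])
```

The outputs are between 0 and 1 (because of the sigmoid), but they are meaningless until the weights are trained.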


Loss Function and Optimizer

Just like Keras, PyTorch requires a loss function and an optimizer.

criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

The loss function tells the model how wrong its predictions are.

The optimizer tells the model how to update its weights.


The Training Loop (Important Concept)

In PyTorch, you explicitly write the training loop.

This is where PyTorch becomes more powerful — and more demanding.

optimizer.zero_grad()               # clear gradients from the previous step

outputs = model(X_train)            # forward pass
loss = criterion(outputs, y_train)  # loss calculation

loss.backward()                     # compute gradients
optimizer.step()                    # update weights

Each step is explicit:

• Forward pass
• Loss calculation
• Gradient computation
• Weight update
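Putting the pieces together, the steps above can be wrapped in an epoch loop. This is a self-contained sketch using synthetic data (the random features, labels, epoch count, and seed are all arbitrary choices for illustration):

```python
import torch
import torch.nn as nn

class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 16)
        self.fc2 = nn.Linear(16, 1)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        return torch.sigmoid(self.fc2(x))

torch.manual_seed(0)
X_train = torch.randn(64, 10)                    # synthetic features
y_train = torch.randint(0, 2, (64, 1)).float()   # synthetic binary labels

model = SimpleNet()
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(5):
    optimizer.zero_grad()                # clear old gradients
    outputs = model(X_train)             # forward pass
    loss = criterion(outputs, y_train)   # loss calculation
    loss.backward()                      # gradient computation
    optimizer.step()                     # weight update
    print(f"epoch {epoch}: loss = {loss.item():.4f}")
```

Everything Keras hides inside `model.fit()` is spelled out here, which is exactly what makes custom behavior (logging, gradient clipping, unusual update rules) easy to add.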


Why Researchers Prefer PyTorch

Because you control everything, PyTorch allows:

• Custom architectures
• Dynamic behavior
• Advanced experimentation

This is why many state-of-the-art models, including most modern transformer implementations, are developed in PyTorch.


Keras vs PyTorch – Intuition

Think of it this way:

Keras is like driving an automatic car.

PyTorch is like driving a manual car — more effort, but more control.


Mini Practice

Answer this:

Why do you think PyTorch requires a manual training loop?


Exercises

Exercise 1:
What is the biggest advantage of PyTorch’s dynamic graphs?

They allow flexible model behavior and easier debugging during execution.

Exercise 2:
Why must we define a forward() method?

Because it explicitly defines how input data flows through the network.

Quick Quiz

Q1. Which framework gives more control over training?

PyTorch.

Q2. What does backward() compute?

Gradients of the loss with respect to model parameters.

In the next lesson, we will move deeper into TensorFlow fundamentals and see how it compares with both Keras and PyTorch.