DL Lesson 28 – TensorFlow Fundamentals | Dataplexa

TensorFlow Fundamentals

In the previous lesson, we explored PyTorch and understood why researchers prefer it for flexibility and experimentation.

In this lesson, we shift focus to TensorFlow, one of the most widely used deep learning frameworks in production environments.

TensorFlow is designed not just for training models, but for deploying them at scale.


What Is TensorFlow?

TensorFlow is an open-source deep learning framework developed by Google.

Its core idea is simple: represent computations as graphs and execute them efficiently on CPUs, GPUs, or TPUs.

TensorFlow is heavily used in real-world systems such as recommendation engines, search ranking, and speech recognition.


TensorFlow vs PyTorch – Practical Perspective

PyTorch feels like ordinary Python code. TensorFlow can feel more structured because it is built to compile models into graphs and ship them to production.

This makes TensorFlow slightly harder to learn, but extremely powerful when models need to run at scale.

In practice:

• PyTorch dominates research
• TensorFlow dominates deployment


TensorFlow Tensors

Just like PyTorch, TensorFlow is built around tensors.

A tensor is a multi-dimensional array that flows through the network.

import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])
print(x)

When operations are recorded (for example, inside a tf.GradientTape context), TensorFlow tracks how tensors are transformed, which allows it to compute gradients later.
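A minimal sketch of this gradient tracking, using tf.GradientTape on a single variable:

```python
import tensorflow as tf

# tf.Variable marks x as trainable, so the tape records operations on it
x = tf.Variable(3.0)

with tf.GradientTape() as tape:
    y = x * x  # y = x^2

# dy/dx = 2x, so the gradient at x = 3 is 6
grad = tape.gradient(y, x)
print(grad.numpy())  # 6.0
```

This is the same machinery that Keras uses internally during training.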


Building a Simple Model with TensorFlow

TensorFlow provides low-level APIs as well as high-level ones.

Here, we start with a simple fully connected network using Keras layers inside TensorFlow.

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid")
])

This structure defines how data flows forward through the network.
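To see the forward pass in action, we can send a random batch through an identically shaped model (a sketch; the input values are arbitrary):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),  # 10 features per sample
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

batch = np.random.rand(4, 10).astype("float32")  # batch of 4 samples
out = model(batch)
print(out.shape)  # (4, 1): one sigmoid probability per sample
```

The sigmoid output layer squashes each prediction into the range (0, 1), which is why this architecture suits binary classification.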


Compiling the Model

Before training with fit(), a Keras model must be compiled.

This step connects:

• Loss function
• Optimizer
• Evaluation metrics

model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=["accuracy"]
)

Compilation prepares the model for efficient execution.


Training the Model

Training in TensorFlow is handled using the fit() method.

model.fit(X_train, y_train, epochs=10, batch_size=32)

TensorFlow handles:

• Forward pass
• Loss calculation
• Backpropagation
• Weight updates

This abstraction makes TensorFlow convenient to build into production pipelines.
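The fit() call above assumes X_train and y_train already exist. Here is a minimal end-to-end sketch using synthetic data (the data, shapes, and names are illustrative only):

```python
import numpy as np
import tensorflow as tf

# Synthetic binary-classification data: 200 samples, 10 features each
rng = np.random.default_rng(0)
X_train = rng.random((200, 10)).astype("float32")
y_train = (X_train.sum(axis=1) > 5.0).astype("float32")  # simple rule to learn

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# fit() runs forward pass, loss, backprop, and weight updates for every batch
history = model.fit(X_train, y_train, epochs=10, batch_size=32, verbose=0)
print(f"final loss: {history.history['loss'][-1]:.4f}")
```

The returned history object records loss and metrics per epoch, which is useful for plotting learning curves.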


Why TensorFlow Is Popular in Industry

TensorFlow integrates easily with:

• Web applications
• Mobile apps
• Cloud platforms

It supports exporting models to multiple formats, making deployment smoother.
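For example, a model can be written to the SavedModel format, which TensorFlow Serving and the TensorFlow Lite converter both consume (a sketch; the directory name is arbitrary):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

# SavedModel: a language-independent directory holding the graph and weights
tf.saved_model.save(model, "exported_model")

# The directory can later be reloaded without the original Python code
reloaded = tf.saved_model.load("exported_model")
```

Because the exported directory contains the computation graph itself, a serving system can run it without any of the training code.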


TensorFlow Execution Model

With tf.function, TensorFlow traces Python code into optimized computation graphs.

These graphs can be saved, shared, and executed independently of Python.

This is a key reason TensorFlow scales well in real-world systems.
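A small sketch of graph tracing: decorating a Python function with tf.function compiles it into a graph the first time it runs.

```python
import tensorflow as tf

@tf.function  # traces this Python function into a TensorFlow graph
def squared_sum(a, b):
    return a * a + b * b

out = squared_sum(tf.constant(2.0), tf.constant(3.0))
print(out.numpy())  # 13.0
```

After tracing, repeated calls with the same input signature reuse the compiled graph instead of re-executing the Python code line by line.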


Mini Practice

Think about this:

Why would a company prefer TensorFlow over PyTorch for a large-scale deployed system?


Exercises

Exercise 1:
What is the purpose of model.compile()?

It configures the model with loss function, optimizer, and metrics.

Exercise 2:
Why does TensorFlow focus on graphs?

Graphs allow optimized execution and easier deployment across platforms.

Quick Quiz

Q1. Which framework is more deployment-focused?

TensorFlow.

Q2. Which method trains a TensorFlow model?

model.fit()

In the next lesson, we will build a neural network from scratch and understand how forward and backward passes work internally.