Algorithms Lesson 22 – DP Introduction | Dataplexa

Introduction to Dynamic Programming

In the previous lesson, we studied Dijkstra’s Algorithm and learned how greedy strategies can efficiently solve shortest path problems.

However, not all problems can be solved by making a greedy choice at each step.

In this lesson, we introduce one of the most powerful ideas in algorithms — Dynamic Programming (DP).


What Is Dynamic Programming?

Dynamic Programming is a technique used to solve complex problems by breaking them down into smaller overlapping subproblems.

Instead of solving the same subproblem again and again, we store its result and reuse it when needed.

This simple idea can reduce time complexity from exponential to polynomial.
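To illustrate just the "store and reuse" half of that idea, here is a minimal Python sketch (Python is assumed here, since the lesson itself shows no code; the triangular() helper is a made-up example):

    cache = {}

    def triangular(n):
        # Sum 1 + 2 + ... + n, but compute it only once per distinct n.
        if n not in cache:
            cache[n] = sum(range(1, n + 1))   # solved once and stored
        return cache[n]                       # reused on every later request

    print(triangular(1000))   # computed and stored: 500500
    print(triangular(1000))   # answered straight from the cache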


When Do We Use Dynamic Programming?

Dynamic Programming is useful when a problem has two key properties:

1. Optimal Substructure
The optimal solution of the problem can be built from optimal solutions of its subproblems.

2. Overlapping Subproblems
The same subproblems appear repeatedly during computation.

If both conditions are present, Dynamic Programming is usually a good fit.
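To make both properties concrete, consider the Fibonacci recurrence (the same example we return to later in this lesson):

    fib(n) = fib(n - 1) + fib(n - 2)

Optimal substructure: the answer for n is built directly from the answers to the smaller subproblems n - 1 and n - 2. Overlapping subproblems: expanding fib(n - 1) also needs fib(n - 2), so the same subproblem shows up in more than one branch of the computation.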


Why Normal Recursion Is Not Enough

Consider a plain recursive solution that breaks the problem into subproblems and explores every possibility.

While recursion is elegant, it often recomputes the same subproblems many times.

For large inputs, this recomputation can make the running time grow exponentially.

Dynamic Programming fixes this by remembering previous results.
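To see the repetition concretely, here is a small Python sketch (an illustration written for this point, not code from the lesson itself) that counts how often each Fibonacci subproblem is solved by plain recursion:

    from collections import Counter

    calls = Counter()

    def fib(n):
        calls[n] += 1          # record that this subproblem was solved (again)
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    fib(20)
    print(calls[5])   # 987: fib(5) alone is recomputed hundreds of times

A single request for fib(20) solves fib(5) nearly a thousand times; Dynamic Programming would solve it once.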


Real-World Analogy

Imagine you are studying for exams and solving practice problems.

If you forget every solution and solve the same question from scratch each time, you waste effort.

Instead, you keep notes and refer back to them.

Dynamic Programming works the same way — it remembers answers.


Two Main Approaches in DP

There are two common ways to implement Dynamic Programming.

Memoization (Top-Down)
This approach uses recursion and stores results as they are computed.

Tabulation (Bottom-Up)
This approach builds solutions from the smallest subproblems using iteration.

We will study both approaches in detail in upcoming lessons.
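As a preview, here is a minimal Python sketch of the top-down style, using the Fibonacci numbers (the example discussed in the next section); a matching bottom-up sketch follows that example:

    memo = {}

    def fib_memo(n):
        if n < 2:
            return n
        if n not in memo:        # solve each subproblem only once
            memo[n] = fib_memo(n - 1) + fib_memo(n - 2)
        return memo[n]           # afterwards, reuse the stored answer

    print(fib_memo(50))   # 12586269025, reached with only about 50 stored subproblems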


A Simple Example Idea

Suppose you want to compute the nth Fibonacci number.

A naive recursive approach recalculates the same Fibonacci values many times.

Dynamic Programming avoids this by storing intermediate results and reusing them.
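Here is the bottom-up (tabulation) counterpart of the memoized sketch above; again, this is a minimal Python illustration rather than code from the lesson:

    def fib_tab(n):
        if n < 2:
            return n
        table = [0] * (n + 1)        # table[i] will hold the i-th Fibonacci number
        table[1] = 1
        for i in range(2, n + 1):    # build up from the smallest subproblems
            table[i] = table[i - 1] + table[i - 2]
        return table[n]

    print(fib_tab(50))   # 12586269025, the same answer with no recursion at all

Both versions solve each subproblem exactly once, so the work grows linearly with n instead of exponentially.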


Why DP Is So Important

Many famous problems rely on DP, such as:

• Knapsack problems
• Longest Common Subsequence
• Shortest paths in DAGs
• Coin change problems

Once you understand DP, a wide class of problems becomes much easier to solve.
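As a taste of one of these, here is a hedged Python sketch of a coin change variant (fewest coins needed to reach a given amount); the coin values and target amount are made up for the illustration:

    def min_coins(coins, amount):
        # best[a] = fewest coins needed to make amount a, or None if impossible
        best = [0] + [None] * amount
        for a in range(1, amount + 1):
            options = [best[a - c] for c in coins
                       if c <= a and best[a - c] is not None]
            best[a] = min(options) + 1 if options else None
        return best[amount]

    print(min_coins([1, 3, 4], 6))   # 2, since 6 = 3 + 3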


Mini Practice

Think carefully:

  • Why does storing previous results save time?
  • Can a problem have optimal substructure but no overlapping subproblems?

Exercises

Exercise 1:
What are the two main properties required for Dynamic Programming?

Optimal substructure and overlapping subproblems.

Exercise 2:
Why is naive recursion inefficient for DP problems?

Because it recomputes the same subproblems repeatedly.

Exercise 3:
Which DP approach builds solutions iteratively?

Tabulation (bottom-up approach).

Quick Quiz

Q1. What does DP stand for?

Dynamic Programming.

Q2. What problem characteristic allows reuse of solutions?

Overlapping subproblems.

Dynamic Programming is a mindset, not just a technique.

In the next lesson, we will compare Memoization vs Tabulation and learn when to use each approach.