Algorithms Lesson 5 – Recursion Fundamentals | Dataplexa

Recursion Fundamentals

In the previous lesson, we learned how to mathematically analyze loops and nested logic using formal techniques.

Now we move into one of the most powerful — and sometimes confusing — concepts in algorithms: recursion.

Recursion allows a function to solve a problem by calling itself, breaking a large problem into smaller versions of the same problem.


What Is Recursion?

Recursion is a technique where a function calls itself to solve a smaller subproblem.

Instead of writing repeated loops, we let the function repeat itself until a stopping condition is reached.

Every recursive algorithm has two mandatory parts:

  • A base case (stopping condition)
  • A recursive case (function calling itself)

Without a base case, the recursion never stops: each call adds a new stack frame until the call stack runs out of room and the program crashes (in Python, with a RecursionError).
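Both parts can be seen in even the smallest recursive function. Here is a minimal sketch (sum_to is an illustrative name, not part of the lesson's examples):

```python
def sum_to(n):
    if n == 0:                    # base case: stopping condition
        return 0
    return n + sum_to(n - 1)      # recursive case: smaller subproblem

print(sum_to(4))  # prints 10
```

The base case handles the smallest input directly; the recursive case reduces every other input toward it.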


Why Recursion Is Important in Algorithms

Many classical algorithms are naturally recursive.

Examples include:

  • Divide and Conquer algorithms
  • Tree traversals
  • Graph algorithms
  • Dynamic Programming

Understanding recursion early makes advanced algorithms much easier to learn.


Simple Recursion Example

Let us start with a very simple example: printing numbers from n to 1.

def print_numbers(n):
    if n == 0:                # base case: nothing left to print
        return
    print(n)                  # work for the current value
    print_numbers(n - 1)      # recursive case: shrink the problem

print_numbers(5)

Here is what happens step by step:

The function keeps calling itself with a smaller value of n until n == 0.

When the base case is reached, the recursion stops.


Understanding the Call Stack

Every recursive call is stored in memory using a structure called the call stack.

Each function call waits for the next call to finish.

For print_numbers(5), the stack looks like this:

5 → 4 → 3 → 2 → 1 → 0

Once the base case is reached, the calls begin to return and the stack unwinds from the deepest frame back to the first.
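The growth and unwinding of the stack can be made visible by indenting each level of the call. This is a hypothetical traced variant (print_numbers_traced is an illustrative name):

```python
def print_numbers_traced(n, depth=0):
    indent = "  " * depth
    if n == 0:
        print(indent + "base case reached, returning")
        return
    print(indent + f"call print_numbers({n})")
    print_numbers_traced(n - 1, depth + 1)   # one new stack frame per call
    print(indent + f"return from print_numbers({n})")

print_numbers_traced(3)
```

The lines printed before the recursive call show the stack growing; the lines printed after it show the stack unwinding once the base case returns.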


Recursive vs Iterative Thinking

Consider the same task done using a loop:

for i in range(5, 0, -1):
    print(i)

Both approaches work.

Recursion focuses on problem definition, while loops focus on execution steps.

Some problems are easier to think about recursively, even if they can be solved iteratively.


Classic Example: Factorial

Factorial is one of the most common recursion examples.

Mathematically:

n! = n × (n − 1)!, with the base case 0! = 1! = 1

def factorial(n):
    if n <= 1:                    # base case: 0! and 1! are both 1
        return 1
    return n * factorial(n - 1)   # recursive case

print(factorial(5))

Each call reduces the problem size until the base case is reached.


Time Complexity of Recursive Algorithms

To analyze recursion, we count how many times the function is called.

For factorial:

T(n) = T(n − 1) + O(1)

This resolves to:

O(n)
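The recurrence can be checked empirically by counting how many times the function is actually invoked. This is a sketch using an instrumented factorial (factorial_counted is an illustrative name):

```python
def factorial_counted(n, counter):
    counter[0] += 1               # one unit of O(1) work per call
    if n <= 1:                    # base case
        return 1
    return n * factorial_counted(n - 1, counter)

for n in (1, 5, 10):
    counter = [0]
    factorial_counted(n, counter)
    print(f"n = {n}: {counter[0]} calls")
```

For every n ≥ 1 the function is called exactly n times, each doing constant work, which matches T(n) = T(n − 1) + O(1) = O(n).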

We will study more complex recursive analysis using the Master Theorem later.


Common Mistakes in Recursion

Beginners often make these mistakes:

  • Forgetting the base case
  • Incorrect base condition
  • Too deep recursion causing stack overflow

Always test recursion with small input first.
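A concrete instance of the second mistake, an incorrect base condition: a factorial whose base case only checks n == 1 works for positive inputs but never terminates for 0 (factorial_wrong and factorial_safe are illustrative names):

```python
def factorial_wrong(n):
    if n == 1:        # misses n == 0: factorial_wrong(0) recurses through
        return 1      # -1, -2, ... until Python raises RecursionError
    return n * factorial_wrong(n - 1)

def factorial_safe(n):
    if n <= 1:        # covers both 0 and 1
        return 1
    return n * factorial_safe(n - 1)

print(factorial_safe(0))  # prints 1
```

Testing with the smallest inputs (0 and 1) is exactly what exposes this kind of off-by-one base condition.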


Real-World Analogy

Think of opening nested boxes.

Each box contains a smaller box inside, until the smallest box has nothing inside.

That smallest box is the base case.


Mini Practice

Think carefully:

  • What happens if the base case is never reached?
  • Why is recursion dangerous for very large inputs?
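Both questions can be explored experimentally. CPython caps recursion depth (commonly around 1000 frames), so an unreachable base case or a very deep call chain raises RecursionError rather than looping forever. A small sketch, assuming the default limit:

```python
import sys

def countdown(n):
    if n == 0:          # base case never reached if n starts negative
        return
    countdown(n - 1)

countdown(100)          # small input: returns normally

try:
    countdown(sys.getrecursionlimit() + 100)   # deeper than the stack allows
except RecursionError:
    print("RecursionError: the call stack ran out of room")
```

This is why very large inputs are dangerous for recursion even when the logic is correct: each pending call occupies a stack frame until it returns.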

Exercises

Exercise 1:
What are the two mandatory components of recursion?

A base case and a recursive case.

Exercise 2:
What happens if recursion has no base case?

It leads to infinite recursion: the call stack eventually overflows and the program crashes.

Exercise 3:
What memory structure stores recursive calls?

The call stack.

Quick Quiz

Q1. What does recursion simplify?

Breaking large problems into smaller subproblems.

Q2. Is recursion always faster than loops?

No. Recursion may add overhead due to function calls.

In the next lesson, we will use recursion to understand basic sorting algorithms and see why recursion fits naturally into divide-and-conquer strategies.