Time and Space Complexity
In the previous lesson, we learned what algorithms are and why they form the foundation of computer science.
Now it is time to answer a very important question:
How efficient is an algorithm?
This is where time and space complexity come into play. They help us measure how an algorithm behaves as the input size grows.
Why Efficiency Matters
Imagine two algorithms that both solve the same problem.
One finishes in milliseconds, while the other takes several seconds for large input sizes.
On small data, you may not notice the difference. But in real systems with millions of inputs, a poor algorithm can:
- Slow down applications
- Consume excessive memory
- Increase infrastructure costs
That is why companies care deeply about algorithm efficiency.
What Is Time Complexity?
Time complexity describes how the execution time of an algorithm grows as the size of input increases.
It does not measure actual seconds. Instead, it focuses on the number of operations performed.
We usually express time complexity in terms of input size n.
Simple Example
Consider this loop that prints elements of an array:
arr = [5, 8, 2, 9, 1]
for x in arr:
    print(x)
If the array has 5 elements, the loop runs 5 times.
If the array has 1,000 elements, the loop runs 1,000 times.
So the time taken grows directly with input size.
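One way to make this growth concrete is to count operations explicitly. Here is a minimal sketch (the `count_prints` function and its counter are our own illustration, not part of the lesson's code):

```python
def count_prints(arr):
    """Print every element and return how many print operations ran."""
    operations = 0
    for x in arr:
        print(x)
        operations += 1
    return operations

# The operation count equals the array's length:
# count_prints([5, 8, 2, 9, 1])  -> 5
# count_prints(list(range(1000))) -> 1000
```

Doubling the input doubles the count, which is exactly what "grows directly with input size" means.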
What Is Space Complexity?
Space complexity measures how much extra memory an algorithm uses relative to input size.
This includes:
- Variables
- Arrays
- Recursive call stacks
Memory usage becomes critical in systems with limited resources such as mobile devices or embedded systems.
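To see the difference in practice, compare two hypothetical functions (both names are ours, chosen for illustration). The first uses a single extra variable no matter how large the input is; the second builds a new list whose size grows with the input:

```python
def total_constant_space(arr):
    # One extra variable regardless of len(arr): constant extra memory.
    total = 0
    for x in arr:
        total += x
    return total

def doubled_linear_space(arr):
    # Builds a new list as long as the input: extra memory grows with n.
    return [2 * x for x in arr]
```

Both run in time proportional to the input size, but their extra memory use differs: constant for the first, linear for the second.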
Real-World Analogy
Think of searching for a name in a phone book.
If you scan from the first page to the last, the time taken increases as the book grows.
But the memory you use stays almost the same — just your finger and eyes.
This shows how time and space can behave differently.
Constant vs Growing Time
Some algorithms always take the same amount of time, no matter the input size.
Others grow slowly, moderately, or very fast as input increases.
Understanding this growth behavior helps us choose the right algorithm.
Example: Constant Time
Accessing an element by index:
arr = [10, 20, 30, 40]
print(arr[2])
No matter how large the array is, this operation always takes the same time.
Example: Linear Time
Searching for a value in an unsorted list:
arr = [4, 7, 1, 9, 3]
target = 9
found = False
for x in arr:
    if x == target:
        found = True
        break
In the worst case, the algorithm checks every element.
Why We Analyze Worst Case
Algorithms may behave differently depending on input.
We usually analyze the worst case because it gives a guaranteed upper bound on performance.
If the system is fast enough in the worst case, it will be fast enough for every input, even under heavy load.
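The gap between best and worst case is easy to see if we count comparisons in the linear search above. This sketch wraps the same loop in a hypothetical helper of our own:

```python
def count_comparisons(arr, target):
    """Return how many elements linear search inspects before stopping."""
    comparisons = 0
    for x in arr:
        comparisons += 1
        if x == target:
            break
    return comparisons

arr = [4, 7, 1, 9, 3]
# Best case: the target is the first element, so only 1 comparison is made.
# Worst case: the target is last, or absent, so all 5 elements are checked.
```

Analyzing the worst case means planning for the `len(arr)` comparisons, not the lucky single one.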
Mini Practice
Think about this:
- Does printing the first element depend on array size?
- Does searching through the whole array depend on size?
Exercises
Exercise 1:
What does time complexity measure?
Exercise 2:
What does space complexity measure?
Exercise 3:
Why do we analyze the worst-case scenario?
Quick Quiz
Q1. Is time complexity measured in seconds?
Q2. Can an algorithm have low time complexity but high space usage?
In the next lesson, we will formally introduce Big-O, Big-Theta, and Big-Omega notations to mathematically express algorithm efficiency.