Concurrency Basics in Scala
In this lesson, you will learn the fundamentals of concurrency in Scala. Concurrency allows a program to perform multiple tasks at the same time, making applications faster, more responsive, and more scalable.
Modern applications such as web servers, data pipelines, and distributed systems depend heavily on concurrent execution. Scala provides powerful tools to handle concurrency safely and efficiently.
What Is Concurrency?
Concurrency means executing multiple tasks during overlapping periods of time. These tasks may run:
- On multiple CPU cores
- On a single core using time-sharing
- Independently or in coordination
Concurrency is different from parallelism. Concurrency is about structuring a program so that multiple tasks make progress during overlapping time periods; parallelism means tasks literally execute at the same instant, for example on separate CPU cores. A concurrent program can run on a single core through time-sharing, and parallelism is one way of executing concurrent tasks.
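How much true parallelism is possible depends on how many cores the JVM can see. As a quick check, the standard Runtime API reports this:

```scala
// Number of processors available to the JVM. Parallelism cannot
// exceed this number, but concurrency can interleave any number of tasks.
val cores = Runtime.getRuntime.availableProcessors()
println(s"Available processors: $cores")
```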
Why Concurrency Matters
Concurrency helps applications:
- Handle multiple users at once
- Improve performance on multi-core systems
- Remain responsive during long-running operations
- Scale effectively in cloud environments
Scala is designed with concurrency in mind and provides safe abstractions to avoid common problems.
Processes vs Threads
Before diving into Scala concurrency, it’s important to understand the difference between processes and threads.
- Process – An independent program with its own memory space
- Thread – A lightweight unit of execution within a process
Threads within the same process share memory, which makes communication faster but more error-prone.
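A minimal sketch of shared memory between threads: a child thread writes a variable, and the main thread reads it after joining (joining the writer guarantees its write is visible):

```scala
var message = ""  // mutable state shared by all threads in this process

val writer = new Thread(() => {
  message = "hello from the writer thread"
})
writer.start()
writer.join()  // wait for the writer; join also makes its write visible here

println(message)
```

Without the join (or other synchronization), the main thread might read the variable before the writer has updated it; this is exactly the kind of error the next section describes.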
Concurrency Challenges
Concurrency introduces several challenges:
- Race conditions
- Deadlocks
- Data inconsistency
- Difficult debugging
Scala’s concurrency tools are designed to reduce these risks.
Creating Threads in Scala
Scala runs on the JVM, so it can use Java threads directly.
val thread = new Thread(() => {
  println("Running in a separate thread")
})
thread.start()
This creates and starts a new thread that runs independently from the main program.
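If the main program needs to wait for a thread to finish before continuing, it can call join(). A minimal sketch:

```scala
val worker = new Thread(() => {
  println("Running in a separate thread")
})
worker.start()
worker.join()  // block the current thread until worker has finished
println("Worker has finished")
```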
Main Thread vs Child Threads
The main program runs on the main thread. When you start a new thread, it runs concurrently.
println("Main thread started")
new Thread(() => {
  println("Child thread running")
}).start()
println("Main thread finished")
The order of output is not guaranteed, because threads execute independently.
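When a fixed order is required, the main thread can join the child before printing its final message; a sketch:

```scala
println("Main thread started")
val child = new Thread(() => println("Child thread running"))
child.start()
child.join()  // guarantees the child's output appears before the next line
println("Main thread finished")
```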
Thread Scheduling
Thread execution order is controlled by the operating system and JVM. You should never assume a fixed execution order.
This unpredictability is why concurrency requires careful design.
Shared Memory and Race Conditions
When multiple threads access the same variable, race conditions can occur.
var counter = 0
val t1 = new Thread(() => counter += 1)
val t2 = new Thread(() => counter += 1)
t1.start()
t2.start()
The final value of counter may be less than 2, because counter += 1 is a non-atomic read-modify-write: both threads can read the same old value and overwrite each other's update.
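With only two increments the race rarely shows up in practice. A sketch that makes the lost updates visible by running many unsynchronized increments from two threads (the exact final value varies from run to run):

```scala
var unsafeCounter = 0  // shared, unprotected mutable state

val incA = new Thread(() => for (_ <- 1 to 100000) unsafeCounter += 1)
val incB = new Thread(() => for (_ <- 1 to 100000) unsafeCounter += 1)
incA.start(); incB.start()
incA.join(); incB.join()

// Often prints a value below 200000, because += is a
// non-atomic read-modify-write on shared memory.
println(unsafeCounter)
```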
Synchronization Basics
To avoid race conditions, shared resources must be synchronized.
val lock = new Object
var counter = 0
def increment(): Unit = lock.synchronized {
  counter += 1
}
Synchronization ensures that only one thread accesses the critical section at a time.
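A sketch showing the synchronized increment used from two threads; because every update goes through the same lock, the final count is deterministic:

```scala
val lock = new Object
var counter = 0

def increment(): Unit = lock.synchronized {
  counter += 1
}

// Two distinct threads, each performing 100000 locked increments.
val workers = Seq.fill(2)(new Thread(() => for (_ <- 1 to 100000) increment()))
workers.foreach(_.start())
workers.foreach(_.join())

println(counter) // always 200000: only one thread is in the critical section at a time
```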
Why Threads Are Hard to Manage
While threads are powerful, they can be difficult to manage directly:
- Manual synchronization is error-prone
- Hard to scale with many threads
- Difficult to reason about execution flow
Scala provides higher-level concurrency abstractions to solve these problems.
Scala’s Concurrency Philosophy
Scala encourages:
- Immutability
- Message passing
- Asynchronous computation
Later lessons will cover:
- Futures and Promises
- Execution Contexts
- Actors
📝 Practice Exercises
Exercise 1
Create a thread that prints a message five times.
Exercise 2
Demonstrate a race condition using a shared variable.
Exercise 3
Protect a shared variable using synchronization.
✅ Practice Answers
Answer 1
new Thread(() => {
  for (_ <- 1 to 5)
    println("Hello from thread")
}).start()
Answer 2
var count = 0
val t1 = new Thread(() => for (_ <- 1 to 100000) count += 1)
val t2 = new Thread(() => for (_ <- 1 to 100000) count += 1)
t1.start(); t2.start()
t1.join(); t2.join()
println(count) // frequently less than 200000 because += is not atomic
Answer 3
val lock = new Object
var count = 0
val threads = Seq.fill(2)(new Thread(() => lock.synchronized { count += 1 }))
threads.foreach(_.start())
threads.foreach(_.join())
println(count) // always 2: the lock serializes the updates
What’s Next?
In the next lesson, you will learn about Futures, Scala’s primary abstraction for asynchronous computation.