Time Series Lesson 37 – RNNs

Recurrent Neural Networks (RNNs)

Up to now, our forecasting methods have treated time as a set of engineered features. But real sequences have memory.

What happens today depends on what happened yesterday — and the day before. Recurrent Neural Networks were created to learn this dependency directly.


Why Standard Neural Networks Fail for Time Series

Traditional feedforward networks treat every input as independent of the others; they have no built-in notion of order or memory.

In time series, that assumption breaks down.

Examples:

  • Today’s electricity demand depends on yesterday’s demand
  • Stock prices depend on recent price movements
  • User activity depends on previous interactions

We need a model that remembers the past.


The Core Idea Behind RNNs

An RNN processes data step by step. At each step, it combines:

  • The current input value
  • Information from previous time steps

This memory is stored in something called the hidden state.
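
In code form, the new hidden state is a function of the current input and the previous hidden state. Below is a minimal sketch of one such update; the tanh squashing and the specific weight values are illustrative assumptions, not parameters from this lesson.

Python: One Hidden-State Update (Illustrative)
import numpy as np

# Illustrative, hand-picked weights (a trained RNN would learn these)
W_x, W_h, b = 0.5, 0.9, 0.0   # input weight, recurrent weight, bias

def rnn_step(x_t, h_prev):
    # New memory = squashed mix of the current input and the previous memory
    return np.tanh(W_x * x_t + W_h * h_prev + b)

h = 0.0                  # the hidden state starts empty
h = rnn_step(1.2, h)     # updated after the first value
h = rnn_step(0.7, h)     # the second value updates the same memory
print(h)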


RNNs in a Real-World Context

Imagine forecasting daily sales.

Instead of feeding all past values at once, an RNN reads sales day by day, updating its understanding as it goes.

This makes RNNs powerful for:

  • Sequential forecasting
  • Pattern continuation
  • Temporal dependency learning

How an RNN Processes a Sequence

At each time step:

  • The current value enters the network
  • The previous hidden state is reused
  • A new hidden state is produced

This loop continues through time.
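
Here is a minimal sketch of that loop, reusing the illustrative update from above: each step consumes one value together with the memory carried over from the previous step (the sequence and weights below are made up for demonstration).

Python: Stepping Through a Sequence (Illustrative)
import numpy as np

values = [0.1, 0.4, 0.9, 0.3]         # a short made-up sequence
W_x, W_h = 0.5, 0.9                   # illustrative, not learned, weights

h = 0.0                               # hidden state before the first step
for x_t in values:
    h = np.tanh(W_x * x_t + W_h * h)  # previous memory is reused, then replaced
    print(round(h, 3))                # the new hidden state after this step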


Visualizing Sequence Learning

Let’s visualize how an RNN follows a time series pattern.

The plot below shows:

  • Actual time series values
  • RNN-style rolling predictions

Notice how predictions depend on previous values — this is temporal memory in action.
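
The figure itself is not reproduced here, but a comparable chart can be generated with the sketch below. It assumes matplotlib is available and uses a sine wave plus the simple memory update from the code example later in this lesson.

Python: Plotting Actual Values vs RNN-style Predictions (Illustrative)
import numpy as np
import matplotlib.pyplot as plt

# Toy series: a sine wave standing in for real data
series = np.sin(np.linspace(0, 8*np.pi, 100))

# Rolling predictions: each one depends on the memory of everything before it
hidden = 0.0
predictions = []
for value in series[:-1]:
    hidden = 0.8 * hidden + 0.2 * value
    predictions.append(hidden)

plt.plot(series, color="black", label="Actual values")
plt.plot(range(1, len(series)), predictions, color="purple",
         label="RNN-style predictions")
plt.legend()
plt.title("RNN-style rolling predictions follow the series")
plt.show()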


What This Plot Tells You

When reading the chart:

  • The black line is the real time series
  • The purple line shows RNN-style predictions
  • Predictions smoothly follow past behavior

Unlike static models, RNN predictions evolve step by step.


Simple RNN Concept (Python Logic)

Python: RNN-style Sequential Prediction
import numpy as np

# Toy sequence: a smooth sine wave standing in for a real time series
series = np.sin(np.linspace(0, 8*np.pi, 100))
predictions = []

# The hidden state starts empty and is updated at every step
hidden = 0.0
for value in series[:-1]:
    # Blend old memory (0.8) with the new observation (0.2),
    # a simplified stand-in for a learned recurrent update
    hidden = 0.8 * hidden + 0.2 * value
    predictions.append(hidden)  # the updated memory acts as the next-step prediction

print(predictions[:5])

What’s happening conceptually:

  • The hidden state carries past information
  • Each new value updates that memory
  • The model learns continuity over time
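
In a real RNN, the update weights are learned from data rather than fixed by hand. As a point of comparison, here is a minimal sketch using Keras, assuming TensorFlow is installed; the window length, layer size, and training settings are arbitrary illustrative choices, not values from this lesson.

Python: A Learned RNN with Keras (Illustrative)
import numpy as np
import tensorflow as tf

# Slide a window over the series to get (samples, time steps, features), as Keras expects
series = np.sin(np.linspace(0, 8*np.pi, 200)).astype("float32")
window = 10
X = np.array([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.SimpleRNN(16),   # the recurrent memory, with learned weights
    tf.keras.layers.Dense(1),        # one-step-ahead forecast
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
print(model.predict(X[:1], verbose=0))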

Why RNNs Work Well for Time Series

  • They learn temporal dependencies automatically
  • No manual lag-feature engineering is required
  • They handle sequences of varying length

However, they are not perfect.


Limitations of Basic RNNs

  • Struggle with long-term memory
  • Vanishing gradients
  • Training instability

These limitations led to more advanced architectures — which we will study next.


Practice Questions

Q1. Why are RNNs better than feedforward networks for time series?

Because RNNs retain information from previous time steps, capturing temporal dependencies.

Q2. What is the role of the hidden state?

It stores learned information from previous inputs and influences future predictions.

Next lesson: LSTMs — solving long-term memory problems in RNNs.