Regression for Time Series Forecasting (with sklearn)

Let’s explore an example of how to use regression to perform trend analysis with time series data.

Data Loading

As an example time series dataset, let’s consider this dataset of U.S. population over time, from the Federal Reserve Economic Data (FRED).

Fetching the data, going back as far as possible:

from pandas_datareader import get_data_fred

DATASET_NAME = "POPTHM"
df = get_data_fred(DATASET_NAME, start="1900-01-01")
print(len(df))
df
789
POPTHM
DATE
1959-01-01 175818.0
1959-02-01 176044.0
1959-03-01 176274.0
... ...
2024-07-01 337005.0
2024-08-01 337185.0
2024-09-01 337362.0

789 rows × 1 columns

Data Source

Here is some more information about the “POPTHM” dataset:

“Population includes resident population plus armed forces overseas. The monthly estimate is the average of estimates for the first of the month and the first of the following month.”

The data is expressed in “Thousands”, and is “Not Seasonally Adjusted”.

Some light wrangling, such as renaming the column and making sure the index is datetime-aware, will make this data easier to work with:

from pandas import to_datetime

df.rename(columns={DATASET_NAME: "population"}, inplace=True)
df.index.name = "date"
df.index = to_datetime(df.index)
df
population
date
1959-01-01 175818.0
1959-02-01 176044.0
1959-03-01 176274.0
... ...
2024-07-01 337005.0
2024-08-01 337185.0
2024-09-01 337362.0

789 rows × 1 columns

Data Exploration

Exploring trends:

import plotly.express as px

px.scatter(df, y="population", title="US Population (Monthly) vs Trend",
            labels={"population":"US Population (thousands)", "value":""},
            trendline="ols", trendline_color_override="red", height=350,
)

Looks like a possible linear trend. Let’s perform a more formal regression analysis.

Data Encoding

Because we need numeric features to perform a regression, we convert the dates to a linear time step of integers (after sorting the data first for good measure):

df.sort_values(by="date", ascending=True, inplace=True)

df["time_step"] = range(1, len(df) + 1)
df
population time_step
date
1959-01-01 175818.0 1
1959-02-01 176044.0 2
1959-03-01 176274.0 3
... ... ...
2024-07-01 337005.0 787
2024-08-01 337185.0 788
2024-09-01 337362.0 789

789 rows × 2 columns

We will use the numeric time step as our input variable (x), to predict the population (y).

Data Splitting

X/Y Split

Identifying dependent and independent variables:

#x = df[["date"]] # can't use raw dates; the model needs numeric inputs
x = df[["time_step"]]

y = df["population"]

print("X:", x.shape)
print("Y:", y.shape)
X: (789, 1)
Y: (789,)

Train/Test Split

Splitting into training vs testing datasets:

# a random train/test split is less appropriate for time series data:
#from sklearn.model_selection import train_test_split
#
#x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=99)
#print("TRAIN:", x_train.shape, y_train.shape)
#print("TEST:", x_test.shape, y_test.shape)

Splitting data sequentially where earlier data is used in training and recent data is used for testing:

print(len(df))

training_size = round(len(df) * .8)
print(training_size)

x_train = x.iloc[:training_size] # slice all before
y_train = y.iloc[:training_size] # slice all before

x_test = x.iloc[training_size:] # slice all after
y_test = y.iloc[training_size:] # slice all after
print("TRAIN:", x_train.shape)
print("TEST:", x_test.shape)
789
631
TRAIN: (631, 1)
TEST: (158, 1)

Model Selection and Training

Training a linear regression model on the training data:

from sklearn.linear_model import LinearRegression

model = LinearRegression()

model.fit(x_train, y_train)
LinearRegression()

After training, we have access to the learned weights, as well as the line of best fit (i.e. the trend line):

print("COEFS:", model.coef_)
print("INTERCEPT:", model.intercept_)
print("------------------")
print(f"y = {model.coef_[0].round(3)}x + {model.intercept_.round(3)}")
COEFS: [212.8667855]
INTERCEPT: 174578.54744547582
------------------
y = 212.867x + 174578.547

In this case, the slope of the line of best fit tells us how much the population is expected to grow on average per time step (i.e. per month), while the intercept represents the population trend value at the earliest time step.

Note

Remember in this dataset the population is expressed in thousands.
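As a quick sanity check, we can plug a time step into the learned equation ourselves and verify it matches what `model.predict` returns. This sketch re-fits a model on toy data with a known linear trend (the values here are illustrative, not from the dataset above):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# toy data following an exact linear trend: y = 200x + 175000
x = np.arange(1, 101).reshape(-1, 1)
y = 200 * x.ravel() + 175_000

model = LinearRegression()
model.fit(x, y)

# compute a prediction manually from the learned weights:
t = 50
manual = model.coef_[0] * t + model.intercept_
predicted = model.predict([[t]])[0]

print(round(manual, 3))     # 185000.0
print(round(predicted, 3))  # 185000.0
```

Both values agree, confirming the model's predictions are just the learned line evaluated at the given time step.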

Model Prediction and Evaluation

We use the trained model to make predictions on the test set, and then calculate regression metrics to see how well the model is doing:

from sklearn.metrics import mean_squared_error, r2_score

y_pred = model.predict(x_test)

mse = mean_squared_error(y_test, y_pred)
print("MSE:", round(mse, 4))

r2 = r2_score(y_test, y_pred)
print("R2:", round(r2, 4))
MSE: 9206491.9526
R2: 0.8178

An r-squared of around 0.82 means the trend line explains about 82% of the variance in the held-out test period, indicating a reasonably strong linear trend.
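Because the MSE is in squared units, it can be easier to interpret its square root (the RMSE), which is back in the original units of the target. For the test set above, the RMSE is roughly sqrt(9206491.95) ≈ 3034 thousand, meaning the trend line misses by around three million people on average during the test period. A minimal sketch on made-up values:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# toy actual vs predicted values, in thousands (like the population data)
y_true = np.array([300_000.0, 301_000.0, 302_000.0])
y_hat = np.array([299_000.0, 301_500.0, 303_000.0])

mse = mean_squared_error(y_true, y_hat)
rmse = np.sqrt(mse)  # same units as the target (thousands)

print("MSE:", round(mse, 4))   # 750000.0
print("RMSE:", round(rmse, 4)) # 866.0254
```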

Plotting predictions (trend line):

# alternative: work on a copy, so we don't mutate the original df
#df_preds = df.copy()
#df_preds["prediction"] = model.predict(df_preds[["time_step"]])
#df_preds["error"] = df_preds["population"] - df_preds["prediction"]
#
#px.line(df_preds, y=["population", "prediction"], height=350,
#    title="US Population (Monthly) vs Regression Predictions (Trend)",
#    labels={"value":""}
#)
df["prediction"] = model.predict(df[["time_step"]])
df["error"] = df["population"] - df["prediction"]

px.line(df, y=["population", "prediction"], height=350,
    title="US Population (Monthly) vs Regression Predictions (Trend)",
    labels={"value":""}
)

Forecasting

Assembling a dataset of future dates and time steps (which we can use as inputs to the trained model to make predictions about the future):

from pandas import date_range, DateOffset, DataFrame

last_time_step = df['time_step'].iloc[-1]
last_date = df.index[-1]
next_date = last_date + DateOffset(months=1)

FUTURE_MONTHS = 36
# frequency of "M" for end of month, "MS" for beginning of month
future_dates = date_range(start=next_date, periods=FUTURE_MONTHS, freq='MS')
future_time_steps = range(last_time_step + 1, last_time_step + FUTURE_MONTHS + 1)

df_future = DataFrame({'time_step': future_time_steps}, index=future_dates)
df_future.index.name = "date"
df_future
time_step
date
2024-10-01 790
2024-11-01 791
2024-12-01 792
... ...
2027-07-01 823
2027-08-01 824
2027-09-01 825

36 rows × 1 columns

Predicting future values:

df_future["prediction"] = model.predict(df_future[["time_step"]])
df_future
time_step prediction
date
2024-10-01 790 342743.307992
2024-11-01 791 342956.174777
2024-12-01 792 343169.041563
... ... ...
2027-07-01 823 349767.911913
2027-08-01 824 349980.778699
2027-09-01 825 350193.645484

36 rows × 2 columns

Concatenating historical data with future data:

from pandas import concat

chart_df = concat([df, df_future])
chart_df
population time_step prediction error
date
1959-01-01 175818.0 1 174791.414231 1026.585769
1959-02-01 176044.0 2 175004.281016 1039.718984
1959-03-01 176274.0 3 175217.147802 1056.852198
... ... ... ... ...
2027-07-01 NaN 823 349767.911913 NaN
2027-08-01 NaN 824 349980.778699 NaN
2027-09-01 NaN 825 350193.645484 NaN

825 rows × 4 columns

Note

The population and error values for future dates are null because we don't know them yet, although we can still make trend-based predictions for them.
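The NaN values come from how `concat` aligns columns: any column missing from one of the frames is filled with NaN for that frame's rows. A minimal illustration (with made-up values):

```python
from pandas import DataFrame, concat

# historical rows have both columns; future rows only have a prediction
historical = DataFrame({"population": [100.0, 101.0], "prediction": [99.5, 100.5]})
future = DataFrame({"prediction": [101.5, 102.5]}, index=[2, 3])

combined = concat([historical, future])
print(combined["population"].isna().sum())  # 2 (the future rows are null)
```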

Plotting trend vs actual, with future predictions:

px.line(chart_df.iloc[-180:], y=["population", "prediction"], height=350,
    title="US Population (Monthly) vs Regression Predictions (Trend)",
    labels={"value":""}
)