9  Regression for Time Series Forecasting (with sklearn)

Let’s explore an example of how to use regression to perform trend analysis with time series data.

9.1 Data Loading

As an example time series dataset, let’s consider this dataset of U.S. population over time, from the Federal Reserve Economic Data (FRED).

Fetching the data, going back as far as possible:

from pandas_datareader import get_data_fred

DATASET_NAME = "POPTHM"
df = get_data_fred(DATASET_NAME, start="1900-01-01")
print(len(df))
df
796
POPTHM
DATE
1959-01-01 175818
1959-02-01 176044
1959-03-01 176274
... ...
2025-02-01 341588
2025-03-01 341729
2025-04-01 341874

796 rows × 1 columns

Data Source

Here is some more information about the “POPTHM” dataset:

“Population includes resident population plus armed forces overseas. The monthly estimate is the average of estimates for the first of the month and the first of the following month.”

The data is expressed in “Thousands”, and is “Not Seasonally Adjusted”.

Wrangling the data, including renaming the column and converting the date index to be datetime-aware, will make it easier for us to work with:

from pandas import to_datetime

df.rename(columns={DATASET_NAME: "population"}, inplace=True)
df.index.name = "date"
df.index = to_datetime(df.index)
df
population
date
1959-01-01 175818
1959-02-01 176044
1959-03-01 176274
... ...
2025-02-01 341588
2025-03-01 341729
2025-04-01 341874

796 rows × 1 columns

9.2 Data Exploration

Exploring trends:

import plotly.express as px

px.scatter(df, y="population", title="US Population (Monthly) vs Trend",
            labels={"population":"US Population (thousands)", "value":""},
            trendline="ols", trendline_color_override="red", height=350,
)

Looks like a possible linear trend. Let’s perform a more formal regression analysis.

9.3 Data Encoding

Because we need numeric features to perform a regression, we convert the dates to a linear time step of integers (after sorting the data first for good measure):

df.sort_values(by="date", ascending=True, inplace=True)

df["time_step"] = range(1, len(df) + 1)
df
            population  time_step
date
1959-01-01      175818          1
1959-02-01      176044          2
1959-03-01      176274          3
...                ...        ...
2025-02-01      341588        794
2025-03-01      341729        795
2025-04-01      341874        796

796 rows × 2 columns

We will use the numeric time step as our input variable (x), to predict the population (y).

9.4 Data Splitting

9.4.1 X/Y Split

Identifying dependent and independent variables:

#x = df.index # we can't use raw dates directly; the model needs numeric features
x = df[["time_step"]]

y = df["population"]

print("X:", x.shape)
print("Y:", y.shape)
X: (796, 1)
Y: (796,)

9.4.2 Train/Test Split

Splitting data sequentially, where earlier data is used in training, and recent data is used for testing:

print(len(df))

training_size = round(len(df) * .8)
print(training_size)

x_train = x.iloc[:training_size] # slice all before
y_train = y.iloc[:training_size] # slice all before

x_test = x.iloc[training_size:] # slice all after
y_test = y.iloc[training_size:] # slice all after
print("TRAIN:", x_train.shape)
print("TEST:", x_test.shape)
796
637
TRAIN: (637, 1)
TEST: (159, 1)
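
As an aside, the same kind of sequential split can be expressed with sklearn's train_test_split helper by turning off shuffling. This is just an alternative sketch, not part of the workflow above; the "_alt" variable names are only used here to avoid overwriting the splits we just made, and the resulting sizes may differ from the manual slicing by a row due to rounding:

from sklearn.model_selection import train_test_split

# shuffle=False keeps the rows in chronological order (no random shuffling),
# so the earliest ~80% is used for training and the most recent ~20% for testing
x_train_alt, x_test_alt, y_train_alt, y_test_alt = train_test_split(
    x, y, test_size=0.2, shuffle=False
)

print("TRAIN:", x_train_alt.shape)
print("TEST:", x_test_alt.shape)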

9.5 Model Selection and Training

Training a linear regression model on the training data:

from sklearn.linear_model import LinearRegression

model = LinearRegression()

model.fit(x_train, y_train)
LinearRegression()

After training, we have access to the learned weights, as well as the line of best fit (i.e. the trend line):

print("COEFS:", model.coef_)
print("INTERCEPT:", model.intercept_)
print("------------------")
print(f"y = {model.coef_[0].round(3)}x + {model.intercept_.round(3)}")
COEFS: [213.17440418]
INTERCEPT: 174513.3870442226
------------------
y = 213.174x + 174513.387

In this case, the coefficient (slope) tells us how much the population is expected to grow, on average, per time step, while the intercept is the trend value at time step zero (i.e. one step before the earliest observation).

Note

Remember in this dataset the population is expressed in thousands.
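
For example, we can translate the learned slope into an estimate of average annual growth. This is a quick sketch for interpretation only; the monthly_growth and annual_growth names are just illustrative:

# the slope is in thousands of people per time step, and one time step = one month
monthly_growth = model.coef_[0]
annual_growth = monthly_growth * 12

print(f"Monthly growth: ~{monthly_growth:,.0f} thousand people")
print(f"Annual growth: ~{annual_growth:,.0f} thousand (~{annual_growth * 1000:,.0f} people per year)")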

9.6 Model Prediction and Evaluation

We use the trained model to make predictions on the test set, and then calculate regression metrics to see how well the model is doing:

from sklearn.metrics import mean_squared_error, r2_score

y_pred = model.predict(x_test)

mse = mean_squared_error(y_test, y_pred)
print("MSE:", round(mse, 4))

r2 = r2_score(y_test, y_pred)
print("R2:", round(r2, 4))
MSE: 6071540.1057
R2: 0.8967

The high r-squared value on the test set indicates the linear trend explains most of the variation in the held-out data.
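
Because MSE is in squared units, it can also help to report the root mean squared error (RMSE), which is back in the same units as the data (thousands of people). A minimal sketch, reusing the mse value computed above:

# RMSE is the square root of MSE, expressed in the target's original units (thousands)
rmse = mse ** 0.5
print("RMSE:", round(rmse, 4))

Here the RMSE works out to roughly 2,464 thousand, meaning the trend line misses the recent test data by about 2.5 million people on average.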

Plotting predictions (trend line):

df["prediction"] = model.predict(df[["time_step"]])
df["error"] = df["population"] - df["prediction"]

px.line(df, y=["population", "prediction"], height=350,
    title="US Population (Monthly) vs Regression Predictions (Trend)",
    labels={"value":""}
)
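
Since we also computed the error (residual) for each row, we can chart the residuals to look for systematic patterns that a straight line misses. This is a sketch using the same plotting approach as above:

px.line(df, y="error", height=350,
    title="US Population Regression Residuals (Actual - Trend)",
    labels={"error": "Residual (thousands)"}
)

If the residuals show a pronounced curve or cycle rather than random noise, a purely linear trend may be underfitting, and a more flexible model or additional features (such as seasonality terms) could help.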

9.7 Forecasting

Assembling a dataset of future dates and time steps (which we can use as inputs to the trained model to make predictions about the future):

from pandas import date_range, DateOffset, DataFrame

last_time_step = df['time_step'].iloc[-1]
last_date = df.index[-1]
next_date = last_date + DateOffset(months=1)

FUTURE_MONTHS = 36
# frequency of "M" for end of month, "MS" for beginning of month
future_dates = date_range(start=next_date, periods=FUTURE_MONTHS, freq='MS')
future_time_steps = range(last_time_step + 1, last_time_step + FUTURE_MONTHS + 1)

df_future = DataFrame({'time_step': future_time_steps}, index=future_dates)
df_future.index.name = "date"
df_future
time_step
date
2025-05-01 797
2025-06-01 798
2025-07-01 799
... ...
2028-02-01 830
2028-03-01 831
2028-04-01 832

36 rows × 1 columns

Predicting future values:

df_future["prediction"] = model.predict(df_future[["time_step"]])
df_future
            time_step     prediction
date
2025-05-01        797  344413.387177
2025-06-01        798  344626.561581
2025-07-01        799  344839.735985
...               ...            ...
2028-02-01        830  351448.142515
2028-03-01        831  351661.316919
2028-04-01        832  351874.491323

36 rows × 2 columns

Concatenating historical data with future data:

from pandas import concat

chart_df = concat([df, df_future])
chart_df
            population  time_step     prediction        error
date
1959-01-01    175818.0          1  174726.561448  1091.438552
1959-02-01    176044.0          2  174939.735853  1104.264147
1959-03-01    176274.0          3  175152.910257  1121.089743
...                ...        ...            ...          ...
2028-02-01         NaN        830  351448.142515          NaN
2028-03-01         NaN        831  351661.316919          NaN
2028-04-01         NaN        832  351874.491323          NaN

832 rows × 4 columns

Note

The population and error values for future dates are null because we don’t know them yet, although we are able to make predictions about the population based on the historical trend.

Plotting trend vs actual, with future predictions:

# plot the most recent 180 rows (12 years of history plus the 3-year forecast)
px.line(chart_df[-180:], y=["population", "prediction"], height=350,
    title="US Population (Monthly) vs Regression Predictions (Trend)",
    labels={"value":""}
)