11  Regression for Seasonality Analysis

We’ve explored using regression for time series forecasting, but what if there are seasonal or cyclical patterns in the data?

Let’s explore an example of how to use regression to identify cyclical patterns and perform seasonality analysis with time series data.

11.1 Data Loading

For a time series dataset that exemplifies cyclical patterns, let’s consider this dataset of U.S. employment over time, from the Federal Reserve Economic Data (FRED).

Fetching the data, going back as far as possible:

from pandas_datareader import get_data_fred
from datetime import datetime

DATASET_NAME = "PAYNSA"
df = get_data_fred(DATASET_NAME, start=datetime(1900,1,1))
print(len(df))
df
1030
PAYNSA
DATE
1939-01-01 29296
1939-02-01 29394
1939-03-01 29804
... ...
2024-08-01 158731
2024-09-01 159181
2024-10-01 160007

1030 rows × 1 columns

Data Source

Here is some more information about the “PAYNSA” dataset:

“All Employees: Total Nonfarm, commonly known as Total Nonfarm Payroll, is a measure of the number of U.S. workers in the economy that excludes proprietors, private household employees, unpaid volunteers, farm employees, and the unincorporated self-employed.”

“Generally, the U.S. labor force and levels of employment and unemployment are subject to fluctuations due to seasonal changes in weather, major holidays, and the opening and closing of schools.”

“The Bureau of Labor Statistics (BLS) adjusts the data to offset the seasonal effects to show non-seasonal changes: for example, women’s participation in the labor force; or a general decline in the number of employees, a possible indication of a downturn in the economy.

“To closely examine seasonal and non-seasonal changes, the BLS releases two monthly statistical measures: the seasonally adjusted All Employees: Total Nonfarm (PAYEMS) and All Employees: Total Nonfarm (PAYNSA), which is not seasonally adjusted.”

This “PAYNSA” data is expressed in “Thousands of Persons”, and is “Not Seasonally Adjusted”.

The dataset frequency is “Monthly”.
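We can verify the frequency programmatically. A quick sketch, assuming the index is already datetime-typed (FRED data fetched via pandas_datareader usually is):

from pandas import infer_freq

# infer the frequency from the date index
# (returns a code like "MS" for month-start data):
print(infer_freq(df.index))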

Wrangling the data, including renaming the column and ensuring the date index is datetime-aware, will make it easier for us to work with:

from pandas import to_datetime

df.rename(columns={DATASET_NAME: "employment"}, inplace=True)
df.index.name = "date"
df.index = to_datetime(df.index)
df
employment
date
1939-01-01 29296
1939-02-01 29394
1939-03-01 29804
... ...
2024-08-01 158731
2024-09-01 159181
2024-10-01 160007

1030 rows × 1 columns

11.2 Data Exploration

Visualizing the data:

import plotly.express as px

px.line(df, y="employment", height=450,
        title="US Employment by month (non-seasonally adjusted)",
        labels={"employment": "Employment (in thousands of persons)"},
)

Cyclical Patterns

Exploring cyclical patterns in the data:

px.line(df[(df.index.year >= 1970) & (df.index.year <= 1980)], y="employment",
        title="US Employment by month (selected years)", height=450,
        labels={"employment": "Employment (in thousands)"},
)
Interactive dataviz

Hover over the dataviz to see which month(s) typically have higher employment, and which month(s) typically have lower employment.
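If you’d rather compute this than hover, here is a quick sketch that averages employment by calendar month. Note the long-run growth trend is still mixed into these averages, so they are only a rough guide; we separate trend from season more carefully below:

# average employment by calendar month (trend still included):
monthly_avg = df.groupby(df.index.month)["employment"].mean()
print(monthly_avg.idxmax()) # month with the highest average employment
print(monthly_avg.idxmin()) # month with the lowest average employment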

Trend Analysis

Exploring trends:

import plotly.express as px

px.scatter(df, y="employment",  height=450,
        title="US Employment by month (vs Trend)",
        labels={"employment": "Employment (in thousands)"},
        trendline="ols", trendline_color_override="red"
)

Looks like evidence of a possible linear relationship. Let’s perform a more formal regression analysis.

11.3 Data Encoding

Because we need numeric features to perform a regression, we convert the dates to a linear time step of integers (after sorting the data first for good measure):

df.sort_values(by="date", ascending=True, inplace=True)

df["time_step"] = range(1, len(df) + 1)
df
employment time_step
date
1939-01-01 29296 1
1939-02-01 29394 2
1939-03-01 29804 3
... ... ...
2024-08-01 158731 1028
2024-09-01 159181 1029
2024-10-01 160007 1030

1030 rows × 2 columns

We will use the numeric time step as our input variable (x), to predict the employment (y).
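In equation form, we are fitting:

employment = β0 + β1 × time_step

where β0 is the intercept and β1 is the average change in employment (in thousands of persons) per monthly time step.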

11.4 Data Splitting

X/Y Split

Identifying dependent and independent variables:

x = df[["time_step"]]

y = df["employment"]

print("X:", x.shape)
print("Y:", y.shape)
X: (1030, 1)
Y: (1030,)

Adding Constants

We are going to use statsmodels, so we add a column of constant ones representing the intercept:

import statsmodels.api as sm

# adding in a column of constants, as per the OLS docs
x = sm.add_constant(x)
x.head()
const time_step
date
1939-01-01 1.0 1
1939-02-01 1.0 2
1939-03-01 1.0 3
1939-04-01 1.0 4
1939-05-01 1.0 5

Train/Test Split

Splitting into training vs testing datasets:

#from sklearn.model_selection import train_test_split
#
#x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=99)
#print("TRAIN:", x_train.shape, y_train.shape)
#print("TEST:", x_test.shape, y_test.shape)

Splitting the data sequentially, where earlier data is used for training and more recent data is used for testing:

#print(len(df))
#
#training_size = round(len(df) * .8)
#print(training_size)
#
#x_train = x.iloc[:training_size] # slice all before
#y_train = y.iloc[:training_size] # slice all before
#
#x_test = x.iloc[training_size:] # slice all after
#y_test = y.iloc[training_size:] # slice all after
#print("TRAIN:", x_train.shape)
#print("TEST:", x_test.shape)

For this example, we will not split the data, so we can illustrate predictions over the entire time period.

11.5 Model Selection and Training

Training a linear regression model on the full dataset:

import statsmodels.api as sm

model = sm.OLS(y, x, missing="drop")
print(type(model))

results = model.fit()
print(type(results))
<class 'statsmodels.regression.linear_model.OLS'>
<class 'statsmodels.regression.linear_model.RegressionResultsWrapper'>

Examining training results:

print(results.summary())
                            OLS Regression Results                            
==============================================================================
Dep. Variable:             employment   R-squared:                       0.984
Model:                            OLS   Adj. R-squared:                  0.984
Method:                 Least Squares   F-statistic:                 6.187e+04
Date:                Tue, 19 Nov 2024   Prob (F-statistic):               0.00
Time:                        20:54:53   Log-Likelihood:                -10209.
No. Observations:                1030   AIC:                         2.042e+04
Df Residuals:                    1028   BIC:                         2.043e+04
Df Model:                           1                                         
Covariance Type:            nonrobust                                         
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const       2.688e+04    304.676     88.213      0.000    2.63e+04    2.75e+04
time_step    127.3417      0.512    248.728      0.000     126.337     128.346
==============================================================================
Omnibus:                        5.936   Durbin-Watson:                   0.047
Prob(Omnibus):                  0.051   Jarque-Bera (JB):                5.355
Skew:                           0.120   Prob(JB):                       0.0687
Kurtosis:                       2.741   Cond. No.                     1.19e+03
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 1.19e+03. This might indicate that there are
strong multicollinearity or other numerical problems.
print(results.params)
print("------------")
print(f"y = {results.params['time_step'].round(3)}x + {results.params['const'].round(3)}")
const        26876.352204
time_step      127.341727
dtype: float64
------------
y = 127.342x + 26876.352
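With these parameters we can extrapolate the trend beyond the observed data. A minimal sketch (the month after the dataset ends would be time step 1031; this projects the trend only, ignoring seasonality):

# project the linear trend one step past the end of the data:
next_step = len(df) + 1
trend_projection = results.params["const"] + results.params["time_step"] * next_step
print(round(trend_projection)) # trend-only estimate, in thousands of persons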
df["prediction"] = results.fittedvalues
df["residual"] = results.resid
#from pandas import DataFrame
#
## get all rows from the original dataset that wound up in the training set:
#training_set = df.loc[x_train.index].copy()
#print(len(training_set))
#
## create a dataset for the predictions and the residuals:
#training_preds = DataFrame({
#    "prediction": results.fittedvalues,
#    "residual": results.resid
#})
## merge the training set with the results:
#training_set = training_set.merge(training_preds,
#    how="inner", left_index=True, right_index=True
#)
#
## calculate error for each datapoint:
#training_set

Regression Trends

Plotting trend line:

px.line(df, y=["employment", "prediction"], height=350,
    title="US Employment (monthly) vs linear trend",
    labels={"value":""}
)

Regression Residuals

Removing the trend, plotting just the residuals:

px.line(df, y="residual",
    title="US Employment (monthly) vs linear trend residuals", height=350
)

There seem to be some periodic movements in the residuals.
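One way to quantify this periodicity is to check the autocorrelation of the residuals at a twelve-month lag. A quick sketch using pandas’ built-in autocorr (values near 1 suggest an annual cycle, though the slow-moving business-cycle swings in the residuals inflate this as well):

# autocorrelation of the residuals at a 12-month lag:
print(df["residual"].autocorr(lag=12))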

11.5.0.1 Seasonality via Means of Periodic Residuals

We can observe cyclical patterns in the residuals by calculating periodic means. First, let’s add year, quarter, and month columns:

df["year"] = df.index.year
df["quarter"] = df.index.quarter
df["month"] = df.index.month

Here we group the data by quarter and calculate the average residual. This shows us, for each quarter, whether employment is on average above or below the trend:

df.groupby("quarter")["residual"].mean()
quarter
1   -1003.215801
2     146.297777
3      59.338487
4     803.810627
Name: residual, dtype: float64
df.groupby("month")["residual"].mean()
month
1    -1283.730663
2    -1069.572390
3     -656.344350
4     -349.500030
5      145.611731
6      642.781631
7     -176.769398
8      -38.471591
9      393.256449
10     734.647280
11     820.152360
12     857.445927
Name: residual, dtype: float64
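A bar chart can make the monthly pattern easier to read. A quick sketch, reusing the grouped means from above:

px.bar(df.groupby("month")["residual"].mean(), height=350,
    title="Average residual by month (above/below trend)",
    labels={"value": "Average residual (thousands of persons)"},
)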

11.5.0.2 Seasonality via Regression on Periodic Residuals

Let’s perform a regression using months as the features and the trend residuals as the target. This can help us understand the degree to which employment will be over or under trend for a given month.

# https://pandas.pydata.org/docs/reference/api/pandas.get_dummies.html
# "one hot encode" the monthly values:
from pandas import get_dummies as one_hot_encode

x_monthly = one_hot_encode(df["month"])
x_monthly.columns=["Jan", "Feb", "Mar", "Apr",
                "May", "Jun", "Jul", "Aug",
                "Sep", "Oct", "Nov", "Dec"]
x_monthly = x_monthly.astype(int)
x_monthly
Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
date
1939-01-01 1 0 0 0 0 0 0 0 0 0 0 0
1939-02-01 0 1 0 0 0 0 0 0 0 0 0 0
1939-03-01 0 0 1 0 0 0 0 0 0 0 0 0
... ... ... ... ... ... ... ... ... ... ... ... ...
2024-08-01 0 0 0 0 0 0 0 1 0 0 0 0
2024-09-01 0 0 0 0 0 0 0 0 1 0 0 0
2024-10-01 0 0 0 0 0 0 0 0 0 1 0 0

1030 rows × 12 columns

y_monthly = df["residual"]

ols_monthly = sm.OLS(y_monthly, x_monthly)
print(type(ols_monthly))

results_monthly = ols_monthly.fit()
print(type(results_monthly))

print(results_monthly.summary())
<class 'statsmodels.regression.linear_model.OLS'>
<class 'statsmodels.regression.linear_model.RegressionResultsWrapper'>
                            OLS Regression Results                            
==============================================================================
Dep. Variable:               residual   R-squared:                       0.021
Model:                            OLS   Adj. R-squared:                  0.010
Method:                 Least Squares   F-statistic:                     1.953
Date:                Tue, 19 Nov 2024   Prob (F-statistic):             0.0298
Time:                        20:54:53   Log-Likelihood:                -10199.
No. Observations:                1030   AIC:                         2.042e+04
Df Residuals:                    1018   BIC:                         2.048e+04
Df Model:                          11                                         
Covariance Type:            nonrobust                                         
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
Jan        -1283.7307    523.900     -2.450      0.014   -2311.778    -255.683
Feb        -1069.5724    523.900     -2.042      0.041   -2097.620     -41.525
Mar         -656.3443    523.900     -1.253      0.211   -1684.392     371.703
Apr         -349.5000    523.900     -0.667      0.505   -1377.548     678.548
May          145.6117    523.900      0.278      0.781    -882.436    1173.659
Jun          642.7816    523.900      1.227      0.220    -385.266    1670.829
Jul         -176.7694    523.900     -0.337      0.736   -1204.817     851.278
Aug          -38.4716    523.900     -0.073      0.941   -1066.519     989.576
Sep          393.2564    523.900      0.751      0.453    -634.791    1421.304
Oct          734.6473    523.900      1.402      0.161    -293.400    1762.695
Nov          820.1524    526.973      1.556      0.120    -213.925    1854.230
Dec          857.4459    526.973      1.627      0.104    -176.631    1891.523
==============================================================================
Omnibus:                        6.808   Durbin-Watson:                   0.025
Prob(Omnibus):                  0.033   Jarque-Bera (JB):                6.001
Skew:                           0.125   Prob(JB):                       0.0498
Kurtosis:                       2.721   Cond. No.                         1.01
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.

The coefficients tell us how each month contributes to the regression residuals. In other words, for each month, to what degree does the model predict employment will be above or below trend?
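To see this at a glance, we can pull the fitted coefficients out of the results and sort them. Note they match the monthly mean residuals we computed earlier, as expected when regressing on a full set of month dummies with no intercept:

# monthly coefficients, sorted from most below-trend to most above-trend:
print(results_monthly.params.sort_values())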

Monthly Predictions of Residuals

df["prediction_monthly"] = results_monthly.fittedvalues
df["residual_monthly"] = results_monthly.resid

Decomposition of the original data into trend, seasonal component, and residuals:

px.line(df, y=["employment", "prediction"], title="Employment vs trend", height=350)
px.line(df, y="prediction_monthly", title="Employment seasonal component", height=350)
px.line(df, y="residual_monthly", title="Employment de-trended residual", height=350)