“Population includes resident population plus armed forces overseas. The monthly estimate is the average of estimates for the first of the month and the first of the following month.”
The data is expressed in “Thousands”, and is “Not Seasonally Adjusted”.
Wrangling the data, including renaming the value column and converting the date index to be datetime-aware, makes the data easier to work with:
```python
from pandas import to_datetime

df.rename(columns={DATASET_NAME: "population"}, inplace=True)
df.index.name = "date"
df.index = to_datetime(df.index)
df
```
| date       | population |
|------------|------------|
| 1959-01-01 | 175818     |
| 1959-02-01 | 176044     |
| 1959-03-01 | 176274     |
| ...        | ...        |
| 2025-02-01 | 341588     |
| 2025-03-01 | 341729     |
| 2025-04-01 | 341874     |

796 rows × 1 columns
9.2 Data Exploration
Exploring trends:
```python
import plotly.express as px

px.scatter(df, y="population",
    title="US Population (Monthly) vs Trend",
    labels={"population": "US Population (thousands)", "value": ""},
    trendline="ols", trendline_color_override="red",
    height=350,
)
```
Looks like a possible linear trend. Let’s perform a more formal regression analysis.
9.3 Data Encoding
Because we need numeric features to perform a regression, we convert the dates to a linear time step of integers (after sorting the data first for good measure):
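The encoding cell itself isn't shown here; below is a minimal sketch, assuming the datetime index from the wrangling step and the `time_step` column name used in the split that follows:

```python
# sort chronologically (for good measure), then assign integer time steps 1..N
df.sort_index(inplace=True)
df["time_step"] = range(1, len(df) + 1)
```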
We will use the numeric time step as our input variable (x) to predict the population (y).
9.4 Data Splitting
9.4.1 X/Y Split
Identifying dependent and independent variables:
#x = df[["date"]] # we need numbers not stringsx = df[["time_step"]]y = df["population"]print("X:", x.shape)print("Y:", y.shape)
X: (796, 1)
Y: (796,)
9.4.2 Train/Test Split
Splitting data sequentially, where earlier data is used in training, and recent data is used for testing:
```python
print(len(df))

training_size = round(len(df) * .8)
print(training_size)

x_train = x.iloc[:training_size] # slice all before
y_train = y.iloc[:training_size] # slice all before

x_test = x.iloc[training_size:] # slice all after
y_test = y.iloc[training_size:] # slice all after

print("TRAIN:", x_train.shape)
print("TEST:", x_test.shape)
```
796
637
TRAIN: (637, 1)
TEST: (159, 1)
9.5 Model Selection and Training
Training a linear regression model on the training data:
```python
from sklearn.linear_model import LinearRegression

model = LinearRegression()
model.fit(x_train, y_train)
```
LinearRegression()
After training, we have access to the learned weights, as well as the line of best fit (i.e. the trend line):
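A sketch of how these values can be inspected, using standard scikit-learn attributes (the exact cell isn't shown above):

```python
# learned weights: slope coefficient(s) and intercept
print("COEFS:", model.coef_)
print("INTERCEPT:", model.intercept_)
print("------------------")
print(f"y = {model.coef_[0]:.3f}x + {model.intercept_:.3f}")
```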
COEFS: [213.17440418]
INTERCEPT: 174513.3870442226
------------------
y = 213.174x + 174513.387
In this case, the slope of the line of best fit tells us how much the population is expected to grow, on average, per time step (i.e. per month), while the intercept gives the trend's baseline value at time step zero (just before the earliest observation).
Note
Remember in this dataset the population is expressed in thousands.
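For example, at time step 830 (February 2028 in the chart data below), the trend line gives approximately 213.174 × 830 + 174513.387 ≈ 351,448 thousand people, matching the prediction shown in the table in the next section.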
9.6 Model Prediction and Evaluation
We use the trained model to make predictions on the test set, and then calculate regression metrics to see how well the model is doing:
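Neither the evaluation cell nor the construction of `df_future` appears above, so here is a hedged sketch of both steps using standard scikit-learn metrics and pandas. The variable names `y_pred` and `future_dates`, the choice of metrics, and the 36-month forecast horizon are assumptions inferred from the output table below:

```python
from pandas import DataFrame, date_range
from sklearn.metrics import r2_score, mean_squared_error

# predict on the held-out test set and score the fit (metric choice assumed)
y_pred = model.predict(x_test)
print("R^2:", r2_score(y_test, y_pred))
print("MSE:", mean_squared_error(y_test, y_pred))

# store trend predictions (and errors) for the historical dates
df["prediction"] = model.predict(df[["time_step"]])
df["error"] = df["population"] - df["prediction"]

# build a frame of future monthly time steps (36 months past the last observation,
# matching the 832-row chart below) and predict their values
future_dates = date_range(start=df.index.max(), periods=37, freq="MS")[1:]
df_future = DataFrame({"time_step": range(len(df) + 1, len(df) + 37)}, index=future_dates)
df_future.index.name = "date"
df_future["prediction"] = model.predict(df_future[["time_step"]])
```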
```python
from pandas import concat

chart_df = concat([df, df_future])
chart_df
```
| date       | population | time_step | prediction    | error       |
|------------|------------|-----------|---------------|-------------|
| 1959-01-01 | 175818.0   | 1         | 174726.561448 | 1091.438552 |
| 1959-02-01 | 176044.0   | 2         | 174939.735853 | 1104.264147 |
| 1959-03-01 | 176274.0   | 3         | 175152.910257 | 1121.089743 |
| ...        | ...        | ...       | ...           | ...         |
| 2028-02-01 | NaN        | 830       | 351448.142515 | NaN         |
| 2028-03-01 | NaN        | 831       | 351661.316919 | NaN         |
| 2028-04-01 | NaN        | 832       | 351874.491323 | NaN         |

832 rows × 4 columns
Note
The population and error values for future dates are null because we don't know them yet. We can, however, still make predictions for these dates based on the historical trend.
Plotting trend vs actual, with future predictions:
```python
px.line(chart_df[-180:], y=["population", "prediction"], height=350,
    title="US Population (Monthly) vs Regression Predictions (Trend)",
    labels={"value": ""}
)
```