Linear regression relates a continuous response (dependent) variable to one or more predictors (features, independent variables) under the assumption that the relationship is linear: the response can be explained or predicted by a linear combination of the features, up to random deviations from this linear relationship.
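With $p$ features, this assumption can be written compactly as
$$y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_p X_p + \epsilon$$
where the coefficients $\beta_0, \dots, \beta_p$ are estimated from the data and $\epsilon$ captures the random deviations; the single- and two-variable models below are special cases of this form.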
%matplotlib inline
from pathlib import Path
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import statsmodels.stats.api as sms
import statsmodels.api as sm
import statsmodels.tsa.api as smt
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler
#from statsmodels.tsa.stattools import acf, q_stat, adfuller
from scipy.stats import probplot, moment
import warnings
plt.style.use('ggplot')
pd.options.display.float_format = '{:,.2f}'.format
warnings.filterwarnings('ignore')
def plotting_3_chart(df, feature):
    ## Import the gridspec and scipy.stats modules.
    import matplotlib.gridspec as gridspec
    from scipy import stats
    ## Create a customized figure with the desired figsize.
    fig = plt.figure(constrained_layout=True, figsize=(8, 5))
    ## Create a grid of 3 columns and 3 rows.
    grid = gridspec.GridSpec(ncols=3, nrows=3, figure=fig)
    ## Customize the histogram axes.
    ax1 = fig.add_subplot(grid[0, :2])
    ## Set the title.
    ax1.set_title(f'{feature} Histogram')
    ## Plot the histogram with a fitted normal density.
    sns.distplot(df.loc[:, feature], norm_hist=True, ax=ax1, bins=20, fit=stats.norm)
    ax1.legend(('normal', f'{feature}'))
    ## Customize the Q-Q plot.
    ax2 = fig.add_subplot(grid[1, :2])
    ## Set the title.
    ax2.set_title('QQ_plot')
    ## Plot the Q-Q plot.
    stats.probplot(df.loc[:, feature], plot=ax2)
    ## Customize the box plot.
    ax3 = fig.add_subplot(grid[:, 2])
    ## Set the title.
    ax3.set_title('Box Plot')
    ## Plot the box plot.
    sns.boxplot(df.loc[:, feature], orient='v', ax=ax3);
def tsplot(y, lags=None, title='', figsize=(12, 6)):
    # Examine the patterns of ACF and PACF, along with the time series plot and histogram.
    # Original source: https://tomaugspurger.github.io/modern-7-timeseries.html
    fig = plt.figure(figsize=figsize)
    layout = (2, 2)
    ts_ax = plt.subplot2grid(layout, (0, 0))
    hist_ax = plt.subplot2grid(layout, (0, 1))
    acf_ax = plt.subplot2grid(layout, (1, 0))
    pacf_ax = plt.subplot2grid(layout, (1, 1))
    y.plot(ax=ts_ax)
    ts_ax.set_title(title)
    y.plot(ax=hist_ax, kind='hist', bins=30, alpha=.5)
    hist_ax.set_title('Histogram')
    smt.graphics.plot_acf(y, lags=lags, ax=acf_ax)
    smt.graphics.plot_pacf(y, lags=lags, ax=pacf_ax)
    [ax.set_xlim(0) for ax in [acf_ax, pacf_ax]]
    sns.despine()
    plt.tight_layout()
    return ts_ax, acf_ax, pacf_ax
x = np.linspace(-5, 50, 100)
y = 50 + 2 * x + np.random.normal(0, 20, size=len(x))
data = pd.DataFrame({'X': x, 'Y': y})
ax = data.plot.scatter(x='X', y='Y');
With a single independent variable on the right-hand side, our linear model assumes the following form:
$$y = \beta_0 + \beta_1 X_1 + \epsilon$$
The error term $\epsilon$ accounts for the deviations we encounter because our data do not actually fall on a straight line. When $\epsilon$ materializes, that is, when we fit a model of this type to actual data, the realized errors are called residuals.
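Concretely, once we have estimates $\hat{\beta}_0$ and $\hat{\beta}_1$, the residual for observation $i$ is the gap between the observed and the fitted value:
$$e_i = y_i - \hat{y}_i = y_i - \left(\hat{\beta}_0 + \hat{\beta}_1 x_{1,i}\right)$$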
The upper part of the summary displays the dataset characteristics, namely the estimation method and the number of observations and parameters, and indicates that the standard error estimates do not account for heteroskedasticity.
The middle panel shows coefficient values that closely reflect the artificial data-generating process. We can confirm that the estimates displayed in the middle of the summary can be reproduced using the OLS formula:
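In matrix notation, with the design matrix $X$ containing a column of ones for the constant, the OLS estimator has the closed form
$$\hat{\beta} = (X^{\top}X)^{-1}X^{\top}y$$
which is exactly what the np.linalg.inv computation further below implements.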
X = sm.add_constant(data['X'])
model = sm.OLS(data['Y'], X).fit()
print(model.summary())
Heteroskedasticity occurs when the residual variance is not constant but differs across observations. If the residual variance is positively correlated with an input variable, that is, when errors are larger for input values that are far from their mean, then OLS standard error estimates will be too low and, consequently, the t-statistic will be inflated, leading to false discoveries of relationships where none actually exist.
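In symbols, homoskedasticity means $\mathrm{Var}(\epsilon_i \mid X) = \sigma^2$ for every observation $i$, whereas under heteroskedasticity the variance depends on the observation, $\mathrm{Var}(\epsilon_i \mid X) = \sigma_i^2$.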
Diagnostics start with a visual inspection of the residuals. Systematic patterns in the (supposedly random) residuals suggest statistical tests of the null hypothesis that the errors are homoskedastic against various alternatives. These tests include the Breusch-Pagan and White tests.
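Both tests are available in statsmodels.stats.api, which we imported above as sms; here is a minimal sketch applied to the simple regression we just fitted (the Breusch-Pagan variant reappears later in this section for the multiple regression):
# Null hypothesis for both tests: the residuals are homoskedastic
bp_stat, bp_pval, _, _ = sms.het_breuschpagan(model.resid, model.model.exog)
white_stat, white_pval, _, _ = sms.het_white(model.resid, model.model.exog)
print(f'Breusch-Pagan: LM statistic={bp_stat:.3f}, p-value={bp_pval:.3f}')
print(f'White:         LM statistic={white_stat:.3f}, p-value={white_pval:.3f}')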
There are several ways to correct OLS estimates for heteroskedasticity:
Robust standard errors (sometimes called White standard errors) take heteroskedasticity into account when computing the error variance using a so-called sandwich estimator.
Clustered standard errors assume that there are distinct groups in your data that are homoskedastic, but that the error variance differs between groups. These groups could be different asset classes or equities from different industries.
Several alternatives to OLS estimate the error covariance matrix using different assumptions when there is heteroskedasticity. The following are available in statsmodels (see the sketch after this list):
Weighted least squares (WLS): for heteroskedastic errors where the covariance matrix has only diagonal entries, as for OLS, but now the entries are allowed to vary
Feasible generalized least squares (GLSAR): for autocorrelated errors that follow an autoregressive AR(p) process (see the chapter on linear time series models)
Generalized least squares (GLS): for an arbitrary covariance matrix structure; yields efficient and unbiased estimates in the presence of heteroskedasticity or serial correlation
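As a rough sketch of how these options map onto the statsmodels API, reusing the constant-augmented design matrix X and response defined above (the weights w, group labels groups, and covariance matrix Sigma in the commented lines are hypothetical placeholders, not variables defined in this notebook):
# Robust (heteroskedasticity-consistent) standard errors via a sandwich estimator
robust = sm.OLS(data['Y'], X).fit(cov_type='HC3')
# Clustered standard errors; `groups` would hold one group label per observation
# clustered = sm.OLS(data['Y'], X).fit(cov_type='cluster', cov_kwds={'groups': groups})
# Weighted least squares; `w` would hold one weight (e.g., an inverse variance) per observation
# wls = sm.WLS(data['Y'], X, weights=w).fit()
# Feasible GLS for errors that follow an AR(1) process
glsar = sm.GLSAR(data['Y'], X, rho=1).iterative_fit()
# GLS with an arbitrary error covariance matrix `Sigma`
# gls = sm.GLS(data['Y'], X, sigma=Sigma).fit()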
beta = np.linalg.inv(X.T.dot(X)).dot(X.T.dot(y))
pd.Series(beta, index=X.columns)
data['y-hat'] = model.predict()
data['residuals'] = model.resid
ax = data.plot.scatter(x='X', y='Y', c='darkgrey')
data.plot.line(x='X', y='y-hat', ax=ax);
for _, row in data.iterrows():
    plt.plot((row.X, row.X), (row.Y, row['y-hat']), 'k-')
res = model.resid.to_frame('resid')
plotting_3_chart(res, 'resid')
tsplot(res.resid, lags=50);
For two independent variables, the model changes as follows:
$$y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \epsilon$$
## Create data
size = 25
X_1, X_2 = np.meshgrid(np.linspace(-50, 50, size), np.linspace(-50, 50, size), indexing='ij')
data = pd.DataFrame({'X_1': X_1.ravel(), 'X_2': X_2.ravel()})
data['Y'] = 50 + data.X_1 + 3 * data.X_2 + np.random.normal(0, 50, size=size**2)
## Plot
three_dee = plt.figure(figsize=(15, 5)).add_subplot(projection='3d')
three_dee.scatter(data.X_1, data.X_2, data.Y, c='g');
X = data[['X_1', 'X_2']]
y = data['Y']
The upper-right part of the panel displays the goodness-of-fit measures just discussed, alongside the F-test, which rejects the null hypothesis that all coefficients are jointly zero and thus irrelevant. Similarly, the t-statistics indicate that the intercept and both slope coefficients are highly significant.
The bottom part of the summary contains the residual diagnostics. The left panel displays the skew and kurtosis that are used to test the normality hypothesis. Both the Omnibus and the Jarque-Bera tests fail to reject the null hypothesis that the residuals are normally distributed. The Durbin-Watson statistic tests for serial correlation in the residuals and has a value near 2 which, given 2 parameters and 625 observations, fails to reject the hypothesis of no serial correlation.
Lastly, the condition number provides evidence about multicollinearity: it is the ratio of the largest to the smallest singular value of the design matrix that contains the input data (equivalently, the square root of the ratio of the largest to the smallest eigenvalue of $X^\top X$). A value above 30 suggests that the regression may suffer from significant multicollinearity. These statistics can also be read directly from the fitted results object, as sketched right after the summary output below.
X_ols = sm.add_constant(X)
model = sm.OLS(y, X_ols).fit()
print(model.summary())
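The headline statistics discussed above do not have to be read off the printed summary: fvalue, f_pvalue, tvalues, and pvalues are standard attributes of the statsmodels results object, and the normality and serial-correlation diagnostics can be recomputed from the residuals. A minimal sketch:
from statsmodels.stats.stattools import durbin_watson, jarque_bera
# Joint F-test of the null that all slope coefficients are zero
print(f'F-statistic: {model.fvalue:.1f} (p-value: {model.f_pvalue:.2e})')
# Per-coefficient t-statistics and p-values
print(model.tvalues)
print(model.pvalues)
# Normality of the residuals
jb_stat, jb_pval, skew, kurtosis = jarque_bera(model.resid)
print(f'Jarque-Bera: {jb_stat:.2f} (p-value: {jb_pval:.2f}), skew: {skew:.2f}, kurtosis: {kurtosis:.2f}')
# Serial correlation: values near 2 indicate no first-order autocorrelation
print(f'Durbin-Watson: {durbin_watson(model.resid):.2f}')
# Condition number: ratio of the largest to the smallest singular value of the design matrix
print(f'Condition number: {np.linalg.cond(X_ols):,.1f}')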
beta = np.linalg.inv(X_ols.T.dot(X_ols)).dot(X_ols.T.dot(y))
pd.Series(beta, index=X_ols.columns)
plt.rc('figure', figsize=(12, 7))
plt.text(0.01, 0.05, str(model.summary()), {'fontsize': 14}, fontproperties = 'monospace')
plt.axis('off')
plt.tight_layout()
plt.subplots_adjust(left=0.2, right=0.8, top=0.8, bottom=0.1)
# plt.savefig('multiple_regression_summary.png', bbox_inches='tight', dpi=300);
res = model.resid.to_frame('resid')
plotting_3_chart(res, 'resid')
tsplot(res.resid, lags=50);
The following plot illustrates the hyperplane that the model fits to the randomly generated data points:
three_dee = plt.figure(figsize=(15, 5)).add_subplot(projection='3d')
three_dee.scatter(data.X_1, data.X_2, data.Y, c='g')
data['y-hat'] = model.predict()
to_plot = data.set_index(['X_1', 'X_2']).unstack().loc[:, 'y-hat']
three_dee.plot_surface(X_1, X_2, to_plot.values, color='black', alpha=0.2, linewidth=1, antialiased=True)
for _, row in data.iterrows():
    three_dee.plot((row.X_1, row.X_1), (row.X_2, row.X_2), (row.Y, row['y-hat']), 'k-')
three_dee.set_xlabel('$X_1$')
three_dee.set_ylabel('$X_2$')
three_dee.set_zlabel(r'$Y, \hat{Y}$')
# plt.savefig('multiple_regression_plot.png', dpi=300);
Additional diagnostic tests
test = sms.het_breuschpagan(model.resid, model.model.exog)
print(f"Breusch-Pagan Test Statistic: {test[0]:.3f} P-Value: {test[1]:.3f}")
For our regression, the test yields a fairly low test statistic of 0.494 (corresponding to a p-value of 0.781) and thus supports our intuition that there is no heteroskedasticity present in our regression model's residuals.
The sklearn library includes an SGDRegressor model in its linear_model module. To learn the parameters for the same model using stochastic gradient descent, we first need to standardize the data because the gradient is sensitive to the scale; since SGDRegressor inherits this sensitivity, use StandardScaler or scale to adjust the features.
We use StandardScaler() for this purpose: it computes the mean and the standard deviation of each input variable during the fit step, and then subtracts the mean and divides by the standard deviation during the transform step. Both steps can conveniently be combined in a single fit_transform() call:
scaler = StandardScaler()
X_ = scaler.fit_transform(X)
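For reference, the transform computes, for each feature $j$ and observation $i$,
$$z_{ij} = \frac{x_{ij} - \mu_j}{\sigma_j}$$
where $\mu_j$ and $\sigma_j$ are the feature mean and standard deviation learned during the fit step; this is what the array X_ now contains, column by column.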
Then we instantiate the SGDRegressor using the default values except for a random_state setting to facilitate replication:
sgd = SGDRegressor(loss='squared_error',  # named 'squared_loss' in scikit-learn versions before 1.0
                   fit_intercept=True,
                   shuffle=True, random_state=42,
                   learning_rate='invscaling',
                   eta0=0.01, power_t=0.25)
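With learning_rate='invscaling', scikit-learn decays the step size over iterations as $\eta^{(t)} = \eta_0 / t^{\,\text{power\_t}}$, so eta0 sets the initial step size and power_t controls how quickly it shrinks.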
Now we can fit the sgd model, create the in-sample predictions for both the OLS and the sgd models, and compute the root mean squared error for each:
sgd.fit(X=X_, y=y)
As expected, both models yield the same result. We will now take on a more ambitious project using linear regression to estimate a multi-factor asset pricing model.
# Rescale the coefficients learned on the standardized features back to the original scale
coeffs = sgd.coef_ / scaler.scale_
pd.Series(coeffs, index=X.columns)
resids = pd.DataFrame({'sgd': y - sgd.predict(X_),
'ols': y - model.predict(sm.add_constant(X))})
resids.pow(2).sum().div(len(y)).pow(.5)
resids.plot.scatter(x='sgd', y='ols');