A second, robust linear model
Our first linear model was motivated by the probability model $$\begin{aligned} y_i &= x_i \cdot \beta + \epsilon_i \\ \epsilon_i &\sim \text{Normal}(\text{mean}=0, \text{sd}=\sigma) , \end{aligned}$$ which led us to choose $\beta$ by minimizing the sum of squared errors, $$ \sum_i (y_i - x_i \cdot \beta)^2 . $$
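For instance, minimizing that sum of squares numerically recovers the same $\beta$ as the closed-form least-squares solution. Here's a quick check on made-up data (the names rng_demo, x_demo, y_demo, and sse are just for this illustration):
import numpy as np
from scipy.optimize import minimize

rng_demo = np.random.default_rng(1)   # generator used only for this check
x_demo = rng_demo.uniform(size=(50, 2))
y_demo = x_demo.dot([10, -2]) + rng_demo.normal(size=50)

def sse(beta):
    # sum of squared errors for the demo data
    return np.sum((y_demo - x_demo.dot(beta))**2)

print(minimize(sse, [0.0, 0.0]).x)                     # numerical minimizer
print(np.linalg.lstsq(x_demo, y_demo, rcond=None)[0])  # closed-form least squares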
A natural question is: how much do the results depend on the "distributional assumptions", i.e., the "Normal( )"?
The Cauchy
The Cauchy distribution is another probability distribution: it can take any value (like the Normal), its histogram looks like a hump (like the Normal), and it has both "location" and "scale" parameters (like the Normal). Let's have a look at the model $$\begin{aligned} y_i &= x_i \cdot \beta + \epsilon_i \\ \epsilon_i &\sim \text{Cauchy}(\text{loc}=0, \text{scale}=\sigma) . \end{aligned}$$
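The resemblance is only superficial, though: the Cauchy's tails are much heavier, so it routinely produces extreme values that the Normal essentially never does. A quick illustration (the generator rng_tails is used only for this comparison):
import numpy as np

rng_tails = np.random.default_rng(2)   # generator used only for this comparison
print("largest |Normal| draw:", np.abs(rng_tails.normal(size=10000)).max())
print("largest |Cauchy| draw:", np.abs(rng_tails.standard_cauchy(size=10000)).max())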
First we'll simulate from the model and see how well the usual (least-squares) method works.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng()   # random number generator used throughout

n = 200           # number of observations
k = 2             # number of predictors
x = rng.uniform(size=(n, k))
beta = [10, -2]   # true coefficients
y = x.dot(beta) + rng.standard_cauchy(size=n)   # Cauchy-distributed noise
# scatter the response against each predictor
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.scatter(x[:, 0], y); ax1.set_xlabel("x0"); ax1.set_ylabel("y")
ax2.scatter(x[:, 1], y); ax2.set_xlabel("x1"); ax2.set_ylabel("y");
# same plots, zoomed in to -5 < y < 12 to see the bulk of the points
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.scatter(x[:, 0], y); ax1.set_xlabel("x0"); ax1.set_ylabel("y"); ax1.set_ylim((-5, 12))
ax2.scatter(x[:, 1], y); ax2.set_xlabel("x1"); ax2.set_ylabel("y"); ax2.set_ylim((-5, 12));
from sklearn import linear_model
line_reg = linear_model.LinearRegression(fit_intercept=True)
line_reg.fit(x, y)
line_reg.coef_
array([ 0.04686807, -9.46323921])
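Recall the data were simulated with $\beta = (10, -2)$; the heavy-tailed noise can throw the least-squares estimates far off the truth:
# compare the least-squares estimates to the true simulated coefficients
print("least-squares estimates:", line_reg.coef_)
print("true beta:              ", beta)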
y_hat = line_reg.predict(x)
fig, ax = plt.subplots()
ax.scatter(y, y_hat)
ax.set_xlim(-20, 20)
ax.set_xlabel("observed value")
ax.set_ylabel("predicted value")
ax.set_title(f"median squared error = {np.median((y - y_hat)**2):.2f}")
ax.axline((0,0), slope=1);
What to do? Well, the Cauchy density with scale $\sigma$ is $$ f(u) = \frac{1}{\pi \sigma \left(1 + (u / \sigma)^2\right)} . $$
So, the likelihood of the data is $$ \prod_{i=1}^n f(y_i - \hat y_i) , $$ and the log-likelihood, with $\hat y = X \beta$, is $$ - \sum_{i=1}^n \log\left( 1 + ((y_i - x_i \beta) / \sigma)^2 \right) - n \log(\pi \sigma) . $$
from scipy.optimize import minimize

def logl(u):
    # negative log-likelihood of the Cauchy model; u = (beta_1, ..., beta_k, sigma)
    beta = u[:-1]
    sigma = u[-1]
    yhat = x.dot(beta)
    return np.sum(np.log(1 + ((y - yhat) / sigma)**2)) + n * np.log(np.pi * sigma)
res = minimize(logl, [1, 1, 1])   # minimize the negative log-likelihood, starting at beta = (1, 1), sigma = 1
est_beta = res['x'][:-1]
est_sigma = res['x'][-1]
new_yhat = x.dot(est_beta)
fig, ax = plt.subplots()
ax.scatter(y, new_yhat)
ax.set_xlim(-20, 20)
ax.set_xlabel("observed value")
ax.set_ylabel("predicted value")
ax.set_title(f"median squared error = {np.median((y - new_yhat)**2):.2f}")
ax.axline((0,0), slope=1);
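Since we simulated the data ourselves, we can also compare the maximum-likelihood estimates to the true values (the exact numbers will vary from run to run):
# compare the Cauchy-model estimates to the truth: beta = (10, -2), scale = 1
print("estimated beta: ", est_beta)
print("estimated sigma:", est_sigma)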
What did we do there?
We adjusted the loss function: a more general way to describe what we're doing is that we choose $\beta$ to minimize the loss $$ \sum_i L(y_i - \hat y_i), $$ where $L( )$ is a function that quantifies how "bad" an "error" of a given size is.
The standard linear model used $L(u) = u^2$; the Cauchy model led us (taking $\sigma = 1$) to use $$ L(u) = \log(1 + u^2) . $$
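To make this recipe concrete, here is a sketch of a generic fitting function that takes the loss as an argument (fit_with_loss is our own helper, not part of any library):
from scipy.optimize import minimize

def fit_with_loss(x, y, loss, start):
    # choose beta to minimize sum_i loss(y_i - x_i . beta), searching from `start`
    def total_loss(beta):
        return np.sum(loss(y - x.dot(beta)))
    return minimize(total_loss, start).x

beta_sq = fit_with_loss(x, y, lambda u: u**2, np.zeros(k))          # squared-error loss
beta_ca = fit_with_loss(x, y, lambda u: np.log(1 + u**2), beta_sq)  # Cauchy loss, sigma = 1
Unlike the squared-error loss, the Cauchy loss is not convex, so the starting point matters; starting the Cauchy fit from the least-squares solution is one reasonable choice.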
What's the difference? Well, here are the PDFs of the two:
from scipy.stats import norm, cauchy
fig, ax = plt.subplots()
xvals = np.linspace(-5, 5, 101)
ax.plot(xvals, norm.pdf(xvals), label='Normal')
ax.plot(xvals, cauchy.pdf(xvals), label='Cauchy')
ax.legend();
Those don't look too different. But have a look at the (negative) log PDF, which is what we use as a loss function:
from scipy.stats import norm, cauchy
fig, ax = plt.subplots()
xvals = np.linspace(-5, 5, 101)
ax.plot(xvals, -1 * norm.logpdf(xvals), label='Normal')
ax.plot(xvals, -1 * cauchy.logpdf(xvals), label='Cauchy')
ax.legend();
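For a residual of size 10, say, the difference is dramatic: the Normal's negative log PDF grows like $u^2/2$, while the Cauchy's grows only like $2\log|u|$:
from scipy.stats import norm, cauchy

print("Normal loss at u = 10:", -norm.logpdf(10))    # roughly 50.9
print("Cauchy loss at u = 10:", -cauchy.logpdf(10))  # roughly 5.8
This is what makes the Cauchy-based fit robust: a single extreme outlier can dominate the sum of squared errors, but it contributes only modestly to the Cauchy loss.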