Suppose I've collected data on rent in Eugene. Consider the following questions:
- How much does rent cost in the Bethel neighborhood this year?
- How much is rent going up per year?
Question: What would good answers to these questions look like?
We could answer these questions by… (Discuss pros and cons and how these would actually work.)
Note that this has two notions of uncertainty: rents come in a range of values, but also there will be uncertainty in our estimates of that range.
If we have $n$ independent samples from some source, and the sample mean and SD are $\bar x$ and $s$ respectively, then a 95% confidence interval is $$ \bar x - t_{n-1} \frac{s}{\sqrt{n}} \qquad \text{to} \qquad \bar x + t_{n-1} \frac{s}{\sqrt{n}} , $$ where $t_{n-1}$ is the 97.5% quantile of the $t$ distribution with $n-1$ degrees of freedom.
Said another way: $\bar x$ is our estimate of the mean, and the math gives us an estimate of the uncertainty in that estimate.
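Here's a minimal sketch of that interval in code (not from the original notes; it assumes x is a 1-D numpy array of independent samples and uses scipy for the $t$ quantile):
import numpy as np
from scipy import stats

def t_confidence_interval(x, level=0.95):
    # sample mean and SD (ddof=1: divide by n-1)
    n = len(x)
    xbar, s = np.mean(x), np.std(x, ddof=1)
    # for level=0.95 this is the 97.5% quantile of the t distribution with n-1 degrees of freedom
    tstar = stats.t.ppf(1 - (1 - level) / 2, df=n - 1)
    half = tstar * s / np.sqrt(n)
    return xbar - half, xbar + half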
(Take the mean of the rents in Bethel this year.)
(Take the mean difference in rent of places that go up for rent in adjacent years.)
(Get 95% CIs for them.)
For example: suppose we fit a model where the price of the $i^\text{th}$ apartment is $$\begin{aligned} p_i &= \beta_{n_i} + \beta_\text{year} y_i + \beta_\text{area} a_i + \epsilon_i \\ \epsilon_i &\sim \text{Normal}(0, \sigma) , \end{aligned}$$ where $n_i$ is the neighborhood of the $i^\text{th}$ apartment, $y_i$ is its year, and $a_i$ is its area. Then good answers would be:
(Our estimate of $\beta_\text{Bethel}$ plus terms for year and area.)
(Our estimate of $\beta_\text{year}$.)
???
First, let's read in the data. (Note: it's fake data.) (github link)
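(Throughout, the code assumes the usual imports and a numpy random generator, presumably set up in an earlier cell; something like this:)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng()   # used below for simulation and resampling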
df = pd.read_csv("data/housing.csv", index_col=0)
df
Next, let's shift the variables, so that for instance $\beta_\text{Bethel}$ is the average price of a 500 ft${}^2$ apartment in 2010 (instead of a 0 ft${}^2$ apartment in the year 0):
df['area'] -= 500
df['year'] -= 2010
Now, fit the model, using patsy to make the design matrix (see board):
import patsy
outcome, predictors = patsy.dmatrices("rent ~ 0 + hood + year + area", df)
fit = np.linalg.lstsq(predictors, outcome, rcond=None)
# note: str.lstrip strips a *set of characters*, not a prefix, so use removeprefix/removesuffix
estimates = {k.removeprefix("hood[").removesuffix("]"): v
             for k, v in zip(predictors.design_info.column_names, fit[0].ravel())}
estimates
# peek at the first ten rows of the design matrix, and compare with the data frame
np.asarray(predictors)[:10,:]
df
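To see how these estimates answer the original questions (a sketch added here; the particular apartment is made up): since year and area were shifted, estimates['bethel'] is itself the predicted 2010 rent for a 500 ft${}^2$ Bethel apartment, and, say, a 600 ft${}^2$ Bethel apartment in 2015 would be predicted at
# predicted rent for a (hypothetical) 600 ft^2 Bethel apartment in 2015,
# remembering that year and area were shifted by 2010 and 500 above
estimates['bethel'] + (2015 - 2010) * estimates['year'] + (600 - 500) * estimates['area']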
Okay, so we'd like to know how far off our guesses of $\beta_\text{Bethel}$ or $\beta_\text{year}$ are from "the truth".
Well, one way to do this is to simulate data where we know the truth, and then see how far off our guesses are. If the simulations describe the real data well, then this should give us a good estimate.
This is sometimes known as the parametric bootstrap. (We'll meet its more famous cousin, "the" bootstrap, next.) We actually did it last class. Let's do it again!
Okay, from our "real data" we estimated a certain set of parameter values. To make the simulated datasets look as much like the real data as possible, we should simulate using those parameter values.
def sim_rents(df, beta):
    # simulate a new 'rent' column from the fitted model, keeping hood/year/area as in the real data
    sim_df = df.copy()
    sim_df['rent'] = np.round(
        np.array([beta[h] for h in sim_df.hood])                   # neighborhood effect
        + beta['year'] * sim_df.year + beta['area'] * sim_df.area  # year and area effects
        + rng.normal(scale=150, size=len(df))                      # residual noise
    )
    return sim_df
def fit_model(df):
    # same least-squares fit as above, returning the estimates as a dict
    outcome, predictors = patsy.dmatrices("rent ~ 0 + hood + year + area", df)
    fit = np.linalg.lstsq(predictors, outcome, rcond=None)
    estimates = {k.removeprefix("hood[").removesuffix("]"): v
                 for k, v in zip(predictors.design_info.column_names, fit[0].ravel())}
    return estimates
experiments = pd.DataFrame([
    fit_model(sim_rents(df, beta=estimates))
    for _ in range(1000)
])
experiments
plt.hist(experiments.bethel)
plt.xlabel("mean bethel rent"); plt.ylabel("frequency");
plt.axvline(estimates['bethel'], color='red');
Conclusion: The mean rent in Bethel for 500 ft${}^2$ apartments in 2010 was about \$556, with a 95% confidence interval of about \$517 to \$594.
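(For reference, the interval endpoints quoted above are just the middle 95% of the simulated estimates; the exact numbers will vary a bit from run to run:)
experiments.bethel.quantile([0.025, 0.975])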
Widths of other 95% confidence intervals are below.
experiments.quantile(0.975) - experiments.quantile(0.025)
However, sometimes we don't have (or can't be bothered to make) a good generative model to simulate from. Or we tried and it was hard to get realistic-looking data. What to do then?
Well, what we'd like to do is to draw a bunch of new datasets from wherever we got the first one.
This seems unrealistic... but our best guess at what those new datasets would look like, if we could get them, is the dataset we already have!
Suppose we have a method that estimates something. To get an estimate of the uncertainty of that estimate:
1. Obtain a new dataset, of the same size as the original, by resampling observations with replacement from the original dataset.
2. Apply the method to this 'bootstrapped' dataset.
3. Repeat lots of times.
Then, the middle 95% of the estimates gives you the "bootstrap 95% confidence interval".
Here are 10 numbers.
array([176, 255, 53, 94, 61, 119, 42, 109, 0, 30])
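For instance (an added illustration, not part of the original output), one bootstrap resample of these ten numbers can be drawn like this:
x = np.array([176, 255, 53, 94, 61, 119, 42, 109, 0, 30])
x[rng.choice(len(x), len(x))]   # ten draws with replacement: some values repeat, others are left out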
Here's a (very) simple method to obtain a bootstrap resampled version of a data frame:
def bootstrap(df):
    # resample n rows with replacement (iloc: by position, so any index labels work)
    n = df.shape[0]
    return df.iloc[rng.choice(n, n)]
And here, let's apply the fit_model method above to 1000 of these.
boots = pd.DataFrame([
    fit_model(bootstrap(df))
    for _ in range(1000)
])
boots
And, here is the distribution of estimates of $\beta_\text{Bethel}$. It is similar, but not identical, to our other distribution above (as expected).
plt.hist(boots.bethel)
plt.xlabel("mean bethel rent"); plt.ylabel("frequency");
plt.axvline(estimates['bethel'], color='red');
Conclusion: The mean rent in Bethel for 500 ft${}^2$ apartments in 2010 was about \$556, with a 95% confidence interval of about \$516 to \$595.
Widths of other 95% confidence intervals are below.
boots.quantile(0.975) - boots.quantile(0.025)