---
output: github_document
---
<!-- README.md is generated from README.Rmd. Please edit that file -->
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
fig.path = "man/figures/README-",
out.width = "100%"
)
```
## [🚧 WIP 🚧] `jive`
<!-- badges: start -->
<!-- badges: end -->
The goal of jive is to implement jackknife instrumental-variable estimators (JIVE) and various alternatives.
## Installation
You can install the development version of jive like so:
``` r
devtools::install_github("kylebutts/jive")
```
## Example Usage
We are going to use the data from Stevenson (2018). Stevenson leverages the quasi-random assignment of 8 judges (magistrates) in Philadelphia to study the effects of pretrial detention on several outcomes, including whether or not a defendant subsequently pleads guilty.
```{r}
library(jive)
data(stevenson)
```
### Juke n' JIVE
```{r jive-example}
jive(
guilt ~ i(black) + i(white) | bailDate | jail3 ~ 0 | judge_pre,
data = stevenson
)
```
```{r ujive-example}
ujive(
guilt ~ i(black) + i(white) | bailDate | jail3 ~ 0 | judge_pre,
data = stevenson
)
```
```{r ijive-example}
ijive(
guilt ~ i(black) + i(white) | bailDate | jail3 ~ 0 | judge_pre,
data = stevenson
)
```
```{r cjive-example, cache = TRUE}
# Leave-cluster out
ijive(
guilt ~ i(black) + i(white) | bailDate | jail3 ~ 0 | judge_pre,
data = stevenson,
cluster = ~ bailDate,
lo_cluster = TRUE # Default, but just to be explicit
)
```
### (Leave-out) Leniency Measures
The package also allows you to estimate (leave-out) leniency measures:
```{r estimate-leniency, dpi = 300}
out = ijive(
guilt ~ i(black) + i(white) | bailDate | jail3 ~ 0 | judge_pre,
data = stevenson, return_leniency = TRUE
)
stevenson$judge_lo_leniency = out$That
hist(stevenson$judge_lo_leniency)
```
```{r binsreg-jail3-leniency, dpi = 300}
library(binsreg)
# filter out outliers
stevenson = subset(stevenson, judge_lo_leniency > -0.02 & judge_lo_leniency < 0.02)
binsreg::binsreg(
y = stevenson$jail3, x = stevenson$judge_lo_leniency,
cb = TRUE
)
```
## Econometric Details on JIVE, UJIVE, IJIVE, and CJIVE
Consider the following instrumental variables setup
$$
y_i = T_i \beta + W_i' \psi + \varepsilon_i,
$$
where $T_i$ is a scalar endogenous variable, $W_i$ is a vector of exogenous covariates, and $\varepsilon_i$ is an error term. Additionally, assume there is a set of valid instruments, $Z_i$. The intuition of two-stage least squares is to use the instruments to "predict" $T_i$:
$$
T_i = Z_i' \pi + W_i' \gamma + \eta_i.
$$
Then, the prediction, $\hat{T}\_i$, is used in place of $T_i$ in the original regression.
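To make the two-step intuition concrete, here is a minimal sketch on simulated data (all names and parameter values below are hypothetical, purely for illustration):

``` r
# Simulate a toy IV problem: T_endog is endogenous because eps and eta
# share a common component; the true beta is 1.
set.seed(1)
n <- 500
Z <- matrix(rnorm(n * 2), n, 2)  # two instruments
W <- cbind(1, rnorm(n))          # intercept + one exogenous covariate
eta <- rnorm(n)
T_endog <- drop(Z %*% c(0.5, 0.3) + 0.2 * W[, 2] + eta)
eps <- 0.8 * eta + rnorm(n)      # correlated with eta => endogeneity
y <- drop(T_endog + 0.5 * W[, 2] + eps)

# First stage: predict T using the instruments and covariates
That <- fitted(lm(T_endog ~ Z + W[, 2]))
# Second stage: use the prediction in place of T
coef(lm(y ~ That + W[, 2]))["That"]  # close to the true beta = 1
```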
When the dimension of $Z_i$ grows with the number of observations, two-stage least squares is biased (Kolesar, 2013). Without getting into the details, the problem arises because $T_i$ is predicted using $i$'s own observation in the first stage. For this reason, Angrist, Imbens, and Krueger (1999) developed the jackknife instrumental-variables estimator (JIVE): in short, for each $i$, $T_i$ is predicted from the first-stage equation estimated leaving out $i$'s own observation.
In general, the JIVE estimator (and its variants) is given by
$$
\frac{\hat{P}' Y}{\hat{P}' T}
$$
where $\hat{P}$ is a function of $W$, $T$, and $Z$. The particulars differ across the JIVE, the unbiased JIVE (UJIVE), the improved JIVE (IJIVE), and the cluster JIVE (CJIVE).
### JIVE definition
**Source:** Kolesar (2013) and Angrist, Imbens, and Krueger (1999)
The original JIVE estimator produces $\hat{T}$ via a leave-out procedure, which can be expressed in matrix notation as:
$$
\hat{T}\_{JIVE} = (I - D_{(Z,W)})^{-1} (H_{(Z,W)} - D_{(Z,W)}) T,
$$
where $H_{(Z,W)}$ is the hat/projection matrix for $(Z,W)$ and $D_{(Z,W)}$ is the diagonal matrix containing the diagonal elements of $H_{(Z,W)}$. Then, after partialling out the covariates in the second stage (with $M_W = I - H_W$ denoting the annihilator matrix for $W$), we have
$$
\hat{P}\_{JIVE} = M_W \hat{T}\_{JIVE}.
$$
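As a minimal sketch of these formulas with dense matrices (continuing the simulated example above; the package itself avoids forming the full hat matrix):

``` r
# Leave-out fitted values: (I - D)^{-1} (H - D) T, computed elementwise
# since (I - D)^{-1} is diagonal with entries 1 / (1 - d_i).
ZW <- cbind(Z, W)
H_zw <- ZW %*% solve(crossprod(ZW), t(ZW))  # hat matrix of (Z, W)
d_zw <- diag(H_zw)                          # its diagonal (leverages)
That_jive <- (drop(H_zw %*% T_endog) - d_zw * T_endog) / (1 - d_zw)

# Partial out W in the second stage, then form the ratio estimator
H_w <- W %*% solve(crossprod(W), t(W))
M_w <- diag(n) - H_w
Phat_jive <- drop(M_w %*% That_jive)
beta_jive <- sum(Phat_jive * y) / sum(Phat_jive * T_endog)
```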
### UJIVE definition
**Source:** Kolesar (2013)
For UJIVE, a leave-out procedure is used in the first stage for the fitted values $\hat{T}$ and in the second stage for residualizing the covariates. The terms are given by:
$$
\hat{T}\_{UJIVE} = (I - D_{(Z,W)})^{-1} (H_{(Z,W)} - D_{(Z,W)}) T = \hat{T}\_{JIVE}
$$
$$
\hat{P}\_{UJIVE} = \hat{T}\_{UJIVE} - (I - D_{W})^{-1} (H_{W} - D_{W}) T
$$
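A sketch of the UJIVE terms, reusing `That_jive`, `H_w`, and the simulated data from above:

``` r
# UJIVE keeps the JIVE first stage but also uses a leave-out projection
# on W alone when residualizing.
d_w <- diag(H_w)
Phat_ujive <- That_jive - (drop(H_w %*% T_endog) - d_w * T_endog) / (1 - d_w)
beta_ujive <- sum(Phat_ujive * y) / sum(Phat_ujive * T_endog)
```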
### IJIVE definition
**Source:** Ackerberg and Devereux (2009)
The IJIVE procedure first residualizes $T$, $Y$, and $Z$ with respect to the covariates $W$; the authors show that this reduces small-sample bias. Then, the standard leave-out JIVE procedure is carried out on the residualized matrices (denoted by tildes):
$$
\hat{T}\_{IJIVE} = \hat{P}\_{IJIVE} = (I - D_{\tilde{Z}})^{-1} (H_{\tilde{Z}} - D_{\tilde{Z}}) \tilde{T}
$$
Note that $\hat{P}\_{IJIVE} = \hat{T}\_{IJIVE}$ because the residualization has already occurred and does not need to be repeated in the second stage.
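A sketch of IJIVE, again with dense matrices and the simulated data from above:

``` r
# Residualize y, T, and Z with respect to W, then run the leave-out
# procedure on the residualized instruments.
Zt <- M_w %*% Z
Tt <- drop(M_w %*% T_endog)
yt <- drop(M_w %*% y)
H_zt <- Zt %*% solve(crossprod(Zt), t(Zt))
d_zt <- diag(H_zt)
Phat_ijive <- (drop(H_zt %*% Tt) - d_zt * Tt) / (1 - d_zt)
beta_ijive <- sum(Phat_ijive * yt) / sum(Phat_ijive * Tt)
```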
### CJIVE definition
**Source:** Frandsen, Leslie, and McIntyre (2023)
This is a modified version of IJIVE proposed by Frandsen, Leslie, and McIntyre (2023). It is necessary when the errors are correlated within clusters (e.g., court cases assigned on the same day to the same judge). The modified version is given by:
$$
\hat{T}\_{CJIVE} = \hat{P}\_{CJIVE} = (I - \mathbb{D}(P_Z, \{ n_1, \dots, n_G \}))^{-1} (H_{\tilde{Z}} - \mathbb{D}(P_Z, \{ n_1, \dots, n_G \})) \tilde{T},
$$
where $\mathbb{D}(P_Z, \{ n_1, \dots, n_G \})$ is the block-diagonal matrix formed by zeroing out every entry of the projection matrix of $Z$ except those within each cluster's block.
In this package, the same adjustment (replacing the diagonal $D$ with its cluster block-diagonal counterpart $\mathbb{D}$) can be applied to all three estimators, though the theoretical details have only been worked out for IJIVE (free research idea).
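A sketch of the cluster adjustment, continuing the IJIVE example above (the cluster ids `cl` are hypothetical, and the block-diagonal piece is built here from $H_{\tilde{Z}}$; this is an illustration, not the package's exact construction):

``` r
# Replace the diagonal D with a block-diagonal matrix that keeps only
# within-cluster entries of the projection matrix.
cl <- sample(1:25, n, replace = TRUE)
same_cl <- outer(cl, cl, "==")   # TRUE for pairs in the same cluster
D_block <- H_zt * same_cl        # zero out all cross-cluster entries
Phat_cjive <- solve(diag(n) - D_block,
                    drop(H_zt %*% Tt) - drop(D_block %*% Tt))
beta_cjive <- sum(Phat_cjive * yt) / sum(Phat_cjive * Tt)
```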
### Standard errors
Heteroskedasticity-robust standard errors are given by
$$
\frac{\sqrt{\sum_i \hat{P}\_i^2 \hat{\varepsilon}\_i^2}}{\sum_i \hat{P}\_i T_i},
$$
where $\hat{\varepsilon}\_i = (M_W Y)\_i - (M_W T)\_i \hat{\beta}$. See [kolesarm/ivreg](https://github.com/kolesarm/ivreg/blob/master/ivreg.pdf) for derivations.
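As a sketch, computing this for the JIVE estimate from the running example (`yt`, `Tt`, `Phat_jive`, and `beta_jive` were defined in earlier sketches):

``` r
# eps_hat_i = (M_W y)_i - (M_W T)_i * beta_hat
eps_hat <- yt - Tt * beta_jive
se_jive <- sqrt(sum(Phat_jive^2 * eps_hat^2)) / sum(Phat_jive * T_endog)
```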
### Advantages of alternative estimators
Quoting from the papers that propose each:
- UJIVE: "UJIVE is consistent for a convex combination of local average treatment effects under many instrument asymptotics that also allow for many covariates and heteroscedasticity"
- IJIVE: "We introduce two simple new variants of the jackknife instrumental variables (JIVE) estimator for overidentified linear models and show that they are superior to the existing JIVE estimator, significantly improving on its small-sample-bias properties"
- CJIVE: "In settings where inference must be clustered, however, the [IJIVE] fails to eliminate the many-instruments bias. We propose a cluster-jackknife approach in which first-stage predicted values for each observation are constructed from a regression that leaves out the observation's entire cluster, not just the observation itself. The cluster-jackknife instrumental variables estimator (CJIVE) eliminates many-instruments bias, and consistently estimates causal effects in the traditional linear model and local average treatment effects in the heterogeneous treatment effects framework."
## Computational Efficiency
This package uses a number of linear-algebra tricks to speed up the computation of the JIVE (and its variants). First, the package is built on `fixest`, which estimates high-dimensional fixed effects very quickly. All credit to @lrberge for this.
All uses of the projection matrix, $H$, can be calculated efficiently from the fitted values of regressions. The diagonal of the hat matrix would normally be computed as $X_i (X'X)^{-1} X_i'$, which can be quite slow when there are high-dimensional fixed effects. Instead, we use sparse matrices and the following trick. Partition the covariates $W = [A, B]$, where $B$ contains the fixed effects. Then the projection matrix decomposes as $H_W = H_B + H_{M_B A}$. $H_B$ is quick to calculate since it consists only of dummy variables, and $H_{M_B A}$ is quick to calculate since $M_B$ "sweeps out" the fixed effects and is a byproduct of `fixest` estimation.
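Here is a toy numerical check of that decomposition (the group factor `g` and covariate `A_cov` are made up for illustration):

``` r
# Verify H_W = H_B + H_{M_B A}, where B is a matrix of FE dummies.
g <- factor(sample(1:5, n, replace = TRUE))
B <- model.matrix(~ g - 1)           # fixed-effect dummies
A_cov <- matrix(rnorm(n))            # one continuous covariate
H_b <- B %*% solve(crossprod(B), t(B))
A_res <- A_cov - H_b %*% A_cov       # M_B A: A with the FEs swept out
H_ma <- A_res %*% solve(crossprod(A_res), t(A_res))
WW <- cbind(A_cov, B)
H_ww <- WW %*% solve(crossprod(WW), t(WW))
max(abs(H_ww - (H_b + H_ma)))        # ~ 0 up to numerical error
```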
For IJIVE, another trick is used. Instead of actually residualizing everything, it is useful to keep the original $Z$ for computational efficiency (e.g., when $Z$ consists of high-dimensional fixed effects): if we actually residualized the $Z$ fixed effects, we could no longer use `fixest` to estimate them. For this reason, we use the [blockwise projection-matrix decomposition](https://en.wikipedia.org/wiki/Projection_matrix#Blockwise_formula) to split the hat matrix into two components that are easier to calculate with `fixest` regressions:
$$
H_{\tilde{Z}} = H_{M_W Z} = H_{[W Z]} - H_{W}.
$$
Then, through some algebra, $H_{\tilde{Z}} M_W T$ can be written as $H_{[W Z]} T - H_W T$, i.e. the difference between the fitted values of the unresidualized $T$ from a regression on $W$ and $Z$ and from a regression on $W$ alone. Likewise, $D_{\tilde{Z}}$ can be written as $D_{[W Z]} - D_W$.
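These identities are easy to verify numerically with the dense matrices from the earlier sketches:

``` r
# H_{M_W Z} = H_{[W Z]} - H_W, and likewise for the diagonals
max(abs(H_zt - (H_zw - H_w)))  # ~ 0
max(abs(d_zt - (d_zw - d_w)))  # ~ 0
```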
## Sources
[Ackerberg and Devereux (2009)](literature/Ackerberg_Deverux_2009.pdf)
[Angrist, Imbens, and Krueger (1999)](literature/Angrist_Imbens_Krueger_1999.pdf)
[Frandsen, Leslie, and McIntyre (2023)](literature/Frandsen_Leslie_McIntyre_2023.pdf)
[Kolesar (2013)](literature/Kolesar_2013.pdf)
[kolesarm/ivreg](https://github.com/kolesarm/ivreg)
[gphk-metrics/stata-manyiv](https://github.com/gphk-metrics/stata-manyiv)