dml-401k.Rmd
Here we reproduce the empirical example of Chernozhukov, V., Cinelli, C., Newey, W., Sharma, A., and Syrgkanis, V. (2021), "Long Story Short: Omitted Variable Bias in Causal Machine Learning," which estimates the effect of 401(k) eligibility on households' net total financial assets.
# load package
library(dml.sensemakr)
# load data
data("pension")
y <- pension$net_tfa # net total financial assets
d <- pension$e401    # 401(k) eligibility
x <- model.matrix(~ -1 + age + inc + educ + fsize + marr + twoearn + pira + hown, data = pension)
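Before fitting, a quick look at the working data can be useful (an optional check, not part of the original replication):
# optional sanity checks on the working data
dim(x)     # number of households and number of covariates
summary(y) # distribution of net total financial assets
mean(d)    # share of 401(k)-eligible households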
# run DML
dml.401k.plm <- dml(y, d, x, model = "plm", cf.folds = 5, cf.reps = 5)
#> Debiased Machine Learning
#>
#> Model: Partially Linear
#> Target: ate
#> Cross-Fitting: 5 folds, 5 reps
#> ML Method: outcome (ranger), treatment (ranger)
#> Tuning: dirty
#>
#>
#> ====================================
#> Tuning parameters using all the data
#> ====================================
#>
#> - Tuning Model for D.
#> -- Best Tune:
#> mtry min.node.size splitrule
#> 1 2 5 variance
#>
#> - Tuning Model for Y (partially linear).
#> -- Best Tune:
#> mtry min.node.size splitrule
#> 1 2 5 variance
#>
#>
#> ======================================
#> Repeating 5-fold cross-fitting 5 times
#> ======================================
#>
#> -- Rep 1 -- Folds: 1 2 3 4 5
#>
#> -- Rep 2 -- Folds: 1 2 3 4 5
#>
#> -- Rep 3 -- Folds: 1 2 3 4 5
#>
#> -- Rep 4 -- Folds: 1 2 3 4 5
#>
#> -- Rep 5 -- Folds: 1 2 3 4 5
# results under conditional ignorability
summary(dml.401k.plm)
#>
#> Debiased Machine Learning
#>
#> Model: Partially Linear
#> Cross-Fitting: 5 folds, 5 reps
#> ML Method: outcome (ranger, R2 = 27.242%), treatment (ranger, R2 = 11.517%)
#> Tuning: dirty
#>
#> Average Treatment Effect:
#>
#> Estimate Std. Error t value P(>|t|)
#> ate.all 9042 1361 6.643 3.07e-11 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> Note: DML estimates combined using the median method.
#>
#> Verbal interpretation of DML procedure:
#>
#> -- Average treatment effects were estimated using DML with 5-fold cross-fitting. In order to reduce the variance that stems from sample splitting, we repeated the procedure 5 times. Estimates are combined using the median as the final estimate, incorporating variation across experiments into the standard error as described in Chernozhukov et al. (2018). The outcome regression uses Random Forest from the R package ranger; the treatment regression uses Random Forest from the R package ranger.
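The point estimate and its confidence interval can also be extracted programmatically with the coef() and confint() methods (used below for the group effects); here the "ate.all" name is taken from the row label in the summary above:
# extract the ATE point estimate and its 95% confidence interval
coef(dml.401k.plm)["ate.all"]
confint(dml.401k.plm)["ate.all", ]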
# sensitivity analysis
sens.401k <- sensemakr(dml.401k.plm, cf.y = 0.04, cf.d = 0.03)
# summary
summary(sens.401k)
#> ==== Original Analysis ====
#>
#> Debiased Machine Learning
#>
#> Model: Partially Linear
#> Cross-Fitting: 5 folds, 5 reps
#> ML Method: outcome (ranger, R2 = 27.242%), treatment (ranger, R2 = 11.517%)
#> Tuning: dirty
#>
#> Average Treatment Effect:
#>
#> Estimate Std. Error t value P(>|t|)
#> ate.all 9042 1361 6.643 3.07e-11 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> Note: DML estimates combined using the median method.
#>
#> Verbal interpretation of DML procedure:
#>
#> -- Average treatment effects were estimated using DML with 5-fold cross-fitting. In order to reduce the variance that stems from sample splitting, we repeated the procedure 5 times. Estimates are combined using the median as the final estimate, incorporating variation across experiments into the standard error as described in Chernozhukov et al. (2018). The outcome regression uses Random Forest from the R package ranger; the treatment regression uses Random Forest from the R package ranger.
#>
#> ==== Sensitivity Analysis ====
#>
#> Null hypothesis: theta = 0
#> Signif. level: alpha = 0.05
#>
#> Robustness Values:
#> RV (%) RVa (%)
#> ate.all 7.3047 5.4621
#>
#> Verbal interpretation of robustness values:
#>
#> -- Robustness Value for the Bound (RV): omitted variables that explain more than RV% of the residual variation both of the treatment (cf.d) and of the outcome (cf.y) are sufficiently strong to make the estimated bounds include 0. Conversely, omitted variables that do not explain more than RV% of the residual variation of both the treatment and the outcome are not sufficiently strong to do so.
#>
#> -- Robustness Value for the Confidence Bound (RVa): omitted variables that explain more than RV% of the residual variation both of the treatment (cf.d) and of the outcome (cf.y) are sufficiently strong to make the confidence bounds include 0, at the significance level of alpha = 0.05. Conversely, omitted variables that do not explain more than RV% of the residual variation of both the treatment and the outcome are not sufficiently strong to do so.
#>
#> Confidence Bounds for Sensitivity Scenario:
#> lwr upr
#> ate.all 2573.494 15560.906
#>
#> Confidence level: point = 95%; region = 90%.
#> Sensitivity parameters: cf.y = 0.04; cf.d = 0.03; rho2 = 1.
#>
#> Verbal interpretation of confidence bounds:
#>
#> -- The table shows the lower (lwr) and upper (upr) limits of the confidence bounds on the target quantity, considering omitted variables with postulated sensitivity parameters cf.y, cf.d and rho2. The confidence level "point" is the relevant coverage for most use cases, and stands for the coverage rate for the true target quantity. The confidence level "region" stands for the coverage rate of the true bounds.
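If only the bounds for this scenario are needed, they can be extracted directly with confidence_bounds(), just as is done for the nonparametric model further below:
# confidence bounds for the PLM fit under the same sensitivity scenario
confidence_bounds(dml.401k.plm, cf.y = 0.04, cf.d = 0.03, level = 0.95)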
# contour plots
oldpar <- par(mfrow = c(1,2))
plot(sens.401k, which.bound = "lwr", bound.label = "Max match (3 years)", col.contour = "blue")
plot(sens.401k, which.bound = "upr", bound.label = "Max match (3 years)", col.contour = "blue")
par(oldpar)
# benchmarks
bench.plm <- dml_benchmark(dml.401k.plm, benchmark_covariates = c("inc", "pira", "twoearn"))
#>
#> === Computing benchmarks using covariate: inc ===
#>
#> Debiased Machine Learning
#>
#> Model: Partially Linear
#> Target: ate
#> Cross-Fitting: 5 folds, 5 reps
#> ML Method: outcome (ranger), treatment (ranger)
#> Tuning: dirty
#>
#>
#> ====================================
#> Tuning parameters using all the data
#> ====================================
#>
#> - Tuning Model for D.
#> -- Best Tune:
#> mtry min.node.size splitrule
#> 1 2 5 variance
#>
#> - Tuning Model for Y (partially linear).
#> -- Best Tune:
#> mtry min.node.size splitrule
#> 1 2 5 variance
#>
#>
#> ======================================
#> Repeating 5-fold cross-fitting 5 times
#> ======================================
#>
#> -- Rep 1 -- Folds: 1 2 3 4 5
#>
#> -- Rep 2 -- Folds: 1 2 3 4 5
#>
#> -- Rep 3 -- Folds: 1 2 3 4 5
#>
#> -- Rep 4 -- Folds: 1 2 3 4 5
#>
#> -- Rep 5 -- Folds: 1 2 3 4 5
#>
#>
#> === Computing benchmarks using covariate: pira ===
#>
#> Debiased Machine Learning
#>
#> Model: Partially Linear
#> Target: ate
#> Cross-Fitting: 5 folds, 5 reps
#> ML Method: outcome (ranger), treatment (ranger)
#> Tuning: dirty
#>
#>
#> ====================================
#> Tuning parameters using all the data
#> ====================================
#>
#> - Tuning Model for D.
#> -- Best Tune:
#> mtry min.node.size splitrule
#> 1 2 5 variance
#>
#> - Tuning Model for Y (partially linear).
#> -- Best Tune:
#> mtry min.node.size splitrule
#> 1 2 5 variance
#>
#>
#> ======================================
#> Repeating 5-fold cross-fitting 5 times
#> ======================================
#>
#> -- Rep 1 -- Folds: 1 2 3 4 5
#>
#> -- Rep 2 -- Folds: 1 2 3 4 5
#>
#> -- Rep 3 -- Folds: 1 2 3 4 5
#>
#> -- Rep 4 -- Folds: 1 2 3 4 5
#>
#> -- Rep 5 -- Folds: 1 2 3 4 5
#>
#>
#> === Computing benchmarks using covariate: twoearn ===
#>
#> Debiased Machine Learning
#>
#> Model: Partially Linear
#> Target: ate
#> Cross-Fitting: 5 folds, 5 reps
#> ML Method: outcome (ranger), treatment (ranger)
#> Tuning: dirty
#>
#>
#> ====================================
#> Tuning parameters using all the data
#> ====================================
#>
#> - Tuning Model for D.
#> -- Best Tune:
#> mtry min.node.size splitrule
#> 1 2 5 variance
#>
#> - Tuning Model for Y (partially linear).
#> -- Best Tune:
#> mtry min.node.size splitrule
#> 1 2 5 variance
#>
#>
#> ======================================
#> Repeating 5-fold cross-fitting 5 times
#> ======================================
#>
#> -- Rep 1 -- Folds: 1 2 3 4 5
#>
#> -- Rep 2 -- Folds: 1 2 3 4 5
#>
#> -- Rep 3 -- Folds: 1 2 3 4 5
#>
#> -- Rep 4 -- Folds: 1 2 3 4 5
#>
#> -- Rep 5 -- Folds: 1 2 3 4 5
bench.plm
#> gain.Y gain.D rho theta.s theta.sj delta
#> inc 0.14769 0.047807 0.35259 8990 12418 3428.7
#> pira 0.04655 0.005179 0.06979 8990 9101 111.4
#> twoearn 0.03641 0.005975 -0.36553 8990 8485 -504.6
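Here gain.Y and gain.D express the strength of each observed covariate on the scales of the sensitivity parameters cf.y and cf.d, and delta is (up to rounding) the change theta.sj - theta.s in the point estimate between the corresponding re-fits; income is by far the strongest benchmark. One way to use these values is to plug them back in as postulated sensitivity parameters (an illustrative sketch, positing a latent confounder as strong as income):
# sensitivity scenario for a latent variable as strong as income (illustrative)
sens.401k.inc <- sensemakr(dml.401k.plm, cf.y = 0.148, cf.d = 0.048)
summary(sens.401k.inc)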
## compute income quartiles
g1 <- cut(x[, "inc"], quantile(x[, "inc"], c(0, 0.25, 0.5, 0.75, 1), na.rm = TRUE),
          labels = c("q1", "q2", "q3", "q4"), include.lowest = TRUE)
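A quick check that the income groups have the expected sizes (optional):
table(g1) # number of households per income quartile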
## Nonparametric model
dml.401k.npm <- dml(y, d, x, groups = g1, model = "npm", cf.folds = 5, cf.reps = 5)
#> Debiased Machine Learning
#>
#> Model: Nonparametric
#> Target: ate
#> Cross-Fitting: 5 folds, 5 reps
#> ML Method: outcome (ranger), treatment (ranger)
#> Tuning: dirty
#>
#>
#> ====================================
#> Tuning parameters using all the data
#> ====================================
#>
#> - Tuning Model for D.
#> -- Best Tune:
#> mtry min.node.size splitrule
#> 1 2 5 variance
#>
#> - Tuning Model for Y (non-parametric).
#> -- Best Tune:
#> mtry min.node.size splitrule
#> 1 3 5 variance
#>
#>
#> ======================================
#> Repeating 5-fold cross-fitting 5 times
#> ======================================
#>
#> -- Rep 1 -- Folds: 1 2 3 4 5
#>
#> -- Rep 2 -- Folds: 1 2 3 4 5
#>
#> -- Rep 3 -- Folds: 1 2 3 4 5
#>
#> -- Rep 4 -- Folds: 1 2 3 4 5
#>
#> -- Rep 5 -- Folds: 1 2 3 4 5
# results under conditional ignorability
summary(dml.401k.npm)
#>
#> Debiased Machine Learning
#>
#> Model: Nonparametric
#> Cross-Fitting: 5 folds, 5 reps
#> ML Method: outcome (ranger, R2 = 27.015%), treatment (ranger, R2 = 11.538%)
#> Tuning: dirty
#>
#> Average Treatment Effect:
#>
#> Estimate Std. Error t value P(>|t|)
#> ate.all 7992 1212 6.596 4.21e-11 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Group Average Treatment Effect:
#>
#> Estimate Std. Error t value P(>|t|)
#> gate.q1 4316.3 839.2 5.144 2.69e-07 ***
#> gate.q2 2623.6 1317.8 1.991 0.046496 *
#> gate.q3 6894.2 1843.9 3.739 0.000185 ***
#> gate.q4 18068.1 4279.1 4.222 2.42e-05 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Note: DML estimates combined using the median method.
#>
#> Verbal interpretation of DML procedure:
#>
#> -- Average treatment effects were estimated using DML with 5-fold cross-fitting. In order to reduce the variance that stems from sample splitting, we repeated the procedure 5 times. Estimates are combined using the median as the final estimate, incorporating variation across experiments into the standard error as described in Chernozhukov et al. (2018). The outcome regression uses Random Forest from the R package ranger; the treatment regression uses Random Forest from the R package ranger.
# load ggplot2
library(ggplot2)
# coefficient plot under conditional ignorability
group.names <- paste0("gate.q", 1:4)
df <- data.frame(groups = 1:4, estimate = coef(dml.401k.npm)[group.names])
cis <- confint(dml.401k.npm)[group.names, ]
cis <- setNames(as.data.frame(cis), c("lwr.ci", "upr.ci"))
df <- cbind(df, cis)
ggplot(df, aes(x = groups, y = estimate)) +
  geom_line() +
  geom_ribbon(aes(ymin = lwr.ci, ymax = upr.ci), alpha = 0.1, col = "blue", fill = "blue") +
  theme_bw() +
  xlab("Income Groups by Quartiles") +
  ylab("ATE")
# sensitivity analysis
sens.401k.npm <- sensemakr(dml.401k.npm, cf.y = 0.04, cf.d = 0.03)
# summary
summary(sens.401k.npm)
#> ==== Original Analysis ====
#>
#> Debiased Machine Learning
#>
#> Model: Nonparametric
#> Cross-Fitting: 5 folds, 5 reps
#> ML Method: outcome (ranger, R2 = 27.015%), treatment (ranger, R2 = 11.538%)
#> Tuning: dirty
#>
#> Average Treatment Effect:
#>
#> Estimate Std. Error t value P(>|t|)
#> ate.all 7992 1212 6.596 4.21e-11 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Group Average Treatment Effect:
#>
#> Estimate Std. Error t value P(>|t|)
#> gate.q1 4316.3 839.2 5.144 2.69e-07 ***
#> gate.q2 2623.6 1317.8 1.991 0.046496 *
#> gate.q3 6894.2 1843.9 3.739 0.000185 ***
#> gate.q4 18068.1 4279.1 4.222 2.42e-05 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Note: DML estimates combined using the median method.
#>
#> Verbal interpretation of DML procedure:
#>
#> -- Average treatment effects were estimated using DML with 5-fold cross-fitting. In order to reduce the variance that stems from sample splitting, we repeated the procedure 5 times. Estimates are combined using the median as the final estimate, incorporating variation across experiments into the standard error as described in Chernozhukov et al. (2018). The outcome regression uses Random Forest from the R package ranger; the treatment regression uses Random Forest from the R package ranger.
#>
#> ==== Sensitivity Analysis ====
#>
#> Null hypothesis: theta = 0
#> Signif. level: alpha = 0.05
#>
#> Robustness Values:
#> RV (%) RVa (%)
#> ate.all 6.0887 4.5149
#> gate.q1 10.1360 7.0609
#> gate.q2 4.2935 0.6497
#> gate.q3 6.9704 3.6793
#> gate.q4 9.0980 5.5868
#>
#> Verbal interpretation of robustness values:
#>
#> -- Robustness Value for the Bound (RV): omitted variables that explain more than RV% of the residual variation of the outcome (cf.y) and generate an additional RV% of variation on the Riesz Representer (cf.d) are sufficiently strong to make the estimated bounds include 0. Conversely, omitted variables that do not explain more than RV% of the residual variation of the outcome nor generate an additional RV% of variation on the Riesz Representer are not sufficiently strong to do so.
#>
#> -- Robustness Value for the Confidence Bound (RVa): omitted variables that explain more than RV% of the residual variation of the outcome (cf.y) and generate an additional RV% of variation on the Riesz Representer (cf.d) are sufficiently strong to make the confidence bounds include 0, at the significance level of alpha = 0.05. Conversely, omitted variables that do not explain more than RV% of the residual variation of the outcome nor generate an additional RV% of variation on the Riesz Representer are not sufficiently strong to do so.
#>
#> The interpretation of sensitivity parameters can be further refined for each target quantity. See more below.
#>
#> Confidence Bounds for Sensitivity Scenario:
#> lwr upr
#> ate.all 1455.6475 14613.9203
#> gate.q1 1541.9920 7174.2669
#> gate.q2 -2073.1651 6651.5635
#> gate.q3 246.4833 13297.8375
#> gate.q4 4308.0514 31966.3894
#>
#> Confidence level: point = 95%; region = 90%.
#> Sensitivity parameters: cf.y = 0.04; cf.d = 0.03; rho2 = 1.
#>
#> Verbal interpretation of confidence bounds:
#>
#> -- The table shows the lower (lwr) and upper (upr) limits of the confidence bounds on the target quantity, considering omitted variables with postulated sensitivity parameters cf.y, cf.d and rho2. The confidence level "point" is the relevant coverage for most use cases, and stands for the coverage rate for the true target quantity. The confidence level "region" stands for the coverage rate of the true bounds.
#>
#> Interpretation of sensitivity parameters:
#>
#> -- cf.y: percentage of the residual variation of the outcome explained by latent variables.
#> -- cf.d: percentage gains in the variation of the Riesz Representer generated by latent variables:
#> ATE: cf.d measures the percentage gains in the average precision on the treatment regression.
#> -- For Group Average Treatment Effects (GATE), parameters are conditional on the relevant group.
# sensitivity contour plots
oldpar <- par(mfrow = c(1,2))
plot(sens.401k.npm, which.bound = "lwr", bound.label = "Max match (3 years)", col.contour = "blue")
plot(sens.401k.npm, which.bound = "upr", bound.label = "Max match (3 years)", col.contour = "blue")
par(oldpar)
# confidence bounds plot
# point bounds (level = 0) and 95% confidence bounds for each group
bds <- confidence_bounds(dml.401k.npm, cf.y = 0.04, cf.d = 0.03, level = 0)
bds <- setNames(as.data.frame(bds), c("lwr.bound", "upr.bound"))
cbds <- confidence_bounds(dml.401k.npm, cf.y = 0.04, cf.d = 0.03, level = .95)
cbds <- setNames(as.data.frame(cbds), c("lwr.cbound", "upr.cbound"))
df2 <- cbind(df, bds[-1, ], cbds[-1, ]) # drop the first (ate.all) row, keep the four GATEs
ggplot(df2, aes(x = groups, y = estimate)) +
  geom_line() +
  geom_ribbon(aes(ymin = lwr.bound, ymax = upr.bound), alpha = 0.1, col = "red", fill = "red") +
  geom_ribbon(aes(ymin = lwr.cbound, ymax = upr.cbound), alpha = 0.1, col = "blue", fill = "blue") +
  theme_bw() +
  xlab("Income Groups by Quartiles") +
  ylab("ATE")
# benchmarks
bench.npm <- dml_benchmark(dml.401k.npm, benchmark_covariates = c("inc", "pira", "twoearn"))
#>
#> === Computing benchmarks using covariate: inc ===
#>
#> Debiased Machine Learning
#>
#> Model: Nonparametric
#> Target: ate
#> Cross-Fitting: 5 folds, 5 reps
#> ML Method: outcome (ranger), treatment (ranger)
#> Tuning: dirty
#>
#>
#> ====================================
#> Tuning parameters using all the data
#> ====================================
#>
#> - Tuning Model for D.
#> -- Best Tune:
#> mtry min.node.size splitrule
#> 1 2 5 variance
#>
#> - Tuning Model for Y (non-parametric).
#> -- Best Tune:
#> mtry min.node.size splitrule
#> 1 2 5 variance
#>
#>
#> ======================================
#> Repeating 5-fold cross-fitting 5 times
#> ======================================
#>
#> -- Rep 1 -- Folds: 1 2 3 4 5
#>
#> -- Rep 2 -- Folds: 1 2 3 4 5
#>
#> -- Rep 3 -- Folds: 1 2 3 4 5
#>
#> -- Rep 4 -- Folds: 1 2 3 4 5
#>
#> -- Rep 5 -- Folds: 1 2 3 4 5
#>
#>
#> === Computing benchmarks using covariate: pira ===
#>
#> Debiased Machine Learning
#>
#> Model: Nonparametric
#> Target: ate
#> Cross-Fitting: 5 folds, 5 reps
#> ML Method: outcome (ranger), treatment (ranger)
#> Tuning: dirty
#>
#>
#> ====================================
#> Tuning parameters using all the data
#> ====================================
#>
#> - Tuning Model for D.
#> -- Best Tune:
#> mtry min.node.size splitrule
#> 1 2 5 variance
#>
#> - Tuning Model for Y (non-parametric).
#> -- Best Tune:
#> mtry min.node.size splitrule
#> 1 2 5 variance
#>
#>
#> ======================================
#> Repeating 5-fold cross-fitting 5 times
#> ======================================
#>
#> -- Rep 1 -- Folds: 1 2 3 4 5
#>
#> -- Rep 2 -- Folds: 1 2 3 4 5
#>
#> -- Rep 3 -- Folds: 1 2 3 4 5
#>
#> -- Rep 4 -- Folds: 1 2 3 4 5
#>
#> -- Rep 5 -- Folds: 1 2 3 4 5
#>
#>
#> === Computing benchmarks using covariate: twoearn ===
#>
#> Debiased Machine Learning
#>
#> Model: Nonparametric
#> Target: ate
#> Cross-Fitting: 5 folds, 5 reps
#> ML Method: outcome (ranger), treatment (ranger)
#> Tuning: dirty
#>
#>
#> ====================================
#> Tuning parameters using all the data
#> ====================================
#>
#> - Tuning Model for D.
#> -- Best Tune:
#> mtry min.node.size splitrule
#> 1 2 5 variance
#>
#> - Tuning Model for Y (non-parametric).
#> -- Best Tune:
#> mtry min.node.size splitrule
#> 1 2 5 variance
#>
#>
#> ======================================
#> Repeating 5-fold cross-fitting 5 times
#> ======================================
#>
#> -- Rep 1 -- Folds: 1 2 3 4 5
#>
#> -- Rep 2 -- Folds: 1 2 3 4 5
#>
#> -- Rep 3 -- Folds: 1 2 3 4 5
#>
#> -- Rep 4 -- Folds: 1 2 3 4 5
#>
#> -- Rep 5 -- Folds: 1 2 3 4 5
bench.npm
#> gain.Y gain.D rho theta.s theta.sj delta
#> inc 0.13180 0.133750 0.2280 8061 11686 3624.9
#> pira 0.04305 0.005263 0.1002 8061 8402 341.5
#> twoearn 0.01649 0.003024 -0.4763 8061 7523 -537.6
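As in the partially linear case, income is by far the strongest benchmark for the nonparametric fit as well. For reference, the two point estimates can be compared directly (a small sketch, assuming coef() returns the "ate.all" entry by name, as it does for the group effects above):
# compare the ATE point estimates of the two specifications
c(plm = coef(dml.401k.plm)["ate.all"],
  npm = coef(dml.401k.npm)["ate.all"])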