Recent Publications

We show that using modern estimation techniques (with penalized regression and cross-validation to select comparable peer firms) for event studies can both reduce expert witness discretion and produce more accurate stock return predictions.

This paper shows that, both conceptually and empirically, the exclusion of dual-class shares by index providers is unlikely to act as a deterrence mechanism.

This paper shows that the types of event studies commonly used in securities litigation fail during periods of market volatility, and proposes alternatives better suited to such periods.

Recent Posts

In this memo I test empirically whether issues arise with the event-study DiD even when the dynamic treatment effects are constant across cohorts. To do this I conduct a simulation similar to the one shown in my slides. The data-generating process is \[y_{it} = \alpha_i + \alpha_t + \tau_{it} + \epsilon_{it},\] where the unit fixed effects \(\alpha_i\) and the period fixed effects \(\alpha_t\) are each drawn from \(N(0, 1)\), and the white-noise error term \(\epsilon_{it}\) is drawn from \(N\left(0, \left(\frac{1}{2}\right)^2\right)\).
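A minimal sketch of this data-generating process, assuming (for illustration) a single treated cohort, a constant treatment effect \(\tau = 1\), and treatment beginning at period 5 — the actual simulation in the memo may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

N, T = 50, 10                          # units and periods (illustrative sizes)
alpha_i = rng.normal(0, 1, N)          # unit fixed effects ~ N(0, 1)
alpha_t = rng.normal(0, 1, T)          # period fixed effects ~ N(0, 1)
eps = rng.normal(0, 0.5, (N, T))       # white-noise error ~ N(0, (1/2)^2)

# Illustrative treatment assignment: half the units treated from period 5 on,
# with a constant effect tau for every treated unit-period.
tau = 1.0
treated = np.arange(N) < N // 2
post = np.arange(T) >= 5
tau_it = tau * np.outer(treated, post)

y = alpha_i[:, None] + alpha_t[None, :] + tau_it + eps
```

With a constant effect like this, the simple double difference of group means recovers \(\tau\) up to noise, which gives a baseline against which the staggered-cohort problems can be compared.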

In doing research for my dissertation I keep running across models that control for somewhat arbitrary variables (or at least variables whose inclusion is never justified). This is common in applied corporate finance and managerial accounting papers. As a result, I snarked. The issue is that in regressions for some outcome - say firm valuation (the dreaded Q) - we want to look at the change in the outcome variable around some treatment shock, but we also want to control for some variables.

Introduction In this post I expand on the implications of recent econometric work on issues with difference-in-differences (DiD) designs with staggered treatment rollout. For a longer discussion of these issues, and the details of newly proposed modifications to the standard two-way fixed effect regression-based DiD models, refer to my prior post here. Here I demonstrate the practical importance of correcting for these issues with staggered DiD on a live policy question: whether the adoption of legalized medical cannabis laws has a causal effect on opioid overdose mortality.

In recent years there has been a growing movement within certain factions of Congress, the judicial branch, and the legal academy to require that all financial regulations be subject to strict cost-benefit review (CBR). Rival commentators, meanwhile, argue that CBR as historically conceived would be ill-advised for financial regulation, given the interconnected nature of financial markets, poor data quality, and questions of practical political economy. In this review I explain the rationale behind agency-required CBR analysis, survey the arguments both for and against detailed CBR as applied to financial regulation, and explain how strictly implemented CBR would affect the viability of reforms like increased capital requirements.

Introduction In this methodological section I explain the issues that arise in difference-in-differences (DiD) designs when there are multiple units and more than two time periods, and the particular complications that arise when treatment occurs at staggered points in time. In the canonical DiD set-up (e.g. the Card and Krueger minimum wage study comparing New Jersey and Pennsylvania) there are two units and two time periods, with one of the units being treated in the second period.
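In the canonical two-unit, two-period case the DiD estimate is just a double difference. A toy numeric sketch (the numbers below are illustrative, not the actual Card-Krueger data):

```python
# Canonical 2x2 DiD: (treated post - treated pre) - (control post - control pre).
# The pre/post values here are made up for illustration.
nj_pre, nj_post = 20.4, 21.0   # treated unit (New Jersey), before / after
pa_pre, pa_post = 23.3, 21.2   # control unit (Pennsylvania), before / after

did = (nj_post - nj_pre) - (pa_post - pa_pre)   # ≈ 2.7
```

The control unit's change nets out the common time trend, and the within-unit differencing nets out fixed level differences between the units - the two assumptions that become delicate once there are many units treated at different times.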

Recent & Upcoming Talks

The difference-in-differences (DiD) research design is a popular method for testing changes in outcome variables across treated and untreated groups. While the set-up is intuitive and easy to implement in the canonical setting of two time periods and two groups, most modern research using DiD exploits the staggered implementation of treatment across many units and different time periods. Unfortunately, the common practice of using unit and time fixed effects along with an indicator variable for active treatment (the two-way fixed effects, or TWFE, estimator) has known flaws that can bias the parameter estimates in most settings. In this talk I’ll discuss the pitfalls of the common approach. Using simple simulation analyses I’ll show how the bias arises, and where the potential for bias is largest. In addition I will discuss new methods for conducting DiD analyses that overcome the flaws in the TWFE approach, and show the implications with an example from prior literature.
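The bias can be seen in a few lines of simulation. The sketch below (my own illustrative setup, not the talk's actual slides) uses two cohorts treated at different times with a treatment effect that grows in event time; the TWFE coefficient lands far below the true average effect, because already-treated units with still-growing effects serve as controls for later-treated units:

```python
import numpy as np

def twfe_estimate(y, d):
    """OLS of y on a treatment dummy plus unit and time fixed effects."""
    n, t = y.shape
    unit = np.repeat(np.eye(n), t, axis=0)        # unit dummies (unit-major order)
    time = np.tile(np.eye(t), (n, 1))[:, 1:]      # time dummies (drop one for collinearity)
    x = np.column_stack([d.ravel(), unit, time])
    beta, *_ = np.linalg.lstsq(x, y.ravel(), rcond=None)
    return beta[0]                                # coefficient on the treatment dummy

rng = np.random.default_rng(0)
n_per, t = 20, 10
cohorts = np.array([2] * n_per + [6] * n_per)     # staggered treatment start periods
periods = np.arange(t)
d = (periods[None, :] >= cohorts[:, None]).astype(float)

# Dynamic effect that grows with time since treatment (same path for both cohorts):
event_time = periods[None, :] - cohorts[:, None]
effect = np.where(d == 1, event_time + 1.0, 0.0)

y = (rng.normal(0, 1, 2 * n_per)[:, None]         # unit fixed effects
     + rng.normal(0, 1, t)[None, :]               # period fixed effects
     + effect
     + rng.normal(0, 0.5, (2 * n_per, t)))        # noise

true_att = effect[d == 1].mean()                  # average effect among treated cells
twfe = twfe_estimate(y, d)                        # far below true_att, can even go negative
```

With a constant (non-dynamic) effect the same estimator recovers the truth; it is the combination of staggered timing and dynamic effects that breaks it.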

Figuring out how to make sense of difference-in-differences analysis when treatment is staggered across units and time.

We investigate the use of modern statistical techniques in event studies conducted on single securities for the purpose of securities litigation. Single-firm event studies are widely used in civil litigation, with billions of dollars in settlements hinging on the outcome of the exercise. Prior work has explored modifying the standard single-firm event study design to provide more robust statistical inference, but little work has been done on methods that directly increase the precision of the excess return estimate. We take a prediction approach to the excess return calculation and find that substantial performance improvement is possible using modern machine learning methods.
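A stylized version of the prediction approach, on synthetic data: fit a penalized regression of the target firm's returns on a large pool of candidate peer returns over an estimation window, pick the penalty by a validation split (a stand-in for the cross-validation the paper describes), and take the event-day excess return as actual minus predicted. This is a sketch of the idea, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily returns: 200 candidate peers over 250 estimation days,
# with the target firm loading on only a handful of true peers.
n_days, n_peers = 250, 200
peers = rng.normal(0, 0.02, (n_days, n_peers))
true_beta = np.zeros(n_peers)
true_beta[:5] = [0.4, 0.3, 0.2, 0.2, 0.1]
target = peers @ true_beta + rng.normal(0, 0.01, n_days)

def ridge_fit(X, y, lam):
    """Closed-form ridge regression coefficients."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Choose the penalty on a holdout window (simplified cross-validation).
train, val = slice(0, 200), slice(200, 250)
lams = [0.001, 0.01, 0.1, 1.0]
errs = [np.mean((target[val] - peers[val] @ ridge_fit(peers[train], target[train], lam)) ** 2)
        for lam in lams]
beta = ridge_fit(peers, target, lams[int(np.argmin(errs))])

# On the event day, the excess (abnormal) return is actual minus predicted.
event_peers = rng.normal(0, 0.02, n_peers)   # hypothetical event-day peer returns
predicted = event_peers @ beta
```

The penalty does the peer selection implicitly: irrelevant candidates are shrunk toward zero, which is what tightens the prediction relative to an unpenalized regression on a hand-picked peer set.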



PhD Candidate

Stanford GSB

Aug 2016 – Present California

Summer Associate

Cravath, Swaine & Moore

Jun 2016 – Aug 2016 New York

Enforcement Intern

Securities & Exchange Commission

Jun 2015 – Aug 2015 New York

Research Fellow

Stanford Law School

Jun 2013 – Jun 2013 Stanford

Senior Consultant

Navigant Economics

Mar 2011 – May 2013 Oakland


  • 655 Knight Way, Stanford, CA 94305
  • DM Me