The difference-in-differences (DiD) research design is a popular method for testing changes in outcome variables across treated and untreated groups. While the setup is intuitive and easy to implement in the canonical setting of two time periods and two groups, most modern research using DiD exploits the staggered implementation of treatment across many units and time periods. Unfortunately, the common practice of using unit and time fixed effects along with an indicator variable for active treatment (the two-way fixed effects, or TWFE, estimator) has known flaws that can bias the parameter estimates in many settings. In this talk I’ll discuss the pitfalls of this common approach. Using simple simulation analyses, I’ll show how the bias arises and where the potential for bias is largest. In addition, I will discuss new methods for conducting DiD analyses that overcome the flaws of the TWFE approach, and show the implications with an example from the prior literature.
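
Not from the talk itself, but a minimal sketch of the kind of simulation described: a balanced panel with two treatment cohorts adopting at different times, and treatment effects that grow with time since adoption. All names, cohort timings, and parameter choices below are illustrative assumptions. With dynamic heterogeneous effects like these, the TWFE coefficient can even come out negative while every true effect is positive.

```python
import numpy as np

# Illustrative staggered design (assumed, not from the talk):
# early cohort treated from t=2, late cohort from t=6.
N, T = 40, 10
treat_start = np.where(np.arange(N) < N // 2, 2, 6)   # adoption period per unit

i_idx = np.repeat(np.arange(N), T)                    # unit index per observation
t_idx = np.tile(np.arange(T), N)                      # period index per observation
D = (t_idx >= treat_start[i_idx]).astype(float)       # active-treatment indicator

# Dynamic effect: grows with time since treatment -- the source of TWFE bias.
event_time = t_idx - treat_start[i_idx]
effect = np.where(D == 1, 1.0 + event_time, 0.0)

# Outcome: unit effect + time effect + treatment effect (noiseless for clarity).
rng = np.random.default_rng(0)
alpha = rng.normal(size=N)
gamma = rng.normal(size=T)
y = alpha[i_idx] + gamma[t_idx] + effect

# TWFE regression: y on unit dummies, time dummies (dropping t=0), and D.
X = np.column_stack([
    (i_idx[:, None] == np.arange(N)).astype(float),     # unit fixed effects
    (t_idx[:, None] == np.arange(1, T)).astype(float),  # time fixed effects
    D,
])
twfe = np.linalg.lstsq(X, y, rcond=None)[0][-1]       # coefficient on D
att = effect[D == 1].mean()                           # true average effect on treated
print(f"TWFE estimate: {twfe:.2f}  vs  true ATT: {att:.2f}")
# -> TWFE estimate: -0.17  vs  true ATT: 3.83
```

The sign flip happens because already-treated early adopters serve as controls for the late cohort; their still-growing effects are differenced out with a negative weight, which is exactly the kind of "forbidden comparison" the newer estimators avoid.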