Introduction
Interrupted time series (ITS) has become a core study design for the evaluation of public health interventions and health policies [1]. The design takes advantage of natural experiments whereby an intervention is introduced at a known point in time and a series of observations on the outcome of interest exist both before and after the intervention. The effect of the intervention is estimated by examining any change following the intervention compared with the “counterfactual”, represented by the expected ongoing trend in the absence of the intervention (Figure 1) [2]. ITS involves a pre–post comparison, controlling for the counterfactual baseline trend, within the same population; therefore, it can be used in situations where no control population is available [3], [4]. This also has the advantage that selection bias and confounding due to group differences, which threaten the reliability of nonrandomized controlled designs, are rarely a problem in ITS studies [2], [3]. Furthermore, because ITS incorporates the underlying trend, it controls for short-term fluctuations, secular trends, and regression to the mean [3], [4]. The basic ITS design also has limitations; for example, there is the potential for history bias whereby other events concurrent to the intervention may be responsible for an observed effect. In addition, instrumentation effects can occur if there are changes in the way the outcome is measured over time [3]. Previous studies have described these strengths and limitations of ITS in more detail and have provided guidance on its application [2], [4], [5]. Furthermore, methodological publications have discussed effective approaches for limiting the risk of history bias, including controlled ITS designs and multiple baseline designs [6], [7], [8].
One area that has not been covered in detail in the existing literature is how researchers should approach specifying the ITS model used in the analysis. As discussed previously, the ITS design involves making a comparison between the outcome observed following the intervention and the counterfactual. This comparison reduces to two key questions that define the estimated effect of the intervention [2]. First, how is the counterfactual defined? This involves modeling the preintervention trend. Second, how is the impact model of the intervention defined? That is, what type of effect do we hypothesize that the intervention will have on the outcome (such as whether the effect is gradual or abrupt, immediate or lagged)? This involves parameterizing the effect of the intervention relative to the counterfactual. Multiple alternative approaches exist for defining the counterfactual and the intervention impact model, and inappropriate model selection could bias results; yet ITS studies often fail to provide a clear justification for their choice of modeling approach [9].
In this article, we suggest approaches to ensure that model specification is objective and appropriate to the intervention and outcome under investigation. The first section discusses the factors that contribute to defining the counterfactual and the second, the factors that contribute to defining the impact model. For each of these sections, we use illustrative examples from a recent ITS study of the impact of major reforms to the English National Health Service on hospital activity (described in Box 1) [10] to highlight the pitfalls of incorrect model specification and then provide a framework for a suggested approach to selecting the model. Finally, we discuss sensitivity analysis and other approaches to dealing with uncertainty in model specification.
Case study
To illustrate the strengths and limitations of different approaches to model specification, we use data from a recent study evaluating the impact of the 2012 Health and Social Care Act in England on hospital admissions and outpatient specialist visits [10]. This policy aimed to involve general practitioners (GPs) in commissioning (planning and purchasing) secondary care through the establishment of GP-led Clinical Commissioning Groups. GP-led commissioning is expected to reduce health care costs by shifting care away from secondary care to primary and community settings [11]. We therefore hypothesized that the reforms would result in a relative reduction in secondary care activity (inpatient admissions and outpatient visits). The Health and Social Care Act was enacted in April 2012; there was then a 12-month period during which the Clinical Commissioning Groups worked alongside the existing health care administrative bodies before taking over fully independent commissioning in April 2013. We had quarterly data on all NHS hospital admissions and outpatient visits between the second quarter of 2007 and the final quarter of 2015. More details about the intervention and the data can be found in the original study [10].
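The study timeline above can be encoded straightforwardly for analysis. The following is a minimal sketch; the indicator names and the pandas setup are our own illustration, not taken from the original study.

```python
import pandas as pd

# Quarterly index from 2007 Q2 to 2015 Q4, with the Act enacted in 2012 Q2
# and fully independent commissioning from 2013 Q2 (hypothetical encoding).
quarters = pd.period_range("2007Q2", "2015Q4", freq="Q")
df = pd.DataFrame({"quarter": quarters})
df["time"] = range(len(df))                                   # elapsed quarters
df["post_act"] = (df["quarter"] >= pd.Period("2012Q2")).astype(int)
df["post_full"] = (df["quarter"] >= pd.Period("2013Q2")).astype(int)
# The 12-month transition is where post_act == 1 and post_full == 0.
```

Keeping both indicators allows the analysis to treat the enactment date and the start of fully independent commissioning as alternative (or joint) intervention points.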