
Paper 1339-2017

Interrupted Time Series Power Calculation using DO Loop Simulations

Nigel L. Rozario, Charity G. Moore and Andy McWilliams, CORE-CHS/UNCC

ABSTRACT

Interrupted time series analysis (ITS) is a statistical method that uses repeated snapshots at regular time intervals to evaluate healthcare interventions in settings where randomization is not feasible. This method can be used to evaluate programs aimed at improving patient outcomes in real-world, clinical settings. In practice, the number of patients and the timing of observations are restricted. This paper describes a statistical program that helps statisticians identify optimal time segments within a fixed population size for an interrupted time series analysis. The program creates simulations using DO loops to calculate the power to detect changes over time that may be due to the interventions under evaluation. The parameters used in this program are the total sample size in each time period, the number of time periods, and the rate of the event before and after the intervention. The program gives the user the ability to specify different assumptions about these parameters and to assess the resultant power. The output from the program can help statisticians communicate the optimal evaluation design to stakeholders.

INTRODUCTION

Definition of ITS

Interrupted time series (ITS) is a statistical tool for detecting whether a policy or intervention has a greater effect than an underlying secular trend when a randomized trial design is not feasible (Ramsay et al, 2003). Ideally, ITS is used when outcomes can be evaluated using data collected for other purposes, such as administrative data or electronic medical records. Data are collected at multiple time points spread equally before and after an intervention. Additionally, the data require valid repeated measures and outcomes collected at short time intervals. The analysis entails an autoregressive form of segmented regression to analyze the interrupted time series data (Wagner et al, 2002).

Yt = β0 + β1·time + β2·intervention + β3·time_after_intervention + et (Wagner et al, 2002)

In the above equation, Yt is the average event rate (e.g., the rate of 30-day readmission) and is the dependent variable. The independent variables are time (a continuous variable), an intervention indicator (0 = no intervention, 1 = intervention), and time after intervention, a continuous variable counting the number of time units since the intervention was implemented.
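To make each term concrete, the fitted model can be evaluated at individual time points. Below is a minimal Python sketch with purely hypothetical coefficients (a flat baseline trend b1 = 0, a level drop b2 = -0.08, and a post-intervention slope b3 = -0.01, none of which come from the paper):

```python
# Segmented regression for an interrupted time series:
#   Yt = b0 + b1*time + b2*intervention + b3*time_after + et
# The coefficient values below are hypothetical, chosen only to
# illustrate the role of each term.

def predicted_rate(time, intervention, time_after,
                   b0=0.30, b1=0.0, b2=-0.08, b3=-0.01):
    """Expected event rate at one time point under the segmented model."""
    return b0 + b1 * time + b2 * intervention + b3 * time_after

# Last pre-intervention interval: only the baseline level and trend apply.
pre = predicted_rate(time=8, intervention=0, time_after=0)    # 0.30
# First post-intervention interval: level shift b2 plus one unit of b3.
post1 = predicted_rate(time=9, intervention=1, time_after=1)  # 0.21
```

A significant b2 thus corresponds to an immediate level change at the interruption, while b3 captures how the trend bends afterward.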

As an example, ITS analysis was used by Du et al to detect whether the addition of a black box warning about suicidal thinking on atomoxetine was associated with a change in prescribing patterns for this Attention Deficit Hyperactivity Disorder (ADHD) medication. The population included patients with an ADHD diagnosis who were prescribed either atomoxetine or stimulants between January 2004 and December 2007, drawn from the IMS LifeLink Health Plan Claims database. The authors discovered that adults were three times more likely to use atomoxetine than children aged 12 years or younger. An analysis stratified on age showed that the impact of the black box warning differed among the age groups of 12 years and younger, 13 to 18 years, and over 18 years (Du et al, 2012).

ITS designs allow the investigator to test not only the change in level (β2) but also the change in slope of an outcome (β3) associated with a change in policy or intervention. The method can also be used to assess the unintended consequences of intervention and policy changes through evaluation of other outcomes. Additionally, it can be used in stratified analyses to evaluate the differential impact of a policy change or intervention on subpopulations (Penfold and Zhang, 2013; Du et al., 2012), as illustrated by the age-stratified analysis of the black box warning by Du et al (2012) described above.

There are also a few limitations to applying ITS analysis. At least 8 observations are needed both before and after the intervention for sufficient power. Also, even when a control population exists, randomization is not employed, which leaves a significant chance for bias. Finally, inferences cannot be made about individual-level outcomes when the time series examines population rates (Penfold and Zhang, 2013).

Statisticians working with healthcare leaders often encounter the question of how to best evaluate the implementation of an intervention. From a study validity perspective, a pre/post design has major limitations due to secular trends, regression to the mean, and confounding. Conversely, the ITS design adds additional rigor with the inclusion of multiple time points pre and post intervention, thus testing for linear trends before and after intervention implementation, which may also be compared to trends within a contemporaneous control group. A minimum of 12 data points before intervention and 12 after intervention was suggested by Wagner et al. (2002), not for purposes of power but to adequately evaluate seasonal variation. Penfold and Zhang (2013) indicate that a minimum of 8 observations before and after the intervention are needed to have sufficient power to estimate the regression. A methodologist must balance the desire for multiple observations with the reality that too many segments within a fixed number of patients could result in small patient numbers compromising the stability of the estimates. For example, if 1000 patients are seen during 1 year, we could slice the time points 4 times, providing n=250 per period or 10 times, providing n=100 per period. We sought to have a tool available that allowed us to quickly determine the optimal ITS parameters with a given number of patients per time period regardless of the population being studied. As an example, the study that prompted the creation of this simulation pertained to implementing a transition program for patients being discharged from the hospital after a chronic obstructive pulmonary disease (COPD) exacerbation with the intention of decreasing rates of 30-day readmissions.

Impact of n per time period and # of time periods

The main purpose of this simulation exercise is to determine the design parameters for an interrupted time series analysis that will optimize power for testing effectiveness of an intervention. Simulations were created to assess the power to detect a change in outcome immediately after the intervention and in the deviation of the slope of the outcome during the post intervention period.

Figure 1: Simulation scenario for readmission rate with time

As an example, Figure 1 shows simulated readmission rates for N=2000 patients with eight intervals before and after intervention deployment, giving a sample size of 250 patients per interval. Thirty percent of patients had a 30-day readmission before the intervention, while 20% of subjects had the event immediately after, reflecting an immediate drop in the rate. In addition, the continuing improvement over the next 7 periods demonstrates a further decline in the event rate to just over 10%. Time After is counted only after the intervention. Power for detecting the intervention effect is calculated by simulating the random rates per interval, statistically testing the coefficients from an autoregressive model, and determining the proportion of simulated data sets in which the null hypothesis is rejected for β2 and β3. Two hypotheses are being tested: first, whether the decrease in the event rate comparing pre- and post-intervention is significant (the intervention effect); second, whether the slope of the pre-intervention trend line differs significantly from that of the post-intervention trend line.
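The power calculation just described can be sketched in a few lines. The Python illustration below is not the paper's SAS program: it fits ordinary least squares with a normal approximation to the t-test in place of the autoregressive model, and every default parameter value is an assumption chosen to mirror the Figure 1 scenario:

```python
import numpy as np
from math import erfc, sqrt

def simulate_power(n_per_interval=250, k=8, pre_p=0.30, post_p=0.20,
                   slope_after=-0.012, nsim=500, alpha=0.05, seed=1):
    """Estimate power to detect the level change (beta2) and slope change (beta3).

    Simplified sketch: OLS with a normal approximation stands in for the
    paper's autoregressive model; all defaults are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    time = np.arange(1, 2 * k + 1)                 # intervals 1..2k
    interv = (time > k).astype(float)              # 0 = pre, 1 = post
    t_after = np.where(time > k, time - k, 0.0)    # counts post intervals
    X = np.column_stack([np.ones(2 * k), time, interv, t_after])
    # True event probability: flat pre, immediate drop then decline post.
    p_true = np.where(interv == 0, pre_p, post_p + slope_after * (t_after - 1))
    rej_level = rej_slope = 0
    for _ in range(nsim):
        # Observed rate per interval from binomial event counts.
        y = rng.binomial(n_per_interval, p_true) / n_per_interval
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / (len(y) - X.shape[1])
        se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
        pvals = [erfc(abs(b / s) / sqrt(2)) for b, s in zip(beta, se)]
        rej_level += pvals[2] < alpha              # beta2: level change
        rej_slope += pvals[3] < alpha              # beta3: slope change
    return rej_level / nsim, rej_slope / nsim
```

The estimated power is simply the rejection proportion across simulated data sets; rerunning with different sample sizes and interval counts reproduces the trade-off the paper explores.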

RESULTS/PROGRAMMING

- Part 1: Data Step

All the analysis in this paper was done using SAS Enterprise Guide software version 6.1.

Below is the program (Figure 2) used for the simulation with a DO loop. This data step creates 1000 scenarios in which the following parameters are varied: the sample size, identical in the pre and post periods (N = 500, 1000, 1500, 2000); the probability of the event before the intervention (pre_prop = 0.15, 0.20, 0.25, 0.30) and after (post_prop = pre_prop − 0.10 or pre_prop − 0.05); and the number of intervals before and after (time_slice = 4, 6, 8, 10), spanning slightly below and above the number suggested by Penfold and Zhang (2013).

Simulation Parameters

- Simul: the number of data sets simulated
- N: sample size for the pre-intervention (or post-intervention) period
- Intervention: indicator for intervention (0 = no, 1 = yes)
- Pre_Prop: event rate before the intervention
- Post_Prop: event rate after the intervention
- Time_Slice: number of intervals into which the pre- or post-intervention period is divided
- Nevent: number of people having the event, generated from a random binomial distribution with N/Time_Slice as the sample size and pre_prop or post_prop as the population event rate
- Time_Axis: count of the time points across the pre and post periods (e.g., a Time_Slice of 4 gives four time points pre and four post)
- Time_After: time points after the intervention
- Pinterval: probability of the event over time (which may decrease or stay the same with time)
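As a quick cross-check of the grid these parameters define, the following Python sketch (an illustration only, mirroring the loop bounds in the data step that follows) counts the rows contributed by each simulated replicate:

```python
from itertools import product

# Parameter grid from the data step: n, intervention, time_slice, time,
# pre_prop, and post_prop (two values: pre_prop-0.10 and pre_prop-0.05).
rows = 0
for n, interv, ts in product((500, 1000, 1500, 2000), (0, 1), (4, 6, 8, 10)):
    for time in range(1, ts + 1):
        for pre in (0.15, 0.20, 0.25, 0.30):
            for post in (round(pre - 0.10, 2), round(pre - 0.05, 2)):
                rows += 1
print(rows)  # 1792 rows per value of simul
```

With 1000 replicates the data step therefore produces roughly 1.8 million rows, which is worth keeping in mind before widening any of the loop ranges.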

```sas
libname save "\\yourpath";

data Simulation;
  do simul = 1 to 1000;
    do n = 500 to 2000 by 500;
      do intervention = 0 to 1;
        do time_slice = 4 to 10 by 2;
          do time = 1 to time_slice;
            do pre_prop = 0.15 to 0.30 by 0.05;
              do post_prop = (pre_prop - 0.10) to (pre_prop - 0.05) by 0.05;
```
nin