
Adaptive Design Series: A Lesson in the Interpretation of Results

June 19, 2012

Note: This article is part of a series about adaptive design that originally appeared on a blog written by Dr. Karen Kesler from 2010 to 2011. That blog is no longer active, but it contained some great information, so we wanted to re-post it here.

I had this fascinating discussion with one of my colleagues this week about the Whitehead boundaries and the interpretation of crossing the futility boundary. I had always treated the area below the futility boundary as the null hypothesis acceptance region and glossed over the fact that it is actually two regions. The lower portion is where you conclude that your treatment of interest is actually significantly worse than your control (the solid portion of the line), and the upper portion is where you conclude that there's no significant difference (the dashed portion of the line).

[Figure: futility boundary in an adaptive (sequential) design]
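
To make that geometry concrete, here is a minimal sketch of how an interim result might be classified against the two regions of the futility boundary. The straight-line boundaries follow the familiar triangular-test form on the score/information scale, but the constants a and c and the simple fixed-sample cut-off used to separate "significantly worse" from "no significant difference" are illustrative assumptions only, not the design of the study discussed here.

    # Illustrative sketch only: boundary constants and the "worse" cut-off
    # are assumptions, not taken from the study in question.
    import math

    def triangular_decision(Z, V, a=5.0, c=0.25):
        """Classify an interim look (Z, V) on the score/information scale.

        Upper boundary  Z =  a + c*V  : stop, conclude the new method is better.
        Lower boundary  Z = -a + 3*c*V: stop for futility.
        """
        upper = a + c * V
        lower = -a + 3 * c * V
        if Z >= upper:
            return "stop: experimental arm significantly better"
        if Z <= lower:
            # The futility boundary itself contains two regions. A naive
            # fixed-sample check (purely for illustration) separates
            # "significantly worse" from "no significant difference".
            if V > 0 and Z / math.sqrt(V) <= -1.96:
                return "stop: experimental arm significantly worse (solid portion)"
            return "stop for futility: no significant difference (dashed portion)"
        return "continue enrolling"

    # Two hypothetical crossings of the lower boundary:
    print(triangular_decision(Z=-0.5, V=8.0))   # dashed portion: futility only
    print(triangular_decision(Z=-6.0, V=8.0))   # solid portion: significant harm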

In the case he was examining, the investigators had crossed the boundary along its dashed portion and rightly stopped the study for futility. (Note that the picture is from Whitehead's book The Design and Analysis of Sequential Clinical Trials and does not reflect the results of the study in question.) The investigators concluded that the trial demonstrated that the experimental method provided no benefit over the standard method. Other people claimed that this conclusion was flawed because the study stopped early and therefore had little power to demonstrate a lack of difference between the methods. I feel like both sides have a valid argument, but they're comparing apples to oranges.

There's a big difference between stopping a study because you can't show efficacy and showing conclusively that two treatments are equivalent. I can understand the research community's frustration at not having a conclusive answer about the new method, but if the new method is riskier, more expensive, or more labor-intensive, there may be little interest in using it unless you can show it's significantly better. It would have been irresponsible for the study investigators to continue the study once they realized that they couldn't possibly achieve their goal of showing the new method to be better.
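
A rough back-of-the-envelope sketch shows why the two conclusions are so different: the naive confidence interval for the treatment effect at an early stop is usually far too wide to sit inside any reasonable equivalence margin. The interim numbers and the margin below are made up for illustration, and the calculation ignores the adjustment needed after sequential stopping.

    # Hypothetical numbers; naive CI ignoring the sequential-stopping adjustment.
    import math

    def naive_ci(Z, V, z_crit=1.96):
        theta_hat = Z / V                    # approximate effect estimate
        half_width = z_crit / math.sqrt(V)   # approximate standard-error-based width
        return theta_hat - half_width, theta_hat + half_width

    lo, hi = naive_ci(Z=-1.0, V=8.0)
    print(f"95% CI for the treatment effect: ({lo:.2f}, {hi:.2f})")

    # To claim equivalence, the whole interval would have to lie inside a
    # pre-specified margin, say (-0.20, +0.20) -- here it clearly does not.
    margin = 0.20
    print("equivalence shown" if -margin < lo and hi < margin else "equivalence NOT shown")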

Besides teaching me more about a method I thought I had down pat, this discussion also reminded me of the subtleties of working in clinical trials and how careful we need to be in our interpretation of results.