Saturday, January 26, 2013

Lies, damned lies and statistics of antidepressant effectiveness

The BMJ has published a head-to-head about whether antidepressants are overprescribed, with Des Spence saying Yes and Ian Reid saying No. Reid cites the study by Fountoulakis & Möller (2011), which provided a re-analysis and re-interpretation of the Kirsch data, which I have mentioned previously (e.g. see post). Reid concludes, "Sadly, demonstrations of methodological flaws and selective reporting suggest that the conclusions [of Kirsch] were 'unjustified.'"

What Reid doesn't quote is the response by Kirsch et al (2012), which shows that the original calculations were in fact correct. The discrepancy comes from using different statistical techniques: the analysis by Fountoulakis & Möller treats individual studies as though they were equally powered, contrary to the standard meta-analytic practice of giving more weight to studies with larger samples than to those with smaller ones.
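To make the weighting point concrete, here is a rough sketch in Python. It is not a reconstruction of either group's actual method; the effect sizes and sample sizes are purely hypothetical. It simply contrasts an average that treats every trial as equally informative with the standard fixed-effect approach of weighting each trial by the inverse of its variance, so that larger trials count for more.

```python
# Rough illustration only: hypothetical effect sizes and sample sizes,
# not data from the Kirsch or Fountoulakis & Möller analyses.

effect_sizes = [0.10, 0.15, 0.60]   # standardised mean differences (hypothetical)
sample_sizes = [400, 350, 30]       # total N per trial (hypothetical)

# Approximate variance of a standardised mean difference with equal arms:
# var(d) ≈ 4/N + d^2/(2N)
variances = [4 / n + d ** 2 / (2 * n) for d, n in zip(effect_sizes, sample_sizes)]

# Treating all trials as equally informative versus standard
# inverse-variance (fixed-effect) pooling.
unweighted = sum(effect_sizes) / len(effect_sizes)
weights = [1 / v for v in variances]
weighted = sum(w * d for w, d in zip(weights, effect_sizes)) / sum(weights)

print(f"Unweighted mean effect size:      {unweighted:.2f}")  # 0.28, pulled up by the small trial
print(f"Inverse-variance weighted pooled: {weighted:.2f}")    # 0.14, dominated by the large trials
```

The small trial with the big apparent effect dominates the unweighted average but contributes very little once trials are weighted by their precision, which is why the choice of technique can shift the pooled estimate.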

Let's not get too hung up about the statistics! What is significant is that Reid uses a discrepancy like this to try to undermine Kirsch's conclusion. The fact is that the effect size in antidepressant trials is much smaller than is commonly assumed. Not everyone responds to antidepressants even in the clinical trials. It is possible that the small effect size could be explained by expectancy effects introduced through unblinding (e.g. see the article by Jo Moncrieff and me).
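To give a feel for what a "small" standardised effect size means in practice, the sketch below converts it into the probability that a randomly chosen drug-treated patient improves more than a randomly chosen placebo-treated patient (the so-called common language effect size), assuming normally distributed outcomes. The value d = 0.3 is purely illustrative, of the general order often quoted for antidepressant trials, not a figure taken from any specific analysis.

```python
from math import erf

def probability_of_superiority(d: float) -> float:
    """P(a randomly chosen drug-group patient improves more than a randomly
    chosen placebo-group patient), assuming normal outcomes with equal
    variance in both groups: Phi(d / sqrt(2)) = 0.5 * (1 + erf(d / 2))."""
    return 0.5 * (1 + erf(d / 2))

d = 0.3  # illustrative standardised mean difference
print(f"d = {d}: probability of superiority = {probability_of_superiority(d):.2f}")
# about 0.58, only modestly better than the 0.50 expected by chance alone
```

In other words, an effect size of that order means a drug-treated patient does better than a placebo-treated one only slightly more often than a coin flip would predict.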

2 comments:

  1. Please see our latest publication, which clearly shows that Kirsch et al were mistaken:

    http://www.ncbi.nlm.nih.gov/pubmed/23941527

    KN Fountoulakis

  2. Thanks for the reference, Dr Fountoulakis. But, as I said, "Let's not get too hung up about the statistics". Meta-analysis is not an exact science. Of course there are different results depending on which method is used. There's no one right method.

    I don't think the placebo amplification hypothesis depends on an initial severity effect, which I've always been sceptical about anyway (see my BMJ eletter). I take it you don't believe that any statistically significant effect could be due to placebo amplification. If so, why not?
