
Tuesday, 18 December 2012

Ketamine: Magic Antidepressant Or Illusion? Revisited

There's a lot of interest in the idea that ketamine provides unparalleled rapid, powerful antidepressant effects, even in people who haven't responded to conventional antidepressants.

Earlier this year, I asked:
Ketamine - Magic Antidepressant, or Expensive Illusion?
There have now been several studies finding dramatic antidepressant effects of ketamine, the "club drug" aka "horse-tranquilizer". Great news? If you believe it. But hold your, er, horses... there's a problem.
My concern was that although depressed patients certainly do report feeling better after an injection of ketamine, compared to people given placebo, that doesn't prove that the drug is actually an antidepressant.

Rather, patients might be experiencing an enhanced, 'active' placebo effect, because ketamine causes subjectively powerful hallucinogenic experiences. So the placebo-controlled trials weren't really blinded.

To settle the question, I suggested a three-way trial comparing it both to an inert placebo, and to some other hallucinogen; if ketamine has a specific antidepressant effect, it should produce more improvement than the comparison drug.

This has never been done.

Given this background, a new trial from NIMH's Ketamine King, Carlos Zarate, makes interesting reading: A Randomized Trial of a Low-Trapping Nonselective N-Methyl-D-Aspartate Channel Blocker in Major Depression.

Zarate et al tried a novel drug, AZD6765, in depressed people. AZD6765 works much like ketamine in that it blocks brain NMDA receptors, but it is a less potent "trapping" blocker, meaning that it has less dramatic effects on the target receptors, at least in some respects.

In practice, this makes AZD6765 much less hallucinogenic than ketamine.

So it's interesting that, compared to placebo, the new drug only produced small benefits. On the MADRS depression symptom scale, patients felt a little better on AZD6765, but the boost only lasted a few hours.

The effect was far smaller than in an earlier ketamine trial, as my crudely mashed-up graph shows (though note that the patient populations were somewhat different - one bipolar and one unipolar depression - although their baseline severity was the same).


On ketamine, people experienced marked subjective effects; on AZD6765 they didn't, and couldn't tell whether they'd got drug or placebo. Is that why they got a smaller benefit?

This is what we'd see if NMDA blockers do have a modest antidepressant effect but the dramatic improvements seen on ketamine are largely active placebo phenomena. Then again, it's also consistent with ketamine being a powerful antidepressant and AZD6765 just being less effective because it's a milder blocker of NMDA - effectively, a low dose of ketamine.

To tell the difference, we need... an active placebo controlled trial, like I've been banging on about for ages. But I wasn't the first one to suggest it - that was none other than Carlos Zarate et al in 2006.


Zarate CA Jr, Mathews D, Ibrahim L, Chaves JF, Marquardt C, Ukoh I, Jolkovsky L, Brutsche NE, Smith MA, and Luckenbaugh DA (2012). A Randomized Trial of a Low-Trapping Nonselective N-Methyl-D-Aspartate Channel Blocker in Major Depression. Biological Psychiatry PMID: 23206319

Saturday, 15 December 2012

Neither Drugs Nor Therapy Prevent Psychosis

Neither medication nor psychotherapy is effective in improving the prognosis for youngsters considered to be at high risk of developing psychosis, according to a major study just published.

The idea of identifying and treating young people at risk of becoming psychotic - because of a family history of schizophrenia, or because they're showing some mild symptoms - has become very fashionable lately. But can we really do anything to pre-empt the disorder?

In this trial, 115 "ultra-high risk" Australian subjects were randomized to one of three treatment conditions; those who didn't agree to treatment were simply followed up to see what happened.

The treatments didn't work. Here's the smoking gun, showing the proportion who didn't go psychotic over time:

This shows all four of the subject groups did pretty much the same in terms of their likelihood of becoming psychotic. Neither cognitive therapy, nor the antipsychotic drug risperidone (at a low dose) had any effect: those given 'supportive therapy' (basically: sympathetic chats) and a placebo pill did just as well.

There probably wasn't even a placebo effect: none of the three treatment groups did better than people who got no treatment at all (monitoring group), although people weren't randomly assigned to that group, so that's a little less clear.

Is this a surprise? Yes, if you believed the early studies to examine this question which claimed great things for drugs and therapy. But the current findings are no shock if you've been following the (much larger) recent trials - for example the British one from earlier in the year, which found zero benefit of cognitive therapy.

Early small trials have a nasty habit of not working out in the long run.

The other lesson here is that even "ultra-high risk" folks usually don't get psychotic: only about 10-20% of them, in fact, became ill in the first two years of this study; the British results I mentioned are very similar.

So is this really "ultra high"? Relatively, yes it is; even a 10% risk is far higher than the chance that a random person on the street would have. But in absolute terms, perhaps not.

A concern here is that rounding these folks up, labelling and 'treating' them might make their lives worse, or even increase the risk of psychosis. That's not just my opinion: that's what the very cognitive therapists who eagerly run these trials believe (or ought to, if they're being consistent with their own theories).

One of the key ideas in cognitive accounts of psychosis is that the belief and fear that one is 'going crazy', or that you're otherwise abnormal, is itself a major source of stress that actually leads to worsening of symptoms.

What could be scarier than being told you're at "ultra high risk"?

Preventing psychosis is a great idea in theory. But most bad ideas are.

McGorry, P., Nelson, B., Phillips, L., Yuen, H., Francey, S., Thampi, A., Berger, G., Amminger, G., Simmons, M., Kelly, D., Thompson, A., and Yung, A. (2012). Randomized Controlled Trial of Interventions for Young People at Ultra-High Risk of Psychosis. The Journal of Clinical Psychiatry DOI: 10.4088/JCP.12m07785

Monday, 5 November 2012

Exercise And Depression Revisited

A new study has found little evidence that aerobic exercise helps treat depression, contrary to popular belief.


Danish researchers Krogh and colleagues randomly assigned 115 depressed people to one of two exercise programs. One was a strenuous aerobic workout - cycling for 30 minutes, 3 times per week, for 3 months. The other was various stretching exercises.

The idea was that stretching was a kind of placebo control group on the grounds that, while it is an intervention, it's not the kind of exercise that gets you fit. It doesn't burn many calories, it doesn't improve your cardiovascular system, etc. Aerobic exercise is the kind that's most commonly been proposed as having an antidepressant effect.

So what happened? Not much. Both groups got less depressed but there was zero difference between the two conditions. The cyclists did get physically fitter than the stretchers, losing more weight and improving on other measures. But they didn't feel any better.

If this is true, it might mean that the antidepressant effects of aerobic exercise are psychological rather than physical - it's about the idea of 'exercising', not the process of becoming fitter.

While many trials have found modest beneficial effects of exercise vs a "control condition", the control condition was often just doing nothing much - such as being put on a waiting-list. So the placebo effect or the motivational benefits of 'doing something', rather than the effects of exercise per se, could be behind it. In the current study though the stretching avoided that problem.

As I said in a post about a previous paper, Exercise and Depression: It's Complicated:
The idea that exercise is a useful treatment for depression: it's got something for everyone. For doctors, it's attractive because it means they can recommend exercise - which is free, quick, and easy, at least for them - instead of spending the time and money on drugs or therapy. Governments like it for the same reason, and because it's another way of improving the nation's fitness. For people who don't like psychiatry, exercise offers a lovely alternative to psych drugs - why take those nasty antidepressants if exercise will do just as well? But this doesn't mean it's true.
This was a moderate sized study, and one study by itself doesn't prove much - any more than one single political poll does. From personal experience I think there's a good chance strenuous aerobic exercise can boost mood... but this is a reminder that the picture on exercise and depression is not quite as clear as the recent enthusiasm for it suggests...

Krogh J, Videbech P, Thomsen C, Gluud C, and Nordentoft M (2012). DEMO-II Trial. Aerobic Exercise versus Stretching Exercise in Patients with Major Depression - A Randomised Clinical Trial. PLoS ONE, 7 (10) PMID: 23118981

Friday, 3 August 2012

DSM-5 R.I.P?

Yesterday, the proposed new DSM-5 revision of the American Psychiatric Association's "Bible of Psychiatry" came under yet more criticism.



Aaron T. Beck, the father of currently-mega-popular cognitive behavioural therapy, started it off with an attack on the upcoming changes to one diagnosis, Generalized Anxiety Disorder; but many of the points also apply to the other DSM-5 proposals:
The lack of specific features, which is the primary issue for GAD, will not be addressed in DSM-5. The hallmark of the condition will remain pathological worry, although it also characterizes other disorders. Likewise, the proposed behavioral diagnostic criteria lack specificity for GAD, and it is not clear how these will be assessed. The proposed changes will lower the diagnostic threshold for GAD in DSM-5... many currently subthreshold cases will qualify for this diagnosis. The likely inclusion of many such "false-positives" will result in an artificial increase in the prevalence of GAD and will have further negative consequences.
Then from across the Atlantic, and also across the psychotherapy-vs-medication divide, came another piece of criticism. The authors are all associated with the European Medicines Agency (EMA, Europe's equivalent of the FDA), or with national drug regulators. Although they're writing in a personal capacity, this is still big news if you ask me.

These authors start out by saying that the EMA is broadly in favour of DSM reform, but they then attack one of the key DSM-5 innovations - the move towards 'dimensional measures' of symptoms in addition to diagnoses:
One of our main concerns is related to potential future [drug] indications based on an effect on a dimension that is independent of diagnostic categories (although we acknowledge that non-specific claims are common in other areas, such as analgesics for pain). As an example, cognitive impairments are common in psychiatric disorders, but they do not have a unique clinical pattern or a unitary cause.

We therefore believe that, at present, such a cross-cutting approach may increase heterogeneity in patient populations and make the assessment of the benefit–risk balance more difficult. Similarly, the use of dimensions as key secondary end points in many different diagnostic categories may lead to pseudospecific indications and polypharmacy. As a general rule, a therapeutic indication should be a well-recognized clinical entity that is clearly distinguishable from other conditions...
They also echo Beck in warning of over-diagnosis and over-medicalization:
Current proposals to reclassify some conditions that were subthreshold or prodromal as distinct syndromes or disorders could have implications for clinical trials. The inclusion of milder or very early cases of psychiatric disorders may lead to an increase in the number of non-disordered (false-positive) patients in clinical trials, and to an increase in the placebo effect, as less severe cases are more likely to respond to placebo. It may therefore be difficult to show a statistically significant difference [of drug over placebo]...
This raises another highly controversial issue: the risk of medicalization of the normal population. In this respect, a strong concern comes from the proposal to remove bereavement exclusion from the criteria for major depressive disorder, implying that all individuals with ‘normal grief’ might be considered as patients in the future.
Regular readers will remember that I've covered both overdiagnosis screwing up clinical trials, and the bereavement debate.

Two and a half years ago, shortly after the first draft of the DSM-5 was made public, I predicted that the eventual release of DSM-5 would be a non-event because, by then, it would have been widely debated and criticized, destroying the illusion of expert consensus that any such document must have in order to succeed.

I think events have borne this out. An awful lot of professionals, patients, and their relatives, will reject the changes in favour of sticking with the DSM-IV or other criteria. Without swift and general acceptance, a document like the DSM is just paper. It seems increasingly likely that the DSM-5 is going to be dead on arrival.

Starcevic V, Portman ME, and Beck AT (2012). Generalized anxiety disorder: between neglect and an epidemic. The Journal of Nervous and Mental Disease, 200 (8), 664-7 PMID: 22850300

Florence Butlen-Ducuing et al (2012). DSM‑5 and clinical trials in psychiatry: challenges to come? Nature Reviews: Drug Discovery DOI: 10.1038/nrd3811

Thursday, 17 May 2012

Another Antidepressant Crashes & Burns


Yet another "promising" novel antidepressant has failed to actually treat depression.

That's not an uncommon occurrence these days, but this time, the paper reporting the findings is almost as rubbish as the drug: Translational evaluation of JNJ-18038683, a 5-HT7 receptor antagonist, on REM sleep and in major depressive disorder

So, Pharma giant Janssen invented JNJ-18038683. It's a selective antagonist at serotonin 5HT-7 receptors, making it pharmacologically rather unusual. They hoped it would work as an antidepressant. It didn't - in a multicentre randomized controlled trial of 230 depressed people, it had absolutely no benefits over placebo. A popular existing drug, citalopram, failed as well:

About the only thing JNJ-18038683 did do in humans was to reduce the amount of dreaming REM sleep per night. This REM suppressing effect is also seen with other antidepressants and this is evidence that the drug does do something - just not what it's meant to. Being charitable you could call this a failed trial.

Ouch! But it gets better. Unhappy that JNJ-18038683 bombed, Janssen reached for their copy of the Cherrypicker's Manifesto. This is a new statistical method, proposed by fellow Pharma company GSK in a 2010 paper, which consists of excluding data from study centres with a very high (or very low) placebo response rate.

Anyway, after applying this "filter" JNJ-18038683 seemed to do a bit better than placebo, but the benefit over placebo still wasn't statistically significant - with a p value of 0.057, the wrong side of the sacred p=0.05 line (on page 33).
Yet Page 33's "trend towards statistical significance" magically becomes "significant" - in the Abstract:
[with] a post hoc analyses (sic) using an enrichment window strategy... there was a clinically meaningful and statistically significant difference between JNJ-18038683 and placebo.
Well, no, there wasn't actually. It was only a trend. Look it up.

That aside, the problem with the whole filter idea is that it could end up biasing your analysis in favour of the drug, leading to misleading results. The original authors warned that "data enrichment is often perceived as a way of improperly introducing a source of bias... In conventional RCTs, to overcome the bias risk, the enrichment strategy should be accounted for and pre-planned in the study protocol." They should know, as they invented it, but Janssen rather oddly say the exact opposite: "This methodology cannot be included in a protocol prospectively as it will introduce operational bias in that scheme."

Hmm.
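To see why this kind of post hoc exclusion is risky, here's a toy simulation - invented numbers throughout, not the actual GSK method or the Janssen data. The drug in it has no effect at all, yet dropping the centres where placebo patients improved most reliably manufactures an apparent drug benefit: the retained placebo centre means are selected on noise, while the drug means are left untouched.

```python
import random
import statistics

def simulate_trial(n_centres=40, n_per_arm=10, rng=None):
    """One multi-centre trial of a drug with NO true effect: both arms
    draw improvement scores from the same distribution (pure noise)."""
    centres = []
    for _ in range(n_centres):
        placebo_mean = statistics.mean(rng.gauss(8, 4) for _ in range(n_per_arm))
        drug_mean = statistics.mean(rng.gauss(8, 4) for _ in range(n_per_arm))
        centres.append((placebo_mean, drug_mean))
    return centres

def drug_minus_placebo(centres):
    return (statistics.mean(d for _, d in centres)
            - statistics.mean(p for p, _ in centres))

def drop_high_placebo_centres(centres, drop_fraction=0.25):
    """Post hoc 'enrichment': exclude the centres with the highest
    placebo response - i.e. selecting centres on noise."""
    cutoff = sorted(p for p, _ in centres)[int(len(centres) * (1 - drop_fraction))]
    return [c for c in centres if c[0] < cutoff]

rng = random.Random(42)
naive, enriched = [], []
for _ in range(500):
    centres = simulate_trial(rng=rng)
    naive.append(drug_minus_placebo(centres))
    enriched.append(drug_minus_placebo(drop_high_placebo_centres(centres)))

print(f"all centres: drug - placebo = {statistics.mean(naive):+.2f}")
print(f"high-placebo centres dropped: drug - placebo = {statistics.mean(enriched):+.2f}")
```

The naive comparison averages out to roughly zero, as it should; the "enriched" one doesn't. This is exactly why the method's originators said the filter must be pre-planned in the protocol.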

Anyway, even after the filter technique, citalopram didn't work either... bad news for citalopram, except, was it citalopram at all? This is really unbelievable: Janssen don't seem clear on whether they compared their drug to citalopram, or to escitalopram - a quite different drug.

They say "citalopram" in most places, but "escitalopram" appears instead in three places, including, mysteriously, in a "hidden" text box in the graph I showed earlier:

I'm not making this up: I stumbled upon a text box which is invisible, but if you select it with the cursor, you find it contains "escitalopram"! I have no idea what the story behind that is, but at best it is seriously sloppy.

Come on Janssen. Raise your game. In the glory days of dodgy antidepressant research, your rivals were (allegedly) concealing data on suicides and brushing whole studies under the carpet, to make their drugs look better. Despicable, but at least it had a certain grandeur to it.

Bonaventure, P., Dugovic, C., Kramer, M., De Boer, P., Singh, J., Wilson, S., Bertelsen, K., Di, J., Shelton, J., Aluisio, L., Dvorak, L., Fraser, I., Lord, B., Nepomuceno, D., Ahnaou, A., Drinkenburg, W., Chai, W., Dvorak, C., Carruthers, N., Sands, S., and Lovenberg, T. (2012). Translational evaluation of JNJ-18038683, a 5-HT7 receptor antagonist, on REM sleep and in major depressive disorder. Journal of Pharmacology and Experimental Therapeutics DOI: 10.1124/jpet.112.193995

Sunday, 22 April 2012

The Amazing Financial Robot Scam

The BBC reports on an interesting example of a very modern scam: US charges British twins over $1.2m 'stock robot' fraud.


The scam had two parts. For investors, there was the "stock picking robot" called Marl, which supposedly told you which stocks to buy. You could buy a copy of Marl for $28,000 - or get a newsletter featuring Marl's wisdom, for just $47.

In reality Marl didn't pick anything. The stock tips were provided by the teenage scammers, the Hunters, themselves. Not because they thought they were good stocks, but because the companies behind the stocks paid the Hunters fees for their promotional services via a separate "equitypromoter.com".

What's interesting about the scheme is that everything "worked", just not the way it was meant to. Investors paid to get tips as to what stocks would rise; they did rise, just not for the reasons they thought.

So Marl was a lot like a quack treatment in medicine: one that claims to treat a certain disease, and does indeed make people who take it feel better - but through the placebo effect, contrary to what it claims.

There are other similarities too, as you can find out on the rather fascinating good-stocks.com site which helped sell Marl. Like many quack treatments it had:
  • An elaborate 'mechanism of action' that blinds with science - Marl uses an "evolutionary framework" to "Develop what professional traders call a 'sixth sense'" and can "process 1,986,832 mathematical calculations per second."
  • Lots of amazing success stories and testimonials from satisfied customers
  • An attractive creation myth - Marl was invented by "Two Uber Geeks" who both had a record of success in more conventional stock trading, but unlike their conventional colleagues, were able to invent Marl by thinking outside the box; this is reminiscent of the many quacks who simultaneously flaunt their medical or academic qualifications while accusing medicine and academia of ignoring them.
Overall this is a fascinating story of greed and lies and if you like that sort of thing you'll enjoy surveying the electronic ruins of a classic scam e.g. here and here...

Wednesday, 7 March 2012

Ketamine - Magic Antidepressant, or Expensive Illusion?

Not one but two new papers have appeared from the Carlos Zarate group at NIMH reporting that a single injection of the drug ketamine has rapid, powerful antidepressant effects.

One placebo-controlled study found a benefit in depressed bipolar patients who were already on mood stabilizers. The other found benefits in treatment-resistant major depression, though ketamine wasn't compared to placebo that time. Here's the bipolar trial:


There have now been several studies finding dramatic antidepressant effects of ketamine, a compound that all journalists seem contractually bound to call either a "club drug" or a "horse-tranquilizer". Great news?

If you believe it. But hold your, er, horses... there's a problem. As I said almost 3 years ago about one of the earlier ketamine trials:
In theory, the trial was double blind - neither the patients nor the doctors knew whether they were getting ketamine or placebo. But you'll know when you've been injected with 0.5mg/kg ketamine. You get high. That's why people take it [recreationally]. The study can't really be called double blind.
To their credit, Zarate et al did acknowledge this, and suggested that in future ketamine could be compared to another drug which produces noticeable effects. But they really should have done that to begin with.
It's now 2012, and there have still not been any published studies comparing ketamine to an active comparator, i.e. a different drug that produces noticeable psychoactive effects, to avoid unblinding. This means it's 12 years since the initial pilot report on ketamine in depression, and 6 years since the first large trial appeared.

The authors of the 2006 paper themselves wrote that "limitations in preserving study blind may have biased patient reporting... One potential study design in future studies with ketamine might be to include an active comparator" and suggested amphetamine for the big role.

Good idea. But six years later, we're still waiting. Which is really a bit silly. There have been dozens of papers written about the possible antidepressant effects of ketamine, from human trials to mouse work. That's a lot of research dollars (and dead mice) on something that might just be an active placebo.

Looking at the registered ketamine research on clinicaltrials.gov, I found that four active-comparator ketamine trials are in the pipeline (1,2,3,4), plus one cancelled (5). Only one is for depression, though; the others are for OCD, cocaine dependence and suicidal ideation.

In all of these trials a benzodiazepine is the active comparator. Is that a good idea? Well, it's certainly better than nothing, but I wonder.

An active comparator has to "make an impression" on the patient equal to that produced by the real drug.  The null hypothesis, remember, is that ketamine has no specific antidepressant effect. That means it produces improvement through a combination of a) the placebo effect (expectation) and b) non-specific psychoactive changes.

More on that second one: any psychoactive drug might relieve depression by "taking your mind off it" and a change in mental state, as provided by a drug, also provides a demonstration that "I won't always feel this way". By showing that states of consciousness are products of brain chemistry, almost any drug could therefore offer a "glimmer of hope" to the depressed. If all this sounds very subjective, it is, but that's the point. Psychiatry is.

Would a benzo make as big an impression as 0.5 mg/kg ketamine IV? It's impossible to predict, really; so we'd need to ask people about the subjective strength of the drug effect. Personally, I worry that a lot of people just get sleepy on benzos and don't really feel much, so I'd prefer they used something a bit more hard-hitting like amphetamine, but maybe that's just me.

There's a deeper problem though. Suppose our ketamine-benzo trial finds no difference between ketamine and benzo. A critic could say, ah, but maybe it was just a "failed trial", so it doesn't overturn the positive studies. The patients weren't properly diagnosed, or weren't depressed enough, or were too depressed, etc.

Nitpicking such differences between studies is a well-practiced art.

Critics could complain in other ways if the study did find a benefit of ketamine. As I see it, the only way to settle this once and for all is to do a three-way randomized controlled trial - inactive placebo vs. active comparator vs. ketamine.

That way, if it's a failed trial, we'd know: there'd be no difference between ketamine and the inactive placebo. If there was a difference, but the active comparator was just as good as ketamine, that means it was all about nonspecific effects. Finally, if ketamine was better than the other two conditions, we could be pretty confident it was really working.
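The logic of that three-way design can be sketched in code - all numbers here are invented for illustration, not taken from any trial. Improvement is modelled as a baseline placebo response, plus an expectation boost whenever the patient can tell they got an active drug (the broken blind), plus any genuine pharmacological effect.

```python
import random
import statistics

def outcome(felt_drug_effect, true_effect, rng):
    """Improvement = noise + an 'active placebo' expectation boost if the
    patient can tell they got an active drug + any genuine drug effect."""
    improvement = rng.gauss(5, 2)      # baseline placebo response (invented)
    if felt_drug_effect:
        improvement += 4               # expectation boost from unblinding (invented)
    return improvement + true_effect

def three_arm_trial(ketamine_true_effect, n=200, rng=None):
    inert = [outcome(False, 0, rng) for _ in range(n)]
    active = [outcome(True, 0, rng) for _ in range(n)]   # e.g. a hallucinogen with no antidepressant action
    ketamine = [outcome(True, ketamine_true_effect, rng) for _ in range(n)]
    return tuple(statistics.mean(g) for g in (inert, active, ketamine))

rng = random.Random(0)
# Scenario 1: ketamine is only an active placebo (true effect = 0):
# the pattern is inert < active ~= ketamine
print("active placebo only:", three_arm_trial(0, rng=rng))
# Scenario 2: ketamine has a genuine effect on top of the unblinding:
# the pattern is inert < active < ketamine
print("genuine effect:", three_arm_trial(3, rng=rng))
```

The point is that a two-arm ketamine-vs-inert-placebo trial produces the same big gap under both scenarios; only the active comparator arm separates them.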

Also important is the question of volunteer expertise; subjects shouldn't be able to tell what drug they're on, but people who'd taken ketamine and/or the comparator drug before might be able to do that, so you'd want naive volunteers.

In conclusion: It's possible that ketamine has no specific antidepressant effects. To find out we ideally need a three-way trial, with both active and inactive comparators, careful monitoring of subjective drug effects and patient knowledge and expectations. Until that happens, I will be skeptical of ketamine in depression.

This is not because I think ketamine working is impossible. Ketamine profoundly affects the brain in ways that we don't understand, and I've suffered depression - I know it can come and go in a matter of minutes. So I think it's entirely possible that it works - but it's also possible that it's a nonspecific effect.

Look. I really want to know the answer to this. Both as a neuroscientist, and as a depression sufferer, this is very important to me. That's why we urgently need a good trial.

Link: See also the discussion and the comments over at The Neurocritic and this Scientific American piece which is pretty good except that it doesn't cover the active placebo issue.

Zarate CA Jr, Brutsche NE, Ibrahim L, Franco-Chaves J, Diazgranados N, Cravchik A, Selter J, Marquardt CA, Liberty V, and Luckenbaugh DA (2012). Replication of Ketamine's Antidepressant Efficacy in Bipolar Depression: A Randomized Controlled Add-On Trial. Biological Psychiatry PMID: 22297150

Ibrahim, L., et al. (2012). Course of Improvement in Depressive Symptoms to a Single Intravenous Infusion of Ketamine vs Add-on Riluzole: Results from a 4-Week, Double-Blind, Placebo-Controlled Study Neuropsychopharmacology DOI: 10.1038/npp.2011.338

Thursday, 26 January 2012

Take Your Placebos, Or Die

People who take their medication as directed are less likely to die - even when that "medication" is just a sugar pill.


This is the surprising finding of a paper just published, Adherence to placebo and mortality in the Beta Blocker Evaluation of Survival Trial (BEST)

BEST was a clinical trial of beta blockers, drugs used in certain kinds of heart disease. The patients were aged about 60 and they all suffered from heart failure. Everyone was randomly assigned to get a beta blocker or placebo, then followed up for 3 years to see how they did.

Here's the big finding: in the placebo group of 1174 patients, the people who took all of their placebo pills on time (the good adherers), were significantly less likely to die than the patients who missed lots of doses. People who took over 75% as directed were 40% less likely to die than those with less than 75% adherence:




That's pretty interesting. The pills were placebos - they can't have had any benefit. So what's going on?

It gets even better. You might be tempted to write off these results as obvious: "Clearly, people who follow the study instructions are just 'healthy' people in other ways - maybe they take more exercise, eat better, etc. and that's what protects them."

Certainly, that's what I'd have said.

But what's remarkable is that when the authors corrected the statistics for all the confounding variables they measured - including things like age, gender, ethnicity, smoking, body mass index and blood pressure - it barely changed the effect. Some of these factors did correlate with adherence, but not in a way that could explain the effect of adherence on mortality.

This isn't the first study to find this effect. The authors themselves have already reported it, as have other researchers going back decades (many of which also tried, and failed, to explain it through confounding factors.) They say that it's unlikely to be a case of publication bias.

So what we have is a large effect which cannot be causal, yet which can't be explained by any obvious confounds. Logically, then, it must be the result of one or more confounds that aren't obvious.

This is an important lesson. It's common for someone to do a study and find an interesting / scary / controversial correlation between two things. Often one is some kind of lifestyle factor, diet, environmental exposure, or whatever, and the other is some nasty disease. "And it wasn't explained by confounds!", such studies often conclude.

What the placebo adherence effect demonstrates is that there may be confounds no-one has thought of. They might even be impossible to measure. And if these mystery confounds can literally kill you, they can probably cause all kinds of other effects too.

In other words this illustrates the truism that correlation is not causation - not even when you're really sure it is...
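A quick simulation makes the point - a sketch with invented numbers, not the BEST data. Suppose an unmeasured trait, call it 'conscientiousness', drives both pill-taking and survival. Adjusting for a measured confounder like smoking (here, by stratifying on it) does nothing, because the real confounder was never recorded.

```python
import random
import statistics

rng = random.Random(7)

def patient():
    hidden = rng.gauss(0, 1)       # unmeasured trait ('conscientiousness')
    smoker = rng.random() < 0.3    # a measured confounder
    adherent = (hidden + rng.gauss(0, 1)) > 0   # the trait drives adherence
    # Mortality depends on the hidden trait and on smoking - NOT on the pill:
    p_death = 0.25 - 0.08 * hidden + (0.10 if smoker else 0)
    died = rng.random() < max(0.01, min(0.99, p_death))
    return smoker, adherent, died

cohort = [patient() for _ in range(20000)]

def death_rate(group):
    return statistics.mean(d for _, _, d in group)

# 'Adjusting' for the measured confounder by stratifying on smoking status:
for smoker in (False, True):
    stratum = [p for p in cohort if p[0] == smoker]
    adherers = [p for p in stratum if p[1]]
    non_adherers = [p for p in stratum if not p[1]]
    print(f"smoker={smoker}: death rate {death_rate(adherers):.3f} (adherent) "
          f"vs {death_rate(non_adherers):.3f} (non-adherent)")
```

Within both smoking strata the adherers still die less - the placebo pill does nothing, and the measured confounder, being only weakly related to adherence, can't account for the gap.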

Pressman, A., Avins, A., Neuhaus, J., Ackerson, L., and Rudd, P. (2012). Adherence to placebo and mortality in the Beta Blocker Evaluation of Survival Trial (BEST). Contemporary Clinical Trials DOI: 10.1016/j.cct.2011.12.003

Thursday, 19 January 2012

Challenging the Antidepressant Severity Dogma?

Regular readers will be familiar with the idea that "antidepressants only work in severe depression".

A number of recent studies have shown this. I've noted some important questions over how we ought to define "severe" in this context, and see the comments here for some other caveats, but I'm not aware of any studies that directly contradict this idea.

Until now. A new paper has just come out which seeks to challenge this dogma - not the author's term, but I think it's fair to say that the severity theory is becoming a dogma, even if it's an evidence-based one (but then, all dogmas start out seeming reasonable).

However, while the new paper is interesting, I think the dogma survives intact.

The authors went through the archives of all of the trials of antidepressants for depressive disorders conducted at the famous New York State Psychiatric Institute for the past 30 years. They excluded any patients who were severely depressed, and just looked at the milder cases. The drugs were mostly the older tricyclic antidepressants.

With a mean HAMD17 score of about 14, the patients they looked at were certainly mild. By comparison, most trials today have a mean of well over 20, and according to the main studies supporting the severity dogma, you need a score of about 25ish to benefit substantially:


So what happened? They reanalyzed 6 trials with over 800 patients. Overall there was a highly significant effect of antidepressants over placebo in mild depression, with an effect size d=0.52, or about 3.5 HAMD points. This is actually better than most other studies have found in "severe" depression. If valid, these results would torpedo the severity theory.
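For readers who want the arithmetic: Cohen's d is the raw difference divided by the pooled standard deviation, so the quoted figures imply a pooled SD of roughly 6.7 HAMD points (back-calculated from the paper's two numbers; the group statistics below are hypothetical, chosen only to reproduce the reported d).

```python
# d = (mean change on drug - mean change on placebo) / pooled SD,
# so the raw difference in scale points = d * pooled SD.
d = 0.52
raw_difference = 3.5                  # HAMD-17 points, as reported
pooled_sd = raw_difference / d
print(f"implied pooled SD of HAMD change scores: {pooled_sd:.1f} points")

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d using a pooled standard deviation."""
    pooled = (((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean1 - mean2) / pooled

# Hypothetical group statistics that roughly reproduce the reported d:
print(f"d = {cohens_d(10.0, 6.5, 6.7, 6.7, 400, 400):.2f}")
```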

This seems very interesting... but. There's a big but (I cannot lie). Although the authors say they wanted to include all the relevant trials from the NYSPI, they only had access to the data from 6. There were another 6 projects, but they were "pharmaceutical company studies from which data were not released to the investigators."

This pretty much wrecks the whole deal. If those 6 studies all found no benefit of the drug, the overall average results would be much less impressive. We have no way of knowing what those studies found, but I'd wager that most of them were negative, because of publication bias - we know that drug companies tend to publish positive studies and bury negative ones. Or at least they did, at the time these studies took place (there are better regulations now).
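To see how much those missing studies could matter, here's a minimal sketch with hypothetical numbers: pool the six available trials (d = 0.52) with six same-sized trials that, in the worst case, found nothing.

```python
# Hypothetical worst case: the 6 withheld trials were all null.
# Pooling them with the 6 available trials (all assumed equal-sized)
# halves the headline effect size.
published = [0.52] * 6   # the 6 trials the authors could analyse
unreleased = [0.0] * 6   # assumption: the withheld trials found d = 0

pooled = sum(published + unreleased) / len(published + unreleased)
print(f"Pooled effect size: d = {pooled:.2f}")  # 0.26
```

Under that (made-up but plausible) worst case, the pooled effect would be no better than what's typically reported in severe depression.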

By contrast, severity dogma classic Kirsch et al (2008) avoided publication bias by looking at unpublished data. Fournier et al (2010), the other major severity study, didn't, but its data were very similar to Kirsch et al's, so it's not hard to believe them.

So in my view, until we know what happened in the other 6 trials, we can't really interpret these results, and the severity theory stands.

Stewart, J., Deliyannides, D., Hellerstein, D., McGrath, P., and Stewart, J. (2011). Can People With Nonsevere Major Depression Benefit From Antidepressant Medication? The Journal of Clinical Psychiatry DOI: 10.4088/JCP.10m06760

Sunday, 11 December 2011

Do Antidepressants Make Some People Worse?

Antidepressants may help depression in some people but make it worse for others, according to a new paper.

This is a tough one so bear with me.

Gueorguieva, Mallinckrodt and Krystal re-analysed the data from a number of trials of duloxetine (Cymbalta) vs placebo. Most of the trials also included another antidepressant (an SSRI) arm. The SSRIs and duloxetine seemed indistinguishable, so from now on I'll just say antidepressants vs. placebo, as the authors did.

People on placebo got, on average, moderately better over 8 weeks.

People on antidepressants fell into two classes. The largest class got, on average, a lot better. But about 25% did poorly, staying just as depressed as before. This "nonresponder" group did much worse than the placebo group - again on average. Here you can see the mean "trajectories" of depression symptoms (HAMD scores) in the three groups:

This raises the scary possibility that while antidepressants are helping some people, they're harming others. But hang on. It's complicated.

First off, maybe this is all a statistical illusion. When the authors say that the people on drug fell into two classes, what they mean is that when you fit the data to a certain mathematical model, assuming either 1, 2, 3 or 4 underlying classes, the 2-class solution was the best fit - while for placebo, a 1-class solution was best.
We considered linear, quadratic, and cubic trends over time, with between 1 and 4 trajectory classes. We also considered piecewise models with a change point at 2 weeks, linear change before week 2, and quadratic change after week 2. The selection of the best model was based on the Schwartz-Bayesian information criterion and on the Lo-Mendell-Rubin (LMR) likelihood ratio test...
That's nice... but they don't present the raw data. They don't tell us whether, looking at the individual trajectories of people on antidepressants, you'd actually see two classes. What I want is a histogram of how much people improved. If Gueorguieva et al are right, it should be bimodal - two humps, one for each class.


We're not shown this graph. I'll eat my hat if it does look like that, frankly, because if it did people would have noticed the bimodality in antidepressant trials ages ago.

True, statistical models can tell us things that aren't obvious by inspection, so even if this isn't what the data look like, they might still be right. It could be that the two "peaks" are so broad, and there's so much random noise, that they blur into one.

However, it's also true that you can fit an infinite number of models to any set of data and at some point you have to step back and say - am I making this more complicated than it needs to be?

It could be that a 2-class model is better than a 1-class model for the people on antidepressants, but only because they're both crap, and really, every patient has a different, unpredictable trajectory which is poorly captured by such models.
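For a flavour of what that kind of model comparison looks like in practice, here's a toy sketch (not the authors' growth mixture model, which fits whole trajectories): simulated end-of-trial improvement scores fitted with scikit-learn's GaussianMixture, letting BIC count the classes.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Made-up HAMD improvements: 75% clear responders, 25% nonresponders
scores = np.concatenate([rng.normal(12, 3, 750),
                         rng.normal(1, 3, 250)]).reshape(-1, 1)

# Fit 1-4 latent classes; the lowest BIC wins, as in the paper's approach
for k in (1, 2, 3, 4):
    gm = GaussianMixture(n_components=k, random_state=0).fit(scores)
    print(f"{k} classes: BIC = {gm.bic(scores):.0f}")
```

On data this cleanly bimodal, a 2-class fit easily beats 1 class on BIC; the worry above is that on noisy real data, the "winning" 2-class model can still be a poor description of any individual patient.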

Let's assume however that this is true. What would it mean?

Firstly, the fact that one class of people on antidepressants does worse than people on placebo doesn't mean that antidepressants are harming them. The authors miss this point when they say
there are 2 trajectories for patients treated with antidepressants and 1 trajectory for patients treated with placebo [so] some patients would seem to be more effectively treated with placebo than with a serotonergic antidepressant.
But that's fallacious. It treats a purely statistical entity as representing individual people. Suppose that what antidepressants do is to take people who, on placebo, would have improved a bit, and make them improve a bit more than they otherwise would have. You'd then end up with more people doing well, but also fewer people doing moderately because they'd have been "moved up" out of the middle ground.

That "nudging people off the fence" could lead to a bimodal distribution and two distinct classes. But in this case the people doing badly would have done badly either way. The drug didn't make them do badly, it just made doing-badly into a class. On the other hand it's consistent with antidepressants doing real harm. We can't tell.
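That "nudging" story is easy to simulate. A toy sketch with invented numbers: the simulated drug only ever adds improvement, and only for people who were already improving, yet the treated distribution splits in two and the middle ground empties out.

```python
import numpy as np

rng = np.random.default_rng(42)
# Improvement each patient would show on placebo (unimodal, made-up units)
placebo = rng.normal(6, 2, 10_000)

# Toy drug: +6 points, but only for those already improving (> 5 points).
# Nobody is harmed, yet the treated scores become bimodal.
drug = placebo + np.where(placebo > 5, 6, 0)

def middle_ground(x):
    """Fraction of patients with improvement between 5 and 11 points."""
    return np.mean((x > 5) & (x < 11))

print(f"middle ground (5-11 points): placebo {middle_ground(placebo):.0%}, "
      f"drug {middle_ground(drug):.0%}")  # drug: 0% - the middle empties
```

Both classes exist in the treated arm, but the lower one would have done just as badly on placebo: the drug created the class, not the harm.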

We do know that other randomized controlled trials show very convincingly that in a small minority of people, mostly but not exclusively young people, antidepressants do worsen suicidal thoughts and behaviours. So it's plausible. But we just don't know yet.

What worries me is that this paper is the latest in a series of attempts to use, well, creative statistical approaches to antidepressant trial data. This one is nowhere near as dodgy as the Cherrypicker's Manifesto I discussed last year, but it cites that paper and others by the same group. The first sentence of the Abstract of this paper makes the intention clear:
The high percentage of failed clinical trials in depression may be due to high placebo response rates and the failure of standard statistical approaches to capture heterogeneity in treatment response.
In other words, the reason clinical trials of new antidepressants often fail to show a benefit over placebo is not because the drugs are crap but because the statistics aren't subtle enough. And you can see where this is going: if only we could use statistical models to find the people who do benefit from antidepressants, and compare them to placebo, there'd be no problem...

Gueorguieva R, Mallinckrodt C, and Krystal JH (2011). Trajectories of depression severity in clinical trials of duloxetine: insights into antidepressant and placebo responses. Archives of General Psychiatry, 68 (12), 1227-37 PMID: 22147842

Friday, 25 November 2011

A Dangerous Truth about Antidepressants

An opinion piece by veteran psychiatrist and antidepressant drug researcher Sheldon Preskorn contains a remarkable historical note -
“A dangerous idea!” That was the response after a presentation I gave to a small group of academic leaders with an interest in psychopharmacology [over 15 years ago].
What evoked such a response? The acknowledgment that most currently available antidepressants specifically treat only one out of four patients with major depression based on the bulk of clinical trials data.
There was no argument about the accuracy of this statement, but...some claim it is “dangerous” to admit that the specific response rate to most antidepressants is 20%–30% because such an acknowledgment might undermine the value of antidepressant treatment.
By the "specific" response rate Preskorn means the number of depressed people who'll get better on antidepressants and who wouldn't have done so well on placebo. This rate is fairly low because, while most people get better on antidepressants, most of those improve on placebo as well.
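In other words (a sketch with illustrative numbers, not Preskorn's data): subtract the placebo response rate from the drug response rate to get the specific rate.

```python
# Illustrative numbers only: overall response rates in a typical trial
drug_response = 0.55     # hypothetical: 55% respond on the antidepressant
placebo_response = 0.30  # hypothetical: 30% respond on placebo

# The "specific" rate is the part attributable to the drug itself
specific_rate = drug_response - placebo_response
print(f"Specific response rate: {specific_rate:.0%}")  # 25% - one in four
```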

Preskorn rejects the view that it's dangerous to acknowledge this:
...there are several problems with this reaction. First, it is hard to deny reality. The “placebo” response rate in antidepressant trials is arguably the most reproducible finding in psychiatry. Moreover, if available antidepressants were magic bullets, then polypharmacy would not be so common. Second, this reaction ignores the fact that antidepressants are tremendously valuable to the patients who specifically benefit from them...
Every treatment in every area of medicine has limitations. Acknowledging that fact should galvanize us to action. Denial on the other hand perpetuates the status quo.
Unfortunately, we're not told who these academic leaders were. I wonder if they included amongst their ranks some of the "key opinion leaders" in the field whose leadership proved rather less than ideal. The column is actually adapted from a 1996 article by Preskorn.

Preskorn is right, of course, that denying the fact that antidepressants are only substantially better than placebo in a fraction of people who get diagnosed with "depression" is wrong, and also misses the point: because tens of millions of Americans have diagnosable depression (due to the loose definition of "depression"), even if the drugs only helped 1% of them, they'd still help hundreds of thousands of people.

But he doesn't mention that this approach was ultimately self-defeating. As a result of the failure to acknowledge that antidepressants are only helpful in some cases of depression (namely "severe" depression), these drugs became very widely used and - oh dear - people started saying that the drugs are being overused, and don't work in most people who take them.

Whoever could have seen that coming.

This has "devalued" antidepressants - and psychiatry itself - more than anything else has.

Preskorn SH (2011). What Do the Terms "Drug-Specific Response/Remission Rate" and "Placebo" Really Mean? Journal of Psychiatric Practice, 17 (6), 420-424 PMID: 22108399

Friday, 18 November 2011

Does MRI Make You Happy?


A startling new paper from Tehran claims Antidepressant effects of magnetic resonance imaging-based stimulation on major depressive disorder.

Yes, this study says that having an MRI scan has a powerful antidepressant effect.

They took 51 depressed patients, and gave them all either an MRI scan or a placebo sham scan. The sham was a "scan" in a decommissioned scanner. The magnet was off but they played recorded scannerish sounds to make it believable. Patients were blinded to group.

They found that people in the scanner group improved much more than those in the sham group over two weeks. There were actually two different kinds of scans, T1 structural MRI and EPI functional MRI, but the two produced much the same improvement:
Now, if this is true, it's huge. Obviously. For one thing, it would undermine the whole premise of functional MRI, which is that it's a method of recording brain activity. If it's also stimulating the brain in some way at the same time, then it would make it hard to interpret those activations. In particular it would cast all the studies using fMRI in depression into doubt.

So is it true? I can't see any obvious flaws in the design. Assuming that the authors are right when they say that "patients could not distinguish the difference between the actual and sham MRI scan" - i.e. assuming that the blind was truly blind - the methodology was sound.

But let's look at the statistics. The paper is full of very impressive p values less than 0.001, but those turn out to all refer to the changes within each group, and those changes are fairly meaningless. What matters are the differences between the groups:
Changes in BDI scores (between baseline and day 14) were significantly different among the three studied groups (F=5.48, p=0.007 overall) using ANOVA, and between the DWI group vs. Sham and T1 vs. Sham (p<0.05) using post hoc tests. Changes in HAMD24 scores (between baseline and day 14) were also compared among the 3 groups using ANOVA but the level of significance was slightly above the significance threshold (F=2.89, p=0.06).
Which is rather less convincing. There was a close-to-significant group difference on the HAMD24, and a significant, but only just, effect on the BDI. Remember that there were only 17 people in each group.
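The within-group versus between-group distinction is worth a quick simulation (invented numbers, three groups of 17): every group improves highly "significantly" from its own baseline, while the groups may barely differ from each other.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Invented BDI improvements for three groups of 17 - everyone gets better
sham = rng.normal(6, 4, 17)
t1 = rng.normal(8, 4, 17)
dwi = rng.normal(8, 4, 17)

# Within-group change from baseline: impressively small p values
for name, group in [("sham", sham), ("T1", t1), ("DWI", dwi)]:
    t_stat, p = stats.ttest_1samp(group, 0)
    print(f"{name}: within-group p = {p:.2g}")

# Between-group comparison - the test that actually matters
f_stat, p_between = stats.f_oneway(sham, t1, dwi)
print(f"between groups: F = {f_stat:.2f}, p = {p_between:.2f}")
```

With these made-up numbers the within-group tests come out highly significant while the three-way ANOVA may well not - exactly the pattern that makes within-group p values fairly meaningless here.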

I'm inclined to think that this is one of the 5% of experiments that produce a nominally significant result by chance, even when everything goes to plan and there are no confounds. My suspicion is that everyone in the trial got better (they were all on antidepressants, plus there's the placebo effect and the effect of time) - except a small number of people who didn't improve, and by chance they were all in the sham group.

The reason I'm skeptical is that I just can't see a plausible mechanism. The authors suggest that MRI scans might stimulate the brain in a similar way to TMS and that this could have antidepressant effects.

But there are several problems with this: 1) it's questionable whether TMS even works for depression; 2) the magnetic stimulation of the brain generated during MRI is much weaker than in TMS; and 3) if MRI really stimulated the brain like TMS then, like TMS, it would carry a risk of triggering seizures in people with epilepsy. But it doesn't.

Vaziri-Bozorg SM, et al (2011). Antidepressant effects of magnetic resonance imaging-based stimulation on major depressive disorder: a double-blind randomized clinical trial. Brain Imaging and Behavior PMID: 22069111

Saturday, 15 October 2011

Placebos And The Brain's Own Pot

According to a neat little new paper, the placebo effect relies on the brain's own marijuana-like chemicals, endocannabinoids.

Or rather, some kinds of placebo effects involve endocannabinoids. It turns out that "the placebo effect" is not one thing.

The authors, led by Fabrizio Benedetti, have previously shown that placebo "opioids" - i.e. when you expect to get a painkiller such as morphine, but actually it's just water - relieve pain via the brain's own opioid system (endorphins). Blocking endorphins with certain drugs blocks the power of placebo morphine.

But there are many painkillers that aren't opioids, leaving open the question of whether all placebo effects on pain are mediated by endorphins.

The new study claims that endocannabinoids are involved in non-opioid placebo analgesia. They used rimonabant, a weight loss drug that was pulled from the market shortly after it appeared, because it caused depression. Rimonabant worked by blocking CB1 receptors, which are the main target of the psychoactive chemicals in cannabis - and also key players in the endogenous cannabinoid system.

Here's the headline result:

The graph on the left shows the relationship between the pain-relieving power of morphine and the pain relief caused by placebo "morphine" given on a subsequent day. As you can see, there was a strong correlation: people who had a strong response to real morphine later responded well to the fake morphine. But rimonabant had no effect at all on this placebo response.

Pain relief was measured using tolerance to the pain caused by a tightly fitting tourniquet.

However, rimonabant did have a strong effect on the placebo response to a different drug, ketorolac, which is related to the better-known ibuprofen (Nurofen). As you can see in the graph on the right, people given rimonabant had a much lower response to the placebo "ketorolac".

In other experiments, they showed that rimonabant alone had no effect on pain tolerance.

This is a nice result. It shows that the placebo effect is not a single thing, but that it depends upon the nature of the drug that you believe you've got. It also reminds us that the placebo effect is not some magical power of mind-over-matter, but is in fact, well, matter-over-matter.

Interestingly, ketorolac has no effect on endocannabinoids, or at least no direct effect. The mechanism of action, which is fairly well understood, has nothing to do with cannabinoids. Yet placebo "ketorolac" still seems to set endocannabinoids buzzing.

Benedetti F, Amanzio M, Rosato R, & Blanchard C (2011). Nonopioid placebo analgesia is mediated by CB1 cannabinoid receptors. Nature Medicine, 17 (10), 1228-30 PMID: 21963514