Saturday, 12 May 2012

Shyness By Any Other Name

People think of "social anxiety disorder" as more serious than "social phobia" - even when they refer to exactly the same thing.

Laura C. Bruce et al did a telephone survey of 806 residents of New York State. They gave people a brief description of someone who's uncomfortable in social situations and often avoids them. The question was: should this person seek mental health treatment for the problem?

When the symptoms were labelled as "social anxiety disorder", 83% of people recommended treatment. But when the same description was deemed "social phobia", it dropped to 75%, a statistically significant difference.

OK, that's only an 8 percentage point gap. It's a small effect, but then the terminological difference was a small one. "Anxiety disorder" vs "phobia" is about as subtle a distinction as I can think of, actually. Imagine if one of the options had been a label that didn't imply anything pathological - "social anxiety" or "shyness". That would probably have had a much bigger impact.

This matters, especially in regard to current debates over the upcoming DSM-5 psychiatric diagnostic manual. Lots of terminological changes are planned. This study is a reminder that even small changes in wording can have an impact on how people think about mental illness. Last week I covered another recent piece of research showing that beliefs about other people's emotions affect how people rate their own mental health.

My point is: DSM-5 will not merely change how professionals talk about the mind. It will change how everyone thinks and behaves.

Bruce, L. (2012). Social Phobia and Social Anxiety Disorder: Effect of Disorder Name on Recommendation for Treatment. American Journal of Psychiatry, 169 (5). DOI: 10.1176/appi.ajp.2012.11121808

Saturday, 5 May 2012

More Depressed Than Average?

Whether we think of ourselves as "depressed" or "anxious" depends on what we think about other people's emotional lives, rather than our own, according to an important paper just published: Am I Abnormal? Relative Rank and Social Norm Effects in Judgments of Anxiety and Depression Symptom Severity

The work appears in the obscure Journal of Behavioral Decision Making, which is downright criminal. It deserves to be in the British Journal of Psychiatry ... and it's not often I think that about a paper.

In the first experiment, the authors asked people how many days per month they felt “depressed, sad, blue, tearful” or had “excessive anxiety about a number of events or activities.” They then asked a series of questions designed to work out how they thought other people would answer that question. So they could work out where each individual thought they ranked within the general population, in terms of depression or anxiety symptoms.

Take a look. The top panel shows someone who felt depressed on 5 days a month, but believed this put him in the most depressed 70% of people. The second person felt depressed twice as often, but she thought she was below average.
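
To make the rank idea concrete, here's a minimal sketch in Python - my own illustration, not the authors' code or data - of how a "perceived rank" falls out of the two questions: your own number of depressed days, and how many days you believe other people have.

    # Hypothetical illustration of "perceived rank"; not the authors' code or data.
    # Perceived rank = the share of the population you *believe* feels depressed
    # less often than you do.

    def perceived_rank(own_days, believed_days_of_others):
        """Fraction of the believed distribution that falls below one's own value."""
        below = sum(1 for d in believed_days_of_others if d < own_days)
        return below / len(believed_days_of_others)

    # Person A: depressed 5 days/month, but believes most people have fewer.
    a = perceived_rank(5, [0, 1, 1, 2, 2, 3, 3, 4, 6, 10])
    # Person B: depressed 10 days/month, but believes most people have even more.
    b = perceived_rank(10, [8, 9, 11, 12, 12, 14, 15, 15, 16, 20])

    print(f"A: 5 days/month, perceived rank {a:.0%}")   # high perceived rank
    print(f"B: 10 days/month, perceived rank {b:.0%}")  # sees herself as below average

The point is that the rank depends entirely on the believed distribution, not the actual one - which is what the rest of the paper turns on.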


They found that perceived rank was strongly correlated with whether people thought they "had depression" or "had anxiety" - much more strongly than actual frequency of symptoms. "Having depression" meant "being more depressed than other people".

That's just a correlation and doesn't prove causation, but in the second experiment, they randomly assigned people to get different versions of a survey which manipulated perceived rank, and they confirmed that rank was indeed associated with how "disabling" they felt a given level of symptoms would be.

Now, this is just common sense, in a way. Of course whether you think of yourself as abnormal will depend on what you think of as normal - that's what "abnormal" means. We understand ourselves in the context of other people.

But this common sense is maybe not so common nowadays; you can read a hundred papers about the chemistry, genetics or causes of "depression" without a consideration of what "depression" (i.e. "abnormal" as opposed to "normal" mood) is.

The implications are big. Here's my main concern. Right now a lot of people think that promoting the idea that mental illness is very common is a good thing. Their stated goal is that by 'normalizing' mental illness, we'll destigmatize it. This will both help the mentally ill to cope, and encourage people to talk about their own mental health and get help.

All very nice. I've accused such campaigns of being based on dodgy stats, but this paper suggests that such campaigns could end up having exactly the opposite effect from that intended - they could lead to under-diagnosis, and increased stigma.

Suppose being depressed or anxious becomes seen as more 'normal'. According to these data, this will make people who are depressed or anxious less likely to seek help, for any given level of symptoms. Change people's perceptions of other people, and you'll change how they see themselves.

Worse, normalizing distress could - paradoxically - make those who do seek help seem more abnormal. Think about it: if depression and anxiety are normal, surely only an abnormal person would need special help to deal with them.

It's a small step from this to the idea that mental illness is mere personal weakness, laziness, attention-seeking, or scrounging. 'What's your problem? Everyone feels down or worried sometimes... most of us just deal with it.' If everyone is mentally ill, then no-one is really mentally ill... so the "mentally ill" must have something else wrong with them. Not very nice.

I'm not sure if this has happened, or will ever happen, but it's something to think about.

Melrose, K., Brown, G., and Wood, A. (2012). Am I Abnormal? Relative Rank and Social Norm Effects in Judgments of Anxiety and Depression Symptom Severity. Journal of Behavioral Decision Making. DOI: 10.1002/bdm.1754

Saturday, 3 March 2012

The World Mental Health Missionaries?

Is research on the global distribution of mental health problems a kind of modern-day missionary work?

Maybe, says Australia's Dr Stephen Rosenman in a provocative paper: Cause for caution: culture, sensitivity and the World Mental Health Survey Initiative.

The World Mental Health Survey (WMHS) is a huge World Health Organization project that aims to measure the rates of various psychiatric disorders in countries around the world. The WMHS has produced a great deal of data, but Rosenman points out that the whole enterprise assumes that people all over the world suffer from the same psychiatric disorders (and display them in the same ways) as the Americans and Europeans about whom the diagnostic manual was originally written.

The surveys translated the diagnostic criteria into the local languages, of course, but that doesn't mean they were appropriate to the local cultures.

He suggests that all this is a bit like missionaries who went around translating the Bible and trying to convince people to read it -
Looked at with a less admiring eye, the [WMHS] resembles in some ways the missionary movements of the last two centuries. Like the missionaries, the organisers are committed, selfless people of extraordinary goodwill who have come to poor countries from cultures at the apogee of their wealth, prestige and intellectual power.
They bring an evolved and highly developed system of thought. They set about delivering the fruits of that to the people. The survey initiative has engaged the leaders of the profession in the countries and, in a sense, has converted them to this view of psychopathology.
It is difficult to know if their success is due to the power of the ideas they brought, or the power and prestige of the cultures they came from, or from their technique of taking over both the centre and the contours of the beliefs of a culture. Missionaries brought a ‘colonisation of consciousness’... etc.
He does go on to say, though, "I do not want to push the missionary analogy too far", which is wise, I think; there are important differences, and other analogies are equally apt.

The paper's a good read though. It refers to Crazy Like Us, a book I'm fond of.

Rosenman doesn't cite another important source (cough, cough), but he does point out that the WMHS national estimates of rates of depression don't correlate at all with national suicide rates, which is seriously odd -
According to the CIDI [the psychiatric interview used in the WMHS], Japan, for example, has one-third the rate of mood disorders (3.1%) seen in the USA (9.6%). At the same time, Japan’s suicide rate (20.3/100,000) is twice that of the USA (10.8/100,000). Suicide rates seem to have almost no relationship with CIDI diagnoses of affective disorder... Suicide, of course, is complexly shaped by the culture but are we to believe that answers to the CIDI are any less culturally determined and which is to be considered the better index of disorder?
I made the very same point using the very same datasets in 2009 (although I looked at 'all mental illness' rather than 'mood disorders').

Rosenman, S. (2012). Cause for caution: culture, sensitivity and the World Mental Health Survey Initiative. Australasian Psychiatry, 20 (1), 14-19. DOI: 10.1177/1039856211430149

Friday, 30 December 2011

Britain - the Prozac Nation? Not So Fast...

Oh no! The stress of the recession has turned us into a nation of antidepressant addicts, according to every single British newspaper this morning.


The media coverage has been predictable with lots of scary, context-free statistics, and boilerplate quotes from the usual suspects. No doubt tomorrow we'll see a selection of moralistic op-eds about this.

But not one of the many nigh-identical articles provided a link to the original data, or even a useful description of where one might find it. After contacting one of the NHS organizations named as the source, I managed to track the numbers down.

It turns out that the key figures have been publicly available since April 2011, so I'm not sure why this story appeared in British "news"papers at all. Also, it would have been easy for journalists to link to the source, if they respected the intelligence of their readers enough to do that. I just did it and it wasn't terribly hard to click "Add Link".


On that note, I actually read a bizarre article today criticizing British journalists for providing too many links to their source data... if only.

Anyway, the data. Ben Goldacre has already written an excellent piece on this (in fact, he wrote it back in April 2011, curiously enough...see above), but here's some more detail.

First off, the data are all about antidepressants, not depression. A crucial distinction, because nowadays antidepressants are widely used for all kinds of other things. Everything from other psychiatric disorders like anxiety and OCD, to non-psychiatric stuff like back and joint pain, premature ejaculation, and menopausal hot flushes.

We can't tell how much of the antidepressant use was for depression. But there are clues suggesting that a lot of it wasn't. It turns out that the second most popular antidepressant (after citalopram) was the very old drug amitriptyline, with nearly 9 million prescriptions per year - or 20% of the total.

Nowadays amitriptyline is rarely used for depression, because newer, less toxic alternatives are available. However it is used, in low doses, to treat chronic pain. So I suspect that pain accounts for a large proportion of amitriptyline use. That would also explain why the cost to the NHS per prescription of amitriptyline was by far the lowest of all antidepressants: low doses are cheap.

How about the increase over time?

The newspapers are correct that antidepressant use rose from 33.9 million prescriptions in the year 2007/8, to 43 million in 2010/2011. That's a 28% rise over 3 years. However, if we go 3 years further back to the equivalent 2004/5 Prescription Cost Analysis, we find that antidepressant prescriptions were 28.9 million. So they rose 17% in the 3 years before 2007/8, long before the recession was on the horizon.

The recent 28% rise, in other words, is unlikely to be related to the recession, at least not entirely.

We also know(1,2) that the number of antidepressant prescriptions per person has been rising over the past several years in the UK. So the increase in prescriptions might not even mean more antidepressant users - it might just mean that the same number of users are using more each. (And that could mean anything, including that bureaucracies are saving money by prescribing for shorter periods).
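
To see why this matters, here's some back-of-the-envelope arithmetic in Python. The prescription totals are the published figures quoted above; the flat user count is invented purely to illustrate how, in principle, the whole rise could come from more prescriptions per user.

    # Back-of-the-envelope: the totals are the published figures quoted above,
    # but the flat user count below is invented purely for illustration.
    totals = {"2004/5": 28.9e6, "2007/8": 33.9e6, "2010/11": 43.0e6}

    def pct_rise(old, new):
        return 100 * (new - old) / old

    print(f"{pct_rise(totals['2004/5'], totals['2007/8']):.0f}% rise, 2004/5 to 2007/8")
    # ~17%, pre-recession
    print(f"{pct_rise(totals['2007/8'], totals['2010/11']):.0f}% rise, 2007/8 to 2010/11")
    # ~27% with these rounded totals (the 28% quoted above presumably reflects unrounded figures)

    # Total prescriptions = users x prescriptions per user, so a constant
    # (hypothetical) 3 million users could account for the entire rise:
    for year, total in totals.items():
        print(f"{year}: {total / 3.0e6:.1f} prescriptions per user, if users stayed at 3m")

None of which tells you whether the number of users actually changed - that's the whole point.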

One study found that there was no increase in the number of people taking antidepressants for depression from 1993 to 2005, with all of the rise in prescriptions over that period being a product of more prescriptions per person.

Another study did find a true rise in users from 1995 to 2007, albeit lower than the raw figures would suggest, but those figures were limited to a particular part of Scotland and it wasn't just about depression - it included all other uses of these drugs as well.

Overall, it's just impossible to know, from these data, whether there's been a true increase in antidepressant use for depression in recent years. The most we can say is that there might have been one, and if so it might have something to do with the economy.

Tuesday, 6 December 2011

The Network of Mental Illness

A provocative but problematic paper just out offers a new perspective on psychiatric symptoms.


The basic idea is that rather than psychiatric disorders being entities, they are just bundles of symptoms which cause each other:
...symptoms are unlikely to be merely passive psychometric indicators of latent conditions; rather, they indicate properties with autonomous causal relevance. That is, when symptoms arise, they can cause other symptoms on their own. For instance, among the symptoms of MDE we find sleep deprivation and concentration problems, while GAD (generalized anxiety disorder) comprises irritability and fatigue. It is feasible that comorbidity between MDE and GAD arises from causal chains of directly related symptoms; e.g., sleep deprivation (MDE)→fatigue (MDE)→concentration problems (GAD)→irritability (GAD).
The authors seem to have mixed up their labels in the middle there, but you see the drift.

This symptom-based approach stands in contrast to the idea that psychiatric illnesses are underlying things which lead to some symptoms. So it's a challenge to the notion of underlying biological dysfunction (except maybe for specific symptoms) but it's equally incompatible with any theory of underlying psychological causes - there's no room for Freudian unconscious "complexes" here.

So there's something very straightforward and un-mysterious about this model, which will either make it attractive or suspect, depending on whether you think human life is mysterious or not.

What's the evidence? First, the authors do an analysis of the DSM-IV diagnostic manual in terms of symptoms. They took every symptom mentioned in at least one diagnosis and found 439 symptoms in total, across 201 disorders, with many symptoms, such as insomnia, shared between lots of different "disorders".

They then used network analysis to create a kind of graph where the "distance" between the nodes (symptoms) is based on the number of shared diagnoses. They found that while some symptoms are unique to just one disorder, there's a core of highly shared symptoms which form a "giant component".
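
If you want a feel for the method, here's a toy sketch in Python using networkx - my own illustration with a handful of made-up symptom lists, not the paper's 439-symptom DSM-IV dataset. You build a bipartite disorder-symptom graph, project it onto the symptoms so that two symptoms are linked whenever they share a diagnosis, and then look at the connected components.

    # Toy version of the symptom-network idea; the symptom lists are invented
    # for illustration and are not the paper's DSM-IV data.
    import networkx as nx
    from networkx.algorithms import bipartite

    disorders = {
        "MDE":  ["insomnia", "fatigue", "concentration problems", "low mood"],
        "GAD":  ["irritability", "fatigue", "restlessness", "concentration problems"],
        "PTSD": ["insomnia", "irritability", "flashbacks"],
        "Specific phobia": ["phobic avoidance"],  # a disorder with a unique symptom
    }

    # Bipartite graph: disorder nodes on one side, symptom nodes on the other.
    B = nx.Graph()
    for dx, symptoms in disorders.items():
        for s in symptoms:
            B.add_edge(dx, s)

    # Project onto symptoms: two symptoms are connected if they appear in at
    # least one common diagnosis; the edge weight counts shared diagnoses.
    symptom_nodes = {s for sx in disorders.values() for s in sx}
    G = bipartite.weighted_projected_graph(B, symptom_nodes)

    components = sorted(nx.connected_components(G), key=len, reverse=True)
    print("Largest ('giant') component:", components[0])
    print("Other components:", components[1:])

Scaled up to all 439 symptoms, the shared ones clump together into the giant component described above, while the unique ones sit on the periphery.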




It's a very clever approach but I wonder what it really tells us. The DSM-IV is not data about mental illness. It's data about what we think about mental illness. Actually, it's not even that: it's data about what a particular set of people, at a particular time, were able to agree upon.

DSM-V is coming soon, and before that we had DSMs I, II and III. What about them? Do they have a different network structure? I'd have thought they would, but we don't know.

We've already seen the kinds of politics that lie behind the decision to include or exclude a diagnosis in the DSM. In the upcoming DSM-V they're seriously proposing to add a new diagnosis ("TDDD"), purely in order to stop people getting another diagnosis (childhood "bipolar").

There is a lot of symptom overlap between TDDD and bipolar disorder, because one was designed for the purpose of diverting patients from the other. But that doesn't tell us anything about real people with real symptoms. This is an extreme example, and to be fair to the authors they do acknowledge some of these problems with the DSM, but still.

The authors then show that the symptomatic closeness between DSM-IV disorders predicts the rates of comorbidity between those disorders, as measured in the American population survey the NCS-R. This is true even of disorders which don't share a common symptom but which are connected indirectly by a mutual friendship, as it were.

Finally they show that a statistical model based on interacting symptoms can predict the prevalence of depression (10% per year according to the NCS-R survey) and GAD (3% per year). It does so much better than a model in which symptoms interact at random.

However, I'm not convinced that all this shows us that the symptom-network approach is the best model to explain the occurrence of these disorders. It only shows us that it's a model that works better than a crazy random model. I'm also not sure that being able to model the NCS-R data is even a good thing, since these data are themselves of questionable validity.

But it's a genuinely interesting approach and well worth following up.

Borsboom D, Cramer AO, Schmittmann VD, Epskamp S, & Waldorp LJ (2011). The small world of psychopathology. PLoS ONE, 6 (11). PMID: 22114671

Wednesday, 16 November 2011

One in Four Revisited

In a recent Telegraph article, professional contrarian Brendan O'Neill argues against the idea that one in four people experience mental illness - and indeed against the idea that one in four people are bullied, abused or whatever else:
Can it really be true that a quarter of Brits are bullied or beaten up at home or are mentally ill, or is this simply a case of social campaigners exaggerating how bad life is in order that they can continue to make headlines, make an impact, and get funding? I reckon it's the latter. Next time you see the "one in four" figure, be very sceptical – it's probably Dickensian-style doom-mongering disguised as social research, where the aim is to convince us, against the evidence of our own eyes and ears, that loads of the people we encounter everyday are basket cases in need of rescue.
I say "argues against", but he doesn't actually provide any arguments. He just links to the claims and says they're silly.

As Neuroskeptic readers know, I am myself skeptical of the idea that one in four people are mentally ill, but I'm skeptical of it because I've looked at the evidence and it doesn't support that figure. Actually, if you take the available evidence at face value, it says that the true figure for the lifetime prevalence is much higher than one in four. I don't think those figures are very useful however because of various methodological issues.

So in my view we just don't know how many people are mentally ill, largely because we don't have any clear definition of what "mentally ill" means. But that doesn't mean we can just assume that it can't possibly be one in four just because "our own eyes and ears" tell us that most people are not "basket cases".

Much mental illness goes undiagnosed and unnoticed, and I'd imagine also that Brendan O'Neill and the kind of people who read him don't tend to "encounter everyday" people from groups such as the unemployed, the elderly and so forth, in whom the rates are higher.

But even beyond that, it's a silly argument because of selection bias. If you as a healthy person encounter someone every day, chances are they're not severely ill - mentally or physically - because if they were, they'd be less likely to be out and about in the places where you'd encounter them. Unless you're a doctor or whatever, you live your life in the world of healthy people.

It's like saying that you don't believe children or the elderly exist, because in your life as a working age adult, you never meet any of them.


Wednesday, 12 October 2011

Mountains of Mental Disorders

This is a story about a man who lived in a house. Here it is:


The house was a lovely thatched cabin, situated in a wooded valley between two little hills, set against the spectacular scenery of a snow-capped mountain. He'd been born there, and he'd lived there all his life.

One day, there was a knock on the man's door. He opened it to find two official-looking people carrying clipboards, with serious expressions on their faces.

"Hello, sir. We are officials from the Ministry of Mountains. Sorry it took us so long."
"Oh... excuse me?", the man replied, puzzled.
"We're very sorry we didn't get here earlier."
"I'm afraid that I don't know what you mean. I wasn't expecting any..."
"Hmm. Let me explain. The Ministry of Mountains exists to help people who live on mountains. So, you see, we're here to..."
"Ask for directions to the mountain? It's about 10 miles down the road. Just look up - you can't miss it."

The official looked unamused.
"No. We're here to help you, sir."
"Help you to cope with the rigors of mountain living!" the other chimed in, helpfully.
"But... I don't live on a mountain."
"I'm afraid you do. Look - " and the first official unfolded a large map. "Do you agree that there is a mountain, here?" and she pointed to a spot 10 miles down the road.
"Yes. Actually I just told you about i..."
"...and, do you agree that you live - here?"
"Of course, but..."

"So you do live on the mountain. The very ground beneath our feet right now is part of that mountain nearby."
"No it's not." The man protested. "This is a valley, miles away. I mean just look outside. We're clearly not on a mountain now, are we?"
"How old fashioned. That's what we used to think. But, thanks to advances in geology, we now appreciate that these hills and valleys are merely a part of the mountain."
"Yes!" the other said, whipping out a textbook and becoming increasingly enthusiastic. "You see, a mountain is merely a mass of rock, and this rock extends underground for a considerable distance... It's impossible, really, to draw a line on the map and say categorically, this side is mountain, this isn't. So 'mountains' are an arbitrary construct. 'Hills' are likewise just protrusions of the underlying mountain and..."

The man was even more confused now. "Umm... well, I suppose, technically...but..."
"...so yes, so you do live on a mountain. And we know that this is very difficult. You're exposed to all kinds of dangers like blizzards, altitude sickness, avalanches..."
"Not really. It's nice here. It doesn't even snow most years."
"That's unlikely. You agree that mountains have blizzards and avalanches? Right. And you earlier agreed that there's no dividing line between you and a mountain. So logically..."
"Er..."
"So you are in danger! Don't worry, though. We're here to help. To start off with, we're going to reinforce your house with six tons of cement, to protect you against rockfalls. The construction crew will arrive tomorrow morning. Now, as for those blizzards..."
The man had had enough of this.
"This is absurd. Now look - there is a guy who really does live on top of the mountain in a rickety old shack. Old Grandpa McHermit. He might actually need your help. I don't. Get out! And if I see anyone with a bag of cement tomorrow morning, I'll shove it right up their..."

---

As you may have guessed, this story is a metaphor. There is a movement in psychiatry at the moment, away from a 'categorical' view of mental illness towards a 'spectrum' view. Mental disorders are not things you either have or don't - defined according to some arbitrary cut-off. Rather, they're things that everyone has, to some degree.

This has already happened, or is happening, to autism, schizophrenia, bipolar disorder, personality disorders, and more.

Now, the "spectrum" or "dimensional" approach has much to recommend it. It's true that diagnostic cutoffs are arbitrary. It's true that the categorical approach doesn't capture the true degree of variation that real people display.

My worry is that these new "spectra" are, in practice, merely the old categories, just bigger. We still think of people as being ill or not-ill, although we may call it on the spectrum or off it. Worse, we still think of "ill" in the same way as we used to i.e. as referring to the most severe end of the spectrum. The only difference is that we've expanded the old category of "ill" to cover more people.

This is evident in the fact that we still use the old categorical labels. It's the autism (or schizophrenia or bipolar) spectrum, even though "autism", in the old sense of a discrete disorder, is now supposed to be just one extreme of that spectrum. Yet the point about an extreme is that it's unusual, so why call it that?

We don't call the rainbow the red spectrum. We don't call height the midget spectrum. We don't call hills part of the mountain spectrum.

The point is, we really think of color and height and altitude as spectra, not as approximations to an extreme point, and that's good, because they are. Now it might well be possible to think of autistic or bipolar traits in the same way - but not if we call them autistic and bipolar traits. And not if we just rename them, while keeping the mental associations the same.

Not unless we can find a way of referring to what's currently called the autism spectrum without making anyone think of autism when they hear it. Similarly for "bipolar" and all the rest. Until we get to that point, there's a real risk that "spectra" will just be big categories.

Edit: This post has been very kindly translated into Hebrew over at the alhasapa.com blog.

Wednesday, 21 September 2011

Antidepressants In The UK

Antidepressant sales have been rising for many years in Western countries, as regular Neuroskeptic readers will remember.


Most of the studies on antidepressant use come from the USA and the UK, although the pattern also seems to hold for other European countries. The rapid rise of antidepressants from niche drugs to mega-sellers is perhaps the single biggest change in the way medicine treats mental illness since the invention of psychiatric drugs.

But while a rise in sales has been observed in many countries, that doesn't mean the same causes were at work in every case. For example, in the USA, there is good evidence that more people have started taking antidepressants over the past 15 years.

In the UK, however, it's a bit more tricky. Antidepressant prescriptions have certainly risen. However, a large 2009 study revealed that, between 1993 and 2005, there was not any significant rise in people starting on antidepressants for depression. Rather, the rise in prescriptions was caused by patients getting more prescriptions each. The same number of users were using more antidepressants.

Now a new paper has looked at antidepressant use over much the same period (1995-2007), but using a different set of data. Pauline Lockhart and Bruce Guthrie looked at pharmacy records of drugs actually dispensed, not just prescribed, and their data only covers a specific region, Tayside in Scotland. The 2009 study was nationwide.

So what happened?

The new paper confirmed the 2009 survey's finding of a strong increase in the number of antidepressant prescriptions per patient.

However, unlike the old study, this one found an increase in the number of people who used antidepressants each year. It went up from 8% of the population in 1995, to 13% in 2007 - an extremely high figure, higher even than in the USA.

In other words, more people took them, and they took more of them on average - adding up to a threefold increase in antidepressants actually sold. The increase was seen across men and women of all ages and social classes.

There's no good evidence of an increase in mental illness in Britain in this period, by the way.

But why did the 2009 paper report no change in antidepressant users, while this one did? It could be that the increase was localized to the Tayside area. Another possibility is that there was an increase nationwide, but it wasn't about people with depression.

The 2009 study only looked at people with a diagnosis of depression. Yet modern antidepressants are widely used for other things as well - like anxiety, insomnia, pain, premature ejaculation. Maybe this non-depression-based use of antidepressants is what's on the rise.

Lockhart, P. and Guthrie, B. (2011). Trends in primary care antidepressant prescribing 1995–2007. British Journal of General Practice.

Monday, 8 August 2011

So Apparently I'm Bipolar

According to a new paper, yours truly is bipolar.


I've written before of my experience of depression, and the fact that I take antidepressants, but I've never been diagnosed with bipolar.

I've taken a few drugs in my time. On certain dopamine-based drugs I got euphoric, filled with energy, talkative, confident, with no need for sleep, and a boundless desire to do stuff, which is textbook hypomania. So I think I know what it feels like, and I can confidently say that it has never happened to me out of the blue.

On antidepressants, I have had some mild experiences of this type. Ironically, the closest I've come to it was when I quit an SSRI antidepressant. I've also experienced periods of irritability and agitation on antidepressants. Either way, that's antidepressants. Bipolar is when you get high on your own supply of neurotransmitters.

Well, it used to be. Jules Angst et al have got some new, broader criteria for "bipolarity" in depression. They say that manic symptoms in response to antidepressants do count, exactly like out-of-the-blue mania.

What's more, under the new "Bipolar Specifier" criteria, there's no minimum duration. Under existing criteria the symptoms have to last 4 or 7 days, depending on severity. Under the new regime if you've ever been irritable, high, agitated or hyperactive, on antidepressants or not, you meet "Bipolar Specifier" criteria, so long as it was marked enough that someone else noticed it.

All you need is:
an episode of elevated mood, an episode of irritable mood, or an episode of increased activity with at least 3 of the symptoms listed under Criterion B of the DSM-IV-TR associated with at least 1 of the 3 following consequences: (1) unequivocal and observable change in functioning uncharacteristic of the person’s usual behavior, (2) marked impairment in social or occupational functioning observable by others, or (3) requiring hospitalization or outpatient treatment.
The bipolar net just got bigger. And they caught me in it. Me and 47% of depressed people in their study. They recruited 509 psychiatrists from around the world, and got each of them to assess between 10 and 20 consecutive adult depressed patients who were referred to them for evaluation or treatment. A total of 5635 patients were included.

Only 16% met existing DSM-IV criteria for bipolar disorder, so the new system, at 47%, identified an "extra" 31 percentage points of patients - roughly trebling the number of bipolar cases.
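
For what it's worth, here's the arithmetic behind "trebling", using only the percentages and sample size reported in the paper (the rounding to whole patients is mine):

    # Rough patient counts implied by the reported percentages (rounding is mine).
    n = 5635                        # depressed patients assessed in the study
    dsm_iv    = round(0.16 * n)     # ~902 met DSM-IV bipolar criteria
    specifier = round(0.47 * n)     # ~2648 met the broader "bipolar specifier"
    print(dsm_iv, specifier, f"ratio ~{specifier / dsm_iv:.1f}x")  # roughly treble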

A cynic would say that this is a breathtaking piece of psychiatric marketing. You give people antidepressants, then you diagnose them with bipolar on the basis of their reaction to those drugs, thus justifying selling them yet more drugs.

The cynic would not be surprised to learn that this study was sponsored by pharmaceutical company Sanofi.
All investigators recruited received fees, on a per patient basis, from sanofi-aventis in recognition of their participation in the study....The sponsor of this study (sanofi-aventis) was involved in the study design, conduct, monitoring, data analysis, and preparation of the report.
In fairness, the authors do show that patients meeting their criteria tend to have characteristics typical of bipolar people. And they show that their system is at least as good as DSM-IV at picking out these cases:

For example, DSM-IV bipolar patients had a younger age of onset than DSM-IV depressed ones. "Bipolar specifier" patients did too, compared to the 53% who didn't meet the criteria. Same for a family history of manic symptoms, multiple episodes, and shorter episodes. All of those are pretty well established correlates of bipolar disorder.

That's fine, and the results are better than I expected when I picked up this paper. But all this shows us is that the bipolar specifier was no worse than the DSM-IV criteria as applied in this study.

It doesn't tell us whether either was any good.

DSM-IV criteria were used in a mechanical cookbook fashion - symptoms were assessed by the psychiatrist, written down, sent back to the study authors, who then diagnosed them if they ticked enough boxes. Is that a good approach? We don't know.

Most importantly, we have no idea whether these people would do better being treated as bipolar rather than as depressed. The difference being that bipolar people get mood stabilizers. Maybe these people would benefit from mood stabilizers, maybe not. Existing literature on mood stabilizers in bipolar people can't be assumed to generalize to these 47%.

In the discussion, the authors argue that antidepressants are not much good in bipolar people, whereas mood stabilizers are. Fun fact: Sanofi make many of the most popular formulations of valproic acid/valproate, a big-selling mood stabilizer.

I think that is no coincidence. Maybe that sounds crazy, but hey, what do you expect? I'm bipolar.

Angst J, Azorin JM, Bowden CL, Perugi G, Vieta E, Gamma A, Young AH, for the BRIDGE Study Group (2011). Prevalence and Characteristics of Undiagnosed Bipolar Disorders in Patients With a Major Depressive Episode: The BRIDGE Study. Archives of General Psychiatry, 68 (8), 791-798. PMID: 21810644

Friday, 17 June 2011

Bipolar Kids: You Read It Here First

Last year, I discussed the controversy over the proposed new childhood syndrome of "Temper Dysregulation Disorder with Dysphoria" (TDDD). It may be included in the upcoming revision of the psychiatric bible, DSM-V.

Back then, I said:
TDDD has been proposed in order to reduce the number of children being diagnosed with pediatric bipolar disorder... many people agree that pediatric bipolar is being over-diagnosed.

So we can all sympathize with the sentiment behind TDDD - but this is fighting fire with fire. Is the only way to stop kids getting one diagnosis, to give them another one? Should we really be creating diagnoses for more or less "strategic" purposes?
Now, a bunch of psychiatrists have written to the Journal of Clinical Psychiatry to express their concerns over the proposed diagnosis. They make the same point that I did:
We believe that the creation of a new, unsubstantiated diagnosis in order to prevent misapplication of a different diagnosis is misguided and a step backward for the progression of psychiatry as a rational scientific discipline.
They go into much more detail than I did in critiquing the evidence held up in favor of the idea of TDDD. They also point out that it is rather optimistic to think, as some people apparently do, that if we were to diagnose kids with TDDD, as opposed to childhood bipolar, we'd save them from getting nasty bipolar medications.

As they say, the risk is that drug companies would just get their drugs licensed to treat TDDD instead. Same drugs, different label. It would be fairly easy: just for starters, there are plenty of sedative drugs, such as atypical antipsychotics, which would certainly alter or mask the "symptoms" of TDDD, in the short term. Doing a clinical trial and showing that these drugs "work" would be easy. It wouldn't mean they actually worked, or that TDDD actually existed.

They also point out that the public perception of child psychiatry has already been harmed by the proposal of TDDD, and would suffer further if it were to become official.

Well, of course it would, and quite rightly so. That would be a sign that child psychiatry is so out of control that, literally, the only way it can stop diagnosing children, is to diagnose them with something else!

The same issue of the same journal features another paper, claiming that "pediatric bipolar disorder" has a prevalence rate of 1.8%, and that rates of diagnosis of childhood bipolar are not higher in the USA than elsewhere - contrary to a popular belief which is, as it happens, based on evidence.

Their data are a bunch of epidemiological studies on bipolar disorder. One of which included children up to the age of...21. The majority included kids of 17 or 18.

So, er, not children at all, then.


The older the "children" in the study, the more bipolar that study found. Everyone knows that bipolar disorder typically starts in late adolescence. That's the orthodoxy and it has been since Kraepelin. It's right there at the top of the Wikipedia page. That's not pediatric bipolar, that's just normal bipolar.

All the recent controversy is about bipolar in children. As in, like, 8 year olds. Yet this paper is still titled "Meta-analysis of epidemiologic studies of pediatric bipolar disorder". The senior author on this paper also signed the paper criticizing TDDD.

This, then, is the state of the debate over the future of our children.

P.S. I've just noticed that in the latest draft of DSM-V, TDDD has been renamed. It's now called "DMDD". What's next? DUDD? DEDD? P-DIDDY?


Axelson DA, Birmaher B, Findling RL, Fristad MA, Kowatch RA, Youngstrom EA, Arnold EL, Goldstein BI, Goldstein TR, Chang KD, Delbello MP, Ryan ND, & Diler RS (2011). Concerns regarding the inclusion of temper dysregulation disorder with dysphoria in the DSM-V. The Journal of Clinical Psychiatry. PMID: 21672494

Van Meter AR, Moreira AL, & Youngstrom EA (2011). Meta-analysis of epidemiologic studies of pediatric bipolar disorder. The Journal of Clinical Psychiatry. PMID: 21672501

Thursday, 9 June 2011

What Is Mental Distress?

"Mental distress" is term which has recently become popular in Britain. It's most often used as a replacement for "mental illness". I'm rather puzzled by this. In this post, I analyze this phrase.

The first thing that leaps out is that "mental" is redundant. What other kind of distress is there? Distress is mental, by default.

This awkward wording seems to be a result of the fact that it's an attempt to fuse some of the features of "mental illness" with some of the implications of "distress", a kind of verbal alchemy. What is mental distress? It's not mental illness, but it's not exactly not mental illness.

Fair enough. Mental illness is a problematic concept, so I'm all in favor of rethinking it. But I'm worried. My worry is that "mental distress" takes the worst features of mental illness and perpetuates them in the guise of being a new and radical idea.

*

Were I to go around making sweeping statements about "the mentally ill" or "people with mental illness", someone would call me out on it, like this - Mental illness is an umbrella term, for all kinds of different experiences! You can't talk about all those people as if they're the same. They're individuals!

Which is quite right.

But it's equally bad to talk about "mental distress" in the same way, and this happens as well. I don't know if mental distress is more often used as a blanket statement, but it's certainly not immune and it's no better. See for example the top Google hit for mental distress:
The first signs of mental distress will be different for the onlooker than it is for the person in distress...

Changes in sleep patterns are a common sign, and appetite may also be affected. Lethargy, low energy levels, feeling antisocial and spending too much time in bed may indicate the onset of depression. Wanting to go out more, needing very little sleep, and feeling highly energetic, creative and sociable, may signal that a person is becoming 'high'.

The first time it happens, the effects of hearing or seeing things that other people don't are likely to be especially dramatic...
Perfectly true, of some people. Not all. In this paragraph "mental distress" seems to mean "bipolar disorder", but in the course of the article it morphs into several other forms. All mental distress.

It's not good enough to make sweeping statements and say "...Of course, everyone is different, but..." That's a cop-out, not a serious attempt to be helpful. It's like being really offensive, and then quickly adding "No offence". If you think everyone's different, talk about them all differently.

I think there's a good case to be made that we shouldn't talk about "mental illness" at all. Take, say, bipolar disorder, social anxiety, and antisocial personality. I'm really not sure that these have anything in common.

They've only been considered to belong to the single category of "psychiatric disorders" for about 50 years. 100 years ago, bipolar was insanity, social anxiety was a character trait, or a 'nervous' problem, and antisocial behaviour was just evil. Different professionals dealt with each one, and few thought of them as being linked.

I'm not saying that we should go back to that. But categories are up for debate. "Mental distress" is a new label, but it's a 50 year old category.

*

My second problem is that "mental distress" implies that everyone who has it, is distressed. But they're just not - at least not if you're using that term as a replacement for "mental illness".

If you're bipolar, and in a manic or hypomanic episode, you might well be the opposite of distressed. More subtly, if you're severely depressed, you might be too low to be distressed. "Distress" implies an acute emotional response. Severe depression paralyses the emotions.

Maybe "mental distress" isn't like normal everyday distress. Maybe mania or depression are mental distress, but not distress. But that's rather confusing. If mental distress isn't distress, what on earth is it? You can't redefine words like that, unless you're Humpty Dumpty.



*

If "mental distress" implies that all mental illness is distress, it also works in reverse: it implies that all distress is a form of pathology. Taken seriously, this would lead to absurd conversations:

"Are you mentally distressed?"

"No, I'm fine. I'm just distressed."

It would also lead to even more people being treated in the mental health system. Already we're told that 1 in 4 people experience mental illness, but almost everyone gets distressed now and again.

You might say that you don't consider mental distress to be a form of pathology. I'm against medicalization! Mental distress isn't an illness! If so, fine, but to be consistent, you're going to have to stop talking about treatments. And causes. And symptoms. Those are all medical words. Discussions of mental distress are chock full of them.

Indeed, if you want to demedicalize "mental distress", you should probably just call it... distress. The "mental" part is a hangover from "mental illness", after all. If you're serious, you ought to junk that and stick with distress.

This would be perfectly clear, it doesn't require us to redefine words or use awkward phrases. Let's give it a go: "Mental illness" is distress. Easy. Unfortunately, when you put it like that, it looks a bit like a sweeping oversimplification, doesn't it? Hmm.

On the other hand, if you're not looking to demedicalize mental illness, why throw out the word illness?

The problem is that many people like the sound of demedicalization, but they're not sure how far they want to go. And in large organizations, some people will want to go much further than others.

Mental health charities seem to be particularly prone to this, so you often see them assuring people that "mental illness is an illness like any other", while simultaneously saying that seeing it just as a medical illness is far too narrow and unhelpful!

This is a serious debate, and it deserves a careful discussion. The compromise term "mental distress" seems to bridge this gap, and allows people with very different views to sound like they're agreeing with each other. This is not the best way to resolve debates like this. People still disagree with each other. They just lack the words to talk about it.

Tuesday, 7 June 2011

Britain's Not Getting More Mentally Ill

There's a widespread belief that mental illness is getting more common, or that it has got more common in recent years.

A new study in the British Journal of Psychiatry says: no, it's not. They looked at the UK APMS mental health surveys, which were done in 1993, 2000 and 2007. Long-time readers will remember these.

The authors of the new paper analyzed the data by birth cohort, i.e. when you were born, and by age at the time of the survey. If mental illness were rising, you'd predict that people born more recently would have higher rates of mental illness at any given age.
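
To make that logic concrete, here's a toy sketch in Python with invented numbers (not the APMS data): compare each birth cohort at the same age, and a genuine rise shows up as later cohorts reporting more symptoms than earlier cohorts did at that age.

    # Toy illustration of the age-vs-cohort logic; all numbers invented, not APMS data.
    # prevalence[cohort][age] = % with "probable disorder" when surveyed at that age.
    no_rise = {
        "born 1950s": {30: 15, 40: 17, 50: 19},
        "born 1970s": {30: 15, 40: 17},
        "born 1980s": {30: 15},
    }
    real_rise = {
        "born 1950s": {30: 15, 40: 17, 50: 19},
        "born 1970s": {30: 18, 40: 20},
        "born 1980s": {30: 21},
    }

    def rates_at_age(table, age):
        """Prevalence at a fixed age, compared across birth cohorts."""
        return {cohort: ages[age] for cohort, ages in table.items() if age in ages}

    print(rates_at_age(no_rise, 30))    # flat across cohorts: an age effect only
    print(rates_at_age(real_rise, 30))  # climbs with birth cohort: a true rise

As we'll see, the APMS data look much more like the first pattern than the second.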

The headline finding: there was no cohort effect, implying that rates of mental illness aren't changing. There was a strong age effect: in men, rates peak at about age 50; in women the data is rather messy but in general the rate is flat up to age 50 and then it falls off, like in men. But there's no evidence that those born recently are at higher risk.

The only exception was that men born after 1950 were at somewhat higher risk than those born earlier as shown by the "break" on the graph above. The effect for women was smaller. The most recent cohort, those born after 1985, were also above the curve but there was only one datapoint there, so it's hard to interpret.

We also get a rather cute graph showing how life changes with age:

As you get older, you get less irritable and, if you're a woman, you'll worry less. But sleep problems and, in men, fatigue, increase. Overall, 50 is the worst age in terms of total symptoms. After that, it gets better. Well, that's nice to know. Or not, depending on your age.

Overall, the authors say:
Our finding of subsequently stable rates contradicts popular media stories of a relentlessly rising tide of mental illness, at least for men. Stable prevalence in the male population, together with peaking of the prevalence of common mental disorder at about age 50 years, indicates that a large increase in projected rates of poor mental health is unlikely in the male population in the near future....

Trends in women are less clearly identified, with considerable increases in the prevalence of sleep problems, but no clear increase or even some decrease in other measures. Further research is needed to relate these age and cohort differences to drivers of mental health such as employment status and family composition.
Caution's warranted, though, because the APMS data were based on self-reported symptoms of mental illness assessed by lay interviewers. As I've argued before, self-report is problematic, but this is true of almost all of these kinds of studies.

More unusual is that this study didn't attempt to assign formal diagnoses; it just looked at total symptoms on the CIS-R scale, with a total of 12 or more considered to indicate "probable disorder".

Purists would say that this is a weakness and that you ought to be making full DSM-IV diagnoses, but honestly, that approach has its own problems, and I think this one is no worse.

Finally, this study only looked at "common mental disorders" i.e. depression and various kinds of anxiety symptoms. Things like schizophrenia and bipolar disorder weren't included, but from what I remember they're not rising either.

Spiers N, Bebbington P, McManus S, Brugha TS, Jenkins R, & Meltzer H (2011). Age and birth cohort differences in the prevalence of common mental disorder in England: National Psychiatric Morbidity Surveys 1993-2007. The British Journal of Psychiatry, 198, 479-84. PMID: 21628710

Saturday, 9 April 2011

BBC: Something Happened, For Some Reason

According to the BBC, the British recession and spending cuts are making us all depressed.


They found that between 2006 and 2010, prescriptions for SSRI antidepressants rose by 43%. They attribute this to a rise in the rates of depression caused by the financial crisis. OK there are a few caveats, but this is the clear message of an article titled Money woes 'linked to rise in depression'. To get this data they used the Freedom of Information Act.

What they don't do is to provide any of the raw data. So we just have to take their word for it. Maybe someone ought to use the Freedom of Information Act to make them tell us? This is important, because while I'll take the BBC's word about the SSRI rise of 43%, they also say that rates of other antidepressants rose - but they don't say which ones, by how much, or anything else. They don't say how many fell, or stayed flat.

Given which it's impossible to know what to make of this. Here are some alternative explanations:
  • This just represents the continuation of the well-known trend, seen in the USA and Europe as well as the UK, for increasing antidepressant use. This is my personal best guess and Ben Goldacre points out that rates rose 36% during the boom years of 2000-2005.
  • Depression has not got more common, it's just that it's more likely to be treated. This overlaps with the first theory. Support for this comes from the fact that suicide rates haven't risen - at least not by anywhere near 40%.
  • Mental illness is no more likely to be treated, but it's more likely to be treated with antidepressants, as opposed to other drugs. There was, and is, a move to get people off drugs like benzodiazepines, and onto antidepressants. However I suspect this process is largely complete now.
  • Total antidepressant use isn't rising but SSRI use is, because doctors increasingly prescribe SSRIs as opposed to other drugs. This was another Ben Goldacre suggestion and it is surely a factor, although again, I suspect that this process was largely complete by 2007.
  • People are more likely to be taking multiple different antidepressants, which would manifest as a rise in prescriptions, even if the total number of users stayed constant. Add-on treatment with mirtazapine and others is becoming more popular.
  • People are staying on antidepressants for longer, meaning more prescriptions. This might not even mean that they're staying ill for longer; it might just mean that doctors are getting better at convincing people to keep taking them, e.g. by prescribing drugs with milder side effects, or by referring people for psychotherapy, which could increase use by keeping people "in the system" and taking their medication. This is very likely. I previously blogged about a paper showing that from 1993 to 2005, antidepressant prescriptions rose although rates of depression fell, because of a small rise in the number of people taking them for very long periods.
  • Mental illness rates are rising, but it's not depression: it's anxiety, or something else. Entirely plausible since we know that many people taking antidepressants, in the USA, have no diagnosable depression and even no diagnosable psychiatric disorder at all.
  • People are relying on the NHS to prescribe them drugs, as opposed to private doctors, because they can't afford to go private. Private medicine in the UK is only a small sector so this is unlikely to account for much but it's the kind of thing you need to think about.
  • Rates of depression have risen, but it's nothing to do with the economy, it's something else which happened between 2007 and 2010: the Premiership of Gordon Brown? The assassination of Benazir Bhutto? The discovery of a 2,100 year old Japanese melon?
Personally, my money's on the melon.

Sunday, 20 March 2011

Depressed or Bereaved? (Part 2)

In Part 1, I discussed a paper by Jerome Wakefield examining the issue of where to draw the line between normal grief and clinical depression.


The line moved in the American Psychiatric Association's DSM diagnostic system when the previous DSM-III edition was replaced by the current DSM-IV. Specifically, the "bereavement exclusion" was made narrower.

The bereavement exclusion says that you shouldn't diagnose depression in someone whose "depressive" symptoms are a result of grief - unless they're particularly severe or prolonged, in which case you should. DSM-IV lowered the bar for "severe" and "prolonged", thus making grief more likely to be classed as depression. Wakefield argued that the change made things worse.

But DSM-V is on its way soon. The draft was put up online in 2010, and it turns out that depression is to have no bereavement exclusion at all. Grief can be diagnosed as depression in exactly the same way as depressive symptoms which come out of the blue.

The draft itself offered just one sentence by way of justification for this. However, big cheese psychiatrist Kenneth S. Kendler recently posted a brief note defending the decision. Wakefield has just published a rather longer paper in response.

Wakefield starts off with a bit of scholarly kung-fu. Kendler says that the precursors to the modern DSM, the 1972 Feighner and 1975 RDC criteria, didn't have a bereavement clause for depression either. But they did - albeit not in the criteria themselves, but in the accompanying how-to manuals; the criteria themselves weren't meant to be self-contained, unlike the DSM. Ouch! And so on.

Kendler's sole substantive argument against the exclusion is that it is "not logically defensible" to exclude depression induced by bereavement, if we don't have a similar provision for depression following other severe loss or traumatic events, like becoming unemployed or being diagnosed with cancer.

Wakefield responds that, yes, he has long made exactly that point, and that in his view we should take the context into account, rather than just looking at the symptoms, in grief and many other cases. However, as he points out, it is better to do this for one class of events (bereavement), than for none at all. He quotes Emerson's famous warning that "A foolish consistency is the hobgoblin of little minds". It's better to be partly right, than consistently wrong.

Personally, I'm sympathetic to Wakefield's argument that the bereavement exclusion should be extended to cover non-bereavement events, but I'm also concerned that this could lead to underdiagnosis if it relied too much on self-report.

The problem is that depression usually feels like it's been caused by something that's happened, but this doesn't mean it was; one of the most insidious features of depression is that it makes things seem much worse than they actually are, so it seems like the depression is an appropriate reaction to real difficulties, when to anyone else, or to yourself looking back on it after recovery, it was completely out of proportion. So it's a tricky one.

Anyway, back to bereavement; Kendler curiously ends up by agreeing that there ought to be a bereavement clause - in practice. He says that just because someone meets criteria for depression does not mean we have to treat them:
...diagnosis in psychiatry as in the rest of medicine provides the possibility but by no means the requirement that treatment be initiated ... a good psychiatrist, on seeing an individual with major depression after bereavement, would start with a diagnostic evaluation.

If the criteria for major depression are met, then he or she would then have the opportunity to assess whether a conservative watch and wait approach is indicated or whether, because of suicidal ideation, major role impairment or a substantial clinical worsening the benefits of treatment outweigh the limitations.
The final sentence is lifted almost word for word from the current bereavement clause, so this seems to be an admission that the exclusion is, after all, valid, as part of the clinical decision-making process, rather than the diagnostic system.

OK, but as Wakefield points out, why misdiagnose people if you can help it? It seems to be tempting fate. Kendler says that a "good psychiatrist" wouldn't treat normal, uncomplicated bereavement as depression. But what about the bad ones? Why on earth would you deliberately make your system such that good psychiatrists would ignore it?

More importantly, scrapping the bereavement exclusion would render the whole concept of Major Depression meaningless. Almost everyone suffers grief at some point in their lives. Already, 40% of people meet criteria for depression by age 32, and that's with a bereavement exclusion.

Scrap it and, I don't know, 80% will meet criteria by that age - so the criteria will be useless as a guide to identifying the people who actually have depression as opposed to the ones who have just suffered grief. We're already not far off that point, but this would really take the biscuit.

Wakefield JC (2011). Should Uncomplicated Bereavement-Related Depression Be Reclassified as a Disorder in the DSM-5? The Journal of Nervous and Mental Disease, 199 (3), 203-8. PMID: 21346493

Thursday, 10 March 2011

Depressed Or Bereaved? (Part 1)

Part 2 is now out here.

My cat died on Tuesday. She may have been a manipulative psychopath, but she was a likeable one. She was 18. On that note, here's a paper about bereavement.

It's been recognized since forever that clinical depression is similar, in many ways, to the experience of grief. Freud wrote about it in 1917, and it was an ancient idea even then. So psychiatrists have long thought that symptoms that would indicate depression in someone who wasn't bereaved can be quite normal and healthy as a response to the loss of a loved one. You can't go around diagnosing depression purely on the basis of the symptoms, out of context.

On the other hand, sometimes grief does become pathological - it triggers depression. So equally, you can't just decide to never diagnose depression in the bereaved. How do you tell the difference between "normal" and "complicated" grief, though? This is where opinions differ.

Jerome Wakefield (of Loss of Sadness fame) and colleagues compared two methods. They looked at the NCS survey of the American population, and took everyone who'd suffered a possible depressive episode following bereavement. There were 156 of these.

They then divided these cases into "complicated" grief (depression) vs "uncomplicated" grief, first using the older DSM-III-R criteria, and then with the current DSM-IV ones. Both have a bereavement exclusion for the depression criteria - don't diagnose depression if it's bereavement - but both also have criteria for "complicated" grief, which does count as depression: exclusions to the exclusion.

The systems differ in two major ways: the older criteria were ambiguous, but at the time they were generally interpreted to mean that you needed two features out of a possible five; prolonged duration was one of the five, and anything over 12 months was considered "prolonged". In DSM-IV, however, you only need one criterion, and anything over 2 months is prolonged.
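
Here's a schematic sketch in Python of how much that broadens things - based only on the summary above, not the full DSM wording, so treat it as a caricature:

    # Schematic only, based on the summary above rather than the full DSM text.
    # DSM-III-R reading: need >= 2 features, "prolonged" means > 12 months.
    # DSM-IV:            need >= 1 feature,  "prolonged" means > 2 months.

    def complicated(other_features, duration_months, min_features, prolonged_after):
        features = other_features + (1 if duration_months > prolonged_after else 0)
        return features >= min_features

    # A hypothetical bereaved person: one other feature, four months of grief.
    case = dict(other_features=1, duration_months=4)

    print("DSM-III-R:", complicated(**case, min_features=2, prolonged_after=12))  # False
    print("DSM-IV:   ", complicated(**case, min_features=1, prolonged_after=2))   # True

The same person counts as "uncomplicated" grief under the old reading, and "complicated" - i.e. depressed - under the new one.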

What happened? DSM-IV classified many more cases as complicated than the older criteria - 80% vs 45%. That's no surprise, because the criteria are obviously a lot broader. But which was better? In order to evaluate them, they compared the "complicated" vs "normal" episodes on six hallmarks of clinical depression - melancholic features, seeking medical treatment, etc.

They found that "complicated" cases were more severe under both criteria but the difference was much more clear cut using DSM-III-R.

Wakefield et al are not saying that the DSM-III-R criteria were perfect. However, they were better at identifying the severe cases than DSM-IV, which is worrying, because DSM-IV was meant to be an improvement on the old system.

Hang on though. DSM-V is coming soon. Are they planning to put things back to how they were, or invent an even better system? No. They're planning to, er, get rid of the bereavement criteria altogether and treat bereavement just like non-bereavement. Seriously. In other words they are planning to diagnose depression purely on the basis of the symptoms, out of context.

Which is so crazy that Wakefield has written another paper all about it (he's been busy recently), which I'm going to cover in an upcoming post. So stay tuned.

Wakefield JC, Schmitz MF, & Baer JC (2011). Did narrowing the major depression bereavement exclusion from DSM-III-R to DSM-IV increase validity? The Journal of Nervous and Mental Disease, 199 (2), 66-73. PMID: 21278534