
Sunday, 23 December 2012

Why (And How) To Write Less

I said a couple of times during my recent trip to UPenn that "Most writing is too long". People seemed to nod appreciatively at this, so here's some more on that topic...
Most writing is too long and the most common reason is that it's not written for the reader's benefit. Readers want the important stuff, as clearly as possible, in the shortest possible space. If you remember that and let it guide your writing, you won't go far wrong. The reader's favourite bits are the ones you don't write.

The problem is that it's tempting to write for your own benefit, not the reader's, and this almost always ends up making things too long. This can take many forms:

Some write to help themselves understand the material, such that the end product is a record of their learning process. Others will insert details that the reader doesn't need, because it's a topic the writer's fond of. Others fear making tough decisions about what to include, so they say everything and hope some of it's good: "Write it all and let God sort them out."

This is common because formal education teaches you to write poorly. Specifically, it encourages people to overwrite. Teachers and professors give assignments and they set a minimum word count. This sends the message that where writing's concerned, more is better.

Teachers have their reasons. They want a brain-dump to show that the student's done the homework, pretty much the opposite of good writing. That's fair enough for school, but if you internalize that philosophy, you will end up writing to show off rather than for the reader's benefit.

Once you put the reader's interests first, you'll naturally start to find your own ways to achieve that. Everyone's style is different, but here are a few I've learned:
  • If it starts "On that note...", "Also...", or "Furthermore...", you should probably cut it.
  • Join Twitter - writing to a 140-character limit is a great form of discipline. Then imagine that every paragraph you write must become a tweet. You may find you can compress that paragraph into a single sentence.
  • Just as Twitter is good, other artificial constraints are good. Set yourself a word limit; if you're blogging, make it 500 words.
  • Unless the article's about you, sentences that include the word "I" or "we" can usually be cut.
  • Think of your piece as a nuclear missile: it has a payload, the message you want the reader to grasp, and a rocket motor, the introduction and other stuff you need to ensure it reaches the reader. Every missile needs a motor, but designers try to make the payload as big as possible, given the size of the motor. Identify what your payload is, and what your motor is. Then think, is my motor too big? (It probably is.) In this paragraph the missile analogy is the motor.
  • As a rule of thumb, you can cut anything down by half just by writing it better.

Tuesday, 23 October 2012

The Psychology of Edgar Allan Poe

A paper by psychology undergrad Erica Giammarco, Edgar Allan Poe: A Psychological Profile, offers a look at the mind that gave us The Raven and The Masque of the Red Death.


Poe lost his mother to tuberculosis at the age of 2; he was then taken in by a foster family, but his foster mother also died young. He enrolled at the University of Virginia but became involved in gambling and had to ask his foster father for money; they argued, and at the age of 20 Poe was cut off from his family. He married, but his wife suffered frequent illnesses and died at the age of 25 in 1847; by this time Poe was drinking heavily, and he died after collapsing 'drunk and delirious' in 1849.

According to Giammarco:
Poe was described as a mischievous child, playing practical jokes on classmates and teachers... One teacher was quoted as saying that Poe had an "...excitable temperament with a great deal of self-esteem." This grandiose self view would remain consistent throughout Poe’s life; however, Poe was defensive and threatened by negative comments. This is consistent with a narcissistic self-view rather than healthy self-esteem.
Although successful in his studies, he did not have many friends and wrote that school was a "miserable" experience. Classmates stated that he was incredibly defensive and did not allow others to get close...

As Poe aged his health deteriorated and he continued to drink heavily. He was described by coworkers and family as chronically melancholic, acquiring the nickname ‘the man who never smiles’... Poe had a great deal of pride, evident in his refusal to accept money when he and his wife were both sick and unable to work...

An examination of the letters Poe wrote to family reveals that he was a dramatic individual. He often used excessive, theatrical language, poignantly captured in his statement, "I do believe God gave me a spark of genius, but He quenched it in misery".


When describing Poe in terms of the Five-Factor Model of personality, we can conclude that he would be high on Neuroticism - evident in the constant nervous anxiety he was said to have, as well as his melancholy and irritability. Poe would also be described as low in Agreeableness and Conscientiousness, since he was argumentative, untrusting, and lacked self-control (e.g. his drinking, his failure to complete his education).
Poe actually crops up several times in the medical literature. Other examples of scientific anthropology include...
Giammarco, E. (2013). Edgar Allan Poe: A psychological profile. Personality and Individual Differences, 54(1), 3-6. DOI: 10.1016/j.paid.2012.07.027

Thursday, 13 September 2012

Recommend Me An Agent

I'm looking for a literary agent.

I have an idea for a book: it's non-fiction, about science, for a general audience. It'll cover some themes I've written about on this blog, although it'll all be new material.

Anyway, if you can recommend any good agents who you think might be interested in this - or if you are one and are interested - please let me know. You can email me at neuroskeptic at gmail dot com. I live in the UK, so a London-based agent would be ideal, but I'm open to all suggestions.

Wednesday, 5 September 2012

Naomi Wolf's "Vagina"

Naomi Wolf's "Vagina" is full of bad science about the brain - is an article I wrote for the New Statesman. It's about a new book which is... not very good.


I didn't come up with the title by the way, but I do rather like it.

See also the Neurocritic's take.

Monday, 2 April 2012

When Prophecy Failed

I've just been reading the classic psychology book When Prophecy Fails.


Published in 1956, it tells the inside story of a group that believed the world was about to end - and what they did when it didn't. Here's a good summary over at Providentia.

The investigators, led by social psychologist Leon Festinger, infiltrated a small group (too amateurish to be called a 'cult' - see below) surrounding a Chicago woman called Dorothy Martin, or "Marian Keech" as they dubbed her to protect her identity.

Martin, a classic 50s housewife, had a long-standing interest in the occult and Dianetics. One day, she woke up with a strange sensation in her arm, and soon decided that she was receiving messages from spiritually advanced extraterrestrials by 'automatic writing'.

After several months of rather generic religious guidance, the aliens informed her that a flood would destroy Chicago, and much of the US, on the 21st December 1954. This was part of a cosmic plan to "cleanse" the earth. She, and a number of other believers, would be evacuated by UFOs shortly before the calamity.

Festinger and co. learned of the group through a newspaper ad warning of impending doom; spying a chance to field-test his ideas, Festinger assembled a crack team of sociology and psychology students to go undercover. Considering that the group only had perhaps 10 real core members, plus another 20 or so less committed sympathizers, the fact that no fewer than 4 investigators became involved is rather remarkable.

When the 21st dawned and Chicago remained, the core members of the group were upset, but rationalized the failure - the spacemen had called it off, because of the positivity shown by the group. The media had picked up the story a few days before the 21st, but the group refused interviews and actively avoided trying to convert people. In the days following the non-event, that all changed: the previously secretive group became eager to spread the word. Shortly afterwards, though, the group broke up.

Festinger et al.'s slant on this was that it supported their cognitive dissonance theory: essentially, having to face up to the fact that they'd been wrong would have been painful, so instead the believers chose to believe that they'd been fundamentally right all along, and sought confirmation for this by trying to recruit more members. The authors make much of the fact that those individuals who'd made more concrete commitments to the group (e.g. by selling their possessions or losing their jobs) were subsequently more faithful.

I wasn't convinced by this, though. Apart from the fact that it's just an isolated case, the group did, after all, break up, just a few weeks after the prophecy failed. While Martin herself seemed genuinely unfazed (and went on to lead a long life in much the same paranormal vein), there's little evidence that the rest remained believers for more than a few days, even the most committed.

When Prophecy Fails is an amazing human interest story, though. The group is just adorably naive and homely. It's all charmingly 1950s and about as far from the deadly fanaticism of the 1990s Heaven's Gate group as you can imagine.

It's full of details like the spirit of Jesus solemnly telling the group to take a break for coffee; the declaration that some new mountains formed following the rearrangement of North America would be called the "Argone range" (in honour of the fact that the Rockies etc. "are gone"); and the high school pranksters who phoned the group and announced that they had "a flood in their bathroom, do you want to come over and see it?" - they did.

Indeed, I couldn't help feeling that the least savory thing about this story was the investigators themselves. Festinger et al. notably don't discuss the ethics of their study at all, unlike Stanley Milgram in his classic work from the same era.

Was it ethical? At least some of the investigators actively lied to gain entrance to the group, by making up stories of their own 'paranormal' experiences. Other than that, the observers seemed scrupulously careful not to encourage the group in their beliefs - but the very fact that they were there, going along with it, was surely in itself a kind of tacit encouragement. Martin herself sounds like her head was far enough in the clouds that she was impervious to any such social influences but I'm not sure about the other members.

There's also the issue of whether it was unethical to publish the inner secrets of the group just two years after the event; they did disguise the names, but remember, this was all national news when it happened. It would have been easy to work out people's real identities with a bit of digging.

Overall, I found the story fascinating; but I'm not sure I agree with the book's conclusions.

Thursday, 1 March 2012

WAFFLE: Why Most Books Are Too Long

I have a theory about modern books.


There's a certain kind of book, let's call it the "TITLE: How This Subtitle Summarizes My Big Idea" genre.

I don't think I need to name names.

Now, I read a lot of these, and I've come to the conclusion that most of them shouldn't be books at all. That's not to say they're bad - the big idea may be brilliant - but I don't care how big your idea is: you do not need 100-200 pages to explain one idea.

They tend to contain a couple of core chapters with the good stuff and maybe 4 or 5 chapters of what can best be called waffle: anecdotes, backstory, additional illustrations, etc. Like a waffle, this may be perfectly pleasant - but it's not very nutritious.

Here's why I think this is - publishers (we are told) increasingly want books with a single big idea that can be summed up in a sentence. Partly because they sell, and partly because publishers are overstretched and just don't have time themselves to spend hours thinking through a complex argument to find out if it's any good.

But the problem is that, for whatever historical and business reasons, books are meant to be a certain length - say 100 pages bare minimum. No-one prints 50-page books, and few people would buy one, children's books excepted.

So there's a gap in the length profile of non-fiction writing. There are all kinds of shortish pieces - from the briefest news reports and op-eds up to long feature articles and New York Review of Books type essays. That covers everything from one word up to, say, 10,000.

But then there's nothing until you reach the short book at (say) 35,000 words, after which, it's plain sailing again.

Think about it - have you ever read a 20,000 word piece of non-fiction? I don't think I have. It's too long for a periodical but too short to be a book. (Academic papers are an exception; I'm thinking of general interest pieces).

Yet it seems to me that a great many of today's books could have been that length, without weakening the argument or dumbing down in any way. And, if so, then they should be, because a fundamental rule of good writing is to keep things as concise as possible. The problem is that while that would make them better as pieces of writing, it would make them unmarketable as books, or anything else; there's practically no market for 20,000 good words and true.

Except... now we have ebooks.

So you could see this post as an argument in praise of ebooks, not just as a new technology but as a whole new form of writing falling somewhere between the "article" and the "book". Which is ironic because I don't even have a Kindle yet. Of course I'm not saying that all books are too long. I like books. Many are the right length, some I wish were longer; but just because an idea could be made into a book, doesn't mean it should be.

Edit: I hadn't read this when I wrote this post but it seems the industry are way ahead of me -
Yesterday, Amazon began selling its Kindle Singles online. Singles are e-books between 5,000 and 30,000 words long. According to the press release, these e-books are meant to “allow a single killer idea — well researched, well argued and well illustrated — to be expressed at its natural length.”

Thursday, 1 December 2011

Beware Good Theories

The ancient Greeks had a lovely theory. Certain places on the earth (caves, mostly) were, they thought, gateways to the underworld. Plants growing near these places could absorb the deadly essence of Hades and became poisonous.

Snakes and other venomous creatures got their poison by consuming these plants. And stinging insects got their little doses of poison by feeding off dead snakes.

Isn't that a great narrative? It explains everything, in a nice logical progression. OK, it presupposes what we would call a "supernatural" force as the ultimate origin of poison, but other than that, it's an entirely "scientific" account. In accordance with Occam's Razor, it proposes a single unified process underlying diverse phenomena.

It is, in other words, a perfect scientific theory. It's completely wrong, on every point, but we only know that because we now understand atoms, molecules, chemistry and biochemistry, which the Greeks had no way of knowing. At the time, the Hades theory was surely the best possible theory about where poison came from.

The moral of this story is, beware nice theories based on incomplete data.


Reference: Greek Fire, Poison Arrows and Scorpion Bombs, which I'm currently reading - it's all about chemical and biological weapons in the ancient world.

Friday, 24 June 2011

Blind Spots & Braintrust

This is a review of two recently published books about ethics: Bazerman and Tenbrunsel's Blind Spots (not to be confused with this one), and Patricia Churchland's Braintrust.

The pair may come from the same publisher (Princeton), but they couldn't be more different.


Blind Spots is a good book. It tells a story in a clear and compelling fashion, which is what a book is for.

The story is that we often act unethically, not because we're faced with ethical questions and decide to pick the "bad" option, but because we fail to see that there is an ethical issue at all.

This is not the same as saying that 'the road to hell is paved with good intentions'. That old phrase warns against trying to be good and, as a result, causing evil, because your plans go wrong. Blind Spots is saying, even if all of your attempts to be good work out just fine, you might still cause evil despite that.

For example, you could be a good employee who never calls in sick unnecessarily, is kind to friends and colleagues, and gives generously to charity.

Unfortunately, you're an accountant connected to Enron, and your work - ultimately - consists of defrauding innocent people. But of course, you don't think of it like that, because we don't tend to think about things "ultimately".

Which is hard to disagree with. At worst, you could say it's obvious, although I think it's still something we ought to be reminded of. That's not all there is to the book, though: it also discusses how this happens and suggests ways to avoid it within organizations.

For example, the authors show how setting up rewards and punishments to "make people be ethical" can make them less so, by encouraging people to think of the issue as a personal trade-off between gain and loss, rather than an ethical dilemma - what the authors call "ethical fading".

A day-care centre was annoyed at the fact that some parents were picking up their children late. This was antisocial because it meant staff had to work late into the evening.

So they started charging parents a late fee. Not a big one, but enough to send people a message: this is wrong, don't do it. But in fact what happened was that late pickups became more common.

Previously, many people were making an effort to be on time, as a matter of principle. Once the fees were in place, it stopped being an ethical issue and just became a financial trade-off: is it worth paying the fee to get an extra hour?

Of course, you could make the fees higher to get around this, but even then, you've caused ethical fading, and you'll be relying on the sanctions from that point on.


Braintrust, by contrast, is just not a good read. The bulk of the book consists of discussions of various neurotransmitters and brain areas and how they may be related to human social behaviour. Oxytocin, for example, may make us behave all trusting and kindly, as it's involved in maternal bonding. There's a long discussion of the neurochemistry of male sexual behaviour in voles.

It's not clear how this is relevant to ethics. Whether it's oxytocin that does it, or something else, and whether voles are a useful model of human behaviour or not, clearly sometimes we trust people and sometimes we don't. That's psychology. And biology can't yet explain it.

Churchland doesn't claim that the various biological concepts that she covers can fully explain anything, and she doesn't vouch that all of these findings are rock solid. Which is good, because they can't, and they're not. So why spend well over half of the book talking about them?

Churchland's big idea seems to be that human morality emerges out of our more general capacity for sociability. Hence all the stuff about oxytocin and "the social brain". OK. But I'd have said that's a given - there's obviously some relation between sociability and morality.

I think there is an interesting idea in here, albeit not very clearly expressed, namely that morality isn't a special function of the brain, but just one of the many forms that our social cognition can take.

In other words, I think the claim is that ethics isn't just related to sociability, it is sociability. Even asocial animals care about their own welfare, in terms of pleasure and pain; social ones become social when they extend this caring to others; intelligent social animals, including humans and maybe some other primates, also have a system for inferring the motivations and thoughts of others.

At the end of the book, Churchland stops reviewing neuroscience and starts talking about the implications for philosophy. This is the best section of the book, but it's too short.

Churchland makes the interesting point, for example, that when we are considering philosophical "ethical dilemmas", like the famous trolley problems, we may not be applying any kind of ethical "rules" as such. Rather, she thinks that our moral reasoning is pretty much a kind of pattern recognition based on previous experience - like all our other social reasoning.

Someone who'd just read a book about the horrors of Stalinism might tend to adopt an anti-consequentialist, every-life-is-sacred approach; someone who'd just watched a movie in which the hero, reluctantly but rightly, decides to sacrifice one guy to save many others might do the opposite. The ethical "rules" might then be confabulated to cover it.

This is a nice idea. It's open to criticism, but it's a serious suggestion, and one that deserves a decent discussion. Sadly, there isn't one. If only there were more room in the book for this kind of stuff - but oxytocin covers so many pages.

Basically, the good parts of this book are not about the brain at all.

Reading Braintrust is like going on a date but bumping into an annoying friend who insists on coming along for dinner. Jesus, The Brain, you want to say. I like you and all, but seriously, you are getting in the way right now.

Links: Other blog reviews.

Monday, 28 February 2011

The Other Brain

An interesting new book from R. Douglas Fields: The Other Brain.

"Glia" is a catch-all term for every cell in the nervous system that's not a neuron. We have lots and lots of them: on some estimates, 85% of the cells in the brain are glia. But to most neuroscientists at the moment, they're about as interesting as dirt is to archaeologists. They're the boring stuff that gets in the way. The name is Greek for "glue", which says a lot.

It's telling that most neuroscientists (myself included, I confess) use the term "brain cells" to mean neurons, even though they're a minority. Hence the book's title: Douglas Fields argues that glia constitute a whole world, another brain - although of course, it's not separate from the neuronal brain, and neuron-glia interactions are the really interesting thing and the central theme of the book.

Glia have historically been regarded as mere "housekeepers", keeping the brain neat and tidy by cleaning up the byproducts of neural activity. Douglas Fields explains that there's actually a lot more to glia than that, but that even if they were just housekeepers, the housekeeping they do is extremely important.

Astrocytes, one kind of glial cell, are key to the regulation of glutamate levels in the brain. Glutamate is by far the most common neurotransmitter, yet it's also the most dangerous: glutamate can kill neurons if they receive too much of it (excitotoxicity). I previously wrote about some bad clams which can cause permanent brain damage if you eat them; the toxin responsible mimics the action of glutamate.

By quickly clearing up glutamate as it's released from neurons, astrocytes perform a vital function which saves the brain from self-destruction. Yet recent evidence has shown that they don't just mop up neurotransmitters, they also respond to them, and even release them. People are nowadays talking about the "tripartite synapse" - presynaptic neuron, postsynaptic neuron, and glia.


Glia even have their own communication network quite separate from the neuronal one. Whereas neurons use electrical currents to convey signals, and chemicals to talk to other cells, astrocytes are interconnected via direct gap-junctions - literally, little holes bridging the membranes between neighbors.

Waves of calcium can travel through these junctions across long distances. The function of this glial network is almost entirely mysterious at present, but it's surely important, or it wouldn't have evolved. (A few types of human neurons do the same thing; in some animals it's more common.)

The subtitle is overblown, as subtitles often are ("From Dementia to Schizophrenia, How New Discoveries About the Brain are Revolutionizing Medicine and Science"); the book also repeats itself in a number of places, especially when castigating neuroscientists for overlooking glia for so long (a fair point, but it gets old). Overall, though, it's very readable and it's got some nice anecdotes as well as the science.

The Other Brain makes an excellent case that neuroscience can't remain neuron-science if it hopes to answer the big questions. It's certainly opened my eyes to the importance of glia and given me ideas for my own research. As such it's one of those rare popular science books that will prove interesting to professionals and others too.

Link: Also reviewed here.

Disclaimer: I got a free review copy.

Saturday, 26 February 2011

An Astonishingly Brilliant Epic Tour-De-Force

So I was browsing my local bookshop yesterday.

But what to buy? The back covers are not very helpful. Apparently, every novel published nowadays is, at the very worst, a breathtaking masterpiece. Most are epoch-making, life-changing works of godlike genius.

OK, but which ones are actually good?

Why is this? Part of it, surely, is that literature is an incestuous world where the same authors who write the books are the first port of call when publishers want blurbs for everyone else's. Clearly you don't want to say anything bad about your peers, lest you stop getting invites to dinner parties. Unless you're embroiled in a "bitter literary feud" - but no-one has the energy to do that on a regular basis.

Because everyone is constantly complimenting each other in this way, praise inflation sets in and we soon reach the point where "This is a very good book" would be a serious insult.

There's also a theory, which has been around for a good few hundred years and maybe forever, that creative types are a breed apart from everyone else, possessed of divine powers and insight. Not just the really great artists, but anyone who makes art for a living.

When Nietzsche wrote a book comparing himself favourably to Jesus, with chapters called "Why I Am So Clever" and "Why I Am A Destiny", people thought that was a bit much. (It didn't help that he went completely insane the next year.) You can't go on record and say that about yourself, but say it about your friends and get them to say it about you, and it seems to work quite nicely.

Tuesday, 7 December 2010

Delusions of Gender

Note: This book quotes me approvingly, so this is not quite a disinterested review.

Cordelia Fine's Delusions of Gender is an engaging, entertaining and powerfully argued reply to the many authors - who range from the scientifically respectable to the less so - who've recently claimed to have shown biological sex differences in brain, mind and behaviour.

Fine makes a strong case that the sex differences we see, in everything from behaviour to school achievements in mathematics, could be caused by the society in which we live, rather than by biology. Modern culture, she says, while obviously less sexist than in the past, still contains deeply entrenched assumptions about how boys and girls ought to behave, what they ought to do and what they're good at, and these - consciously or unconsciously - shape the way we are.

Some of Fine's targets are obviously bonkers, like Vicky Tuck, but for me, the most interesting chapters were those dealing in detail with experiments which have been held up as the strongest examples of sex differences, such as the Cambridge study claiming that newborn boys and girls differ in how much they prefer looking at faces as opposed to mechanical mobiles.

But Delusions is not, in Steven Pinker's phrase, saying we ought to return to "Blank Slatism", and it doesn't try to convince you that every single sex difference is definitely purely cultural. It's more modest, and hence, much more believable: simply a reminder that the debate is still an open one.

Fine makes a convincing case (well, it convinced me) that the various scientific findings, mostly from the past 10 years, that seem to prove biological differences, are not, on the whole, very strong, and that even if we do accept their validity, they don't rule out a role for culture as well.

This latter point is, I think, especially important. Take, for example, the fact that in every country on record, men roughly between the ages of 16 and 30 are responsible for the vast majority of violent crimes. This surely reflects biology somehow; whether it's the fact that young men are physically the strongest people, or whether it's more psychological, is by the by.

But this doesn't mean that young men are always violent. In some countries, like Japan, violent crime is extremely rare; in other countries, it's tens of times more common; and during wars or other periods of disorder, it becomes the norm. Young men are always, relatively speaking, the most violent but the absolute rate of violence varies hugely, and that has nothing to do with gender. It's not that violent places have more men than peaceful ones.

Gender, in other words, doesn't explain violence in any useful way - even though there surely are gender differences. The same goes for everything else: men and women may well have, for biological reasons, certain tendencies or advantages, but that doesn't automatically explain (and it doesn't justify) all of the sex differences we see today; it's only ever a partial explanation, with culture being the other part.

Tuesday, 12 October 2010

In Dreams

Freud's The Interpretation of Dreams is a very long book, but the essential theory is very simple: dreams are thoughts. While dreaming, we are thinking about stuff, in exactly the same way as we do when awake. The difference is that the original thoughts rarely appear as such; they are transformed into weird images.

Only emotions survive unaltered. A thought about how you're angry at your boss for not giving you a raise might become a dream where you're a cop angrily chasing a bank robber, but not one where you're a bank robber happily counting his loot. By interpreting the meaning of dreams, the psychoanalyst could work out what the patient really felt or wanted.

The problem of course is that it's easy to make up "interpretations" that follow this rule, whatever the dream. If you did dream that you were happily counting your cash after failing to get a raise, Freud could simply say that your dream was wish-fulfilment - you were dreaming of what you wanted to happen, getting the raise.

But hang on, maybe you didn't want the raise, and you were happy not to get it, because it supported your desire to quit that crappy job and find a better one...

Despite all that, since reading Freud I've found myself paying more attention to my dreams (once you start it's hard to stop) and I've found that his rule does ring true: emotions in dreams are "real", and sometimes they can be important reminders of what you really feel about something.

Most of my dreams have no emotions: I see and hear stuff, but feel very little. But sometimes, maybe one time in ten, they are accompanied by emotions, often very strong ones. These always seem linked to the content of the dream, rather than just being random brain activity: I can't think of a dream in which I was scared of something that I wouldn't normally be scared of, for example.

Generally my dreams have little to do with my real life, but those that do are often the most emotional ones, and it's these that I think provide insights. For example, I've had several dreams in the past six months about running; in every case, they were very happy ones.

Until several months ago I was a keen runner, but I've let it slip and got out of shape since. While awake, I've regretted this, a bit, but it wasn't until I reflected on my dreams that I realized how important running was to me and how much I regret giving it up.

While awake, we're always thinking about things on multiple levels: we don't just want X, we think "I want X" (not the same thing), and then we go on to wonder "But should I want X?", "Why do I want X?", "What about Y, would that be better?", etc. Thoughts get piled up on top of one another: it's all very cluttered.

In a dream, most of the layers go silent, and the underlying feeling comes closer to the surface. The principle is the same, in many ways, as this.

But how do I know that feelings in dreams are the "real" ones? In most respects, dreams are less real than waking life: we dream about all kinds of crazy stuff. And even if we accept that dreams offer a window into our "underlying" feelings, who's to say that deeper is better or more real?

Well, "buried" feelings matter whenever they're not really buried. If a desire was somehow "repressed" to the point of having no influence at all, it might as well not exist. But my feelings about running were not unconscious as such - I was aware of them before I had these dreams - but I was "repressing" them, not in any mysterious sense, but just in terms of telling myself that it wasn't a big deal, I'd start again soon, I didn't have time, etc.

The problem was that this "repression" was annoying, it was causing long-term frustration etc. In dreams, all of these mild emotions spanning several months were compressed into powerful feelings for the duration of the dream (a few minutes, although the dreams "felt like" they lasted hours).

Overall, I don't think it's possible or useful to interpret dreams as metaphorical representations in a Freudian sense (a train going into a tunnel = sex, or whatever). I suspect that dreams are more or less random activity in the visual and memory areas of the brain. But that doesn't mean they're meaningless: they're activity in your brain, so they can tell you about what you think and feel.

Sunday, 10 October 2010

The Joy of Sexism

This week, I've been embroiled in not one but two gender-based debates.

First up, I've been quoted in Delusions of Gender, the new book from Cordelia Fine, in which she examines the science of alleged sex differences in behaviour. The quote was from this 2008 post about Vicky Tuck, a teacher with odd ideas about the brains of boys and girls. I haven't had time to read the book yet, but a review's in the pipeline.

Then yesterday, I found out that I've been the subject of some research.
In this report, we detail research into the representation of women in science, engineering and technology (SET) within online media...

The research involved data collection and analysis from websites, web authors and young web users. We monitored SET content across 16 websites. Eight sites were generalist: BBC, Channel 4, SkyTV, The Guardian, The Daily Mail, Wikipedia, YouTube and Twitter.

Eight sites were SET-specific: New Scientist, Bad Science, The Science Museum, The Natural History Museum, Neuroskeptic Blog, Science – So What? So Everything, Watt’s Up With That? Blog and RichardDawkins.net.
Quite a line-up. Clearly they decided to look at the very best, most illustrious and most respected science blogs... and also Neuroskeptic. Anyway, unfortunately I can't access the paper, despite being in it, but according to the abstract they found that:
Online science informational content is male dominated in that far more men than women are present... we found that these women are:
  • Subject to muting of their ‘voices’. This includes instances where SET women are pictured but remain anonymous and instances where they are used, mainly as science journalists, to ventriloquise other people's scientific work.
  • Subject to clustering in specific SET fields and website sections, particularly those about ‘feminine’ subjects or specifically about women...
  • Associated with ‘feminine’ attributes and activities, notably as caring, demonstrating empathy with children and animals...
  • Predominantly White, middle-class, able-bodied and heterosexual.
  • Peripheral to the main story and subordinated as students, young scientists, relatives of a male scientist ... we found less hyperlinking of women’s than men’s names in online SET.
  • Discussed in terms of appearance, personality, sexuality and personal circumstances more often than men...
  • More generally, constructed in ways that relocate them in the private domestic sphere, detract from their scientific contribution, and associate them, more often than men, with the new category of ‘bad science’.
Without knowing the details it's hard to evaluate these claims, but it's fair to say that some of it rings true.

There's been lots of buzz recently about the gender ratio of science bloggers - we're mostly male, who'd have guessed? - and I suppose this would be a good time to chip in. Does it matter?

I think it does, and moreover it's part of a bigger picture. As far as I can see, science bloggers are mostly male, white, and under 40... and almost all of the biggest ones are also native English speakers. I don't know whether, overall, English speakers are overrepresented, because not all blogs are written in English and I only know the ones that are - but English ones get the lion's share of the traffic.

Back to gender: even in fields such as psychology and neuroscience, in which there are lots of female researchers, bloggers are overwhelmingly male. Likewise, a lot of researchers, even those working in English-speaking countries, are non-native English speakers, and they have an obvious disadvantage when it comes to blogging in English.

So science bloggers are drawn mostly from a narrow cross-section of the scientific community, which is a problem, because it greatly increases the chances of bloggers becoming an "echo chamber", or a clique, neither of which is likely to end well. Diversity is valuable, in this kind of thing, not because it's somehow morally good per se, but because it helps prevent stagnation.

Monday, 20 September 2010

The Refrigerator Mother

Autism is biological: that's the one thing everyone agrees about it. Scientific orthodoxy is that it's a neurodevelopmental condition caused by genetics in most cases, and by environmental insult, such as fetal exposure to anticonvulsants, in rare cases. Jenny McCarthy orthodoxy is that "toxins" - usually in vaccines - are to blame, not genes, and that the underlying damage might be in the gut, not the brain: but both camps agree that it's biological.

However, it hasn't always been this way. From the 1950s to about the 1980s, there was a widespread view that autism was a purely psychological condition. Bruno Bettelheim is the name most often linked to this view. Bettelheim spent most of his career at the University of Chicago's Orthogenic School, an institution for "disturbed" children, including autistic children as well as those labelled "schizophrenic" and others.

His magnum opus was his book The Empty Fortress: Infantile Autism and the Birth of the Self, in which he outlined his theory of autism illustrated by three long case histories. His ideas are now referred to as the "refrigerator mother" theory.

For Bettelheim, autism was a reaction to severe neglect. Not of physical needs, which would be fatal, but of emotional relations. In his view, the most common underlying cause of this neglect was when the mother (and to a lesser extent, the father) did not want the child to exist. They cared for him, but they did so in a mechanical fashion, treating the baby as a mouth to feed and a nappy to change, rather than as a human being.

Hence the "refrigerator" - it provides food, but it's cold.

The result was that the child never learned to interact with the mother on anything other than a mechanical level; and for Bettelheim, as for most psychoanalysts, our relationships with our parents were the model on which all our other relationships were based.

The mechanical mother thus left the autistic child unable to relate to anyone, indeed, unable to conceive of the existence of other human beings, and thus lacking a sense of "self" as opposed to "others".


The repetitive behaviours and obsessive interests characteristic of autism were seen as an active, even heroic, coping strategy. They were the child's way of asserting what little self they had, by doing something for themselves, albeit something "pointless". But they also had symbolic meanings: "Joey's" interest in fans, propellers and other rotating objects was interpreted as a representation of the "vicious circle" of his life. And so on.

*

Bettelheim's ideas are now generally derided as dangerously wrong; his reputation suffered a hit when, after his suicide in 1990, stories emerged from former colleagues and patients painting him in a nasty light. But psychiatry's wider turn away from Freud and towards biology probably made his downfall inevitable.

Today the "refrigerator mother theory" is routinely cited as a cautionary tale of how deeply one can misunderstand autism. Ironically, Bettelheim's only reference to that term in The Empty Fortress is a quotation, from none other than Leo Kanner, the man who coined the term 'childhood autism' in 1944. Kanner referred to the "emotional refrigeration" he observed in the families of autistic children, although it's not clear that he thought of it as causing the autism.

There is no doubt that Bettelheim's approach was unscientific. He repeatedly claimed that the fact that many children improved after three or four years at the Orthogenic School proved that their autism was psychological, because if it were biological it would be permanent.

Yet there is no reason to assume that children with a neurodevelopmental disorder would never change as they grew up. There was no control group, let alone a placebo group, to show that the children wouldn't have "grown out of" some symptoms anyway. (Edit: In fact, Kanner himself had written about improvement with age way back in 1943, in the first ever paper about autistic children! So there was simply no excuse for Bettelheim's flawed argument.)

Bettelheim's attributing the cause of autism to family dynamics was post hoc: for each autistic child, he looked back into their family history (i.e. what the parents reported) and found that they "consciously or unconsciously" didn't want the child to exist.

Yet all this proves is that it is possible to interpret a parent's behaviour in that way, in retrospect, if you want to. The "or unconsciously" caveat creates endless scope for over-interpretation.

But even if we now see autism as a neurodevelopmental disorder, there is something attractive about Bettelheim's book: it seems to be a serious attempt to understand the autistic experience "from the inside", and to appreciate the autistic child as a person rather than a disease. This is something that we rarely see nowadays.

Bettelheim's problem was that he tried to understand autistic behaviour from the assumption that the autistic child was, deep down, entirely "normal". Hence his interpretation of, say, Joey's fascination with rotating objects as symbolic of his life situation (and also as reflecting the fact that his father was often flying away in propeller-driven aircraft, which he was).

Yet couldn't it be that Joey was just fascinated by spinning fans per se? "There's nothing interesting about rotating objects; they must have a hidden meaning, otherwise it makes no sense" - to someone who isn't autistic, that is. But all that means is that trying to understand the autistic child is rather difficult if you don't bear in mind that they are autistic.

Monday, 12 July 2010

I Feel X, Therefore Y

I'm reading Le Rouge et le Noir ("The Red and the Black"), an 1830 French novel by Stendhal...

One passage in particular struck me. Stendhal is describing two characters who are falling in love (mostly); both are young, have lived all their lives in a backwater provincial town, and neither has been well educated.
In Paris, the nature of [her] attitude towards [him] would have very quickly become plain - but in Paris, love is an offspring of the novels. In three or four such novels, or even in a couplet or two of the kind of song they sing at the Gymnase, the young tutor and his shy mistress would have found a clear explanation of their relations with each other. Novels would have traced out a part for them to play, given them a model to imitate.
The idea that reading novels could change the way people fall in love might seem strange today, but remember that in 1830 the novel as we know it was still a fairly new invention, and was seen in conservative quarters as potentially dangerous. Stendhal was of course pro-novels (he was a novelist), but he accepts that they have a profound effect on the minds of readers.

Notice that his claim is not that novels create entirely new emotions. The two characters had feelings for each other despite never having read any. Novels suggest roles to play and models to follow: in other words, they provide interpretations as to what emotions mean and expectations as to what behaviours they lead to. You feel that, therefore you'll do this.

This bears on many things that I've written about recently. Take the active placebo phenomenon. This refers to cases in which a drug creates certain feelings, and the user interprets these feelings as meaning that "the drug is working", so they expect to improve, which leads them to feel better and behave as if they are getting better.

As I said at the time, active placebos are most often discussed in terms of drug side effects creating the expectation of improvement, but the same thing also happens with real drug effects. Valium (diazepam) produces a sensation of relaxation and reduces anxiety as a direct pharmacological effect but if someone takes it expecting to feel better, this will also drive improvement via expectation: the Valium is working, I can cope with this.

The same process can be harmful, though, and this may be even more common. The cognitive-behavioural theory of recurrent panic attacks is that they're caused by vicious cycles of feelings and expectations. Suppose someone feels a bit anxious, or notices their heart is racing a little. They could interpret that in various ways. They might write it off and ignore it, but they might conclude that they're about to have a panic attack.

If so, that's understandably going to make them more anxious, because panic is horrible. Anxiety causes adrenaline to be released, the heart beats ever faster, etc., and this causes yet more anxiety, until a full-blown panic attack occurs. The more often this happens, the more they come to fear even minor symptoms of physical arousal, because they expect to suffer panic. Cognitive behavioural therapy for panic generally consists of breaking the cycle by changing interpretations, and by gradual exposure to physical symptoms and "panic-inducing" situations until they no longer cause the expectation of panic.

This also harks back to Ethan Watters' book Crazy Like Us which I praised a few months back. Watters argued that much mental illness is shaped by culture in the following way: culture tells us what to expect and how people behave when they feel distressed in certain ways, and thus channels distress into recognizable "syndromes" - a part to play, a model to imitate, though probably quite unconsciously. The most common syndromes in Western culture can be found in the DSM-IV, but this doesn't mean that they exist in the rest of the world.

Like Stendhal's, this theory does not attempt to explain everything - it assumes that there are fundamental feelings of distress - and I do not think that it explains the core symptoms of severe mental illness such as bipolar disorder and schizophrenia. But people with bipolar and schizophrenia have interpretations and expectations just like everyone else, and these may be very important in determining long-term prognosis. If you expect to be ill forever and never have a normal life, you probably never will.

Sunday, 4 July 2010

Fingers

How many fingers do you have?

10, obviously, unless you've been the victim of an accident or a birth defect. Everyone knows that. You count up to ten on your fingers, for one thing.

But look at your left hand - how many fingers are on it? Little finger, ring finger, middle finger, first finger... thumb. So that's 4. But then we'd only have 8 fingers, and we all know we have 10. Unless the thumb is a finger, but is it?

Hmm. Hard to say. Wikipedia has some interesting facts about this question, and on Google if you start to type in "is the thumb", the top suggested search terms are all about this issue. It's a tricky one. People don't seem to know for sure.

But does that mean there's any real mystery about the thumb? No - we understand it as well as any other part of the body. We know all about the bones and muscles and joints and nerves of the thumb, we know how it works, what it does, even its evolutionary history (see The Panda's Thumb by Stephen Jay Gould, still one of the greatest popular science books ever). Science has got thumbs covered.

The mystery is in the English language, which isn't quite clear on whether the word "finger" encompasses the human thumb; for some purposes it does, i.e. we have 10 fingers, but for other purposes it probably doesn't, although even English speakers seem to be in two minds about the details (see Google, above).

Notice that although the messiness seems to focus on the thumb, the word "thumb" is perfectly clear. The ambiguity is rather in the word "finger", which can mean either any of the digits of the hand, or only the digits with three joints. Take a look at your hand again and you'll notice that your thumb lacks a joint compared to the fingers - something I must admit I'd forgotten until Wikipedia reminded me.

Yet it would be very easy to blame the thumb for the confusion. After all, the other 4 fingers are definitely fingers. The fingers are playing by the rules. Only the thumb is a troublemaker. So it comes as something of a surprise to realize that it's the fingers, not the thumb, that are the problem.

*

So words or phrases can be ambiguous, and when they are, they can lead to confusion, but not always in the places you'd expect. Specifically, the confusion seems to occur at the borderlines, the edge cases, of the ambiguous terminology, but the ambiguity is really in the terminology itself, not the edge cases. To resolve the confusion you need to clarify the terminology, and not get bogged down in wondering whether this or that thing is or isn't covered by the term.

It's important to bear this in mind when thinking about psychiatry, because psychiatry has an awful lot of confusion, and a lot of it can be traced back to ambiguous terms. Take, for example, the question of whether X "is a mental illness". Is addiction a mental illness, or a choice? Is mild depression a mental illness, or a normal part of life? Is PTSD a mental illness, or a normal reaction to extreme events? Is... I could go on all day.

The point is that you will never be able to answer these questions until you stop focussing on the particular case and first ask, what do I mean by mental illness? If you can come up with a single, satisfactory definition of mental illness, all the edge cases will become obvious. But at present, I don't think anyone really knows what they mean by this term. I know I don't, which is why I try to avoid using it, but often I do still use it because it seems to be the most fitting phrase.

It might seem paradoxical to use a word without really knowing what it means, but it isn't, because being able to use a word is procedural knowledge, like riding a bike. The problem is that many of our words have confusion built-in, because they're ambiguous. We can all use them, but that means we're all risking confusing each other, and ourselves. When this gets serious enough the only solution is to stop using the offending word and create new, unambiguous ones. With "finger", it's hardly a matter of life or death. With "mental illness", however, it is.

Saturday, 26 June 2010

Password

A few days ago, a friend of mine had her GMail account compromised, resulting in much stress for all concerned. This prompted me to change my passwords.

That was three days ago. Since then, I've logged into GMail maybe ten or fifteen times, and every single time I've initially typed the old password. Sometimes, I catch myself and change it before hitting "enter", but usually not. Access denied. Oops. It's getting slightly better, but I think it'll be a good few days before I'm entering the new password as automatically as I did the old one.

It's not hard to see why this kind of thing happens: I'd typed in the old password hundreds, probably thousands, of times over the course of at least a year. It had become completely automatic. That kind of habit takes a long time to learn, so it's no surprise that it takes quite a while to unlearn (though hopefully not quite as long).

Psychologists will recognize the distinction between declarative memory, my conscious knowledge of what my new password is, and procedural memory, my ability to unconsciously type it. It's also commonly known as "muscle memory": this is misleading, because it's stored in the brain, like all knowledge, but it nicely expresses the feeling that it's your body that has the memory, rather than "you".

Damage to the hippocampus can leave people unable to remember what happened ten minutes ago, but perfectly capable of learning new skills: they just don't remember how they learned them. But you don't have to suffer brain damage to experience procedural knowledge in the absence of declarative recall. I've sometimes found myself unable to remember my password and only reminded myself by going to the login page and successfully typing it. I knew it all along - but only procedurally.

The thing about procedural knowledge is that when it works, you don't notice it's there. So we almost certainly underestimate its contribution to our lives. If you asked me what happens when I log in to GMail, I'd probably say "I type in my username and my password". But maybe it would be more accurate to say: "I go to the login screen, and my brain types my username and password."

Can I take the credit, given that sometimes I - my consciousness - don't even know the password until my brain's helpfully typed it for me? And while in this case I do know it some of the time, much of our procedural knowledge has no declarative equivalent. I can ride a bike, but if you asked me to tell you how I do it, to spell out the complex velocity-weight-momentum calculations that lie behind the adjustments that my muscles constantly make to keep me upright, I'd be stumped.

"I just sit down and pedal." But if I literally did that and nothing more, I'd fall flat on my face. There's a lot more to cycling than that, but I have no idea what it is. So can I ride a bike, or do I just happen to inhabit a brain that can? Isn't saying that I can ride a bike like saying that I can drive just because I have a chauffeur?


Take this train of thought far enough and you reach some disturbing conclusions. Maybe it's not so hard to accept that various skills lie outside the reach of our conscious self, but surely the decisions to use those skills are ours alone. Sure, my brain types my username and password for me, but I'm the one who decided to log in to GMail - I could have decided to turn the computer off and go for a walk instead. I have Free Will! Like George W. Bush, I'm the Decider. My brain just handles the boring details.

But isn't deciding a skill too? And willing, remembering, thinking, judging, feeling, concluding - I can do all those things, but if I knew how I do them, I'd win the Nobel Prize in Physiology or Medicine, because I'd just have solved the hardest questions of neuroscience. So can I take credit for doing them, or is it my brain?

Ultimately, every conscious act must be constructed from unconscious processes; otherwise there would be an infinite regress of consciousness. If the world rested on the back of a giant turtle, what would the turtle stand on? Turtles all the way down?

Link: The Concept of Mind (1949) is a book by the British philosopher Gilbert Ryle, from which I "borrowed" the ideas in this post, and which was probably the one book that most inspired me to study neuroscience.

Wednesday, 23 June 2010

Carlat's Unhinged

Well, he's not. Actually, I haven't met him, so it's always possible. But what he has certainly done is write a book called Unhinged: The Trouble with Psychiatry.

Daniel Carlat's best known online for the Carlat Psychiatry Blog and in the real world for the Carlat Psychiatry Report. Unhinged is his first book for a general audience, though he's previously written several technical works aimed at doctors. It comes hot on the heels of a number of other recent books offering more or less critical perspectives on modern psychiatry, notably these ones.

Unhinged offers a sweeping overview of the whole field. If you're looking for a detailed examination of the problems around, say, psychiatric diagnosis, you'd do well to read Crazy Like Us as well. But as an overview it's very readable and comprehensive, and Carlat covers many topics that readers of his blog, or indeed of this one, would expect: the medicalization of normal behaviour, over-diagnosis, the controversy over pediatric psychopharmacology, brain imaging and the scientific state of biological psychiatry, and so on.

Carlat is unique amongst the authors of this mini-genre, however, in that he is himself a practising psychiatrist, and moreover, an American one. This matters because almost everyone agrees that, to the extent that there is a problem with psychiatry, American psychiatry has it worst of all: it's the country that gave us the notorious DSM-IV, where drugs are advertised direct to the consumer, and where children are diagnosed with bipolar disorder and given antipsychotics.

So Carlat is well placed to report from the heart of darkness, and he doesn't disappoint, as he vividly reveals how dizzying sums of drug company money sway prescribing decisions and even create diseases out of thin air. His confessional account of his own time as a paid "representative" for the antidepressant Effexor (also discussed in the NYT), and of his dealings with other reps - the Paxil guy, the Cymbalta woman - has to be read to be believed. We're left with the inescapable conclusion that psychiatry, at least in America, is institutionally corrupt.

Conflict of interest is a tricky thing though. Everyone in academia and medicine has mentors, collaborators, people who work in the office next door. The social pressure against saying or publishing anything that explicitly or implicitly criticizes someone else is powerful. Of course, there are rivalries and controversies, but they're firmly the exception.

The rule is: don't rock the boat. And given that in psychiatry, all but a few of the leading figures have at least some links to industry, that means everyone's in the same boat with Pharma, even the people who don't, personally, accept drug company money. I think this is often overlooked in all the excitement over individual scandals.

For all this, Carlat is fairly conservative in his view of psychiatric drugs. They work, he says, a lot of the time, but they're rarely the whole answer. Most people need therapy, too. His conclusion is that psychiatrists need to spend more time getting to know their patients, instead of just handing out pills and then doing a 15 minute "med check" - a great way of making money when you're getting paid per patient (4 patients per hour: ker-ching!), but probably not a great way of treating people.

In other words, psychiatrists need to be psychotherapists as well as psychopharmacologists. It's not enough to just refer people to someone else for the therapy: in order to treat mental illness you need one person with the skills to address both the biological and the psychological aspects of the patient's problems. Plus, patients often find it frustrating being bounced back and forth between professionals, and it's a recipe for confusion ("My psychiatrist says this but my therapist says...")

This leads Carlat to the controversial conclusion that psychiatrists should no longer have a monopoly on prescribing medications. He supports the idea of (appropriately trained) prescribing psychologists, an idea which has taken off in a few US states but which is hotly debated.

As he puts it, for a psychiatrist, the years in medical school spent delivering babies and dissecting kidneys are rarely useful. So there's no reason why a therapist couldn't learn the necessary elements of psychopharmacology - which drugs do what, how to avoid dangerous drug interactions - in, say, one or two years.

Such a person would be at least as good as a psychiatrist at providing integrated pills-and-therapy care. In fact, he says, an even better option would be to design an entirely new type of training program to create such "integrated" mental health professionals from the ground up - neither doctors nor therapists but something combining the best aspects of both.

There does seem to be a paradox here, however: Carlat has just spent 200 pages explaining how drug companies distort the evidence and bribe doctors in order to push their latest pills at people, many of whom either don't need medication or would do equally well with older, much cheaper drugs. Now he's saying that more people should be licensed to prescribe the same pills? Whose side is he on?

In fact, Carlat's position is perfectly coherent: his concern is to give patients the best possible care, which is, he thinks, combined medication and therapy. So he is not "anti-" or "pro-medication" in any simple sense. But still, if psychiatry has been corrupted by drug company money, what's to stop exactly the same thing happening to psychologists as soon as they get the ability to prescribe?

I think the answer to this can only be that we must first cut the problem off at its source by legislation. We simply shouldn't allow drug companies the freedom to manipulate opinion in the way that they do. It's not inevitable: we can regulate them. The US leads the world in some areas: since 2007, all clinical trials conducted in the country must be pre-registered, and the results made available on a public website, clinicaltrials.gov.

The benefits, in terms of keeping drug manufacturers honest, are too many to list here. Other places, like the European Union, are only just starting to follow suit. But America suffers from a split personality in this regard: it's also one of the only countries to allow direct-to-consumer drug advertising. Until the US gets serious about restraining Pharma influence in all its forms, giving more people prescribing rights might only aggravate the problem.

Wednesday, 28 April 2010

Head Trip

A quick post to recommend the 2007 book Head Trip, by Jeff Warren.

Head Trip is about "24 hours in the life of your brain": sleeping, waking, and everything in-between, from lucid dreaming to daydreams and hypnosis.

Warren gives a nice overview of current research and theory, along with the story of his personal quest to experience the full spectrum of consciousness.

The book's most interesting chapter is called "The Watch". It's about that hour or two of wakefulness which occurs in the middle of the night, between the first sleep and the second sleep. You know the one...right? Neither did I, but apparently, this makes us a bit weird, historically speaking.

Warren says that until the era of artificial lighting and alarm clocks, sleep was segmented. It was common for people to sleep twice each night, with a bout of wakefulness in the middle. This nocturnal alertness wasn't quite like daytime waking, though: it was more relaxed, less focussed, carefree. Our modern sleep pattern, then, is a kind of compression, with the two sleeps pushed together until they merge into one.

There are two lines of evidence for this. The first is historical: writings from the pre-modern era routinely refer to "first sleep" and "second sleep", and many languages, although not modern English, had special words for these periods and for the wakefulness in between. This is according to historian A. Roger Ekirch in his history of night-time, At Day's Close (review, Wiki), a book I really want to read now.

The second is experimental: the findings of sleep psychiatrist Thomas Wehr, in particular his classic 1992 study, In short photoperiods, human sleep is biphasic. Wehr took healthy American volunteers and put them in an artificial environment with a controlled light cycle, such that there were only 10 hours of brightness per day (that's 6 hours less than we get on average, even in winter, thanks to artificial light). Within a few weeks, "their sleep episodes expanded and usually divided into two symmetrical bouts, several hours in duration, with a 1-3 h waking interval between them."

This is pretty freaky. Sleeping through the night seems natural, normal and healthy: if we wake up before we need to get up, we're dismayed, and we call it insomnia. Yet maybe unbroken sleep is a modern invention, like electric lighting itself. There's something amazing, and also a bit disturbing, about that idea. As Warren says, it's like finding out that your house "is really the exposed bell-tower of a vast underground cathedral".