There have been quite a few studies using neuroimaging to measure the activations associated with tackling hypothetical moral dilemmas, but what makes the new paper interesting is that the participants were faced with a real moral choice. Well, mostly real.
The task was called "Your Pain, My Gain". The participants were put in the MRI scanner and given £20 at the start of the experiment. Then they could spend some of the money to help save another person from getting electric shocks. The more they spent, the less severe the pain administered to the victim. Video footage of the victim receiving the shocks was then played to the decider, and the process repeated.
Selfish people could choose to keep more of the cash, if they could handle the guilt of knowing that they were shocking their poor victim. At the end of the study, the remaining sum was multiplied by a random factor of between 1 and 10, meaning that there was, potentially, £200 at stake - a serious amount of money.
Actually, the task was a sham - as these experiments usually are. All of the 'shock' videos were pre-recorded and there was no victim, so the decision to keep the money had no real effect. But the experimenters tried to make it as believable as possible (of the 18 volunteers, 4 were skeptical, and they were excluded).
There was also a comparison task, which was the same idea, but the participants were told it was purely hypothetical, and would have no real consequences.
The fMRI data showed that both the 'real' and hypothetical tasks activated a 'shared moral network' (although note that some of these analyses were not corrected for multiple comparisons):
However, the hypothetical and real tasks also elicited distinct patterns of activation. As the authors write:
hypothetical moral decisions mapped closely onto the imagination network, while real moral decisions elicited activity in the bilateral amygdala and anterior cingulate—areas essential for social and affective processes.

Hmm. To be honest, the results are a bit messy, but the method is extremely interesting. I've written before about the need to keep it real when it comes to stimuli in fMRI experiments, so this is important: hypothetical moral dilemmas are no substitute for the real thing. Personally, I'm holding out for someone to replicate the Milgram experiment in the MRI scanner, although I'm not sure anyone would get ethical approval for that nowadays...

8 comments:
How can uncorrected results still pass the review process?
If I had PMS that day he can fry. In this day and age you have got to be a virgin if you think you're 'saving' a dude through a wannabe CIA experiment. No! no! Don't shock him I will sacrifice the 2 pence, oh LORD please stop but continue with the sadistic thrill I'm a virgin.
Cool. I always thought that 'moral dilemma' questions were worthless.
Sometimes I think, why do I even bother. But no way in hell can you determine such complex processes from what is basically a flawed computer model of the human brain in action.
You can wish it to be true, but doesn't make it so.
I think time is long overdue for a compulsory advanced statistics and computer modeling course for neuroscientists.
The paper is meaningless; its conclusions are directly the consequence of the researcher's pre-formed opinion.
"out of 18 volunteers, 4 were skeptical, and they were excluded"
I suppose it makes a change from the usual fMRI studies of psychology undergrads, this time it's people too dumb to realise what an ethics board would approve.
I agree with the above comments; it seems pretty difficult to make that believable. Did the 4 skeptical volunteers behave differently from the rest of the group?
Why is the word "circuitry" in the title of this paper?