I'm not alone in my concerns, as an interesting new paper reveals: Publication Bias in Laboratory Animal Research. The authors surveyed the approximately 3,000 Dutch scientists involved in research on laboratory animals; the response rate was about 20%.
When asked how much animal research ends up being published, university researchers estimated about half, but industrial scientists put it at only about 10% - which, if true, suggests that publication bias in Pharma animal work is extremely serious.
In terms of solutions, the survey considered two ideas which Neuroskeptic readers will be familiar with - public pre-registration of studies:
"Mandatory anonymous publication of research protocols of all ethics-approved animal research experiments in a publicly available database"
and also open access to all data:
"Mandatory anonymous publication of a brief structured form in a publicly available database, that gave main results or explained why an experiment could not be completed"
On average, the surveyed researchers felt that these measures would aid scientific progress; improve the validity of the literature; and prevent wasteful duplication of effort - but they also worried that they would increase bureaucracy.
Now, bureaucracy is second only to bias on my list of Things I Hate About Science, so I share their concern - but I really think registration wouldn't have to involve any extra paperwork. In many cases, it could be implemented simply by making existing data public.
For instance, grant applications and requests for ethical approval already contain detailed a priori protocols in most cases. They could very easily be published (perhaps with certain details removed for confidentiality reasons) and turned into a powerful weapon against publication bias.
Having said that, though, it could easily end up being needlessly complicated and obstructive, as so much of the scientific process unfortunately is today. It will all depend on how it's implemented.
This is why I think it's so important that, as scientists, we reform science ourselves, and get it right, rather than leaving it to the bureaucrats, who won't.

14 comments:
http://www.youtube.com/watch?v=ihooFXrGBM0
The truth will emerge. RIP Dr Szasz.
Naively trusting that turning science into a paid profession, without some form of ballotage at study entrance level, wouldn't make any form of fraud more likely is less than visionary.
This was the politest way I could come up with to say: if the powers that be push anybody with a good retentive memory but bereft of talent into higher education, you are a moron if you didn't see this coming.
I am 100% certain that there is no untainted part of science to be found - not even the most exact. Case in point: the so-called 'discovery' of the Higgs boson.
I agree with most of your post, except the very last point. In practice, scientists aren't very interested in, or very good at, regulating themselves. Michael Gazzaniga's failed initiative to require data sharing at Journal of Cognitive Neuroscience is a good example--well-intentioned but mostly pointless, and possibly counter-productive over the long term.
Real regulation--the kind with teeth--comes from governments and funding agencies. Instead of encouraging scientists to self-regulate, I would say that scientists should get involved with the conventional policy-making apparatus. Call your congressperson, lobby for sensible policy--or even get a job working for the government or at an advocacy group. Instead of leaving science policy to the bureaucrats, scientists may want to consider becoming the bureaucrats themselves.
Joshua: Mmm, fair point, but it seems a bit 20th century ;-) This is the Facebook Age - surely we have a better chance of organizing this kind of thing than previous generations?
You may be right, but I think we should at least try to put our own house in order before calling in the cleaners.
Forgive me if this has been brought up before, but wouldn't open access to everyone's data create the problem of data plagiarism? Any graduate student could just take a sample (the cleaner the better) from any dataset, call it their own, and write up a thesis. I'm sure there are safeguards for such a thing, but they have yet to be fleshed out.
Although I definitely agree with the idea of making the data from even negative studies available for everyone, the one concern I always have is: where will the data be stored?
The amount of space needed to store all of this data on a server somewhere would surely be incredible and likely very expensive. I'm no computer whiz though - does anyone know if this is a valid concern, or am I becoming my dad, who used to become alarmed whenever I'd mention that I was "burning" a CD?
Anonymous: That could happen, but it would be easy to detect.
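For what it's worth, here is a minimal sketch of what "easy to detect" might look like once datasets sit in a public archive - the file names and the 90% threshold below are purely hypothetical, and a real repository would presumably use something more sophisticated than simple row hashing:

import csv
import hashlib

def row_fingerprints(path):
    """Hash each data row, ignoring column order, so reordered copies still match."""
    fingerprints = set()
    with open(path, newline="") as f:
        for row in csv.reader(f):
            key = "|".join(sorted(cell.strip() for cell in row))
            fingerprints.add(hashlib.sha256(key.encode()).hexdigest())
    return fingerprints

def overlap(submitted_csv, archived_csv):
    """Fraction of the submitted dataset's rows already present in the archive."""
    new, old = row_fingerprints(submitted_csv), row_fingerprints(archived_csv)
    return len(new & old) / len(new) if new else 0.0

# Hypothetical usage: flag a thesis dataset that is largely a subset of a public one.
if overlap("thesis_data.csv", "public_archive.csv") > 0.9:
    print("Large overlap with an archived dataset - worth a closer look.")

The point is that once everything is public, copied data leave exactly the kind of trail that plagiarised text already does.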
MikeSamsa: I would guess storage would not be a huge issue, as it's continually getting cheaper to purchase or build servers with large storage capacity. An example (3 years old) would be: http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/
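As a rough back-of-the-envelope check (every number below is an assumption for illustration, not a figure from that post):

# Back-of-the-envelope estimate: all of these numbers are assumptions, not measurements.
studies_per_year = 10_000      # assumed number of animal studies archived per year
avg_dataset_gb = 2             # assumed average raw-data size per study, in GB
cost_per_tb_year = 100         # assumed storage cost in dollars per TB per year

total_tb = studies_per_year * avg_dataset_gb / 1000
print(f"{total_tb:.0f} TB/year, roughly ${total_tb * cost_per_tb_year:,.0f}/year")
# -> 20 TB/year, roughly $2,000/year to store

Imaging-heavy fields would obviously push those numbers up, but the order of magnitude suggests storage is not the binding constraint.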
Nice post Neuroskeptic, would like to also hear (or be pointed to) your take on the state of significance testing in psychology. I feel rather disillusioned with it myself.
I think the problem is more fundamental than requiring pre-registration, regulation, etc. The reward structure of science as it is set up now - publications with an emphasis on positive and flashy findings, and without regard to actual quality of findings - is the problem. Publications in psychology/psychiatry are pretty much the road to everything good in a research career - grants, tenure. And as long as that is a priority, I don't think the system will ever change.
The reward structure is certainly a problem, but I see that as being a symptom of the way publication works rather than a cause.
I don't think anyone planned the current reward structure or even thinks about it very much (except to criticize it).
Rather, it emerged inevitably out of the publication structure (which no-one planned and which exists for various historical and economic reasons).
Scientists will always be judged by their publications. I think that's inevitable (what else is there?). But it is not inevitable that scientists can only get publications by getting the 'right' results - journals could peer review and accept papers based on the Methods a priori and then publish the Results when they come in, for example, as I have suggested before; and there are other options.
I might add that the majority of animal research is performed on male animals. This is a clear source of bias.
The first anon here - NS, just because the reward structure has been a particular way doesn't mean it needs to continue that way. The old system worked in a time when academics and their publications were far fewer in number. It's obsolete now and the cracks are beginning to show in a very obvious manner. While some self-monitoring and regulatory practices like those you mentioned could help, I think those are temporary solutions. For a more honest academia, something more revolutionary will be required.
One possible solution that I can think of would be to ask each lab (however you end up defining a lab unit) to simply submit a series of reports - say 5-10 each year, no more and no fewer - detailing all the work the lab has done, whether it produced positive or negative findings. All that should matter is that the folks in the lab did that research - not whether it was positive or negative, where it got published, etc. This would also help get rid of the overreliance of tenure, grants, etc. on publications. An additional benefit would be to decrease the number of journals and publications as well.
Of course, there's the inevitable argument that we shouldn't restrict research. I don't think the scheme I have outlined will restrict it. On the contrary, I think it will encourage more quality research since it removes the pressure to publish and spam journals with crappy papers.
Anon: I like that idea. But it's like I said - your idea (rightly) is to target publication practices, and use that as a way of changing the reward system. What I'm trying to say is that that's the proper order - we can't change rewards (to reward "good science rather than lots of high-impact papers", which is what many people rightly want to do) except by means of addressing publication.
Here is an interesting editorial, relevant to your topic, from a Boston University professor of law in the most prestigious medical journal:
I quote: "(...) The academic researchers involved in the controversy regarding the safety data for Avandia has thus far escaped sanctions as well. (...)"
http://www.nejm.org/doi/full/10.1056/NEJMp1209249 Punishing Health Care Fraud — Is the GSK Settlement Sufficient? — NEJM
It is also very useful for teaching non-mathematicians, non-physicists and the like about the relative sizes of the billions paid by "Bad Pharma" (Ben Goldacre) to the US states and the federal government, and the billions of their profits.
///The 2012 fines against Abbott Laboratories and GSK represent a modest percentage of those companies' revenue. Companies might well view such fines as merely a cost of doing business — a quite small percentage of their global revenue and often a manageable percentage of the revenue received from the particular product under scrutiny. (...)///
It seems to me that human nature, plus - as petrossa wrote on 23 09 12 at 16:34 - the lack of selection for integrity among PhD candidates in science, makes fraud likely as long as nobody is really hurt by it.
When you come to think of it, fraud in science is a crime with social consequences at different levels (the innocent scientists in that lab, the poor journalists who make headlines out of the fraud (joking), the taxpayer, etc.).
Why should a US citizen, for example, readily serve time in jail for income-tax fraud of some magnitude, but not for deliberate scientific fraud that can kill people, or that is the equivalent of income-tax fraud with regard to the taxpayer's money?