Alexey
I Hate Your Paper - The Scientist - Many say the peer review system is broken. Here’s how some journals are trying to fix it - http://www.the-scientist.com/2010...
“When it comes to journals and publications, I’m highly skeptical that [the peer review] process adds much value at all,” adds Richard Smith, former editor of the British Medical Journal, who has written extensively about peer review. “In fact, it detracts value because it wastes a lot of time of a lot of people,” he says. “There’s lots of evidence of the downside of peer review, and very limited evidence of the upside.” - Alexey from Bookmarklet
I wonder: are there studies that have measured the relative quality of various review processes? - Mike Chelen
"The full podcast, originally created for the Journal of Participatory Medicine, is hosted here." http://www.patientpower.info/JoPM... (Podcast with transcripts) -- Peer Review and Reputation Systems: A Discussion -- 1. Defining the Problems and Issues with Peer Review Today -- 2. Light Versus Heavy Peer Review -- 3. Transparency in Peer Review -- 4. Wikipedia-Style Peer Review…and Rating/Reputation Systems -- 5. Crowdsourcing Research/Peer Review -- 6. Building a Community - Claudia Koltzenburg
Hmm, I don't know if peer-review improves papers but it has so far often improved *my* papers... - Björn Brembs
What's your take, Björn: does review quality depend on anonymity? Should it be up to the reviewer whether to review blind or non-blind? - Claudia Koltzenburg
btw, have not seen this idea voiced before ;-) - Claudia Koltzenburg
I am with Bjoern here, I see improvement of papers through peer review (mine and others'), both on substantial issues and on smaller things. But I also don't think that peer review is a bulletproof system. - Kubke
I think the point of peer review is not to improve papers but to prevent the publication of truly shitty ones. We really should judge the system on the true negatives. Kind of like democracy - democracy is great, not because it helps you choose great leaders, but because it helps you get rid of shitty ones. - Bosco Ho
if publications were wiki documents then... +1 - Claudia Koltzenburg
Yes, like Kevin Kelly proposed (wikiscience): http://www.edge.org/3rd_cul... Additionally micro-contributions like comments and corrections should be measured and added to the overall reputation of a person. - Konrad Förstner
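A minimal sketch, in Python, of the kind of micro-contribution bookkeeping Konrad describes (the contribution types and weights below are illustrative assumptions, not part of any actual proposal):

# Illustrative only: contribution types and weights are assumptions.
from collections import defaultdict

CONTRIBUTION_WEIGHTS = {
    "comment": 0.5,      # small annotation on someone else's paper
    "correction": 2.0,   # substantive correction
    "review": 3.0,       # full review of a manuscript
    "paper": 10.0,       # authored publication
}

def reputation(contributions):
    """Aggregate (person, contribution_type) pairs into per-person scores."""
    scores = defaultdict(float)
    for person, kind in contributions:
        scores[person] += CONTRIBUTION_WEIGHTS.get(kind, 0.0)
    return dict(scores)

# Two comments and a correction all add to the same person's total.
print(reputation([("a", "comment"), ("a", "correction"), ("b", "paper")]))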
I'm going to offer counter examples. I've had several papers that were damaged by peer review in my opinion, and several more which were severely delayed in reaching publication and not significantly improved by the process, leading to potential opportunity costs (one paper which took nearly two years to get published has subsequently got around 50-80 citations, depending on what you count). But in any case these are just personal experiences and anecdotes, not real data. The data says that improvements are marginal and opportunity and direct costs very high. - Cameron Neylon
Bosco, I disagree. Firstly the evidence suggests that most of the shitty papers get published anyway eventually and secondly, like I argued here (http://cameronneylon.net/blog...) actually the more rubbish is out there, the easier it will be to find what you want. - Cameron Neylon
Bosco: while Cameron may be right in arguing against the papers part of your argument, I think the democracy part is a true gem... - Nils Reinton
Yep Nils, I don't think search is going to do much to improve our parliamentarians unfortunately... - Cameron Neylon
In both cases, though, bringing in transparency has lots of potential to improve the system. By the way, I listened to the podcast last night (not knowing about this thread then) and took my notes at http://ff.im/oVHcb . - Daniel Mietchen
Not sure I've done this before, but -- I disagree with Cameron here. The assumption that more data = better search just doesn't convince me at all. I've seen a lot of papers that were basically trash, but that would appear as hits in any search that would also find good work on the same topic. I think the figure that gets bandied about is that only 30% of papers fail to find a home somewhere -- this seems deeply misleading to me, as it doesn't take into account the changes that had to be made in the other 70% in order to find them a home. The problem with peer review is that it gets asked to do something it cannot do (filter for quality) at the expense of what it can do (filter for scientific validity). It would be a much faster and more robust process if journals did not waste everyone's time by asking reviewers to predict the future. - Bill Hooker
There's a step missing there. My argument was more that we need more data to build better search systems to enable better search. Putting out more stuff with today's Google won't help, but it could help build tomorrow's, was the idea. In terms of the 70/30 figure, the thing is we don't have any real data on whether papers are improved, delayed, or made worse - most scientists say it improves their papers, but that doesn't necessarily tell us what is really going on. Equally, most scientists say, as you have, that it is an effective way of testing validity. The evidence for that is equivocal at best, the most troubling test being the BMJ putting deliberate errors in papers to see whether referees caught them and finding that they didn't. - Cameron Neylon
Well, we've had this part of the conversation too -- if reviewing were seen as part of a researcher's job and being good at it were properly rewarded, then studies like the BMJ one might not find such rampant slack-arsery. I guess part of this is failure of imagination compounded by lack of knowledge on my part -- I don't see how any search algorithm is going to be able to distinguish between scientifically valid work and trash masquerading as same. I can't get past the need for a filter, and don't see how to automate that. - Bill Hooker
Imagine two papers, reporting the same experiments but in paper A the obvious control was omitted in each case, whereas paper B included proper controls. A and B may even draw the same conclusions! Both are going to be hits on any search I can imagine, and then you have to read 'em to know that A is rubbish. Scale that up, and you pretty quickly get beyond the point where you can read what you need to read. Unless I'm dead wrong, and peer review (as currently implemented) is such shite that we are actually in the situation I describe, and paper A will just make its way "down" the journal prestige ladder until someone publishes it and it fouls up my search. - Bill Hooker
I begin to think that what we need most is more data... - Bill Hooker
I don't really buy the opening crystallography example as an argument that peer review is broken. Okay, one reviewer suggested something unnecessary. There should have been more than one reviewer, and the editor (or his/her minions) should have reviewed the reviewer's comments, either omitting any blatantly ridiculous ones in correspondence back to the author or easily accepting the author's note that it did not need to be addressed. - Rachel Walden
Bill, agree that more data is required to draw any sensible conclusions but in response to your point about paper A and B, as far as I'm aware we have absolutely no credible evidence that this is _not_ the case and some examples of dreadful peer review at the top of the pile. So should we be spending billions a year on something we have no evidence does any good? - Cameron Neylon
In any case, my argument would be that unless you have both papers A and B then there is no way anyone can develop tools to distinguish between them, whether they be social or technical. Also, surely the argument must be that both support the same conclusion and it is only by reading them together that you get the fullest possible picture... even without the control, paper A strengthens paper B's case. - Cameron Neylon
Bill: Why should the concept of search be limited to keywords? Consider Friendfeed for example, where results are highlighted by number of Likes. If anyone considers one paper better than the other, it would appear higher in the search results. - Mike Chelen
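A rough Python sketch of the like-weighted ranking Mike is alluding to (the scoring formula and field names are assumptions for illustration, not FriendFeed's or anyone's actual algorithm):

import math

def rank_hits(hits):
    """Order keyword hits by text relevance plus a damped 'likes' bonus."""
    def score(hit):
        # log1p keeps a large pile of likes from completely swamping relevance
        return hit["relevance"] + math.log1p(hit["likes"])
    return sorted(hits, key=score, reverse=True)

hits = [
    {"title": "Paper A (controls omitted)", "relevance": 3.0, "likes": 0},
    {"title": "Paper B (proper controls)", "relevance": 3.0, "likes": 12},
]
print([h["title"] for h in rank_hits(hits)])  # Paper B ranks first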
Mike, may be a matter of semantics but when you get into "likes" etc, to me that's post-publication review -- in other words, a filter. I love the idea, but a glance at PLoS journals (and other experiments) will show that it hasn't taken off: people just don't interact with the research literature (yet?) in a way that makes social filtering effective. - Bill Hooker
Cam, if A really is going to get published without the controls (70% of the time), then I'd have to conclude that peer review is a waste of time and money. I'm finding that hard to believe, but as you point out -- I have no evidence. I've made fun of other people for falling into the trap where "I can't imagine it" == "it's not possible", and now it seems I am hoist on my own petard... certainly can't argue that in order to develop an automated way of telling A from B a reasonable corpus of A's and B's will be required. - Bill Hooker
A different way to consider peer review: it's as much a psychological barrier as a real one. So, consider paper A -- it may be shite but the authors thought it was good enough to pass peer review. If they didn't even have to consider that hurdle, what might they be pushing out? Is there a mountain of substandard work that would be dumped into the knowledge base but for fear of the Peer Review Ogre? Put another, more cynical way: in the absence of a first-pass filter there will be a temptation to plump up one's paper count, and then what is the obvious fallback for tenure committees and so on? Your favourite and mine: journal reputation! Yet another angle: who will go first, and abandon peer review? I can already hear the sneering from the CNS crowd... - Bill Hooker
That argument I'd agree with more. One of the things I've not seen a discussion of is the extent to which the citation advantage in high IF journals is a result of author selection bias (i.e. authors are probably pretty well placed to say which of their papers will be most successful and send them to 'appropriate' journals on the basis of that). There's a flip side to this tho. What about if the publication of Paper A prompts the publication of Paper B because the story isn't complete? Again tho, we'd need real data to know what the balance was. - Cameron Neylon from twhirl
see also http://journalology.blogspot.com/2010... (e.g., "CNS" disease explained) thx to http://ff.im/oZdka - Claudia Koltzenburg