Cameron Neylon
Interesting point from Greg Wilson on my post "Cameron Neylon’s recent post about peer review is pretty damning. It’s interesting to compare his description of peer review’s faults with what peer review in open source projects does right." - Cameron Neylon
And my comment [currently in moderation]: Interesting to think about this. The two obvious factors are, first, the “open” bit. One of the serious problems with traditional peer review is that it is necessarily limited by the small number of people involved - this is probably what maximises the random effects. The flip side, making it open, obviously has both the advantage and disadvantage of allowing anyone who is interested to comment. The central challenge is setting the procedural barriers to comment at a level which maximises the important signal relative to noise. From a naive external perspective, the other advantage code has is that you can make “objective” assessments and test them against reality through explicit tests. Does this modification pass the unit tests? Is there evidence that speaks to whether this is real, or is it just the imagination of us non-computational people? Comparison of code peer review in closed vs open systems? - Cameron Neylon
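[A minimal sketch of the "objective assessment" point above: in code review, a proposed change can be judged mechanically against explicit tests - either they pass or they don't. The `slugify` helper and its tests here are hypothetical, purely for illustration:]

```python
# A contributor proposes a change to a (hypothetical) slugify helper.
# Unlike a manuscript, the patch can be checked against explicit,
# pre-agreed tests: acceptance is not only a matter of opinion.

def slugify(title):
    """Turn a title into a lowercase, hyphen-separated URL slug."""
    return "-".join(title.lower().split())

def test_lowercases_and_hyphenates():
    assert slugify("Open Peer Review") == "open-peer-review"

def test_collapses_whitespace():
    assert slugify("open   review") == "open-review"

if __name__ == "__main__":
    test_lowercases_and_hyphenates()
    test_collapses_whitespace()
    print("all tests pass")
```

[Of course, tests only cover the properties someone thought to encode - the analogy to manuscript review is partial, not exact.]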
I'm (still) not convinced that 'limited by a small number of people involved' is actually a problem. Sure, "two heads are better than one", so I'm not disputing that having more people to look at a problem likely gives better feedback overall. Yet, if you apply this as a standard, where will you get all the extra people and time for all the problems/papers? And how can we make sure that each manuscript/paper gets a fair shake? IMO, this kind of "openness" is too unregulated to be efficient on a global scale. - Wobbler
Cameron's "open peer review" refers to "public peer review", and the way he suggests it there, it is absolutely unregulated inefficiency. As for your system: journals don't publish rejected manuscripts, so there's no peer review that can be published alongside them. There are additionally two other issues: first, journals don't share peer reviews with other journals (except maybe within the same brand/publisher), and second, scholars aren't all too excited about openly peer reviewing. - Wobbler
Within the neuroscience publications there is sharing of peer reports, as there is within NPG publications - this works for a good paper that just needs to find an appropriate home (could journal editors bid for it? The author writes the paper, gets certified peer reports, then takes bids from journal editors?). But there's still the issue of papers that need work or are problematic - probably a higher proportion in fields where there isn't internal review within a lab first, or where there isn't consensus over what is good. - Christina Pikas
Sorry to bring up the same example again, but the EGU experiments with public peer review have shown that the quality of submission goes up considerably, precisely because things have to be brought up to standard before sending it off to the journal - otherwise, the reputation of the authors may take hits. - Daniel Mietchen
@Wobbler the evidence, to my eye, shows that traditional peer review is essentially a waste of time, so we don't necessarily need to do a lot better - just having something distinguishable from random would be good. Greg's point is that open peer review of code _does_ work effectively, so all I am really saying is that we should look for what lessons can be learnt. I think we need to get away from the idea that everything needs to be peer reviewed. If 90% of the current literature were grey I don't think it would actually make the slightest difference. - Cameron Neylon
The EGU thing is really interesting - I haven't had a chance to look at it in detail but it does seem to be working in that direction and seems to be getting adoption. - Cameron Neylon
Your comment's longer than the post... Quick comment: open peer review tends to attract interested parties. Traditional peer review has a fairly strong element of randomness (through the odd assignments made by journal editors) so I might end up reviewing papers outside my area of expertise. Now on the one hand you could say that having reviewers in the same area as you reviewing your papers is a good thing, since they will know what they are talking about and are likely to review quickly. On the other hand I worry about the reviewing process in that case becoming nepotistic or one big love-in between influential members of a certain area. So how to involve a little cross-discipline randomness in open reviewing (when everyone is busy and pressed for time). This is not a big worry, but it's there. I worry about it like 17% of the time. - Matthew Todd
I've read a lot of papers about peer review, and most of them don't say peer review is essentially a waste of time. Most say that it wastes a lot of time (i.e. horribly inefficient) and that it is expensive, but still necessary and so far the best that we have. I'm actually for changing it, but I don't see how "uncontrollably/openly" putting/expecting (more) scholars on one manuscript/review session is going to improve the efficiency? How will that reduce the randomness? - Wobbler
As for the open source comparison, (and I apologize for perhaps incorrectly generalizing again) most open source projects aren't exactly in a time race. There's no real "final" state of a project, no "this is either correct or incorrect" and people's careers aren't significantly affected when it goes slowly or doesn't go anywhere at all. They're not putting their reputations on the line for putting out a product that's at its beginning stages. I'd definitely consider practical/professional open source still a niche. It is (still) firmly in the "people with paying jobs spending their spare time working on something else for fun" category. - Wobbler
Wobbler, based on what I've read of Cameron's in the past (and much other discussion on here), I think he doesn't focus on 'public peer review' (as defined above, meaning anyone comments, no reviewers are assigned) as the only case to discuss. He's just responding to this specific comment about open source peer review. In general, Cameron and others have laid out many benefits of opening up the peer review process, such as in his post that Third Bit was referring to: This doesn't have to mean "free for all" is the only form of peer review. Dorothea and others highlight one of these benefits above: there would likely be huge efficiency gains because authors would not be able to submit work that they know is low-quality and incomplete. Peer review would then automatically be allocated more efficiently towards the best papers, those most worthy of improving. I can see this from the perspective of both a referee and an author. As a referee, I'd say that more than 50% of the articles I've seen were very clearly submitted with huge gaps, banking on the fact that they'd get a chance to fill in the holes after the first round of review. As an author, I am very confident that every first-author paper I've had has been significantly improved by very thoughtful comments from anonymous referees. I feel bad that they don't get credit. The same is true for most of my grant applications - a lot of hard work by anonymous referees whom I cannot credit. Grant applications give a further example of the wasted effort Cameron's talking about. Usually they're declined, and all the writing of the applicants and the review panels is completely lost when the grant is submitted to another agency. What a waste of time and ideas. - Steve Koch
The purpose of peer review is essentially twofold: to assess the quality of the paper and to provide suggestions for improvement. "Incomplete" is not necessarily a good motive not to submit something for peer review. It's not always easy to determine exactly the most beneficial scope for a research article. For example, since all of your first-author papers were significantly improved, couldn't your papers also count as "incomplete" before you submitted them? I'd say the same for "low-quality" papers. Also, and perhaps this actually supports your points more than mine, most scholars publish because they have to publish. Not necessarily because they think their work is of high quality (which it probably isn't, going by the statistics). If everybody is doing it, and they do, because they have to, there's less incentive not to publish just because one perceives one's own work as less than good quality. In addition, journal editors are still the ones who decide which manuscripts to peer review (including public peer review elements). And if it's good enough for the editor... With the current mentality, I don't see how even open peer review is going to stop low quality papers from being published, let alone submitted. It might take a longer while though. Which might be a good thing, if only it weren't offset by basically depending on more scholars per peer review session for one manuscript. But I'm just speculating right now. It's very possible you're right that open peer review will actually (significantly) reduce the submission of low quality manuscripts that have no hope of becoming good quality papers even after peer review. Which does indeed free up time for peer reviewers to focus on the manuscripts that will turn into good ones (if they weren't already) with some good peer reviewing. Definitely food for thought. - Wobbler
@Wobbler do you have a bibliography on this somewhere easily to hand I could take a look at? My reading is pretty incomplete (and I was hoping to provoke people to point at positive studies of peer review), but everything I've read that does single-blind trials of peer review replication has found reliability (measured by correlation of two repeated processes) that is either no better than chance or marginally better, usually dominated by very highly rated and very low rated items. [not asking you to do my reading for me but if you happened to have some tagged papers somewhere that would be great] - Cameron Neylon
Bear in mind that my personal opinion is that only a very small proportion of papers currently published should be peer reviewed (maybe 10%), but that they should all be published in some form. And that those that are peer reviewed should have a mechanism for continuous public peer review over time. One way this might be implemented is by having a central repository where things are lodged and anyone can choose to peer review whatever they like - after some number of reviews, or some other editorial decision, these could be "formally journal published". Another option is the Frontiers type system, or those that are being explored by the EGU as Daniel mentions. Lots of different modes. - Cameron Neylon
Ad Steve, (1) "public peer review" in the EGU sense, even though conducted entirely in public, implies the usual >=2 assigned referees (which may remain anonymous), plus the option for everyone else (who will have to use their real name). (2) Using a similar scheme for grant applications would certainly be a huge step forward, as discussed, e.g., at . - Daniel Mietchen
Ad Cameron, the Frontiers review system is not "public", although it allows for more interaction between authors and reviewers than most closed systems. It is, however, "open" in the sense that the names of the reviewers and editors are published along with the manuscript. - Daniel Mietchen
I don't know of any positive studies on peer review. I doubt there are any. As I've said, I agree with you that the current forms of (journal) peer review are highly inefficient and expensive, but they are still the best that we currently have (empirically anyway). Take Richard Smith for instance. I know of no better proponent for changing peer review. And in this article, he actually completely supports your notion and rejects my views. But even his research reveals that, while not perfect, peer review does get errors out of manuscripts. I'm not wondering whether open/public peer review improves the quality per peer reviewed article, because I'm sure it does, but whether it's relatively as efficient/manageable as the (journal) peer review system? And how that would be possible if you essentially need to account for more people per peer review session. I'm assuming that even in open/public peer reviewing, manuscripts may require a second round (likely after revisions). Assuming that, and that more people are required per review session, wouldn't that simply negate any gains, and even add to, the already inefficient redoing of peer reviews by 2 scholars per session? - Wobbler
Another "public" option exists at Scholarpedia, where authors may choose to have their (assigned) reviews in the classical way (i.e. on a dedicated public page attached to the article), or to let reviewers modify the article directly. In either case, the reviewers may remain anonymous, so the peer review system is not "open". Also, others can chime in - anonymous or not, but all modifications to the article are subject to approval by the "curator" (usually one of the authors). - Daniel Mietchen
@Wobbler -- yes, my first paper was "incomplete," in the sense that a referee suggested a month's worth of experiments that led to a new and very good figure in the paper. It was clearly suggested by a biologist, and as a physics graduate student, I would not have been embarrassed attributing those ideas to the referee. So, I think that was efficient use of peer review. That referee and others also lambasted many areas of the paper, causing my advisor to withdraw many wild and ridiculous claims. One used the term "laughable," which I loved. I had no influence over my advisor, so I was really happy about that. If it had been known that our first submitted version of the manuscript would be published along with the referee reports, I suspect my advisor would not have rushed that copy out the door without listening to intense criticism from the peons in the lab. The peer review "worked," but I think a lot of effort could have been saved with an open system. Alternatively, I really think those referees deserve credit for the work they did. And more people could have chuckled than just the few of us who got to see the reviews. - Steve Koch
@Wobbler -- but from referee perspective, I am talking about cases where it seems obvious that authors have thought, "We know this is a sloppy piece of crap that doesn't make much sense now. But we'll submit it because peer-review will take several months, and in the meantime we can try to get our experiments to actually work. Hopefully we can convince the editors to accept with major revisions. If we wait until our experiments are done and correct, our paper won't be published as soon, and we'll lose credit for our work." - Steve Koch
Steve Koch: Is it too much to expect journal editors to have just enough knowledge to separate the "good or potentially good with peer reviews" papers from the "low quality beyond saving" papers? Or can't they just ask peer reviewers to warn them if they read such papers so they won't have to peer review? - Wobbler
I wouldn't think so! But I guess in practice it is too much to ask for editors that I deal with. And I'm getting better at recognizing problems before I spend too much time. But I still think this would be helped by publishing original manuscript + peer review. And that's only one of the benefits that Cameron, Daniel, etc. are outlining. Have any of the benefits of anonymous peer review (e.g., safely getting opinions that would otherwise be suppressed) been proven to exist? - Steve Koch
Note that anonymous peer review is possible in public. Dunno of any study actually targeting its benefits, but I can imagine there are some. - Daniel Mietchen
@wobbler - I keep hearing this "peer review is broken but it's the best we have", and my argument is that we have no evidence of that. The Churchill quote actually runs "...democracy is the worst form of government except all those other forms that have been tried..." [according to Wikipedia anyway...]. My point is that we haven't really tried any of the others, and we certainly haven't done the comparison. To say it is the best we have is simply to admit that it is the only system we have any real data on. If this were just us I wouldn't mind so much, but we're blowing $100B of taxpayers' money a year on this "science" stuff, so we have some responsibility to look at whether our systems are fit for purpose and, if not, to act on it. - Cameron Neylon
Well, actually, several peer review systems have been tried, to different extents, for both manuscripts and research proposals: 1) No peer review, 2) non-public pre-publication peer review (single-blind, double-blind, open), 3) public post-publication peer review (open). Most of the above comments were on 4) public pre-approval (to avoid a seeming contradictio in adiecto) peer review (single-blind or open), and that's where we do not have much data. - Daniel Mietchen
It's not that I don't like to agree with you, CN. But even putting aside that I may be wrong about the efficiency being worse: 1. There are plenty of surveys out there that provide evidence that the majority of scholars don't want to do open/public peer review. 2. There are a couple of experiments that show that those surveys are more accurate than not. 3. Journal publishers have no (financial) incentive to share peer reviews with other journals (other than within their own brand/publisher maybe). And I still think that certification of research (communication) is too serious to open it up without enforcing some controls to ensure that all manuscripts are at least screened and have a shot at being peer reviewed. - Wobbler
Ah ok - now there we can agree. The perception is that it works, and there isn't any real incentive to change the system (at least until it comes crashing down around our ears). And certification of some sort is clearly a useful service. So the question becomes: how (or should we) change perception? Or perhaps just that we need some more evidence? The problem of course being that the perception makes it near impossible to get good evidence... Kind of brings me full circle. What can we learn from examples where there is evidence that peer review like processes do work? - Cameron Neylon
Just noticed that "The pre-publication history for this paper can be accessed" in some BMC journals. Example: . - Daniel Mietchen