Monday, November 21, 2011

Round Two Reviewing: An Exercise in Conditional Probabilities

We're in "round 2" of reviews for NSDI, and it's brought up a problem I've noticed before.  I worry that, subconsciously, I'm inclined to give papers I read in the second round a higher score, since I'm swayed by the fact that they've already made it to the second round.

I wonder if anyone in the PC world has done any testing to see if this is a real phenomenon.  Are second round reviews of a set of papers statistically different from the first round reviews?  I would bet yes, even controlling for the fact that the papers made it to the second round.  In particular, I suspect it's much harder for people to give a score of 1 (= reject) to a second round paper.
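
If a PC chair had the raw scores, a quick sanity check might look something like the sketch below.  This is purely illustrative: the data, the 1-5 scale, and the focus on how often a 1 appears are my own assumptions, not anything from a real PC.  Comparing the same papers' scores across the two rounds at least controls for paper quality.

```python
# Rough sketch (hypothetical data, 1-5 scale assumed): for papers that advanced,
# do round-2 scores run systematically higher than round-1 scores on the same
# paper, and do scores of 1 (= reject) become rarer?
from statistics import mean
from scipy.stats import wilcoxon

# Hypothetical scores: paper id -> (round-1 scores, round-2 scores).
scores = {
    "paper_a": ([3, 4, 2], [4, 4]),
    "paper_b": ([2, 3, 3], [3, 3]),
    "paper_c": ([4, 4, 3], [5, 4]),
    "paper_d": ([1, 3, 3], [3, 2]),
}

round1_means = [mean(r1) for r1, _ in scores.values()]
round2_means = [mean(r2) for _, r2 in scores.values()]

# Paired test: does the same paper tend to score higher in round 2?
stat, p = wilcoxon(round1_means, round2_means)
print(f"Wilcoxon signed-rank p-value: {p:.3f}")

# How often does the lowest score (1 = reject) appear in each round?
r1_ones = sum(s == 1 for r1, _ in scores.values() for s in r1)
r2_ones = sum(s == 1 for _, r2 in scores.values() for s in r2)
print(f"Score-of-1 reviews: round 1 = {r1_ones}, round 2 = {r2_ones}")
```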

One could imagine testing for this by sticking a few obvious rejects from the first round into the second round reviews.  Indeed, perhaps one should make this part of the process: randomly select a few clear rejects to go into round two, and announce to the PC that you're doing this.  Then reviewers might not feel so averse to assigning a score of 1 in the second round.
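
Just to make the thought experiment concrete, here's a toy simulation of what such an audit might show.  All the numbers are invented; the "bias" here is simply a reviewer who bumps any would-be 1 on a round-two paper up to a 2, which is one crude way the reluctance I'm describing could play out.

```python
# Toy simulation of the planted-reject audit described above (made-up numbers).
# A "clear reject" paper draws mostly 1s; a biased round-2 reviewer refuses to
# give a 1 and assigns a 2 instead, leaving a visible fingerprint on the
# planted papers' scores.
import random

random.seed(0)

def true_score():
    # A clear reject: mostly 1s, occasionally a 2.  Purely hypothetical.
    return random.choices([1, 2], weights=[0.8, 0.2])[0]

def round2_score(bias=True):
    s = true_score()
    if bias and s == 1:
        return 2  # the hypothesized reluctance to assign a 1 in round two
    return s

trials = 10_000
round1 = [true_score() for _ in range(trials)]
round2 = [round2_score() for _ in range(trials)]

print("Planted clear rejects:")
print(f"  round-1 mean score: {sum(round1)/trials:.2f}, "
      f"fraction of 1s: {round1.count(1)/trials:.2f}")
print(f"  round-2 mean score: {sum(round2)/trials:.2f}, "
      f"fraction of 1s: {round2.count(1)/trials:.2f}")
```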

One joy of the second round reviews is that once you submit a review, you get to see the first round reviews.  So far, I feel I've been calling them fairly; when I haven't liked a second round paper, the first round reviews seem to confirm my opinion.  So perhaps (with some effort) I'm successfully keeping my subconscious at bay, and not conditioning on the fact that it's a round 2 review.

1 comment:

Arvind Narayanan said...

Even if you do have a subconscious bias, is it necessarily a problem, assuming that all reviewers have the same bias? Each paper will still have the same probability of acceptance that it would have had with unbiased reviewers.