Monday, September 07, 2009

Controversy at ESA

As you might imagine, there's actually little controversy at ESA. But as I mentioned to some colleagues during the breaks, the only (consistent) way to get comments on a blog is to say controversial things, so that people are annoyed enough to comment. So here are some possibly controversial things, perhaps just a bit tongue-in-cheek, focusing on the business meeting (where I'm typing away now):

1) I was talking with some others this afternoon about awards. Why aren't more conferences giving test of time awards (particularly in theory)? An award for the best paper from 10 years ago at the conference seems like a wonderful thing to have regularly.

Then we started talking about negative awards. Besides giving a best paper award at the conference, what if we gave a worst paper award? Funnily enough, it didn't seem like such a negative. "Congratulations, you did just the right amount of work to get your paper over the bubble to get in!" How bad could you feel about that? Perhaps a worst talk award, in contrast, would inspire people to prepare better talks.

2) ESA had issues this year with deadlines -- they had to set their submission deadline a week after ICALP decisions, but still leave enough time for the other associated ALGO workshops to have their deadlines after ESA's and satisfy publication constraints for the proceedings. This didn't give a lot of time for the PC decisions. I know I had similar issues chairing STOC, as many other conferences tried to set their deadlines after our decisions would be announced.

Why don't we get a unified theory calendar, with all deadlines preset a year in advance? (This was essentially Thore Husfeldt's idea, if I've interpreted him right.) Stick it on a website/wiki somewhere, so everyone can see what's going on in advance and plan appropriately. Of course, such advance planning and coordination seems impossible (until we actually get organized and do it).

3) A related issue -- ICALP vs. ESA. Does one dominate the other in your mind? (Because of deadlines, ESA gets and accepts a lot of ICALP rejects.) Maybe we can get a European version of the SODA vs. FOCS/STOC debate going.

4) Amos Fiat, showing the slide at the business meeting on what percentage of papers in each "category" were accepted: "The higher the percentage, the less the PC knew about that topic." Discuss. (Once those SODA reviews come out, you can discuss in that context as well.)

17 comments:

JS said...

1) Test-of-time awards: the Edsger W. Dijkstra Prize in Distributed Computing is a nice example of a test-of-time award at a theory-related conference.

2) But we do have a unified conference calendar: the wiki-like www.confsearch.org. Try, e.g., focs+stoc+soda+esa+icalp. (And while you're at it, update the details of upcoming conferences if you already know them...)

Anonymous said...

LATIN will likely start a test-of-time award in 2010.

The deadline (September 21st) is coordinated with SODA's.

Anonymous said...

Here's a suggestion regarding awards. It's only semi-serious, and I'm not at all convinced it would be a good idea, but I'm curious to know what people think about it.

Instead of just having a few best paper awards, why not publicly rank-order all the accepted papers (allowing ties)? This would give far more recognition to the authors of excellent papers than they currently receive. After all, someone who has had several papers very near the top may be as deserving of praise as someone who managed once to get the best paper award. Plus, it's impressive if your papers are consistently above average, even if they are never quite at the top.

Of course, the rankings would be somewhat arbitrary. Fortunately, with a full ranking there are no sharp cut-offs. If your paper is mistakenly rejected, or you unfairly miss out on the best paper award, you suffer a real injury. By contrast, if your paper is ranked 12th when you think it should have been 8th, the injury is much smaller. Many people are afraid of making mistakes in their judgements, but we are already making mistakes, and the sharp cut-offs greatly magnify those mistakes.

One big obstacle is that it may make program committee meetings much more contentious. However, NSF panels seem to do a fine job of rank ordering grant proposals, so it can be done. Of course the proposal rankings aren't made public, but they are used for a purpose that certainly matters a lot to everyone involved.

Currently, we worry about distinguishing the accepted papers from the rejected papers, or the award winners from the rest, but we stop there. If these distinctions are valid, important, and worth publicizing, why not go further? I see no argument that we have arrived at the ideal amount of ranking. If the community benefits from the ranking we are already doing, perhaps we would benefit more from making finer distinctions.

My take is that we already do too much ranking. Accept vs. reject decisions are sadly necessary, but the only good reason for best paper awards is that other subfields of CS give them out and theory will suffer in comparison if we don't. However, I'd be curious to hear from people who think best paper awards are a good idea. Is there a compelling reason to distinguish the best one or two papers from the others, but not to distinguish the next best paper from the very worst?

Unknown said...

I like the idea of test of time awards a lot.

Anonymous said...

Isn't ICALP much better than ESA (at least for publishing papers on algorithms)? How about ICALP vs. ESA vs. STACS?

Anonymous said...

Two items we need:

* an agreed-upon total order for conferences and journals, so that we can save hiring, tenure, and promotion committees a lot of time. Ideally these should be given weights, e.g.
1 SODA paper = 1.8 ESA papers
so that research output can be easily condensed to a single number. This would also provide an incentive for journal publication, since both conference and journal publications would contribute to the sum. (A toy sketch of this metric follows after the list.)

* "so 10 years ago" awards for major conferences. These would be for papers that were highly rated 10 years ago but nobody gives a damn about any more.

Luca Aceto said...

LICS has had a test-of-time award since 2006. It goes back 20 years rather than 10.

The last commenter wrote

"so 10 years ago" awards for major conferences. These would be for papers that were highly rated 10 years ago but nobody gives a damn about any more.

To my mind, the list of winners of the LICS award indicates instead that those awards go to papers presenting work whose impact can still be felt within the LICS community and that have sparked off a lot of related work.

Mihai said...

Currently, we worry about distinguishing the accepted papers from the rejected papers, or the award winners from the rest, but we stop there.

How about the journal special issue? That is supposed to distinguish the top papers that didn't get the best paper award.

I don't think you can make a full ranking, so you have to decide on a few tiers and classify papers accordingly. Currently, we have "best paper(s) / special issue / simple accept," which seems like a rich enough classification.

In my opinion, ICALP seems to be marginally above ESA. ESA and SWAT/WADS seem comparable. MFCS and STACS are clearly weaker than ESA/SWAT/WADS.

There are too many conferences.

JS said...

"MFCS and STACS are clearly weaker than ESA/SWAT/WADS."

Why do you think so? At least STACS isn't an accept-everything conference; in STACS 2009, they accepted only 54 papers out of 282 submissions (about 19%). STACS has a longer history than ESA, SWAT, or WADS, and the organisers of STACS had enough confidence in the quality of the conference to switch from LNCS to open-access proceedings.

Anonymous said...

Based on papers/results that I am personally familiar with, I do not really think any of ICALP/STACS/ESA dominates the others. STACS has more "European theory" in it (which ICALP covers in track B), but other than that, all three conferences have good and bad papers. All three conferences have good attendance, which might make them more worthwhile than another conference that is not well attended.

Anonymous said...

At least STACS isn't an accept-everything conference

Are you somehow implying that ESA is? If so, can you back this up with acceptance rates?

STACS has a longer history than...

This is a non sequitur... what does age have to do with anything?

According to CiteSeer, the impact factors are as follows:

296. ESA: 0.99 (top 24.24%)
379. WADS: 0.82 (top 31.04%)
389. STACS: 0.80 (top 31.85%)
638. MFCS: 0.48 (top 52.25%)

Anonymous said...

I would like to hear more opinions about how these conferences compare. My prior belief was that ICALP is a bit better than STACS, which is much better than ESA. But now I'm confused. Is there any consensus about the rankings of these conferences?

(And how about these conferences vs. APPROX, LATIN, WADS, and SWAT? Actually, opinions on LATIN vs. STACS might be helpful, as their deadlines are approaching at almost the same time.)

I'm from the US and rarely submit to these conferences.

Anonymous said...

To the anonymous above: the CiteSeer statistics you refer to are quite old (here: http://citeseer.ist.psu.edu/impact.html). The newer ones are here: http://citeseerx.ist.psu.edu/stats/venues

Anyway, the statistics from CiteSeer are hard to believe. E.g., STACS got 0.04 while FOCS got 0.03.

JS said...

I think Citeseerx statistics are worse than useless.

Please note that the page http://citeseerx.ist.psu.edu/stats/venues shows by default statistics for 2007, i.e., http://citeseerx.ist.psu.edu/stats/venues?y=2007

If you compare 2007 statistics to e.g. http://citeseerx.ist.psu.edu/stats/venues?y=2003 it is evident that something is seriously wrong.

It seems to me that Citeseerx has very poor coverage of recent computer science papers, and the few papers they do have seem to have strange metadata, too. No wonder the impact factors computed from their 2007 data look weird.

ESAttendee said...

ESA 2009 Statistics

272 submissions
14 withdrawn

Track A accepted: 56/222 (25%)
Track B accepted: 10/36 (28%)

ESA accepted: 66/258 (26%)

@Mike: thanks to Springer, the organizers made a PDF of the proceedings available. If you ask, I suppose they'll give you a copy.

Anonymous said...

I concur that CiteSeer's impact factors are quite random, especially for theory conferences. However, let's take a look at a more comprehensive list of conference publication and citation counts.

http://libra.msra.cn/conf_category_24.htm

FOCS Publication: 2479 Citation: 36315
STOC Publication: 2713 Citation: 51471
SODA Publication: 2044 Citation: 21227
ICALP Publication: 2373 Citation: 15782
ESA Publication: 903 Citation: 3980
STACS Publication: 1293 Citation: 5761

As far as I know, the above numbers are sufficiently accurate and correspond to how the community views theory conferences. For example, FOCS and STOC are almost equivalent in impact and have the highest citation/publication ratios among theory conferences; SODA follows in third. As a side note, it seems that STACS and ESA have almost the same influence.
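Taking those numbers at face value, here is a quick script (purely illustrative) that computes the citation-per-paper ratios behind the ordering I just claimed:

    # Citations-per-paper ratios, computed from the numbers quoted above.
    stats = {
        "FOCS":  (2479, 36315),
        "STOC":  (2713, 51471),
        "SODA":  (2044, 21227),
        "ICALP": (2373, 15782),
        "ESA":   (903, 3980),
        "STACS": (1293, 5761),
    }

    for venue, (pubs, cites) in sorted(stats.items(),
                                       key=lambda kv: kv[1][1] / kv[1][0],
                                       reverse=True):
        print(f"{venue:6s} {cites / pubs:5.1f} citations/paper")

    # Output:
    # STOC    19.0 citations/paper
    # FOCS    14.6 citations/paper
    # SODA    10.4 citations/paper
    # ICALP    6.7 citations/paper
    # STACS    4.5 citations/paper
    # ESA      4.4 citations/paper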

Abhijit said...

My manuscript was rejected from ESA 2011. There was no serious technical objection to my proofs. Does it stand a chance at STACS or LATIN? Being a grad student, I want it published ASAP. Which of the two would be easier to get into?