Enough Blame to go around

From Rajesh-Kumar.org


In 2012, I added this note to my website www.cs.jhu.edu/~rajesh as www.cs.jhu.edu/~rajesh/A-short-note. Whatever else is on these pages is the fallout of this note. Below is the note recreated verbatim.


You might well ask why I am adding this. Because this is probably the only forum where I haven't yet made myself look silly.

So here goes nothing. A professor I used to *seriously* respect recommended in a recent conversation:

...so the only thing I could suggest is to contact the chair. He is a conscientious person and treats all queries seriously.

So in a recent email to a conference chair (ICRA 2012), I wrote:

...the work is worthy, it makes a material difference to the students in this case, and given that this is the second set of excuses
after <conference-x>, I am forced to make a stand to the point of looking silly.

I have made myself look silly, for sure. And it made no difference, of course, for the chair retorted without even reading the email:

Copying a sentence or two from the work of others is not something that IEEE accepts.

Ok. Now that is a serious allegation, if there ever was one.

The authors on this paper come from four different institutions; one of them is a chair on the same conference committee. My visiting student had worked particularly hard on it, and the results are superior to previous journal articles. Perhaps there was something we had done -- it is nearly impossible not to have some accidental overlap these days. Let's examine what they found. ICRA these days uses a new tool for checking this. It's called iThenticate. Here is the report: Media:iThenticate-report.pdf

There were two problems with the chair's response.

  1. Our report, which you can see above, didn't really have anything in it (an accomplishment in itself!), unless the word figure is now copyrighted, an email address constitutes a problem, or a support vector machine description somehow constitutes a claim of ownership. More disturbingly,
  2. You would think comparing any article with the rest of the world's research would be impossible.

Somehow this magical software accomplishes it. Except the users are finding it doesn't quite work very well yet. I quote an AE from the same conference below. Perhaps other associate editors did the same; perhaps they did what was done for our paper; perhaps something else entirely. Is that a recipe for fair play?

I was an Associate Editor for ICRA this year, and I didn't think it worked very well. Every paper I managed could have been accused of copying a line or two from here and there. After looking at the scan reports, I decided it was best to ignore them.

As it turns out, the chair was just trying to wiggle out of even reading my email properly. We protested, pointing out to the chair that there was nothing in the summary he could be bothered by, that reviewers don't even see it, and that this was not what the associate editor was worried about. He responded with the shortest possible email:

This was raised in the confidential comments by the AE. Sorry but the decisions will stand.

I know he is busy, but in a follow-up email, even before we could respond, he stressed that there was some super-secret information, visible only to the associate editors and editors and not to authors and reviewers, that made it all right. Except there isn't. So the only reason I can think of for this having occurred is:

What price you, you soft-money position...

Yes, that's an unsubstantiated fear. But this is the third time I am hitting this wall, so I am beginning to wonder. Perhaps, perhaps not. One would imagine the chair would at least think twice and read our requests before shooting off these emails, not once, but three times. And if IEEE (or indeed broader academia) were so bothered about quality, they would have easily caught the following:

- Publication 1 (http://ijr.sagepub.com/content/24/9/731.short),

- Publication 2 (http://www.nada.kth.se/~danik/Papers/kragic_iros03.pdf)

- My thesis.

This is not very difficult to decode, and it is relatively visible work, in case anyone was wondering. Maybe I am wrong here as well (update 12/16 - no, I am not!). I have asked JHU, IEEE, and IJRR to figure this one out (and will add here whatever they tell me). If I am wrong, I am wrong -- sorry. It looks funny and I was confused. I am not accusing anyone of anything, unless the respected bodies above give me something written otherwise. (A dozen people who have looked at it haven't had the courage to tell me it's not funny so far.)

So? Get to the point already...

As an author and reviewer, I have been distressed by both the quality of submissions and the quality of reviews in robotics conferences and journals for quite some time. Submissions routinely do not describe the authors' previous work in any way that lets reviewers understand what is new -- even when that work appeared in the conference immediately preceding the submission or, in the case of a journal article, in a recent conference paper or thesis. Often even the authorship is not quite right, e.g. the adviser on a thesis left off the resulting journal or conference paper. On the other hand, the reviewers don't seem to have any interest in improving the quality of submissions. Reviews are frequently delegated entirely to graduate students, or are performed by people who might have a conflict or who lack a suitable background. Most reviewers also take on many more reviews than they can possibly do justice to.

And there appears to be no recourse whatsoever. Chairs routinely respond with "we can't get quality reviewers" (I can quote at least three in the last two years) and "we can only interpret what we get from the reviewers." Or, in the case above, they make up whatever funny excuse they want. In the meantime, quality work goes unpublished and undiscussed in the interactive fora that only conferences can provide, while much lower quality work gets routinely published. Case in point - Publication 2 (http://www.nada.kth.se/~danik/Papers/kragic_iros03.pdf). Does anyone even read these papers anymore? Do the authors themselves read them at least twice before submitting?


If the conferences (and journals) were serious (of course, who cares about what I have to say?), they could easily institute blind submissions, a serious rebuttal process, conflict-of-interest and personal-bias declarations by reviewers, and certification of knowledge of the field for the paper being reviewed. There is usually a choice in a review for expertise level and so on, but that is just not enough: I could still review a paper I had very little knowledge about, or I could be over-confident of what I know. And the AE is free to do whatever they like with what I write, and if I mark the paper right down the middle, it is sure to be rejected. So you need consequences for bad reviews. Perhaps just dropping offenders from reviewer lists, if we are afraid of anything more? There has to be a means to get back to the associate editors and, if need be, to the individual reviewer. Time alone is not a good enough excuse to hurry the process along.

There. Now I have made myself look completely silly. Pointing out that the emperor is nude is never a very wise thing to do. In the meantime, I have sent the conversations to IEEE, JHU, and the respective schools. Is that a better way to resolve this?

Perhaps the system will yet surprise me. But given that it's 2012, I fear it will only surprise me badly (update 12/16: so far, things are happening on schedule and confirming my fears).


To be continued. As time and health permit.



<< Preface .... Episode Three >>