Response to Jack Heinemann’s article on peer review

I read Jack Heinemann’s article with great interest, and he raises some important issues. What I take home from it is the fundamental problem of the power asymmetry between authors and editors. What I find most problematic is that these days the authors of studies appear to be considered guilty of negligence and misconduct by default. They have to defend their work in detail against the arguments of anonymous reviewers and sometimes also the editors. While authors are expected to back up their arguments with additional data or literature references, the reviewers are not expected to live up to the same quality standards.

Just recently one of my own papers was rejected based on a comment by one of the reviewers. The reviewer questioned the suitability of a certain statistical method and did provide a reference to support his/her argument. The source, however, was an entire statistics textbook, and even after spending considerable time searching we could not find the section that would support the reviewer’s critique. The editors did not intervene. Moreover, the editors kept the paper in the review process for 18 months. In the end, some newly invited reviewers even questioned the novelty of the paper! I refer to such a scenario as “Death by Review”. I am not certain whether it will still be possible to publish the paper at all.

In another recent incident, one of my papers was rejected from alt.chi despite receiving ten (!) favourable reviews. The editors simply overruled the reviewers. I think that most of us can tell such horror stories from personal experience that highlight some of the problems of the peer review process. The debate is not new, but it is surprising that so little has been done to remedy its shortcomings. It appears that most journals and conferences still use the traditional review process. Government agencies in New Zealand also use the peer review system to decide which projects to fund. Those government processes are, as Jack highlighted, among the most problematic: the actual reviews are withheld from the authors, the decision making happens behind closed doors, and the feedback that authors receive on their proposals is usually minimal.

The peer review process traditionally serves two purposes: judging the quality of a paper and providing feedback that helps authors improve their manuscripts. The quality judgement is the basis for filtering papers in a publication channel. Filtering used to be an important function due to the economics of print publishing: every issue of a journal could only have a limited number of pages. With the arrival of the internet, this limitation is gone. Journals can publish as many articles as they want without increasing their distribution costs, so filtering has lost its economic importance.

The filtering process also does not stop bad papers from being written or published. The biggest fool can make up stories, and automatic paper-writing software has even been developed that produces papers which can pass through the review process without making any sense. Nor can the peer review process prevent manuscripts from being published. Authors can nail their papers to church doors, as Luther supposedly did with his 95 theses, or publish their work in self-produced formats, such as self-published books or web pages. There are even dedicated journals for negative or contradictory results. Besides the humorous aspects of the Journal of Universal Rejection, these types of publication channels serve an important function: to some degree they remedy the “File Drawer Effect”, a bias towards publishing only statistically significant, positive effects.
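
To make the “File Drawer Effect” concrete, here is a minimal simulation sketch; all parameters (the true effect size, sample size, and significance bar) are illustrative assumptions, not taken from any real study. If only studies that clear a significance threshold make it out of the file drawer, the published record overstates the true effect:

```python
# Minimal sketch of the "File Drawer Effect": publishing only studies
# that clear a significance bar inflates the apparent effect size.
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.1   # small true effect (assumed)
N_PER_STUDY = 30    # participants per study (assumed)
N_STUDIES = 2000    # number of simulated studies

def run_study():
    """Simulate one study: return the observed mean effect and whether it
    clears a crude 'significance' bar (mean more than ~2 standard errors)."""
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_STUDY)]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / N_PER_STUDY ** 0.5
    return mean, mean > 2 * se  # one-sided, roughly p < 0.05

results = [run_study() for _ in range(N_STUDIES)]
all_means = [m for m, _ in results]
published = [m for m, sig in results if sig]  # the file drawer keeps the rest

print(f"True effect:                {TRUE_EFFECT:.2f}")
print(f"Mean over all studies:      {statistics.mean(all_means):.2f}")
print(f"Mean over 'published' only: {statistics.mean(published):.2f}")
```

Averaged over all simulated studies, the observed effect matches the true effect; averaged over the “published” subset only, it is several times larger.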

Advances such as Open Peer Commentary or the “All-In Publication Policy” [1] are largely being ignored. Recently I started a new journal with some colleagues that also offers an innovative approach to publishing academic work. For a paper to be accepted into Interaction Records, it has to have been rejected by at least three other journals or conferences. The reviews and the authors’ responses are published along with the article itself. We have not yet received a single submission. Some of my colleagues even asked me whether this was a hoax. Well, it is not. What this reaction demonstrates is the fear that many academics have. We are so afraid of academic dispute that we do not dare to break out of the perceived norms of the academic community. A good example of the benefits of academic dispute is the article by Peters and Ceci [2]. They resubmitted articles that had already been accepted by 12 journals back to those same journals under new author names. Most of the articles were rejected based on a lack of scientific quality, and not because of the obvious plagiarism or lack of novelty. The paper was published using the open peer commentary system, and the comments published along with the article outgrew the article itself by a ratio of ten to one. The comments discuss the study in detail and are almost more interesting than the article itself.

But back to the story of Interaction Records. Upon the announcement of the journal’s launch, some publication venues immediately reacted and declared that their reviews cannot be sent to our journal, since they claim copyright over the material. I find this highly problematic, first of all because the initial reaction of those venues was to protect property rather than to consider the academic benefits of our new publication venue. This points to the fact that the problems of the peer review process are also connected to clear business goals: the publishers are defending their business model. In our case this is rather absurd, since those journals will have had first dibs on all the papers anyway; Interaction Records only accepts previously rejected papers. Yet even for papers that they reject, they want to claim copyright over the peer reviews.

Another issue that Jack highlights is subjectivity. For reviewers’ arguments about statistics, and to some degree about the research methods used, it is often possible to find objective criteria. More commonly, however, papers are rejected over the “significance of the results”, the “contribution to the community” or the “originality of the work”. These quality criteria are to a large extent subjective, and reviewers rarely bother to quantify their judgements. There are even instructions available on “How to Reject any Scientific Manuscript” [3]. We should also not forget that there is almost always a hidden conflict of interest between the reviewers and the authors. Assuming that the reviewers are true experts in the field, they compete with the authors for the limited number of available publication slots. By being overly negative towards other manuscripts, reviewers increase the chances of getting their own work published. Of course this is unethical and is not supposed to happen, but I believe that this conflict explains some of the overly negative attitudes that reviewers exhibit. This leads me back to my and Jack’s main concern, the power asymmetry. Domenic Cicchetti nicely summarised this problem in his response to Peters and Ceci’s article, “On Peer Review: We have met the enemy and he is us”.

[1] Bartneck, C. (2010). The All-In Publication Policy. Proceedings of the Fourth International Conference on Digital Society (ICDS 2010), St. Maarten, pp. 37-40. DOI: 10.1109/ICDS.2010.14
[2] Peters, D. P., & Ceci, S. J. (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5(2), 187-195.
[3] Gernert, D. (2008). How to Reject any Scientific Manuscript. Journal of Scientific Exploration, 22(2), 233-243.

International Journal of Human-Computer Studies

The International Journal of Human-Computer Studies (IJHCS) has invited me onto its editorial board. I hope to be able to serve the community with my efforts. According to the Journal Citation Reports, IJHCS is second only to Human-Computer Interaction in terms of impact factor. The latest ranking is below:

Journal                 Impact Factor
HUM-COMPUT INTERACT     6.190
INT J HUM-COMPUT ST     2.380
USER MODEL USER-ADAP    2.345
INTERACT COMPUT         1.698
HUM FACTORS             1.458
ACM T COMPUT-HUM INT    1.194
INTERACT STUD           0.776
BEHAV INFORM TECHNOL    0.767
INT J HUM-COMPUT INT    0.587

Ranking of HCI Journals by Impact Factor
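
For readers unfamiliar with the metric: the two-year JCR impact factor for a given year is the number of citations received that year by items published in the two preceding years, divided by the number of citable items published in those two years. A minimal sketch with made-up numbers (not the actual counts behind the table above):

```python
def impact_factor(citations, citable_items):
    """Two-year impact factor: citations received in year Y to items
    published in Y-1 and Y-2, divided by the number of citable items
    published in Y-1 and Y-2."""
    return citations / citable_items

# Hypothetical example: 595 citations to articles from the two preceding
# years, spread over 250 citable items, gives an impact factor of 2.380.
print(f"{impact_factor(595, 250):.3f}")
```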

Call for Papers: NEW International Journal of Social Robotics

On behalf of the Editorial Board, we are very pleased to announce the launch of the International Journal of Social Robotics (Springer), with the goal of providing a common platform for researchers, scientists, artists and designers to share their findings. The journal will publish the latest developments in social robotics and its integration into our society, covering relevant advances in engineering, computing, psychology, the arts, social sciences, and design philosophy.

Special Issue on Subtle Expressivity for Characters and Robots

Call for Papers for a Special Issue on Subtle Expressivity for Characters and Robots of the International Journal of Human-Computer Studies.