University of Canterbury, HIT Lab NZ
The Press reports today on our study Semi-Automatic Color Analysis For Brand Logos. The article focuses on the national flag section of our paper. According to The Press, I am the acting director of the HIT Lab NZ! Oh well, at least they spelled my name correctly.
We are currently systematically using tools to kill each other, and even autonomous machines are a tried and tested method of killing humans, both soldiers and civilians. Land mines are perhaps one of the best examples of such autonomous killing machines, although they are of course...
Sounds easy, I wish it were!
Mitchell Adair, our genius filmmaking student, created an excellent video about the results of our paper “Semi-Automatic Color Analysis For Brand Logos”. Enjoy.
This Spirograph Automaton is a LEGO drawing machine based on the popular toy and the ideas of PG52. This version extends previous designs by using LEGO Mindstorms EV3. The Spirograph Automaton knows when the pattern is complete and lifts the pen at the right time, so you no longer have to watch the machine in order to stop it when the drawing is done. The gears are optimized for fast results so that children do not have to wait too long. I also built a coin detector that triggers the Spirograph Automaton.
The 2013 LEGO Minifigure Catalog is now available. It contains photographs of more than 550 Minifigures that were produced in 2013. It is very complete and I was even able to photograph the very rare Minifigures.
Furthermore, I am happy to announce that the 2nd edition of the 2012 LEGO Minifigure Catalog and the 3rd edition of the Star Wars LEGO Minifigure Catalog are now available. The latter contains all the photographs and data of the 2013 Minifigures in addition to several corrections and extensions. The 2nd edition of the 2012 catalog contains mainly additions and some minor corrections. The Minifigures from Arkham Asylum, for example, have been added to the catalog. I hope you enjoy the books.
It is well known that the progress of science is related to social processes. Thomas Kuhn and the work building on his ideas made this pretty clear. I would go a step further and declare that “Science progresses through the death of its professors”. I am not sure if this is a novel thought; maybe it is just a radicalization of Kuhn’s thoughts. I certainly do not want to encourage anyone to kill professors (being one myself), but I do think that senior researchers inhibit change. Maybe when I am a bit older I will see the benefit in it.
The BBC called me today for an interview about the story of LEGO leading children to the dark side. Here is the MP3 audio recording of the interview:
Yesterday there was also a short feature on TVNZ.
I am not proud of every citation I receive, and I certainly do not agree with Father Slawomir Kostrzewa, who used our study to conclude that the increased diversity of LEGO faces has “…compounded their evil potential”. Yes, we have been able to show that the number of angry faces has increased, but our study did not investigate what effect this may have on children. We simply cannot provide any evidence one way or the other.
Father Slawomir Kostrzewa continues that “These toys can have a negative effect on children. They can destroy their souls and lead them to the dark side.” He really should know that the only way to join the dark side is by submitting to Darth Vader.
But then again, the report was published on April 1st, so maybe this is just a joke?
I read Jack Heinemann’s article with great interest, and Jack raises some important issues. What I take away from it is the fundamental problem of the power asymmetry between authors and editors. What I find most problematic is that these days the authors of studies appear to be considered guilty of negligence and misconduct by default. They have to defend their work in detail against the arguments of anonymous reviewers and sometimes also the editors. While authors are expected to back up their arguments with additional data or literature references, the reviewers are not expected to live up to the same quality standards.
Just recently one of my own papers was rejected based on a comment by one of the reviewers. The reviewer questioned the suitability of a certain statistical method and provided a reference to a source for his/her argument. The source, however, was an entire statistics book, and even after spending considerable time searching we could not find a section that would support the reviewer’s critique. The editors did not intervene. Moreover, the editors kept the paper in the review process for 18 months. In the end, some newly invited reviewers even questioned the novelty of the paper! I refer to such a scenario as “Death By Review”. I am not certain whether it will still be possible to publish the paper in the future.
In another recent incident, one of my papers was rejected from alt.chi despite the fact that it received 10 (!) favorable reviews. The editors simply overruled the reviewers. I think that most of us can tell such horror stories from personal experience that highlight some of the problems of the peer review process. The debate is not new, but it is surprising that so little has been done to remedy its shortcomings. It appears that most journals and conferences still use the traditional review process. Government agencies in New Zealand also use the peer review system to decide which projects to fund. Those government processes are, as Jack highlighted, among the most problematic. The actual reviews are withheld from the authors, the decision making is done behind closed doors, and the feedback that authors receive on their proposals is usually minimal.
The peer review process traditionally has two purposes: judging the quality of the paper and providing feedback that helps authors improve their manuscripts. The quality judgement is the basis for filtering papers in a publication channel. Filtering used to be an important function due to the economics of paper publishing: every issue of a journal could only have a limited number of pages. But with the arrival of the internet, this limitation is gone. Journals can publish as many articles as they want without increasing their distribution costs. The filtering has lost its economic importance.
The filtering process also does not stop bad papers from being written or published. The biggest fool can make up stories, and automatic paper writing software has even been developed that produces papers which pass through the review process without making any sense. Nor can the peer review process prevent manuscripts from being published. Authors can nail their papers to church doors, as Luther probably did with his 95 theses, or publish their work in self-produced formats, such as self-published books or web pages. There are even dedicated journals for negative or contradictory results. Besides the humorous aspects of the Journal of Universal Rejection, these types of publication channels serve an important function. To some degree they remedy the “File Drawer Effect”, a bias towards publishing only statistically significant positive results.
Advances, such as Open Peer Commentary or the “All-In Publication Policy”, are largely being ignored. Recently I started a new journal with some colleagues that also offers an innovative approach to publishing academic work. For a paper to be accepted into Interaction Records, it has to have been rejected by at least three other journals or conferences. The reviews and the responses of the authors are published along with the article itself. We have not yet received a single submission. Some of my colleagues even asked me if this was a hoax. Well, it is not. What this reaction demonstrates is the fear that many academics have. We are so afraid of academic dispute that we do not dare to break out of the perceived norms of the academic community. A good example of the benefits of academic dispute is the article written by Peters and Ceci. They resubmitted articles that had been accepted by 12 journals back to those journals under new names. Most of the articles were rejected based on a lack of scientific quality, and not because of the obvious plagiarism or lack of novelty. The paper was published using the open peer commentary system, and the comments published alongside the article outgrew the article itself by a ratio of ten to one. The comments discuss the study in detail and are almost more interesting than the article itself.
But back to the story of Interaction Records. Upon the announcement of the journal’s launch, some publication venues immediately reacted and declared that their reviews cannot be sent to our journal, since the publication venue claims the copyright of the material. I find this highly problematic. First of all, because the first reaction of those venues was one of protecting property rather than considering the academic benefits of our new publication venue. This points to the fact that the problems of the peer review process are also connected to clear business goals: the publishers are trying to defend their business model. In our case, this is rather absurd, since all the journals will have had first dibs on all the papers anyway. Interaction Records only accepts previously rejected papers. But even for papers that they reject, they want to claim copyright of the peer reviews.
Another issue that Jack highlights is that of subjectivity. For reviewers’ arguments about pure statistics, and to some degree about the research method used, it is often possible to find objective criteria. But more commonly, papers are rejected for reasons of “significance of the results”, “contribution to the community” or “originality of the work”. These quality criteria are to a large extent subjective, and reviewers rarely bother to quantify their judgements. There are even instructions available on “How to Reject any Scientific Manuscript”. We should also not forget that there is almost always a hidden conflict of interest between the reviewers and the authors. Assuming that the reviewers are true experts in the field, the reviewers compete with the authors for a place in the limited number of available publication slots. By being overly negative towards other manuscripts, reviewers increase the chances of getting their own work published. Of course this is unethical and is not supposed to happen, but I believe that this conflict explains some of the overly negative attitudes that reviewers exhibit. This leads me back to my and Jack’s main concern, the power asymmetry. Domenic Cicchetti nicely summarised this problem in his response to Peters and Ceci’s article, On Peer Review: “We have met the enemy and he is us”.
Bartneck, C. (2010). The All-In Publication Policy. Proceedings of the Fourth International Conference on Digital Society (ICDS 2010), St. Maarten, pp. 37-40. DOI: 10.1109/ICDS.2010.14
 Peters, D. P., & Ceci, S. J. (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5(2), 187-195.
Gernert, D. (2008). How to Reject any Scientific Manuscript. Journal of Scientific Exploration, 22(2), 233-243.