New Book Available: An Introduction to Ethics in Robotics and AI

Springer published our open access book.

Our new book, An Introduction to Ethics in Robotics and AI, is now available under Springer's Open Access policy. You can download the PDF directly.

Master Of Human Interface Technology Projects Available

Master Thesis Projects Available

The Master of Human Interface Technology (MHIT) program is soliciting applications for our February 2021 intake. The MHIT program is a 12-month degree that includes a 9-month research thesis. We are looking for students from a variety of backgrounds, including but not limited to psychology, design, and computer science. Several scholarships are available. Please get in touch with me at christoph.bartneck@canterbury.ac.nz if you are interested in any of the following projects.

TVNZ reports on our study on The Morality Of Abusing A Robot

Our study was featured on 1 News.

TVNZ’s reporter Lisa Davis interviewed us about our latest study, “The Morality Of Abusing A Robot”. The paper was published under a Creative Commons license in the Paladyn Journal. Merel did an excellent job in the TV interview.

Expressing uncertainty in Human-Robot interaction

PLOS One published our new article on Expressing uncertainty in Human-Robot interaction. This was another successful collaboration with Elena Moltchanova from Maths & Stats. The goal of the study was to explore how to communicate the uncertainty inherent in human-robot interaction, more specifically in the interaction between a passenger and his or her autonomous vehicle. This is of particular importance since driving in an autonomous vehicle can result in the loss of life. So how do you tell a passenger that his or her chance of surviving the trip is almost certain?

Most people struggle to understand probability, which is an issue for Human-Robot Interaction (HRI) researchers who need to communicate risks and uncertainties to the participants in their studies, the media, and policy makers. Previous work showed that even the use of numerical values to express probabilities does not guarantee an accurate understanding by laypeople. We therefore investigated whether words, such as “likely” and “almost certainly not”, can be used to communicate probability. We embedded these phrases in the context of the usage of autonomous vehicles. The results show that the association of phrases with percentages is not random and that there is a preferred order of phrases. The association is, however, not as consistent as hoped for. Hence, it would be advisable to complement the use of words with a numerical expression of uncertainty. This study provides an empirically verified list of probability phrases that HRI researchers can use to complement numerical values.
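The recommendation above, pairing each verbal phrase with an explicit numeric probability, can be sketched in a few lines of Python. Note that the phrase-to-probability mapping below is an illustrative assumption made for this sketch; it is not the empirically verified list from the paper.

```python
# Sketch: always accompany a verbal probability phrase with a numeric value,
# as the article recommends. The mapping is illustrative only (assumed values),
# NOT the empirically verified list from the study.
PHRASE_TO_PROB = {
    "almost certainly not": 0.02,
    "unlikely": 0.20,
    "likely": 0.70,
    "almost certain": 0.95,
}

def risk_statement(event: str, phrase: str) -> str:
    """Combine a verbal phrase with its numeric probability in one sentence."""
    prob = PHRASE_TO_PROB[phrase]
    return f"It is {phrase} ({prob:.0%}) that {event}."

print(risk_statement("you will arrive safely", "almost certain"))
# -> It is almost certain (95%) that you will arrive safely.
```

A passenger-facing system could draw its phrases from the paper's verified list instead of the assumed values used here.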

The Morality Of Abusing A Robot

Our paper The Morality Of Abusing A Robot has been published.

We are happy to announce that our paper “The Morality Of Abusing A Robot” has been published under a Creative Commons license in the Paladyn Journal. You can also download the PDF directly.

It is not uncommon for humans to exhibit abusive behaviour towards robots. This study compares how abusive behaviour towards a human is perceived differently in comparison with identical behaviour towards a robot. We showed participants 16 video clips of unparalleled quality that depicted different levels of violence and abuse. For each video, we asked participants to rate the moral acceptability of the action, the violence depicted, the intention to harm, and how abusive the action was. The results indicate no significant difference in the perceived morality of the actions shown in the videos across the two victim agents. When the agents started to fight back, however, their reactive aggressive behaviour was rated differently: humans fighting back were seen as less immoral than robots fighting back. A mediation analysis showed that this was predominantly due to participants perceiving the robot’s response as more abusive than the human’s response.

We created a little video to demonstrate the two main conditions of the experiment, a human or a robot being abused and then fighting back. We would like to acknowledge Jake Watson and Sam Gorski from Corridor Digital who made the stimuli for this experiment available.