Expressing uncertainty in Human-Robot interaction

PLOS ONE published our new article on Expressing uncertainty in Human-Robot interaction. This was another successful collaboration with Elena Moltchanova from Maths & Stats. The goal of the study was to explore ways of communicating the uncertainty inherent in human-robot interaction, more specifically the interaction between a passenger and his or her autonomous vehicle. This is of particular importance since a trip in an autonomous vehicle can, in rare cases, result in the loss of life. So how do you tell a passenger that their chance of surviving this trip is almost certain?

Most people struggle to understand probability, which is an issue for Human-Robot Interaction (HRI) researchers who need to communicate risks and uncertainties to the participants in their studies, the media, and policy makers. Previous work showed that even the use of numerical values to express probabilities does not guarantee an accurate understanding by laypeople. We therefore investigated whether words, such as “likely” and “almost certainly not”, can be used to communicate probability. We embedded these phrases in the context of the use of autonomous vehicles. The results show that the association of phrases with percentages is not random and that there is a preferred ordering of the phrases. The association is, however, not as consistent as hoped for. Hence, it would be advisable to complement the use of words with a numerical expression of uncertainty. This study provides an empirically verified list of probability phrases that HRI researchers can use alongside numerical values.
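
The recommendation to pair verbal phrases with numbers is straightforward to put into practice. Below is a minimal sketch in Python of what such a pairing could look like; the phrase-to-value mapping is a hypothetical placeholder for illustration only, not the empirically verified list from the study.

```python
# Illustrative sketch only: the anchor values below are hypothetical placeholders,
# NOT the empirically verified phrase list reported in the paper.

PHRASE_PROBABILITIES = {
    "almost certainly": 0.95,
    "likely": 0.75,
    "possibly": 0.50,
    "unlikely": 0.25,
    "almost certainly not": 0.05,
}


def phrase_for(probability: float) -> str:
    """Return the phrase whose assumed anchor value is closest to the given probability."""
    return min(PHRASE_PROBABILITIES, key=lambda p: abs(PHRASE_PROBABILITIES[p] - probability))


def express(probability: float) -> str:
    """Combine a verbal phrase with the numerical value, as the article recommends."""
    phrase = phrase_for(probability)
    return f"It is {phrase} ({probability:.0%}) that the vehicle will complete the trip safely."


print(express(0.97))  # -> "It is almost certainly (97%) that the vehicle will complete the trip safely."
```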