The Master of Human Interface Technology (MHIT) program is soliciting applications for our February 2021 intake. The MHIT program is a 12-month degree that includes a 9-month research thesis. We are looking for students from a variety of backgrounds, including but not limited to psychology, design, and computer science. Several scholarships are available. Please get in touch with me at christoph.bartneck@canterbury.ac.nz if you are interested in any of the following projects.
Swimming Lane Distribution Simulation
Christoph Bartneck, Carl Peterson
Swimming is one of the healthiest forms of exercise and makes up around 20% of the mainstream endurance sports market. According to the 2015/16 annual report of Swimming NZ, there are 19,026 active club members in New Zealand alone. This does not include recreational swimmers who swim outside of swimming clubs, nor triathlon swimmers. The USA Swimming Foundation recorded 400,000 members in its 2016 annual report.
This project focuses on using agent-based simulation to plan efficient and safe swim training sessions. Swim training takes place in lanes: the pool is divided into lanes, and within each lane a group of swimmers completes the same program. The swimmers within a lane are usually ordered by speed, with the fastest swimmer leading the group. Swimmers circle within each lane, swimming down the lane on one side and returning on the other. Overtaking is a challenge, however, since swimmers have a limited view ahead; when swimming on their backs they have no forward vision at all. The situation can become dangerous when two swimmers attempt overtaking manoeuvres in opposite directions, which can result in injuries.
It is therefore important to carefully balance the performance of the swimmers within and across lanes. Ideally, all swimmers within one lane would have identical swimming speeds, but since speeds vary widely this can usually not be accomplished.
The goal of this project is to develop a model of lane swimming using a social simulation built in NetLogo. The simulation will vary the number of swimmers, the variation in their speeds, the training program, and the number of lanes, in order to find the distribution of swimmers across lanes that minimises overtaking manoeuvres. This will enable coaches to predict how many lanes are necessary for a given set of swimmers to ensure safe swim training.
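To illustrate the kind of question the simulation will answer, the following Python sketch assigns swimmers to lanes by speed and scores each configuration by its worst within-lane speed gap, a crude proxy for overtaking risk. This is a minimal illustration only; the actual project will use an agent-based NetLogo model, and the swimmer speeds here are made-up values, not club data.

```python
import random

def assign_lanes(speeds, n_lanes):
    """Assign swimmers to lanes by sorting on speed, so that each lane
    holds a contiguous speed band (fastest lane to slowest lane)."""
    ranked = sorted(speeds, reverse=True)
    size = -(-len(ranked) // n_lanes)  # ceiling division
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]

def overtaking_pressure(lane):
    """Heuristic proxy for overtaking risk: the gap between the fastest
    and slowest swimmer in a lane. The NetLogo model would instead count
    actual overtaking manoeuvres as swimmers circle the lane."""
    return max(lane) - min(lane)

# Hypothetical squad of 24 swimmers with training speeds in m/s.
random.seed(1)
speeds = [random.gauss(1.2, 0.15) for _ in range(24)]

for n_lanes in (2, 3, 4, 6):
    lanes = assign_lanes(speeds, n_lanes)
    worst = max(overtaking_pressure(lane) for lane in lanes)
    print(f"{n_lanes} lanes: worst within-lane speed gap = {worst:.2f} m/s")
```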
The model will then be validated against observations of real-world swimmers. These observations will be conducted at local swimming clubs, using their data on the swimmers, such as swim speed records, to estimate training speeds.
Ethical Asymmetry Perception in HRI
Christoph Bartneck, Kumar Yogeeswaran, Andy Vonasch
Humans treat robots as social actors and apply similar ethical standards to them. Not only do robots need to act ethically; humans should also treat robots in a morally acceptable way. Abusive behaviour towards robots is judged to be as immoral as abusive behaviour towards humans (Bartneck & Keijsers, 2020). Sparrow (2020), however, pointed out that there might be an asymmetry when considering positive behaviour towards robots. While people condemn negative behaviour towards robots and by robots, they might not consider positive behaviour towards or by them to be equally praiseworthy.
The goal of this study is to empirically test whether there is a moral asymmetry in the treatment of robots. We will conduct an experiment in which participants rate how praiseworthy or blameworthy they find behaviours exhibited by either a human or a robot.
For this purpose we will use a stimulus set originally proposed by Fuhrman, Bodenhausen, and Lichtenstein (1989) and modified by Effron (2020, "The moral repetition effect: Bad deeds – but not good deeds – seem more ethical when repeatedly encountered", manuscript in preparation). This will allow us to test a wide variety of behaviours.
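As a sketch of how the predicted asymmetry could show up in the data, the Python snippet below compares the human-robot gap in praise ratings with the human-robot gap in blame ratings for a 2 (agent) × 2 (valence) design. All numbers are simulated placeholders chosen purely to illustrate the analysis, not real findings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical 1-7 ratings for a 2 (agent: human/robot) x 2 (valence:
# good/bad deed) between-subjects design; effect sizes are made up.
praise_human = rng.normal(5.5, 1.0, 40)  # praise for a human's good deeds
praise_robot = rng.normal(4.6, 1.0, 40)  # praise for a robot's good deeds
blame_human = rng.normal(5.5, 1.0, 40)   # blame for a human's bad deeds
blame_robot = rng.normal(5.4, 1.0, 40)   # blame for a robot's bad deeds

# Sparrow's asymmetry predicts a human-robot gap for praise but not blame.
print(f"praise gap: {praise_human.mean() - praise_robot.mean():.2f}")
print(f"blame gap:  {blame_human.mean() - blame_robot.mean():.2f}")

# Simple per-valence tests; a full analysis would model the interaction.
print(stats.ttest_ind(praise_human, praise_robot))
print(stats.ttest_ind(blame_human, blame_robot))
```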
Isidor Test
Christoph Bartneck, Diane Proudfoot
With the progress of AI and its associated conversational agents, it will become increasingly difficult for people to know whether they are interacting with a human or a machine. The lines will become increasingly blurred. What we need is a reliable and valid test that can distinguish between artificial conversation partners and humans. Alan Turing proposed the Imitation Game (Turing, 1950), and many tests are based on his work. There are, however, several open issues that need to be addressed before the Turing Test can serve as a valid benchmark for machine intelligence.
A machine passes the Turing Test when the judge's recognition accuracy in the imitation game is low, whereas accuracy remains high when the judge faces a basic machine. We have little information on what exactly "high" and "low" mean, since no machines are currently available with the human-like intelligence required for the task Turing envisioned. We will therefore use a Wizard-of-Oz methodology to simulate such a strong AI. This will allow us to establish the starting point for a valid Turing Test.
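One way to quantify a judge's performance in such sessions is a signal-detection measure like d′, where the "signal" is the machine. The Python sketch below is a minimal illustration with invented counts, not project data: a basic chatbot should yield a high d′, while the Wizard-of-Oz condition should push d′ towards zero.

```python
from statistics import NormalDist

def judge_sensitivity(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity (d') of a Turing Test judge.
    A 'hit' is a machine correctly identified as a machine; a 'false
    alarm' is a human wrongly identified as a machine. A small
    correction keeps the rates away from 0 and 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Invented counts from 20 machine and 20 human conversations each.
print(judge_sensitivity(18, 2, 3, 17))  # basic chatbot: easy to detect
print(judge_sensitivity(11, 9, 9, 11))  # Wizard-of-Oz: near chance
```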
Next, we will consider the high variability of human performance. Normally, AI systems are tested against healthy adult humans. We intend to develop a test that also takes into consideration children and humans with physical and mental disabilities. Only a test that can reliably and accurately distinguish between AI systems and the full spectrum of human nature will be useful, not only to safeguard human users but also as a benchmark to monitor the progress of AI technology. Without a valid and reliable test, measuring the performance of various AI technologies will be meaningless.
Valence Free Racialization In HRI
Christoph Bartneck, Kumar Yogeeswaran
Robots are perceived to have a race, and this changes the behaviour of people towards them (Addison, Yogeeswaran, & Bartneck, 2019; Bartneck et al., 2018). In our previous work we used the shooter bias methodology to test this change in behaviour, which involves the negative act of shooting at a person or robot. Robots racialized as Black were treated differently than robots racialized as White: the experiments showed a clear shooter bias towards Black robots, similar to that exhibited towards Black humans.
While this methodology has been widely used, it associates race with a violent act. In this study we want to explore other implicit association tests (IATs) for the perception of robots' race. This may include the analysis of computer mouse movements when selecting images. The advantage would be that the task participants perform is not inherently negative, which may affect the racial bias they show towards humans and robots. The study will consist of at least one empirical experiment in which a valence-free IAT is used to test racial biases towards humans and robots.
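To make the mouse-movement idea concrete, a common mouse-tracking measure is the maximum deviation of the cursor's path from the straight line between start and response; larger deviations are typically read as attraction towards the non-chosen option. The Python sketch below computes this measure for trajectories recorded as (x, y) samples; the coordinates are invented for illustration only.

```python
import math

def max_deviation(path):
    """Maximum perpendicular deviation of a mouse trajectory from the
    straight line connecting its start and end points."""
    (x0, y0), (x1, y1) = path[0], path[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    return max(abs(dy * (x - x0) - dx * (y - y0)) / length
               for x, y in path)

# Invented trajectories from a start button to a response image.
direct = [(0, 0), (25, 50), (50, 100), (75, 150), (100, 200)]
curved = [(0, 0), (40, 30), (70, 80), (85, 140), (100, 200)]
print(f"direct: {max_deviation(direct):.1f} px")
print(f"curved: {max_deviation(curved):.1f} px")
```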
References
- Addison, A., Yogeeswaran, K., & Bartneck, C. (2019). Robots Can Be More Than Black And White: Examining Racial Bias Towards Robots. Paper presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, Honolulu.
- Bartneck, C., & Keijsers, M. (2020). The Morality Of Abusing A Robot. Paladyn – Journal of Behavioral Robotics, 11(1), 271-283. doi:10.1515/pjbr-2020-0017
- Bartneck, C., Yogeeswaran, K., Ser, Q. M., Woodward, G., Sparrow, R., Wang, S., & Eyssel, F. (2018). Robots and Racism. Paper presented at the ACM/IEEE International Conference on Human Robot Interaction, Chicago.
- Fuhrman, R. W., Bodenhausen, G. V., & Lichtenstein, M. (1989). On the trait implications of social behaviors: Kindness, intelligence, goodness, and normality ratings for 400 behavior statements. Behavior Research Methods, Instruments, & Computers, 21(6), 587-597. doi:10.3758/BF03210581
- Sparrow, R. (2020). Virtue and Vice in Our Relationships with Robots: Is There an Asymmetry and How Might it be Explained? International Journal of Social Robotics. doi:10.1007/s12369-020-00631-2
- Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.