Visual Metaphor Gone Wrong

UC uses wrong design for cyber security campaign.

The University of Canterbury is making an effort to raise awareness of the need for strong passwords. To this end it ran the “Longer Is Stronger” campaign, including a poster that is still being shown on displays across the campus.

The problem is that the visual metaphor of a chain is exactly wrong. A chain is only as strong as its weakest link, and the longer the chain, the more likely it is to contain a particularly weak link. A longer chain is therefore weaker than a short one. I really hope that our IT security experts are smarter than our visual designers.
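To make the chain argument concrete: if each link independently has some small probability p of being weak, the chance that an n-link chain contains at least one weak link is 1 − (1 − p)^n, which grows with n. A quick sketch in Python (the numbers are illustrative, not from the campaign):

```python
def p_weak_chain(p_weak_link: float, n_links: int) -> float:
    """Probability that a chain of n links contains at least one weak link,
    assuming each link is independently weak with probability p_weak_link."""
    return 1.0 - (1.0 - p_weak_link) ** n_links

# With a 1% chance of any single link being weak:
print(p_weak_chain(0.01, 5))    # short chain: roughly a 5% failure chance
print(p_weak_chain(0.01, 50))   # longer chain: closer to 40%
```

So by the poster's own metaphor, "longer" makes the chain weaker, not stronger.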

A password, by contrast, does usually get stronger the longer it is. Encouraging students and staff to use long passwords is a step in the right direction. It would be even better if UC offered password managers, such as 1Password or LastPass, to all its members. That way our passwords could not only be long, but we would also be able to access them conveniently. But putting money where your mouth is is a skill that UC still needs to practice. Purchases of password managers are still being processed on an individual basis, and it can take months to complete a purchase.
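The intuition behind "longer is stronger" for passwords can be quantified: a uniformly random password of length n over an alphabet of size k has n · log2(k) bits of entropy, so every extra character adds a fixed number of bits. A minimal sketch (the alphabet sizes below are illustrative assumptions):

```python
import math

def password_entropy_bits(length: int, alphabet_size: int) -> float:
    """Entropy in bits of a uniformly random password:
    length * log2(alphabet_size)."""
    return length * math.log2(alphabet_size)

# 8 random characters from the 94 printable ASCII symbols:
print(password_entropy_bits(8, 94))   # ~52 bits
# 16 random lowercase letters beat it despite the smaller alphabet:
print(password_entropy_bits(16, 26))  # ~75 bits
```

This is why length matters more than symbol variety: the exponent wins.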

The Morality Of Abusing A Robot

Our paper The Morality Of Abusing A Robot has been published.

We are happy to announce that our paper “The Morality Of Abusing A Robot” has been published under the Creative Commons license at the Paladyn Journal. You can also download the PDF directly.

It is not uncommon for humans to exhibit abusive behaviour towards robots. This study compares how abusive behaviour towards a human is perceived in comparison with identical behaviour towards a robot. We showed participants 16 video clips that depicted different levels of violence and abuse. For each video, we asked participants to rate the moral acceptability of the action, the violence depicted, the intention to harm, and how abusive the action was. The results indicate no significant difference in the perceived morality of the actions shown in the videos across the two victim agents. When the agents started to fight back, however, their reactive aggressive behaviour was rated differently: humans fighting back were seen as less immoral than robots fighting back. A mediation analysis showed that this was predominantly due to participants perceiving the robot's response as more abusive than the human's response.

We created a little video to demonstrate the two main conditions of the experiment, a human or a robot being abused and then fighting back. We would like to acknowledge Jake Watson and Sam Gorski from Corridor Digital who made the stimuli for this experiment available.

LEGO Train Automation

How to control four LEGO trains on one track.

The LEGO company offers remote controls to play with your trains. You can control up to four trains with one remote. This works fine as long as each train runs on a dedicated track and you only need to pay attention to one train at a time. LEGO only allows you to control trains. You cannot control track switches, lights or decouplers remotely.

The company 4DBrix offers advanced train automation, and today I would like to share my latest train automation project with you: I ran four trains on a single track without any collisions.

This video shows all four trains from the top, including a picture in picture video of one of the trains.

This video is a 360 panoramic video. You can spin the camera and look at all the trains and LEGO sets.

I used 4DBrix’s nControl IDE to program the trains, which were connected to the computer via Bluetooth. nControl scripts are written in the Python programming language. One little hiccup was the need to flash the firmware of my BLED112 Bluetooth dongle to allow more than three simultaneous train connections.
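The core idea behind running four trains on one track without collisions is classic block signalling: the track is divided into blocks, and a train may only enter the next block once it holds that block exclusively. The sketch below is not nControl code — the `Block` class and `run_train` function are hypothetical stand-ins for the real motor commands — it only illustrates the locking logic that keeps trains apart:

```python
import threading
import time

class Block:
    """One section of track; at most one train may occupy it."""
    def __init__(self, name: str):
        self.name = name
        self.lock = threading.Lock()

def run_train(name: str, route: list, laps: int, log: list):
    """Drive a train around its circular route, acquiring each block
    before entering it and releasing the previous block after leaving."""
    current = route[0]
    current.lock.acquire()           # the train starts inside its first block
    for _ in range(laps):
        for nxt in route[1:] + route[:1]:
            nxt.lock.acquire()       # wait here (motors stopped) if occupied
            log.append((name, nxt.name))
            time.sleep(0.001)        # placeholder for actual driving time
            current.lock.release()   # we have fully left the old block
            current = nxt
    current.lock.release()

# Eight blocks, four trains starting two blocks apart; with more blocks
# than trains a circular wait can never close, so no deadlock occurs.
blocks = [Block(f"B{i}") for i in range(8)]
log = []
threads = [
    threading.Thread(target=run_train,
                     args=(f"train{i}", blocks[2 * i:] + blocks[:2 * i], 3, log))
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

In nControl the same pattern is expressed with sensors and stop commands rather than locks, but the invariant is identical: a train never enters a block that another train has not yet left.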

Review: Ethik in KI und Robotik

The iX magazine has published a book review (iX 6/2020, p. 148) of our book Ethik in KI und Robotik.

What if people were classified by a social score that determined their value and, in the event of a car accident, decided their risk of harm? Technically, it would be simple to install something like this in autonomous vehicles. Only societal discourse can prevent such negative utopias, write the authors of “Ethik in KI und Robotik”. Questions around artificial intelligence should not concern engineers alone, as the composition of the four-member author team already shows: it consists of professors who also discuss the ethical, philosophical, and economic aspects of AI, robotics, and human-robot interaction. In ten chapters they examine what AI can do and how much decision-making power algorithms should be given. None of it in great detail, but compactly, on 170 pages.

People tend to project their wishes and feelings onto machines and to build relationships with them. Such psychological aspects of AI are, for example, the topic of section six. Even though guidelines and principles already exist, AI is still in its infancy. Although research does not even understand how humans arrive at moral decisions, the authors believe AI could nevertheless make moral decisions in clearly delineated cases. Open and dynamic worlds remain a problem, even if the artificial embryo on the cover suggests otherwise: robots have no innate knowledge of the world but resemble an unformatted hard drive. Sensors limit their perception, and AI cannot generalise beyond learned concepts. The authors, however, usually qualify such assessments with “currently” or “not yet”.

New Podcast Episode: Artificial Artificial Intelligence

New podcast episode on the Terrible Foundation, the Turing Test, and artificial intelligence.

I am proud to announce the new episode of my Human-Robot Interaction Podcast: Artificial Artificial Intelligence. Alan Turing devised the Imitation Game as a test of the intelligence of machines, and the test is also used in human-robot interaction. But what happens when it is not a computer trying to convince you that it is human, but a human trying to deceive you into thinking they are an artificial intelligence? In this episode we discuss the Turing Test, the Zach supercomputer, and what it means to think. I interviewed Diane Proudfoot and David Farrier about the Terrible Foundation, the Turing Test, and artificial intelligence.