DOI: 10.1075/is.9.3.01edi

Bartneck, C., Brahnam, S., De Angeli, A., & Pelachaud, C. (2008). Misuse and abuse of interactive technologies. Interaction Studies - Social Behaviour and Communication in Biological and Artificial Systems, 9(3), 397-401.

Special Section on Misuse And Abuse Of Interactive Technologies – Editorial

Christoph Bartneck

Department of Industrial Design
Eindhoven University of Technology
Den Dolech 2, 5600MB Eindhoven, NL
christoph@bartneck.de

Sheryl Brahnam
Missouri State University
Computer Information Systems Department
901 South National Avenue Springfield, MO 65804, USA
sbrahnam@missouristate.edu

Antonella De Angeli
Manchester Business School
The University of Manchester
Booth Street West Manchester, M15 6PB, UK
Antonella.de-angeli@manchester.ac.uk

Catherine Pelachaud
IUT de Montreuil, Université de Paris 8
Mirages, Rocquencourt INRIA Domaine de Voluceau
Rocquencourt - BP 105 78153 Le Chesnay cedex, France
catherine.pelachaud@inria.fr


Introduction

Researchers, engineers, and designers in Human-Computer Interaction (HCI) analyse human behaviour and create technology to improve people’s lives. Naturally, we like to tell our success stories: how we were able to understand and model human behaviour, and how superior the new technology is. We tend to neglect reporting failures, or even cases where the users’ behaviour did not match the designers’ expectations. Nobody likes to admit that a new design did not work the way it was intended to, and especially to acknowledge that users occasionally reacted negatively towards it. Yet users often appropriate technology in ways the designers did not expect. We believe that understanding the drivers of these unintended outcomes is fundamental to the theory and practice of interaction design.

In this special section we focus on the dark side of HCI by exploring how technology can sometimes bring about the expression of negative behaviour, which we metaphorically label with the term abuse. The discussion of the topic started a few years ago at two workshops held at Interact 2005 and CHI 2006 (De Angeli, Brahnam, & Wallis, 2005; De Angeli, Brahnam, Wallis, & Dix, 2006). These workshops attracted an interdisciplinary audience of scholars in the humanities and researchers in HCI who had started noticing an accumulation of potentially negative reactions towards interactive technologies. The papers (available at http://www.agentabuse.org) addressed a variety of themes, including cyberbullying, computer-mediated communication, and psychological reactions to computer failures. A large number of contributions concentrated on social agents, i.e. technology designed to stimulate anthropomorphic attributions, such as (embodied) conversational agents and robots.

Figure 1. Choke-A-Chicken, a toy designed to be strangled. The Choke-A-Chicken toy, produced by Magic Magic TW International, is marketed as a stress reliever with the following advert: “The Choke-A-Chicken flaps and waddles around doing the Chicken Dance, clucking and flapping its wings in sync with the Chicken Dance melody. Grab him by the neck and he will squawk and cluck like mad, flapping his wings and feet wildly as if he is really being choked. Put him down and he will waddle off, singing and dancing as he goes.”

This diverse set of studies suggested that interface design and metaphors can inadvertently rouse more than user dissatisfaction and angry reactions: they can promote a wide range of disinhibited behaviour directed at the machine as well as at other users. The topic is attracting increasing interest, and evidence of verbal abuse in interaction with machines has been reported (Brahnam & De Angeli, 2008). The first products directly intended to be abused by their users have also entered the market, provoking an outcry from the animal rights movement (see Figure 1).

This special section contributes to this emerging field of research by presenting a collection of empirical studies and theoretical discussions. We present studies on humans abusing technology and vice versa, i.e. studies that explore the potential of technology to abuse its users.

Humans abusing technology

Bernard Dionysius Geoghegan demonstrates how the potential for abuse in human–agent interaction goes back to the early age of technology by analysing how crypto-intelligence evolved in technology over the course of history. After looking back at early AI attempts, the author gives a detailed description of chat-bots, in particular of Eliza. He points out how, rather than being a “study of natural language communication between man and machine,” Eliza highlighted how to deceive, mislead, and misinform users. The author continues his reasoning by reporting that the chat-bots that best passed the “Turing test” were those able to trick users and deflect their aggressiveness. In conclusion, he argues that recognizing the roots of abuse would make it possible to move beyond merely dealing with agent abuse.

Christoph Bartneck and Jun Hu explore the issue of robot abuse through two empirical studies. The first study is a replication of one of Milgram’s obedience studies, with the difference that this time the learner was replaced by a robot. The famous original study was designed to investigate the factors influencing obedience and provided the shocking result that ordinary people could potentially kill another person if required to do so by an experimenter in a formal laboratory setting. Bartneck and Hu’s results demonstrate that this tendency becomes even stronger when the potential victim is a robot: although some participants reported feeling sorry for the robot, all of them eventually killed it. The second study explored the effect of the robot’s intelligence on people’s destructive behaviour. Participants interacted with a Crawling Microbug robot (which simply reacts to light stimulation) and were then instructed to ‘kill’ it with a hammer. The results suggest that the level of intelligence of the robot had a significant effect on participants’ behaviour: after interacting with a ‘stupid’ robot, people tended to smash it into more pieces. These studies have a number of well-acknowledged limitations, yet they are important as they contribute initial empirical evidence on people’s behaviour.

Peter Wallis investigates one form of abuse, namely swear words directed by human users at interactive dialogue systems. He first analyses a corpus of dialogues to identify the points at which problems and/or miscommunication arise. He then applies conversation analysis methodology to understand how the problems happened. For his analysis he uses transcripts of frustrated and annoyed users in the DARPA Communicator project. Wallis shows that part of the abuse came after the failure of the Communicator systems to handle mixed initiative at the level of discourse structure.

Technology abusing humans

Thomas B. Cavanagh argues that our obsession with efficiency has produced what he calls a kiosk culture, a community in which self-service technologies (SSTs) have become the norm. Although many people welcome SSTs, some systems offer users few advantages. For instance, most people welcome the convenience of ATMs, since they offer fast and ubiquitous access to money. But what advantages do customers gain by checking out their own goods at the local grocery store? Cavanagh asks whether, in this case, customers are not being exploited as uncompensated labourers. He goes on to show that the increasing burden of labour shifted by SSTs onto customers’ shoulders is but the first layer of abuse. Other abuses include issues of privacy and the displacement of workers by machines. The latter erodes the social fabric as well as increasing unemployment, leaving Cavanagh to wonder, “… just where in the self-service construct does the organic human fit?” Socrates’ complaint about the new technology of writing resurfaces in a new guise: won’t we lose something essentially human in our obsession with efficiency? Isn’t our relationship with machines prosthetic, a form of cyborgism? Cavanagh provides no answers; the paper follows the dialectic strategy of presenting both sides of the issue. He leaves it entirely to readers to ask themselves whether “…we are being abused by the kiosk culture and, if so, is it worth it?”

Chris Creed and Russell Beale investigate opportunities for embodied agents to abuse their users. The paper builds upon a discussion of the meaning of abuse in the context of interaction with machines and an in-depth review of the state of the art in the design of embodied agents. Interestingly, the authors link opportunities for abuse to a unique peculiarity of the embodied agent: the quest to provide it with some form of social and emotional intelligence. This is normally achieved by ‘coding’ social scripts, based on knowledge from social psychology, into the agent’s mind. Yet, as the authors show, this knowledge can be used to create both social and antisocial behaviour. By discussing possibilities for abusive behaviour, Creed and Beale provide suggestions for risk reduction and highlight important directions for future research in the field.

Constance Ruzich examines the emotions of grief and loss that users experience when their computers crash. She bases her elaboration on the work of Reeves and Nass as well as of Ferdig and Mishra, which demonstrated that humans have a tendency to anthropomorphize computers. The metaphorical language used to describe the experience of a computer crash emphasizes its negative impact on human–computer interaction and parallels Kübler-Ross’s stage theory of grief: denial, anger, bargaining, depression, and acceptance.

Acknowledgements

We would like to acknowledge Peter Wallis, Alan Dix, and all the participants in our previous workshops for their contributions in defining this research niche. We also wish to extend our gratitude to the people who helped in the review process for this special section, for without their expertise it would not have been possible: Russell Beale, Nadia Bianchi Berthouze, Chris Creed, Alan Dix, Aaron Doering, Bernard Geoghegan, Dirk Heylen, Kristina Höök, Kathy Keeling, Brigitte Krenn, Linda Little, Tatsuya Nomura, Sabine Payr, Daniela Petrelli, Paolo Petta, Connie Ruzich, Terence H. W. Shih, Liz Sillence, Oliviero Stock, George Veletsianos, and Sean Zdenek.

References

Brahnam, S., & De Angeli, A. (2008). Special issue on the abuse and misuse of social agents. Interacting with Computers, 20(3), 287-291. | DOI: 10.1016/j.intcom.2008.02.001

De Angeli, A., Brahnam, S., Wallis, P., & Dix, A. (2006). Misuse and abuse of interactive technologies. CHI '06 Extended Abstracts on Human Factors in Computing Systems, Montreal, Quebec, Canada, pp. 1647-1650. | DOI: 10.1145/1125451.1125753

De Angeli, A., Brahnam, S., & Wallis, P. (2005). Abuse: The darker side of human computer interaction. Interact 2005 Adjunct Proceedings, Rome, pp. 91-92.


This is a pre-print version | last updated January 2, 2009