DOI: 10.1007/978-3-540-73105-4_3

Bartneck, C., & Lyons, M. J. (2007). HCI and the Face: Towards an Art of the Soluble. In J. Jacko (Ed.), Human-Computer Interaction, Part 1, HCII2007, LNCS 4550 (pp. 20-29). Berlin: Springer.

HCI and the Face: Towards an Art of the Soluble

Christoph Bartneck

Department of Industrial Design
Eindhoven University of Technology
Den Dolech 2, 5600MB Eindhoven, NL
christoph@bartneck.de

Michael J. Lyons

ATR Intelligent Robotics and Communication Labs
2-2-2 Hikaridai, Seika-cho
Soraku-gun, Kyoto 619-0288, Japan
michael.lyons@gmail.com

Abstract - The human face plays a central role in most forms of natural human interaction, so we may expect that computational methods for the analysis of facial information, and graphical and robotic methods for the synthesis of faces and facial expressions, will play a growing role in human-computer and human-robot interaction. However, certain areas of face-based HCI, such as facial expression recognition and robotic facial display, have lagged behind others, such as eye-gaze tracking, facial recognition, and conversational characters. Our goal in this paper is to review the situation in HCI with regard to the human face, and to discuss strategies that could bring the more slowly developing areas up to speed.

Keywords: face, hci, soluble, recognition, synthesis


Introduction

The human face is used in many aspects of verbal and non-verbal communication: speech, the facial expression of emotions, gestures such as nods, winks, and other human communicative acts. Subfields of neuroscience, cognitive science, and psychology are devoted to the study of this information. Computer scientists and engineers have worked on the face in graphics, animation, computer vision, and pattern recognition. A widely stated motivation for this work is to improve human-computer interaction. However, relatively few HCI technologies employ face processing (FP). At first sight, this seems to reflect technical limitations on the development of practical, viable applications of FP technologies.

This paper has two aims: (a) to introduce current research on HCI applications of FP, identifying both successes and outstanding issues, and (b) to propose that an efficient strategy for progress could be to identify and approach soluble problems rather than aiming for unrealistically difficult applications. While some of the outstanding issues in FP may indeed be as difficult as many unsolved problems in artificial intelligence, we will argue that skillful framing of a research problem can allow HCI researchers to pursue interesting, soluble, and productive research.

For concreteness, this article will focus on the analysis of facial expressions from video input, as well as their synthesis with animated characters or robots. Techniques for automatic facial expression processing have been studied intensively in the pattern recognition community and the findings are highly relevant to HCI [1, 2]. Work on animated avatars may be considered to be mature [3], while the younger field of social robotics is expanding rapidly [4,5,6]. FP is a central concern in both of these fields, and HCI researchers can contribute to and benefit from the results.

HCI and the Face

Computer scientists and engineers have worked increasingly on FP, from the widely varying viewpoints of graphics, animation, computer vision, and pattern recognition. However, an examination of the HCI research literature indicates that activity is restricted to a relatively narrow selection of these areas. Eye gaze has occupied the greatest share of HCI research on the human face (e.g. [7]). Eye gaze tracking technology is now sufficiently advanced that several commercial solutions are available (e.g. Tobii Technology [8]). Gaze tracking is a widely used technique in interface usability, machine-mediated human communication, and alternative input devices. This area can be viewed as a successful sub-field of face-based HCI.

Numerous studies have emphasized the neglect of human affect in interface design and argued this could have major impact on the human aspects of computing [9]. Accordingly, there has been much effort in the pattern recognition, AI, and robotics communities towards the analysis, understanding, and synthesis of emotion and expression. In the following sections we briefly introduce the areas related to analysis and synthesis, especially by robots, of facial expressions. In addition, we share insights on these areas gained during a workshop we organized on the topic.

Analysis: Facial Expression Classification

Gaining insight into a user’s affective state is an attractive prospect and may be considered one of the key unsolved problems in HCI. It is known to be difficult to measure the “valence” component of affective state, as compared to “arousal”, which may be gauged using biosensors. However, a smile or frown provides a clue that goes beyond physiological measurements. It is also attractive that expressions can be gauged non-invasively with inexpensive video cameras.

Automatic analysis of video data displaying facial expressions has become an active area of computer vision and pattern recognition research (for reviews see [10, 11]). The scope of the problem statement has, however, been relatively narrow. Typically, one measures the performance of a novel classification algorithm on recognition of the basic expression classes proposed by Ekman and Friesen [12]. Expression data often consist of segmented headshots taken under relatively controlled conditions, and classification accuracy is measured by comparison with emotion labels provided by human experts.
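
To make this standard framing concrete, the following is a minimal sketch, not taken from any of the cited systems, of how such an evaluation is typically set up, here using scikit-learn with placeholder feature vectors and labels; in a real study the features would be extracted from segmented face images and the labels supplied by human experts.

```python
# Minimal sketch of the standard pattern-recognition framing: classify
# pre-extracted facial feature vectors into Ekman's six basic emotion
# categories and score the result against expert-provided labels.
# X and y below are random placeholders standing in for real data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

BASIC_EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 120))                # placeholder feature vectors
y = rng.integers(0, len(BASIC_EMOTIONS), 600)  # placeholder expert labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test),
                            target_names=BASIC_EMOTIONS))
```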

This bird’s-eye caricature of the methodology used by the pattern recognition community is necessarily simplistic, but it underlines two general reflections. First, pattern recognition has successfully framed the essentials of the facial expression problem to allow for effective comparison of algorithms. This narrowing of focus has led to impressive development of techniques for facial expression analysis and to substantial understanding. Second, the narrow framing of the FP problem typical in computer vision and pattern recognition may not be appropriate for HCI problems. This observation is a main theme of this paper, and we suggest that progress on the use of FP in HCI may require re-framing the problem.

Perhaps the most salient aspect of our second general observation on the problem of automatic facial expression recognition is that HCI technology can often get by with partial solutions. A system that can discriminate between a smile and a frown, but not between an angry and a disgusted face, can still be a valuable tool for HCI researchers, even if it is not regarded as a particularly successful algorithm from the pattern recognition standpoint. Putting this more generally, components of algorithms developed in the pattern recognition community may already have sufficient power to be useful in HCI, even if they do not yet constitute general facial expression analysis systems. Elsewhere in this paper we give several examples to back up this statement.
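
As an illustration of such a partial solution, the sketch below assembles a crude smiling/not-smiling detector from OpenCV’s stock Haar cascades. It is our own illustrative example, not a system from the literature: it cannot distinguish anger from disgust, or even detect a frown, yet its binary output is already a usable signal for an HCI experiment.

```python
# A "partial solution" sketch: webcam smile detection with OpenCV's stock
# Haar cascades. Output is only smiling / not smiling, which is far from a
# general expression analyzer but can still drive an interface or a study.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y + h // 2: y + h, x: x + w]   # lower half of the face
        smiles = smile_cascade.detectMultiScale(roi, 1.7, 20)
        label = "smiling" if len(smiles) > 0 else "not smiling"
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("partial expression analysis", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```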

Synthesis: Robotic Facial Expressions

There is a long tradition within the HCI community of investigating and building screen-based characters that communicate with users [3]. More recently, robots have also been introduced to communicate with users, and this area has progressed sufficiently that review articles are available [4, 6]. The main advantage that robots have over screen-based agents is that they are able to directly manipulate the world: they not only converse with users, but also perform embodied physical actions.

Nevertheless, screen-based characters and robots share overlapping motivations for, and problems with, communicating with users. Bartneck et al. [13] have shown, for example, that there is no significant difference in users’ perception of emotions as expressed by a robot or by a screen-based character. The main motivation for using facial expressions to communicate with a user is that it is, in fact, impossible not to communicate: if the face of a character or robot remains inert, it communicates indifference. To put it another way, since humans are trained to recognize and interpret facial expressions, it would be wasteful to ignore this rich communication channel.

Compared to the state of the art in screen-based characters, such as Embodied Conversational Agents [3], however, the field of robots’ facial expressions is underdeveloped. Much attention has been paid to robot motor skills, such as locomotion and gesturing, but relatively little work has been done on facial expression. Two main approaches can be observed in the fields of robotics and screen-based characters. In one camp are researchers and engineers who work on the generation of highly realistic faces. A recent example of a highly realistic robot is the Geminoid H1, which has 13 degrees of freedom (DOF) in its face alone. The annual Miss Digital award [14] may be thought of as a benchmark for the development of this kind of realistic computer-generated face. While significant progress has been made in these areas, we have not yet reached human-like detail and realism, and this is acutely true for the animation of facial expressions. Hence, many highly realistic robots and characters currently struggle with the phenomenon of the “Uncanny Valley” [15], with users experiencing these artificial beings as spooky or unnerving. Even the Repliee Q1Expo is only able to convince humans of the naturalness of its expressions for at best a few seconds [16]. In summary, natural robotic expressions remain in their infancy [6].

Major obstacles to the development of realistic robots lie with the actuators and the skin. At least 25 muscles are involved in producing expressions in the human face. These muscles are flexible, small, and can be activated very quickly. Electric motors emit noise, while pneumatic actuators are difficult to control. These problems often result in robotic heads that either have only a small number of actuators or are somewhat larger than a normal human head. The Geminoid H1 robot, for example, is approximately five percent larger than its human counterpart. It also remains difficult to attach skin, which is often made of latex, to the head. This results in unnatural, non-human-looking wrinkles and folds in the face.

At the other end of the spectrum, many researchers are developing more iconic faces. Bartneck [17] showed that a robot with only two DOF in the face can produce a considerable repertoire of emotional expressions that make interaction with the robot more enjoyable. Many popular robots, such as Asimo [18], Aibo [19] and PaPeRo [20], have only a schematic face with few or no actuators; some feature only LEDs for creating facial expressions. The recently developed iCat robot is a good example of an iconic robot with a simple physically animated face [21]. Its eyebrows and lips move, allowing the synthesis of a wide range of expressions.

More general and fundamental unsolved theoretical aspects of facial information processing are also relevant to the synthesis of facial expressions. The representation of the space of emotional expressions is a prime example [22]. The space of expressions is often modeled either with continuous dimensions, such as valence and arousal [23], or with a categorical approach [12]. This controversial issue has broad implications for all HCI applications involving facial expression [22]. The same can be said for other fundamental aspects of facial information processing, such as the believability of synthetic facial expressions produced by characters and robots [5, 24].
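
To illustrate the difference between the two representations, the sketch below maps both a continuous valence/arousal point and a small set of discrete categories onto a hypothetical iconic face with two degrees of freedom (an eyebrow actuator and a mouth-corner actuator); the numbers are arbitrary and are not taken from any of the cited systems.

```python
# Dimensional vs. categorical expression representations, sketched for a
# hypothetical two-DOF iconic face. Values are illustrative only.
from dataclasses import dataclass

@dataclass
class IconicFacePose:
    eyebrow: float        # -1.0 (fully lowered) .. +1.0 (fully raised)
    mouth_corners: float  # -1.0 (fully down)    .. +1.0 (fully up)

def pose_from_dimensions(valence: float, arousal: float) -> IconicFacePose:
    """Continuous (dimensional) representation: any point in the
    valence/arousal plane maps to a face pose."""
    clamp = lambda v: max(-1.0, min(1.0, v))
    return IconicFacePose(eyebrow=clamp(arousal), mouth_corners=clamp(valence))

# Categorical representation: a handful of discrete, named expressions.
CATEGORY_POSES = {
    "happiness": IconicFacePose(eyebrow=0.3, mouth_corners=1.0),
    "sadness":   IconicFacePose(eyebrow=-0.6, mouth_corners=-0.8),
    "surprise":  IconicFacePose(eyebrow=1.0, mouth_corners=0.2),
    "anger":     IconicFacePose(eyebrow=-1.0, mouth_corners=-0.4),
}

print(pose_from_dimensions(valence=0.7, arousal=-0.2))  # mildly pleased, calm
print(CATEGORY_POSES["surprise"])
```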

Workshop on “HCI and the Face”

As part of our effort to examine the state of the field of FP in HCI, we organized a day-long workshop at the ACM CHI’2006 conference (see http://www.bartneck.de/workshop/chi2006/ for details). The workshop included research reports, focus groups, and general discussions. This has informed our perspective on the role of FP in HCI, as presented in the current paper.

One focus group summarized the state of the art in facial expression analysis and synthesis, while another brainstormed HCI applications. The idea was to examine whether current technology is sufficiently advanced to support HCI applications. The proposed applications were organized with regard to the factors “Application domain” and “Intention” (Table 1). Group discussion seemed to naturally focus on applications that involve some type of agent, avatar or robot. It is nearly impossible to provide an exhaustive list of applications for each cell of the matrix; the ones listed in the table should therefore be considered only as representative examples.

| Application domain | Intention: Persuade | Intention: Being a companion | Intention: Educate |
|--------------------|---------------------|------------------------------|--------------------|
| Entertainment | Advertisement: REA [3], Greta [25] | Aibo [19], Tamagotchi [26] | My Real Baby [27] |
| Communication | Persuasive Technology [28], Cat [29] | Avatar [30] | Language tutor [31] |
| Health | Health advisor, Fitness tutor [32] | Aibo for elderly [33], Attention Capture for Dementia Patients [34] | Autistic children [35] |

Table 1. Examples of face processing applications in HCI and HRI

These examples illustrate well a fundamental problem of this research field. The workshop participants can be considered experts in the field, and all the proposed example applications were related to artificial characters, such as robots, conversational agents and avatars. Yet not one of these applications has become a lasting commercial success. Even Aibo, the previously somewhat successful entertainment robot, was discontinued by Sony in 2006.

A problem that all these artificial entities must deal with is that, while their expression processing has reached near-sufficient maturity, their intelligence has not. This is especially problematic, since the mere presence of an animated face raises its users’ expectations. An entity that is able to express emotions is also expected to recognize and understand them. The same holds true for speech: if an artificial entity talks, then we also expect it to listen and understand. As we all know, no artificial entity has yet passed the Turing test or claimed the Loebner Prize. All of the examples given in Table 1 presuppose the existence of a strong AI as described by John Searle [36].

The reasons why strong AI has not yet been achieved are manifold and the topic of lengthy discussion. Briefly, there are, from the outset, conceptual problems. John Searle [36] pointed out that a digital computer alone can never truly understand reality because it only manipulates syntactic symbols that carry no semantics. His famous ‘Chinese room’ example points out some conceptual constraints on the development of strong AI. According to this line of argument, IBM’s chess-playing computer “Deep Blue” does not actually understand chess. It may have beaten Kasparov, but it did so only by manipulating meaningless symbols. The AI researcher Drew McDermott [37] replied to this criticism: "Saying Deep Blue doesn't really think about chess is like saying an airplane doesn't really fly because it doesn't flap its wings." This debate reflects different philosophical viewpoints on what it means to think and understand. For centuries philosophers have thought about such questions, and perhaps the most important conclusion is that there is no conclusion at this point in time. Similarly, the possibility of developing a strong AI remains an open question. All the same, it must be admitted that some kind of progress has been made. In the past, a chess-playing machine would have been regarded as intelligent; now it is regarded as the feat of a calculating machine. Our criteria for what constitutes an intelligent machine have shifted.

In any case, suffice it to say that no sufficiently intelligent machine has yet emerged that would provide a foundation for our example applications given in Table 1. The point we hope to have made with the digression into AI is that the application dreams of researchers sometimes conceal rather unrealistic assumptions about what is possible to achieve with current technology.

Towards an “Art of the Soluble”

The outcome of the workshop we organized was unexpected in a number of ways. Most striking was the vast mismatch between the concrete and fairly realistic description of the available FP technology and its limitations arrived at by one of the focus groups, and the blue-sky applications discussed by the second group.

Another sharp contrast was evident at the workshop: the presentations given by participants were pragmatic and showed effective solutions to real problems in HCI that do not rely on AI.

This led us to the reflection that scientific progress often relies on what the Nobel prize-winning biologist Peter Medawar called “The Art of the Soluble” [38]. That is, skill in doing science requires the ability to select a research problem which is soluble, but which has not yet been solved. Very difficult problems, such as strong AI, may not yield to solution even over the course of decades, so it is usually preferable to work on problems of intermediate difficulty, which can yield results over a more reasonable time span while still being of sufficient interest to constitute progress. Some researchers, of course, are lucky or insightful enough to re-frame a difficult problem in such a way as to reduce its difficulty, or to recognize a new problem which is not difficult, but nevertheless of wide interest.

In the next two subsections we illustrate the general concept with examples from robotic facial expression synthesis as well as facial expression analysis.

Facial Expression Synthesis in Social Robotics

As we argued in section 2, the problems inherited by HRI researchers from the field of AI can be severe. Even if we neglect the philosophical aspects of the AI problem and are satisfied with a computer that passes the Turing test, independently of how it achieves this, we will still encounter many practical problems. This leads us to the so-called “weak AI” position, in which claims of achieving human cognitive abilities are abandoned. Instead, this approach focuses on specific problem-solving or reasoning tasks.

There has certainly been progress in weak AI, but it has not yet matured sufficiently to support such artificial entities. Indeed, at present, developers of artificial entities must resort to scripting behaviors. Clearly, the scripting approach has its limits, and even the most advanced common-sense database, Cyc [39], is largely incomplete. FP research should therefore not bet on the arrival of strong AI solutions, but focus on what weak AI solutions can offer today. Of course, there is still hope that strong AI applications will eventually become possible, but this may take a long time.

 


Fig. 1 Robots with animated faces

When we look at what types of HRI solutions are currently being built, we see that a large number of them have barely any facial features at all. Qrio, Asimo and Hoap-2, for example, are only able to turn their heads with 2 degrees of freedom (DOF). Other robots, such as Aibo, are able to move their heads, but have only LEDs to express their inner states in an abstract way. While these robots are intended to interact with humans, they largely avoid facial expression synthesis. When we look at robots that do have truly animated faces, we can distinguish between two dimensions: DOF and iconic/realistic appearance (see Figure 1).

Robots in the High DOF/Realistic quadrant not only have to contend with the uncanny valley [40]; they may also raise user expectations of a strong AI which they are not able to fulfill. By contrast, the Low DOF/Iconic quadrant includes robots that are extremely simple and perform well in their limited application domains. These robots lie well within the domain of the soluble in FP. The most interesting quadrant is the High DOF/Iconic quadrant: these robots have rich facial expressions but, through their iconic appearance, avoid evoking associations with a strong AI. We propose that research on such robots has the greatest potential for significant advances in the use of FP in HRI.
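
The quadrant assignment itself is straightforward; the sketch below spells it out for a few of the robots mentioned in this paper. Apart from the Geminoid H1’s 13 facial DOF, which is stated above, the DOF values and the threshold are illustrative assumptions only.

```python
# Sketch of the two-dimensional layout of Figure 1: facial degrees of
# freedom versus iconic/realistic appearance. All numbers except the
# Geminoid H1's 13 facial DOF (stated in the text) are rough assumptions.
def quadrant(facial_dof: int, realistic: bool, dof_threshold: int = 6) -> str:
    dof_label = "High DOF" if facial_dof >= dof_threshold else "Low DOF"
    style_label = "Realistic" if realistic else "Iconic"
    return f"{dof_label}/{style_label}"

examples = {
    "Geminoid H1": (13, True),   # 13 facial DOF, stated in the text
    "iCat":        (11, False),  # assumed value: actuated eyebrows and lips
    "Aibo":        (0,  False),  # LEDs only, no facial actuators
}

for name, (dof, realistic) in examples.items():
    print(f"{name}: {quadrant(dof, realistic)}")
```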

Facial Analysis for Direct Gesture-Based Interaction

The second example we use to illustrate the “Art of the Soluble” strategy comes from the analysis of facial expressions. While there is a large body of work on automatic facial expression recognition and lip reading within the computer vision and pattern recognition research communities, relatively few studies have examined the possible use of the face in direct, intentional interaction with computers. However, the complex musculature of the face and extensive cortical circuitry devoted to facial control suggest that motor actions of the face could play a complementary or supplementary role to that played by the hands in HCI [1].

One of us has explored this idea through a series of projects using vision-based methods to capture movement of the head and facial features and use these for intentional, direct interaction with computers. For example, we have used head and mouth motions for hands-free text entry and for single-stroke text character entry on small keyboards such as those found on mobile phones. Related projects used actions of the mouth and face for digital sketching and musical expression.

One of the systems we developed tracked the head and position of the nose and mapped the projected position of the nose tip in the image plane to the coordinates of the cursor. Another algorithm segmented the area of the mouth and measured the visible area of the cavity of the user’s mouth in the image plane. The state of opening/closing of the mouth could be determined robustly and used in place of mouse-button clicks. This simple interface allowed for text entry using the cursor to select streaming text. Text entry was started and paused by opening and closing the mouth, while selection of letters was accomplished by small movements of the head. The system was tested extensively and found to permit comfortable text entry at a reasonable speed. Details are reported in [41].
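
The sketch below illustrates the mapping logic just described, using a hypothetical per-frame observation structure; the vision-based tracking itself (nose-tip localization and mouth-cavity segmentation) is assumed to be provided by another component and is not reproduced here.

```python
# Sketch of the mapping logic only, not the original system: a hypothetical
# tracker is assumed to supply the nose-tip image coordinates and the
# visible mouth-cavity area per frame; these drive a cursor and a binary
# "click" state.
from dataclasses import dataclass

@dataclass
class FaceObservation:
    nose_x: float       # nose-tip position in the image plane (pixels)
    nose_y: float
    mouth_area: float   # visible dark mouth-cavity area (pixels^2)

def to_cursor(obs: FaceObservation, img_w: int, img_h: int,
              screen_w: int = 1920, screen_h: int = 1080) -> tuple[int, int]:
    """Map the projected nose-tip position to screen coordinates,
    mirrored horizontally so head motion and cursor motion agree."""
    cx = int((1.0 - obs.nose_x / img_w) * screen_w)
    cy = int((obs.nose_y / img_h) * screen_h)
    return cx, cy

def mouth_open(obs: FaceObservation, threshold: float = 400.0) -> bool:
    """Treat a sufficiently large visible mouth cavity as 'open',
    standing in for a mouse-button press."""
    return obs.mouth_area > threshold

# Example frame: nose slightly right of centre, mouth open.
obs = FaceObservation(nose_x=200.0, nose_y=120.0, mouth_area=650.0)
print(to_cursor(obs, img_w=320, img_h=240), mouth_open(obs))
```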

Another project used the shape of the mouth to disambiguate the multiple letters mapped to the keys of a mobile phone keypad [42]. Such an approach works very well for Japanese, which has a nearly strict CV (consonant-vowel) phoneme structure and only five vowels. A further advantage of this system is that it exploits users’ existing expertise in shaping the mouth to select vowels. With some practice, users found they could enter text faster than with the standard multi-tap approach.
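
The disambiguation step can be illustrated with a small sketch: a key press selects a consonant row of the Japanese syllabary, and the vowel recognized from the mouth shape selects the character within that row. The mapping below is a simplified illustration, not the published system, and the vowel classifier is assumed to exist elsewhere.

```python
# Sketch of vowel-based keypad disambiguation: one key press plus one
# detected vowel yields one kana, replacing repeated multi-tap presses.
KANA_ROWS = {
    "1": "あいうえお",   # vowel row
    "2": "かきくけこ",   # k-row
    "3": "さしすせそ",   # s-row
    "4": "たちつてと",   # t-row
}
VOWEL_INDEX = {"a": 0, "i": 1, "u": 2, "e": 3, "o": 4}

def mouthtype_select(key: str, vowel: str) -> str:
    """Select a character from the row chosen by the key press, using the
    vowel read from the user's mouth shape."""
    return KANA_ROWS[key][VOWEL_INDEX[vowel]]

print(mouthtype_select("2", "o"))  # -> こ (ko), instead of pressing '2' five times
```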

The unusual idea of using facial actions for direct input may find least resistance in the realm of artistic expression. Indeed, our first explorations of the concept were with musical controllers using mouth shape to control timbre and other auditory features [43]. Since many musical instruments rely on actions of the face and mouth, this work has precedents, and it was greeted with enthusiasm by some musicians. Similarly, we used a mouth-action-sensitive device to control line properties while drawing and sketching with a digital tablet [44]. Here again our exploration elicited a positive response from artists who tried the system.
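
The following sketch shows the kind of mapping such a controller might use: normalized mouth-opening and mouth-width signals, assumed to come from a vision-based tracker, are converted into MIDI control-change values that a synthesizer could route to timbre parameters. The controller numbers are arbitrary choices for this illustration, not those of the system in [43], and actually sending the messages would require a MIDI library and an output port.

```python
# Sketch of mapping mouth-shape signals to MIDI control-change values.
# The mouth measurements are assumed to be normalized to 0..1 elsewhere.
def to_midi_cc(value: float) -> int:
    """Clamp a normalized 0..1 signal to the 0..127 MIDI CC range."""
    return max(0, min(127, int(round(value * 127))))

def mouth_to_controls(opening: float, width: float) -> dict[str, int]:
    # Controller numbers are arbitrary choices for this illustration.
    return {
        "cc74_brightness": to_midi_cc(opening),  # wider opening -> brighter timbre
        "cc71_resonance":  to_midi_cc(width),
    }

print(mouth_to_controls(opening=0.6, width=0.3))
```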

The direct action facial gesture interface serves to illustrate the concept that feasible FP technology is ready to be used as the basis for working HCI applications. The techniques used in all the examples discussed are not awaiting the solution of some grand problem in pattern recognition: they work robustly in real-time under a variety of lighting conditions.

Conclusion

In this paper we have argued in favour of an “Art of the Soluble” approach in HCI. Progress can often be made by sidestepping long-standing difficult issues in artificial intelligence and pattern recognition. This is partly intrinsic to HCI: the presence of a human user in the interaction loop provides leverage for existing computational algorithms, since users can adapt to, and compensate for, their limitations. Our experience and the discussions that led to this article have also convinced us that HCI researchers tend towards an inherently pragmatic approach, even if they are not always conscious of the fact. In summary, we would like to suggest that skill in identifying soluble problems is already a relative strength of HCI, and that this strength would be worth developing further.

References

  1. Lyons, M. J. (2004). Facial Gesture Interfaces for Expression and Communication. Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, The Hague, pp 598- 603. | DOI: 10.1109/ICSMC.2004.1398365
  2. Lyons, M. J., Budynek, J., & Akamatsu, S. (1999). Automatic Classification of Single Facial Images. IEEE Transactions Pattern Analysis and Machine Intelligence, 21(12), 1357-1362. | DOI: 10.1109/34.817413
  3. Cassell, J., Sullivan, J., Prevost, S., & Churchill, E. (2000). Embodied Conversational Agents. Cambridge: MIT Press.
  4. Bartneck, C., & Okada, M. (2001). Robotic User Interfaces. Proceedings of the Human and Computer Conference (HC2001), Aizu, pp 130-140.
  5. Bartneck, C., & Suzuki, N. (Eds.). (2005). Subtle Expressivity for Characters and Robots (2 ed. Vol. 62). Amsterdam: Elsevier. | DOI: 10.1016/j.ijhcs.2004.11.004
  6. Fong, T., Nourbakhsh, I., & Dautenhahn, K. (2003). A survey of socially interactive robots. Robotics and Autonomous Systems, 42, 143-166. | DOI: 10.1016/S0921-8890(02)00372-X
  7. Zhai, S., Morimoto, C., & Ihde, S. (1999). Manual and gaze input cascaded (MAGIC) pointing. Proceedings of the SIGCHI conference on Human factors in computing systems: the CHI is the limit, Pittsburgh, pp 246-253. | DOI: 10.1145/302979.303053
  8. Tobii Technology. (2007). Tobii Technology. Retrieved February 2007, from http://www.tobii.com/
  9. Picard, R. W. (1997). Affective computing. Cambridge: MIT Press.
  10. Pantic, M., & Rothkrantz, L. J. M. (2000). Automatic analysis of facial expressions: the state of the art. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12), 1424-1445. | DOI: 10.1109/34.895976
  11. Fasel, B., & Luettin, J. (2003). Automatic facial expression analysis: a survey. Pattern Recognition, 36(1), 259-275. | DOI: 10.1016/S0031-3203(02)00052-3
  12. Ekman, P., & Friesen, W. V. (1975). Unmasking the Face. Englewood Cliffs: Prentice Hall.
  13. Bartneck, C., Reichenbach, J., & Breemen, A. (2004). In your face, robot! The influence of a character's embodiment on how users perceive its emotional expressions. Proceedings of the Design and Emotion 2004, Ankara.
  14. Cerami, F. (2006). Miss Digital World. Retrieved August 4th, from http://www.missdigitalworld.com/
  15. Mori, M. (1970). The Uncanny Valley. Energy, 7, 33-35.
  16. Ishiguro, H. (2005). Android Science - Towards a new cross-interdisciplinary framework. Proceedings of the CogSci Workshop Towards social Mechanisms of android science, Stresa, pp 1-6.
  17. Bartneck, C. (2003). Interacting with an Embodied Emotional Character. Proceedings of the Design for Pleasurable Products Conference (DPPI2004), Pittsburgh, pp 55-60. | DOI: 10.1145/782896.782911
  18. Honda. (2002). Asimo. from http://www.honda.co.jp/ASIMO/
  19. Sony. (1999). Aibo. Retrieved January, 1999, from http://www.aibo.com
  20. NEC. (2001). PaPeRo. from http://www.incx.nec.co.jp/robot/
  21. Breemen, A., Yan, X., & Meerbeek, B. (2005). iCat: an animated user-interface robot with personality. Proceedings of the Fourth International Conference on Autonomous Agents & Multi Agent Systems, Utrecht. | DOI: 10.1145/1082473.1082823
  22. Schiano, D. J. (2004). Categorical Imperative NOT: Facial Affect is Perceived Continuously. Proceedings of the CHI2004, Vienna, pp 49-56. | DOI: 10.1145/985692.985699
  23. Russell, J. A. (1979). Affective space is bipolar. Journal of personality and social psychology, 37(3), 345-356. | DOI: 10.1037/0022-3514.37.3.345
  24. Bartneck, C. (2001). How convincing is Mr. Data's smile: Affective expressions of machines. User Modeling and User-Adapted Interaction, 11, 279-295. | DOI: 10.1023/A:1011811315582
  25. Pelachaud, C. (2005). Multimodal expressive embodied conversational agents. Proceedings of the 13th annual ACM international conference on Multimedia, Hilton, Singapore, pp 683 - 689. | DOI: 10.1145/1101149.1101301
  26. Bandai. (2000). Tamagotchi. Retrieved January 2000, from http://www.bandai.com/
  27. Lund, H. H., & Nielsen, J. (2002). An Edutainment Robotics Survey. Proceedings of the Third International Symposium on Human and Artificial Intelligence Systems: The Dynamic Systems Approach for Embodiment and Sociality, Fukui.
  28. Fogg, B. J. (2003). Persuasive technology: using computers to change what we think and do. Amsterdam; Boston: Morgan Kaufmann Publishers.
  29. Zanbaka, C., Goolkasian, P., & Hodges, L. (2006). Can a virtual cat persuade you?: the role of gender and realism in speaker persuasiveness. Proceedings of the SIGCHI conference on Human Factors in computing systems, Montreal, Quebec, Canada. | DOI: 10.1145/1124772.1124945
  30. Biocca, F. (1997). The cyborg's dilemma: embodiment in virtual environments. Proceedings of the Second International Conference on Cognitive Technology - "Humanizing the Information Age", Aizu, pp 12-26. | DOI: 10.1109/CT.1997.617676
  31. Schwienhorst, K. (2002). The State of VR: A Meta-Analysis of Virtual Reality Tools in Second Language Acquisition. Computer Assisted Language Learning, 15(3), 221 - 239. | DOI: 10.1076/call.15.3.221.8186
  32. Mahmood, A. K., & Ferneley, E. (2004). Can Avatars Replace The Trainer? A case study evaluation. Proceedings of the The International Conference on Enterprise Information Systems (ICEIS), Porto, pp 208-213.
  33. Tamura, T., Yonemitsu, S., Itoh, A., Oikawa, D., Kawakami, A., Higashi, Y., et al. (2004). Is an entertainment robot useful in the care of elderly people with severe dementia? The Journals of Gerontology Series A: Biological Sciences and Medical Sciences, 59, M83-M85.
  34. Wiratanaya, A., Lyons, M. J., & Abe, S. (2006). An interactive character animation system for dementia care. Proceedings of the ACM SIGGRAPH 2006 Research posters, Boston, Massachusetts, pp Article No. 82. | DOI: 10.1145/1179622.1179717
  35. Robins, B., Dautenhahn, K., Boekhorst, R., & Billard, A. (2005). Robotic assistants in therapy and education of children with autism: can a small humanoid robot help encourage social interaction skills? Universal Access in the Information Society, 4(2), 105-120. | DOI: 10.1007/s10209-005-0116-3
  36. Searle, J. R. (1980). Minds, brains and programs. Behavioral and Brain Sciences, 3(3), 417-457.
  37. McDermott, D. (1997, May 14th). Yes, Computers Can Think. New York Times.
  38. Medawar, P. B. (1967). The art of the soluble. London: Methuen.
  39. Cycorp. (2007). Cyc. Retrieved February 2007, from http://www.cyc.com/
  40. MacDorman, K. F. (2006). Subjective ratings of robot video clips for human likeness, familiarity, and eeriness: An exploration of the uncanny valley. Proceedings of the ICCS/CogSci-2006 Long Symposium: Toward Social Mechanisms of Android Science, Vancouver.
  41. De Silva, G. C., Lyons, M. J., Kawato, S., & Tetsutani, N. (2003). Human Factors Evaluation of a Vision-Based Facial Gesture Interface. Proceedings of the Conference on Computer Vision and Pattern Recognition Workshop, pp 52. | DOI: 10.1109/CVPRW.2003.10055
  42. Lyons, M. J., Chan, C., & Tetsutani, N. (2004). MouthType: Text Entry by Hand and Mouth. Proceedings of the Conference on Human Factors in Computing Systems CHI2004, Austria, pp 1383-1386. | DOI: 10.1145/985921.986070
  43. Lyons, M. J., & Tetsutani, N. (2001). Facing the Music: A Facial Action Controlled Musical Interface. Proceedings of the Conference on Human Factors in Computing Systems CHI2001, Seattle, pp 309-310. | DOI: 10.1145/634067.634250
  44. Chan, C., Lyons, M. J., & Tetsutani, N. (2003). Mouthbrush: Drawing and Painting by Hand and Mouth. Proceedings of the Fifth International Conference on Multimodal Interfaces, Vancouver, pp 277-280. | DOI: 10.1145/958432.958482

This is a pre-print version | last updated February 5, 2008