Don Rowe wrote an excellent article about Jürgen's and my work and asks the question: are you also just a Truman? A valid and important point. Have a read of the full article.
My talk on Persuasive Robots at the Emotional Machines Conference.
I was invited to give a talk at the Interdisciplinary Conference on Emotional Machines in Stuttgart on September 21st, 2017. My talk focused mainly on the work I did in collaboration with Jürgen Brandstetter (doi: 10.1145/2909824.3020257, doi: 10.1177/0261927X15584682, doi: 10.1109/IROS.2014.6942730). My main argument was that the number of robots in our society will increase dramatically and that robots will participate in the formation of our language. Through their influence on our language, they will be able to nudge the valence we associate with certain terms. Moreover, it would only take 10% of us owning a robot for robots to dominate the development of our language.
This was also the first time I used a 360-degree camera to record a talk. This technology becomes particularly useful when following the discussion between the speaker and the audience. YouTube's 360 video feature does not work in all web browsers (e.g. it does not work in Safari). Chrome and Firefox should be fine.
Jürgen presented our paper on “Persistent Lexical Entrainment in HRI”. The full paper is available at the ACM Digital Library.
Here is the abstract of the paper:
In this study, we set out to ask three questions. First, does lexical entrainment with a robot interlocutor persist after an interaction? Second, how does the influence of social robots on humans compare with the influence of humans on each other? Finally, what role is played by personality traits in lexical entrainment to robots, and how does this compare with the role of personality in entrainment to other humans? Our experiment shows that first, robots can indeed prompt lexical entrainment that persists after an interaction is over. This finding is interesting since it demonstrates that speakers can be linguistically influenced by a robot, in a way that is not merely motivated by a desire to be understood. Second, we find similarities between lexical entrainment to the robot peer and lexical entrainment to a human peer, although the effects are stronger when the peer is human. Third, we find that whether the peer is a robot or a human, similar personality traits contribute to lexical entrainment. In both peer conditions, participants who score higher on “Openness to experience” are more likely to adopt less conventional terminology.