AI Dungeon

AI is making progress. The GPT-3 language model has been heralded as the next big thing. Some creative people at Latitude used it to create a dynamic text adventure game called AI Dungeon. Sounds like a great idea, so I gave it a try. The problem is that AI is still completely ignorant of common sense. You can write whatever you want, including nonsense, and the AI cannot tell the difference.

During my short interaction with this AI I turned myself from an android into a chicken. I moved from a spaceship to an office to a chicken coop, all within the same story. It is scary how bad GPT-3 is.

Here is the transcript of my little interaction with the AI:

You are Paco, an android living in the futuristic city of Zail. You have a laser pistol and a holoband. You are a security android created to provide military level force for corporations wealthy enough to purchase you. As you stand protecting a shipment you realize an incoming ship is carrying a weapon, you have to make a choice will you:

1. Throw away your life and your holoband for the easy way out
2.

You open communication channel.
The ship you are on is heading into the earth's atmosphere would you like some company.

You look for company.
You are standing in a room with a glass wall, in front of you are two holoband equipped guards.
You decide to watch the sun rise over the desert city of Zail.

You turn the guard into a chicken.
You are standing in a room and you are now a chicken. You can't see your surroundings but you can hear someone humming.
You decide to listen to the humming.
The humming stops and then starts again.

You lay an egg.
You are laying an egg and someone says, "Yes, I should have known you'd be laying eggs you chicken loving piece of crap."
You turn your head to see the person that is talking to you.

You whale song.
You are suddenly listening to a whale song.
The song stops and then starts again.
You decide to listen to it.

You turn the guard into a chicken.
You are standing in a chicken coop and you can hear the chickens clucking and a farmer is asking someone what the hold up is.
You decide to:

1. Go talk to the farmer.

You farmer is a whale.
You are in a coop containing a dozen chickens and a chicken farmer.
"What's going on?" You ask to be let out.

You chickens fly away.
The chickens fly away and you become a chicken again.
The guard begins singing "Chick, chicka cluck," You decide to:

1. Listen to the guard
2.

Persuasive Robotics Talk At The Emotional Machines Conference

My talk on Persuasive Robots at the Emotional Machines Conference.

I was invited to give a talk at the Interdisciplinary Conference on Emotional Machines in Stuttgart on September 21st, 2017. My talk focused mainly on the work I did in collaboration with Jürgen Brandstetter (doi: 10.1145/2909824.3020257, doi: 10.1177/0261927X15584682, doi: 10.1109/IROS.2014.6942730). My main argument was that the number of robots in our society will increase dramatically and that robots will participate in the formation of our language. Through their influence on our language they will be able to nudge the valence we associate with certain terms. Moreover, it would only take 10% of us owning a robot for robots to dominate the development of our language.

This was also the first time I used a 360-degree camera to record a talk. The technology is particularly useful for following the discussion between the speaker and the audience. Note that YouTube’s 360 video feature does not work in all web browsers (e.g. it does not work in Safari); Chrome and Firefox should be fine.

Persistent Lexical Entrainment in HRI

Jürgen presented our paper “Persistent Lexical Entrainment in HRI”. The full paper is available at the ACM Digital Library.

Here is the abstract of the paper:

In this study, we set out to ask three questions. First, does lexical entrainment with a robot interlocutor persist after an interaction? Second, how does the influence of social robots on humans compare with the influence of humans on each other? Finally, what role is played by personality traits in lexical entrainment to robots, and how does this compare with the role of personality in entrainment to other humans? Our experiment shows that first, robots can indeed prompt lexical entrainment that persists after an interaction is over. This finding is interesting since it demonstrates that speakers can be linguistically influenced by a robot, in a way that is not merely motivated by a desire to be understood. Second, we find similarities between lexical entrainment to the robot peer and lexical entrainment to a human peer, although the effects are stronger when the peer is human. Third, we find that whether the peer is a robot or a human, similar personality traits contribute to lexical entrainment. In both peer conditions, participants who score higher on “Openness to experience” are more likely to adopt less conventional terminology.