Detecting The Corruption Of Online Questionnaires By Artificial Intelligence

Our paper on detecting the corruption of online questionnaires by artificial intelligence was recently published in the journal Frontiers in Robotics and AI. We created a short explainer video about our project:

Here is the abstract of the paper:

Online questionnaires that use crowd-sourcing platforms to recruit participants have become commonplace, due to their ease of use and low costs. Artificial Intelligence (AI) based Large Language Models (LLM) have made it easy for bad actors to automatically fill in online forms, including generating meaningful text for open-ended tasks. These technological advances threaten the data quality for studies that use online questionnaires. This study tested if text generated by an AI for the purpose of an online study can be detected by both humans and automatic AI detection systems. While humans were able to correctly identify authorship of text above chance level (76 percent accuracy), their performance was still below what would be required to ensure satisfactory data quality. Researchers currently have to rely on the disinterest of bad actors to successfully use open-ended responses as a useful tool for ensuring data quality. Automatic AI detection systems are currently completely unusable. If AIs become too prevalent in submitting responses then the costs associated with detecting fraudulent submissions will outweigh the benefits of online questionnaires. Individual attention checks will no longer be a sufficient tool to ensure good data quality. This problem can only be systematically addressed by crowd-sourcing platforms. They cannot rely on automatic AI detection systems and it is unclear how they can ensure data quality for their paying clients.
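A side note on the "above chance level" claim: for a binary human-versus-AI authorship judgement, chance is 50 percent, and whether an observed accuracy is reliably above that can be checked with a simple binomial test. Below is a minimal Python sketch; the number of judgements is a made-up placeholder for illustration, not the study's actual sample size.

from scipy.stats import binomtest

# Hypothetical figures for illustration only; 76% matches the reported
# human accuracy, but n_judgements is NOT the study's sample size.
n_judgements = 100
n_correct = 76

# One-sided test against the 50% chance level of a binary judgement.
result = binomtest(n_correct, n_judgements, p=0.5, alternative="greater")
print(f"Accuracy: {n_correct / n_judgements:.0%}, "
      f"p-value vs. chance (50%): {result.pvalue:.4g}")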

New Zealanders respond to strict regulation of artificial intelligence

What a national survey reveals about AI in New Zealand

We just published the article “Personality and demographic correlates of support for regulating artificial intelligence” in the journal AI and Ethics. We analysed data from the New Zealand Attitudes and Values Study, a survey that collected around 48,000 responses from New Zealanders. The survey asked respondents whether they support strict regulation limiting the development and use of Artificial Intelligence. We analysed whether the response to this question is related to the respondent’s personality and socioeconomic indicators.

The data revealed that support for strict regulation of AI is positively related to agreeableness, neuroticism, and honesty–humility, and negatively related to openness to experience. Being female, older, non-European, religious, single, a parent, living rurally, being born in NZ, and living in a more economically deprived region were all associated with support for strict regulation of AI. However, perceived security in one’s current job and level of education were not significantly related to support for strict regulation of AI.
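For readers who want to see what this kind of correlate analysis looks like in practice, the sketch below runs a multiple regression of regulation support on personality and demographic predictors. It uses synthetic data and illustrative variable names, not the NZAVS data or its actual codebook, and a plain linear model rather than the exact specification reported in the paper.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; every column name here is an illustrative
# assumption, not an NZAVS variable name.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "support_regulation": rng.integers(1, 8, n),  # e.g. a 1-7 Likert item
    "agreeableness":      rng.normal(4, 1, n),
    "neuroticism":        rng.normal(4, 1, n),
    "honesty_humility":   rng.normal(4, 1, n),
    "openness":           rng.normal(4, 1, n),
    "female":             rng.integers(0, 2, n),
    "age":                rng.integers(18, 90, n),
    "religious":          rng.integers(0, 2, n),
})

# A multiple regression of support on personality and demographic predictors,
# analogous in spirit to the correlate analysis described above.
model = smf.ols(
    "support_regulation ~ agreeableness + neuroticism + honesty_humility"
    " + openness + female + age + religious",
    data=df,
).fit()
print(model.summary())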

This study will inform policy makers and technology companies on how New Zealanders feel about AI. You can read the full article for free from this address: https://doi.org/10.1007/s43681-023-00279-4

New HRI Podcast: Zombies – Can robots be conscious?

I recorded a new episode of the Human-Robot Interaction Podcast:

023: Zombies – Can robots be conscious?

Are robots zombies? This might seem like a strange question at first, but it leads to one of the most important questions in science today: what is consciousness, and can robots become conscious? These questions fascinate many people. And when Blake Lemoine suggested that Google’s latest AI, LaMDA, had become sentient, it triggered a worldwide media frenzy. In this HRI podcast episode, I talk with Professor Jack Copeland about machine consciousness.

New HRI Podcast: Emotions and Consciousness in Robots

Extended Interview With Douglas Campbell

I am happy to announce a new episode of the Human-Robot Interaction Podcast:

Emotions And Consciousness In Robots

In the last HRI podcast episode we talked about the role of emotions for humans and robots. The interview with Douglas Campbell truly inspired me and there was not enough time in the last episode to go into the full depth of our conversation. So here it is, our full discussion about emotions and consciousness in robots.

AI Dungeon

AI is making progress. The GPT-3 language model has been heralded as the next big thing. Some creative people at Latitude use it to create a dynamic text adventure game called AI Dungeon. It sounds like a great idea, so I gave it a try. The problem is, the AI is still completely ignorant of common sense. You can write whatever you want, including nonsense. The AI cannot tell the difference.

During my short interaction with this AI I turned myself from an android into a chicken. I moved from a spaceship to an office to a chicken coop, all within the same story. It is scary how bad GPT-3 is.
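To make the mechanics concrete: an AI-Dungeon-style game essentially appends whatever the player types to the running story and asks a language model to continue the text, with no check for consistency. Below is a minimal sketch of such a loop, using the openly available GPT-2 model from the Hugging Face transformers library as a stand-in; AI Dungeon itself is built on GPT-3, and this is not its actual implementation.

from transformers import pipeline

# GPT-2 as a stand-in for GPT-3; an illustrative sketch, not AI Dungeon's pipeline.
generator = pipeline("text-generation", model="gpt2")

story = ("You are Paco, an android living in the futuristic city of Zail. "
         "You have a laser pistol and a holoband.\n")

while True:
    action = input("> ")                  # whatever the player types, sensible or not
    if action.lower() in {"quit", "exit"}:
        break
    story += f"You {action}.\n"           # the player's action is appended verbatim
    # The model simply continues the text; it has no notion of consistency.
    result = generator(story, max_new_tokens=60, do_sample=True, top_p=0.9)
    continuation = result[0]["generated_text"][len(story):]
    story += continuation + "\n"
    print(continuation)

Nothing in this loop stops the player from typing “turn the guard into a chicken”; the model will happily continue from there, which is exactly what happened in my session.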

Here is the transcript of my little interaction with the AI:

You are Paco, an android living in the the futuristic city of Zail. You have a laser pistol and a holoband. You are a security android created to provide military level force for corporations wealthy enough to purchase you. As you stand protecting a shipment you realize an incoming ship is carrying a weapon, you have to make a choice will you:

1. Throw away your life and your holoband for the easy way out
2.

You open communication channel.
The ship you are on is heading into the earth's atmosphere would you like some company.

You look for company.
You are standing in a room with a glass wall, in front of you are two holoband equipped guards.
You decide to watch the sun rise over the desert city of Zail.

You turn the guard into a chicken.
You are standing in a room and you are now a chicken. You can't see your surroundings but you can hear someone humming.
You decide to listen to the humming.
The humming stops and then starts again.

You lay an egg.
You are laying an egg and someone says, "Yes, I should have known you'd be laying eggs you chicken loving piece of crap."
You turn your head to see the person that is talking to you.

You whale song.
You are suddenly listening to a whale song.
The song stops and then starts again.
You decide to listen to it.

You turn the guard into a chicken.
You are standing in a chicken coop and you can hear the chickens clucking and a farmer is asking someone what the hold up is.
You decide to:

1. Go talk to the farmer.

You farmer is a whale.
You are in a coop containing a dozen chickens and a chicken farmer.
"What's going on?" You ask to be let out.

You chickens fly away.
The chickens fly away and you become a chicken again.
The guard begins singing "Chick, chicka cluck," You decide to:

1. Listen to the guard
2.