Our new HRI Podcast episode is out. Enjoy our coverage of The New Humanoids. Here is the summary:
A new wave of humanoids has entered the scene, and their creators promise us a bright future. Atlas, Figure, and Optimus are intended to work in spaces designed for humans. They are targeted not only at factories, but also at our homes and families. But which of these promises can they actually keep? Dwain Allan and I interviewed Will Jackson and Robert Riener on the future of humanoids.
Abstract: A fair and inclusive competition depends on a scoring system that takes all relevant factors into account. We analysed the current World Para Point System for swimming and identified several theoretical and practical disadvantages. We propose and test a Fair World Para Point System that not only improves the algorithm, but also extends it to account for the age of the athlete. It also provides a method for breaking point ties. This will enable para masters swimmers, for the first time, to compete fairly with each other. We also develop and publish tools that enable event organisers to apply the Fair World Para Point System directly.
Our paper on detecting the corruption of online questionnaires by artificial intelligence was recently published in the journal Frontiers in Robotics and AI. We created a short explainer video about our project:
Online questionnaires that use crowd-sourcing platforms to recruit participants have become commonplace due to their ease of use and low cost. Large Language Models (LLMs) based on Artificial Intelligence (AI) have made it easy for bad actors to automatically fill in online forms, including generating meaningful text for open-ended tasks. These technological advances threaten the data quality of studies that use online questionnaires. This study tested whether text generated by an AI for the purpose of an online study can be detected by both humans and automatic AI detection systems. While humans were able to correctly identify the authorship of text above chance level (76 percent accuracy), their performance was still below what would be required to ensure satisfactory data quality. Researchers currently have to rely on the disinterest of bad actors to successfully use open-ended responses as a tool for ensuring data quality. Automatic AI detection systems are currently completely unusable. If AIs become too prevalent in submitting responses, then the costs associated with detecting fraudulent submissions will outweigh the benefits of online questionnaires. Individual attention checks will no longer be a sufficient tool to ensure good data quality. This problem can only be systematically addressed by crowd-sourcing platforms. They cannot rely on automatic AI detection systems, and it is unclear how they can ensure data quality for their paying clients.
Theories are an integral part of the scientific endeavour. The target article proposes interesting ideas for a theory of human–robot interaction, but it lacks the specificity that would enable us to properly test this theory. No empirical data are yet available to determine its predictive power.