Research

Comparison of color measurement accuracy of ColorMunki Design and FRU WR-10QC Colorimeter

Posted on Oct 25, 2017 in Design, Featured, Research | 0 comments


I am working on a colour project and purchased the FRU WR-10QC colorimeter to complement my long-serving workhorse, the X-Rite ColorMunki Design. My ColorMunki is already several years old and I was concerned that its accuracy might have declined. When I measured several hundred samples, I noticed that the two devices gave considerably different LAB values.

To determine which device was closer to the truth, I measured the 48 defined colours of Datacolor’s SpyderCHECKR 48 and calculated the absolute error each device made. A paired-sample t-test showed that the ColorMunki produces significantly smaller measurement errors on L (t(47)=-9.229, p<0.001), a (t(47)=-4.590, p<0.001) and b (t(47)=-4.871, p<0.001). However, on all three channels both devices measure colours that differ significantly from the target colours of the SpyderCHECKR card. Figure 1 shows the means and standard deviations of all measurement errors.

Figure 1: Mean and Standard Deviation of all measurements for both devices.
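If you want to reproduce the error analysis from the OSF data, a minimal sketch in Python could look like the following. The file name and the column names (target_*, munki_*, wr10_*) are hypothetical placeholders, not the actual structure of the deposited data set.

```python
# Minimal sketch: absolute per-channel errors and paired t-tests for two devices.
# File and column names are hypothetical placeholders for the OSF data.
import pandas as pd
from scipy.stats import ttest_rel

df = pd.read_csv("spydercheckr_measurements.csv")  # hypothetical file name

for channel in ["L", "a", "b"]:
    err_munki = (df[f"munki_{channel}"] - df[f"target_{channel}"]).abs()
    err_wr10 = (df[f"wr10_{channel}"] - df[f"target_{channel}"]).abs()
    # Paired-sample t-test: the same 48 patches are measured by both devices.
    t, p = ttest_rel(err_munki, err_wr10)
    print(f"{channel}: t({len(df) - 1}) = {t:.3f}, p = {p:.4f}")
```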

There does seem to be some structure in the errors that the WR-10QC produces. Have a look at the heat map (Figure 2). The data for my little experiment is available at the Open Science Framework (DOI: 10.17605/OSF.IO/UWEFD).

Figure 2: Heat Map of the absolute errors
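A heat map like Figure 2 only takes a few lines to draw. The sketch below uses placeholder data and assumes the absolute errors have been arranged in a patch grid (e.g. 6 × 8 for the 48 patches); it is an illustration, not the script used for the figure.

```python
# Sketch of a per-patch error heat map; the data and grid layout are placeholders.
import numpy as np
import matplotlib.pyplot as plt

abs_errors = np.random.rand(6, 8)  # replace with the real per-patch absolute errors

fig, ax = plt.subplots()
im = ax.imshow(abs_errors, cmap="viridis")
fig.colorbar(im, ax=ax, label="absolute error")
ax.set_title("Absolute errors per patch")
plt.show()
```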

Although both devices show significant deviations from the reference values, this is not far off from what can be expected of devices in this price range. Still, the ColorMunki Design produces significantly better results than FRU’s WR-10QC.


Persuasive Robotics Talk At The Emotional Machines Conference

Posted on Sep 27, 2017 in Featured, Research | 0 comments


I was invited to give a talk at the Interdisciplinary Conference on Emotional Machines in Stuttgart on September 21st, 2017. My talk focused mainly on the work I did in collaboration with Jürgen Brandstetter (doi: 10.1145/2909824.3020257, doi: 10.1177/0261927X15584682, doi: 10.1109/IROS.2014.6942730). My main argument was that the number of robots in our society will increase dramatically and that robots will participate in the formation of our language. Through their influence on our language they will be able to nudge the valence we associate with certain terms. Moreover, it will only take 10% of us owning a robot for robots to dominate the development of our language.

This is also the first time I used a 360-degree camera to record a talk. The technology is particularly useful for following the discussion between the speaker and the audience. YouTube’s 360 video feature does not work in all web browsers (for example, it does not work in Safari); Chrome and Firefox should be fine.


Bloomberg Businessweek interviewed Omics about my nonsense paper

Posted on Aug 29, 2017 in Featured, Research | 0 comments


Esmé E Deprez and Caroline Chen from Bloomberg Businessweek visited the headquarters of Omics in India to interview its owner Srinubabu Gedela about his company. Omics is widely considered a predatory publisher that publishes papers without rigorous peer review. Confronted with the acceptance of my nonsensical paper, he replied that “Bartneck’s paper slipped through because it was submitted so close to the conference’s deadline.” Yeah, right.


Jürgen Brandstetter defended his PhD today!

Posted on May 5, 2017 in Event, Research | 1 comment

Congratulations to Jürgen Brandstetter, who successfully defended his PhD thesis today. It will become available in the library soon. Thanks to Clayton, Jen and Janette for their continued support of the project.

Jürgen Brandstetter

The Power of Robot Groups with a Focus on Persuasion and Linguistic Cues – Manipulating robot through the backdoor of language

Until now, the HRI community has generated a lot of knowledge about how one robot affects one human, and about how one robot affects multiple human listeners. Much less is known about how robots affect human language and how such language change can in turn affect human attitudes and behavior. This language effect becomes particularly interesting once a major part of the human population owns robots, as is now the case with smartphones. What happens if all robots use the same words, for example because they all draw on the same source as their dictionary? Will robots be able to affect the word choice of the human population? And, even more interesting, will this word choice affect the attitudes and behavior of the human population? To find out whether this might be possible, I developed three interconnected experiments.

In the first experiment, the peer pressure that robots exert on humans was explored, with a particular focus on how this peer pressure affects a person’s language. To see whether and how this influence works, the effect of the robots was compared against that of human actors. The results showed that the actors could indeed influence the participants as predicted; however, no such influence could be shown for the robots. It was concluded that the robots did not affect the humans because the humans did not feel part of the robots’ group.

In the second experiment, a group setting between the robot and the human was created, and the robot tried to influence the human’s language. Importantly, the experiment measured whether the participant’s language was affected by the robot’s influence even after the interaction, without any robot in the room. It also measured whether the word chosen by the robot influenced how a person perceived the object under discussion. The experiment was successful: first, the group building worked and the robot was now able to affect the human’s language; second, robots are able to affect human language even after the interaction is over; and third, robots are able to affect humans’ attitudes toward objects simply by using positively or negatively connoted synonyms for the object in question.

In the third and last experiment, the question was whether many robots can affect the language of a whole human population. Two parameters were of particular interest: how many humans need to own robots before the robots can manipulate the language of the whole population, and whether it matters which people in the population receive a robot, that is, whether they are well connected or poorly connected. Since we are not yet at a point where a large share of people actually own robots, and since we would not be able to control all of these robots, a simulation was created in which these parameters were varied. The outcome was that, on average, only 11% of the human population need robots for the robots to affect the language of 95% of the population.
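The thesis describes the simulation in detail; purely to illustrate the general idea, a toy version might look like the sketch below. The network model (Barabási–Albert), the threshold-based adoption rule and all parameter values are assumptions made for this example, not the actual model or parameters used in the dissertation.

```python
# Toy sketch: seed a fraction of "robot owners" in a social network and let a
# robot-introduced word spread by neighbourhood adoption. All modelling choices
# here are illustrative assumptions, not the model used in the thesis.
import random
import networkx as nx

def simulate(n=1000, owner_fraction=0.11, adopt_threshold=0.3, rounds=50, seed=1):
    random.seed(seed)
    graph = nx.barabasi_albert_graph(n, m=3, seed=seed)  # scale-free social network
    owners = set(random.sample(list(graph.nodes), int(owner_fraction * n)))
    uses_word = {node: (node in owners) for node in graph.nodes}

    for _ in range(rounds):
        for node in graph.nodes:
            if uses_word[node]:
                continue  # owners and earlier adopters keep using the word
            neighbours = list(graph.neighbors(node))
            share = sum(uses_word[nb] for nb in neighbours) / max(len(neighbours), 1)
            if share >= adopt_threshold:
                uses_word[node] = True

    return sum(uses_word.values()) / n

print(f"fraction of population using the robot's word: {simulate():.2%}")
```

A scale-free network is used here only because it is a common stand-in for social networks; varying owner_fraction and the choice of seed nodes (hubs versus poorly connected nodes) is the kind of parameter sweep the abstract describes.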

Finally, to further deepen the knowledge of how simple language change can affect human attitudes and behavior, a literature review is added. This review focuses in particular on persuasion via language, including effects such as gender-neutral versus non-gender-neutral language. To conclude, this dissertation shows that in certain situations a group of robots is able to affect the language of the largest part of the human population, which in turn might lead to a change in behavior in that same population.


Robert M. Pirsig died at the age of 88

Posted on Apr 25, 2017 in Culture, Research | 0 comments

It was with great sadness that I learned today that Robert M. Pirsig, author of “Zen and the Art of Motorcycle Maintenance” and “Lila: An Inquiry into Morals”, has died. He was and always will be a personal hero of mine. His work inspired some of my own articles. Robert, you will be greatly missed, and I wish you had written more.


Persistent Lexical Entrainment in HRI

Posted on Mar 7, 2017 in Research | 0 comments

Jürgen presented our paper on “Persistent Lexical Entrainment in HRI”. The full paper is available at the ACM Digital Library.

Here is the abstract of the paper:

In this study, we set out to ask three questions. First, does lexical entrainment with a robot interlocutor persist after an interaction? Second, how does the influence of social robots on humans compare with the influence of humans on each other? Finally, what role is played by personality traits in lexical entrainment to robots, and how does this compare with the role of personality in entrainment to other humans? Our experiment shows that first, robots can indeed prompt lexical entrainment that persists after an interaction is over. This finding is interesting since it demonstrates that speakers can be linguistically influenced by a robot, in a way that is not merely motivated by a desire to be understood. Second, we find similarities between lexical entrainment to the robot peer and lexical entrainment to a human peer, although the effects are stronger when the peer is human. Third, we find that whether the peer is a robot or a human, similar personality traits contribute to lexical entrainment. In both peer conditions, participants who score higher on “Openness to experience” are more likely to adopt less conventional terminology.
