Opinion: Us and Them – Interacting with Robots
When I was a nipper, I was fascinated with all things science fiction. I would drift off to sleep of a night with thoughts of Star Wars, Battlestar Galactica, and Dr. Who. I remember drawing robots as a seven-year-old – and cyborgs, they were dead cool.
All these shows and movies had robots. They were the bad guys (Cybermen and Cylons), the good guys (Robby the Robot, Twiki, K9), and the light relief (C3PO and R2D2, Marvin and Wall-E).
These robotic characters play important parts in our fictions. We suspend our disbelief and empathise with them, are scared by them, and laugh at them. Have you seen the 1972 movie Silent Running? Seldom have I been so invested in boxes on legs, or so traumatised – which leads, in a round-about way, to the psychology of how we think about robots and react to them.
You may be surprised to learn that there is no great literature on the psychology of robots. At the same time, partly because of our growing reliance on machines that do things only we could previously do, and the increasing sophistication of those machines, that psychology is becoming a bigger focus for social scientists.
One question concerns how we react to increasingly “real” robots. We are quite willing to empathise with and laugh at C3PO, but compare the feelings and thoughts that C3PO engenders with your reaction to Geminoid, the robotic twin of Japanese roboticist Professor Hiroshi Ishiguro, or to CB2, his simulacrum of a two-year-old. On the one hand, these (and others) show we are getting better and better at producing robots that are incrementally more lifelike year on year. On the other, there are websites devoted to “creepy robots”, whose targets have one thing in common: they are increasingly, but not quite perfectly, lifelike.
The same has been observed in the increasingly CGI world of movies. (CGI is computer-generated imagery.) While we’re happy to recognise Disney’s old-school, pencilled Aladdin as something like a real person, the near (but not quite) perfect characters in the all-CGI movie The Polar Express make people uncomfortable. Those interested have seized upon the notion of the “uncanny valley” – a concept attributed to Masahiro Mori, then Professor of Robotics at the Tokyo Institute of Technology. Professor Mori argued that though our reaction to robots becomes more positive as they appear more human, there’s an important point between almost-but-not-quite-human and indistinguishable-from-human at which we’ll react with revulsion.
One hypothesis for this sensitivity around the tipping point is that we have been finely tuned over a long evolutionary period to protect ourselves from contamination. Green eggs and ham? In spite of the joy the story brings children, adults like you and me wouldn’t touch ‘em, because we know eggs and ham shouldn’t be green – and when foods are a colour they’re not supposed to be, they can make us sick.
Disgust is important in this context – it’s considered to be one of the six or sevenish primary emotions: emotions that make sense all around the world and that we find relatively easy to recognise on the faces of others. Researchers like Paul Rozin and Jonathan Haidt have developed theories about how disgust develops. Evolution is presumed to play a big role: because health is such an important commodity, threats to our “body envelope” lead to a hard-wired disgust response, in the same way that pain is a hard-wired mechanism designed to tell us to take our hands off the hot radiator.
So, we feel disgust when something doesn’t look right – and particular kinds of not-right trigger it pretty automatically (it’s not a deliberative process). Think of the things we feel disgust towards – things that are “corrupted”, that are dead or rotten. In combination with Mori’s “uncanny valley”, then, we appear to be reacting with a hard-wired disgust response because something, in this case a robot, doesn’t look quite healthy. Evidence for this line of thought includes studies showing that even non-human primates are discomforted by images of other primates that have been subtly colour-rebalanced.
One of the many interesting things about disgust is that it’s not a yes-no, everyone-has-it kind of thing – people vary in terms of how disgust-sensitive they are. Some feel uncomfortable at the thought of sitting on an already-warm bus seat, while others don’t. Some of us are quite happy to eat a chocolate shaped like a dog poop, but others gag at the thought. Disgust sensitivity also predicts a range of social and political attitudes. For example, those on the political right tend to be more disgust-sensitive, and sensitivity predicts things like opposition to immigration (concern over symbolic contamination of values) and to nearby landfill sites (proximity to corrupted material). I don’t believe anyone has looked at this, but I’d predict that people who score high on disgust sensitivity will show a stronger reaction to uncanny-valley-inducing stimuli.
Another mechanism, and one that’s potentially consistent with the disgust hypothesis, is that a human-looking machine leads us to expect human-like behaviour. A robot that looks right but behaves wrong creates a perceptual conflict that we respond to with the disgust siren of wrongness. Analogues of this can be seen in some of our other movie experiences – the use of humans moving oddly in horror movies can be very striking.
Human-like behaviour and movement aren’t the only expectations that go along with looking human-like. We also tend to assess the intentionality of things that we see. For example, studies in the mid-twentieth century showed that when people watch two objects apparently moving along the same path, they tend to infer that the second object is “following” the first – and the closer the two objects are in space and time, the stronger this inference. In fact, we constantly question the intentions of those around us, sampling and re-sampling from observable behaviour until we feel comfortable with what we think is going on.
Similarly, psychologists Kurt Gray and Daniel Wegner note that the other thing we look to attribute to the things around us is a mind. Following recent work in the area of dehumanisation – trying to understand how human beings can inflict unpleasantness on other humans, aided by looking at them as something other than human – they propose that what makes us a little queasy is that human-like robots may appear, by extension, to be human-minded. Furthermore, it’s not just any mind that makes us worry. We understand that we have emotions and autonomy – but we don’t understand this for robots. This leads us again to a conflict of expectations. Thus, when something we know is a robot moves apparently autonomously we freak out, and when something that is obviously non-human displays emotion we lose it (demonic possession)!
Robophiles will be pleased to hear that it’s not just robots that generate disquiet. Zombies, and recognisably human people who appear to lack “normal” feelings (Hannibal Lecter, anyone?), also inspire ambivalence.
Given the strongly automatic nature of our reaction to this expectation gap, it is likely to remain – at least until we reach a point where there is no expectational conflict, because we don’t even recognise a robot for what it is.
Marc Wilson is an Associate Professor and currently Head of the School of Psychology at Victoria University of Wellington. Besides teaching psychology he has diverse research interests involving understanding what is going on in people’s heads. His major research focus at present is understanding adolescent self-injury, but he tinkers in just about anything. He is a regular contributor to print, radio, and television media.