A shorter, edited version of this essay was published on TheConversation.com in February 2021 under the title Will robots make good friends? Scientists are already starting to find out. For an in-depth discussion of this topic please see my iScience perspective piece with Julie Robillard—Are friends electric? The benefits and risks of human-robot relationships. I have also considered some of the “in principle” arguments against the possibility of human-robot relationships in the essay Robots are not just tools (a commentary on the EPSRC Principles of Robotics).
In the 2012 movie Robot and Frank, the lead protagonist, Frank, a retired cat burglar, is suffering the early symptoms of dementia, such as forgetfulness. His son, partly out of concern, partly out of guilt, buys him a home robot of a kind that is not available yet but whose capabilities could be achievable in robotics in the future. “Robot” can do household chores, like cooking and cleaning, engage in conversation, and provide Frank with reminders, such as when to take his medicine. Real-world robots already exist that do all of these things, at least to some degree.
In the movie, Frank is appalled at the idea of having a robot look after him, but only to begin with. Gradually he begins to see Robot as something that is both useful and as a form of social companion. By the end of the film (spoiler alert!), there is a clear bond between man and machine, such that Frank is concerned at the prospect that Robot might have to be reset, obliterating all personalised memories and experience.
This is, of course, a fictional story, but it challenges us to think about difficult questions around the use of robots in care, at a time when these technologies are becoming available and when there is increasing need for new ways to help people live independently as they age. More than that, the film raises, and gently explores, an even more controversial idea—that humans could, over time, form relationships with robots that they find valuable and rewarding. Frank initially despises the robot, but gradually, as he gets to know it and understands its uses, and as the robot adapts to him, they form a close relationship.
For some, relating to robots is a natural extension of relating to other things in our world—people, pets, even cherished objects like cars or wedding rings. As the psychologists Reeves and Nass discovered, and reported in their 1996 book The Media Equation, people respond naturally and socially towards media artifacts like computers and televisions, and it is becoming clear that this is also true, with ‘bells on’, for robots.
However, for others, including some writers on robot ethics, this just seems wrong on many levels. They worry about wasting emotional energy on entities that can only simulate emotions, or about escaping into easy, but ultimately unfulfilling, interactions with robots and missing out on relationships with other people, which may be more difficult at times but are ultimately more rewarding. For an influential group of UK researchers who charted the EPSRC principles of robotics, human-robot “companionship” is an oxymoron, and to market robots as having social capabilities is dishonest.
If you consider that emotional bonding with robots is a bad idea, then you have reason to be concerned. Not because super home-help humanoids like Frank’s Robot are arriving imminently (current prototypes are too expensive and their behavior very limited), but because people are developing bonds with robots that are already here: the helpful, but very non-humanoid, vacuum-cleaning and lawn-trimming robots that can be bought for less than the cost of a dishwasher. A surprisingly large number of people give these robots pet names; some even take their cleaning robots on holiday. Other evidence of emotional bonds with robots includes the Shinto blessing ceremony for Sony Aibo dog-like robots, held before the robots were dismantled for spare parts, and the squad of US troops who fired a 21-gun salute, and awarded a Purple Heart and a medal, to “Boomer”, a bomb-disposal robot that was destroyed in action.
These stories, and the psychological evidence we have so far, make clear that there is something about human sociality that can extend to include entities that are very different to us, even when we know they are manufactured and pre-programmed. Robots are perhaps particularly good at eliciting a social response, and invoking some form of emotional bond, as they move autonomously, and with the appearance of purpose; something we otherwise only see in humans and animals. The physicality of robots also helps, compared, for instance, to a smart speaker, or virtual character—tactile interaction is of enormous importance to humans as social mammals. Also, while they are often doing helpful things, robots sometimes also need our help, and helping others can also promote social bonding (human caring and bonding are served by closely related brain systems).
The relationships that people form with these robots may have similarities to those we have with animal pets, but we should be careful not to assume that they are exactly the same. People may know quite well that animals are sentient in a way that robots are not, but they may not be concerned by that, or they may be happy to “suspend disbelief” about the constructed nature of the robot’s behavior, as we do when watching a play or reading a novel.
Forming an emotional attachment to a robot is one thing, but could a bond with a robot ever be considered a form of friendship? The philosopher John Danaher thinks yes, despite setting a very high bar for what friendship means. Danaher takes as his starting point a definition of “true” or “virtue” friendship originated by the Greek philosopher Aristotle, who saw true friendship as premised on mutual good will, admiration and shared values. In these terms, friendship is about honesty, openness and forming a partnership of equals. Building a robot that can satisfy Aristotle’s criteria is a technical challenge and is some considerable way off (as Danaher happily admits). Robots that may seem to be getting close, such as Hanson Robotics’ Sophia, often base their behavior on a library of pre-prepared responses. They are what people in AI call “chatbots”, but with a human-like appearance. Arguably, this is an elaborate form of puppetry, although from the perspective of the person interacting with the robot it does still feel like a social exchange.
Meanwhile, Aristotle also talked about other forms of friendship—specifically “utility” and “pleasure” friendship—neither of which requires a symmetric bond; both are defined by the benefits provided to the befriender. By contrast to the ideal of virtue friendship, this is a relatively low bar—robots that are useful or give some form of pleasure are with us already (the vacuum cleaner, for example). But to most of us this doesn’t seem like genuine friendship, and for Aristotle only virtue friendship was “perfect”, the other forms “imperfect”.
Although we should give Aristotle his due, as an original and radical thinker, it may be that his 2000-year-old taxonomy is not well-suited to the task in hand—Aristotle, after all, lived in an age before robots. Together with a colleague, Julie Robillard, I recently reviewed the extensive literature on human-human relationships to try to understand how, and if, ideas about how humans relate to each other could apply to our future relationships with robots. We note that there are many different kinds of human-human relationship—parents, relatives, long-term partners, lovers and sexual partners, friends, pen- or online-pals, colleagues, teachers, acquaintances, service providers (including carers, therapists, assistants, waiters), even celebrities (with whom the relationship can be intense but one-way!). Moreover, relationships don’t exist in isolation; they come in clusters and networks, and happen in different settings. They also change over time and through our lives.
Overall, we noticed that different kinds of human-human relationship are qualitatively different from each other, but that relationships with robots might be usefully compared with those we have with other humans, along some dimensions at least (duration, formality, intensity, for instance). However, there are also other models for human-robot relationships that we might consider, such as those we have with animals—after all, some people will count their pets amongst their “best friends”. Given that robots are interestingly different from both people and animals, we may relate to them in ways that are quite novel; in other words, technology could lead to the evolution of new forms of relationship and friendship that we haven’t seen before.
What of friendship? Whereas Aristotle had a particular view on what constituted perfect (true) friendship, Cacioppo and Patrick, writing on the problem of human loneliness, have suggested that the ideal of perfect friendship is very hard to achieve, and perhaps impossible for many of us. Moreover, in looking for perfection we may overlook forms of social connectedness that are rewarding and satisfying. Many of the human-human relationships we have will not match up to the Aristotelian ideal, but equally they often provide more than simple utility or pleasure. What Aristotle missed is that social interaction is rewarding in its own right, and something that, as social mammals, humans have a strong need for. It seems possible that relationships with robots could help to address this deep-seated urge we all feel for social connection.
In our analysis of human-robot relationships, Julie and I identified multiple areas of potential benefit for people from social interaction with robots. These included receiving forms of physical and emotional comfort and sharing enjoyable social exchanges. We also discussed some potential risks. These particularly arise in settings where interaction with a robot could come to replace interaction with people, or where people are denied a choice as to whether they interact with a person or a robot. These are important concerns, but they are possibilities rather than the inevitabilities they are sometimes portrayed as by people who dislike, in principle, the idea of relating to robots. In the literature we reviewed we found evidence of the opposite effect: robots acting to scaffold social interactions with others—for example, by acting as icebreakers in groups—or helping people to improve their social skills or boost their self-esteem.
When it comes to robots, though, there may be another reason why we lose sight of the potential upside—a deep-rooted concern that our modern digital technologies, epitomised in the human-like robot, are causing us to become dehumanized. If this is the case, we should think twice about treating robots as the villains of the piece.
Robots, including the vacuum cleaner, pet-like, or humanoid ones, are not necessarily all that complicated or sinister. If an experience of social connectedness can be gained by interacting with a helpful home robot it seems foolish to stigmatize that, or to suggest that the opportunity be taken away. In the movie, Frank came to recognise Robot as his friend—recognition that loyalty, complementarity, and a willingness to be there, can be valued traits in a social companion that may not be uniquely human.
Disclosure: I am a co-founder, director and shareholder of the company Consequential Robotics Ltd, that has developed the MiRo-e animal-like robot.