Are social robots deceptive? It’s complicated…

There is increasing interest in social robots as assistive technologies to support a wide range of potential user groups. Nevertheless, the widespread use of robots in social roles has been challenged, both in terms of its benefits and its ethics. People ask, can a robot be authentically social? If not, does that mean that any use of social robots as assistive technology is intrinsically deceptive? This contribution, co-authored with Julie Robillard, addresses this controversy, building on a relational view of human-robot interaction which argues that sociality has less to do with the essential natures of the human and robot actors involved, and more to do with the patterns and consequences of their interaction. From this starting position we consider and explore four design principles for social robots and contrast these with the “transparency” view that robots should be designed to reveal their true machine nature.

This is a slightly updated version of a paper “Designing Assistive Robots: A Relational Approach” that was delivered to the Joint International Conference on Digital Inclusion, Assistive Technology & Accessibility – ICCHP-AAATE 2022, Lecco, Italy and published in the Open Access Compendium of that meeting. Please see the archived version for the full text and additional references.

Introduction

Social robots are increasingly used for assistive applications across the lifespan and with populations across the entire spectrum of vulnerability (Prescott & Robillard, 2021). Examples include distraction devices for children undergoing painful procedures, communication aids for children with autism, mental health interventions for adults and children, and interventions to reduce agitation in older adults living with dementia. Until recently, social robots were seen as somewhat futuristic and largely existed in the realm of research. However, the Covid-19 pandemic accelerated their use in a range of contexts from education to healthcare as part of a drive to maintain social connectedness while limiting close physical contact. The development of large language models, such as ChatGPT, has also hugely accelerated the capacity to create robots and AIs that provide engaging forms of social interaction for people (see our Hey MiRo demo as an example).

Today, questions surrounding the ethics of social robots and the nature and morality of human-robot (and human-AI) relationships are more pressing than ever, with important implications for how these artefacts are designed and employed in real-world settings. Here we consider different approaches to human-robot relationships, describe the key components of a relational approach, and propose four evidence-based design principles for the ethical design of social robots.

The Relational Approach in Robot Ethics

The relational view in robot ethics argues for a move away from essentialist (or substantialist) notions of what a human is, what a robot is, and what it means for them to have a relationship. Instead, the relational view proposes that what matters are the patterns and consequences of social interactions between humans and robots, including their meaning and significance to the people involved and their wider impact on social and relationship contexts (Prescott & Robillard, 2021).

This view can be seen as an alternative to more essentialist conceptions that seek to define what is (and what is not) a human (or a robot) in terms of fundamental character or attributes, irrespective of context. Essentialist views can be attractive ways to frame and explore certain ethical questions, as they chime with many of our intuitions (for instance, that all humans share a common “nature”); however, they can be criticised on metaphysical grounds for supporting outdated ideas of the human that can be exclusionary, and for failing to recognise the changing nature of our humanity, including through our interactions with our technologies. The relational view in technology ethics, on the other hand, is part of a broader interactivist turn in the social, cognitive and information sciences (e.g. Emirbayer, 1997; Coeckelbergh, 2010; De Jaegher et al., 2010; Gunkel, 2018) that sees the entities involved in a social transaction (e.g. humans and robots) as deriving “their meaning, significance, and identity from the (changing) functional roles they play within that transaction” (Emirbayer, 1997, p. 287).

While the debate between relational and essentialist views continues, we consider it useful to explore and set out some of the implications of the relational view for the design of assistive technologies, particularly those, such as robots and other social AIs (e.g. avatars), that purport to have some social function and whose benefits are considered to arise, at least in part, through their sociality.

The possibility that a robot could be deemed to be social is hotly contested. For instance, Robert Sparrow (2002) has argued that robots (and similar devices) are incapable of sociality, and that to present them as otherwise is intrinsically deceptive and morally deplorable. Reflecting on similar views, some authors have proposed that, to be ethical, robots should be designed such that their machine nature is transparent (Boden et al., 2017; Wortham & Theodorou, 2017; Bryson, 2018). To enable this transparency, it is suggested that the user should be reminded, occasionally if not continuously, that the device is a machine controlled by algorithms rather than a “genuine” social actor (Wortham & Theodorou, 2017; Bryson, 2018).

Robert Wortham and Andreas Theodorou’s (2017) “muttering robot” provided a running commentary on its programming to increase transparency about its machine nature.

Central to this debate is the question of what it means to be deceptive. We follow the philosopher John Danaher (2020a) who defines deception as involving “the use of signals or representations to convey a misleading or false impression” (p. 118).

In robotics, deception is most often held to involve conveying a misleading impression of qualities that humans have, and that robots do not (or in principle could not) have. We might summarise these as anthropomorphic qualities, or more specifically, a sub-class of anthropomorphic qualities that are deemed controversial, most often psychological phenomena such as emotions, intentions, and self-awareness (in contrast, physical features such as having a head, two arms and two legs are rarely considered deceptive or problematic).

If robots exhibit qualities or functionalities that are viewed as deceptive, the further question is whether this is, indeed, unethical. Broadly speaking, we see three general positions, set out in Table 1. The first two are broadly similar, differing only in what they see as the solution to the ethical “problem” of social robotics. We identify with the third of these positions (of which there are multiple versions), which begins from a more nuanced view of the nature of deception in robotics.

Table 1. Views on deception and ethics in social robotics.

| Are social robots deceptive? | Is this unethical? | What should we do about it? | Example authors |
|---|---|---|---|
| Yes | Yes | Avoid building or using them altogether | Sparrow (2002); Turkle (2017) |
| Yes | Yes | Design it out, or minimise it through transparency | Boden et al. (2017); Wortham & Theodorou (2017) |
| Not necessarily | Depends on the nature of the deception | Design to avoid damaging forms of deception | Shim & Arkin (2016); Sorell & Draper (2017); Danaher (2019, 2020a, 2020b); Prescott & Robillard (2020, 2021) |

Determining whether social robots are deceptive by nature requires reflection on our understanding of sociality.  To rule out the possibility that an artefact could ever be social seems exclusionary given that we do not yet have a clear understanding of human sociality or how it is generated (Prescott, 2017). Moreover, embodied cognitive science is forcing a rethink about the nature of sociality as something that arises not in individuals but in the interactions that occur between them (De Jaegher, Di Paolo, & Gallagher, 2010). Applied to robots, this suggests that they need not have self-understanding, or intrinsic social competencies or properties to be authentically social (Damiano & Dumouchel, 2018).

Nevertheless, we might agree that present-day robots are not social in the same way that people are.  If so, is it possible to defend the deliberate creation of an impression of human-like sociality (as, for example, artificial personal assistants strive to do)?  A key idea here is that the tendency to anthropomorphise objects and devices occurs widely and pre-dates robotics and artificial intelligence (Heider & Simmel, 1944; Reeves & Nass, 1996).  For example, we anthropomorphise dolls, cars, even trees and mountains. 



The tendency to project human-like attributes onto animals or machines is termed anthropomorphism. An experiment performed by Fritz Heider and Marianne Simmel in 1944 showed that people will see human-like behaviour and intentions in something as minimal as this short animation of geometric figures.

A related point is that we may be able to distinguish different forms of deception, and that some of these may not be unethical. For example, anthropomorphism has been described as being “honest” where it exploits people’s tendency to view artefacts as social actors, and does so overtly and for their benefit, using anthropomorphic features to provide a more engaging or effective interaction (for example, to provide navigation instructions in a vehicle, or to promote the effectiveness of a therapy) (Kaminski et al., 2016). However, anthropomorphism can be seen as “dishonest” where it is used to deliberately misdirect attention or conceal a robot capability. For example, a robot might appear unable to see a person because its artificial eyes are closed, while continuing to observe them with a covert camera (Kaminski et al., 2016; Leong & Selinger, 2019).

Is this robot being deceptive? The Moxie robot, designed for use by young children, has a 2MP camera in the forehead that can still see when its animated eyes are closed. This could be considered an example of “dishonest” anthropomorphism.

John Danaher (2020a) has argued that some forms of honest anthropomorphism are not unethical even though they may be deceptive. Analysing different forms of deception employed by robots, Danaher describes an “ethical behaviourist” approach, according to which judgements about whether a robot’s anthropomorphic behaviour is permissible should be based on superficial observables—including the robot’s appearance, utterances and actions—and not on any presumptions about the presence or absence of human-equivalent robot inner states. This is termed “superficial state deception”. As Danaher puts it:

“According to ethical behaviourism, if a robot appears to have certain capacity (or intention or emotion) as a result of its superficial behaviour and appearances, then you are warranted (possibly mandated) in believing that this capacity is genuine. In other words, if a robot appears to love you, or care for you, or have certain intentions towards you, you ought, ceteris paribus, to respond as if this is genuinely the case. […] simulated feeling can be genuine feeling, not fake or dishonest feeling. Consequently, if ethical behaviourism is true, then superficial state deception is not, properly speaking, a form of deception at all.” (p. 122-3).

Danaher’s position can be likened to a strong version of the relational perspective (e.g. Damiano & Dumouchel, 2018): what matters is that the robot’s behaviour, over the duration of its interactions, is consistent with its social utterances and expressions. This is a stronger constraint than you might at first imagine, as explored further below.

Design Principles for Social Robots

Based on the above, and from a relational standpoint, we believe it should be possible to define design principles for ethical social robots. As an initial effort, we propose the following:

  1. Promote contextual integrity. This principle advocates co-design of robot social capabilities for the role that the robot will fulfil, alongside alignment of the robot’s behaviour and capabilities with expectations and norms. Helen Nissenbaum (2010) introduced the notion of “contextual integrity” in the context of a framework for the design of sociotechnical systems, applying it particularly to concerns around information privacy; however, the idea has broad generality. Its application to robotics has been discussed further by Margot Kaminski and colleagues (2016). The key idea is that the capabilities and behaviour of a robot should be judged in terms of their appropriateness to the context in which it is used. For example, if we encounter a social robot that is waiting tables in a restaurant, we might reasonably expect that it would enter the room unannounced, observe where people are sitting and approach them safely, monitor ongoing conversation and diner behaviour for an appropriate point at which to intercede with an offer of service, and so on. The same robot, but in a home setting, might be required to observe quite different social etiquette, for example, never entering certain rooms, asking before entering others, or not using cameras or microphones at certain times of the day, or in some situations, unless specifically directed to do so. (A minimal code sketch of such a context policy is given after this list.)
  2. Develop honest anthropomorphism. This principle requires that we evaluate the benefits and risks of anthropomorphic features and make decisions on their permissibility accordingly. “Superficial state deception” can be acceptable if consistent with expectations and norms; “hidden state deception”, such as where the robot conceals a covert feature that might violate contextual integrity, is unacceptable. Ethical behaviourism requires that the robot’s actions are consistent with its utterances. Thus, if a robot declares that it “cares about you a great deal and wants to be of help” then its subsequent behaviour should not be to avoid or ignore the user. Whilst it is easy to program a robot to make these kinds of supportive declarations, it is much more difficult to make its behaviour consistent with them. For instance, to be genuinely helpful, the robot must be able to recognise individuals consistently, perhaps remembering past encounters, and be able to monitor and anticipate the person’s needs, at least to some degree. Few, if any, social robots are capable of this level of helpful behaviour at present (Prescott et al., 2019). On ethical behaviourism grounds, we might consider the robot’s statement that it “cares” and “wants” to help as problematic to the extent that it raises expectations about its wider behaviour that cannot be met; however, a future, more care-capable robot might more reasonably make such statements. As a further example of honest anthropomorphism, we suggest that robots could track and recognise human emotions, and modulate their own emotional expressions to align with those of their human interlocutor (Robillard & Hoey, 2018). (A sketch of this utterance-capability consistency check also follows the list.)
  3. Clearly signal the robot’s capacities. The requirement to avoid hidden state deception suggests the importance of clear signalling. Here anthropomorphism can have some direct benefits: for example, if the robot’s only cameras are mounted forward-facing on its head, and can be covered by opaque eyelids, then closing the eyelids, or turning the head away, will be sufficient to communicate that the robot can no longer observe you. This is an intuitive and easy-to-read signal that matches our experience and expectations from interactions with people and pet animals. On the other hand, if the robot has other cameras in anthropomorphically unexpected places (e.g. a rear-facing camera on a humanoid), then their presence and use should be very clearly signalled—for example, it has become conventional for cameras on computers to illuminate a small pilot light when they are operating. Dynamic feedback—emitting signals when the context changes—is likely to be important. For example, a home robot might usefully signal a switch from standby mode to awake/monitoring mode to alert users that its sensors have become operational (see the signalling sketch after this list). Note that honest signalling is not the same as “transparency”, at least as that term has been used by Wortham and Theodorou (2017) and others to mean transparency about the internal processes of the robot that underlie its decision-making. Signalling is here intended to avoid hidden-state deception and is not about revealing the robot’s machine nature. Of course, if the robot is asked about its internal processes it should answer honestly (to the extent that it is capable), as to do otherwise would contravene broader principles around truth-telling and deception.
  4. Be especially careful when designing for vulnerable users and/or for “thick” relationships (i.e. longer-term interactions with deeper psychological involvement). In assessing the potential benefits and risks, the relational approach emphasises the need to consider the role of the robot within the wider network of the user’s interpersonal relationships. Social robots are currently developed and implemented in populations typically considered vulnerable, such as children with autism or with mental health conditions, and older adults living with dementia. These populations may be less able to make sophisticated judgments about meaning and intentions in social interactions. Ethical risks can be addressed through appropriate consent procedures involving family and carers, through monitoring, and through careful co-creation of robot capabilities so that these are aligned with the values of end-users. Where there is deeper psychological involvement there is more risk of harm, but also the potential of greater benefit from providing robots with a richer set of social capabilities.
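To make the first principle a little more concrete, the following is a minimal sketch of how a context policy might be encoded and consulted before a sensor is used. It is purely illustrative: the contexts, sensor names, and policy values are hypothetical and do not describe any particular robot platform.

```python
# Illustrative sketch of a "contextual integrity" policy check.
# All contexts, sensors and policy values here are hypothetical.
from dataclasses import dataclass, field
from typing import Optional, Set, Tuple

@dataclass
class ContextPolicy:
    """Norms the robot is expected to observe in a given setting."""
    may_enter_unannounced: bool
    permitted_sensors: Set[str] = field(default_factory=set)
    quiet_hours: Optional[Tuple[int, int]] = None  # (start_hour, end_hour): sensors off by default

POLICIES = {
    "restaurant_dining_room": ContextPolicy(
        may_enter_unannounced=True,
        permitted_sensors={"camera", "microphone"},
    ),
    "home_living_room": ContextPolicy(
        may_enter_unannounced=False,
        permitted_sensors={"camera", "microphone"},
        quiet_hours=(22, 7),
    ),
    "home_bedroom": ContextPolicy(
        may_enter_unannounced=False,
        permitted_sensors=set(),  # no recording here unless the user explicitly asks
    ),
}

def sensor_allowed(context: str, sensor: str, hour: int, user_directed: bool = False) -> bool:
    """True if using `sensor` in `context` at `hour` is consistent with the context policy."""
    policy = POLICIES[context]
    if user_directed:                      # an explicit user request overrides the defaults
        return True
    if sensor not in policy.permitted_sensors:
        return False
    if policy.quiet_hours is not None:
        start, end = policy.quiet_hours
        in_quiet = (hour >= start or hour < end) if start > end else (start <= hour < end)
        if in_quiet:
            return False
    return True

print(sensor_allowed("restaurant_dining_room", "camera", hour=20))  # True
print(sensor_allowed("home_living_room", "microphone", hour=23))    # False: quiet hours
print(sensor_allowed("home_bedroom", "camera", hour=15))            # False unless user_directed
```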
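The second principle asks that a robot's utterances stay consistent with what it can actually do. One simple way to approach this in software is to treat each supportive utterance as carrying implicit promises, and to allow it only when the robot has the backing capability. The sketch below is a hypothetical illustration of that idea; the capability flags and phrases are invented for the example.

```python
# Illustrative sketch: only permit "supportive" utterances whose implied promises
# the robot's actual capabilities can honour. Capability names are hypothetical.
CAPABILITIES = {
    "recognise_individuals": True,
    "recall_past_encounters": False,   # e.g. no persistent episodic memory yet
    "monitor_user_needs": False,
}

# Each candidate utterance lists the capabilities it implicitly promises.
UTTERANCES = {
    "Hello again, nice to see you!": {"recognise_individuals"},
    "I remember we talked about your garden last week.": {
        "recognise_individuals", "recall_past_encounters"},
    "I care about you and will check in on you every day.": {
        "recognise_individuals", "monitor_user_needs"},
}

def permissible_utterances(capabilities, utterances):
    """Keep only utterances whose implied promises the robot can actually keep."""
    return [text for text, promises in utterances.items()
            if all(capabilities.get(p, False) for p in promises)]

for line in permissible_utterances(CAPABILITIES, UTTERANCES):
    print(line)   # only the greeting survives with the capability set above
```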
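Finally, the third principle's call for dynamic feedback can be illustrated with a small state-change signaller: whenever the sensors switch between standby and operational, a user-facing cue (a pilot light, eyelids, an on-screen icon) is updated before anything else happens. Again, this is a sketch under assumed interfaces, not a description of any real robot's API.

```python
# Illustrative sketch of dynamic feedback on sensor state: the user-facing cue
# changes whenever sensing switches on or off. The interface is hypothetical.
class SensorStateSignaller:
    def __init__(self, show_cue):
        self._active = False
        self._show_cue = show_cue   # callback that drives an LED, eyelids, screen, etc.
        self._show_cue(False)       # start by displaying the "not observing" cue

    def set_active(self, active: bool) -> None:
        """Switch sensing on/off and update the user-facing cue on every change."""
        if active != self._active:
            self._active = active
            self._show_cue(active)

def led_cue(active: bool) -> None:
    print("pilot light ON - sensors operational" if active
          else "pilot light OFF - robot cannot observe you")

signaller = SensorStateSignaller(show_cue=led_cue)
signaller.set_active(True)    # waking from standby: cue changes before sensing starts
signaller.set_active(True)    # no change of state, so no redundant signal
signaller.set_active(False)   # back to standby: cue updated again
```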
The MiRo-e biomimetic robot has some similarities to animals but is intended to be sufficiently different from an animal so as to avoid possible confusion. The robot’s camera “eyes” face forward and, when the mechanical eyelids shut, the robot cannot see. MiRo’s “emotional” signals are generated by closed feedback loops and depend on drive states that promote social interaction; its control system design is published, and there is an interface that demonstrates how it operates.

Conclusion

In this paper we have sought to outline some considerations for the design of future social robots based on a relational ethics approach. We have sought to distinguish this from approaches predicated on a more essentialist (or substantialist) view that emphasises ontological differences between humans and machines. Some of the latter approaches have argued that sociality in robots is wrong in principle, and that anthropomorphic features such as the ability to convey emotional signals are deceptive. Against this, we have argued that sociality can be a desirable and valued capability and that anthropomorphic features should be evaluated according to their risks and benefits. Benefits include ease-of-use and intelligibility for people. For instance, in persons living with Alzheimer’s disease, there is evidence that emotional processing is more resistant to decline than cognitive processing (König et al., 2017). In seeking to eliminate aspects of interaction that carry emotional connotations, there is a risk of making otherwise beneficial technologies less engaging and thereby reducing adoption. More broadly, the relational approach emphasises the need to consider the social setting and relationship context in which a robot is deployed, and the alignment of its behaviour with prevailing norms. This argues for a pragmatic and inclusive approach to the design of assistive social robots that involves potential users and other stakeholders in evaluating when and how social capabilities and anthropomorphic features can be safely and beneficially deployed.

For more on this topic see this blog post and our iScience article Are Friends Electric: The Benefits and Risks of Human-robot Relationships.

Disclosure

I am a co-founder, director and shareholder of the company Consequential Robotics Ltd, that has developed the MiRo-e animal-like robot.

Acknowledgement

This research was supported by the Wellcome Trust through the “Imagining Technologies for Disability Futures” (ITDF) project.

Citation: Prescott, Tony J., and Julie M. Robillard. 2022. “Designing Assistive Robots: A Relational Approach.” In ICCHP-AAATE 2022 Open Access Compendium “Assistive Technology, Accessibility and (e)Inclusion”.

References

Boden, M., Bryson, J., Caldwell, D., Dautenhahn, K., Edwards, L., Kember, S., Newman, P., Parry, V., Pegman, G., Rodden, T., Sorrell, T., Wallis, M., Whitby, B., & Winfield, A. (2017). Principles of robotics: regulating robots in the real world. Connection Science, 29(2), 124-129. doi:10.1080/09540091.2016.1271400

Bryson, J. J. (2018). Patiency is not a virtue: the design of intelligent systems and systems of ethics. Ethics and Information Technology, 20(1), 15-26. doi:10.1007/s10676-018-9448-6

Coeckelbergh, M. (2010). Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology, 12(3), 209-221. doi:10.1007/s10676-010-9235-5

Damiano, L., & Dumouchel, P. (2018). Anthropomorphism in Human–Robot Co-evolution. Frontiers in Psychology, 9, 468.

Danaher, J. (2019). The philosophical case for robot friendship. Journal of Posthuman Studies, 3(1), 5-24. doi:10.5325/jpoststud.3.1.0005

Danaher, J. (2020a). Robot Betrayal: a guide to the ethics of robotic deception. Ethics and Information Technology, 22(2), 117-128. doi:10.1007/s10676-019-09520-3

Danaher, J. (2020b). Welcoming robots into the moral circle: A defence of ethical behaviourism. Science and Engineering Ethics, 26(4), 2023-2049. doi:10.1007/s11948-019-00119-x

De Jaegher, H., Di Paolo, E., & Gallagher, S. (2010). Can social interaction constitute social cognition? Trends in Cognitive Sciences, 14(10), 441-447. doi:10.1016/j.tics.2010.06.009

Emirbayer, M. (1997). Manifesto for a Relational Sociology. American Journal of Sociology, 103(2), 281-317. doi:10.1086/231209

Gunkel, D. J. (2018). The Relational Turn: Third Wave HCI and Phenomenology. In M. Filimowicz & V. Tzankova (Eds.), New Directions in Third Wave Human-Computer Interaction: Volume 1 – Technologies (pp. 11-24). Cham: Springer International Publishing.

Heider, F., & Simmel, M. (1944). An Experimental Study of Apparent Behavior. The American Journal of Psychology, 57(2), 243-259. doi:10.2307/1416950

Kabacińska, K., Prescott, T. J., & Robillard, J. M. (2020). Socially assistive robots as mental health interventions for children: A scoping review. International Journal of Social Robotics. doi:10.1007/s12369-020-00679-0

Kaminski, M. E., Rueben, M., Smart, W. D., & Grimm, C. M. (2016). Averting Robot Eyes. Maryland Law Review, 76(4), 983-1024.

König, A., Francis, L. E., Joshi, J., Robillard, J. M., & Hoey, J. (2017). Qualitative study of affective identities in dementia patients for the design of cognitive assistive technologies. Journal of Rehabilitation and Assistive Technologies Engineering, 4, 2055668316685038. doi:10.1177/2055668316685038

Leong, B., & Selinger, E. (2019). Robot Eyes Wide Shut: Understanding Dishonest Anthropomorphism. Paper presented at the Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA. https://doi.org/10.1145/3287560.3287591

Nissenbaum, H. (2010). Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford, USA: Stanford Law Books.

Prescott, T. J. (2017). Robots are not just tools. Connection Science, 29(2), 142-149. doi:10.1080/09540091.2017.1279125

Prescott, T. J., Camilleri, D., Martinez-Hernandez, U., Damianou, A., & Lawrence, N. D. (2019). Memory and mental time travel in humans and social robots. Philosophical Transactions of the Royal Society B: Biological Sciences, 374(1771), 20180025. doi:10.1098/rstb.2018.0025

Prescott, T. J., & Robillard, J. M. (2021). Are friends electric? The benefits and risks of human-robot relationships. iScience, 24(1). doi:10.1016/j.isci.2020.101993

Reeves, B., & Nass, C. I. (1996). The media equation: How people treat computers, television, and new media like real people and places. New York, NY, US: Cambridge University Press.

Robillard, J. M., Goldman, I. P., Prescott, T. J., & Michaud, F. (2020). Addressing the Ethics of Telepresence Applications Through End-User Engagement. Journal of Alzheimer’s Disease, Preprint, 1-4. doi:10.3233/JAD-200154

Robillard, J. M., & Hoey, J. (2018). Emotion and motivation in cognitive assistive technologies for dementia. Computer, 51(3), 24-34. doi:10.1109/MC.2018.1731059

Shim, J., & Arkin, R. C. (2016). Other-Oriented Robot Deception: How Can a Robot’s Deceptive Feedback Help Humans in HRI? Paper presented at Social Robotics, Cham, pp. 222-232.

Sorell, T., & Draper, H. (2017). Second thoughts about privacy, safety and deception. Connection Science, 29(3), 217-222.

Sparrow, R. (2002). The march of the robot dogs. Ethics and Information Technology, 4(4), 305-318. doi:10.1023/a:1021386708994

Turkle, S. (2017). Alone Together: Why We Expect More from Technology and Less from Each Other (3rd ed.). New York: Basic Books.

Wortham, R. H., & Theodorou, A. (2017). Robot transparency, trust and utility. Connection Science, 29(3), 242-248. doi:10.1080/09540091.2017.1313816
