Are friends electric? The benefits and risks of human-robot relationships

Authors: Professor Tony Prescott (University of Sheffield) and Dr Julie Robillard (University of British Columbia)

Read the original source and full article here.

Social robots that can interact and communicate with people are growing in popularity for use at home and in customer service, education, and healthcare settings. Although growing evidence suggests that co-operative and emotionally aligned social robots could benefit users across the lifespan, controversy continues about their ethical implications and potential harms. In this condensed post, Professor Prescott and Dr Robillard highlight the benefits and risks of human-robot relationships.

Potential benefits

Across the lifespan, research suggests positive impacts of social robots across five overlapping dimensions: (1) physical comfort; (2) emotional comfort; (3) direct social interaction; (4) scaffolding of social interactions with others; and (5) behavior modeling. Interventions, such as using social robots as therapeutic tools, could harness one or more of these dimensions, and longer-term relationships, such as the use of companion robots at home or in care, could also sustain multiple areas of benefit.


Examples of humanoid and animaloid social robots

(A) Sophia (Hanson Robotics), (B) Nao (Softbank), (C) Pepper (Softbank), (D) Paro (Paro Robotics), (E) Aibo fourth generation (Sony Corporation), (F) MiRo-e (Consequential Robotics).

Credits: (A, B, D, and F) Tony Prescott; (C) The University of Sheffield; (E) Paul Killeen.

The role of social robots in providing physical comfort has been demonstrated in studies comparing social robot interventions with controls using a tablet-based avatar of the same robot. Results consistently show greater engagement and more positive affect in the embodied intervention (Li, 2015), both in pediatric populations (Logan et al., 2019) and in older adults (Mann et al., 2015). As one example, in a randomized pilot trial, Logan et al. (2019) studied the responses of 54 children to one of three conditions: (1) a tele-operated bear robot, (2) an avatar version of the robot displayed on a tablet, and (3) a static plush bear. Children in the robot condition expressed greater joyfulness and agreeableness than those in the other two conditions. In a review of thirty-eight experimental studies comparing co-present robots, telepresent robots, and virtual agents, Li (2015) found that robots had greater influence on participants when physically present and elicited more favorable responses than other agents. Barber et al. (2020) studied children’s free play with the animal-like robot MiRo-e, comparing it with interactions with a living therapy dog. Children engaged in social touch with both the dog and the robot but, overall, spent more time interacting with the robot. Emerging work on affective touch in human-robot interaction also supports the value of physical contact with artificial companions in providing comfort (Flagg and MacLean, 2013; Sefidgar et al., 2016; Kerruish, 2017; Krichmar and Chou, 2018).

To the extent that robots can act as companions, they could plausibly act to reduce social isolation and the experience of loneliness (Gulrez et al., 2015). A study of the use of the Sony Aibo robot dog in a residential care home found a positive impact on the experience of loneliness similar to that generated by interaction with a real dog (Banks et al., 2008). Recent work on social robots as interventions for mental health also indicates the potential for the affective components of human-robot interaction to generate emotional comfort and to scaffold feelings of self-worth (Ostrowski et al., 2019; Kabacińska et al., 2020). The effectiveness of robots as social companions can also be improved by adapting their cognitive architectures and capabilities to suit specific populations, such as people living with dementia (Perugia et al., 2020).

By definition, social robots support communication and interaction and can be used to support social behaviors both between the user and the robot (e.g., companionship) and by acting as catalysts, or scaffolds, for human-human interaction. As one example of the latter, Ostrowski et al. (2019) used a participatory, mixed methods approach to study robots as tools for human connectedness in an older adult community, finding that robots prompted conversations between residents and drew them into the community space. The Paro robot has also been found to encourage group interaction between adults with dementia (Marti et al., 2006; Shibata and Wada, 2011).

In a systematic review, Kachouie et al. (2017) analyzed ninety-five studies investigating the use of social robots with older people and rated their outcomes against five constructs related to human well-being defined by the PERMA (positive emotion, engagement, relationships, meaning, and achievement) framework (Forgeard et al., 2011). This review found that most studies reported that social robots have the potential to improve positive emotions (such as peace, satisfaction, hope, love, security, and calm). Nine studies reported an impact on relationships, including an increase in social interactions, networks, and ties (three studies), a decrease in loneliness (two studies), and facilitation of friendly interactions with peers (three studies). A more focused review of randomized controlled studies (Pu et al., 2018) found that social robots could improve quality of life for older adults, with impacts on agitation, anxiety, engagement, stress, loneliness, and medication use; however, meta-analysis showed a lack of robust cross-study effects. Both reviews commented on the need for additional and more rigorous studies.

One area in which social robots have demonstrated benefits is in behavior modeling, that is, in encouraging behaviors that promote well-being. A particular area where this application has proven useful for pediatric, adult, and older adult populations is rehabilitation therapy. In this context, social robots can be used to promote engagement with self-directed exercises during (Kozyavkin et al., 2014) and between therapy sessions (Winkle et al., 2018), as well as to demonstrate specific exercises in ways that are customized to the user and the course of treatment. Social robots can also model other types of healthy behaviors as well as activities of daily living, such as taking medication or making a cup of tea (Shishehgar et al., 2018). Social robots have been widely trialed as an intervention to scaffold social skills in children with autism spectrum disorder (ASD) (Cabibihan et al., 2013), including training in imitation, eye contact, turn-taking and self-initiation, and learning of context-appropriate social behavior.

The magnitude of benefit experienced from social robot therapeutic interventions, which integrate one or more of these dimensions, depends, in part, on attitudes and beliefs toward robots. Factors such as trust and acceptance, in combination with variables such as age, gender, culture, and prior robot exposure, are important influences in the adoption and sustained use of robot technology (Wortham and Theodorou, 2017; Langer et al., 2019; Naneva et al., 2020).

Potential risks

The important potential benefits of social robots must be weighed against the risks they pose and evidence about the harms they could cause.

A number of commentators have argued that, because robots are designed machines, it is ethically risky, if not altogether wrong, to encourage people to treat them as social, because only other living things (principally humans and some animals) are capable of being truly social (at least for the foreseeable future). Critics include Dennett, who has accused manufacturers of social robots of “false advertising” in designing robots to trigger overtly social and emotional responses in people (Dennett, 2017) (see Sharkey and Sharkey (2020) for a similar view); Sparrow and Sparrow (2002, 2006), who have described sociality in robots as intrinsically deceptive; Elder (2016), who describes relationships with robots as counterfeit; and Bryson (2010a, 2018), who has argued that forming social bonds with robots risks creating a moral obligation toward them that goes against the best interests of human well-being.

There are a number of issues with such positions. First, other technologies, and even simple objects such as cuddly toys and dolls, are designed to elicit emotional and social engagement without undue ethical worry. Second, we are able to suspend disbelief when watching theater, TV, or film, and do not take issue with the deceptive behavior of actors in representing themselves as someone or something different from their intrinsic nature. This speaks to our sophistication as social beings and our ability to flexibly adopt different stances and to switch between them: for example, to alternately, or even simultaneously, see a robot as both an intentional agent and a designed machine, or to see a robot as intentional and social but not as having phenomenal experience or moral patiency. Third, as noted earlier, increasing evidence indicates that people are willing to invest emotionally in robots, at least to some degree, and that they are already doing so with devices such as robot cleaners and pets. Social tolerance and the need to avoid stigmatization suggest that such sentiments should be respected (Danaher, 2019). Indeed, our human capacity to be concerned for things that are unable to reciprocate our concern is perhaps something to celebrate rather than to criticize (Brown, 2015).

Against the view that robots can never qualify as social entities, a relational or transactional approach would consider that what matters is not so much the category membership of robots but the patterns and consequences of social interaction between humans and robots (Coeckelbergh, 2010b; Gunkel, 2012, 2018; Danaher, 2019, 2020). This view aligns with the movement away from essentialist notions of identity (Haraway, 1991; Mischel and Shoda, 1995) and the broader relational turn in social science (e.g., Emirbayer, 1997) that sees the units (e.g., humans and robots) involved in a transaction as deriving “their meaning, significance, and identity from the (changing) functional roles they play within that transaction. The latter, seen as a dynamic, unfolding process, becomes the primary unit of analysis rather than the constituent elements themselves” (Emirbayer, 1997: p. 287). According to this systems view, inequalities, and ethical harms more broadly, derive from the unfolding relations between individuals or groups, in which essentialist attributions (for instance, stereotypes) are often part of the problem.

From this perspective, then, the more pressing ethical questions concern the balance of benefits and harms that can arise from allowing robots that people are willing to recognize as social to enter our lives. The list of potential risks and harms is still long; rather than attempt to be comprehensive, we focus here on those concerning socioemotional factors: specifically, human dignity; the potential for a reduction in, or loss of, human contact as a result of social robot use; and the broader emotional impacts of social robots.

The relationship between social robots and human dignity has been most studied in the context of robot care for older adults. At one end of the spectrum, some argue that such relationships are entirely permissible, viewing the robot as an assistive technology similar to others such as smart home systems or intelligent wheelchairs. At the other end of the spectrum, and as noted previously, some argue that social robots are inherently an affront to human dignity, as they are intrinsically deceptive and intended to replace human contact. Central to this debate, and critically missing, is a unifying definition of “human dignity.” Depending on context, the word dignity has been framed as a medical term, as an inherent component of human rights, and as an achievable virtue (Sharkey, 2014). Fears that social robots, for instance, as carers or companions to older adults, would reduce human dignity can be countered by evidence of mistreatment and disturbing care of older adults by fellow humans (Sharkey, 2014); in other words, there is a balance of harms to be considered. In an attempt to tackle this debate early in the social robot development process, there have been calls for the integration of human dignity as a key principle for the design and governance of social robots (Sharkey, 2014; Zardiashvili and Fosch-Villaronga, 2020).

Critics have also argued that forming relationships with robots could damage our capacity to socialize with human others, for instance, by undermining our capacity for secure attachment (Sharkey and Sharkey, 2010), by reducing our desire to engage in human-human relationships (preferring the ease, convenience, and non-challenging nature of artificial companionship) (Turkle, 2017), or by usurping our time and capacity for emotional investment (Bryson, 2018). Each of these threats deserves consideration.

Attention to how and where relationships with robots are emerging, and the extent to which they are displacing human-human relationships, is important. As noted earlier, human-robot relationships have the potential to be extremely diverse and to include forms of relationship that do not fit into any pre-existing class. The risks are likely to vary between these different settings, and a clearer taxonomy and analysis of human-robot relationships, building on insights from human relationship science, could help. For example, the study of human relationships demonstrates that close association over a period of time can lead to deeper bonds, pointing to the possibility of greater risks (but potentially also benefits) in long-term associations with robots.

Some relationships are clearly more significant for our social development and general well-being than others. Such considerations should drive caution about the use of robots with children, for instance, where they might overlap with roles traditionally performed by primary caregivers. Nanny robots present a potential risk in this regard, as highlighted by Sharkey and Sharkey (2010, 2020). On the other hand, robot dolls or pets for children can scaffold learning, promote positive behaviors such as care-giving, and provide forms of social contact that might otherwise be absent from children's lives. More broadly, worries that we exhaust our emotional capital on unfeeling artifacts, making us less able or willing to care for or befriend one another, should be set against the emerging evidence that social robots can support the acquisition of social skills, act as catalysts for forming relationships with other people, and bolster feelings of self-worth that could encourage relationship seeking.

It is worth noting the use of “slippery slope” arguments in the rhetoric surrounding some of these societal concerns. For example, worries about the use of social robots limiting access to human contact (Sparrow and Sparrow, 2006; Sharkey and Sharkey, 2012b), and the resulting psychological damage, are often predicated on supposing inappropriate and excessive use of robots in, for example, child or eldercare settings, where robots could be imagined as replacing interpersonal contact largely or entirely. In order to assess such risks, we need to identify the causal chains whereby the introduction of social robots would lead to these worst-case outcomes. With respect to robot nannies, for example, Bryson (2010b), reviewing a range of risks and defeaters (such as legal liability), found the use of social robots in childcare to be “no greater danger than other artifacts and child-care practices already present in our society” (Bryson, 2010b, p. 196). This is not to dismiss the risk but to recognize that challenges such as addiction, over-dependence, and their knock-on effects on our human-human relationships are threats that social robots share with other aspects of our increasingly digitally engaged lives, from streaming services to social media to smartphones (Turkle, 2017). The impacts of our future relationships with robots therefore need to be considered alongside study of the broader pattern of changes to human social connectedness brought about by new technologies.

Looking to the future

The task of gathering empirical and theoretical evidence on the role and impact of social robots can be a fairly siloed endeavor. Engineers and computer scientists develop and refine hardware and software components, advance the integration of artificial intelligence in social robots, and measure the effectiveness of human-robot interaction, among other goals. Philosophers, and other humanities scholars, explore the nature and morality of human-robot relationships in relation to broader questions about the human condition. Social psychologists, social ecologists, and relationship scientists study the dynamics of relationships, and of networks of social connection, and examine their impact on quality of life including the experience of loneliness. Technology ethicists address questions related to end-user acceptance and ethical issues such as the implications for dignity, privacy, and autonomy.


A strategy for investigating the ethical and societal impacts of social robots.

Credits: Tony Prescott.

Although each of these lines of inquiry is essential to moving forward, a more transdisciplinary approach, one that bridges perspectives and methodologies, could allow for an in-depth understanding of relationships with social robots and their potential to improve or harm human lives. This approach could also meaningfully engage diverse stakeholders at earlier stages of prototype and product development. The potential of co-creation methods for social robotics has been demonstrated in various settings, including with children (Huijnen et al., 2017; Vallès-Peris et al., 2018) and older adults (Leong and Johnston, 2016; Lee et al., 2017; Robillard and Kabacińska, 2020). Incorporating the needs, priorities, and values of potential users, their families, and other stakeholders (e.g., care services) can address key ethical issues while increasing acceptability and adoption (Robillard et al., 2018).

The important challenges that arise in weighing the benefits and risks of relationships between humans and social robots have so far translated into a lack of effective regulation and governance in this sphere. Research can play a key role in shaping both policy and practice. The challenges include the need to (1) consider the positive and negative potential impacts of social robots, (2) identify which potential outcomes are plausible, and (3) develop strategies to promote positive impacts and discourage negative ones. Addressing these lines of inquiry, using a transdisciplinary approach that purposefully engages with wider society, will be critical to moving the field forward.

Read the full article here.