Friday, February 5, 2016

Robot friends? Ethical issues in social robotics

This week's post is by guest blogger Alexis Elder.

Relationships between robots and humans have fascinated filmmakers and storytellers for decades.
In Blade Runner, several human characters find themselves in relationships with replicants, androids so sophisticated that even they don’t always realize they aren’t human. On Star Trek: The Next Generation, Data is recognized as artificial by his crewmates, but accepted as a friend, which Data reciprocates in his own robotic way.

Today’s robots are a long way off from such complicated constructs. However, relatively simple but appealingly cute robots are fulfilling companionate roles, from the robotic seal Paro, who keeps senior citizens company in nursing homes, to NAO, a little humanoid robot that holds users’ hands and retrieves small items.

Deciding whether Rachel from Blade Runner could be a friend might require us to decide whether she’s a person, introducing thorny questions about what that involves. Data might just be an example of what it would take for a robot to be both a person and a friend.

But there is another character in Blade Runner whose situation more closely parallels ours: the eccentric inventor J.F. Sebastian.

When Pris, one of the replicants, encounters J.F., who lives in an abandoned building, she comments, “Must get lonely here, J.F.”

“Not really,” he replies. “I MAKE friends. They're toys. My friends are toys. I make them. It's a hobby.” And he does. His living space is populated by an assortment of creatures much closer to Paro or NAO than Pris or Rachel.

Is J.F. right? Are his toys his friends? What should we think of his claim that he isn’t lonely because he’s got them?

These questions aren’t merely speculative. Robots are being used in nursing homes and extended care facilities to alleviate patients’ loneliness, and research suggests that they are effective: patients report less subjective loneliness after interacting with them, and show fewer physical markers of stress.

But although I am a fan of using technology to improve our lives, I have a worry about these technologies, one that dates back to well before we started telling stories about robots. Is what they provide an improvement, on balance?

Grant that these robots can make people feel less lonely. May they, in doing so, introduce another problem?

To answer this, we need to think a bit about the value we place on social relationships versus the feelings they induce in us.

Aristotle claimed that “without friends no one would choose to live, though he had all other goods”. Even if he overstates the case a bit, what he seems to have meant is that, given the choice, we would opt for a life with friends over one that included all the other goods but no friends at all.

In that spirit, imagine being given a choice between two lives. You know at the time of the choice how the lives will differ. But once you begin your chosen life, you will forget – it will be as though things have always been this way.

In one option, the people you consider your friends are actors, although if this life were chosen you wouldn’t discover their illusory nature. These friend-facsimiles would not use the appearance of friendship to exploit you, or betray your confidence. But neither would they care for you or find pleasure in interacting with you. Call this the Truman Show option.

In the other life, your closest friends are exactly as they appear to you to be. Call this the Genuine option. It is my guess that most of us would prefer Genuine over Truman Show.

In Truman Show, “friends” provide the same external appearances as in Genuine. They do not cause the harms associated with “false friends”. And yet Truman Show is less choice-worthy than Genuine. It seems the best lives involve reciprocal caring between genuine agents - something today’s robots can’t pull off. (This does not mean it’s always bad to be alone. Some peace and quiet might also be important for the good life.)

Feelings of loneliness can be relieved in many ways, from taking Tylenol to a hot shower, without addressing social isolation. But when lonely patients are at risk for cognitive disorders, social-robotic interventions may be ethically bad. They work because they look and feel enough like companions that they hit the right emotional buttons, in populations that are already predisposed to confusion.

A good movie can hit one’s emotional buttons without being immoral. But social robots are different here, because lonely and compromised residents of long-term care facilities are not in a good position to distinguish the genuine article from a compelling facsimile – one that makes them feel like they’ve got a friend.

About deception and friendship, Aristotle said,
when a man has deceived himself and has thought he was being loved for his character, when the other person was doing nothing of the kind, he must blame himself; when he has been deceived by the pretences of the other person, it is just that he should complain against his deceiver; he will complain with more justice than one does against people who counterfeit the currency, inasmuch as the wrongdoing is concerned with something more valuable.
Causing lonely patients to think they have friends when they don’t makes us counterfeiters of something more important than money. This seems like something we ought to avoid.

To combat this, while taking advantage of the benefits such robots offer, several things will be important:
· Distinguishing treatment of patients’ subjective loneliness from their social isolation. (This is especially important when we must make good decisions on their behalf.) 
· Being aware of individual patients’ susceptibility to mistake robot “friends” for real ones. 
· Where possible, designing robots that are unlikely to fool people. (Paro is a good example of this – we rarely encounter seals in ordinary life. A realistic robot baby or child might more easily confuse geriatric patients.) 

Until robots are capable of real friendship, designing and using them wisely and well will require us to avoid manufacturing false friends.

Editor's note: In case you love this topic, today on The Splintered Mind, Eric Schwitzgebel writes on an overlapping theme: the problem of making fully conscious robots that will always cheerfully sacrifice themselves for humans.


Alexis Elder
Department of Philosophy & Women's Studies
Southern Connecticut State University

13 comments:

  1. Great topic Alexis! I have a concern about one of your arguments, and a concern about robotic care animals in rest homes that you might imply, but don't discuss in detail.

    I think the Truman Show example is unfair. When confronted with a choice between two things (including lives) that are identical except for one of the options having an extra something that might be seen as valuable, it would be irrational to choose the option without the bonus. To choose the Truman Show option, I would have to be 100% certain that actual friendship (as opposed to the perfect experience of friendship that happens to be false) does not have intrinsic value. 100% sure?! Even wannabe hedonists should surely recognize that it would be irrational to be 100% sure that hedonism is the true theory of welfare (the prudential good life). So, by setting up the thought experiment this way, you are stacking the deck in your favor - you are making it very difficult for the opponent of your view to win. I put it to you that any rational person should choose the genuine life because there is a more than 0% chance that genuine friendship has intrinsic value. Furthermore, an opponent of your view might bring up the interesting question of exactly how the life of the person in the Truman Show is made worse by something that they never experience. We have a negative intuitive response to the false friendship, but a person in a perfect Truman Show would never know about the falseness, nor would they experience anything different than the person with the genuine friendship. For all these reasons, I think progress on this interesting issue should be made using examples closer to real life, such as wondering what we would like if we were in a rest home.

    And, what I would wonder about is my aforementioned concern about robotic care animals in rest homes. These robots will improve the lives of the people in the rest homes, including making them less lonely. However, if real people visit less, or make less of an effort to connect with the people in the rest homes because they think the robots are taking care of everything, then that could be bad for most people in the rest home. I say most because I expect that a few people in the rest home will genuinely prefer the company of a caring robot to that of a caring person.

    Replies
    1. Dan, I don't quite understand what you are saying in your first paragraph. Alexis' example doesn't feel like a case of deck stacking to me, at least not in any unfair sense. If I offered you a choice between a dollar and a dollar with a chance to win something extra, am I stacking the deck? I'm just presenting you with a choice with a thoroughly dominated option. And I don't see how your counterformulation changes anything. She still wins, right? What am I missing?

    2. Thanks, Dan! I see my answer to your first and second concerns as related in the following way.

      It does seem to me that people who believe that nursing home residents are “taken care of” by (purely) robotic companions are making a mistake. (With the caveat that, as you note, some people may knowingly prefer the company of robots – an issue I’m bracketing for now, as my main focus here is on people who UNknowingly associate with companionate robots.)

      Then the question is, why?

      I posit that the apparent mistakenness of this assumption stems from the incorrect belief that appearance/reality distinctions don’t matter in sociality, and then use cases like Truman Show to illustrate that they DO matter to us. While a clever hedonist might be able to construct a justification along the lines you suggest to otherwise explain people’s preference for genuine friendships, I do not think this is enough to render the thought experiment without value.

      If we take our intuitions about cases (whether realistic ones, like what we’d want in a rest home, or more outlandish ones, like I propose) to function like observations to be explained, then we can concoct a variety of competing explanations to account for them (like competing hypotheses in the sciences). But this is just the first step in running an inference to the best explanation. We can then compare the explanations in terms of explanatory breadth, depth, parsimony, fruitfulness, etc.

      While there wasn’t space in the original post to go into much detail about this, my position is that intuitions about Truman Show are consistent with a wide range of intuitions about the things we value about relationships (such as the importance of genuineness of friends’ motivations – see Stocker on the schizophrenia of modern moral theories, and Williams on the problem of “one thought too many” when friends act in each other’s interests). Basically, the hedonist has a tough row to hoe when it comes to accounting for many of the ways that we value friendship, and even if explanations can be constructed that cover the wider range, parsimony may not count in their favor – the more complex the explanation (risk-aversion in case hedonism, their preferred theory, turns out to be false?), the less well-positioned it is relative to the straightforward (we care that people care about us, especially the people we care about).

      I agree that more down-to-earth cases are helpful in reasoning about these issues, as well, but I don’t see that as being in competition with more abstract examples.

  2. Alexis,
    Thanks for this thoughtful and thought-provoking piece. Anything that lets me talk about Blade Runner while on the job deserves additional praise!

    So, with that in mind, a couple of comments which are more like questions. First, is your concern about the potential for the formation of false friendships among the elderly and otherwise institutionally vulnerable more a result of mistaking the comparison condition than a problem that results from these potential friends being robots? Second, does it matter (I think it might) whether a robot/android like Rachel knows what she is or not for her efforts and potential success at being a friend to count as genuine?

    Regarding the first question. Your worry about robots being introduced to remedy social isolation among institutionalized vulnerable populations stems from their being harmful, in an Aristotelian sense, because the relationship formed would not be genuine friendship. Today's robots are not capable of reciprocal caring, which you identify as the heart of the value of genuine friendship, so believing they are our friend is a mistake which risks harming those who make it. But it seems only to be a mistake when compared to the genuine alternative. Perhaps the better comparison, the actual condition most institutionalized people find themselves in, is not a choice between the Truman Show and Genuine, but between the Truman Show and nothing. If I recall, Aristotle allows for friendship of a kind between non-equals (men and women, in his example), between those who derive an instrumental benefit from the friendship, and between equals. Might the introduction of robotic carers not offer an opportunity for friendship of the first and second sort, even if it cannot attain the last?

    To my second question, in Blade Runner, Rachel (potentially also Deckard, as many believe is implied in the director's cut) is different from Pris in that she truly seems not to know that she is not human, not a person, with the past she remembers and the friendships and relationships in which she finds herself. She cannot act other than genuinely; there is no other "self" or motive or inclination she can identify other than herself as she knows herself to be. We have the advantage of knowing she is also a Replicant, a very nearly perfect one. But this external frame of reference for assessing whether she is capable of genuine friendship -- and then answering no, because she is not a person but a robot -- seems to mistake the relevant frame of reference. From her point of view, she can only be as she understands herself to be. Her experience feels to her like it is her, arising from her own inclinations and motivations. Her relationships, insofar as they are indistinguishable, from her point of view, from those of a person, seem to me to be grounds to take her friendship as genuine. That we cannot see her friendship as genuine because we see her as a Replicant and therefore as a non-person is a problem. But isn't that our problem? It seems to me, if it tells us anything, it is those who see her as incapable of genuine experience, and therewith genuine friendship, who are incapable of being genuine with her, of reciprocating her friendship. But that shouldn't necessarily be troubling, since it seems within the realm of ordinary human experience that we find ourselves often incapable of reciprocating efforts at friendship with others. I'm pretty sure I could not be Donald Trump's friend, even if he tried really hard to persuade me he was genuine, and even if, from everything he knows about himself, he is being genuine. But I still couldn't be his friend -- he would be deemed by me non-friend-worthy. But that would be a problem with me, not with him: my perception of his capacity for genuine friendship, my rejection of a relationship with him.

    I'd much rather be friends with Rachel.

    Replies
    1. Thanks, Christina!

      To your first question: I agree that getting the relevant comparisons right is crucial. I even think that there may be cases where harmful deception might be the best choice, out of a range of bad options. (Including in some long-term care facilities, where actual caregivers may be facing a choice between “social isolation plus loneliness, no deception” vs. “social isolation but no loneliness, via deception” and the overall health benefits of rectifying loneliness might weigh in favor of the latter.) But what I aim to do here is make it clear that rectifying loneliness via (what I construe as) a harmful deception is a moral cost, and even when it’s the best thing to do, we ought to recognize the associated costs, and mitigate them when and where we can (as for instance by some of the design considerations I sketch above). Certainly, we shouldn’t (as Dan put it) think the robots have “got things covered” so long as robots are keeping patients company.

      In this sense, I’m trying to give an account of a particular kind of cost, rather than an all-things-considered verdict on whether deception ought to be deployed in any particular case.

      It is really helpful to put this issue in terms of instrumental vs. intrinsic friendship, via the Aristotelian schema you suggest, and yes, I think a case could be made that (today’s) robots could be instrumental friends, even though not friends of virtue. (Although there’s a Hursthouse paper on instrumental friendships that makes me reluctant to endorse this without some further thought on the matter.) But Aristotle has a fair bit to say about the moral costs of misrepresenting one kind of friendship as another, particularly trying to pass off an instrumental friendship as an intrinsic one, so I think the worry still holds.

      To your second point: I think the way you’ve set up Rachel, as a creature with experiences, motivations, inclinations, and a point of view (on herself and others), might just be enough to convince me that she COULD be a genuine friend, BECAUSE she’s a person (even if not a human). At that point, I agree that if someone can’t be friends with her (even if she’s interested) because that person is bothered by her origins, then that’s their problem.

  3. Great post. Thanks Alexis. I think the purpose of the Truman vs. Genuine Friends example is to show that given the choice, we'd prefer one over the other. But it's not clear to me that our preferring one over the other carries any philosophical weight. We have lots of preferences that are mistaken, false, or harmful.

    I think we are all legitimately interested in having relationships where the feelings are actually reciprocated, but I'm not convinced yet that machines that give the perfect appearance of emotions would be bad for us. If, as you suggest, some actual harm could be done to these elderly patients because they are confused or they acquire cognitive disorders as a result, then, yes, we'd have something new to fold into the equation. But just speculating, my guess is that the machines could do a lot more good overall than the harm that would otherwise come to these patients from isolation and loneliness. Visits from dogs and petting cats have also been shown to be good for them. And I bet even caring for plants has some benefit, and there's no reciprocation at all there. So why not robots?

    Replies
    1. In fact, caring for plants has been shown to have benefits. Matt, I don't think you intended this, but the way you said it made me think: Why not caring for robots? This is basically what people are doing in FarmVille, right? Except they aren't robots, but simulations.

    2. Thanks, Matt!

      While I agree that we do have many preferences that are mistaken, I disagree that a widespread preference is thereby without philosophical weight. It is insufficient for delivering a deductive argument, but can, for example, do important work in an abductive argument/inference to the best explanation. As I indicated in my response to Dan, intuitions about Truman Show and intuitions about the kinds of motivations we want our friends to have, the importance of reciprocity, etc. can all be straightforwardly explained by the posit that genuine, reciprocal friendships are valuable to us, and while other explanations can be constructed, there remains much work to be done to show why those explanations would be preferable to the simple, straightforward one that accords with appearances. (It seems a theoretical cost for a theory to give up on the apparent value of genuine friendship.)

      I agree that many non-reciprocal experiences can also be valuable, and am in fact in favor of including robots as components of a range of enrichment experiences for caregivers to potentially make available to their charges. My concern, as you note, is specifically with the potential for deception about friendship, because friendship seems to be a distinctive component of the good life about which it would be bad to mislead people.

      To play with the analogy between false friends and false coinage a bit, there’s nothing inherently wrong with, for instance, Monopoly money, but if one starts using it to deceive cognitively compromised patients into believing they’re getting paid, that seems like a bad thing to do even if they never are in a position to try to spend their earnings. (That’s probably my character-theoretic background coming through – it’s bad to be deceptive, even if the deceived never catches on!) In conjunction with the thought that patients who are deceived about the reality of their friendships are worse off than they think they are, we still end up with reason to exercise care in building and deploying social robots on vulnerable populations.

  4. Cool post, Alexis. I'm inclined to agree with much of what has been said above. It would certainly be a bad move to simply replace Mary, who cares to degree X, with robot Mary, who perfectly simulates caring to degree X and nothing else. But that doesn't really seem to me to map onto a real choice. For example, if Mary replaced herself with robot Mary, either forever or for specific periods of time, then she would presumably derive some extra benefit. And because the units of intrinsic value seem not to be fungible, particularly into hedons, we have no idea how much extra benefit Mary would need to get to make it a good deal all the way around. Perhaps very little.

    I'm also inclined to think that we are already dealing with this problem with pets, specifically with cat owners. Cat owners seem completely unaware that cats are robots. And far be it from me to tell them.

    Replies
    1. Thanks, Randy! It seems to me that you’re asking, “OK, now what?” because you’re on board with my basic point. To that, I’m afraid, I have a rather unsatisfying answer: it depends! It certainly won’t yield (as you suggest) to a straightforward hedonic calculus, because I’m denying that anything like hedons are the relevant things to measure.

      In fact, I think we should be suspicious of attempts to look for algorithmic solutions to questions about how to provide good lives for long-term care patients in the face of practical constraints on resources. I suspect good answers will be highly sensitive to a range of contextual factors. What I’m trying to do here is specify one such factor (deceptiveness about friendship), and distinguish it from another related concern (alleviating loneliness).

      Although this doesn’t answer the question “but what should I do about MY grandma?” it does point to at least one practical principle for future ethical deliberating: where possible, minimize deceptiveness about friendship (among other things, via some of the design-and-deployment considerations I’ve sketched). This can be helpful even if we don’t have precise guidelines for determining when other factors outweigh this consideration.

      (And as a cat owner, I will do my best to remain blissfully ignorant... although I don't think that's consistent with my position here, whoops.)

    2. Thanks, Alexis, that's a satisfying reply. One of my favorite roll call questions to ask students is: How much money would you require to break an inconsequential promise to a friend? There are always plenty of people who would do it for a relatively small sum, and lots of people who say they would not do it for any amount of money (and that they would never want to be friends with people who would). I've always thought this reveals something important about the different ways that people think about the nature of friendship. I wonder if the promise breakers would also be more easily satisfied with robot friends.

  5. Good points, Randy. But my cats tell me that your last assertion is false.

  6. Interesting post! I also appreciate the Blade Runner references, as many of your commenters do. I wonder if your example of people in nursing homes using Paro has to be taken as an example of a deceptive friendship. You may well be right that friendship has some particularly laudable qualities that are not met with (insufficiently human) robots, but is that necessarily what's being simulated with Paro? Paro is designed to look like and act like a cute baby harp seal. Some people in the nursing home respond to Paro by cuddling with it, brushing its fur, etc. It seems to me that they might be responding to Paro as a simulated pet, rather than a friend. It's even possible that they are responding to it as a particularly engaging, cuddly, stuffed toy. The benefits of friendship may well, as you argue, depend in part on the degree to which both parties feel the friendship, but presumably the benefits of having a doll or stuffed animal to hold when you're sad or in pain do not depend on reciprocity, and pets are a border case between the two (especially depending on the kind of pet; dogs are more reciprocal of your feelings than cats, and most cats are more reciprocal than fish). The beneficial good feelings one has toward a toy or pet may mitigate feelings of loneliness, but so may seeing a beautiful sunset while hiking by yourself. Neither need necessarily be a simulation of the feeling of togetherness with another person. If this is right, then the problem Sebastian has in Blade Runner is that he's trying to replace friendship with a different positive feeling, or is confused about the difference between the two. I may be wrong about some of this; I know you've put more thought into friendship than I, so I'd appreciate your thoughts. Thanks again for the post!
