I like Paro. It is a robotic seal designed to support the emotional wellbeing of, for example, people with dementia. Wellbeing comes from nurturing something; hence, this robot is a care receiver more than a caregiver. Paro is an interactive robot pet, yet not a cat or a dog but something more uncommon and wild. Paro reacts to touch and voice and learns its name. Part of Paro’s charm lies in its unpredictable mannerisms and actions, which make it feel animal-like rather than a systematic machine. Paro has startled me with a sudden head toss, just as any unfamiliar animal would.
In November 2017 I had the privilege of meeting Dr Shibata, the inventor of Paro. At the International Conference on Social Robotics, Shibata walked us through many studies done with Paro. Some of the study examples made me think about the ethics and the deception that go along with robots and the elderly.
The debate over presenting robots as living, breathing entities is ongoing. People with dementia can’t necessarily tell the difference between a living pet and a furry robot, and covering this up is no less than deceiving the patient (Sharkey & Sharkey 2010). That is why I found the following dialogues troubling.
1) Nurse: “Aww, it loves you!”
(Patient petting Paro)
2) Therapist: “Look who wanted to see you!”
(Therapist offering Paro to a restless patient)
(Patient petting Paro and relaxing)
Patient: “Did it really want to see me?”
The second example is especially troubling: when the patient becomes suspicious of the veracity of the story, there should be a way to answer, instead of “Yes”, something like “Well, it is a robot”. Keeping up the deception even after the person directly asks about the truthfulness of the situation is exactly the deception and infantilisation the Sharkeys (2010) are concerned about.
In the first example, the patient was visibly delighted to pet Paro. In the second, the restless patient became considerably less anxious after being introduced to Paro. It gets tricky if these effects depend on deception. Should we accept the deceit if it improves a person’s wellbeing to such a degree that it actually reduces pain, anxiety and the need for medication?
Anthropomorphism varies culturally, by personality, and situationally (Epley et al. 2007), yet it is more or less in human nature to treat interactive robotic pets as living creatures. I have focused here on the dialogue between care staff and elderly patients. However, because people are so prone to anthropomorphism, we cannot prevent the deception inherent in social robots merely by choosing our words wisely. We would actually have to accept that these robot-borne benefits, such as less medication and fewer of its side effects, come sealed with a possible deception.
Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864.
Sharkey, A., & Sharkey, N. (2010). Granny and the robots: Ethical issues in robot care for the elderly. Ethics and Information Technology, 14(1), 27–40.