Robot lies in health care: When is deception morally permissible?

Document Type

Journal article

Source Publication

Kennedy Institute of Ethics Journal

Publication Date

June 1, 2015

Volume

25

Issue

2

First Page

169

Last Page

192

Abstract

Autonomous robots increasingly interact with users who have limited knowledge of robotics and are likely to hold an erroneous mental model of the robot's workings, capabilities, and internal structure. The robot's real capabilities may diverge from this mental model to the extent that one might accuse the robot's manufacturer of deceiving the user, especially where the user naturally tends to ascribe exaggerated capabilities to the machine (e.g., conversational systems in elder-care contexts, or toy robots in child care). This raises the question of whether misleading or even actively deceiving the user of an autonomous artifact about the machine's capabilities is morally bad, and why. By analyzing trust, autonomy, and the erosion of trust in communicative acts as consequences of deceptive robot behavior, we formulate four criteria that must be fulfilled for robot deception to be morally permissible, and in some cases even morally indicated.

DOI

10.1353/ken.2015.0007

Print ISSN

1054-6863

E-ISSN

1086-3249

Publisher Statement

Copyright © 2015 Johns Hopkins University Press.

Access to external full text or publisher's version may require subscription.

Full-text Version

Publisher’s Version

Language

English

Recommended Citation

Matthias, A. (2015). Robot lies in health care: When is deception morally permissible? Kennedy Institute of Ethics Journal, 25(2), 169-192. https://doi.org/10.1353/ken.2015.0007
