The Moral Responsibility of Complex Robots
Description
In the past several years, the long-standing debate over freedom and responsibility has been applied to artificial intelligence (AI). Some, such as Raul Hakli and Pekka Makela, argue that no matter how complex robotics becomes, it is impossible for any robot to become a morally responsible agent. Hakli and Makela assert that even if robots become complex enough to possess all the capacities required for moral responsibility, a robot's history of being programmed undermines its autonomy in a responsibility-undermining way. In this paper, I argue that a robot's history of being programmed does not undermine its autonomy in this way. I begin with an introduction to Hakli and Makela's argument, as well as to several case studies that will be used to illustrate my argument throughout the paper. I then show why Hakli and Makela's argument is a compelling case against robots being able to be morally responsible agents. Next, I reconstruct Hakli and Makela's argument and explain it thoroughly. I then present my counterargument and explain why it constitutes a counterexample to Hakli and Makela's position.
Date Created
The date the item was originally created (prior to any relationship with the ASU Digital Repositories).
2020-05
Agent
- Author (aut): Anderson, Troy David
- Thesis director: Khoury, Andrew
- Committee member: Watson, Jeffrey
- Contributor (ctb): Historical, Philosophical & Religious Studies
- Contributor (ctb): College of Integrative Sciences and Arts
- Contributor (ctb): Barrett, The Honors College