The Moral Responsibility of Machines – Creating an Artificial Moral Agent
The University of Manchester
Dr. Ann Whittle
Prof. Bijan Parsia
Prof. Fraser Macbride
I am conducting interdisciplinary research into the moral status and responsibility of artificial intelligence. I first consider how various duties might come to be placed on artificial agents, either via explicit agreements or through the social roles they occupy in our community. I argue that the notion of prospective responsibility can be applied to artificial agents using deontic logic translations of the obligations and duties we want to place on agents in virtue of their roles. I discuss how, using these translations, artificial agents might respond when faced with problems such as moral conflicts between two or more duties they have taken on.
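To illustrate the kind of translation at issue, the duties attached to an agent's role might be rendered in standard deontic logic (SDL), where O is the obligation operator. The formulas below are a hypothetical sketch of how a conflict between two role duties could arise, not an example drawn from the research itself:

```latex
% Hypothetical role duties for an artificial care agent, expressed in
% standard deontic logic (SDL); O is the obligation operator.
O(a)            % duty 1: assist the patient (a)
O(p)            % duty 2: protect the patient's privacy (p)
% SDL validates the aggregation of obligations:
O(a) \wedge O(p) \rightarrow O(a \wedge p)
% If, in the circumstances, the agent can assist only by breaching
% privacy (so that a \wedge p cannot jointly be satisfied), the agent
% holds an aggregate obligation it cannot fulfil: a moral conflict.
```

On this sketch, a conflict appears formally as an obligation whose content is unsatisfiable in the situation at hand, which is one way of making precise what it means for two role duties to clash.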
The second question I focus on is what happens if an artificial agent, having been ascribed certain duties, subsequently violates one of them. This can be framed as the standard problem of responsibility gaps: if an agent acts autonomously and does something wrong (accidentally or otherwise), who is to blame? I assume a compatibilist account of moral responsibility and consider whether notions of retrospective responsibility, such as accountability, attributability, and answerability, are appropriate when discussing artificial wrongdoing.