Abstract
This paper supplies a computational model, via Logic Programming
(LP), of counterfactual reasoning of autonomous agents with application to morality.
Counterfactuals are conjectures about what would have happened, had an
alternative event occurred. The first contribution of the paper is showing how
counterfactual reasoning is modeled using LP, benefiting from LP abduction and
updating. The approach is inspired by Pearl's structural causal model of counterfactuals,
where causal direction and conditional reasoning are captured by inferential
arrows of rules in LP. Herein, LP abduction hypothesizes background conditions
from given evidence or observations, whereas LP updating frames these
background conditions as a counterfactual's context, and then imposes causal
interventions on the program through defeasible LP rules. In the second contribution,
counterfactuals are applied to agent morality, resorting to this LP-based
approach. We demonstrate its potential for specifying and querying moral issues,
by examining viewpoints on moral permissibility via classic moral principles and
examples taken from the literature. Application results were validated on a prototype
implementing the approach on top of an integrated LP abduction and updating
system supporting tabling.
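The three-step evaluation the abstract alludes to (abduce background conditions from the evidence, intervene on the counterfactual antecedent, predict the consequent) can be sketched in plain Prolog. The sketch below is illustrative only: the forest-fire scenario, the predicate names (`fire/1`, `explain_evidence/1`, `intervene_no_lightning/2`), and the list-based emulation of abduction and defeasible updating are assumptions of this example, not the paper's prototype, which instead builds on an integrated LP abduction and updating system with tabling.

```prolog
:- use_module(library(lists)).   % member/2, subtract/3

% Causal rules: a forest fire occurs if the leaves were dry and either
% lightning struck or a barbecue was lit.
fire(World) :- member(dry_leaves, World), member(lightning, World).
fire(World) :- member(dry_leaves, World), member(barbecue, World).

% Step 1 (abduction): hypothesize background conditions consistent with
% the evidence "a fire occurred and nobody lit a barbecue".
explain_evidence(World) :-
    subset_of([lightning, dry_leaves, barbecue], World),
    fire(World),
    \+ member(barbecue, World).

% Enumerate subsets of the abducible background conditions.
subset_of([], []).
subset_of([X|Xs], [X|Ys]) :- subset_of(Xs, Ys).
subset_of([_|Xs], Ys)     :- subset_of(Xs, Ys).

% Step 2 (intervention): impose the counterfactual antecedent
% "there was no lightning" on the abduced context.
intervene_no_lightning(World, Hypothetical) :-
    subtract(World, [lightning], Hypothetical).

% Step 3 (prediction): would the fire still have occurred?
fire_without_lightning(Holds) :-
    explain_evidence(World),
    intervene_no_lightning(World, Hypothetical),
    ( fire(Hypothetical) -> Holds = yes ; Holds = no ).
```

Querying `fire_without_lightning(Holds)` yields `Holds = no`: under the abduced context in which dry leaves and lightning explain the observed fire, the fire would not have occurred had there been no lightning.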
| Original language | English |
| --- | --- |
| Pages (from-to) | 25-53 |
| Journal | Applications of Formal Philosophy |
| DOIs | |
| Publication status | Published - 1 Dec 2017 |
Keywords
- abduction, counterfactual, logic programming, morality, non-monotonic reasoning