The Belief-Desire-Intention (BDI) architecture is a practical approach for modelling large-scale intelligent systems. In the BDI setting, a complex system is represented as a network of interacting agents (or components), each modelled in terms of its beliefs, desires and intentions. However, current BDI implementations are not well-suited for modelling more realistic intelligent systems that operate in environments pervaded by different types of uncertainty. Furthermore, existing approaches for dealing with uncertainty typically do not offer syntactic or tractable ways of reasoning about it. This complicates their integration with BDI implementations, which rely heavily on fast and reactive decisions. In this paper, we advance the state of the art in handling different types of uncertainty in BDI agents. The contributions of this paper are, first, a new way of modelling the beliefs of an agent as a set of epistemic states. Each epistemic state can use a distinct underlying uncertainty theory and revision strategy, and commensurability between epistemic states is achieved through a stratification approach. Second, we present a novel syntactic approach to revising beliefs given unreliable input. We prove that this syntactic approach agrees with the semantic definition, and we identify expressive fragments that are particularly useful for resource-bounded agents. Third, we introduce full operational semantics that extend CAN, a popular semantics for BDI agents, to establish how reasoning about uncertainty can be tightly integrated into the BDI framework. Fourth, we provide comprehensive experimental results to highlight the usefulness and feasibility of our approach, and we explain how the generic epistemic state can be instantiated into various concrete representations.
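To make the first contribution concrete, the sketch below illustrates one plausible reading of it: an agent's beliefs as a stratified collection of epistemic states, where one state is instantiated with Spohn-style ranking functions (an ordinal conditional function) and revised by weighted, possibly unreliable input. All names here (`EpistemicState`, `OrdinalState`, `BeliefBase`) are hypothetical, and the simple priority-ordered query rule stands in for, but does not reproduce, the paper's stratification-based commensurability.

```python
from abc import ABC, abstractmethod

class EpistemicState(ABC):
    """A generic epistemic state: any uncertainty representation that
    can answer entailment queries and be revised by weighted input.
    (Hypothetical interface, not the paper's formal definition.)"""

    @abstractmethod
    def entails(self, formula) -> bool: ...

    @abstractmethod
    def revise(self, formula, weight) -> "EpistemicState": ...

class OrdinalState(EpistemicState):
    """One instantiation: an ordinal conditional (ranking) function,
    mapping each possible world to a non-negative rank, where rank 0
    marks the most plausible worlds."""

    def __init__(self, ranks):
        self.ranks = dict(ranks)  # world -> rank

    def entails(self, formula):
        # A formula is believed iff it holds in every minimally
        # ranked (i.e. most plausible) world.
        min_rank = min(self.ranks.values())
        return all(formula(w) for w, r in self.ranks.items() if r == min_rank)

    def revise(self, formula, weight):
        # Spohn-style (A, n)-conditionalisation: worlds satisfying the
        # input are shifted so they end up `weight` ranks more plausible
        # than the worlds that falsify it.
        sat = {w: r for w, r in self.ranks.items() if formula(w)}
        unsat = {w: r for w, r in self.ranks.items() if not formula(w)}
        new = {w: r - min(sat.values(), default=0) for w, r in sat.items()}
        new.update({w: r - min(unsat.values(), default=0) + weight
                    for w, r in unsat.items()})
        return OrdinalState(new)

class BeliefBase:
    """The agent's beliefs: a set of epistemic states, each of which may
    use a different underlying uncertainty theory, ordered into strata."""

    def __init__(self, strata):
        self.strata = strata  # list of (priority, EpistemicState); lower = stronger

    def holds(self, formula):
        # Query strata from most to least authoritative; the first state
        # that takes a stand on the formula decides the answer. This is a
        # deliberately simplified commensurability scheme.
        for _, state in sorted(self.strata, key=lambda s: s[0]):
            if state.entails(formula):
                return True
            if state.entails(lambda w: not formula(w)):
                return False
        return False
```

As a usage example, a belief base could combine an `OrdinalState` over qualitative plausibility with, say, a probabilistic state in a lower stratum; revising only the affected stratum keeps belief change local to one uncertainty theory, which is one way the per-state revision strategies described in the abstract could pay off in practice.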