doi:10.1613/jair.5521
Belief revision is concerned with incorporating new information into a pre-existing set of beliefs. When the new information comes from another agent, we must first determine if that agent should be trusted. In this paper, we define trust as a pre-processing step before revision. We emphasize that trust in an agent is often restricted to a particular domain of expertise. We demonstrate that this form of trust can be captured by associating a state partition with each agent, then relativizing all reports to this partition before revising. We position the resulting family of trust-sensitive revision operators within the class of selective revision operators of Fermé and Hansson, and we prove a representation result that characterizes the class of trust-sensitive revision operators in terms of a set of postulates. We also show that trust-sensitive revision is manipulable, in the sense that agents can sometimes have an incentive to pass on misleading information.
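To make the pre-processing step concrete, here is a minimal sketch of the idea, not the authors' exact formulation: propositions are represented extensionally as sets of states, an agent's trust is a partition of the state space, and a report is weakened to the union of the partition cells it intersects before being handed to an underlying revision operator. The function names `trust_expand` and `trust_sensitive_revise` and the callable `revise` (standing in for any AGM-style revision operator) are illustrative assumptions, not names from the paper.

```python
from itertools import chain

def trust_expand(report_states, partition):
    """Relativize a report to the reporting agent's trust partition.

    Keep every partition cell that the report intersects: the agent is only
    trusted to distinguish states lying in different cells, so within a cell
    the report carries no information. Returns the (weaker) set of states.
    """
    return set(chain.from_iterable(
        cell for cell in partition if cell & report_states
    ))

def trust_sensitive_revise(beliefs, report_states, partition, revise):
    """Trust-sensitive revision as a pre-processing step: first weaken the
    report relative to the partition, then apply the ordinary revision
    operator `revise` to the weakened report."""
    return revise(beliefs, trust_expand(report_states, partition))

# Illustrative usage: states are strings, the doctor-like agent is trusted
# only to separate {"sick"} from {"healthy", "broke"}.
partition = [frozenset({"sick"}), frozenset({"healthy", "broke"})]
report = {"healthy"}                      # the agent reports "healthy"
weakened = trust_expand(report, partition)
# weakened == {"healthy", "broke"}: the report cannot rule out "broke",
# since the agent is not trusted to distinguish those two states.
```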