Logical relevance, as observed in spontaneous explanations produced during natural conversations, appears to be a desirable characteristic of explanations given by artificial systems. Model-based surprise, when it can be recognized with reasonable reliability, either by the user (as is required in SAVANT3) or by the system (e.g. by a help system), can lead to logically relevant explanations. Alternating surprises and explanations offers a promising way for a KBS to negotiate conceptual knowledge (which corresponds to the structures mentioned above, as opposed to procedural knowledge), even during task-oriented interactions. SAVANT3 relies on such a negotiation.
Every KBS user has expectations, and a conceptual explanation is needed when the situation does not match them. We have tried here to indicate a possible way to give logically relevant explanations by recognizing and invalidating the user's expectations.
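To make this approach concrete, the following is a minimal sketch of an expectation-driven explanation step. It is not the SAVANT3 implementation; all names (Expectation, detect_surprise, explain) and the string-based representation are hypothetical illustrations of the idea that a surprise is a mismatch between a predicted and an actual outcome, and that the explanation invalidates the assumption behind the expectation.

```python
# Hypothetical sketch (not SAVANT3 code) of an expectation-driven explanation step:
# keep a model of what the user expects, detect a mismatch (the "surprise"),
# and respond with an explanation that invalidates the faulty assumption.

from dataclasses import dataclass


@dataclass
class Expectation:
    """What the user believes will hold in a given situation."""
    situation: str
    predicted_outcome: str
    underlying_assumption: str   # the conceptual knowledge the belief rests on


def detect_surprise(expectation: Expectation, actual_outcome: str) -> bool:
    """A surprise occurs when the actual outcome contradicts the prediction."""
    return actual_outcome != expectation.predicted_outcome


def explain(expectation: Expectation, actual_outcome: str) -> str:
    """Build a logically relevant explanation: recall the violated expectation,
    state what actually happened, and invalidate the assumption behind it."""
    return (
        f"You expected '{expectation.predicted_outcome}' in '{expectation.situation}', "
        f"but '{actual_outcome}' occurred; "
        f"the assumption '{expectation.underlying_assumption}' does not hold here."
    )


# Example: one turn of the surprise/explanation negotiation.
exp = Expectation(
    situation="saving a read-only document",
    predicted_outcome="document is saved",
    underlying_assumption="any open document can be overwritten",
)
if detect_surprise(exp, actual_outcome="save is refused"):
    print(explain(exp, "save is refused"))
```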