
Some extensions and limits of surprise-based explanations

The kind of explanations we are dealing with in this paper may be proposed in any situation in which the functioning of the system does not match the user's expectations. This includes some interactions with knowledge-based systems, not only for end-users but also for experts during the elicitation and maintenance phases. It also concerns help and advisory systems, insofar as the system is able to detect unsatisfied expectations in the user's request.

In any case, implementing surprise-based explanation capabilities requires that the system have a very good representation of the user's knowledge. For instance, suppose we want a system to detect surprise in a user's utterance, as PARADISE does, and then to reply by giving an explanation, using knowledge structured as a set of incompatibilities. The system then has to select a clause which contains terms of the user's request, say r1 and r2, and terms that were actualized in the present situation, s1 and s2:

[ r1 & r2 & s1 & s2 & o1 ] ==> F

If such a clause exists, then a good guess would be that

[ r1 & r2 & s1 & s2 ] ==> F

is an accurate representation of what the user believes and of his/her surprise: for him/her, r1 and r2 cannot be simultaneously true if we know that s1 & s2. The system will then try to explain the surprise by invalidating the clause. If one of the terms in the system's clause can be proven false, then the system is able to utter an admissible explanation: for example, "but o1 is false" or "it's because not o1" would be perceived as relevant explanations by the user. The system may also suggest such explanations when a term in the clause is unknown to it or has been learned directly from the user: "but perhaps not o1". When all terms in the clause can be proven true, the system can find another clause used to prove one of these terms, and recursively try to invalidate this new clause.
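
The selection and invalidation mechanism just described can be summarized by a minimal sketch in Python. The clause format, the truth-value store, and the function names are illustrative assumptions made here, not taken from PARADISE:

    # Each incompatibility clause is a set of terms whose conjunction is
    # considered impossible, i.e. [t1 & ... & tn] ==> F.
    INCOMPATIBILITIES = [
        {"r1", "r2", "s1", "s2", "o1"},
    ]

    # Truth values as known by the system: True, False, or None (unknown).
    FACTS = {"r1": True, "r2": True, "s1": True, "s2": True, "o1": False}

    def select_clause(request_terms, situation_terms):
        """Find an incompatibility clause containing the terms of the user's
        request and the terms actualized in the present situation."""
        wanted = set(request_terms) | set(situation_terms)
        for clause in INCOMPATIBILITIES:
            if wanted <= clause:
                return clause
        return None

    def explain_surprise(clause):
        """Try to invalidate the clause: a term proven false yields an
        admissible explanation, an unknown term yields a tentative one.
        If every term holds, one would recurse on a clause used to prove
        one of them (not modelled in this sketch)."""
        for term in sorted(clause):
            value = FACTS.get(term)
            if value is False:
                return "but " + term + " is false"
            if value is None:
                return "but perhaps not " + term
        return None  # all terms proven true: the recursive step would go here

    clause = select_clause({"r1", "r2"}, {"s1", "s2"})
    if clause is not None:
        print(explain_surprise(clause))  # -> but o1 is false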

However, such results can only be obtained under very specific conditions.

A possible consequence of this is that an explanatory module that includes surprise-based explanation capabilities should be autonomous, as emphasized by B. Safar [1992]. But transposing the mechanisms outlined above onto KBS explanatory modules raises many problems. One of them is that the backward chaining underlying this mechanism will not necessarily match the trace of the KBS inferences. A possible solution would be for the explanation module to avoid using inference rules that were not actually present in the trace, as in the sketch below. But many aspects of these transposition problems remain to be investigated.
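
One possible reading of that solution is sketched below: the explanation module restricts its search to the inference rules that were actually fired during the recorded KBS run. The rule identifiers and the trace format are hypothetical:

    def rules_usable_for_explanation(all_rules, trace):
        """Keep only the inference rules whose identifiers appear in the
        recorded trace of the KBS run."""
        fired = {step["rule_id"] for step in trace}
        return [rule for rule in all_rules if rule["id"] in fired]

    all_rules = [
        {"id": "R1", "clause": {"r1", "r2", "s1", "s2", "o1"}},
        {"id": "R2", "clause": {"r1", "s3", "o2"}},
    ]
    trace = [{"rule_id": "R1"}]

    usable = rules_usable_for_explanation(all_rules, trace)
    print([r["id"] for r in usable])  # -> ['R1']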

