Explanations that follow a model-based surprise are interesting for at least two reasons. First, they occur naturally, as we saw (a point also illustrated in [Heritage 1990]), so we may hope that their use could improve the acceptability of human/machine interaction in certain circumstances. The second reason concerns the problem of reproducing such explanations in artificial systems: as we will now see, this type of explanation is heavily constrained, so that, in certain situations, one can write programs able to recognize and to synthesize such explanations.
We expressed the surprise contained in [ex_canteen] with a logical representation that can be rewritten as:
[ normal_workday & students_are_absent ] ==> F
F stands here for an always-false proposition. Thus [ a & b ] ==> F means that a and b are logically incompatible. The explanation given by B aims at denying normal_workday: if the forum takes place today, then today is not a normal day (classes have been cancelled). Any model-based surprise can be written this way, as a logical incompatibility, and thus we are exactly in the situation mentioned by Inhelder and Piaget, where subjects have to "suppress contradictions or incompatibilities". M. Baker [1991] also describes "internal conflicts" as leading to explanatory dialogues, and shows situations in which inconsistencies are related to dialogic cooperation at the sociological level. But our suggestion of using surprise-based explanations in explanatory systems comes more simply from the observation that interlocutors in conversation spontaneously utter their internal logical conflicts, and that the other interlocutors do their utmost to find relevant explanations.
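To make this representation concrete, here is a minimal sketch, in Python and not drawn from any system discussed in this paper, in which a model-based surprise is simply the set of premises its speaker holds to be jointly incompatible; all identifiers are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class Surprise:
    # Encodes [ p1 & p2 & ... & pn ] ==> F as the set {p1, ..., pn}.
    premises: frozenset

# The surprise of [ex_canteen]: a normal workday and absent students
# are held to be jointly incompatible.
canteen = Surprise(premises=frozenset({"normal_workday", "students_are_absent"}))

# B's explanation denies one of these premises.
denied_by_B = "normal_workday"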
The situation of logical conflict is interesting because it is heavily constrained: only a few explanations are admissible (even if not necessarily accepted) as solutions to an incompatibility. Let us first take the simple case of an explanation working as a direct invalidation. If we express the logical incompatibility this way:
[ p1 & p2 & ... & pn ] ==> F
then a direct invalidation consists in denying one of the terms pi that the person expressing the surprise considers as belonging to the contradiction. Any explanation which denies one pi, or which proves that pi must be false, is thus admissible. This was the case in [ex_canteen].
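Under this representation, checking that a candidate explanation is an admissible direct invalidation reduces to a membership test. The sketch below is only illustrative; the propositions and the function name are ours, not part of any existing system.

def is_direct_invalidation(premises, denied):
    # Admissible as a direct invalidation only if the denied proposition
    # is one of the terms pi of [ p1 & p2 & ... & pn ] ==> F.
    return denied in premises

canteen = frozenset({"normal_workday", "students_are_absent"})
print(is_direct_invalidation(canteen, "normal_workday"))        # True: B's explanation in [ex_canteen]
print(is_direct_invalidation(canteen, "students_like_forums"))  # False: foreign to the conflict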
Another possibility for explaining a logically surprising situation is illustrated by the following example:
[ex_toy]
context: E is surprised by the fact that her grandchild G (two years old) is playing a lot with a broken toy. The mother, F, gives an explanation.
E1- One could think they leave the toys when they are broken. Listen: G played with a car which had no wheels left. I'm not saying he liked it better, but he played with it at least as much as with the others.
F1- In fact it's because he is imagining he is a mechanic, and he is going to repair it.
We can represent E's surprise logically:
[ plays_with(G, Toy) & not functional(Toy) ] ==> F
where Toy is instantiated by the car with no wheels left. F's explanation can be understood as an indirect invalidation, i.e. an invalidation of another clause, one that includes a further premise:
[ plays_with(G, Toy) & not functional(Toy) & not playing_at_repairing(Toy) ] ==> F
F's explanation could be paraphrased this way: "if the child did not play at repairing the toy, then it would indeed be surprising that he plays with it. But this is not the case."
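The same illustrative conventions give a sketch of the indirect case: the denied proposition is not one of the stated premises, but the surprised speaker would accept it as a forgotten premise of the incompatibility she had in mind. The acceptance set below stands in for that judgement and is purely hypothetical.

def is_indirect_invalidation(premises, denied, accepted_forgotten):
    # The denied proposition is not a stated premise, but the surprised
    # speaker accepts it as a forgotten premise of the incompatibility.
    return denied not in premises and denied in accepted_forgotten

# E's stated incompatibility in [ex_toy]:
toy = frozenset({"plays_with(G, Toy)", "not functional(Toy)"})

# F's explanation denies the forgotten premise "not playing_at_repairing(Toy)",
# which E can accept as part of what made the situation surprising.
print(is_indirect_invalidation(toy,
                               "not playing_at_repairing(Toy)",
                               {"not playing_at_repairing(Toy)"}))  # True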
This kind of explanation through indirect invalidation is admissible as long as the surprised speaker can accept it as denying a forgotten premise pn+1. In other words, this speaker has to accept that
[ p1 & p2 & ... & pn & pn+1 ] ==> F
represents the actual incompatibility. The explanation (not pn+1) then appears as an invalidation of this augmented incompatibility. It is hardly surprising that some premises may be "forgotten" by the first speaker: after all, any incompatibility noticed in real life presupposes that the world still exists, that people are in a single place at any given time, and so on. But requiring that a given fact pn+1 can be recognized as part of the initial incompatibility remains a very strong constraint on what can or cannot be considered an admissible explanation. In the preceding excerpt, F could have denied hypotheses like:
but not:
These constraints, which limit the logical form of explanations that may follow a model-based surprise, are strict enough to allow artificial systems to utter or to recognize such explanations. We will now see that a system like SAVANT3, which was designed to help students acquire new technical concepts, is able to recognize both direct and indirect invalidations. We will then make some suggestions on the synthesis of relevant explanations as reactions to a surprise expressed by the user.
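To give an idea of how strong these constraints are, the following sketch (which is not SAVANT3's code, and keeps the illustrative names used above) classifies a candidate explanation with two membership tests.

def classify_explanation(premises, denied, accepted_forgotten):
    # Direct invalidation: the explanation denies a stated premise.
    if denied in premises:
        return "direct invalidation"
    # Indirect invalidation: it denies a forgotten premise that the
    # surprised speaker accepts as part of the augmented incompatibility.
    if denied in accepted_forgotten:
        return "indirect invalidation"
    # Otherwise the explanation does not address the incompatibility.
    return "not admissible"

toy = frozenset({"plays_with(G, Toy)", "not functional(Toy)"})
print(classify_explanation(toy, "not playing_at_repairing(Toy)",
                           {"not playing_at_repairing(Toy)"}))  # indirect invalidation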