Javier Pinto: Causality in Theories of Action.
c-fcs-98-349 [original] [abstract]
[mail to author] [mail to moderator] [debate procedure] [copyright]
| N:o | Question | Answer(s) | Continued discussion |
|---|---|---|---|
| 1 | 7.1 Graham White | 29.1 Javier Pinto | |
| 2 | 7.1 Erik Sandewall | 29.1 Javier Pinto | |
| 3 | 7.1 Tom Costello | 29.1 Javier Pinto | 7.1 Tom Costello, 29.1 Javier Pinto |
Q1. Graham White:
Assume that a child is able to deal with switches and lights. Do you believe that the child has to have any knowledge of electrical current for this to be possible?
A1. Javier Pinto (29.1):
My work is not aimed at modeling children's cognitive processes, so my answer is not based on any experimental evidence; rather, it rests on my own introspection and on my limited experience with child development. I do not think that anybody could correctly analyze the behavior of the circuit without having some notion of what an electrical current is. Explanations based on other hypotheses might also be possible. The point of the paper is that if your explanations are based on explicit causal information, and you do not get the right answers, you might want to change your model of the world to better reflect its behavior. Therefore, if a child predicts the behavior incorrectly because of a wrong analysis of the causal mechanism, the child should revise its knowledge of the world, as opposed to revising its underlying reasoning mechanism for dealing with causation.
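To make the distinction concrete, here is a minimal sketch (my own illustrative encoding, not the paper's formalism) in which a circuit's behavior is predicted from an explicit set of causal rules. When a prediction comes out wrong, the fix is to revise the rules (the model of the world), while the inference procedure stays the same:

```python
# Hypothetical sketch: predictions come from explicit causal rules,
# and wrong predictions are fixed by revising the rules, not the
# inference procedure. Rule and fluent names are illustrative.

def predict(state, rules):
    """Apply causal rules to a copied state until nothing changes."""
    state = dict(state)
    changed = True
    while changed:
        changed = False
        for condition, (fluent, value) in rules:
            if condition(state) and state.get(fluent) != value:
                state[fluent] = value
                changed = True
    return state

# A naive (wrong) model: the light follows switch 1 alone.
naive_rules = [
    (lambda s: s["sw1"], ("light", True)),
    (lambda s: not s["sw1"], ("light", False)),
]

# Revised model: current flows, and the light is on, only when
# both switches in the series circuit are closed.
revised_rules = [
    (lambda s: s["sw1"] and s["sw2"], ("light", True)),
    (lambda s: not (s["sw1"] and s["sw2"]), ("light", False)),
]

state = {"sw1": True, "sw2": False, "light": False}
print(predict(state, naive_rules)["light"])    # wrong prediction: True
print(predict(state, revised_rules)["light"])  # correct prediction: False
```

The same `predict` procedure serves both models; only the causal knowledge changes.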
Q2. Erik Sandewall (7.1):
The representation examples that you have given are the classical ones, for which others have adequate solutions as well. Are there examples that your representation handles correctly and that others cannot?
Also, the examples in the article are very simple ones. What is the most complex example that you have done in this representation?
A2. Javier Pinto (29.1):
The most complex examples are the standard examples that appear in the literature on causality (along with some examples involving continuous change, common in the qualitative reasoning literature). I believe that the work of other people can also be seen as instances of what I propose (e.g., the work of Todd Kelley on qualitative physics). In particular, we have looked at extensions of the circuit example (as illustrated in the paper and taken from [thielscher97]). Also, as shown in the article, we use a similar approach to deal with indeterminacy. I admit that this latter treatment of indeterminacy might be regarded as too complex. This added complexity is unavoidable, since we need to worry about the many possible outcomes of an indeterminate event, and must also be prepared to reason about each possible outcome. In future work I plan to study whether one can show a formal correspondence between approaches based on explicit causation and my own. My conjecture is that there is always a translation from an explicit causal representation to the one I propose. However, it remains to be seen whether this translation is a natural one.
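The source of the added complexity can be sketched in a few lines. In this hedged illustration (names and encoding are mine, not the paper's), an indeterminate event maps a state to a set of possible successor states, and a conclusion is safe only if it holds in every one of them:

```python
# Illustrative sketch of reasoning about an indeterminate event:
# the event yields a *set* of possible outcomes, and each outcome
# must be considered. The event (toss) is a stand-in example.

def toss(state):
    """Indeterminate event: a coin toss has two possible outcomes."""
    return [dict(state, coin="heads"), dict(state, coin="tails")]

def holds_in_all(outcomes, prop):
    """A property is entailed only if it holds in every outcome."""
    return all(prop(s) for s in outcomes)

outcomes = toss({"coin": None, "on_table": True})
print(holds_in_all(outcomes, lambda s: s["on_table"]))         # True
print(holds_in_all(outcomes, lambda s: s["coin"] == "heads"))  # False
```

With determinate events the successor set is a singleton; indeterminacy multiplies the states a reasoner must track, which is where the unavoidable cost lies.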
Q3. Tom Costello (7.1):
People are able to verify systems as complex as the Pentium; why then worry about these silly little circuits?
A3. Javier Pinto (29.1):
Well, I guess the best answer to this was given by John McCarthy: these sorts of examples are the Drosophilas of AI and Knowledge Representation. That is, they represent problems that are good testbeds for the different essential problems that we confront in AI.
Thus, these simple examples allow us to isolate a particular problem that we want to deal with. We do not need a zillion transistors to model the problems that arise when dealing with ramifications. In particular, this problem illustrates reasoning problems related to causality. There are many variants of this example, like the stuffy room example of Ginsberg and Smith [reasacti] and the suitcase with two latches of Lin [c-ijcai-95-1985]. All these examples have allowed us to illustrate problems with our reasoning based on formalizations of these domains, and have helped us advance the state of the art in the area.
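The suitcase example mentioned above can be sketched in a few lines. This is my own illustrative encoding, not Lin's formalization: toggling one latch has the indirect effect (the "ramification") of opening the suitcase once both latches are up, derived by closing the state under a causal rule rather than stating it as a direct effect:

```python
# Illustrative encoding of Lin's two-latch suitcase: opening is a
# ramification derived from a causal rule, not a direct effect of
# the toggle action. Fluent names are mine.

def ramify(state):
    """Apply the causal rule until a fixpoint is reached."""
    state = dict(state)
    while True:
        if state["latch1"] and state["latch2"] and not state["open"]:
            state["open"] = True   # caused indirectly, never asserted
            continue
        return state

def toggle(state, latch):
    """Direct effect: flip one latch; indirect effects follow."""
    state = dict(state)
    state[latch] = not state[latch]
    return ramify(state)

s = {"latch1": True, "latch2": False, "open": False}
s = toggle(s, "latch2")
print(s["open"])  # True: opening is a ramification of the toggle
```

Even this tiny domain exhibits the core difficulty: the action description mentions only the latch, yet the suitcase's state must change too.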
References:
c-ijcai-95-1985 | Fangzhen Lin. Embracing Causality in Specifying the Indirect Effects of Actions. [postscript] Proc. International Joint Conference on Artificial Intelligence, 1995, pp. 1985-1991.
C3-1. Tom Costello (7.1):
What if the switches are very close, or if ...?
C3-2. Javier Pinto (29.1):
The point of the question, as I understand it, is that the proposed model does not account for the many ways in which it might be a mistaken idealization of reality. Certainly the model is not geared towards dealing with that particular issue. Rather, it assumes that the circuit behaves as described and that the primitive actions are independent flippings of the switches, and then asks how we can reason about it under those assumptions. Therefore, the answer is that we cannot deal with those situations, since we were trying to address a completely separate set of issues.
This on-line debate page is part of a discussion at a recent workshop; similar pages are set up for each of the workshop articles. The discussion is organized by the area Reasoning about Actions and Change within the Electronic Transactions on Artificial Intelligence (ETAI).
To contribute, please click [mail to moderator] above and send your question or comment as an E-mail message.