This page is a historical archive. For the latest information please visit commonsensereasoning.org.

Peter Grünwald

Ramifications and sufficient causes.

c-fcs-98-42
 
[original]
[abstract]
[mail to author]
 
[mail to moderator]
[debate procedure]
[copyright]

Overview of interactions

N:o Question Answer(s) Continued discussion
1 7.1  Erik Sandewall
19.2  Peter Grünwald
21.2  Erik Sandewall
23.2  Peter Grünwald
25.2  Vladimir Lifschitz
25.2  Fangzhen Lin
25.2  Erik Sandewall
27.2  Peter Grünwald
27.2  Peter Grünwald
27.2  Peter Grünwald
27.2  Vladimir Lifschitz
27.2  Erik Sandewall
27.2  Fangzhen Lin
12.3  Murray Shanahan
12.3  Pat Hayes
2 21.2  Camilla Schwind
23.2  Peter Grünwald
 
3     21.2  Vladimir Lifschitz
25.2  Erik Sandewall

Q1. Erik Sandewall:

You mentioned in your talk that Lin's use of the mnemonic "causes" tends to be misleading. I agree with this, and indeed the observation from our group to his work was that it was more or less a reformulation of what had already been done using occlusion. However, I wonder if there isn't a similar problem with your use of the mnemonic "do". When you characterize the  Toss  event using formulae such as  Do(Heads(1), true, then who is it that is doing something?

A1. Peter Grünwald (19.2):

Yes and no. `Yes' in the sense that the word `Do' may be misleading too - - indeed, nobody needs to be doing something when a coin falls on the table. `No' in the sense that I am very explicit in my article about the `physics' of Do: I state precisely what has to happen in a domain in order for  Do(XB to be the case in that domain; no such statement about  Caused  can be found in Lin's work.

Namely,  Do(XB means that an intervention takes place that sets the value of  X  to  B . A more appropiate term might indeed be  Set(XB or  Intervention(XB. I used  Do  to stay close to Pearl's notation, see [c-tark-96-51].

Lin's use of the predicate  caused  has not such a clear interpretation. It gets an implicit interpretation by the axioms it is involved in (like  Caused(XsTrue) ·-> Holds(Xs) but it is not a priori clear what has to be the case in the domain of interest in order for  Caused  to be true.

In other words, we do not really know what we are trying to model by the predicate  Caused  - that's the essential difference with  Do .

References:

c-tark-96-51Judea Pearl.
Causation, Action, and Counterfactuals.
Proc. Theoretical Aspects of Rationality and Knowledge, 1996, pp. 51-73.

C1-1. Erik Sandewall (21.2):

Peter,

You wrote

  ... I am very explicit in my article about the `physics' of Do: I state precisely what has to happen in a domain in order for  Do(XB to be the case in that domain; no such statement about  Caused  can be found in Lin's work. Namely,  Do(XB means that an intervention takes place that sets the value of  X  to  B . ... Lin's use of the predicate  caused  has not such a clear interpretation.

But Lin [c-ijcai-95-1985] wrote:

  The ternary predicate  Caused  - for any fluent  p , any truth value  v , and any situation  s ,  Caused(pvs is true if the fluent  p  is caused (by something unspecified) to have the truth value  v  in the situation  s .

What is the difference?

Erik

References:

c-ijcai-95-1985Fangzhen Lin.
Embracing Causality in Specifying the Indirect Effects of Actions. [postscript]
Proc. International Joint Conference on Artificial Intelligence, 1995, pp. 1985-1991.

C1-2. Peter Grünwald (23.2):

The difference is that it is not clear what exactly has to happen (in the real world) for a fluent  p  `to be caused to have truth value  v '.

For example,  Holds(Sw1(Up), s) ^ Holds(Sw2(Up), s) ·-> Caused(Opens

is how Lin formalizes the suitcase domain. This translates to `if both switches of the famous suitcase are up, then it is caused to be open'

But is the suitcase caused to be open by the mere fact that the switches are up? Shouldn't it rather be: `the suitcase is caused to be open only if the the switches are put in the up position? One may argue about this - as one may argue about the appropiateness of lots and lots of uses of the word `caused' in everyday language: in different situations the word can stand for different things. Because of this ambiguity, I argue, theories of action in which `caused' is used as an atomic concept, intended to be used whenever we use the word `caused' in everyday language, will run into trouble at some point.

I think the concept of `intervention' suffers no such ambiguity. Suppose I give you a description of the complete state of a reasoning domain at time  t  (or in a situation  s , if you like). Then you will usually be able to say whether or not an intervention takes place at that point in time/ in that situation. It will not always be clear whether or not something is `caused to have truth value  v ' at that point in time/ in that situation. In other words, to put it very bluntly, `I know what I'm modelling' whereas Lin does not.

C1-3. Vladimir Lifschitz (25.2):

In connection with Lin's suitcase example, Peter Grünwald asks:

  Is the suitcase caused to be open by the mere fact that the switches are up? Shouldn't we rather say that the suitcase is caused to be open if the the switches are put in the up position?

To support Lin's view, let me quote from an article by John Searle in the New York Review of Books (1995, Vol. 17, No. 42):

  In our official theories of causation we typically suppose that all causal relations must be between discrete events ordered sequentially in time. For example, the shooting caused the death of the victim. Certainly, many cause and effect relations are like that, but by no means all. Look around you at the objects in your vicinity and think of the causal explanation of the fact that the table exerts pressure on the rug. This is explained by the force of gravity, but gravity is not an event. Or think of the solidity of the table. It is explained causally by the behavior of the molecules of which the table is composed. But the solidity of the table is not an extra event, it is just a feature of the table.

- Vladimir

C1-4. Fangzhen Lin (25.2):

I was delighted and flattered, on the one hand, to learn that one of my previous work had been the focus of discussions in recent Newsletters on Reasoning about Actions and Changes. On the other hand, I was also disappointed and disturbed that much of the remarks about my work were either based on unsubstantiated claims (see below) or plain personal insults such as Peter Grünwald's indignity "`I know what I'm modelling' whereas Lin does not." Hey man, are you really that desperate that you could not find any other way to defend your work?

I don't think this is what this newsletter was set up for, and I'm sure this is definitely not what this community needs.

  to be misleading. I agree with this, and indeed the observation from our group to his work was that it was more or less a reformulation of what had already been done using occlusion. However, I wonder if there isn't a

Whoever made this claim, could you please substantiate it? By my understanding of "reformulation", I'm expecting at least something like:

  1. A pointer to work on occlusion before IJCAI-95 that discusses examples similar to those in my IJCAI-95 paper [c-ijcai-95-1985].

  2. A pointer to work on occlusion before IJCAI-95 that proves results similar to those in my IJCAI-95 paper.

  3. A reformulation of my predicate  Caused(pvs in the language of occlusion.

- Fangzhen Lin

References:

c-ijcai-95-1985Fangzhen Lin.
Embracing Causality in Specifying the Indirect Effects of Actions. [postscript]
Proc. International Joint Conference on Artificial Intelligence, 1995, pp. 1985-1991.

C1-5. Erik Sandewall (25.2):

Peter, Vladimir, and Fangzhen

(The following combines answers to your respective comments to the discussion)

A common feature in much of the recent work is the use of an operator that "enables" change, that is, the operator excepts a feature from the requirement or the preference of persistence. The neutral term exception operator then covers both occlusion (often written as a predicate  X ), Vladimir's "release" operator, Lin's "Caused" operator, and Peter's "Do" operator, and also e.g. the negation of the "persistent" operator of del Val and Shoham. (An important early reference that articulates the use of such an operator in a general way is Hector Geffner's paper at KR 1989, [c-kr-89-137]).

Anyway, when such logics are used and one is to express that a certain "cause" has a certain "effect", then one has in principle three possibilities:

All of these are instances of what Fangzhen calls "fluent-based causes", that is, causal relationships that only refer to values or changes of fluents, but not to the occurrence of actions.

Peter, you seem to argue that (3) is the right approach, and quote Fangzhen as using (1); Vladimir defends this with the reference to Searle. However, I observe that in his 1995 article, Fangzhen allows both (1) and (3). The general form of causal rules in his formula (16) allows one or more uses of the exception operator in the antecedent. However, example (22) does not use it, apparently in order to assure that the minimization will be computable by Clark completion. Isn't that a problem in your (Peter's) case as well?

Vladimir, I don't see why everything that's called "causation" in natural language has to be handled with the same logical devices. Seconding Peter's point of view, why can't we use one formalization for causation chains, where one change causes another change, and another formalization for static dependencies? The causal propagation semantics that I introduced in my KR 1996 paper [c-kr-96-99] and used for assessments of several approaches to ramification, is exactly a way of formalizing those cases where one change causes another. The kinds of situations that Searle refers to, or the early "dumbbell" example of Ginsberg and Smith represent arguably another class of phenomenona.

For causal propagation of change, there remains the choice between approaches (2) and (3) above: should the rule trigger on the fact that change actually occurred, or on the fact that change was enabled? This is important especially when actions are nondeterministic.

In the Features and fluents tradition, we first used alternative (3) (tech reports only), but then two group members, Tommy Persson and Lennart Staflin argued that (2) was the right way to go. Their paper at ECAI 1990, [c-ecai-90-497], uses (2) for characterizing indirect change. Their paper contains examples from continuous domains, but the formalism is also defined for, and applies equally well to the discrete case. (A revised version of the paper [r-linkoping-90-45] extends the work) In their approach, they wrote rules where the antecedent requires a specific fluent to actually have changed its value, but they also allowed to specify or to restrict the new or the old value.

We were not alone, of course. The paper by del Val and Shoham at IJCAI '93, [c-ijcai-93-732] defines a predicate  persistent(ps that is like the negation of occludes, and an entailment method that (I believe) does chronological minimization of unoccluded change. They mention the need for restrictions on the persistence axioms, but these are quite generous:

  some further requirements are needed in order to ensure that the extension of  persistent  at any state depends only on the current and possibly past state of the database, not on future states...

Also, I imagine that the event calculus is now able to represent these things, and it would be interesting to hear what the history was there (Murray? Rob?).

Finally, in order to answer Vladimir's question

  Can you please explain this?

because of my parenthetical remark to Peter, after having agreed with him that the use of the mnemonic "causes" may be misleading,

  ... the observation from our group to [Lin's] work was that it was more or less a reformulation of what had already been done using occlusion

and Fangzhen's question

  By my understanding of "reformulation", I'm expecting at least something like:
  1. A pointer to work on occlusion before IJCAI-95 that discusses examples similar to those in my IJCAI-95 paper.
  2. A pointer to work on occlusion before IJCAI-95 that proves results similar to those in my IJCAI-95 paper.
  3. A reformulation of my predicate Caused(p,v,s) in the language of occlusion.

First, Fangzhen, if you felt I was being rough, I apologize, it was not my intention. I don't know if it's worthwhile to go so deeply into this, but since you both ask the question let me summarize the background. As several authors are now using operators similar to the family of occlude/ release/ persistent/ caused/ do/, a clarification of history may in fact be of some general interest. The following are the specific contributions of Fangzhen's 1995 paper [c-ijcai-95-1985] according to its abstract:

  ... we argue that normal state constraints that refer only to the truth values of fluents are not strong enough for [specifying the effects of actions using domain constraints], and that a notion of causation needs to be employed explicitly.

See e.g. the Persson-Staflin paper [c-ecai-90-497]. Several papers in our group during 1988-1993 used a unique concept, "occlusion" or "explanation" of change for several purposes: for imprecise timing of changes within an action with extended duration, for nondeterminism, and also for "our intuition that a discontinuity should have a cause" (Persson/ Staflin). The abstract continues:

  Technically, we introduce a new ternary predicate...  Caused(pvs if the proposition  p  is caused... to have the truth-value  v  in the situation  s .

Compare e.g. my article in Journal of Logic and Computation, 1994 [j-jlc-4-581], section 7.2 (misprint corrected):
  Here  [st]p := F  is an often-useful abbreviation for  (st]Xp ^ [t]p = F . Informally, it is read as saying that the feature  p  changes its value to become  F  some time during the interval  [st.

This reduces to  Caused(pFs if  s = t . Finally, the abstract says:

  Using this predicate, we can represent fluent-triggered [causal statements]

Compare the 1990 Persson-Staflin paper, section 5 for some examples.

Besides the material that is mentioned in the abstract, the paper also contains a section 4, which begins

  The procedure we followed in solving the suitcase problem can be summarized as follows: ...

and which then also defines a class of theories for which the Clark completion is enough to compute the intended conclusions. The KR'96 paper by Gustafsson and Doherty [c-kr-96-87] shows the striking similarity between this procedure and PMON. The procedure for reducing PMON to a first-order formulation was described by Doherty and Lukaszewicz in their 1994 ICTL paper [c-ictl-94-82], together with similar reductions of all other entailment methods that were assessed in "Features and Fluents" [mb-Sandewall-94]. The generalization of PMON to allowing fluent-based causal rules, is trivial. The reduction also generalizes for rules of types 1 and 2 above. I believe that Clark completion also has trouble with type 3?

We should of course not get enmeshed in priority debates, neither in this Newsletter nor anywhere else. I do feel however that many papers in our area get published with very incomplete accounts of previous and related work. This is unlikely to change because of the constraints that affect us all: limitations on our time, and on the number of pages allowed for each article. No reason to cast stones, therefore, but maybe the section on "related work" in research papers ought not to be our only mechanism for assembling topic-specific surveys and bibliographies, and possibly the present debate forum could serve as a complement. Additional contributions are invited to this account of recent history, therefore.

References:

c-ecai-90-497Tommy Persson and Lennart Staflin.
A Causation Theory for a Logic of Continuous Change.
Proc. European Conference on Artificial Intelligence, 1990, pp. 497-502.
Also available as Linköping technical report Nr. 90-18 [postscript].
c-ictl-94-82Patrick Doherty and Witold Lukaszewicz.
Circumscribing Features and Fluents.
Proc. International Conference on Temporal Logic, 1994, pp. 82-100.
c-ijcai-93-732Alvaro Del Val and Yoav Shoham.
Deriving Properties of Belief Update from Theories of Action (II).
Proc. International Joint Conference on Artificial Intelligence, 1993, pp. 732-737.
c-ijcai-95-1985Fangzhen Lin.
Embracing Causality in Specifying the Indirect Effects of Actions. [postscript]
Proc. International Joint Conference on Artificial Intelligence, 1995, pp. 1985-1991.
c-kr-89-137Hector Geffner.
Default Reasoning, Minimality and Coherence.
Proc. International Conf on Knowledge Representation and Reasoning, 1989, pp. 137-148.
c-kr-96-87Joakim Gustafsson and Patrick Doherty.
Embracing Occlusion in Specifying the Indirect Effects of Actions.
Proc. International Conf on Knowledge Representation and Reasoning, 1996, pp. 87-98.
c-kr-96-99Erik Sandewall.
Assessment of ramification methods that use static domain constraints. [entry]
Proc. International Conf on Knowledge Representation and Reasoning, 1996, pp. 99-110.
j-jlc-4-581Erik Sandewall.
The Range of Applicability of some Non-monotonic Logics for Strict Inertia.
Journal of Logic and Computation, vol. 4 (1994), pp. 581-615.
mb-Sandewall-94Erik Sandewall.
Features and Fluents. The Representation of Knowledge about Dynamical Systems.
Oxford University Press, 1994.
r-linkoping-90-45Persson and Lennart Staflin.
Cause as an Operator in a Logic with Real-valued Fluents and Continuous Time.
Appeared as Linköping technical report Nr. 90-45 [postscript].

C1-6. Peter Grünwald (27.2):

Vladimir,

I agree with you that the concept of `causality in terms of intervention only' is too narrow (and Searle expresses it quite well!); I just think that most of the examples in common sense reasoning we are currently dealing with are simple enough to be formalized in `interventionistic' terms.

On the technical side, I do think there are some advantages in using the formalization one would obtain by translating the `reverse switches domain' (example 5 in my paper) to the suitcase domain, i.e. using  (holds ^ caused) v (caused ^ holds) ·-> caused  instead of  holds ^ holds ·-> caused .

These advantages are the same as those outlined for the switches domain in my common sense paper (see example 5).

C1-7. Peter Grünwald (27.2):

Fangzhen,

First of all, let me apologize. I am very sorry to have insulted you with my remarks. This was never my intention. Too eager to write a reply quickly, I wrote it without thinking and I didn't realize that it would be taken literally.

In fact, I find your approach one of the most intuitive I have come across; I think we both have quite similar intuitions about we are modelling which makes it all the more silly to write `you don't know what you're modelling'. Here is what I really wanted to say; I hope I'll be able to formulate it in a better way now:

`I think it is clearer what kind of situation can and what kind of situation cannot be modeled with the help of the `Do' predicate (which is intended to stand for interventions) than with the help of the `Caused' predicate (which, if I understand you correctly, is intended to be used in those situations in which we would use the word Caused in natural language).'

Now that I'm at it, let me elaborate on this a little bit:

Though their roots are different (`Do' coming from Pearl's structural equation theory and `Caused' coming from your work), it turns out that formally speaking, `Do' is almost the same as your `Caused' : they are defined in very similar ways which may even be equivalent (though this hasn't been proven). One of the aims of (the long version of) my paper is to point this out. That it may make sense to point it out can be seen from example 5 in the paper (the `reversed switches domain', originally due to Sandewall and/or Doherty, I believe). The Do-predicate occurs there in the following axiom:
    Do(Light(t), TRUE) ·-> Do(Switch(t), TRUE (*)
(from the fact that the light has been put on we conclude that the switch has been put in the on position)

If we describe to one another, in natural language, what is modelled by this axiom, then we would probably not use any causal terms (does putting on the light cause the switch to be on?) (see the paper for details). However, (*) can be modelled equally well by your `caused'-predicate as with the `do'-predicate used here. It follows that your `caused'-predicate has broader applicability than it might seem, which I think is worth noting.

Peter

C1-8. Peter Grünwald (27.2):

Erik,

You wrote:

  Peter, you seem to argue that (3) is the right approach, and quote Fangzhen as using (1); Vladimir defends this with the reference to Searle. However, I observe that in his 1995 article, Fangzhen allows both (1) and (3). The general form of causal rules in his formula (16) allows one or more uses of the exception operator in the antecedent. However, example (22) does not use it, apparently in order to assure that the minimization will be computable by Clark completion. Isn't that a problem in your (Peter's) case as well?

Indeed, Fangzhen allows both approaches (1)  [holds ·-> caused and (3)  [caused ·-> caused. My point is only (see my answer in C1-7 to Lin's question in C1-4)) that my approach can help you in deciding what domains should be modelled by (1) and what domains should be modelled by (3). I may indeed have trouble with efficiently computing the minimization - that wasn't my primary concern in the paper.

C1-9. Vladimir Lifschitz (27.2):

Erik,

I am puzzled by your comments about Lin's predicate  Caused  in ENRAC 25.2. In your view, it is an "exception operator," like occlusion. My understanding is very different. Exception operators are as old as formal default reasoning; McCarthy called his exception operator  Ab , and its predecessor  End  can be found in your paper written back in the 1970s. Lin's predicate  Caused , on the other hand, is a new and original idea.

You quote from Lin's abstract:

  Technically, we introduce a new ternary predicate...  Caused(pvs if the proposition  p  is caused... to have the truth-value  v  in the situation  s .

Then you write:

  Compare e.g. my article in Journal of Logic and Computation, vol. 4, no. 5, 1994, section 7.2 (misprint corrected):
  Here  [st]p := F  is an often-useful abbreviation for  (st]Xp ^ [t]p = F . Informally, it is read as saying that the feature  p  changes its value to become  F  some time during the interval  [st.
  This reduces to  Caused(pFs if  s = t . Finally, the abstract says:

Two points:

First, if  s = t  then your informal reading turns into " p  changes its value to become  F  some time during the interval  [tt." How can  p  change its value during an interval consisting of one time instant?

Second, Lin's paper and yours use different temporal formalisms. To compare the two approaches, let's translate your formulas into the situation calculus. The counterpart of the formula
    (st]Xp ^ [t]p = F   
in the situation calculus is
    Ab(pas) ^ Value(pResult(as)) = F  (1)
where  Ab  is the abnormality predicate from the commonsense law of inertia
    ¬ Ab(pas) ·-> Value(pResult(as)) = Value(ps  
The fundamental difference between (1) and  Caused(pFs is that the former contains an action variable, and the latter doesn't.

Lin's IJCAI-95 paper was a major development that has not been fully appreciated so far by the nonmonotonic community.

--Vladimir

C1-10. Erik Sandewall (27.2):

Vladimir,

I thought I had found a previously unused term when I used "exception operator" as a generic for "occlude", "release", "Caused", etc. I don't think you would claim that all of those are trivially reducible to abnormality?

Re
  First, if  s = t  then your informal reading turns into " p  changes its value to become  F  some time during the interval  [tt." How can  p  change its value during an interval consisting of one time instant?
a feature is said to change its value at time  t  iff its value at  t  differs from its value at  t-1 . This is for the case of discrete time; for continuous time you have to use the left limit value at time  t . Then, a feature is said to change its value in the interval  [st iff it changes its value in some point in  (st (omitting the left endpoint of the interval), that is, either between  s  and  s+1 , or..., or between  t-1  and  t .

When I referred to the special case of  s = t  in  [stp := F , I chose naturalness over formal precision, since it should have said  s = t-1 . I thought introducing this technicality would have been an unnecessary detour.

Re
  The fundamental difference between (1) and  Caused(pFs is that the former contains an action variable, and the latter doesn't.
evidently this action variable is an artifact of your translation, since it is not present in the original formulation. Also, since Lin's paper does not use any  Ab  predicate (he minimizes  Caused  directly), translating my formulation to one using  Ab  doesn't particularly help the comparison. In fact, if you translate Lin's notation in the same way as you did for mine, you will have to introduce an action variable there as well since your  Ab  predicate is defined with that as its second argument. However, this has nothing to do with either Lin's formalism or mine.

The translation task is actually very simple, since my formalism allows for both linear time and branching time (F&F page 138 ff). For branching time, the notation  [st is defined only if  t  is a direct or indirect successor of  s , and it then refers to the path from  s  to  t . Defining  t-  as  t-1  in the case of integer time, and through the axiom  do(as)- = s  for the case of gof-sitcalc situations, the obvious translation of
    (t-tXp ^ [tp = F  (1)
into Lin's notation is
    Caused(pFt (2)
(Remember that  [tp = v  is the same as  H(pvt, and that this is a modified kind of equality, whose character can't be rendered in HTML). Both the informal explanation and the formal treatment of (1) and (2) are identical in the two approaches, for mainstream examples. Formula (2) is more compact, which is why I introduced the abbreviation
    [t-tp := F   
Then it's clear that the difference is purely notational. If you don't want to represent the duration of actions, then you can of course modify the abbreviation so that only one timepoint is mentioned.

C1-11. Fangzhen Lin (27.2):

Erik,

Some thoughts about your note C1-5.

1.  Caused =/= Occlude . For example,  Caused(ptrues) ·-> H(ps. But there are no such axioms for  Occlude . Notice that these axioms go a long way: in deriving a generic successor state axiom, new causal rules from old ones, making causal rules strictly stronger than corresponding state constraints...

2.
    Caused(ptrues=/= Occlude(pss) ^ H(ps  
    Caused(ptrues=/= Occlude(ps) ^ H(ps  
See (Gustafsson and Doherty, KR'96).

3. Regarding the previous results of occlude that you mentioned:

  ... we argue that normal state constraints that refer only to the truth values of fluents are not strong enough for [specifying the effects of actions using domain constraints], and that a notion of causation needs to be employed explicitly.
  See e.g. the Persson-Staflin paper at ECAI 1990. Several papers in our group during 1988-1993 used a unique concept, "occlusion" or "explanation" of change for several purposes: for imprecise timing of changes within an action with extended duration, for nondeterminism, and also for "our intuition that a discontinuity should have a cause" (Persson/ Staflin). The abstract continues:

Observe that none of them were about domain (state) constraints and the ramification problem. As you can see from the paragraph that you cited from (Lin 95), the main contribution of my paper has to do with constraints and in particular causal constraints. If I'm not mistaken, the main impacts of the set of quite related papers on actions in IJCAI'95 have to do with the ramification problem. They shed some new lights on the nature of this problem that had never before been thought of, and have just begun to bear fruits: for example, the work by Michael Thielsher on diagnoses that was discussed in this newsletter before, and the work on planning from causal theories using GSAT by Norm McCain and Hudson Turner (in KR'98).

4. Regarding the connection between your PMON and the minimization strategy that I used, yes, I'd be the first to admit that they are very similar. I apologize for not knowing it earlier. Thinking back, it's not surprising how they come together. As you know, I have been a devotee of what you call "filtered preferential entailment" approach for as long as I can remember, going back when Yoav and I were working on our provably correct theories of actions.

The idea of minimizing  Caused  with  Holds  fixed was also straightforward as it was the obvious one that would yield the Clark completion of causal rules.

5. I enjoyed reading Gustafsson and Doherty's KR'96 paper. I learned a lot from that paper. Patrick and I had some correspondences on the related topics. I believe we found some common grounds, but none in terms of "one can be reformulated in terms of another". (Patrick, correct me if I'm wrong.) If anything, I wouldn't call PMON(RCs) a "trivial extension of PMON to causal rules", especially when  Occlude  is applied to a time point in PMON(RCs) while the same predicate is applied to a time interval in PMON. Many things are trivial in retrospect. (The idea of GSAT is so simple that Russell and Norvig even include it as an assignment in their AI textbook.) But in any event, this paper is more or less irrelevant as far as the history is concerned.

Last but not the least, Erik, I have all the respects and regards in the world for you and your work! And Peter, there is no hard feeling whatsoever.

C1-12. Murray Shanahan (12.3):

Erik,

  Also, I imagine that the event calculus is now able to represent these things, and it would be interesting to hear what the history was there (Murray? Rob?).

Now that the heat of the debate has died down a little, I would like to give a belated response.

The issue of ramifications and indirect effects is dealt with quite thoroughly in my book Solving the Frame Problem, especially in the context of the event calculus. (Incidentally, the book also deals extensively with the situation calculus, so it's not just an axe-grind for a particular formalism.) A large number of standard benchmark scenarios are discussed. These include the following.

I believe the treatment of these scenarios covers all the issues in the current debate. (One version of Thielscher's circuit example raises some interesting issues not covered in the book. But this example can be dealt with too.)

In particular, the event calculus formalism presented in the book incorporates a Releases predicate, much like Vladimir's releases predicate and Erik's occlusion, which exempts a fluent from the common sense law of inertia. (The intellectual debt is duly acknowledged in the book.) It also permits state constraints, such as

    HoldsAt(Alivet) if HoldsAt(Walkingt  

as well as more "causal" constraints such as,

    Initiates(aWett) if Initiates(aInLaket).   

If anyone has a scenario they think the book's formalism can't handle I'd like to hear from them.

  I do feel however that many papers in our area get published with very incomplete accounts of previous and related work.

I second that. I didn't devote years of my research time, as well as a lot of sweat and tears to writing that book for no-one to read it. (I know Erik sometimes feels the same way.) On my part, in the book, I tried very hard to give credit to others wherever appropriate, and to build on other's work rather than starting from scratch wherever I could.

  This is unlikely to change because of the constraints that affect us all: limitations on our time, and on the number of pages allowed for each article.

I don't agree. We are all guilty of this to some degree, of course, and I tend to be generous in actual refereeing. But deep down I feel that an article that doesn't take scholarship seriously doesn't deserve to get published, in the same way that it wouldn't deserve to get published if it contained mathematical errors.

Murray

C1-13. Pat Hayes (12.3):

Vladimir Lifschitz, in reply to Erik Sandewall (ENRAC 27.2 (98022)), asks the following:

  First, if  s = t  then your informal reading turns into " p  changes its value to become  F  some time during the interval  [t, t]." How can  p  change its value during an interval consisting of one time instant?

Consider a flash of lightning at night, during which a room is instantaneously illuminated ( p ) for a single moment. The relevant predicate changes its value twice during such an instant, if the instant is considered to be like a point (having no duration), which seems quite plausible. Even if you insist on making the flash into an interval by separating the times of its beginning and ending, these times are both the temporal locations of instantaneous changes in the illumination-predicate.

In fact, one can make a case that any change must involve instantaneous changes. Consider the classical example of two adjacent intervals meeting at timepoint  t  (which in my favorite temporal logic is identical to the interval  [t, t] ), and a light being switched on at  t , so that  p  - being illuminated - becomes true at that timepoint. One might object that such an apparently instantaneous change is in fact always really a continuous one (the electricity begins to flow, the filament becomes hot and begins to glow, etc.) and therefore occupies a non-instantaneous interval. But then consider the beginning point  s  of that interval, and consider the property  q  of an illumination-change happening. The transition from not- q  to  q  is now instantaneous at  s . Clearly, the same strategy is going to work no matter how much you turn up the temporal magnification: there will always be a first point of the relevant interval, and something will be different there from the point immediately preceding.

For what it's worth, by the way, this is one of the few places where quantum physics seems to agree with naive intuition :-)

Pat Hayes


Q2. Camilla Schwind (21.2):

As far as I have understood your approach, you do not use a causality predicate (or connective), but rather you define a causality relationship or causal implication by means of material implication, as  Do(p, b) ·-> Do(q, b')  or  a ·-> Do(p, b) , as in example 3:

(6)  Do(Alive(t), false) ·-> Do(Walking(t), false)
and

(3)  Shoot ·-> Do(Alive(1), false)

But then you can derive classically  A ^ Do(p, b) ·-> Do(q, b')  for any formula  A . In other words, your causality relation appears to be monotonic. I think however that causal implication should be non-monotonic. For example, from your axiom (3), you can derive

 Shoot ^ ¬loaded ·-> Do(Alive(1), false)

Does your approach behave in this way, and do you think that causality should rather be non-monotonic?

A2. Peter Grünwald (23.2):

Yes, my `causal implication' is monotonic. To handle examples like
    Shoot ^ ¬loaded ·-> Do(Alive(1), false)
I would remodel my domain as follows:
    Shoot ^ ¬Abnormal_i ·-> Do(Alive(1), false)
All the nonmonotonicity is then put into the  Abnormal_i  propositional variable. This is discussed, by the way, in the technical report INS-R9709 (basically a long version of my Common Sense paper), available at my web-page. For brevity, I excluded all talk of abnormalities from the Common Sense paper.
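Both points can be checked mechanically with a brute-force truth-table entailment test. The sketch below uses propositional stand-ins (Shoot, Loaded, DoAliveFalse, Abnormal) for the formulas in the exchange; it is illustrative only, and the minimization of Abnormal that would restore the intended nonmonotonic conclusion is not implemented.

```python
from itertools import product

# Truth-table entailment: KB |= goal iff goal holds in every valuation
# satisfying KB.  Atoms are hypothetical stand-ins for the formulas
# discussed above.
ATOMS = ["Shoot", "Loaded", "DoAliveFalse", "Abnormal"]

def entails(kb, goal):
    for bits in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(f(v) for f in kb) and not goal(v):
            return False
    return True

# Axiom (3): Shoot ·-> Do(Alive(1), false)
kb = [lambda v: (not v["Shoot"]) or v["DoAliveFalse"]]

# Monotonicity: strengthening the antecedent preserves the conclusion,
# so Shoot ^ ¬loaded ·-> Do(Alive(1), false) follows classically.
print(entails(kb, lambda v:
      not (v["Shoot"] and not v["Loaded"]) or v["DoAliveFalse"]))  # True

# Remodelled axiom: Shoot ^ ¬Abnormal_i ·-> Do(Alive(1), false)
kb2 = [lambda v:
       not (v["Shoot"] and not v["Abnormal"]) or v["DoAliveFalse"]]

# Now Shoot alone no longer classically entails the effect; concluding
# it requires the nonmonotonic step of minimizing Abnormal.
print(entails(kb2, lambda v: (not v["Shoot"]) or v["DoAliveFalse"]))  # False
```

The second check fails precisely because a valuation with Abnormal true satisfies the remodelled axiom without the effect, which is where a nonmonotonic minimization policy would step in.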


C3-1. Vladimir Lifschitz (21.2):

Hi Erik,

  You mentioned in your talk that Lin's use of the mnemonic "causes" tends to be misleading. I agree with this, and indeed the observation from our group to his work was that it was more or less a reformulation of what had already been done using occlusion.

Can you please explain this?

Regards, Vladimir

C3-2. Erik Sandewall (25.2):

See comment C1-5 above.


This on-line debate page is part of a discussion at a recent workshop; similar pages are set up for each of the workshop articles. The discussion is organized by the area Reasoning about Actions and Change within the Electronic Transactions on Artificial Intelligence (ETAI).

To contribute, please click [mail to moderator] above and send your question or comment as an E-mail message.