Giuseppe De Giacomo, Ray Reiter, and Mikhail Soutchanski:
Execution Monitoring of High-Level Robot Programs.
c-fcs-98-277 [original] [abstract]
[mail to author] [mail to moderator] [debate procedure] [copyright] |
Q1. A workshop participant:
You assume that the robot is able to sense all "actions". That is not realistic, is it?
Originally, we defined the logic so that the robot sensed the values of fluents, but that was technically more difficult to deal with. The present account is technically better.
Q2. A workshop participant:
But assuming that the values of fluents can always be correctly sensed is also not realistic. Observations will sometimes be faulty.
Agreed.
Q3. A workshop participant:
Doesn't your account indicate a basic design flaw in Golog, namely that it ought to have had an exception mechanism?
(Answer not recorded).
Q4. A workshop participant:
How do you consider using the notion of relevance for prediction, in order to cut down the space of what has to be predicted in a given failure situation?
We have thought about it, but we have no answer yet.
Q5. A workshop participant:
Since there is no notion of backtracking, why did you decide to use Prolog for the implementation?
Prolog's properties are used for the cautious interpreter.
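To make the role of Prolog concrete, here is a minimal sketch of a cautious (on-line) interpreter step in the trans/final style familiar from the Golog literature; the predicate names and the toy domain are illustrative assumptions, not the authors' code. The point is that backtracking is confined to choosing one legal next action, after which the interpreter commits:

    % Sketch of a cautious (on-line) interpreter; illustrative only.
    % trans(P,S,P1,S1): program P can make one step in situation S,
    % leaving remainder P1 in successor situation S1.
    % final(P,S): P may legally terminate in S.

    final(nil, _).
    final(seq(P1, P2), S) :- final(P1, S), final(P2, S).

    trans(A, S, nil, do(A, S)) :- primitive_action(A), poss(A, S).
    trans(seq(P1, P2), S, seq(R1, P2), S1) :- trans(P1, S, R1, S1).
    trans(seq(P1, P2), S, R, S1) :- final(P1, S), trans(P2, S, R, S1).

    % On-line loop: Prolog's backtracking is used only inside a single
    % call to trans/4 to find one legal next action; once the action is
    % executed on the robot, the interpreter commits and never
    % backtracks past it.
    online(P, S) :- final(P, S), !.
    online(P, S) :-
        trans(P, S, P1, do(A, S)),
        !,                          % commit: the cautious step
        execute(A),                 % physically perform A
        online(P1, do(A, S)).

    % Toy stubs so the sketch loads; a real domain supplies these.
    primitive_action(advance).
    poss(advance, _).
    execute(A) :- write(performing(A)), nl.

Loading this and querying ?- online(seq(advance, advance), s0). performs the two actions in order and succeeds.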
Q6. A workshop participant:
When an exogenous "action" has occurred, the system invokes the planner in order to find a way of getting to a state where the plan can be continued. This seems to require it to know about the preconditions for each action or action sequence, and not merely the postconditions of the whole program.
No, the postconditions are enough.
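One possible reading of this answer, as a hypothetical sketch built on the trans/final predicates above: when sensing reports an exogenous action, the monitor searches shortest-first for a bridging plan after which the remaining program is again executable, and executability is checked against the program itself rather than against a separately supplied list of preconditions. The predicates exo_occurred/2 and recover/3 and the depth bound are assumptions made for illustration:

    % Hypothetical monitoring sketch (names are illustrative).  After an
    % exogenous action E is observed, look for a short action sequence
    % leading to a situation from which the remaining program P can
    % still run to completion, then resume P there.
    monitor(P, S) :-
        (   exo_occurred(E, S)          % sensing reports exogenous E
        ->  recover(P, do(E, S), S2),   % bridge back to the program
            online(P, S2)
        ;   online(P, S)
        ).

    % recover(P,S,S2): bounded, shortest-first search for a situation
    % S2 reachable from S in which P is executable.
    recover(P, S, S2) :-
        between(0, 4, N),               % prefer short recovery plans
        plan(N, S, S2),
        executable(P, S2).

    % plan(N,S,S2): S2 is reachable from S by exactly N legal actions.
    plan(0, S, S).
    plan(N, S, S2) :-
        N > 0, N1 is N - 1,
        primitive_action(A), poss(A, S),
        plan(N1, do(A, S), S2).

    % executable(P,S): P can run off-line to a final situation from S.
    executable(P, S) :- final(P, S).
    executable(P, S) :- trans(P, S, P1, S1), executable(P1, S1).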
Q7. A workshop participant:
What if someone else comes in and completes the tower while the robot is building it? Will the robot then knock the tower down in order to build it itself?
It is a research issue how to avoid having to do that.
How can it possibly be anything but trivial? If you are going to pursue the existing program, you must find a plan for getting to a state where it can be continued. If you are going to shortcut it and find an alternative plan directly to the desired goal of the given program, then you have to find a plan for getting to a state that's characterized by the program's postconditions. If you have a system that does one of these things, it should be able to do the other. It should also be able to try both alternatives, compare the plans, and pick the best one.
Programs may be fairly long, and we do not wish to engage the planner for making long plans, only short ones, which usually suffice for recovering to a state where the program can be continued.
(Referring to question 6) But if one does not aim to go directly to the program's goal, but only to recover, then it seems that the preconditions of the remaining actions are what is needed, and the postcondition of the entire program is irrelevant.
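The two alternatives discussed in this exchange can be tried side by side; a hypothetical sketch, reusing plan/3 and executable/2 from the sketch above, where holds(Goal, S) is an assumed predicate testing the program's overall postcondition Goal in situation S. Deepening over both strategies at once means the first repair found is a shortest one of either kind, which is one way to "pick the best one":

    % Hypothetical sketch of trying both repair strategies and keeping
    % the shortest plan found, of either kind.  holds/2 is an assumed
    % test of the program's overall postcondition.
    best_repair(Repair, P, Goal, S) :-
        between(0, 12, N),              % deepen over both strategies
        plan(N, S, S2),
        (   executable(P, S2),          % strategy 1: resume old program
            Repair = resume(P, S2)
        ;   holds(Goal, S2),            % strategy 2: go straight to goal
            Repair = replan(S2)
        ).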
Q8. François Lévy:
You do not seem to rely on work in robotics or in planning. Why is that so?
The work in robotics and in algorithmic planning assumes linear plans, but we need to have loops. Work in reactive planning doesn't take context into account.
This on-line debate page is part of a discussion at a recent workshop; similar pages are set up for each of the workshop articles. The discussion is organized by the area Reasoning about Actions and Change within the Electronic Transactions on Artificial Intelligence (ETAI).
To contribute, please click [mail to moderator] above and send your question or comment as an E-mail message.