Neil V. Murray

 

Research Synopsis:

Automated Deduction concerns the automation of "theorem proving" and similar (sometimes more general) processes like "inference" or "deduction". As J. A. Robinson has elegantly stated:

``. . . a theorem-proving problem can be solved automatically if it can be solved at all. That is to say, if [a conclusion] B does follow from the given [hypotheses] A1, A2, ..., An then this fact can be detected, automatically, by executing a certain purely clerical algorithm on A1, ..., An and B as input.''

Most research efforts in (and the above remarks about) automatic theorem proving are based on Herbrand's theorem. This theorem helped lay the foundation for mechanizing the discovery of proofs in mathematical logic. Put in lay terms, Herbrand's theorem says that valid logical formulas can be proven to be so by analyzing only the structure and symbols of those formulas. Since the symbols (other than the logical operators) of a valid formula may be interpreted in arbitrary ways over arbitrary domains of discourse without affecting the truth of the formula, this is a remarkable theorem.
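To make this concrete, here is a small, standard textbook-style illustration (not taken from the work discussed below) of the refutational form of the theorem. To show that Q(a) follows from (for all x)(P(x) -> Q(x)) and P(a), negate the conclusion and consider the clause set

    { ~P(x) v Q(x),   P(a),   ~Q(a) }.

Herbrand's theorem guarantees that unsatisfiability, when it holds, is witnessed by finitely many ground instances; here the single instance x := a gives

    { ~P(a) v Q(a),   P(a),   ~Q(a) },

which is propositionally unsatisfiable. The argument uses only the symbols and structure of the formulas, never an interpretation of P, Q, or a.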

Initially, Robinson's resolution rule dominated research into automated deduction; even now it is the most widely employed technique, and it is the basis of OTTER and Vampire, arguably two of the most successful theorem-proving programs ever developed. Robinson's rule, and many of the variants subsequently introduced, require formulas to be normalized in special ways -- typically into conjunctive or disjunctive normal form (CNF or DNF). Although such normalization simplifies analysis and implementation, it can be computationally expensive: normalizing a formula may greatly increase its size.
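To illustrate the potential blowup (the formula family and the naive routine below are purely expository, not drawn from the papers discussed here), the following Python sketch converts the DNF formula (a1 & b1) v (a2 & b2) v ... v (an & bn) to CNF by distributing disjunction over conjunction; an n-term input with 2n literals yields 2^n clauses.

    from itertools import product

    def dnf_to_cnf(terms):
        """Convert a DNF formula (a disjunction of conjunctive terms) to CNF
        by distributing OR over AND: each clause picks one literal from each
        term.  The result can be exponentially larger than the input."""
        return [set(choice) for choice in product(*terms)]

    # DNF formula (a1 & b1) | (a2 & b2) | ... | (an & bn), with 2n literals.
    for n in (2, 4, 8, 12):
        dnf = [[f"a{i}", f"b{i}"] for i in range(n)]
        cnf = dnf_to_cnf(dnf)
        print(f"n = {n:2d}: DNF has {2 * n} literals, naive CNF has {len(cnf)} clauses")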

My early research centered on Herbrand-based proof systems, but within frameworks that avoid CNF and DNF. (These normal forms are often collectively referred to as clause form.) The fundamental ideas behind such frameworks had their genesis in my 1982 Artificial Intelligence Journal paper ``Completely Non-Clausal Theorem Proving,'' one of the first formal treatments of mechanized theorem proving that requires neither CNF nor DNF.

The use of graph concepts and of the two-dimensional format of Bibel and Andrews to express non-clausal structure is explained in ``Inference With Path Resolution and Semantic Graphs,'' a paper with Erik Rosenthal that appeared in the April 1987 issue of the Journal of the ACM. There, path resolution, a very general form of resolution inference, was introduced; path resolution properly contains as special cases almost all known variants of Robinson's original resolution rule. It also allows larger and more general inference steps than any of the earlier resolution rules, and it incorporates and generalizes certain search-space reductions of earlier resolution systems.

More recently, we introduced a new automated deduction technique, path dissolution (Journal of the ACM, July 1993). Dissolution is rather different from earlier Herbrand-based methods, yet it is related to them in meaningful ways. For example, in joint work with Erik Rosenthal of the University of New Haven and Reiner Haehnle of the University of Karlsruhe, we have shown that any resolution proof can be ``simulated'' by dissolution to produce a corresponding proof. Thus, in some sense, dissolution is at least as powerful as resolution. In a similar manner, dissolution can simulate another proof method, analytic tableaux with local lemmas.

On propositional formulas, dissolution has the desirable property of strong termination. In other words, the computer does not have to search for a proof: applying the dissolution inference rule in any order whatsoever will discover some proof, if one exists, and the process terminates even when no proof exists. This is helpful in prime implicant/implicate computations (see below).

In related work, we produced the first published proof of the intractability of the standard analytic tableau method. (``On the Computational Intractability of Analytic Tableau Methods.'' N.V. Murray and E. Rosenthal. Bulletin of the Interest Group in Pure and Applied Logics 2,2 (Oct. 1994), 205-228.) The result was originally announced without proof by Cook and Reckhow; an explicit proof is surprisingly non-trivial. Stated simply, the result means that in the worst case, the computational effort required to present a proof of a formula F using the tableau method grows exponentially in the size of F. We have also shown that by employing dissolution, the computational cost incurred by analytic tableau procedures can be reduced in certain well-defined ways. (``On the Relative Merits of Path Dissolution and the Method of Analytic Tableaux.'' N.V. Murray and E. Rosenthal. Theoretical Computer Science 131 (1994), 1-28.)

Dissolution shares with tableau methods the capability of producing a representation of the models and/or consequences of a (propositional) formula. (Models summarize what must be true in order to force the formula to be true; consequences summarize what must be true whenever the formula is known to be true.) Models can be characterized via prime implicants, consequences via prime implicates. We predicted that this capability would be useful for determining prime implicants and prime implicates and developed a new non-clausal algorithm called PI. (``CNF and DNF Considered Harmful for Computing Prime Implicants/Implicates.'' A. Ramesh, G. Becker, and N.V. Murray. Journal of Automated Reasoning 18,3 (June 1997), 337-356.) The algorithm can be used stand-alone, or it may be combined in more than one way with dissolution to form a prime implicant/implicate generator. We had previously implemented such a system and proved that on certain classes of formulas, any CNF- or DNF-based algorithm will be intractable (i.e., will require exponential time, as explained above), whereas PI is not.
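For readers unfamiliar with the terminology, the following brute-force sketch (in Python, and emphatically not the non-clausal PI algorithm of the paper) makes the definitions concrete on a small hypothetical formula: it enumerates all terms over the formula's atoms, keeps those that entail the formula (the implicants), and discards any implicant that properly contains another, leaving the prime implicants.

    from itertools import combinations, product

    def entails(term, formula, atoms):
        """True if every total assignment extending `term` satisfies `formula`.
        `term` is a dict {atom: bool}; `formula` maps a full assignment to bool."""
        free = [a for a in atoms if a not in term]
        for values in product([False, True], repeat=len(free)):
            assignment = dict(term, **dict(zip(free, values)))
            if not formula(assignment):
                return False
        return True

    def prime_implicants(formula, atoms):
        """All minimal terms (as dicts {atom: bool}) that entail `formula`."""
        implicants = []
        for size in range(len(atoms) + 1):
            for subset in combinations(atoms, size):
                for signs in product([False, True], repeat=size):
                    term = dict(zip(subset, signs))
                    if entails(term, formula, atoms):
                        implicants.append(term)
        def subsumes(s, t):  # s is a proper sub-term of t
            return s != t and all(a in t and t[a] == v for a, v in s.items())
        return [t for t in implicants if not any(subsumes(s, t) for s in implicants)]

    # Example: F = (p and q) or (not p and r); the prime implicants are
    # {p, q}, {~p, r}, and the consensus term {q, r}.
    F = lambda v: (v["p"] and v["q"]) or (not v["p"] and v["r"])
    for term in prime_implicants(F, ["p", "q", "r"]):
        print({a: ("+" if s else "-") for a, s in term.items()})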

In an attempt to eliminate certain redundancies in the PI system, we discovered a new inference rule called semi-resolution. We have proven that this rule is computationally as adequate as resolution. Investigation of the potential advantages of employing semi-resolution as the inference engine in prime implicate computations is just beginning.

Reiter laid the foundations of a formal theory of an approach known as diagnosis from first principles. (``A theory of diagnosis from first principles.'' R. Reiter. Artificial Intelligence 32 (1987), 57-95.) We have developed a new technique, based on Reiter's theory, for computing the minimal diagnoses of a system. (A. Ramesh and N.V. Murray. ``An Application of Non-Clausal Deduction in Diagnosis.'' Expert Systems with Applications 12,1 (1997), 119-126.) We have also introduced modifications to the technique that restrict the diagnoses generated to the simplest ones -- those representing single faults. Our approach does not rely on a clause-form representation (although it is applicable to systems represented in clause form), nor does it require generating the so-called set of minimal conflicts, which must be computed in Reiter's original method. Ramesh has worked on an implementation of this technique as part of his Ph.D. thesis.
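For illustration only -- this sketch is neither our technique nor Reiter's hitting-set procedure -- the following applies Reiter's definition directly to a hypothetical two-inverter circuit: a candidate set of components is a diagnosis if assuming exactly those components faulty is consistent with the system description and the observations, and the minimal such sets are the minimal diagnoses.

    from itertools import combinations

    def consistent(abnormal):
        """Is SD + OBS consistent when exactly the components in `abnormal`
        are assumed faulty?  SD: two inverters in series (out1 = not in1 if
        inv1 is healthy, out2 = not out1 if inv2 is healthy).
        OBS: in1 = False and out2 = True, so something must be broken."""
        in1, out2 = False, True                  # the observations
        for out1 in (False, True):               # guess the internal line
            ok = True
            if "inv1" not in abnormal and out1 != (not in1):
                ok = False
            if "inv2" not in abnormal and out2 != (not out1):
                ok = False
            if ok:
                return True
        return False

    components = ["inv1", "inv2"]
    diagnoses = [set(c) for r in range(len(components) + 1)
                 for c in combinations(components, r) if consistent(set(c))]
    minimal = [d for d in diagnoses if not any(e < d for e in diagnoses)]
    print(minimal)   # [{'inv1'}, {'inv2'}] -- either single fault explains OBS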

We have also extended dissolution, resolution, and path resolution to a wide class of multiple-valued logics (MVL's). (``Adapting Classical Inference Techniques to Multiple-Valued Logics Using Signed Formulas.'' N.V. Murray and E. Rosenthal. Fundamenta Informaticae 21,3 (Sept. 1994), 237-253.) Basically, an MVL is a formal logic in which formulas evaluate to one of more than two truth values. Such logics are widely used in modeling electrical circuits, for example. Recently, fuzzy logics have received much attention, and these too can be characterized as MVL's. The ideas that led to our results were developed first by Reiner Haehnle and, independently, by Erik Rosenthal and me. In a sense, we handle these MVL's by recasting them in a ``logic of signed formulas.'' The advantage of the logic of signed formulas is that it is a classical two-valued logic and admits a version of Herbrand's theorem. As a result, essentially all inference methods based on that theorem are applicable to signed formulas, and thus to the related MVL's.
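As a small illustration (assuming simple Kleene-style min/max connectives, which are not necessarily the logics treated in the papers cited here), the sketch below verifies by enumeration that a signed query about a three-valued conjunction -- "does p AND q take a designated value?" -- is equivalent to a classical combination of signed atoms, which is what makes two-valued inference applicable.

    from fractions import Fraction
    from itertools import product

    VALUES = [Fraction(0), Fraction(1, 2), Fraction(1)]   # a simple 3-valued lattice

    def AND(x, y):     # Kleene-style conjunction: minimum of the truth values
        return min(x, y)

    # The signed formula S:F asserts "F evaluates to a value in the set S".
    # Signed formulas are themselves classical (two-valued) statements.
    S = {Fraction(1, 2), Fraction(1)}          # the "designated" values, >= 1/2

    for p, q in product(VALUES, repeat=2):
        signed_compound = AND(p, q) in S            # S:(p AND q)
        signed_atoms = (p in S) and (q in S)        # S:p AND S:q, classically
        assert signed_compound == signed_atoms
    print("S:(p AND q) is classically equivalent to S:p AND S:q for S = {1/2, 1}")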

Of course, one of the first inference technologies considered for MVL's was resolution. The investigation of resolution for multiple-valued logics led to joint work with James Lu of Bucknell University. (``A Framework for Automated Reasoning in Multiple-Valued Logics.'' J. Lu, N.V. Murray and E. Rosenthal. Journal of Automated Reasoning. 21,1 (Aug. 1998) 39-67.) The approach using signed formulas contains as a special case the annotated logics investigated by da Costa, Lu, Henschen, and Subrahmanian, among others. Annotated logics comprise a collection of MVL's that were designed primarily for the purpose of knowledge representation. In particular, annotated logics have been proposed as the basis of an extension of logic programming to MVL's.

We have explored the relationship between signed formulas and annotated logics, and shown that a special case of the signed resolution rule is equivalent to -- and thus unifies -- the two inference rules, resolution and reduction, of annotated logics. This clarification of the inference mechanisms of annotated logics provides considerable benefit for annotated logic programming. The annotated logic programming systems initially proposed lacked the SLD-style proof procedures intrinsic to classical logic programming. Such proof procedures have several advantages for logic-based programming systems: they closely resemble the operational behavior of ordinary programming languages, making a logic-based language accessible to more traditional programmers. More importantly, only the clauses originally input as part of the program need be considered at each deduction step, which allows much greater execution efficiency than less restricted proof procedures permit. Finding a true SLD-style proof procedure for annotated logic programs was an open problem that became easily solvable once the resolution and reduction rules were shown to be special cases of our single rule for signed formulas.

Signed resolution also captures the fuzzy resolution rule proposed for fuzzy operator logics. This is another example of the logic of signed formulas offering a means of adapting classical inference techniques to a multiple-valued logic. We have also applied our techniques to compute prime implicants/implicates for a class of MVL's called regular logics. (``Computing Prime Implicants/Implicates for Regular Logics.'' A. Ramesh and N.V. Murray. Proceedings of the 24th International Symposium on Multiple-Valued Logic, Boston, MA, May 24-27, 1994, 115-123.)

The last decade has seen a virtual explosion of applications of propositional logic. One emerging application is knowledge compilation: preprocessing the underlying propositional theory. While knowledge compilation is intractable, it is done once, in an off-line phase, with the goal of making frequent on-line queries efficient. In recent work, we have studied both the off-line and on-line phases.

Decomposable negation normal form (DNNF) was developed primarily for knowledge compilation. Formulas in DNNF are linkless, in negation normal form (NNF), and have the property that atoms are not shared across conjunctions. Full dissolvents are linkless NNF formulas that do not in general have the latter property. However, many of the applications of DNNF can be obtained with full dissolvents. We have shown that two additional methods -- regular tableaux and semantic factoring -- will produce an equivalent DNNF version of a formula.
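The following sketch (using a home-grown nested-tuple representation chosen only for exposition, not anything from the cited work) checks the decomposability property and hints at why it pays off on-line: satisfiability of a decomposable NNF formula can be decided bottom-up in linear time, because the conjuncts of an AND node mention disjoint atoms and their models can always be combined.

    def atoms(node):
        """Atoms occurring in an NNF formula given as nested tuples:
        ('lit', name, polarity) | ('and', child, ...) | ('or', child, ...)."""
        if node[0] == "lit":
            return {node[1]}
        return set().union(*(atoms(child) for child in node[1:]))

    def decomposable(node):
        """True if no two conjuncts of any AND node share an atom -- the extra
        condition DNNF imposes on linkless NNF formulas."""
        if node[0] == "lit":
            return True
        if node[0] == "and":
            children = node[1:]
            for i in range(len(children)):
                for j in range(i + 1, len(children)):
                    if atoms(children[i]) & atoms(children[j]):
                        return False
        return all(decomposable(child) for child in node[1:])

    def satisfiable(node):
        """Linear-time satisfiability test, correct only on decomposable formulas."""
        if node[0] == "lit":
            return True
        if node[0] == "and":
            return all(satisfiable(child) for child in node[1:])
        return any(satisfiable(child) for child in node[1:])

    # (a AND b) OR (NOT a AND c): decomposable -- each conjunction mentions
    # disjoint atoms, even though atom a is shared across the disjunction.
    f = ("or",
         ("and", ("lit", "a", True), ("lit", "b", True)),
         ("and", ("lit", "a", False), ("lit", "c", True)))
    print(decomposable(f), satisfiable(f))   # True True

    # a AND (a OR b): not decomposable -- both conjuncts mention atom a.
    g = ("and", ("lit", "a", True), ("or", ("lit", "a", True), ("lit", "b", True)))
    print(decomposable(g))                   # False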

Work on knowledge compilation continues, some of it supported by NSF. For a more detailed discussion, see the project website.