The very notion of mathematical proof is founded upon the idea that a proof is ‘routine’ to check. While it may be difficult to actually prove a mathematical theorem, a proof should allow us to verify its truth. But exactly how much effort is required to check a proof, in terms of computational complexity?
Abstract proof systems and the Cook-Reckhow criterion
In proof complexity there is a well-known formalisation of this idea: the Cook-Reckhow criterion. This criterion says that, to qualify as a ‘proof system’, proofs must be polynomial-time checkable. It is this that gives rise to the famous Cook-Reckhow theorem [3]:
Theorem (Cook-Reckhow ’74). There is a proof system for classical propositional logic with polynomial-size proofs of each tautology if and only if $\mathbf{NP} = \mathbf{coNP}$.
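To recall the formal definition (in one standard presentation): a proof system for the set $\mathrm{TAUT}$ of propositional tautologies is a polynomial-time computable surjection $f : \{0,1\}^* \twoheadrightarrow \mathrm{TAUT}$, and a proof of a tautology $\varphi$ is any string $\pi$ with $f(\pi) = \varphi$. The theorem follows by noting that polynomial-size proofs for some such $f$ would place $\mathrm{TAUT}$ in $\mathbf{NP}$, and $\mathrm{TAUT}$ is $\mathbf{coNP}$-complete; conversely, if $\mathbf{NP} = \mathbf{coNP}$ then an $\mathbf{NP}$ machine for $\mathrm{TAUT}$ yields a proof system whose proofs are accepting computations.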
But this polynomial-time criterion for checkability seems like overkill in practice. We can usually verify proofs in commonly used systems in some ‘local’ manner, leading to log-space algorithms or simpler.
So what is the general complexity of checking proofs? Well, naturally, the answer to that question depends on the proof system at hand. However, perhaps a lower bound can be derived from the following: checking whether a string is a syntactically correct formula is $\mathbf{TC}^0$-hard, but is possible in $\mathbf{TC}^0$ (an apparently folklore result).
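For intuition, here is a toy recogniser for a fully bracketed grammar of propositional formulas (the concrete syntax and function names are my own, purely for illustration). Sequentially the check is easy; the obstacle to constant-depth parallel checking is that matching brackets requires counting, which is the $\mathbf{TC}^0$-flavoured part of the problem.

```python
# Toy well-formedness check for fully bracketed formulas over the
# (illustrative) grammar:  F ::= p | -F | (F & F) | (F | F)
# where p is any single letter. A recursive-descent pass suffices
# sequentially; doing the same in constant parallel depth essentially
# amounts to counting/matching brackets, hence the TC0 upper bound.

def well_formed(s: str) -> bool:
    def parse(i: int) -> int:
        """Parse one formula starting at index i and return the index
        just after it; raise ValueError if no formula starts at i."""
        if i >= len(s):
            raise ValueError("unexpected end of input")
        c = s[i]
        if c.isalpha():                       # propositional variable
            return i + 1
        if c == '-':                          # negation
            return parse(i + 1)
        if c == '(':                          # binary connective
            j = parse(i + 1)                  # left subformula
            if j >= len(s) or s[j] not in '&|':
                raise ValueError("expected connective")
            k = parse(j + 1)                  # right subformula
            if k >= len(s) or s[k] != ')':
                raise ValueError("expected closing bracket")
            return k + 1
        raise ValueError("unexpected symbol")
    try:
        return parse(0) == len(s)
    except ValueError:
        return False

assert well_formed("(p&(q|-r))")
assert not well_formed("((q|)")
```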
Proofs as parsing certificates
However, one could imagine that a proof, as well as verifying the truth of a formula, also acts as a parsing certificate, at the same time verifying the syntactic correctness of the conclusion. Seen this way, the problem of checking syntactic correctness actually lies at the other end of a proof: the axioms. Checking that an expression is an ‘instance’ of an axiom involves, in particular, checking that the expression is a formula. This yields a $\mathbf{TC}^0$ lower bound on checking, say, Hilbert-Frege style axiomatic proofs.
But what about systems where the axioms are atomic? Gentzen style sequent systems are the natural point of call. Taking a cedents-as-lists implementation seems rather promising, and is indeed the variant originally proposed by Gentzen. However, we hit another problem, namely the weakening and additive rules:

$$\dfrac{\Gamma \vdash \Delta}{\Gamma, A \vdash \Delta}\ \mathsf{wk}\text{-}l \qquad \dfrac{\Gamma \vdash \Delta}{\Gamma \vdash \Delta, A}\ \mathsf{wk}\text{-}r \qquad \dfrac{\Gamma, A \vdash \Delta}{\Gamma, A \wedge B \vdash \Delta}\ \wedge\text{-}l \qquad \dfrac{\Gamma \vdash \Delta, A}{\Gamma \vdash \Delta, A \vee B}\ \vee\text{-}r$$
The problem with the rules above is that, in each case, the expression $A$ (respectively $B$) could be arbitrary, and we only know that the conclusion is syntactically correct if we know that $A$ (respectively $B$) is a syntactically correct formula. A complete sequent calculus typically contains either the first two rules (and multiplicative $\wedge$-r and $\vee$-l rules) or the second two rules (and contraction). Some calculi absorb weakening into the identity, but this gives us the same problem as for Hilbert-Frege systems, since axioms are no longer atomic (see [6] for a selection of such calculi).
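For illustration, consider the following weakening step (the ill-formed expression is contrived by me): it is perfectly ‘local’, in that the premise is copied verbatim into the conclusion, and yet the conclusion is not a sequent at all:

$$\dfrac{p \vdash p}{p \vdash p, ((q \vee}\ \mathsf{wk}\text{-}r$$

A checker that merely compares the premise and conclusion as lists, without parsing the new expression, would happily accept this step.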
Other classical proof formalisms exhibit the same problem of ‘new formulas’ occurring top-down. For instance, in natural deduction it is the elimination rules that are problematic. So I am left with the following curiosity:
Question. Is there a ‘natural’ proof system for which proof checking is in $\mathbf{AC}^0$?
One point of call could be deep inference proof theory, where all structural rules can be made atomic. (Note that this is impossible in sequent style systems, cf. [7].) Such systems, in a sense, enjoy ‘local’ proof checking, though it does not seem that this has been formalised in the setting of computational complexity. For now this is beyond the scope of this post, but I aim to cover the complexity of checking deep inference proofs at some point in a future post.

Some special cases of efficient checkability
Notice that logics without ‘weakening behaviour’ do admit proof systems with $\mathbf{AC}^0$-checkability. The simplest such example is that of Multiplicative Linear Logic, whose rules in a one-sided sequent system are as follows:

$$\dfrac{}{\vdash a, a^\perp}\ \mathsf{id} \qquad \dfrac{\vdash \Gamma, A \quad \vdash \Delta, B}{\vdash \Gamma, \Delta, A \otimes B}\ \otimes \qquad \dfrac{\vdash \Gamma, A, B}{\vdash \Gamma, A \parr B}\ \parr \qquad \dfrac{\vdash \Gamma, A, B, \Delta}{\vdash \Gamma, B, A, \Delta}\ \mathsf{ex}$$
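To illustrate the kind of locality at stake, here is a minimal sketch of checking a single $\parr$-step with cedents as lists (the string representation of formulas and all names are my own, purely illustrative). The step is validated by plain list and string comparisons, without ever parsing the formulas involved, since every formula in the conclusion is built verbatim from material in the premise.

```python
# Hypothetical check of one MLL par-step, cedents as lists of formula
# strings. Rule (principal formula at a fixed position, say the end):
#     |- Gamma, A, B
#     --------------  par
#     |- Gamma, (A + B)        (writing '+' for par)
# Only positional equality tests are needed; A and B are never parsed.

def check_par_step(premise: list[str], conclusion: list[str]) -> bool:
    if len(premise) < 2 or len(conclusion) != len(premise) - 1:
        return False
    *gamma, a, b = premise        # split off the last two formulas
    return conclusion[:-1] == gamma and conclusion[-1] == f"({a} + {b})"

assert check_par_step(["p", "q*", "r"], ["p", "(q* + r)"])
assert not check_par_step(["p", "q", "r"], ["p", "(r + q)"])
```

The $\otimes$- and exchange steps can be checked similarly, by positional comparison of the contexts.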
The weakening issue described above reappears as soon as the additive connectives or exponential modalities are included.
There are also many systems that operate with a restricted syntax of formulas, such as Resolution. The problem of proof checking here is much simpler, since we do not need to determine scoping. In particular, proof checking is possible in $\mathbf{AC}^0$ with an appropriate implementation of the rules. For example, the following conventions will suffice (a sketch of such a checker is given after the list):
- Clauses are implemented as lists, so that clause-identity is in $\mathbf{AC}^0$.
- The cut rule insists that the resolved literals occur in some fixed position in a clause, e.g. at the beginning or the end.
- The structural rules exchange and contraction are explicit, due to the previous two points.
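Here is a minimal sketch of checking a single cut/resolution step under these conventions (the literal representation and function names are my own, illustrative choices):

```python
# Checking one resolution step under the conventions above: clauses are
# lists of literals, and the resolved literals must occupy a fixed
# position -- here, the END of each premise clause:
#     C, x     D, -x
#     --------------  (cut)
#          C, D
# The check reduces to positional equality tests on lists.

Literal = str  # e.g. "x" or "-x"; the representation is illustrative

def complementary(l1: Literal, l2: Literal) -> bool:
    """l1 and l2 are a literal and its negation, in either order."""
    return l1 == "-" + l2 or l2 == "-" + l1

def check_cut_step(p1: list[Literal], p2: list[Literal],
                   conclusion: list[Literal]) -> bool:
    if not p1 or not p2:
        return False
    c, x = p1[:-1], p1[-1]        # context and resolved literal
    d, y = p2[:-1], p2[-1]
    return complementary(x, y) and conclusion == c + d

assert check_cut_step(["p", "x"], ["q", "-x"], ["p", "q"])
assert not check_cut_step(["p", "x"], ["q", "x"], ["p", "q"])
```

Exchange and contraction steps are checked the same way, by comparing the premise and conclusion lists position by position; each such comparison is just a conjunction of symbol-equality tests.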
From the point of view of $\mathbf{AC}^0$-checkability, it does not matter whether the proof is tree-like or dag-like. This difference does, however, affect the amount of time taken in traversing the proof deterministically without random access.