*Cut-elimination* for intuitionistic logic has particular significance to proof theorists due to the constructive nature of the logic. Cut-free proofs give rise to significant computational information, including *interpolants* of implications and, in the case of predicate logic, *witnesses* of existentials. Moreover, the process of cut-elimination can itself be viewed as a computational process, thanks to the Curry-Howard correspondence. This post is a continuation of the previous one, investigating ‘simpler’ proofs of cut-elimination in special cases. One particular criterion for simplicity here is computational complexity, but I am also interested in expositional simplicity.

### Hauptsatz: cut-elimination or cut-admissibility?

As for classical logic before, we should distinguish two problems, either of which could equally interpret Gentzen’s Hauptsatz:

- Every theorem has a cut-free proof (*cut-admissibility*).
- Every proof can be ‘locally’ transformed into one of the same theorem without cuts (*cut-elimination*).

Somewhat ironically, the usual statements (e.g. the one on Wikipedia) of the Hauptsatz correspond to cut-admissibility, while the usual understanding of the Hauptsatz among (at least structural) proof theorists is that of cut-elimination. In this post I am more interested in cut-elimination, but let us at least mention some basic properties of cut-free LJ proofs for intuitionistic propositional logic (IPL).

Proposition 1 (Folklore). For any theorem A of IPL, we can compute a cut-free LJ proof of A in exponential time and in polynomial space.

This result is widely known, and appears in several guises in the literature (see [1] for a recent survey of related results). It is a special case of the fact that any ‘well-behaved’ analytic sequent system admitting the structural rules of *contraction* and *weakening* has the above property, thanks to a simple *loop checking* argument.
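To make the loop-checking idea concrete, here is a minimal sketch of backward proof search in a set-based LJ, written in Python (the encoding of formulas as tuples and all function names are my own, purely for illustration). Along each branch we record the sequents already visited and fail upon repetition; since every sequent mentions only subformulas of the goal, there are at most exponentially many distinct sequents along any branch, so the search terminates.

```python
# Formulas: ('atom', name), ('bot',), ('and', A, B), ('or', A, B), ('imp', A, B)
BOT = ('bot',)

def prove(gamma, goal, seen=frozenset()):
    """Backward proof search in a set-based LJ with loop checking.

    gamma: frozenset of formulas (antecedent); goal: a formula (succedent).
    `seen` records the sequents along the current branch; repeating one fails.
    """
    key = (gamma, goal)
    if key in seen:
        return False  # loop check: this sequent already occurs on the branch
    seen = seen | {key}
    if goal in gamma or BOT in gamma:
        return True  # identity (general id is admissible) / bot-l
    # Invertible right rules: safe to commit without backtracking.
    if goal[0] == 'and':
        return prove(gamma, goal[1], seen) and prove(gamma, goal[2], seen)
    if goal[0] == 'imp':
        return prove(gamma | {goal[1]}, goal[2], seen)
    # Non-invertible right rule for 'or': try each disjunct, but fall
    # through to the left rules on failure.
    if goal[0] == 'or':
        if prove(gamma, goal[1], seen) or prove(gamma, goal[2], seen):
            return True
    # Left rules (sets absorb weakening; imp-l keeps its principal formula,
    # building in contraction).
    for f in gamma:
        if f[0] == 'and' and prove((gamma - {f}) | {f[1], f[2]}, goal, seen):
            return True
        if f[0] == 'or' and prove((gamma - {f}) | {f[1]}, goal, seen) \
                        and prove((gamma - {f}) | {f[2]}, goal, seen):
            return True
        if f[0] == 'imp' and prove(gamma, f[1], seen) \
                         and prove(gamma | {f[2]}, goal, seen):
            return True
    return False

p, q = ('atom', 'p'), ('atom', 'q')
peirce = ('imp', ('imp', ('imp', p, q), p), p)
print(prove(frozenset(), ('imp', p, p)))  # True
print(prove(frozenset(), peirce))         # False: Peirce's law is not intuitionistic
```

The sharper bounds of Proposition 1 of course need a more careful accounting than this naive search, but the loop check is already enough for termination.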

Many important consequences of the Hauptsatz in fact require only this weaker statement of cut-admissibility. For example:

Theorem 2 (Craig interpolation). If IPL ⊢ A → B, then there is a formula C (the *interpolant*), containing only propositional variables occurring in both A and B, such that IPL ⊢ A → C and IPL ⊢ C → B.

However, Proposition 1 above is not really a ‘cut-elimination’ theorem *per se*, since it merely asserts cut-free completeness of LJ. In particular, it does not allow us to distinguish proofs by their normal forms under some reduction process.

### Curry-Howard: cut-elimination vs normalisation

For many structural proof theorists, cut-elimination for IPL is *identified* with the normalisation of simply typed λ-terms. The latter is known to have a non-elementary lower bound, since we can type functions of arbitrary elementary growth rate. In fact we have a much stronger result from [2]:

Theorem 3 (Statman ’79). Deciding whether two simply typed λ-terms reduce to the same normal form is not elementary recursive.

Via the Curry-Howard correspondence, we may view this result as a lower bound on deciding *proof equivalence* for intuitionistic logic, but I will leave the discussion of this to another day.

However, Statman’s result does not imply any lower bounds on the complexity of cut-elimination for LJ, since normalisation of λ-terms constitutes just one particular strategy for cut-elimination. Note, in particular, that Proposition 1 and Theorem 3 are perfectly consistent with each other.
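As an aside, the elementary growth rates typable in the simply typed λ-calculus can be made concrete with the standard Church-numeral trick (this is only an illustration of typable growth, not Statman’s actual construction): applying one numeral to another computes exponentiation, and each application in a left-associated chain of 2s is simply typable, at ever larger types. Using Python closures as a stand-in for λ-terms:

```python
# Church numeral 2 = λf. λx. f (f x), encoded as a Python closure.
two = lambda f: lambda x: f(f(x))

def to_int(n):
    """Decode a Church numeral by applying it to successor and zero."""
    return n(lambda k: k + 1)(0)

# Applying one Church numeral to another computes exponentiation, so a
# left-associated chain of k 2s normalises to a tower of 2s of height k.
print(to_int(two(two)))            # 4
print(to_int(two(two)(two)))       # 16
print(to_int(two(two)(two)(two)))  # 65536
```

Each `two` in the chain is typed at a different (larger) type, so any fixed term only reaches a tower of fixed height; it is as a function of the *length* of the chain that the growth is non-elementary.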

On the other hand, more efficient (yet local) cut-elimination strategies may induce alternative notions of *computation-as-normalisation*, which could lead to other interesting term calculi and type systems for the Curry-Howard-oriented proof theorist.

### Semi-invertibility and elementary cut-elimination

The fact that elementary cut-elimination is possible in LJ seems to be folklore (though I will cite some particular results in a moment). All arguments I am aware of use a form of ‘semi-invertibility’. Recall from last time that invertibility of rules in LK allowed us to normalise proofs in exponential time. For intuitionistic logic, the same trick works for cuts on conjunctions ∧, but we have invertibility on only one side for disjunction ∨ and implication →. As I mentioned last time, this sort of one-sided invertibility is well understood under the guise of focussing, where the connectives of a logic are organised according to whether their introduction rules are invertible on the left (*positive*) or invertible on the right (*negative*) (see [3]).
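To fix conventions, the pattern can be summarised by the following (standard) LJ rules; precise formulations vary, but the point is which rules are invertible:

```latex
% Invertible: \land-r (both premisses follow from the conclusion),
% \lor-l, and \to-r.
\frac{\Gamma \vdash A \quad \Gamma \vdash B}{\Gamma \vdash A \land B}\ \land\text{-r}
\qquad
\frac{\Gamma, A \vdash C \quad \Gamma, B \vdash C}{\Gamma, A \lor B \vdash C}\ \lor\text{-l}
\qquad
\frac{\Gamma, A \vdash B}{\Gamma \vdash A \to B}\ \to\text{-r}
\\[2ex]
% Not invertible: \lor-r commits to a disjunct (i \in \{1,2\}),
% and \to-l forgets the succedent C in its left premiss.
\frac{\Gamma \vdash A_i}{\Gamma \vdash A_1 \lor A_2}\ \lor\text{-r}
\qquad
\frac{\Gamma \vdash A \quad \Gamma, B \vdash C}{\Gamma, A \to B \vdash C}\ \to\text{-l}
```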

Semi-invertibility can still be used to drive a more efficient cut-elimination argument, e.g. via the following reduction:

Here the new proof is obtained from the old one by replacing every immediate ancestor of the indicated A → B by B, and adding A to the antecedent of all affected sequents. Any affected →-r steps may be safely deleted.

However, note that the →-cut in the RHS above is of the same logical complexity as the cut in the LHS, in contrast with the ‘full inversion’ reductions for propositional LK. Termination is still guaranteed by the fact that the →-cut roots a smaller subproof, but many more technicalities are involved in organising the reductions into a cut-elimination procedure of demonstrably elementary complexity.

The first result I am aware of that established elementary bounds on the complexity of cut-elimination for IPL is Hudelmaier’s work [4]. Inspired by the proof-search-oriented calculus G4ip for IPL, Hudelmaier devised inversion rules by employing a modest form of deep inference, inspecting the antecedents of implications. G4ip is significant enough to merit a post of its own, so I will avoid discussing it here and refer only to [1] for further details. More pertinent is (a simplified statement of) Hudelmaier’s main result:

Proposition 4 (Hudelmaier ’92). Any proof of LJ may be transformed by a sequence of local reductions into a cut-free proof of the same conclusion, of size elementarily bounded in the size of the original proof.

Recently Baaz and Fermüller obtained elementary bounds on cut-elimination for some nontrivial fragments of *predicate* LJ [5]. Along the way, they formalised a semi-invertibility argument for cut-elimination in propositional LJ, improving Hudelmaier’s result:

Proposition 5 (Baaz & Fermüller ’15). Any proof of LJ may be transformed by a sequence of local reductions into a cut-free proof of the same conclusion, of size triple-exponential in the size of the original proof.

As far as I know, this triple-exponential bound is the best known for cut-elimination in LJ. However, recalling Proposition 1, note that we now have a curious (and, to me, surprising) gap between the bounds on the size of cut-free proofs and on the complexity of cut-elimination.

### Closing the gap?

It astounds me that Proposition 1 is so easy to establish, while Propositions 4 and 5 above seem non-optimal yet already require much more work. In classical logic, as mentioned in the previous post, exponential-time cut-elimination is readily established, thanks to a full invertibility argument. It is not clear to me whether exponential-time cut-elimination *requires*, in some formal sense, invertible rules for all connectives, or if there is another way. Put more succinctly:

Question. Is cut-elimination for LJ possible in exponential time?

In fact, as far as I know, there is not even a *double-exponential-time* procedure known! It would not surprise me if someone has worked this out on the back of a cigarette packet, or if there is some hidden literature reference to this effect. If anyone knows anything more about this, please do get in touch or leave a comment below!

### References

[1]

```
@Inbook{Dyc16:ipl-decision-procedures,
author="Dyckhoff, Roy",
editor="Kahle, Reinhard
and Strahm, Thomas
and Studer, Thomas",
title="Intuitionistic Decision Procedures Since Gentzen",
bookTitle="Advances in Proof Theory",
year="2016",
publisher="Springer International Publishing",
address="Cham",
pages="245--267",
isbn="978-3-319-29198-7",
doi="10.1007/978-3-319-29198-7_6",
url="https://doi.org/10.1007/978-3-319-29198-7_6"
}
```

[2]

```
@article{Sta79:stlc-non-elementary,
author = {Richard Statman},
title = {The Typed {\lambda}-Calculus is not Elementary Recursive},
journal = {Theor. Comput. Sci.},
volume = {9},
pages = {73--81},
year = {1979},
url = {https://doi.org/10.1016/0304-3975(79)90007-0},
doi = {10.1016/0304-3975(79)90007-0},
}
```

[3]

```
@article{And92:focussing,
author = {Jean-Marc Andreoli},
title = {Logic Programming with Focusing Proofs in Linear Logic},
journal = {J. Log. Comput.},
volume = {2},
number = {3},
pages = {297--347},
year = {1992},
url = {https://doi.org/10.1093/logcom/2.3.297},
doi = {10.1093/logcom/2.3.297},
}
```

[4]

```
@article{Hud92:bounds-cut-elim-lj,
author = {J{\"{o}}rg Hudelmaier},
title = {Bounds for cut elimination in intuitionistic propositional logic},
journal = {Arch. Math. Log.},
volume = {31},
number = {5},
pages = {331--353},
year = {1992},
url = {https://doi.org/10.1007/BF01627506},
doi = {10.1007/BF01627506},
}
```

[5]

```
@inproceedings{BaaFer15:elementary-cut-elim-lj,
author = {Matthias Baaz and Christian G. Ferm{\"u}ller},
title = {{Elementary Elimination of Prenex Cuts in Disjunction-free Intuitionistic Logic}},
booktitle = {24th EACSL Annual Conference on Computer Science Logic (CSL 2015)},
pages = {94--109},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-939897-90-3},
ISSN = {1868-8969},
year = {2015},
volume = {41},
editor = {Stephan Kreutzer},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2015/5409},
URN = {urn:nbn:de:0030-drops-54097},
doi = {10.4230/LIPIcs.CSL.2015.94},
annote = {Keywords: Cut-elimination, sequent calculus, intuitionistic logic}
}
```

Hi Anupam, thanks for a nice post! I wonder if the connection to the λ-calculus here is a red herring. The λ-calculus corresponds to a natural deduction proof system, not the LJ sequent calculus. So why might one expect that lower bounds for one proof system would entail any bounds for a different proof system?

Hi Reuben,

Thanks for the comment too. As you say, the sequent calculus is not the usual proof-theoretic counterpart of the simply typed λ-calculus, vis-à-vis the Curry-Howard analogy, but nonetheless normal natural deduction proofs and cut-free sequent proofs are closely related, in particular in terms of proof size.

There are standard *elementary* translations from normal natural deduction proofs to cut-free sequent proofs, and vice versa. Thus non-elementary lower bounds on normalisation in natural deduction do indeed yield non-elementary lower bounds for the cut-free sequent calculus (and vice versa). What is more, the translation can be realised by *local* proof transformations in the sequent calculus, so normalisation really does induce some form of ‘cut-elimination’ (as I consider it above). A good reference for all this is Troelstra and Schwichtenberg’s excellent textbook [1] (see in particular Section 6.3 for the translations, and 6.3.1B and 6.3.1C for the complexity bounds).

Beyond this, there has been significant work on establishing a Curry-Howard ‘correspondence’ for the sequent calculus that identifies cut-elimination with normalisation of some term calculus. One should in particular mention Herbelin’s fragment LJT of LJ, for which Herbelin gives a strongly normalising and confluent cut-elimination procedure (and an associated typed λ-calculus) in [2].

[1] Troelstra, A.S. and Schwichtenberg, H., 2000. Basic proof theory (No. 43). Cambridge University Press.

Herbelin, H., 1994, September. A λ-calculus structure isomorphic to Gentzen-style sequent calculus structure. In International Workshop on Computer Science Logic (pp. 61-75). Springer, Berlin, Heidelberg.

Thanks Anupam, yes that clears things up. Regarding the Curry-Howard correspondence for sequent calculus, the following paper may be of interest, which builds on Herbelin’s LJT calculus.

Espírito Santo, J., 2015. Curry-Howard for Sequent Calculus at Last! In 13th International Conference on Typed Lambda Calculi and Applications (TLCA 2015).

Thanks for the reference, Reuben. I was aware of that work as a sort of extension of Herbelin’s work to the classical sequent calculus (though I have not read it carefully enough to appreciate the other concepts involved). I wonder if there is some sort of general approach to devising Curry-Howard style interpretations of sequent calculi, e.g. based on non-classical logics such as modal logic or relevant logic. This sort of stuff is certainly not my principal expertise, but it would be interesting to know.

Nice post! (I realise that it was posted a couple of months ago!)

The exponential bound on cut-free proofs in Proposition 1 is a direct consequence of using loop checking to ensure that no sequent is repeated along any branch of the proof tree. Since the cut-elimination arguments are meant to be local (in some sense), and in particular since loop checking to eliminate repeats is not employed, shouldn’t we expect different bounds?

…but we also have more input information! Cut-elimination has an actual proof to manipulate, unlike proof search. What is more, such a local exponential-time cut-elimination algorithm *does* exist for *classical* propositional logic (see my previous post). Wouldn’t it be nice to know if we can do it for the intuitionistic case? As far as I know, we do not have an answer to that (cf. my final Question).

Yes, I am just thinking of the usual loop-check argument for Proposition 1, using the structural rules to make sure sequents (as sets) are increasing in size, so as to polynomially bound the depth.

I agree, it is interesting to see how many exponents could be knocked off from the intuitionistic cut-elimination. I see that in the Hudelmaier proof for classical and intuitionistic propositional logic, every cut reduction reduces the maximum, over all branches, of the sum of the degrees of the cut formulas, at the cost of doubling the height of the proof. In the classical case, the degree deg of the cut formula is defined in the usual way, e.g. deg(A&B) = deg(A) + deg(B) + 1, so the deg-sum of the cut formulas along a branch in pi is bounded by |pi|. In the intuitionistic case, the degree degi is defined by e.g. degi(A&B) = degi(A)(1 + degi(B)), from which degi(A&B) <= 2^deg(A&B) follows, and so the degi-sum of the cut formulas along a branch in pi (i.e. degi(A1) + .. + degi(An) <= 2^{deg(A1) + .. + deg(An)}) is bounded by 2^2^|pi| (or at least that’s my understanding of how the bound comes about). So finding a definition of degree on formulas that reduces the sum under the cut reductions and also gives a sharper bound wrt |pi| could be one route to knocking off exponents?

The Dyckhoff definition of degree in his contraction-free sequent calculus is degD(A&B) = degD(A) + degD(B) + 2 (and no worse for the other cases), so (thinking aloud) I wonder if degD(A) <= 2.deg(A) could be used to knock off two exponents? I didn’t check the details; I’m just observing from the degree functions.
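Since the degree functions in this thread are concrete, the comparisons are easy to sanity-check mechanically. A small sketch in Python (only the conjunction clauses were given above, so the base cases for atoms — deg = 0, degi = 1, degD = 0 — and the restriction to purely conjunctive formulas are my own assumptions):

```python
import random

def deg(A):
    """Classical degree: deg(A&B) = deg(A) + deg(B) + 1; atoms 0 (assumed)."""
    return 0 if isinstance(A, str) else deg(A[0]) + deg(A[1]) + 1

def degi(A):
    """Intuitionistic degree: degi(A&B) = degi(A) * (1 + degi(B)); atoms 1 (assumed)."""
    return 1 if isinstance(A, str) else degi(A[0]) * (1 + degi(A[1]))

def degD(A):
    """Dyckhoff-style degree: degD(A&B) = degD(A) + degD(B) + 2; atoms 0 (assumed)."""
    return 0 if isinstance(A, str) else degD(A[0]) + degD(A[1]) + 2

def random_conj(n):
    """A random conjunction tree over n atoms; atoms are plain strings."""
    if n == 1:
        return 'p'
    k = random.randint(1, n - 1)
    return (random_conj(k), random_conj(n - k))

# Check degi(A) <= 2^deg(A) and degD(A) = 2 * deg(A) on random conjunction trees.
for _ in range(1000):
    A = random_conj(random.randint(1, 30))
    assert degi(A) <= 2 ** deg(A)
    assert degD(A) == 2 * deg(A)  # exact: each conjunction contributes 2 rather than 1
```

On conjunctions alone, with these base cases, degD is exactly 2·deg; the clauses for the other connectives (not spelled out in the comments above) are where the comparison would become interesting.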