
The Proof Theory Blog


A new linear inference of size 8

Posted on June 25, 2020 (updated June 26, 2020) by Anupam Das

Linear inferences are propositional tautologies of the form A\to B , where A and B are formulas with at most one occurrence of each propositional variable. These form a \mathbf{coNP}-complete set, but surprisingly little is known about their structure, despite their importance in structural proof theory (and beyond). In this post I will cover some of the background on linear inferences, and present a new ‘independent’ linear inference on 8 variables, which I conjecture is minimal.
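To make the definition concrete, here is a small Python sketch (the encoding and names are my own, not from the post) that checks both conditions for a candidate inference: that each propositional variable occurs at most once on each side, and that A → B is a tautology, by brute force over all assignments.

```python
import re
from itertools import product

VAR = r"\b(?!and\b|or\b)[a-z]\w*\b"  # variable names, skipping the connectives

def is_linear(formula):
    """Each propositional variable occurs at most once."""
    names = re.findall(VAR, formula)
    return len(names) == len(set(names))

def is_tautology(lhs, rhs):
    """Check the implication lhs -> rhs under every 0/1 assignment."""
    names = sorted(set(re.findall(VAR, lhs + " " + rhs)))
    return all(not eval(lhs, {}, dict(zip(names, vals))) or
               eval(rhs, {}, dict(zip(names, vals)))
               for vals in product([False, True], repeat=len(names)))

def is_linear_inference(lhs, rhs):
    return is_linear(lhs) and is_linear(rhs) and is_tautology(lhs, rhs)

print(is_linear_inference("a and (b or c)", "(a and b) or c"))  # True
```

The brute-force check is exponential in the number of variables, which is consistent with the coNP-completeness mentioned below: no essentially better method is expected in general.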

Switch, medial and relationships to linear logic

Classical examples of linear inferences are the switch and medial rules from deep inference (a proof methodology that allows rules to operate arbitrarily deep inside a formula, as opposed to just on the main connective).

    \[\begin{array}{rrcl}\mathsf s : & \quad a \wedge (b \vee c) & \to &  (a\wedge b)\vee c\\\mathsf m : & \quad (a\wedge b)\vee (c\wedge d) &\to & (a\vee c)\wedge (b\vee d)\end{array}\]

These inferences are fundamental in Linear Logic, since they govern the interaction between multiplicative conjunction \otimes and disjunction \parr (in the case of switch) and between the additive conjunction \with and disjunction \oplus (in the case of medial).
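Both rules can be checked semantically by enumerating truth assignments. The sketch below (my own illustration, not from the post) confirms that switch and medial are tautologies while their converses are not, so each is a genuinely one-directional inference.

```python
from itertools import product

def implies(lhs, rhs, names):
    """Does lhs -> rhs hold under every 0/1 assignment to `names`?"""
    return all(not eval(lhs, {}, dict(zip(names, vals))) or
               eval(rhs, {}, dict(zip(names, vals)))
               for vals in product([False, True], repeat=len(names)))

switch = ("a and (b or c)", "(a and b) or c", ["a", "b", "c"])
medial = ("(a and b) or (c and d)", "(a or c) and (b or d)", ["a", "b", "c", "d"])

for lhs, rhs, names in (switch, medial):
    print(implies(lhs, rhs, names), implies(rhs, lhs, names))  # True False
```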

The set \mathsf L of all linear inferences was observed to be \mathbf{coNP}-complete by Lutz Strassburger in [11], so it is, in a sense, as rich as all of propositional logic. Interestingly, we also have the following well-known correspondence with linear logic:

Observation 1. \mathsf L coincides with the multiplicative fragments of Blass' Game Semantics of linear logic [12] and Japaridze's Computability Logic. (In both these settings, linear inferences are better known as 'binary tautologies'. In fact the two notions are not exactly the same, but they are equivalent in a certain formal sense.)

Before going any further, I had better explain how we construct ‘proofs’ using linear inferences.

Composing inferences via rewriting

Construing \{\mathsf s,\mathsf m\} as a rewriting system, modulo associativity and commutativity of \vee and \wedge, we can derive other linear inferences. For instance the following is a derivation, with redexes underlined (the ‘redex’ of a rewrite step is the subterm of the LHS that matches the LHS of the corresponding rule instance; the RHS of this instance is called the ‘contractum’):

    \[\begin{array}{rll}a \wedge \underline{b \wedge (c \vee d)} & \to_\mathsf s & \underline{a \wedge ((b \wedge c) \vee d)} \\& \to_\mathsf s &  \underline{(a\wedge d) \vee (b \wedge c)} \\&  \to_\mathsf m &  (a\vee c) \wedge (b \vee d)\end{array}\]
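Each step of this derivation is a sound implication, which can be confirmed by brute force (the encoding below is my own; each line of the chain is one formula of the derivation, read top to bottom):

```python
from itertools import product

def implies(lhs, rhs, names=("a", "b", "c", "d")):
    """Does lhs -> rhs hold under every 0/1 assignment?"""
    return all(not eval(lhs, {}, dict(zip(names, vals))) or
               eval(rhs, {}, dict(zip(names, vals)))
               for vals in product([False, True], repeat=len(names)))

chain = [
    "a and (b and (c or d))",   # start
    "a and ((b and c) or d)",   # switch
    "(a and d) or (b and c)",   # switch (modulo commutativity)
    "(a or c) and (b or d)",    # medial (modulo commutativity)
]
print(all(implies(s, t) for s, t in zip(chain, chain[1:])))  # True
```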

However, notice that we could have also derived this inference using a simple application of the ‘mix’ rule (I really don't like this name, since it seems unrepresentative of the usual mix of Gentzen, a merge of a cut and several contractions; however, it is common terminology in the Linear Logic community):

    \[a \wedge b \to a \vee b\]

This is not a priori derivable with the system \{\mathsf s , \mathsf m \} above, unless we admit unit equations (by a ‘unit’ I mean the truth constants false and true, which act as ‘units’ for disjunction and conjunction, in the algebraic sense):

    \[a \vee \bot = a = a \wedge \top \qquad a\wedge \bot = \bot \qquad a \vee \top = \top\]
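These unit equations are genuine logical equivalences, as a quick enumeration confirms (a throwaway check of my own, encoding \bot and \top as Python's False and True):

```python
# Each pair below should evaluate equally under both values of a.
equations = [
    ("a or False", "a"),        # a ∨ ⊥ = a
    ("a and True", "a"),        # a ∧ ⊤ = a
    ("a and False", "False"),   # a ∧ ⊥ = ⊥
    ("a or True", "True"),      # a ∨ ⊤ = ⊤
]
ok = all(eval(l, {}, {"a": v}) == eval(r, {}, {"a": v})
         for l, r in equations for v in (False, True))
print(ok)  # True
```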

Henceforth I will assume that all rewriting takes place modulo associativity, commutativity and the unit equations above. (In fact, it turns out that these equations are precisely all the linear equivalences, semantically speaking, so this is a somewhat canonical choice.)

Example 2. (Atomicity of contraction). Any non-atomic instance of \mathsf c : A \vee A \to  A may be simplified using the medial rule. When A is a disjunction, it suffices to invoke associativity and commutativity of \vee. The interesting case is when A is a conjunction, for which we may apply the following reduction.

    \[\begin{array}{rcl}(A \wedge B) \vee (A \wedge B) & \to_{\mathsf m} & \underline{(A\vee A)} \wedge (B\vee B) \\& \to_\mathsf c & A \wedge \underline{(B\vee B)} \\& \to_\mathsf c & A \wedge B\end{array}\]

Applying this reduction recursively means that we can simulate each \mathsf c step using only atomic instances. This is a crucial feature of deep inference proof theory, giving rise to proof decompositions that are unavailable in the sequent calculus [13].

\{\mathsf s,\mathsf m\}-independent linear inferences: the quest for minimality

Lutz Strassburger found in [14] a family of linear inferences based on the pigeonhole principle (which states that, if n+1 pigeons sit in n holes, then there must be some hole with two pigeons in it) that were independent of switch and medial, even with units. (More precisely, he gave ‘balanced tautologies’, where each variable is used exactly once positively and once negatively; these may be construed as the RHS of a linear inference by taking the conjunction of all excluded-middles as the LHS.) The smallest one there consisted of 36 variables. I improved that in [15] to one on 10 variables (the method of obtaining this inference, also based on the pigeonhole principle, similarly extends to an infinite family of independent linear inferences):

(1)   \begin{equation*}\begin{array}{lc}& (a \vee (b \wedge b')) \wedge (c \vee d ) \wedge (c'\vee d') \wedge ((e\wedge e') \vee f) \\\to & (a \wedge ( c \vee e)) \vee (c' \wedge e') \vee (b' \wedge d') \vee ((b \vee d) \wedge f)\end{array}\end{equation*}
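Soundness of (1) can be verified by brute force over all 2^10 assignments. In the sketch below (my own encoding, writing b1 for b' and so on), the implication holds in every case:

```python
from itertools import product

names = ["a", "b", "b1", "c", "c1", "d", "d1", "e", "e1", "f"]
lhs = "(a or (b and b1)) and (c or d) and (c1 or d1) and ((e and e1) or f)"
rhs = "(a and (c or e)) or (c1 and e1) or (b1 and d1) or ((b or d) and f)"

valid = all(not eval(lhs, {}, dict(zip(names, vals))) or
            eval(rhs, {}, dict(zip(names, vals)))
            for vals in product([False, True], repeat=len(names)))
print(valid)  # True
```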

Around the same time I had an undergraduate student, Alvin Šipraga, who wrote a program that searches for linear inferences, as well as checking validity and other related tasks [16]. Unfortunately we hit computational barriers even at 7 variables, but we were able to verify the following:

Proposition 3. Any linear inference on 6 variables or fewer is already derivable in \{\mathsf s, \mathsf m\} (with units).

This raises a rather curious question: what is the smallest linear inference that is independent of switch and medial? By (1) and the proposition above we had narrowed the window to 7-10 variables.

8 years later, an 8-variable inference

Since 2012 a lot has happened. Axiomatising \mathsf{L} via only linear inferences remained open until 2015, when Lutz and I showed the following result in [17] [18] (actually one of my favourite results, requiring techniques from graph theory, term rewriting and Boolean function theory), resolving an open problem in Computability Logic and Blass’ game semantics from the early ’90s (see, e.g., [19] for more details):

Theorem 4. \mathsf L cannot be axiomatised by a polynomial-time set of linear inferences, unless \mathbf{coNP} = \mathbf{NP}.

Since then, I have become more interested in the graph-theoretic aspects underlying the above result, working with students Cameron Calk and Tim Waring. Independently, Ross Horne has been developing similar ideas in the setting of linear logic, with Lutz Strassburger and Matteo Acclavio, and the whole area of ‘graph-based logics’ is now a rather active topic [20] [21] (albeit one beyond the scope of this post).

In the midst of all this recent activity, I re-examined my old arguments on linear inferences, rather trying to tackle a different question concerning ‘yanking’ in deep inference (a topic I will save for another day). In the course of all this, I stumbled upon a new linear inference on 8 variables:

(2)   \begin{equation*}\begin{array}{rl}& (a \vee (b \wedge b')) \wedge (c \vee d) \wedge ((e \wedge e') \vee f) \\\to & (a \wedge (c \vee e)) \vee (b'\wedge e') \vee ((b\vee d) \wedge f)\end{array}\end{equation*}
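As with (1), validity of (2) is a finite check, this time over 2^8 assignments (again my own encoding, with b1 for b' etc.):

```python
from itertools import product

names = ["a", "b", "b1", "c", "d", "e", "e1", "f"]
lhs = "(a or (b and b1)) and (c or d) and ((e and e1) or f)"
rhs = "(a and (c or e)) or (b1 and e1) or ((b or d) and f)"

valid = all(not eval(lhs, {}, dict(zip(names, vals))) or
            eval(rhs, {}, dict(zip(names, vals)))
            for vals in product([False, True], repeat=len(names)))
print(valid)  # True
```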

One way to obtain this inference is by simply setting d'=e'=\bar c' in (1). This takes care of soundness, but the fact that it is underivable requires a little work. A case analysis shows that there are only two applicable steps to the LHS, both switches, yielding one of the following two formulas:

    \[((a \wedge (c \vee d)) \vee (b \wedge b') ) \wedge ((e \wedge e') \vee f)\qquad \text{or} \qquad(a \vee (b \wedge b')) \wedge ((e \wedge e') \vee ((c \vee d ) \wedge f))\]

Any other step applied to the LHS results in a formula that does not logically imply the RHS. In the same way, no switch or medial step is applicable from either of the formulas above. Thus (2) is indeed independent of switch and medial.
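As a sanity check on this case analysis (my own encoding once more), one can confirm that both candidate formulas do follow from the LHS of (2), and that each still logically implies the RHS; the obstruction to derivability is thus syntactic rather than semantic.

```python
from itertools import product

names = ["a", "b", "b1", "c", "d", "e", "e1", "f"]

def implies(lhs, rhs):
    return all(not eval(lhs, {}, dict(zip(names, vals))) or
               eval(rhs, {}, dict(zip(names, vals)))
               for vals in product([False, True], repeat=len(names)))

lhs2 = "(a or (b and b1)) and (c or d) and ((e and e1) or f)"
rhs2 = "(a and (c or e)) or (b1 and e1) or ((b or d) and f)"
# The two formulas reachable from lhs2 by a switch step:
mid1 = "((a and (c or d)) or (b and b1)) and ((e and e1) or f)"
mid2 = "(a or (b and b1)) and ((e and e1) or ((c or d) and f))"

for mid in (mid1, mid2):
    print(implies(lhs2, mid), implies(mid, rhs2))  # True True
```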

Only one question left…

Recall that Proposition 3 tells us that there is no \{\mathsf s,\mathsf m\}-independent linear inference on 6 or fewer variables. Thus only one question is left which, despite its simplicity, remains unresolved:

Question 5. Is there a linear inference of size 7 not derivable in \{\mathsf s, \mathsf m\} (with units), or is (2) above the/a smallest such inference?

My feeling is that this question is probably not so difficult to resolve, either through mathematical or computational techniques (or a mixture), but no one seems to have devoted the effort to answering it. I, for one, would very much welcome a resolution!
