Standardized notation can be helpful, especially after a topic has matured. For example, Gentzen used → as an infix symbol to build sequents from lists of formulas. While one also sees ⇒ used, it seems that in recent years, ⊢ has become the standard, especially when proof theory is applied to type theory (where ⊢ is invariably used) and computational logic (where → is often used for other purposes). This notation is overloaded: the expression Γ ⊢ Δ is both a syntactic object (a sequent) and the mathematics-level judgment that Γ entails Δ. Nonetheless, we seem to be able to live with such overloading.
The many notations for focused sequents
Since I am working with various focused proof systems, I want a coherent and appealing way to present the sequents and rules involved with building the two focusing phases. This notation should be consistent over various sequent calculus systems, including one-sided and two-sided; single-conclusion and multi-conclusion; single-focus and multi-focus. I have come up with the following notation and terminology. I share these here since others might find this notation convenient.
Let us first list some of the notations used to present focused sequents.
- Andreoli’s original format was only for single-sided sequents for linear logic. His triadic sequent system [1] had sequents of the form ⊢ Θ : Γ ⇑ L and ⊢ Θ : Γ ⇓ F, where F denotes a formula; L, Γ, and Θ are multisets of formulas; weakening and contraction are allowed in Θ (i.e., the classical zone); weakening and contraction are not allowed in Γ (i.e., the linear zone); and introduction rules take place in L and with F.
- Girard used zones (with one acting as the stoup) in LU and LC [9,10] with the classical zones closest to the turnstile. Herbelin, Curien, and Munch-Maccagnoni used similar notation [5,6,11], as did Zeilberger [18]. In their presentation of LKT/LKQ, Danos et al. [7] also used sequents with various zones, one of which denotes the stoup.
- LJQ by Dyckhoff and Lengrand [8] used two different sequent arrows to denote the sequents used in the two separate phases.
- Chaudhuri et al. [4] used their own pair of annotated sequent forms for the two phases in focused linear logic systems.
- In some early writing of mine [14], I put a formula above or below the sequent arrow to denote the formula in focus (denoting the left or right focus). Chuck Liang and I continued that practice in [12] and added brackets to denote different zones.
The terminology associated with focused proof systems varies a lot as well. For example, Andreoli referred to the two phases of focused-proof construction using the adjectives asynchronous and synchronous. Describing these phases as the invertible and non-invertible phases seems natural, although sometimes an inference rule in the synchronous phase can be invertible (an example appears in the LJF proof system below). So these days I favor calling these phases simply the negative and positive phases.
As another example, there is the religious term “stoup” (i.e., “blessed”) used by Girard and others to denote the formula under focus. More dynamic terminology was used by Andreoli (e.g., reaction rules) and the Carnegie Mellon group (e.g., active, neutral, passive, focus, blur). I have settled on terminology that is more motivated by an administrative perspective of focusing as a process for building synthetic inference rules. This perspective suggests the following terminology. (I assume we are building proofs in a conclusion-to-premise fashion.)
- The first step in building a synthetic inference rule is to decide on which formula in storage should be selected to provide the material from which the synthetic rule is built. That formula is placed into a temporary part of the sequent (the staging area).
- Applications of non-invertible (positive) introduction rules in the staging area form the first phase of this process. When this phase is exhausted, the focus phase ends with either an initial or release rule.
- Applications of invertible (negative) inference rules in the staging area form the second phase. Any formula that an invertible rule cannot introduce is stored, meaning it is moved from the staging area to a storage zone.
For these reasons, I prefer using the terms decide, store, and release to describe certain administrative inference rules in a focused proof system.
Selecting the notation
The notation I settled on is simply the following. I write one-sided sequents as ⊢ Γ ⇑ Θ and ⊢ Γ ⇓ Θ, and two-sided sequents as Γ ⇑ Θ ⊢ Θ′ ⇑ Δ and Γ ⇓ Θ ⊢ Θ′ ⇓ Δ.
The outer zones Γ and Δ are the storage areas, while the inner zones, namely Θ and Θ′, are the staging areas. Also, Γ and Δ might be further decomposed into multiple zones with different structural rules available (as Andreoli did in his original focused proof system). When these sequents are used with intuitionistic logic, the right side of sequents is restricted so that the multiset union of Θ′ and Δ contains exactly one formula. A sequent never contains an occurrence of both ⇑ and ⇓.
There are several positive points of this notation. It only employs the special symbols ⇑ and ⇓, and these symbols have seldom been used in proof theory outside focusing. This notation also supports multifocusing [15]: that is, a ⇓-sequent can have more than one formula in its staging areas. The only downside I see with this notation is that (with two-sided sequents) always writing two occurrences of an up or down arrow seems heavy. For this reason, I also adopt the following conventions (assuming that an empty zone is written using the dot ·). We can drop writing · ⇑ and · ⇓ when they appear on the right, and we can drop writing ⇑ · and ⇓ · when they appear on the left. Thus, a border sequent (i.e., a sequent with an empty staging area) can be written as Γ ⊢ Δ. In this way, a border sequent can be confused with a sequent without focusing markings, and such confusion is a happy accident since border sequents appear as both premises and conclusions of synthetic inference rules. This way, focusing notation only appears in sequents used to build the internals of a synthetic inference rule. In the proof systems I have considered, sequents of the form Γ ⇓ Θ ⊢ Θ′ ⇓ Δ are such that the multiset union of Θ and Θ′ is never empty.
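To make the elision convention concrete, here is a minimal sketch (my own illustration, not from the post) of the two-sided sequent syntax as a data structure. All names here are my own; zone contents are tuples of formula strings, and a single `arrow` field enforces by construction that a sequent never mixes the two arrow symbols.

```python
# Sketch of two-sided focused sequents: Γ ⇑ Θ ⊢ Θ′ ⇑ Δ (and the ⇓ variant).
# Storage zones are outer, staging zones are inner (next to the turnstile).
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Sequent:
    left_storage: Tuple[str, ...]    # Γ: outer left zone (storage)
    left_staging: Tuple[str, ...]    # Θ: inner left zone (staging)
    right_staging: Tuple[str, ...]   # Θ′: inner right zone (staging)
    right_storage: Tuple[str, ...]   # Δ: outer right zone (storage)
    arrow: Optional[str] = None      # "up", "down", or None for a border sequent

    def render(self) -> str:
        """Render the sequent, eliding an arrow that sits next to an
        empty staging area, so border sequents look unmarked."""
        sym = {"up": "⇑", "down": "⇓"}.get(self.arrow, "")
        zone = lambda z: ", ".join(z) if z else "·"
        lhs = zone(self.left_storage)
        if self.left_staging:                      # otherwise drop "⇑ ·" on the left
            lhs += f" {sym} {zone(self.left_staging)}"
        rhs = zone(self.right_storage)
        if self.right_staging:                     # otherwise drop "· ⇑" on the right
            rhs = f"{zone(self.right_staging)} {sym} {rhs}"
        return f"{lhs} ⊢ {rhs}"

# A border sequent renders without any focusing markings:
print(Sequent(("A", "B"), (), (), ("C",)).render())         # A, B ⊢ C
# A left-focused sequent keeps its down arrow:
print(Sequent(("A",), ("N",), (), ("C",), "down").render())  # A ⇓ N ⊢ C
```

Keeping one `arrow` field for the whole sequent is one way to realize, in code, the requirement that a sequent never contains both ⇑ and ⇓.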
LJF: A focused version of LJ
To illustrate this proposed notation, I present below my preferred way to write the LJF proof system [12]. Here, separate metavariables range over positive formulas, negative formulas, negative formulas or positive atoms, positive formulas or negative atoms, negative atoms, positive atoms, and arbitrary formulas. In the two rules that select one of two disjuncts or conjuncts, the index i is either 1 or 2. In the rules introducing ∀ on the right and ∃ on the left, the eigenvariable does not occur free in any formula of the conclusion. The display of these rules can be simplified somewhat using the conventions mentioned before that allow some up and down arrows next to empty zones to be elided.
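As an illustration of how administrative rules look in this notation, here is a sketch (my own reconstruction; an official LJF figure may differ in details) of decide, store, and release on the left. In it, Γ is the left storage zone, Θ the left staging zone, R the stored right-hand side, N a negative formula, P a positive formula, and C a negative formula or positive atom; the elision conventions are applied throughout.

```latex
% Sketch only: the three left-side administrative rules in the proposed notation.
\[
\frac{\Gamma, N \Downarrow N \;\vdash\; R}
     {\Gamma, N \;\vdash\; R}\;\textit{decide}_l
\qquad
\frac{\Gamma, C \Uparrow \Theta \;\vdash\; R}
     {\Gamma \Uparrow \Theta, C \;\vdash\; R}\;\textit{store}_l
\qquad
\frac{\Gamma \Uparrow P \;\vdash\; R}
     {\Gamma \Downarrow P \;\vdash\; R}\;\textit{release}_l
\]
```

Note that both the premise and conclusion of decide display no arrow next to an empty staging area, which is exactly the happy confusion with unmarked sequents described above.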
Since the order in which the negative introduction rules are applied does not matter (don’t care nondeterminism), it is sometimes convenient to modify this proof system so that a negative, right-introduction rule is applied only when the left staging area is empty.
If one is only interested in using the negative connectives (i.e., ⊤, ∧, ⊃, and ∀), then LJF can be simplified to the point where every two-sided sequent only needs to display at most one arrow symbol. In such a setting, it makes sense to call these two phases the right phase and the left phase.
I would argue that the adjective “focused” should be used only with proof systems: in particular, a formula and a logic are not focused. However, it makes sense to say that an occurrence of a formula in the staging area of a sequent is a focus of that sequent.
Focusing is a way to take the micro-rules given by Gentzen and arrange them into macro or synthetic rules. In particular, a synthetic rule is the result of taking a (partial) derivation built using micro-rules with border sequents as its premises and conclusion, with no border sequents elsewhere, and then hiding all of the micro-rules. Cut-elimination can be an automatic consequence for synthetic inference rules built using focusing [13].
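As a concrete example (the standard backchaining illustration, not taken verbatim from this post): suppose a and b are positive atoms and a ⊃ b is in storage. Deciding on a ⊃ b starts a positive phase in which the implication is introduced under focus; the right focus on a can only close with an initial rule, which requires a to be in Γ, and the left focus on b is released and then stored. Hiding these micro-rules leaves the synthetic rule:

```latex
% Synthetic (backchaining) rule obtained by focusing on a stored a ⊃ b,
% with a and b positive atoms; premise and conclusion are border sequents.
\[
\frac{\Gamma, a \supset b, b \;\vdash\; R}
     {\Gamma, a \supset b \;\vdash\; R}
\quad \text{provided } a \in \Gamma
\]
```

This is exactly the back-chaining step familiar from logic programming, here packaged as a single synthetic inference rule between border sequents.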
The style of focusing described here is sometimes called strong focusing: that is, the decide rule is not selected until the staging area of the ⇑-phase is empty. A weak focusing proof system would allow the decide rule to select a focus even if not all invertible rules have been applied. The notation given here does not immediately accommodate weak focusing proof rules.
A synthetic inference rule is bipolar: its internal structure is defined by both positive and negative phases. If we are to introduce a notion of synthetic connectives, we would probably insist that they are monopolar, i.e., composed of either only negative or only positive connectives.
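The monopolar condition can be phrased as a simple recursive check. Below is a small sketch (my own illustration; the encoding and names are assumptions) using the usual polarity assignment for polarized intuitionistic connectives, with quantifiers omitted for brevity.

```python
# Sketch: checking that a formula is monopolar, i.e., built from connectives
# of a single polarity (atoms are compatible with either polarity here).
POSITIVE = {"and+", "or"}   # positive conjunction and disjunction
NEGATIVE = {"and-", "imp"}  # negative conjunction and implication

def is_monopolar(formula, polarity=None):
    """Formulas are tuples: ("atom", name) or (connective, sub1, sub2).
    Returns True when every connective occurrence has the same polarity."""
    head = formula[0]
    if head == "atom":
        return True
    pol = "pos" if head in POSITIVE else "neg"
    if polarity is not None and pol != polarity:
        return False  # a connective of the other polarity was found
    return all(is_monopolar(sub, pol) for sub in formula[1:])

print(is_monopolar(("or", ("atom", "a"), ("and+", ("atom", "b"), ("atom", "c")))))  # True
print(is_monopolar(("or", ("atom", "a"), ("imp", ("atom", "b"), ("atom", "c")))))   # False
```

A monopolar formula, decided upon, would then contribute only a single phase to the synthetic rule it generates.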
Focusing has also been applied in natural deduction [2,16,17] and deep inference [3]. I have not considered how the notational conventions mentioned above might be applied in those settings.
[1] Jean-Marc Andreoli. Logic programming with focusing proofs in linear logic. J. of Logic and Computation, 2(3):297–347, 1992. [DOI]
[2] Taus Brock-Nannestad and Carsten Schürmann. Focused natural deduction. In Christian Fermüller and Andrei Voronkov, editors, LPAR 17, volume 6397 of LNCS, pages 157–171, Yogyakarta, Indonesia, 2010. Springer. [DOI]
[3] Kaustuv Chaudhuri, Nicolas Guenot, and Lutz Straßburger. The focused calculus of structures. In Computer Science Logic: 20th Annual Conference of the EACSL, Leibniz International Proceedings in Informatics (LIPIcs), pages 159–173. Schloss Dagstuhl–Leibniz-Zentrum für Informatik, September 2011. [DOI | .pdf]
[4] Kaustuv Chaudhuri, Frank Pfenning, and Greg Price. A logical characterization of forward and backward chaining in the inverse method. J. of Automated Reasoning, 40(2-3):133–177, March 2008. [DOI | .pdf]
[5] Pierre-Louis Curien and Hugo Herbelin. The duality of computation. In ICFP ’00: Proceedings of the Fifth ACM SIGPLAN International Conference on Functional Programming, pages 233–243, New York, NY, USA, 2000. ACM Press. [DOI]
[6] Pierre-Louis Curien and Guillaume Munch-Maccagnoni. The duality of computation under focus. In Cristian S. Calude and Vladimiro Sassone, editors, Theoretical Computer Science – 6th IFIP TC 1/WG 2.2 International Conference, TCS 2010, Held as Part of WCC 2010, Brisbane, Australia, September 20-23, 2010. Proceedings, volume 323 of IFIP Advances in Information and Communication Technology, pages 165–181. Springer, 2010. [DOI]
[7] V. Danos, J.-B. Joinet, and H. Schellinx. LKT and LKQ: sequent calculi for second order logic based upon dual linear decompositions of classical implication. In J.-Y. Girard, Y. Lafont, and L. Regnier, editors, Advances in Linear Logic, number 222 in London Mathematical Society Lecture Note Series, pages 211–224. Cambridge University Press, 1995. [DOI]
[8] R. Dyckhoff and S. Lengrand. LJQ: a strongly focused calculus for intuitionistic logic. In A. Beckmann et al., editors, Computability in Europe 2006, volume 3988 of LNCS, pages 173–185. Springer, 2006. [DOI]
[9] Jean-Yves Girard. A new constructive logic: classical logic. Math. Structures in Comp. Science, 1:255–296, 1991. [DOI]
[10] Jean-Yves Girard. On the unity of logic. Annals of Pure and Applied Logic, 59:201–217, 1993. [DOI]
[11] Hugo Herbelin. A lambda-calculus structure isomorphic to Gentzen-style sequent calculus structure. In Computer Science Logic, 8th International Workshop, CSL ’94, volume 933 of Lecture Notes in Computer Science, pages 61–75. Springer, 1995. [DOI]
[12] Chuck Liang and Dale Miller. Focusing and polarization in linear, intuitionistic, and classical logics. Theoretical Computer Science, 410(46):4747–4768, 2009. Abstract Interpretation and Logic Programming: In honor of professor Giorgio Levi. [DOI]
[13] Sonia Marin, Dale Miller, Elaine Pimentel, and Marco Volpe. From axioms to synthetic inference rules via focusing. Annals of Pure and Applied Logic, 173(5):1–32, 2022. [DOI]
[14] Dale Miller. Forum: A multiple-conclusion specification logic. Theoretical Computer Science, 165(1):201–232, September 1996. [DOI]
[15] Dale Miller and Alexis Saurin. From proofs to focused proofs: a modular proof of focalization in linear logic. In J. Duparc and T. A. Henzinger, editors, CSL 2007: Computer Science Logic, volume 4646 of LNCS, pages 405–419. Springer, 2007. [.pdf]
[16] Gabriel Scherer. Deciding equivalence with sums and the empty type. In Giuseppe Castagna and Andrew D. Gordon, editors, Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages, POPL 2017, Paris, France, January 18-20, 2017, pages 374–386. ACM, 2017. [DOI]
[17] Wilfried Sieg and John Byrnes. Normal natural deduction proofs (in classical logic). Studia Logica, 60(1):67–106, 1998. [DOI]
[18] Noam Zeilberger. Focusing and higher-order abstract syntax. In George C. Necula and Philip Wadler, editors, Proceedings of the 35th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2008, San Francisco, California, USA, January 7-12, 2008, pages 359–369. ACM, 2008. [DOI]
6 thoughts on “Notation for focused proof systems”
Thanks Dale for this survey of focussing notations, and for explaining your current preferences. If I understand correctly, you are not arguing that there should be a uniformity in the notation but rather that explaining notational decisions might help inform authors in the area. Correct?
Regarding the sequent arrow, I always find it quite unnatural when ⊢ is employed as syntax without justification. For instance, such notation is commonplace in the linear logic community, but such abuse of notation is typically only justified in the presence of a Deduction Theorem for the logic (which fails for linear logic).
Why do you distinguish ‘inner’ and ‘outer’ zones of your sequents, rather than left and right zones with respect to ⇑ and ⇓? Choosing inner and outer suggests that proximity to ⊢ is meaningful and, to me, hints at strong binding of ⊢ in the judgement that renders formulas in the outer zones out of scope of the ⊢. (I accept that this is a psychological point, not necessarily mathematical.)
Correct. Some people who want to use focused proof systems may not have strong opinions about the “right” way to display them. Hopefully, my explanation of design choices might lead to uniformity in those cases.
In defense of this choice, it appears long ago in Girard’s popular textbook “Proofs and Types” (1989). I resisted this choice until I started applying the sequent calculus to operational semantics: since → is used for numerous other notions in that setting, keeping with Gentzen’s original notation only confused people.
Yes, this is a psychologically motivated choice, but I can motivate it with more words. When applying sequent calculus to concurrency theory, I usually view the multisets in the sequent (on the left and right) as encoding objects such as resources and processes. For example, all the processes operating in a computer might be encoded as individual formulas in the right-hand side multiset of a sequent, and resources such as memory can be encoded on the left-hand side. In such a setting, where sequents could contain hundreds of items, the symbol ⊢ is the only anchor. Thus, if I want a temporary staging area, it seems natural to position it next to that anchor. This perspective motivates having the left and right staging areas next to ⊢.
I had another thought about using inner/outer rather than left/right: I think your preferred notation could actually be justified from the point of view of non-commutative logic, where having inverted orders on each side of the sequent arrow may make sense.
Regarding the ⊢, I would reiterate that I only find the notation problematic when the Deduction Theorem fails for the logic at hand (or, wrt Curry-Howard, if term annotations are not explicit). Despite widespread practice, I feel it is mathematically incorrect to use it in, say, linear logic. Other arrows, e.g. ⇒, are acceptable when various arrows abound.
Thanks for this interesting observation regarding non-commutative connectives. In my original post, I should have pointed out that Girard used this “inner/outer rather than left/right” distinction in his paper On the unity of logic (APAL, 1993) [10]. In that paper, he had two zones on both the left and the right: the inner zones are the classical-maintenance zones, and the outer zones are the linear-maintenance zones. I switch these zones around in my writing: inner zones are linear, and outer zones are classical. I do this since linear resources (switches, registers, tokens, etc.) seem to be the center of the action, while the classical zone (the logic program) does not change much during proof search. The “center of action” means near the center of the sequent, which is the turnstile. Putting the linear zone next to the turnstile in (unfocused) sequents is similar to putting the ⇑ and ⇓ staging areas next to the turnstile in focused systems: the staging area is a linear-maintenance zone as well.
It is odd to push the point that linear logic does not have the deduction theorem. Of course, in one sense, that is true. However, we have learned that (in some computational logic settings), we should allow two classes of hypotheses (corresponding to the classical and linear zones) and modify the deduction theorem to handle them explicitly.
Regarding other kinds of sequent arrows, I should mention that both Girard and Jean Gallier tried to introduce (in the early 1990s) a new turnstile symbol, where the foot was shorter and thicker, as a replacement for the sequent arrow (see Gallier’s “On the Correspondence Between Proofs and Lambda-Terms,” p. 10, url). But, unfortunately, it doesn’t seem that that symbol has caught on.
I didn’t know that was meant to be a different symbol! I thought it was just a typesetting variation.
The reason I keep pushing the point is that, for a long time (with origins arguably going back to Frege), notations like ⊢ have denoted formally defined mathematical relations. For many decades it has been standard to write Γ ⊢ A if A is provable/derivable (in a given set of rules) under additional axioms/initial sequents from Γ. Indeed, Kleene’s famous 1952 book “Introduction to Metamathematics” (page 88 of the 7th reprint) defines ⊢ in just this way.
Now while the precise use has varied in the literature, almost all uses* (I know of in proof theory) until relatively recently (e.g. ’90s) seem to be compatible with the aforementioned definition (e.g. thanks to Deduction). In Linear Logic, using ⊢ as the sequent arrow means one loses this formal notion of consequence. I do not see the problem with simply maintaining arrow-like symbols for syntax, and turnstile-like symbols for judgements.
*I suppose one exception to what I am saying is the literature on ‘abstract consequence relations’, where ⊢ is widely employed to vary over judgements, even with properties not compatible with the definition I mentioned above. My impression is that such conventions are more typical in interactions with topology and algebra than pure proof theory.
(with apologies for the now considerable digression from the main point of your post!)
Another different use of ⊢ can be found in Dana Scott’s Information Systems.