
Meaning and Use in Gentzen's logical systems


Prerequisites: natural deduction, sequent calculus, intuitionistic logic, classical logic

Introduction

In this post, we will investigate the differences between natural deduction and the sequent calculus, two systems introduced by Gentzen in the 1930s. They represent two ways of looking at the same object, logic, but from different points of view, each providing a different understanding. Let’s start with natural deduction $NJ$.

Natural Deduction

Conventions and their use

Natural deduction represents logic as a set of rules divided into two categories: introduction rules (introducing a symbol) and elimination rules (eliminating a symbol). It is our common language used to describe what inference is. The intuition behind these rules is that the introductions are definitions we adopt in order to allow reasoning to take place, and that the elimination rules are ways of using these definitions.
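As an illustration, here are the standard rules for conjunction and implication, written in sequent style with an explicit context $\Gamma$ (one common presentation of $NJ$):

$$
\infer[\land i]{\Gamma \vdash A \land B}{\Gamma \vdash A \quad \Gamma \vdash B}
\qquad
\infer[\land e_1]{\Gamma \vdash A}{\Gamma \vdash A \land B}
\qquad
\infer[\land e_2]{\Gamma \vdash B}{\Gamma \vdash A \land B}
$$

$$
\infer[\to i]{\Gamma \vdash A \to B}{\Gamma, A \vdash B}
\qquad
\infer[\to e]{\Gamma \vdash B}{\Gamma \vdash A \to B \quad \Gamma \vdash A}
$$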

In the bottom-up reading, we basically manipulate the symbols in the goal/conclusion and introduce new hypotheses with the introduction rule of implication until the conclusion matches a hypothesis (evidence: an assumed statement counts as proven).
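For instance (a small worked example), reading bottom-up, the tautology $A \to (B \to A)$ is proven by introducing the hypotheses $A$ and $B$ until the goal matches one of them:

$$
\infer[\to i]{\vdash A \to (B \to A)}{\infer[\to i]{A \vdash B \to A}{\infer[ax]{A, B \vdash A}{}}}
$$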

Two interesting points: this system shows that we can see formulas and proofs from the point of view of a set of rules, but also that we can interpret them as a computational system (the simply-typed lambda calculus) thanks to the Curry-Howard correspondence.
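A minimal Haskell sketch of this correspondence (the function names are purely illustrative): implication introduction is a lambda, conjunction introduction is pairing, and conjunction elimination is projection.

```haskell
-- Curry-Howard, propositions as types: a proof of a formula is a program of
-- the corresponding type.

-- Implication introduction: assume A, derive B, discharge the hypothesis.
-- A proof of A -> (B -> A) is the K combinator.
constProof :: a -> (b -> a)
constProof x _ = x

-- Conjunction introduction: proofs of A and B give a proof of A /\ B (a pair).
conjIntro :: a -> b -> (a, b)
conjIntro x y = (x, y)

-- Conjunction eliminations: the two projections.
conjElim1 :: (a, b) -> a
conjElim1 (x, _) = x

conjElim2 :: (a, b) -> b
conjElim2 (_, y) = y
```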

Harmony

In order to have a consistent logical system, we need a harmony between introductions and eliminations, corresponding to the fact that the definition and the use of a convention should be complementary. In proof theory, this is translated as the simplification of introductions immediately followed by eliminations. For instance, the succession of rules $\infer[\land e_1]{A}{\infer[\land i]{A \land B}{A \quad B}}$ corresponds exactly to $\pi_1(A, B)$ in functional programming. Both reduce to $A$, thus describing a kind of computation and leading to the Curry-Howard correspondence.
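In the Haskell sketch above, this harmony appears as the fact that a projection applied to a freshly built pair computes away, i.e. the introduction/elimination detour disappears:

```haskell
-- Harmony as computation: an introduction immediately followed by an
-- elimination reduces away.  Here fst plays the role of the projection pi_1.
detour :: a -> b -> a
detour x y = fst (x, y)   -- reduces to x, the "simplified" proof
```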

Sequent Calculus

Logic from the intuition of space

In the sequent calculus, we have introductions either on the left or on the right of the turnstile symbol $\vdash$. We now manipulate both sides of a sequent $A_1, \ldots, A_n \vdash B_1, \ldots, B_k$. A proof ends when all its branches have the same formula on the left and on the right. We have created a space divided into two parts where we can act freely on each formula. The distinction between classical and intuitionistic logic becomes a mere restriction on this space: either at most one formula on the right ($LJ$) or no restriction ($LK$).
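For instance, here are a right and a left introduction rule for conjunction in $LK$ (in one common, additive presentation; there is a symmetric rule $\land l_2$ keeping $B$):

$$
\infer[\land r]{\Gamma \vdash A \land B, \Delta}{\Gamma \vdash A, \Delta \quad \Gamma \vdash B, \Delta}
\qquad
\infer[\land l_1]{\Gamma, A \land B \vdash \Delta}{\Gamma, A \vdash \Delta}
$$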

Let’s use $A \vdash A$ as the axiom rule. The act of reasoning becomes a manipulation of symbols in space, where negation becomes an exchange between right and left. In $LK$, we can even break this (artificial) separation by using a monolateral (one-sided) version where all sequents are of the form $\vdash A_1, \ldots, A_n$. Since we work with entities within the same space, we need “structural rules” in order to duplicate formulas we want to use several times, discard formulas we do not need, or exchange them so that we can connect formulas exactly (as in the axiom rule). In $NJ$, these rules were implicit in the management of contexts (the axiom rule was $\Gamma, A \vdash A$).
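Concretely, the negation rules move a formula from one side of the turnstile to the other:

$$
\infer[\lnot r]{\Gamma \vdash \lnot A, \Delta}{\Gamma, A \vdash \Delta}
\qquad
\infer[\lnot l]{\Gamma, \lnot A \vdash \Delta}{\Gamma \vdash A, \Delta}
$$

and the structural rules, written here on the left (the right-hand versions are symmetric), are contraction, weakening and exchange:

$$
\infer[c]{\Gamma, A \vdash \Delta}{\Gamma, A, A \vdash \Delta}
\qquad
\infer[w]{\Gamma, A \vdash \Delta}{\Gamma \vdash \Delta}
\qquad
\infer[ex]{\Gamma, B, A, \Gamma' \vdash \Delta}{\Gamma, A, B, \Gamma' \vdash \Delta}
$$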

The existence of structural rules leads to alternative definitions of disjunction and conjunction with a different management of the context: the multiplicative and the additive connectives. They are equivalent thanks to the structural rules, but by restricting the structural rules we obtain genuinely different connectives, leading to Linear Logic.
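For instance, here are the two right introduction rules for conjunction: the multiplicative version splits the context between the premises, while the additive version shares it. Contraction and weakening make the two versions interderivable:

$$
\infer[\land r \ (\text{mult.})]{\Gamma, \Gamma' \vdash A \land B, \Delta, \Delta'}{\Gamma \vdash A, \Delta \quad \Gamma' \vdash B, \Delta'}
\qquad
\infer[\land r \ (\text{add.})]{\Gamma \vdash A \land B, \Delta}{\Gamma \vdash A, \Delta \quad \Gamma \vdash B, \Delta}
$$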

Decomposing use to show its computational interactions

To understand the difference between natural deduction and sequent calculus, it is interesting to look at how $NJ$ can be translated into $LJ$. If we look at the translation, we can see that the introduction rules map directly to right rules, and that only the elimination rules are genuinely transformed. Let’s take the translation of an elimination of conjunction:
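Following the standard translation (the derivation below is a sketch of it), the $NJ$ rule $\land e_1$, which goes from $A \land B$ to $A$, becomes in $LJ$ a left introduction of conjunction connected to the translated premise by a cut:

$$
\infer[cut]{\Gamma \vdash A}{\Gamma \vdash A \land B \quad \infer[\land l_1]{A \land B \vdash A}{\infer[ax]{A \vdash A}{}}}
$$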

While in $NJ$ the formula $A \land B$ directly leads to $A$, the system $LJ$ shows in more detail how $A \land B$ can lead to $A$ in a setting of spatial management of formulas. It sits at a “lower level” (in the sense of programming languages) and decomposes the logical use in a more fine-grained way.

Moreover, the translation always makes a cut rule appear, connecting a formula in conclusion position with another one in hypothesis position, thus acting like an extension cord for the logical flow of reasoning. When using the monolateral version of the sequent calculus, we see that the axiom rule and the cut rule are two ways of connecting a formula with its dual: the axiom rule connects two dual conclusions, and the cut rule connects two dual hypotheses. We have revealed a mechanism of interaction within logical reasoning, which can be further explored if we look at things from the right angle. Logic now seems similar to a physical phenomenon that we are trying to understand using mathematical models.
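In the monolateral presentation (a sketch, with duality written as negation $\lnot$), the two rules look like this:

$$
\infer[ax]{\vdash A, \lnot A}{}
\qquad
\infer[cut]{\vdash \Gamma, \Delta}{\vdash \Gamma, A \quad \vdash \Delta, \lnot A}
$$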