⊢ Boris ENG

PhD Student in Computer Science, Team LoVe, LIPN (Université Sorbonne Paris Nord, France) [last name][first name]@hotmail.fr

Meaning and Use in Gentzen's logical systems

Tags: proof theory, history, technical

Prerequisites: natural deduction, sequent calculus, intuitionistic logic, classical logic

Introduction

In this post, we will investigate the differences between natural deduction and the sequent calculus, two systems introduced by Gentzen in the 1930s. They represent two ways of looking at logic. We start with natural deduction, called NJ.

Natural Deduction

Conventions and their use

The system NJ represents logic as a set of rules divided into two categories: introduction rules (introducing a symbol) and elimination rules (eliminating a symbol). It is the common language we use to describe what inference is. The intuition behind these rules is that introductions are definitions we set up in order to allow reasoning to take place, while elimination rules are ways of using/consuming these definitions.

In the bottom-up reading, we basically manipulate the symbols in the goal/conclusion and introduce new hypotheses with the introduction rule of implication, until we reach a conclusion matching a hypothesis (an assumed statement serving as evidence). Two interesting points appear:

• the system is naturally intuitionistic, because we justify a single statement at each step of the proof;
• although it captures all the mathematical use of logic, it is not so natural. Think of the introduction rule for disjunction, which derives $A \lor B$ from $A$ alone: we lose information, which is rarely something we want in practice.

This system shows that we can see formulas and proofs from the point of view of a set of rules. By the Curry-Howard correspondence, we may also interpret it as a computational system (the simply-typed lambda-calculus) manipulating symbols.
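To make this concrete, here is a minimal Python sketch of this reading (my own illustrative encoding, not part of NJ itself): proofs of conjunctions are pairs and proofs of disjunctions are tagged values. The names `and_intro`, `and_elim1` and `or_intro1` are hypothetical.

```python
# Curry-Howard reading: a proof of A ∧ B is a pair of proofs, and a proof
# of A ∨ B is a proof of one of the two sides, tagged with which side it is.

def and_intro(a, b):
    """∧-introduction: from proofs of A and B, build a proof of A ∧ B."""
    return (a, b)

def and_elim1(p):
    """∧-elimination (first): from a proof of A ∧ B, recover the proof of A."""
    return p[0]

def or_intro1(a):
    """∨-introduction (first): from a proof of A, build a proof of A ∨ B.
    The tag is all that remains of B: information about B is lost."""
    return ("inl", a)
```

For instance, `and_elim1(and_intro(pa, pb))` gives back `pa`: an elimination consuming what an introduction defined.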

Harmony

In order to have a consistent logical system, we need a harmony between introductions and eliminations, corresponding to the fact that the definition and the use of a convention should be complementary. In proof theory, this translates into the simplification of an introduction immediately followed by an elimination. For instance, the succession of rules $\infer[\land e 1]{\infer[\land i]{A \quad B}{A \land B}}{A}$ is exactly the $\pi_1 (A, B)$ of functional programming. They both reduce to $A$, thus describing a sound computation.
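This simplification can be sketched as a single rewriting step on proof terms; the representation below (tuples tagged with "pair" and "fst") is a hypothetical encoding of mine, not standard notation.

```python
# A detour: ∧-introduction immediately followed by ∧-elimination.
# Proof terms: ("pair", t, u) encodes ∧i, ("fst", t) encodes ∧e1.

def reduce_step(term):
    """One step of detour elimination: fst(pair(t, u)) rewrites to t."""
    if term[0] == "fst" and term[1][0] == "pair":
        return term[1][1]
    return term  # no detour at the root: leave the term unchanged

detour = ("fst", ("pair", "proof_of_A", "proof_of_B"))
```

Here `reduce_step(detour)` yields `"proof_of_A"`: the detour through $A \land B$ disappears, which is the harmony at work.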

Sequent Calculus

Logic from the intuition of space

In the sequent calculus, we have introductions either on the left or on the right of a turnstile symbol $\vdash$. We now manipulate both sides of a sequent $A_1, …, A_n \vdash B_1, …, B_k$. A proof ends when all its branches have the same formula on the left and on the right (an axiom). We have created a space divided into two parts where we can act freely on each formula. The distinction between classical and intuitionistic logic becomes a mere restriction of space: either at most one formula on the right (LJ), or no restriction (LK).
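For instance (an example of mine, written with premises above the conclusion as in the rest of this post), the sequent $\vdash A, \lnot A$ is derivable in LK:

$\infer[\lnot r]{\infer[ax]{}{A \vdash A}}{\vdash A, \lnot A}$

Its conclusion has two formulas on the right, which the restriction of LJ forbids. From it, the excluded middle $\vdash A \lor \lnot A$ follows in LK by the disjunction rules and contraction, while it is not provable in LJ.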

The act of reasoning becomes a manipulation of symbols in space, where negation becomes the exchange between right and left. In LK, we can even break this (artificial) separation by using a monolateral version where all sequents are of the form $\vdash A_1, …, A_n$, leading to a sort of space of interaction with rules gathering formulas, the goal being ultimately to connect a formula with its negation (a sort of resolution of contradiction).
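One way to make this exchange concrete is to compute the dual (the negation) of a formula, as is done when moving formulas to the right-hand side: negation swaps each connective with its De Morgan dual and flips the atoms. The encoding below is a hypothetical Python sketch of mine.

```python
# One-sided reading: the negation of a formula is its De Morgan dual,
# pushed down to the atoms; no negation symbol survives on connectives.
# Formulas: ("atom", x), ("neg_atom", x), ("and", l, r), ("or", l, r).

def dual(f):
    """Return the dual (negation) of a formula, pushed down to the atoms."""
    tag = f[0]
    if tag == "atom":
        return ("neg_atom", f[1])
    if tag == "neg_atom":
        return ("atom", f[1])
    if tag == "and":
        return ("or", dual(f[1]), dual(f[2]))
    return ("and", dual(f[1]), dual(f[2]))  # tag == "or"
```

Note that `dual(dual(f)) == f`, matching the involutive behaviour of classical negation.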

Decomposing use to show its computational interactions

To understand the difference between natural deduction and sequent calculus, it is interesting to look at how NJ can be translated into LJ. Looking at this translation, we can see that only the elimination rules are genuinely transformed (introduction rules correspond directly to right rules). Let's take the translation of a conjunction elimination:

$\infer[\land e 1]{\Gamma \vdash A \land B}{\Gamma \vdash A} \quad \leadsto \quad \infer[cut]{\Gamma \vdash A \land B \quad \infer[\land l]{\infer[ax]{}{A \vdash A}}{A \land B \vdash A}}{\Gamma \vdash A}$

While in NJ the formula $A \land B$ directly leads to $A$, the system LJ shows in more detail how $A \land B$ can lead to $A$ in a setting of spatial management of formulas. It is “lower-level” (in the sense of programming languages) and decomposes the logical use more finely.

Moreover, the translation always makes a cut rule appear, connecting a formula in conclusion to another one in hypothesis, thus acting like an extension cord for the logical flow of reasoning. When using the monolateral version of the sequent calculus, we see that the axiom rule and the cut rule are two ways of connecting a formula with its dual: the axiom rule connects two dual conclusions, and the cut rule connects two dual hypotheses. We have revealed a mechanism of interaction within logical reasoning, which can be explored further if we look at things from the right angle. Logic now appears similar to physical phenomena studied through mathematical models: a science of interaction, in some sense.
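Under the Curry-Howard reading, this "extension cord" behaviour has a simple computational analogue: the axiom is the identity function and the cut is function composition. A minimal Python sketch of mine, with hypothetical names:

```python
# Axiom and cut, computationally: the axiom proves A ⊢ A (the identity),
# and the cut plugs a proof of Γ ⊢ A into a proof of A ⊢ B to get Γ ⊢ B.

def axiom(x):
    """ax: the identity — a proof of A ⊢ A passes its evidence along unchanged."""
    return x

def cut(f, g):
    """cut: compose two proofs, connecting the conclusion of f
    to the matching hypothesis of g."""
    return lambda x: g(f(x))

# The translated conjunction elimination: a projection extended by an axiom.
translated_elim = cut(lambda p: p[0], axiom)
```

Applied to a pair, `translated_elim` returns its first component, mirroring the translated $\land e 1$ above: the cut merely wires the projection to the axiom.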