# Girard's conceptual knitting (version 4)

**Prerequisite**: Curry-Howard correspondence, linear logic, proof-nets, basic knowledge of Transcendental Syntax

This text may be extended further depending on how far my understanding goes. Last update July 14th, 2021.

# Context: the Curry-Howard correspondence

The Curry-Howard correspondence shows that some logical systems match with some computational systems (typically natural deduction for minimal logic and simply typed lambda-calculus) but does that mean that logic and computation are actually the same thing? It seems to me that the current Curry-Howard correspondence is merely a point where both meet; an intersection between reasoning and computation. We will see that even though computation and logic can be entangled/mixed, their nature may be more subtle.

# The Blind Spot and the logical space

## Traditions and the unreachable essence of logic

When looking at the history of formal logic, we can observe that it was studied through an “essentialist” point of view for a long time. This means that the meaning of the logical rules was supposed to be prior to their existence: the so-called *rules* of reasoning. The evolution of logic followed the conception/understanding we had of it, leading to different ideas:

- With Tarski’s *model theory* and its *classical truth*, we are not satisfied when we want to understand the idea of proof. In the same fashion, Frege’s *predicate logic* was very useful but very opaque and limited. The *why* and the computational side of proofs were lacking.
- With Hilbert’s *formalism*, logic appears as something written down as mere symbols by following conventions checked mechanically/computationally. Despite the interesting idea of mechanical checking, all explanations of logic remained external because of these conventions.
- With Heyting’s *formalisation of intuitionistic logic*, related to the more general *BHK interpretation*, proofs correspond to functions/programs. This very interesting idea is limited by the fact that it is now the idea of “correct proof” we can doubt: the *correct proofs* are the ones defined to be correct. The *how*, or the “proof of the proof”, is lacking. In the same fashion, the *categorical interpretation of logic* starts from predefined entities such as objects and morphisms having certain properties. The study of logic thus starts from primitive definitions of logic.

This “biased” study of logic was still fruitful and produced several results such as Gödel’s incompleteness theorems or connections with computer science through the Curry-Howard correspondence and, later, linear logic. It also shows a strength of essentialism: its authority. We get both order/productivity and irrefutability (we can easily check that we indeed followed the definitions).

## Revolution and the chaotic existence of logic

The idea of the *Blind Spot* (to which Girard often referred, and which is reminiscent of Bachelard and Althusser’s epistemological rupture) suggests that our access to knowledge is obstructed by obstacles we have to remove in order to reach the “reality” of a concept. A science evolves by “revolutions” and by doubting its prior foundations. Although this idea can be vague, it seems rather relevant in the context of formal logic. The point is that our understanding of logic is not wrong but biased, living in a closed world that prevents us from reaching its hidden mechanisms. If we take any logical system, its proofs are limited to its rules and formulas (what Girard calls a “bunker”). How can we free logic from our preconceived understanding of it?

The *logical space* is the space in which logical objects (relative to a given context) live. The first satisfying materialisation of proofs as mathematical objects comes with Gentzen’s natural deduction and sequent calculus. The space of proofs is made of well-formed trees constructed with rules. This space is very restricted because we cannot materialise wrong proofs and study the effects/implications of their existence; we can say that this space is opaque in some sense. Linear logic extended the logical space by making explicit the management of resources in reasoning. We obtain two conjunctions, two disjunctions and two connectives ? and ! related to the management of hypotheses as resources. Later, proof-nets, first seen as a syntactic convenience, actually constitute a further extension of the logical space. Proofs are now very general hypergraphs linking formulas. Some proofs are wrong and others are correct; we can distinguish them with a correctness criterion (typically Danos-Regnier’s) by testing them against an appropriate set of tests. Girard’s Geometry of Interaction programme goes even further by representing Danos-Regnier’s correction graphs with the same kind of object as proofs. The Transcendental Syntax is the natural successor of that logical exploration. As we will see, it relies on a strong conceptual knitting, which seems to be an interesting update of Kant’s epistemology taking computation into account.

# A computational update of Kant’s epistemology

## Analytic / Answers

In order to ground the foundations of logic, we have to find a very transparent and objective computational space (at least more so than the sequent calculus, proof-nets or the Geometry of Interaction) from which logic will emerge. Such a space is said to be “analytic”. The lambda-calculus is an interesting candidate for its simplicity, but not an adequate one because of the hidden complexity of beta-reduction. Moreover, beta-reduction is an external procedure, and we want all computation to be self-operated/internal/elementary/primitive in some sense, similarly to how chemical/biological interactions work. We would like a system “which has nothing to hide”. Girard already showed, in the context of his Geometry of Interaction programme, that we could use first-order terms to represent the physical addresses of objects occurring in a proof. He generalises that idea to a model of computation called *stars and constellations*. These computational objects constitute a system where logic is reconstructed from our primitive intuition of space: locations/addresses, associations/routing and propagation of information. I believe that Girard’s analytics corresponds to an idea of computation which can be described by topological operations (for instance, Seiller’s interaction graphs compute by juxtaposition of graphs).
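To make this concrete, here is a minimal, unofficial sketch in Python of how stars might compute: a star is a finite list of polarised first-order terms (rays), and two stars fuse along a pair of dual rays when the underlying terms unify, the unifier being propagated to the remaining rays. The encoding, the names (`fuse`, `unify`) and the example are my own illustrative choices, not Girard's official definitions; in particular, variable renaming and the occurs-check are omitted.

```python
# A toy model of "stars and constellations" (stellar resolution).
# Variables are uppercase strings; compound terms are tuples
# ('functor', arg1, ...). A ray is a polarised term ('+', t) or
# ('-', t); a star is a list of rays. Simplifying assumptions:
# no occurs-check, stars already renamed apart.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    """Follow variable bindings in substitution s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(t1, t2, s):
    """Robinson-style unification; returns an extended substitution or None."""
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if is_var(t1):
        return {**s, t1: t2}
    if is_var(t2):
        return {**s, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) \
            and t1[0] == t2[0] and len(t1) == len(t2):
        for a, b in zip(t1[1:], t2[1:]):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

def apply_subst(t, s):
    """Apply a substitution to a term, fully resolving variables."""
    t = walk(t, s)
    if isinstance(t, tuple):
        return (t[0],) + tuple(apply_subst(a, s) for a in t[1:])
    return t

def fuse(star1, i, star2, j):
    """Fuse two stars along rays i and j of dual polarity."""
    (p1, t1), (p2, t2) = star1[i], star2[j]
    assert {p1, p2} == {'+', '-'}, "rays must have dual polarities"
    s = unify(t1, t2, {})
    if s is None:
        return None  # no interaction: the terms do not match
    rest = star1[:i] + star1[i + 1:] + star2[:j] + star2[j + 1:]
    return [(p, apply_subst(t, s)) for p, t in rest]

# A fact q(a) and a clause-like star "q(X) gives r(X)":
fact = [('+', ('q', ('a',)))]
rule = [('-', ('q', 'X')), ('+', ('r', 'X'))]
print(fuse(fact, 0, rule, 0))  # [('+', ('r', ('a',)))]
```

The fusion of `fact` and `rule` behaves like one resolution step in logic programming, which anticipates the remark below that Girard's analytics is a variant of Robinson's resolution.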

The analytic space is the space of answers. What we do is nothing more than computing on syntactic objects. As long as we do not interpret the answers we compute, they are meaningless/objective in some sense. The space of answers can be divided into two parts: the *constat* (result) and the *performance* (procedure). The constat is the final answer we get and the performance is the procedure leading to that answer. The first one is explicit (final: no more operation can be done) and the second one is implicit in the sense that the access to the information it conveys is indirect: we need to start a computation. They correspond to Kant’s analytic a posteriori and a priori respectively. In the sequent calculus, the implication `=>` is constative because accumulated (a simple left/right introduction of a symbol), while `|-` is performative, because sequents are destructed/altered by the cut-elimination procedure. This is a difference unknown to semantic realism, since `=>` and `|-` are identified by semantics.

## Synthetic / Questions

The synthetics is where the meaningless gets its meaning. Giving a meaning to something is arbitrary, dependent on a subject. We represent questions with formulas and answers with proofs encoded in the computational space described above. Giving meaning is formatting the meaningless, in the same sense that simple types describe the behaviour of untyped lambda-terms as an input/output relationship. As with typing, the absence of format leads to a chaotic computation (untyped lambda-calculus).

The emergence of meaning is divided into two kinds: the *usine* (factory) and the *usage* (use). The usine is related to a priori typing (Church-style). We define an arbitrary set of tests (usually also encoded in the computational space) defining logical correctness, and if a computational object passes all the tests, it is certified as correct (think of proof-structures and proof-nets). It corresponds to Kant’s synthetic a posteriori since the checking is empirical. We can also think of a car or a vaccine tested in a factory/laboratory in order to certify its use. The usage corresponds to Wittgenstein’s so-called *meaning-as-use*. The computational objects are classified depending on how they interact with each other. It is a posteriori typing (Curry-style) where types are descriptions of computational behaviours. They are usually difficult to reason with and impossible to characterise finitely. It corresponds to Kant’s synthetic a priori, which is impossible to predict / independent from experience. It can be illustrated by the effects of a vaccine being impossible to fully predict (because of the infinite number of possible situations and contexts it can be used in) or a car doing things we did not expect.
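As an illustration of the usine as an explicit, finite set of tests, here is a small Python sketch of the Danos-Regnier criterion for multiplicative proof-structures: for every switching (a choice of one premise per ⅋ link), the resulting graph must be acyclic and connected. The encoding of structures as node and edge sets is my own minimal one, used only to make the "testing" concrete.

```python
from itertools import product

def is_tree(nodes, edges):
    """Acyclic and connected <=> |E| = |V| - 1 and connected."""
    if len(edges) != len(nodes) - 1:
        return False
    adj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        for m in adj[stack.pop()]:
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return len(seen) == len(nodes)

def danos_regnier(nodes, fixed_edges, par_links):
    """Each par link is (conclusion, left_premise, right_premise).
    A switching keeps exactly one premise edge per par link; the
    structure passes the test iff every switching graph is a tree."""
    for choice in product((0, 1), repeat=len(par_links)):
        edges = list(fixed_edges)
        for c, (concl, left, right) in zip(choice, par_links):
            edges.append((concl, (left, right)[c]))
        if not is_tree(nodes, edges):
            return False
    return True

# Correct structure: axiom link A -- A~ under a par conclusion.
nodes = {'A', 'A~', 'A~ par A'}
print(danos_regnier(nodes, [('A', 'A~')], [('A~ par A', 'A~', 'A')]))  # True

# Incorrect structure: same axiom link under a tensor conclusion;
# both tensor edges are always present, closing a cycle.
nodes2 = {'A', 'A~', 'A~ x A'}
fixed2 = [('A', 'A~'), ('A~ x A', 'A~'), ('A~ x A', 'A')]
print(danos_regnier(nodes2, fixed2, []))  # False
```

The second structure is the standard counterexample: a tensor over a single axiom link is a well-formed object of the extended logical space, yet it fails every test, which is exactly the factory rejecting a product.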

The factory and the use are related by stating that the tests of the factory ensure a “good use”; otherwise no one would accept to drive the car. We need a reasonable connection between the tests and the uses. It is usually formulated as an *adequacy theorem* stating that if a computational object passes all the tests certifying answers of a formula **A**, then it is indeed in the behaviour corresponding to **A**. Moreover, logical correctness (factory) should ensure a terminating cut-elimination (use). This actually gives a new status to usual proofs, for instance those coming from the sequent calculus. When we write a proof, it implicitly comes with “tests” asserting that the proof is well-formed: what we can call a “proof of the proof”. The proof and the proof of the proof should be encoded as objects of the same kind with a sound symmetric interaction (we still have to define what a sound interaction is).

## Interaction, testing and certainty

The idea of testing is reminiscent of Popper’s idea of refutability by experiments (I have heard that Popper is often misinterpreted/exaggerated). This idea suggests that a claim/theory is valid until one reaches a refutation of it by testing it against an experiment, thus distinguishing a “true science” from a “pseudo-science”. Even though it appears perfectly valid, it suffers from its essentialist root. A claim/theory exists within a subjective/arbitrary framework, and the idea of refutation itself may be opaque (a measure depends on both the tested and the tester/tool, but also on their interaction/contact). Moreover, the “testing” of a claim/theory may be infinite, meaning that we may never reach absolute certainty. In the Transcendental Syntax, the test, the tested and the testing are all made computationally explicit.

Although a proof of a claim and a proof of its negation cannot usually coexist, they virtually appear and play a role in proof-nets and the Transcendental Syntax. This may be linked to Hegel’s contradictory foundations (I have never read Hegel), where a concept rests on its own negation, each lying on the other in a conflicting relationship. In the Transcendental Syntax, formulas and their negations are sets of constellations (computational objects) which satisfy a certain criterion of orthogonality (a relation specifying what we mean by a sound interaction) when interacting together. They both exist as computational entities but only one “wins” when trying to resolve a conflict. The winner is considered to be “true/valid/provable” (a synthesis) in the logical sense.

# Knitting / Separation

We describe what relates or separates the four classes of our conceptual knitting.

**Constat/Performance.** Both are separated by Church and Turing’s undecidability. An implicit computational artefact cannot be reduced to the set of its results because we cannot always predict their existence unless we do a computation (Turing’s halting problem). In other words, the implicit cannot simply be replaced by the explicit. They are linked by computation, a means to go from performance/implicit to constat/explicit.

**Usine/Usage.** Both are separated by Gödel’s incompleteness theorems. They show the existence of unprovable formulas and the gap between provability and truth. The “format” we create for logic cannot be reduced to its use because the actual use (which cannot be foreseen by the usine) can sometimes escape the boundaries we imagined and the use we expected for logic.

**Performance/Usine.** The factory has a subjective part because we choose what kind of tests we would like to perform; some tests are better than others. Nonetheless, it also has an explicit part: performance is part of the usine, because testing itself is performative, explicit and objective: only computation is needed for the checking. What is contingent is the interpretation we draw from the result.

**Performance/Usage.** If we look at cut-elimination, the result conveys a meaning but it is also a computational performance. The performance is also part of the usage, but the usage cannot be reduced to it because formulas are determined by the (usually infinite) set of interactions they can have, something overwhelming the simple idea of explicit performance. Unlike in semantic realism, the idea of meaning is contextual and subjective rather than objective. In particular, connectives are defined by what they can interact with.

# Comments

**The logico-functional space.** Girard’s chosen analytics is actually a variant of Robinson’s resolution and very close to resolution-based systems (for instance Kowalski’s connection graphs, a.k.a. clause graph resolution). Therefore, we can do logic programming with stars and constellations; but since the Transcendental Syntax reconstructs linear logic, and since linear logic can naturally simulate the lambda-calculus, we have a space where functional programming and logic programming meet.

**The analytic space of unification.** If we understand the idea of analytic space as being a computational model where computation can be strictly described topologically, then Robinson’s resolution is analytic because Herbrand’s unification can be seen as a connection of wires. The specialisation of a term can be seen as creating subwires. The unification can also be described as pointers traversing graphs corresponding to terms.
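To illustrate this “connection of wires” reading of unification, here is a hypothetical sketch using a union-find structure over term-graph nodes: each unification step wires two equivalence classes of nodes together, which is essentially the idea behind the almost-linear unification algorithms. The encoding and names are mine, and the occurs-check is omitted.

```python
# Unification as wiring: merging equivalence classes of nodes in a
# term graph. A variable is a node with functor None; unifying it
# with another node just connects the two wires (union-find merge).

class Node:
    def __init__(self, functor=None, args=()):
        self.functor = functor    # None marks a variable
        self.args = list(args)
        self.parent = self        # union-find pointer

def find(n):
    while n.parent is not n:
        n.parent = n.parent.parent  # path halving
        n = n.parent
    return n

def union(a, b):
    a, b = find(a), find(b)
    # keep a compound node as class representative when possible
    if a.functor is None:
        a.parent = b
    else:
        b.parent = a

def unify(a, b):
    a, b = find(a), find(b)
    if a is b:
        return True               # already on the same wire
    if a.functor is None or b.functor is None:
        union(a, b)               # wire a variable to the other node
        return True
    if a.functor != b.functor or len(a.args) != len(b.args):
        return False              # clash: incompatible functors
    union(a, b)
    # connecting two compound nodes creates "subwires" between
    # their corresponding arguments
    return all(unify(x, y) for x, y in zip(a.args, b.args))

# f(X, g(Y)) ~ f(g(Z), X): wiring identifies X with g(Z), then
# g(Y) with g(Z), hence Y with Z.
X, Y, Z = Node(), Node(), Node()
t1 = Node('f', [X, Node('g', [Y])])
t2 = Node('f', [Node('g', [Z]), X])
print(unify(t1, t2))  # True
```

The specialisation of a term mentioned above appears here as the subwires created between the arguments of two connected compound nodes.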

**Liberalised typing.** The Transcendental Syntax can be seen as a kind of realisability theory with respect to the Geometry of Interaction, with the choice of convenient realisers (here, constellations). Interestingly, the model of stars and constellations can naturally represent logic programs, but also models which are dynamical systems, such as Wang tiles, the abstract tile assembly model (aTAM) or the flexible tiles used in DNA computing. We can imagine providing a type theory for these models.

**A better understanding of complexity.** As constellations are generalisations of Girard’s flows (mostly studied by Bagnol) and of Seiller’s interaction graphs, we can imagine applications in implicit computational complexity. Moreover, there is the belief that a better understanding of logic could lead to a better understanding of complexity, since logic can be understood as constraints on computation. The Transcendental Syntax may be able to give a new presentation of predicate logic and be related to descriptive complexity, where predicate logic has an important role.

**From DNA computing to logic.** Some works describe an encoding of Horn clauses, logic programming or Robinson’s resolution using DNA computation itself. This opens the possibility of logical checking and computation carried out by molecular means, interpreted through the lens of the Transcendental Syntax.