Boris ENG

PhD Student in Computer Science, Team LoVe, LIPN (Université Sorbonne Paris Nord, France) [last name][first name]@hotmail.fr

Girard's conceptual knitting (version 4)

philosophy

Prerequisites: the Curry-Howard correspondence, linear logic, proof-nets, basic knowledge of the Transcendental Syntax

This text may be extended further as my understanding progresses. Last update: April 20, 2021.

Context: the Curry-Howard correspondence

The Curry-Howard correspondence shows that some logical systems match some computational systems (typically, natural deduction for minimal logic and the simply typed lambda-calculus), but does that mean that logic and computation are actually the same thing? It seems to me that the current Curry-Howard correspondence is merely a point where both meet: an intersection between reasoning and computation. We will see that even though computation and logic can be entangled and mixed, their respective natures may be more subtle.
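
To make this concrete (a standard textbook illustration, not something specific to this text), here is a minimal OCaml sketch: the K combinator is both a program and, read through its type, a natural deduction proof of the minimal-logic tautology A => (B => A).

```ocaml
(* Curry-Howard in miniature: the term below is a program, but its type
   'a -> 'b -> 'a can also be read as the minimal-logic tautology
   A => (B => A); the term is then a proof of it. *)
let k : 'a -> 'b -> 'a = fun x _y -> x

(* Likewise, composition proves (A => B) => ((B => C) => (A => C)). *)
let compose : ('a -> 'b) -> ('b -> 'c) -> 'a -> 'c = fun f g x -> g (f x)

let () =
  (* Running the "proofs" as ordinary programs. *)
  Printf.printf "%d %d\n" (k 1 2) (compose (( + ) 1) (( * ) 2) 3)
```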

The Blind Spot and the logical space

Traditions and the unreachable essence of logic

When looking at the history of formal logic, we can observe that it was studied from an essentialist point of view for a long time. This means that the meaning of the logical rules was supposed to be prior to their existence: the so-called rules of reasoning. The evolution of logic followed the conception and understanding we had of it, leading to different successive ideas of what logic is.

This “biased” study of logic was still useful and produced several results such as Gödel’s incompleteness theorems or connections with computer science through the Curry-Howard correspondence and, later, linear logic. It also shows one virtue of essentialism: its authority. We get both order/productivity and irrefutability (we can easily check that we indeed followed the definitions).

Revolution and the chaotic existence of logic

The idea of the Blind Spot (to which Girard often referred) is close to the epistemological rupture of Bachelard and Althusser, and linear logic’s proof-nets can be understood as a paradigm shift in the sense of Kuhn. The idea suggests that our access to knowledge is obstructed by obstacles we have to remove in order to reach the “reality” of a concept. A science evolves by “revolutions” and by doubting its prior foundations. Although this idea can be vague, it seems rather relevant in the context of formal logic. The point is that our understanding of logic is not wrong but biased, living in a closed world which prevents us from reaching its hidden mechanisms (the blind spots). If we take any logical system, its proofs are limited to its rules and formulas (what Girard calls a “bunker”). How can we free logic from our preconceived understanding of it?

The logical space is the space in which logical objects (relative to a given context) live. The first satisfying materialisation of proofs as mathematical objects comes with Gentzen’s natural deduction and sequent calculus. The space of proofs is made of well-formed trees constructed with rules. This space is very restricted because we cannot materialise wrong proofs and study the effect of their existence; we can say that this space is opaque in some sense. Linear logic extended the logical space by making explicit the management of resources in reasoning: we obtain two conjunctions, two disjunctions and two connectives ? and ! related to the management of hypotheses as resources. Later, proof-nets, first seen as a syntactic convenience, actually constitute a further extension of the logical space. Proofs are now very general hypergraphs linking formulas; some are wrong and others are correct, and we can distinguish them with a correctness criterion (typically Danos-Regnier’s) by testing them against an appropriate set of tests. Girard’s Geometry of Interaction programme goes even further by representing Danos-Regnier’s correction graphs with the same kind of objects as proofs. The Transcendental Syntax is the natural successor of that logical exploration. As we will see, it rests on a strong conceptual knitting, which seems to be an interesting update of Kant’s epistemology taking computation into account.

A computational update of Kant’s epistemology

Analytic / Answers

In order to ground the foundations of logic, we have to find a very transparent and objective computational space (at least more so than the sequent calculus, proof-nets or the Geometry of Interaction) from which logic will emerge. Such a space is said to be “analytic”. The lambda-calculus is an interesting candidate for its simplicity, but not an adequate one because of the hidden complexity of the beta-reduction. Moreover, the beta-reduction is an external procedure, and we want all computation to be self-operated/internal/elementary/primitive in some sense, similarly to how chemical or biological interactions work. We would like a system “which has nothing to hide”. Girard already showed, in the context of his Geometry of Interaction programme, that we could use first-order terms to represent the physical addresses of objects occurring in a proof. He generalises that idea into a model of computation called stars and constellations. These computational objects constitute a system where logic is reconstructed from our primitive intuition of space: locations/addresses, associations/routing and propagation of information. I believe that Girard’s analytics corresponds to an idea of computation which can be described by topological operations (for instance, Seiller’s interaction graphs compute by juxtaposition of graphs).
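
To fix intuitions only, here is one possible way to picture such objects as data types. This is a deliberately naive sketch resting on my own assumptions about the representation (the names `ray`, `star`, `dual` and the exact shape of the types are mine), not the official definition of stars and constellations.

```ocaml
(* A naive sketch (my own guess at a concrete representation, not the official
   definitions): first-order terms serve as addresses/locations; rays are terms
   whose head carries a polarity; stars are finite sets of rays; constellations
   are multisets of stars.  Interaction would connect rays of opposite polarity
   whose bodies are unifiable, in the spirit of Robinson's resolution (see the
   unification sketch further below). *)
type term =
  | Var of string                (* x, y, ...       *)
  | Fun of string * term list    (* f(t1, ..., tn)  *)

type polarity = Pos | Neg | Neutral

type ray = { pol : polarity; body : term }

type star = ray list              (* a finite set of rays       *)
type constellation = star list    (* a finite multiset of stars *)

(* Two rays are candidates for connection when their polarities are opposite;
   whether they actually connect would then depend on the unifiability of
   their bodies. *)
let dual r1 r2 =
  match r1.pol, r2.pol with
  | Pos, Neg | Neg, Pos -> true
  | _ -> false
```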

The analytic space is the space of answers. What we do is nothing more than computing on syntactic objects. As long as we do not interpret the answers we compute, they are meaningless/objective in some sense. The space of answers can be divided into two parts: the constat (result) and the performance (procedure). The constat is the final answer we get and the performance is the procedure leading to that answer. The first one is explicit (final: no further operation can be done) and the second one is implicit in the sense that access to the information it conveys is indirect: we need to start a computation. This corresponds to Kant’s analytic a posteriori and a priori. In the sequent calculus, the implication => is constative because it is merely accumulated (a simple left/right introduction of a symbol) while the turnstile |- is performative, because sequents are destructed/altered by the cut-elimination procedure. This is a difference unknown to semantic realism, since => and |- are identified by semantics.
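
As a toy programming illustration (mine, not from the text): a suspended computation plays the role of a performance, implicit until we run it, while its value is the constat, explicit and final.

```ocaml
(* Toy illustration: a suspended computation is a performance (implicit: we
   must run it to access its information), while its result is a constat
   (explicit: final, nothing left to do). *)
let performance : unit -> int = fun () -> List.fold_left ( + ) 0 [ 1; 2; 3; 4 ]
let constat : int = performance ()   (* computation: from implicit to explicit *)

let () = Printf.printf "constat = %d\n" constat
```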

Synthetic / Questions

The synthetic is where the meaningless gets its meaning. Giving a meaning to something is arbitrary, dependent on a subject. We represent questions with formulas and answers with proofs encoded in the computational space described above. Giving meaning is formatting the meaningless, in the same sense that simple types describe the behaviour of untyped lambda-terms in terms of input/output relationships. As with typing, the absence of format leads to chaotic computation (the untyped lambda-calculus).

The emergence of meaning is divided into two kinds: the usine (factory) and the usage (use). The usine is related to a priori typing (Church-like). We define an arbitrary set of tests (usually also encoded in the computational space) defining logical correctness, and if a computational object passes all the tests it is certified as correct (think of proof-structures and proof-nets). It corresponds to Kant’s synthetic a posteriori since the checking is empirical. One can also think of a car or a vaccine tested in a factory/laboratory in order to certify its use. The usage corresponds to Wittgenstein’s so-called meaning-is-use. The computational objects are classified depending on how they interact with each other. It is a posteriori typing (Curry-like) where types are descriptions of computational behaviours. They are usually difficult to reason with and impossible to characterise finitely. It corresponds to Kant’s synthetic a priori, which is impossible to predict / independent from experience. It can be illustrated by the effects of a vaccine being impossible to fully predict (because of the infinite number of possible situations and contexts it can be used in) or by a car doing things we didn’t expect.
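
Here is a toy sketch of the distinction, with made-up names and tests: the factory certifies an object against a finite battery of tests chosen a priori, whereas the use is the (generally infinite) set of interactions the object may enter into, which can only ever be sampled.

```ocaml
(* Toy sketch (names and tests are mine).  The "usine" certifies a candidate
   against a finite, arbitrarily chosen battery of tests; the "usage" is the
   (generally infinite) set of interactions it may have, which no finite
   battery of tests can exhaust. *)
let candidate : int -> int = fun n -> n + n

(* Usine: finite tests chosen a priori; passing all of them grants the label. *)
let factory_tests = [ (0, 0); (1, 2); (21, 42) ]
let certified f =
  List.for_all (fun (input, expected) -> f input = expected) factory_tests

(* Usage: behaviour in arbitrary contexts; we can only ever sample it. *)
let sample_usage f inputs = List.map f inputs

let () =
  Printf.printf "certified: %b\n" (certified candidate);
  List.iter (Printf.printf "%d ") (sample_usage candidate [ 5; 7; 1000 ]);
  print_newline ()
```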

The factory and the use are related by stating that the tests of the factory ensure a good use; otherwise no one would accept to drive a car. We need a reasonable connection between the tests and the uses. It is usually formulated as an adequation theorem telling us that if a computational object passes all the tests certifying answers of a formula A, then it indeed belongs to the behaviour corresponding to A. Moreover, logical correctness (factory) should ensure a terminating cut-elimination (use). This actually gives a new status to usual proofs, for instance those coming from the sequent calculus: when we write a proof, it implicitly comes with “tests” telling that the proof is well-formed, what we can call the “proof of the proof”. The proof and the proof of the proof should be encoded as the same kind of object, with a sound interaction (we still have to define what a sound interaction is).

Interaction, testing and certainty

The idea of tests is close to Popper’s idea of refutability by experiments. This idea suggests that a claim/theory is valid until one reaches a refutation of it by testing it against an experiment, thus distinguishing “true science” from “pseudo-science”. Even though it appears perfectly valid, it suffers from its essentialist root. A claim/theory exists within a subjective/arbitrary framework, and the idea of refutation itself may be opaque (a test of temperature depends on both the tested/water and the tester/thermometer, but also on their interaction/contact). Moreover, the “testing” of a claim/theory may be infinite, meaning that we may never reach absolute certainty. In the Transcendental Syntax, we make the test, the tested and the testing computationally explicit.

Although a proof of a claim and a proof of its negation cannot usually coexist, they virtually appear and play a role in proof-nets and in the Transcendental Syntax. This may be linked to Hegel’s contradictory foundations, where a concept rests on its own negation, which in turn rests on the former, in a conflicting relationship. In the Transcendental Syntax, formulas and their negations are sets of constellations (computational objects) which satisfy a certain criterion of orthogonality (a relation specifying what we mean by a sound interaction) when interacting together. They both exist as computational entities but only one “wins” when trying to resolve the conflict. The winner is considered to be “true/valid/provable” in the logical sense.

Knitting / Separation

We describe what links or separates the four classes of our conceptual knitting.

Constat/Performance. Both are separated by Church and Turing’s undecidability. An implicit computational artefact cannot be reduced to the set of its results because we cannot always predict their existence unless we do a computation (Turing’s halting problem). In other words, the implicit cannot simply be replaced by the explicit. They are linked by computation, a means of going from performance/implicit to constat/explicit.
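
A classic, informal illustration (mine, not the author’s): the Collatz iteration below is a performance whose constat cannot, as far as anyone knows, be predicted for every input without actually running the computation.

```ocaml
(* Classic (informal) illustration of the gap: nobody knows whether this loop
   terminates for every starting value (the Collatz problem), so the set of
   results cannot be read off the program; we can only run it and wait. *)
let rec collatz_steps n steps =
  if n = 1 then steps
  else if n mod 2 = 0 then collatz_steps (n / 2) (steps + 1)
  else collatz_steps ((3 * n) + 1) (steps + 1)

let () = Printf.printf "collatz_steps 27 = %d\n" (collatz_steps 27 0)
```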

Usine/Usage. Both are separated by Gödel’s incompleteness theorems, which show the existence of unprovable formulas and the gap between provability and truth. The “format” we create for logic cannot be reduced to its use, because the actual use can sometimes escape the boundaries we imagined and the use we expected.

Performance/Usine. The factory has a subjective part because we choose what kind of tests we want to perform for certification/labelling; some tests are better than others. Nonetheless, it also has an explicit part: performance is part of the usine, because testing itself is performative, explicit and objective: we just have to perform a computation. What is contingent is the interpretation we make of the result.

Performance/Usage. If we look at cut-elimination, the result conveys a meaning but it is also a computational performance. The performance is thus also part of the usage, but the usage cannot be reduced to it, because formulas are determined by the (usually infinite) set of interactions they can have, something overwhelming the simple idea of explicit performance. Unlike in semantic realism, the idea of meaning is contextual and subjective rather than objective.

Comments

The logico-functional space. Girard’s chosen analytics is actually a variant of Robinson’s resolution and very close to resolution-based systems (for instance Kowalski’s connection graphs, a.k.a. clause graph resolution). Therefore, we can do logic programming with stars and constellations; but since the Transcendental Syntax reconstructs linear logic, and since from linear logic we can naturally simulate the lambda-calculus, we obtain a space where functional programming and logic programming meet.

The analytic space of unification. If we understand the idea of the analytic space as a computational model whose computation can be strictly described topologically, then Robinson’s resolution is analytic, because Herbrand’s unification can be seen as a connection of wires. The specialisation of a term can be seen as subwires, and unification can also be described as pointers traversing the graphs corresponding to terms.
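
For readers who want something concrete, here is a minimal, textbook sketch of Robinson-style first-order unification, reusing the naive term representation from the earlier sketch; it only illustrates unification itself, not the way the Transcendental Syntax uses it.

```ocaml
(* A minimal, textbook sketch of first-order unification in the style of
   Robinson, reusing the naive term representation from the earlier sketch. *)
type term = Var of string | Fun of string * term list

(* A substitution is an association list from variable names to terms;
   [apply] resolves variables through the substitution recursively. *)
let rec apply subst t =
  match t with
  | Var x -> (match List.assoc_opt x subst with
              | Some t' -> apply subst t'
              | None -> Var x)
  | Fun (f, args) -> Fun (f, List.map (apply subst) args)

(* Occurs check: does variable [x] occur in term [t]? *)
let rec occurs x t =
  match t with
  | Var y -> x = y
  | Fun (_, args) -> List.exists (occurs x) args

(* Returns [Some subst'] extending [subst] if the terms are unifiable. *)
let rec unify t1 t2 subst =
  match apply subst t1, apply subst t2 with
  | Var x, Var y when x = y -> Some subst
  | Var x, t | t, Var x -> if occurs x t then None else Some ((x, t) :: subst)
  | Fun (f, args1), Fun (g, args2) ->
      if f <> g || List.length args1 <> List.length args2 then None
      else
        List.fold_left2
          (fun acc a b ->
            match acc with None -> None | Some s -> unify a b s)
          (Some subst) args1 args2

(* Example: f(X, g(Y)) unifies with f(a, g(b)) under {X := a, Y := b}. *)
let () =
  let t1 = Fun ("f", [ Var "X"; Fun ("g", [ Var "Y" ]) ]) in
  let t2 = Fun ("f", [ Fun ("a", []); Fun ("g", [ Fun ("b", []) ]) ]) in
  match unify t1 t2 [] with
  | Some s -> List.iter (fun (x, _) -> Printf.printf "%s is bound\n" x) s
  | None -> print_endline "not unifiable"
```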

Liberalised typing. The Transcendental Syntax can be seen as a kind of realisability theory with respect to the Geometry of Interaction, with the choice of a convenient realiser (here, constellations). Interestingly, the model of stars and constellations can naturally represent logic programs but also models which are dynamical systems, such as Wang tiles, the abstract tile assembly model (aTAM) or the flexible tiles used in DNA computing. We can imagine providing a type theory for these models.

A better understanding of complexity. As constellations are a generalisation of Girard’s flows (mostly studied by Bagnol) and of Seiller’s interaction graphs, we can imagine applications in implicit computational complexity. Moreover, there is the belief that a better understanding of logic could lead to a better understanding of complexity, since logic can be understood as constraints on computation. The Transcendental Syntax may be able to give a new presentation of predicate logic and be linked to descriptive complexity, where predicate logic plays an important role.

From DNA computing to logic. Some works describe an encoding of Horn clauses, logic programming or Robinson’s resolution using DNA computation itself. This opens the possibility of logical checking or computation carried out by such means, through the interpretation given by the Transcendental Syntax.