Logical interaction and the social life of proofs
Prerequisites: Curry-Howard correspondence, linear logic, proof-nets, basic knowledge of the Transcendental Syntax
I illustrate the synthetic part of the Transcendental Syntax: the distinction between meaning as factory and meaning as use. All references to sociological phenomena are purely metaphorical.
From the very beginning of linear logic, there was the intuition that it was about “interaction”, for several good reasons.
- Formulas in linear logic can be seen as limited resources or “concrete” objects. There is a more primitive implication $A \multimap B$ which can be connected to a formula $A$: the two occurrences of $A$ interact, are consumed, and finally produce $B$.
- The monolateral (one-sided) version of the sequent calculus for linear logic makes this more explicit. The axiom and the cut rule can be seen as two kinds of interaction between $A$ and $A^\bot$. The latter can be removed by a computational procedure representing a sort of elimination/resolution of contradiction. Proof-nets make this idea more natural.
- Girard’s geometry of interaction (in its original form) sees the computational content of logic as emerging from local interactions between computational entities drawn from mainstream mathematics. In this context, we are interested in the interaction between dual objects of the same kind (permutations, partitions, partial isometries, …).
I believe that Girard’s Transcendental Syntax is currently the only place where the idea of “logical interaction” can be given a true meaning, even though it has been a major intuition for a long time.
The stellar resolution
In the Transcendental Syntax, all objects live within the stellar resolution, a model of computation whose elementary objects are called constellations. Constellations can be understood as sets of molecule-like objects called stars, which can interact with each other along compatible rays and form tilings representing networks. Each such tiling for a constellation C (there can be infinitely many of them but only finitely many “correct” ones) can be evaluated into a star by a step-by-step edge contraction propagating information to the unconnected rays. This procedure can fail in case of conflict; it is basically a resolution of conflicts in a network (hence the name “stellar resolution”). The set of all evaluations of networks constitutes a new constellation Ex(C) called the “execution” of C.
I have a funny experiment to illustrate how this model computes. Imagine a social network as a big constellation where each person is a star and the rays are their posts. Let’s say that people propagate some news; some posts will overlap and interact with each other in order to complete the information. Imagine someone writing “Today, something happened”. Another one writes “Today, a car accident happened”. The network now gets the information that “something” is “a car accident”. Later, another one says “from my information, it was a bike accident”. Contradiction! The whole network collapses because of its incoherence. The execution Ex(SocialNetwork) of the constellation is the set of coherent information we can infer.
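The collapse in this story is essentially a failed unification. Here is a minimal sketch of the idea (my own toy illustration, not Girard’s actual formalism): posts are first-order terms, interaction is term unification, and a clash makes the evaluation fail. All names below are assumptions made for the example.

```python
# Toy model of the social-network story: rays/posts are terms, interaction
# is first-order unification (occurs-check omitted for brevity).
# Terms: variables are capitalised strings, constants lowercase strings,
# compound terms are tuples (functor, arg1, ...).

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    """Follow variable bindings in substitution s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(t1, t2, s):
    """Return an extended substitution, or None on conflict."""
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if is_var(t1):
        return {**s, t1: t2}
    if is_var(t2):
        return {**s, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None  # clash: the whole network "collapses"

post1 = ("happened", "today", "X")             # "Today, something happened"
post2 = ("happened", "today", "car_accident")  # completes the information
post3 = ("happened", "today", "bike_accident") # contradicts post2

s = unify(post1, post2, {})
print(s)                       # {'X': 'car_accident'}
print(unify(post2, post3, s))  # None: incoherence, no evaluation survives
```

The successful unification plays the role of a correct tiling being contracted into a star; the `None` outcome is the conflict that removes a network from the execution.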
This is very similar to the local interactions occurring in physics and biology. Actually, the stellar resolution can be seen as a generalisation of a model called flexible tiles, which is used in DNA computing. Moreover, it seems that DNA-based computation is able to simulate Horn clauses and clausal resolution, which are very close to the stellar resolution.
More interestingly, we can make two constellations C and C’ interact by merging them into a bigger constellation C + C’, which we can then execute into Ex(C + C’). The stellar resolution is thus a uniform space of entities which compute by local interaction.
A sociologic interpretation
This is not the first time a “sociology of proofs” has been mentioned to illustrate the interaction occurring in linear logic. Authors such as Samuel Tronçon or Arnaud Valence mention this idea in their PhD theses. Girard uses the terminology of vehicle and format (“gabarit” in French), the latter being a kind of specification of the vehicle. In his geometry of interaction and some papers on the Transcendental Syntax, he also used terms related to sociology. I would like to illustrate the Transcendental Syntax through a friendly sociological interpretation.
We can see a set of constellations as a society where constellations represent individuals. The Transcendental Syntax then suggests two ways of putting them into groups. If we take the illustration very seriously, an individual is not necessarily a person: the illustration remains coherent if we accept animals, objects or even an abstract notion of environment as individuals. The term entity would probably be more general. The model of computation we choose as a basis has an effect on the construction of such entities. In our case, we can see both an individual and its environment as entities of the same kind: the asymmetric duality subject/object becomes symmetric in some sense. I haven’t read Hegel so I’m not sure it is related, but I believe it is an instance of the idea that consciousness and reality are made of the same “texture/material”.
Typing by stereotypes / labelling
We select a finite sample of representative individuals forming a group ~A of tests for a category A (e.g. male, female, potentially criminal, nice people). We would like to judge whether an individual x can be put into the group A. In order to do so, we test x against all individuals in ~A, with a definition of what “passing a test” means. If x passes all the tests of ~A, then it is part of the group A. Such a group is called a type.
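As a concrete toy sketch of this scheme (the numeric universe and the `passes` relation below are purely illustrative assumptions, not part of the theory): a type is given by a finite set of tests, and membership means passing all of them.

```python
# "Typing by stereotypes": membership in a type = passing a finite test set.

def in_type(x, tests, passes):
    """x belongs to the type defined by `tests` iff it passes every test."""
    return all(passes(x, t) for t in tests)

# Toy example: stereotyping "even numbers" by divisibility tests.
passes = lambda x, t: x % t == 0

print(in_type(4, [2], passes))     # True
print(in_type(7, [2], passes))     # False
# A badly chosen sample of tests misses cases: with tests [2, 4],
# the perfectly even number 6 is rejected by the stereotype.
print(in_type(6, [2, 4], passes))  # False
```

The last line is the point made below: the stereotype is tractable (a finite check) but arbitrary, and it can reject individuals that intuitively belong to the category.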
Because the choice of sample is arbitrary and its association with a concept subjective, A can be said to be a stereotype. It clearly misses some cases (some individuals considered men do not pass the standard tests for men) but has the advantage of being tractable: we can easily check that an individual is part of a group. It provides a kind of relative certainty with respect to the stereotypes (subjective and partial, but still a kind of certainty).
Applied to objects, this corresponds to labelling certified objects or validating a vaccine. The tests are not perfect but at least we get what we expect (the vaccine should have an effect against a virus without too many side effects). This world of labelling is rather convenient and comfortable because we can easily name things and say what they are or do. A good factory ensures that a knife can be efficiently used for cooking, but I can still kill someone with it, and that is probably not something the factory especially expects. This leads to Wittgenstein’s idea of meaning-as-use.
Typing by interaction / use
Another way to classify individuals is to look at how they interact with each other. The previous idea of “passing a test” now becomes “two individuals interact correctly” (notice that test and tested become symmetric, of the same kind). We start by choosing a criterion (an orthogonality relation) defining what we mean by a good interaction between two individuals. We can form an arbitrary group by selecting any individuals we want. The orthogonal ~A of a group A is the set of all individuals which interact well with every individual of A. Such a group is called a pre-conduct (or pre-behaviour).
This idea of typing is far more flexible but also very complex. Depending on the orthogonality relation, the orthogonal can be huge, and it is very difficult to reason with such a notion of typing. An individual can be part of infinitely many types, and subtypes naturally exist. However, the advantage is that everyone can be put into a type, which is usually not the case with typing by stereotypes.
What we call a conduct or behaviour is a pre-conduct such that A = ~~A, a closure under interaction implying that A is characterised by a set of tests ~A (possibly infinitely many). These tests represent all the possible interactions of the set of constellations A: infinitely many tests for the infinitely many uses of an object.
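On a finite universe, the orthogonal and the bi-orthogonal closure can be computed directly. The sketch below is an illustrative assumption of mine: the “individuals” are small integers and the orthogonality relation is coprimality, which has nothing to do with constellations but exhibits the same closure behaviour.

```python
from math import gcd

def orthogonal(group, universe, ortho):
    """~A: every individual interacting well with all members of the group."""
    return {x for x in universe if all(ortho(x, a) for a in group)}

# Toy universe: integers 1..12; "interacting well" = being coprime.
universe = set(range(1, 13))
ortho = lambda x, y: gcd(x, y) == 1

A = {6}                                    # an arbitrary group (pre-conduct)
A_perp = orthogonal(A, universe, ortho)    # its tests ~A
A_biperp = orthogonal(A_perp, universe, ortho)

print(A_perp)    # {1, 5, 7, 11}: the numbers coprime with 6
print(A_biperp)  # the conduct generated by A; note that A is contained in it
# Taking the orthogonal a third time gives ~A back: ~(~~A) = ~A,
# so ~~ is indeed a closure operator and A_biperp satisfies B = ~~B.
print(orthogonal(A_biperp, universe, ortho) == A_perp)  # True
```

Here {6} is not a conduct (its bi-orthogonal is strictly larger), but its bi-orthogonal closure is: it contains exactly the individuals indistinguishable from members of A by the tests in ~A.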
If there are too few types, the society is less expressive, which sounds quite boring but is also problematic: some individuals are not accepted because their behaviour is not captured by the defined types. If there are too many accepted behaviours, it becomes chaotic and you will have to remember the pronoun of every individual. But more importantly, we would like the stereotypes to be at least confirmed sometimes (stereotypes usually do not come from nowhere). This takes the form of an adequacy theorem (as in classical realisability) stating that the tests for A should ensure that an individual x passing them is indeed in the conduct corresponding to A. In other words, we would like the stereotypes to be not so wrong, although they will usually miss some cases.
Cause and consequence
It is not obvious in this interactive world of independent entities, but it is possible to make a notion of cause emerge. In the case of types as labels, it is the usual notion of logical consequence. Nothing forbids us from defining the meaning of logical consequence as a symbol with some rules, as in natural deduction or sequent calculus, and then checking that it works as expected (the so-called harmony in natural deduction, or cut-elimination in sequent calculus).
In the case of typing by interaction/use, the notion of cause/consequence is deeply related to the (potentially infinite) interaction/use of entities. If, whenever a constellation x interacts with any constellation of a conduct A, the result of that interaction is of conduct B, then x is of conduct A -> B. A perfect coffee machine c (represented as a constellation) would be in the conduct Coin -> Coffee but also in Credit card -> Coffee (notice the multiple typing). If we look at the complex machinery of interaction occurring in c, the cause of getting a cup of coffee is the interaction between some stars. However, when generalising the idea of consequence to types/conducts, it may be intractable or undecidable to tell what causes some conduct A, because of the potentially infinite possible interactions. What caused our existence? Can we even know?
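The coffee-machine example can be sketched in a finite world, where the universal quantification over A becomes a finite check. Everything below (the `interact` function, the conducts `Coin`, `Card`, `Coffee`, the machine itself) is an assumption invented for the illustration; real conducts would require checking infinitely many interactions.

```python
# Finite-world sketch of the functional conduct A -> B:
# x is of conduct A -> B when interacting x with any member of A
# yields a result belonging to B.

def has_arrow_type(x, A, B, interact):
    return all(interact(x, a) in B for a in A)

# Toy conducts (finite sets of "individuals").
Coin = {"coin"}
Card = {"credit_card"}
Coffee = {"coffee"}

# A perfect coffee machine: any accepted payment produces a coffee.
machine = lambda payment: "coffee" if payment in {"coin", "credit_card"} else "error"
interact = lambda m, a: m(a)

print(has_arrow_type(machine, Coin, Coffee, interact))  # True: Coin -> Coffee
print(has_arrow_type(machine, Card, Coffee, interact))  # True: multiple typing
print(has_arrow_type(machine, {"kick"}, Coffee, interact))  # False
```

The same machine inhabits both arrow conducts at once, which is the “multiple typing” mentioned above; the intractability appears as soon as A is infinite, since `all` then never terminates.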
Digression: dimensions of interaction
This is a very vague idea, but notice that local interactions happen at an atomic scale (DNA, atoms, molecules, …) but can also be studied at a larger scale, with proofs and programs, and even between entities such as humans and their environment. Imagine a human interacting with a computer, running a program which interacts with its environment, itself triggering electrical interactions in the circuits of the computer. The human user is also subject to interactions between brain cells.
I am currently wondering if it is possible to seriously talk about such phenomena of different dimensions/scales of interaction. For instance, is there a kind of hierarchy of structures of interaction? Can we see a constellation as a star and a pre-type as a constellation, and so on? Or should we consider a theory of interaction as a sort of toolbox working from a certain point of view?