Eng (Sambo) Boris

PhD Student in Computer Science, Team LoVe, LIPN (Université Sorbonne Paris Nord, France) domain_name@hotmail.fr

Questions (currently 10 available)

constantly updated

Here are some questions I’m currently interested in. These questions may be superficial, meaningless, or may even have already been solved.

  1. How are logic/computation related to space/time? Linear Logic seems to provide an understanding of Logic through the primitive intuitions of space and time. Mathematical proofs, which were seen as sequential mental acts evolving through time, became proof-nets describing a kind of topological operation of Logic. The loss of time is recovered with polarization and focalization, exhibiting a mechanism of dialogue between positives and negatives. How do interaction and information evolve in space and time? What can Logic teach us about the study of time (e.g. sequentiality and concurrency)? What can the study of space (e.g. topology) teach us about the mechanisms of logic?

  2. Does a pragmatic approach to computation solve the identity crisis of Logic? It was once thought that classical logic didn’t have any computational interpretation. Then came an interpretation based on innocent-looking instructions (such as call/cc from Scheme); a small sketch of this appears after this list. Can the practical use of programming bring significant creativity to, and understanding of, Logic? What is the logical interpretation of the usual entities of programming? What can the computational interpretation of some great axioms and theorems teach us?

  3. How are Logic and Computation related to Cognition and Nature? Linear Logic has some links with Quantum Computation (quantum circuits, type systems). How is it connected to quantum computation and quantum information? Is there any true connection at all, or is it superficial? Is it an exaggeration to relate cognition, computation, nature, quantum and logic? Logic and computation seem to be part of nature, but does nature itself use them as well (for instance, does nature compute and use Logic as a normative constraint)?

  4. What is the origin of the historical prejudices and misconceptions about Logic? What does the history of Logic teach us about the way we think and about what and why we believe? Similarly to Physics and Biology, should Logic be considered a science (observing “coherent interaction” as a natural phenomenon)? It may be interesting to investigate the biological, psychological and sociological aspects of Logic. Why was Logic thought of and organized the way it was? How can the understanding of our failures prevent us from going in the wrong direction?

  5. What makes something computationally difficult or easy? Implicit Computational Complexity allows one to characterize computational complexity classes by syntactic restrictions on formal languages (a toy illustration appears after this list). It would be interesting to explore the deep structure of these restrictions. It seems that algebraic tools may give an understanding we didn’t have with the algorithmic and classical point of view.

  6. What are Logic and Computation, really, and how are they similar or different? Even today it is not clear what computation and logic are. The Curry-Howard isomorphism tells us that Logic and Computation intersect somewhere: a point where Logic and converging, coherent computation meet (a minimal illustration appears after this list). Something similar to what separates the analytic from the synthetic seems to deeply separate computation from what we call logic. Logic has both a restrictive (existential) and a normative (essential) effect on computational behaviors (untyped computation), both of which can be formulated within Proof Theory thanks to the cut-elimination procedure. What about purely locative and interactional Turing Machines? Where do they stand?

  7. What is the common ground of syntactic formats? Some formalisms took a particular format mainly for historical or practical reasons but are equivalent to other ones. For instance, the inductive description of regular languages, finite-state automata, regular grammars, regular expressions and read-only Turing machines all define the regular languages (two such formats are compared in a sketch after this list). Can they be abstracted so as to keep only their interactional potential and their use of information/data? Although similar, these formats are far from being strictly equivalent. How do they differ? How do concepts (specifications) become syntax (usable, sharable entities)? Is a study of the “administrative” aspects of mathematical practice and formats possible? The BHK interpretation is such a concept involving several syntaxes. And Linear Logic shows that some formats (proof-nets) are more convenient and less opaque than others.

  8. What is non-determinism and what is its role in the human/machine duality? A kind of non-determinism seems to exist within classical logic, especially in the classical Sequent Calculus. Although there is a kind of connection between the two branches of the law of excluded middle (prove ~A, then focus on A by retraction), proving a statement of the shape (A or ~A) amounts to trying to prove A, then ~A, then returning to A; the call/cc sketch after this list (for question 2) makes this backtracking concrete. We can’t predict the outcome (if there is one). We can also remark that classical logic (in its pure form) lacks computational content (non-confluence) but is the more natural way of reasoning for humans (even though it was rejected by intuitionism). Does all of that have any connection at all with the separation between humans and machines?

  9. How do mathematical concepts emerge from thought? How do intuitive concepts such as sets, numbers and functions emerge? Numbers, for instance, have several definitions (Church, Parigot, Peano, von Neumann…); two of them are contrasted in a sketch after this list. Is there any kind of “dialogue” involving our mind (subject/subject) and our environment (subject/object) ensuring this emergence? Can we classify the concepts emerging through these interactions? For instance: plurality, unity, existence, universality… What is the correlation between that and our conception of space and time? (Are numbers emerging from our conception of time and sets from our conception of space?)

  10. What is the point of convergence of the dialectical evolution between humans and machines? The point zero is the point where humans first thought about automating their activities (before computers came). Let’s call these automating mechanisms “machines”. Humans made machines to improve their lives and to free themselves from repetitive and meaningless (indeed subjective and evolving) behaviors. The more machines are improved, the more the concept of human intelligence is redefined. During the era of Artificial Intelligence, machines could approximate what we understood as being human intelligence, but then redefined the meaning of intelligence itself in the process. The creation of challenges for machines implicitly leads to the improvement of humanity (always a step ahead of machines). How will the roles of humans and machines merge more and more through time? Where does that dialectical evolution lead? What are the sociological/psychological effects of this conflict?
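
A few informal sketches related to some of the questions above (illustrations only, not claims).

Regarding question 2 (and the backtracking of question 8): a minimal Haskell sketch, assuming the mtl/transformers packages for Control.Monad.Cont, of the continuation-passing reading of classical principles. The names excludedMiddle and peirce are mine, chosen only for illustration.

```haskell
import Control.Monad.Cont (Cont, callCC)

-- Under the continuation-passing reading, the negation ~A becomes A -> r
-- for some fixed answer type r. The double negation of excluded middle,
-- ~~(A \/ ~A), is then inhabited by an ordinary functional program:
-- first answer "Right" (i.e. ~A); if an A is ever supplied, backtrack
-- and answer "Left" with it.
excludedMiddle :: (Either a (a -> r) -> r) -> r
excludedMiddle k = k (Right (\a -> k (Left a)))

-- Griffin's observation: call/cc can be typed by Peirce's law,
-- ((A -> B) -> A) -> A. Specialized to Haskell's Cont monad it is callCC itself:
peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
peirce = callCC
```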
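
Regarding question 5: a loose toy rendering, in Haskell, of the normal/safe argument discipline of Bellantoni and Cook's safe recursion, one classical Implicit Computational Complexity system for polynomial time. The real system works over binary notation; here only the spirit of the syntactic restriction is shown, and the discipline is documented in comments rather than enforced by types.

```haskell
import Numeric.Natural (Natural)

-- Convention (not enforced here): the first argument is "normal" and may
-- drive recursion; the remaining arguments are "safe" and may only be
-- carried along, never recursed on.

-- addSafe x a : recursion on the normal argument x; the safe argument a is untouched.
addSafe :: Natural -> Natural -> Natural
addSafe 0 a = a
addSafe x a = succ (addSafe (x - 1) a)

-- mulSafe x y : recursion on x; the recursive call only ever lands
-- in a *safe* position of addSafe.
mulSafe :: Natural -> Natural -> Natural
mulSafe 0 _ = 0
mulSafe x y = addSafe y (mulSafe (x - 1) y)

-- Exponentiation cannot be written under this discipline: it would need to
-- feed a recursive result into a *normal* (recursion-driving) position,
-- which is exactly the kind of syntactic move the restriction forbids.
```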
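
Regarding question 6: the Curry-Howard correspondence in miniature. A polymorphic type is read as a propositional formula and a program inhabiting it as a proof; the name transitivity is chosen only for illustration.

```haskell
-- Read "->" as implication and type variables as propositional atoms.
-- The formula (A -> B) -> (B -> C) -> (A -> C) is an intuitionistic
-- tautology, and the program below is one of its proofs.
transitivity :: (a -> b) -> (b -> c) -> (a -> c)
transitivity f g = g . f

-- Plugging concrete lemmas into this proof and then evaluating the result
-- corresponds, on the logical side, to eliminating cuts.
```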
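
Regarding question 7: a toy Haskell comparison of two formats for the same regular language (words over {a, b} with an even number of 'a's): a deterministic finite automaton, and a direct declarative description standing in for, say, a regular expression. The function names are illustrative only.

```haskell
-- Format 1: a deterministic finite automaton with two states,
-- encoded as a Bool ("have we read an even number of 'a's so far?").
dfaAccepts :: String -> Bool
dfaAccepts = go True
  where
    go state []       = state                 -- True is the accepting state
    go state (c : cs) = go (step state c) cs
    step state 'a'    = not state             -- reading an 'a' flips the parity
    step state _      = state                 -- any other letter keeps it

-- Format 2: a direct description of the same language.
declarative :: String -> Bool
declarative w = even (length (filter (== 'a') w))

-- Extensionally the two agree on every word, yet as syntactic objects they
-- expose very different information (states and transitions vs. a counting
-- property), which is part of what the question is about:
--   all (\w -> dfaAccepts w == declarative w) ["", "ab", "aab", "abab"]  == True
```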
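
Regarding question 9: a small Haskell sketch contrasting two of the definitions of number mentioned above, Peano-style (a number is built from zero by successor) and Church-style (a number is an iterator). The conversion functions are illustrative names.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Peano-style: a number is either zero or the successor of a number.
data Peano = Zero | Succ Peano
  deriving Show

-- Church-style: a number is what it does, namely iterate a function n times.
type Church = forall a. (a -> a) -> a -> a

peanoToChurch :: Peano -> Church
peanoToChurch Zero     = \_ z -> z
peanoToChurch (Succ n) = \s z -> s (peanoToChurch n s z)

churchToPeano :: Church -> Peano
churchToPeano n = n Succ Zero

-- churchToPeano (peanoToChurch (Succ (Succ Zero)))  ==  Succ (Succ Zero)
```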