
Tau and the Crisis of Truth

posted Sep 10, 2016, 11:25 AM by Ohad Asor   [ updated Sep 10, 2016, 11:28 AM ]

We live in a world in which no one knows the law. Except in trivial cases, you cannot know what is legal and what is not; all you can do is act and then see what the judge's or policeman's opinion turns out to be, which of course differs from one person to another. Or you can consult a lawyer, who will tell you that there are no absolute answers, and that at the end of the day it's a matter of probability which unfortunately no one knows how to calculate. You can do your best to live a productive life and follow the rules as you understand them, or as lawyers guide you, but no one can guarantee that you will not be considered a criminal, or that legal actions will not be taken against you. Similarly, one can live a life of harming many people, and no legal system will stop it, even if the system is aware of the actions taken. Such a pessimistic situation is not new and is not local, and to my taste it was best described by Franz Kafka.

Playing with words and taking them in any desired direction, consciously or not, with good or bad intentions, has been with us since humanity learned to speak. The worst lies contain only truths, the worst crimes are licensed, and arguments can be given to justify almost anything. This “crisis of truth” is the foundation of the post-modern stream in philosophy, notably the Deconstructivist approach, which demonstrates how texts may be interpreted in many (contradicting) ways. "There is no one truth" is the basis of post-modernism. But can we at least have some island of truth in which social contracts can be useful and make sense?

Not only can laws not be understood in an absolute way, they cannot even trivially be made by an absolute process. Laws have to change with time, and therefore we also need laws for changing the laws. But then we'll need laws-for-changing-the-laws-for-changing-the-laws, ad infinitum. We are therefore left without any logical basis for the process of rulemaking, on top of the crisis of deciding what is legal and what is illegal. This paradox is described in the book The Paradox of Self-Amendment by Peter Suber, available online. Suber's solution is summarized in the appendix of that book, which describes the game Nomic. He offers a two-tier system in which we have mutable and immutable rules, together with rules for transmuting a rule (namely, making an immutable rule mutable or vice versa). This way one can avoid the infinite hierarchy of amendment schemes, but we still remain with the “crisis of truth” in which there is no one truth, especially when you desperately need it, and despite the very purpose of laws being to provide a certain social order rather than disinformation and chaos.
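
To make Suber's two-tier scheme concrete, here is a minimal sketch in Python. The class and method names are my own illustrative choices, not anything prescribed by Nomic; the point is only that amendment is restricted to the mutable tier, while transmutation moves rules between tiers:

```python
# A minimal sketch of Suber's two-tier rule system from Nomic:
# rules are either immutable or mutable, and "transmutation" moves
# a rule between the two tiers. All names here are illustrative.

class RuleBook:
    def __init__(self):
        self.immutable = {}   # rule id -> rule text
        self.mutable = {}

    def enact(self, rule_id, text):
        """New rules enter the system as mutable rules."""
        self.mutable[rule_id] = text

    def amend(self, rule_id, new_text):
        """Only mutable rules may be amended."""
        if rule_id in self.immutable:
            raise ValueError("immutable rules cannot be amended")
        self.mutable[rule_id] = new_text

    def transmute(self, rule_id):
        """Move a rule between the mutable and immutable tiers."""
        if rule_id in self.mutable:
            self.immutable[rule_id] = self.mutable.pop(rule_id)
        elif rule_id in self.immutable:
            self.mutable[rule_id] = self.immutable.pop(rule_id)
        else:
            raise KeyError(rule_id)
```

Because transmutation itself is an ordinary rule-governed move, no infinite tower of amendment-of-amendment schemes is needed.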

When we step back from the world of legalities and try to answer similar questions in the world of numbers and mathematical objects, the task does not become easy, but arguably easier. Logicians have contemplated those ideas for centuries and came up with the notion of decidability. A language is decidable if, for every statement expressible in that language, a finite procedure can determine whether it follows from rules expressed in that language as well. An astonishing example is Gödel's incompleteness theorem: if our language is able to express Peano arithmetic, namely, it contains the natural numbers 1, 2, 3, ... equipped with addition and multiplication and their rules (which indeed determine them uniquely), then there will always be some true statements about Peano arithmetic that cannot be proved from the definitions. Infinitely many axioms would therefore be required in order to decide the correctness of every statement. An example of decidability is Presburger arithmetic: if we give up multiplication and define numbers with addition only (in a certain way), we are back to decidability, and every expressible statement can be decided, in a finite process, to be either true or false. The same applies to 2,000-year-old Euclidean geometry, which is indeed a decidable theory.
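
A small taste of why addition-only arithmetic is decidable: one standard decision procedure for Presburger arithmetic translates formulas into finite automata over binary encodings of numbers. Below is a toy instance of that idea (the function names are mine): an automaton whose entire state is one carry bit, reading x, y, z least-significant bit first and deciding the atomic relation x + y = z in finitely many steps.

```python
# A toy instance of the automata-based decision procedure for Presburger
# arithmetic: a finite automaton checking the relation x + y = z over
# binary encodings, least-significant bit first. The automaton state is
# just the carry bit, so the check is a finite (hence decidable) process.

def bits_lsb(n, width):
    """Binary digits of n, least-significant bit first, padded to width."""
    return [(n >> i) & 1 for i in range(width)]

def accepts_addition(x, y, z):
    width = max(x, y, z).bit_length() + 1
    carry = 0  # the automaton's only state
    for xb, yb, zb in zip(bits_lsb(x, width),
                          bits_lsb(y, width),
                          bits_lsb(z, width)):
        s = xb + yb + carry
        if s % 2 != zb:
            return False       # reject: output bit disagrees
        carry = s // 2         # transition to the next state
    return carry == 0          # accept iff no carry remains
```

The full decision procedure builds such automata for atomic formulas and closes them under the boolean connectives and quantifiers, which is exactly what fails once unrestricted multiplication is allowed.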

We would therefore be interested in creating a social process in which we express laws in a decidable language only, and collaboratively form amendable social contracts without diving into paradoxes. This is what Tau-Chain is about.

This long introduction comes to stress the importance and central role of decidability, in life generally and in Tau specifically. If we want Tau to make a real contribution to the world, it must use a decidable language. The initial Tau design aimed to answer those requirements, but I have recently found out that I was wrong, and the initially chosen language is simply undecidable. Martin-Löf Type Theory (MLTT) has only certain decidable aspects (specifically, equality), but it is not the case that everything expressible in it is decidable. This led me to the conclusion that MLTT was a wrong choice, and that another logic should be considered. Indeed, some statements of mine in blog posts and videos were mistaken, and MLTT does not have decidable typecheck in general.

MLTT is about so-called “dependent types”, of which I'll try to give a small taste in this paragraph. We have “terms” and “types”. One can see terms as statements, and types as specifications that those statements are required to fulfill. How general can the specifications be? Dependent types give them maximum expressiveness in some sense: types can depend on terms, and terms can depend on types (and that's where the name “dependent types” comes from). We are able to express any specification that our term language is able to express, or more accurately, types can be stated by a program that calculates them, which may depend on other parts of the code. Programs may include specifications, and specifications may include programs. Typecheck is the process of making sure that terms indeed meet their types. But if types can be generic programs, we have to require them to execute and return a result in finite time. For that we'll have to make sure that our programs halt, so we have reached the Halting Problem. Only if you can assert that your types are calculated in a finite process does the very process of typecheck become decidable. But this task is undecidable by itself, as the Halting Problem is undecidable. So if you want to reach decidability over dependent types, you'll first have to pass an undecidable barrier of halting. This is possible in many cases, as many programs can be written in a form that provably terminates (e.g. primitive recursion, which is the basis of total functional programming), but it is inherently impossible to come up with such a method for all programs, otherwise the Halting Problem would become decidable.
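
To illustrate the provably-terminating fragment mentioned above, here is a sketch of primitive recursion in Python. The `primrec` combinator is my own illustrative construction, not a standard API: it only permits "recursion" that descends from n to n - 1, so every definition built from it terminates by construction, with no halting check needed.

```python
# A sketch of why some programs provably terminate: primitive recursion
# allows the recursive call only on n - 1, so every call chain must reach
# the base case. `primrec` is an illustrative combinator, not a standard API.

def primrec(base, step):
    """Return f with f(0) = base and f(n) = step(n - 1, f(n - 1))."""
    def f(n):
        acc = base
        for i in range(n):       # the descent is explicit and finite
            acc = step(i, acc)
        return acc
    return f

# Addition and multiplication defined by primitive recursion:
def add(m):
    return primrec(m, lambda _, acc: acc + 1)

def mul(m):
    return primrec(0, lambda _, acc: add(m)(acc))
```

Total functional languages generalize this discipline (structural recursion, well-founded descent), but by the halting argument above, no such discipline can accept all terminating programs.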

Dependently typed languages are widely used in the world of computer-aided proofs and formal verification, but they have a reputation of being hard to work with. The hardness stems not only from their theory being significantly more complicated than that of other languages; the harder part is convincing the typechecker (or the termination and totality checkers that try to ensure that the typecheck process is indeed finite) that your code meets its specifications. The typecheck process is therefore not automatic, and requires a significant amount of human intervention just to convince it to accept true statements, a difficulty that stems exactly from the undecidability of the language. Consequently, we lose the ability to always and fully automatically determine whether a sentence is true, or equivalently, whether some action (term) is legal (well-typed) or illegal according to given rules.

The remedy for this situation is to pick a different logic, a “complete” one. A complete logic means that everything is decidable, and not only certain aspects of it (like equality in MLTT). Gödel showed that every logic that is complete and able to express Peano arithmetic will inevitably be inconsistent, so in order to enjoy both consistency and completeness we'll have to give up a part of arithmetic. Not only are there already known complete and consistent logics we can use, but very recently such families of languages were discovered to be much more expressive than previously known. Specifically, certain classes of Monadic Second-Order Logic (MSO or MSOL) have been known to be complete and consistent since the 1960s, but they did not have satisfactory expressiveness. It was not until 2006 that Luke Ong came up with a proof of decidability of MSOL over so-called “Higher-Order Recursion Schemes”, a simply-typed (rather than dependently typed) higher-order functional language with recursion, which dramatically enhanced the expressiveness of known completely decidable languages. Since then, advancements in both theory and practice have been made, notably by Naoki Kobayashi.

It was a long journey until I found out that my basic premises were wrong; indeed, I was not a logician when the journey began. Fortunately, we can now modify Tau's design into something even better than before, better even than the original design would have been had MLTT turned out decidable. To mention three advantages: the existence of a direct computational model, the ability to infer programs, and simplicity of use.

MSO theories have a direct computational model in terms of automata. MSO formulas can be translated into machines that read a statement and decide whether it satisfies the formula. Moreover, those automata can be minimized in a unique way, so we even get provably optimal code. MLTT, on the other hand, does not enjoy this logic–automata connection; it admits the more general concept of programs-as-proofs, where the proof process is specified more loosely in computational terms, and a direct translation to automata does not exist. It cannot possibly exist, since automata are an explicit finite decision procedure, while dependent types are not completely decidable.
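
The unique-minimization claim can be sketched concretely. For deterministic finite automata, Moore-style partition refinement merges indistinguishable states and yields the canonical minimal machine. The example DFA below is my own illustrative assumption: it accepts binary strings with an even number of 1s, deliberately built with redundant duplicate states, and minimization collapses it to the canonical two-state machine.

```python
# A sketch of DFA minimization by partition refinement (Moore's algorithm):
# states are merged when no input symbol can distinguish them. For clarity
# this version returns only the number of states of the minimal DFA.

def minimize(states, alphabet, delta, start, accepting):
    """Return the state count of the minimal DFA equivalent to the input."""
    # Initial partition: accepting vs. non-accepting states.
    partition = {s: (s in accepting) for s in states}
    while True:
        # Two states stay in the same block only if, for every symbol,
        # their successors lie in the same block.
        signature = {
            s: (partition[s],) + tuple(partition[delta[s][a]] for a in alphabet)
            for s in states
        }
        blocks = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new_partition = {s: blocks[signature[s]] for s in states}
        if len(set(new_partition.values())) == len(set(partition.values())):
            return len(set(new_partition.values()))   # refinement stabilized
        partition = new_partition
```

Since the result of this refinement is independent of the order in which states are examined, the minimal automaton is unique up to renaming of states, which is the sense in which the compiled code is provably optimal.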

The second notable advantage is automatic programming, more commonly called “program synthesis” in the literature. If we give a specification of a program (or rule base), can the machine calculate the program (or rules) by itself? This task is undecidable in MLTT, but in a complete logic it is possible indeed. We can have correct-by-construction programs and contracts, by merely stating what we expect them to do and commanding the machine to find a structure that satisfies our requirements.
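
Here is a toy illustration of synthesis in a decidable setting: when the candidate space is finite and each candidate can be checked mechanically, the machine can simply search for the smallest program meeting the specification. The grammar (and, or, not over two variables) and the target specification are illustrative assumptions of mine, far simpler than MSO synthesis, but they show the correct-by-construction principle: the output is guaranteed to meet the spec because only spec-satisfying candidates are returned.

```python
# A toy program synthesizer: enumerate boolean expressions over inputs
# a, b in order of size, and return the first one agreeing with the
# specification on every input.

from itertools import product

def synthesize(spec, max_size=5):
    """Find the smallest expression tree agreeing with spec on all inputs."""
    inputs = list(product([False, True], repeat=2))

    def exprs(size):
        if size == 0:
            yield ('a', lambda a, b: a)
            yield ('b', lambda a, b: b)
            return
        for ls in range(size):                       # binary connectives
            for (ln, lf) in exprs(ls):
                for (rn, rf) in exprs(size - 1 - ls):
                    yield (f"({ln} and {rn})",
                           lambda a, b, lf=lf, rf=rf: lf(a, b) and rf(a, b))
                    yield (f"({ln} or {rn})",
                           lambda a, b, lf=lf, rf=rf: lf(a, b) or rf(a, b))
        for (n, f) in exprs(size - 1):               # negation
            yield (f"(not {n})", lambda a, b, f=f: not f(a, b))

    for size in range(max_size + 1):
        for name, fn in exprs(size):
            if all(fn(a, b) == spec(a, b) for a, b in inputs):
                return name
    return None
```

For example, `synthesize(lambda a, b: a != b)` searches for an expression computing exclusive-or and returns one built from and/or/not. Real synthesis engines replace this brute-force search with decision procedures, but the contract is the same: state what the program should do, and let the machine find a structure that satisfies it.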

MSO logic is simpler to use for the reasons mentioned above: it is a simpler language, and it does not require human intervention to convince it that a true statement is indeed true. It also represents a more intuitive logic. MLTT is an intuitionistic logic, which means that proving a statement false does not necessarily imply that its negation is true; in particular, one cannot always use proof by contradiction. Intuitionistic logic does allow proving nonexistence by contradiction: one can assume that something exists, derive a contradiction, and thereby conclude that it does not exist. But it does not support proving existence by contradiction: assuming that something does not exist and deriving a contradiction does not prove that it exists, under intuitionistic logic. More formally, intuitionistic logic does not assume the law of excluded middle. This is of course highly unintuitive, and it opens the door to human mistakes, especially over complicated structures. MSO, on the other hand, is a classical logic, meaning it includes the law of excluded middle: every statement is either true or false, with no third possibility. The middle is completely excluded, while adding this rule to MLTT as an axiom would destroy its computational character.
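
The asymmetry between the two directions of proof by contradiction can be stated precisely. As a sketch in Lean (whose core logic is intuitionistic, with classical reasoning available via the `Classical` namespace): refuting a proposition by deriving a contradiction from it is intuitionistically valid, while the converse, double-negation elimination, requires the classical axiom.

```lean
-- Intuitionistically valid: a contradiction from p proves ¬p
-- (indeed, ¬p is by definition p → False).
example (p : Prop) (h : p → False) : ¬p := h

-- Classical only: double-negation elimination, i.e. proving p by
-- assuming ¬p and deriving a contradiction.
example (p : Prop) (h : ¬¬p) : p := Classical.byContradiction h

-- The law of excluded middle itself, available classically for
-- every proposition.
example (p : Prop) : p ∨ ¬p := Classical.em p
```

In a classical logic such as MSO, all three facts hold outright, which is what makes reasoning in it match everyday intuition.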

In conclusion, our journey has turned out differently than we expected on the implementation side, but it will end up with a better product, one that gives us a chance to solve the crisis of truth and the paradox of self-amendment in a global, social, and electronic context. I personally intend (and promised) to keep working full time until I deliver Tau and Agoras, which will introduce many more features than described here. Recently I gave a filmed, condensed description of the products I'm committed to deliver. Another article and another video are on their way to explain more details. If you are willing to help achieve these goals and you find yourself (or a friend) understanding Kobayashi's papers, I'll be very glad to cooperate.