
The Art of Self-Reference

posted May 27, 2018, 5:09 AM by Ohad Asor   [ updated May 27, 2018, 5:13 AM ]
This post, like the previous two, comes to shed light on the same subjects. The post "The New Tau" went bottom-up from TML to Agoras and focused on scaling discussions. The post "From Agoras to TML" went the other way around and focused on the knowledge economy. Here we approach the subject by emphasizing self-definition.

Our main example here is legislation. Consider a parliament which creates and changes laws, and suppose it is interested in changing an existing law. Well, in a normal world, it can't just go and change the law, as there should be laws governing how laws are changed. Distinguishing between laws and laws of changing laws, we refer to the former as "first-order laws" and to the latter as "second-order laws". A third-order law would be a law that regards second-order laws (and possibly first-order ones as well).

As it looks, we will need infinitely many orders of laws, and therefore infinitely many laws, just in order not to leave the law completely unprotected: if there are no laws of changing the laws (second-order laws), then there is absolutely nothing preventing anyone from changing first-order laws in any case. And that's not enough: if some second-order law prevents some law change, we can also change that second-order law, as long as no third-order law prevents it.

So according to this approach, just in order to have any kind of protection against arbitrary law change, we have to have infinitely many laws. In technical terms, this means that no higher-order logic would be enough: any formula's order is still finite, and the maximum order of a formula corresponds to the maximum order of a law, which is then completely unprotected against modification, as above. This shows how any attempt to solve the problem using higher-order logic or type theory is doomed to fail, unless it incorporates the following remedy.

And the remedy is to consider recursive rules. Infinite-order rules, as shown, are a necessity of everyday life. Consider the law: "all laws can be changed only upon majority vote, including this law". What would be the order of this law? If you say its order is N, then I can say it is really of order N+1, because it refers to itself, and a law that refers to an order-N law is by definition of order at least N+1. So the order of this law is really infinite or undefined (one can go either way). It refers to itself, and creates a "logical infinite loop".

The reader might be familiar with self-referencing statements in the form of paradoxes, like the liar paradox, or simply "this statement is false". Self-referencing definitions may be not only contradictory but also meaningless: "the green-colored things are those that have green color" defines "green-colored things", but in a circular way that gives us no information at all. Nevertheless, our example law "all laws can be changed only upon majority vote, including this law" makes perfect sense. It refers to itself, but there is no problem with that; we can still understand and follow this law. It would be ridiculous to reject this law just because we cannot attach to it any finite order in the sense of higher-order logic.
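To see that such a self-including rule can be followed mechanically without paradox, here is a toy Python sketch (all names and the voting threshold are my own illustrative choices, not TML syntax): the amendment rule applies uniformly to every law, itself included.

```python
# Toy model: every law, including the amendment rule itself, may only be
# changed by majority vote. The rule governs all keys of `laws`, so it also
# governs attempts to change the rule - self-reference without contradiction.
laws = {
    "amendment_rule": "all laws can be changed only upon majority vote, including this law",
    "speed_limit": "50 km/h in cities",
}

def amend(law_id, new_text, votes_for, votes_total):
    # Majority check applies to any law, including "amendment_rule" itself.
    if votes_for * 2 > votes_total:
        laws[law_id] = new_text
        return True
    return False

amend("speed_limit", "30 km/h in cities", votes_for=60, votes_total=100)       # passes
amend("amendment_rule", "anyone may change any law", votes_for=40, votes_total=100)  # rejected
```

The point of the sketch is only that the rule's self-reference causes no infinite regress: one finite rule covers all orders at once.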

Definitions by self-reference appear very frequently. We can define a number X by "X is the number such that X=4X-6". This defines X by means of itself, yet it is a perfectly valid way to say X=2. In programming, the analogue is recursive functions: functions that call themselves, directly or indirectly. However, naively programming this definition would immediately fail:

function what_is_x(): return 4*what_is_x() - 6

This function will run forever, and will never reflect our common sense that X=4X-6 is a simple, valid, finite definition.
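The contrast can be made concrete in a short Python sketch (function names are mine): the naive recursion blows the stack, while treating X=4X-6 as an equation yields the fixed point directly.

```python
import sys

def what_is_x_naive():
    # Naive translation of "X = 4X - 6": calls itself with no base case.
    return 4 * what_is_x_naive() - 6

sys.setrecursionlimit(1000)
try:
    what_is_x_naive()
except RecursionError:
    print("naive recursion never terminates")

def solve_linear_fixed_point(a, b):
    # Solve x = a*x + b algebraically: x - a*x = b, so x = b / (1 - a),
    # assuming a != 1. For a=4, b=-6 this is -6 / -3 = 2.
    return b / (1 - a)

print(solve_linear_fixed_point(4, -6))  # → 2.0
```

The machine only "understands" the self-referential definition once it is handled as a fixed-point equation rather than as a procedure to execute blindly.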

So we really need to be able to define things in terms of themselves, and we can see how common programming and logic paradigms fail to do so.

The family of logics that incorporate self-reference as a language primitive is called "fixed-point logics". Self-reference, recursion, and fixed points are more or less synonyms in computer science. There is an urban legend saying that the use of recursion automatically implies undecidability, and thereby uselessness in many scopes, including the scope of law. This is probably because another way to define Turing machines is via "recursively enumerable sets", tying generic computation to recursion. However, this is not always the case. There exist decidable fixed-point logics, for example FO[LFP] and FO[PFP] (both are decidable because they operate over finite structures). We won't get into the details now; I'll just mention that TML is FO[PFP].

(Decidability, very roughly, means for the reader unfamiliar with the concept that it is possible to answer all relevant questions. An overly expressive language becomes undecidable because many [if not most] of its questions would need infinite time to answer.)
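A classic illustration of why fixed-point logics stay decidable over finite structures is transitive closure, the textbook example of a least fixed point. Below is a minimal Python sketch (my own identifiers, not TML): a monotone operator is iterated on a finite relation until nothing changes, which over a finite domain is guaranteed to happen.

```python
def least_fixed_point(step, start=frozenset()):
    # Iterate a monotone operator until it stops changing. Over a finite
    # structure the chain start <= step(start) <= ... cannot grow forever,
    # so the loop always terminates - the intuition behind FO[LFP]'s
    # decidability on finite structures.
    current = start
    while True:
        nxt = step(current)
        if nxt == current:
            return current
        current = nxt

edges = {(1, 2), (2, 3), (3, 4)}

def tc_step(rel):
    # One application of the recursive transitive-closure rules:
    #   T(x,y) <- E(x,y)
    #   T(x,z) <- T(x,y), E(y,z)
    new = set(edges)
    for (x, y) in rel:
        for (y2, z) in edges:
            if y == y2:
                new.add((x, z))
    return frozenset(new)

reachable = least_fixed_point(tc_step)
print(sorted(reachable))  # includes (1, 4) via 1 -> 2 -> 3 -> 4
```

The recursion here is exactly of the self-referential kind discussed above (T is defined in terms of T), yet every query about the result is answerable in finite time.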

We have pointed to a logical framework that makes sense for legislation and showed how other logical frameworks break down. Similarly, we showed that common programming paradigms also break down in these respects, and that our alternative paradigm is viable for programming languages as well. Requiring the law to be written in a decidable fixed-point logic is a highly nontrivial and highly specific requirement, but I think we have demonstrated that there isn't really any other way to go.

But of course, even though it eliminates so many other possibilities, it is still not enough. It basically settles the kind of language of law: allow recursion and keep it decidable. Still, a lot more has to be said about such a language and its use for it to be adequate for legislation. This has to do with the field called KRR (Knowledge Representation and Reasoning). I don't have too much to say about it: we basically adopt the ontological model together with the relational machine model. For more information about knowledge representation as ontologies and relations, one can refer to resources about the Semantic Web.

So we move next from language and knowledge to how knowledge is acquired in a social setting. Well, it is never the case that people simply write down all their opinions. Opinions are communicated by (and also arise from) events and discussions. More deeply, opinions arise from interest, or even deeper, from questions.

And now we come to speak about the most important thing in life: the questions. We have the notion of a correct answer. We can even program computers to tell whether a given answer to a given question is correct. We are interested in correct answers to our questions. But what kind of questions are we interested in? Well, we are interested in interesting questions, yet another circular definition that happens to be meaningless. Which questions are interesting?

The answer X=2 to the question "for which X do we have X=4X-6?" is correct for any intelligent being, for anyone who can understand the question. The information in this specific question is enough to objectively deem the answer X=2 correct, whether the asker is human, machine, or alien. Still, it contains absolutely no information about why it would be interesting, if at all.

Whether questions are interesting is not only subjective, but inherently stems from the arbitrary preferences of the asker. A dog might ask to dig and a cat might ask for a mat. There is no such thing as "an interesting question", only "a question interesting for certain beings at a certain time". Our questions come from our human nature and our personal nature. We are defined by the questions we find interesting, much more than we are defined by the answers we give. Similarly, cooperation between people interested in the same questions makes much more sense than cooperation between people agreeing on the same answers. And many would be much more interested in finding people asking the same questions as them than in finding people giving the same answers. Questions are therefore a major aspect of tau, including the points raised here and beyond (for example the role of questions in the setting of the open-world assumption vs. the closed-world assumption).

For clarity, we can distinguish between "questions" and "queries". By queries we refer to questions for which we expect an immediate answer, e.g. the case where we feed a machine with information and then query that information. The machine will not and should not return new information; it will only use the information that we gave it. In contrast, by "questions" we refer to questions for which we don't expect an answer to be available yet. A question is a tool to define which knowledge is desired. Questions usually come before the knowledge, not the other way around. They are a tool to scope a discussion or an exploration into certain areas of knowledge, a tool which the machine will never be able to simulate; but given the human input of which questions are interesting, it can greatly help us find correct answers through discussions about those questions.
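The "query" side of the distinction can be sketched in a few lines of Python (my own toy representation, not TML's): the machine answers strictly from the facts it was given, deriving nothing new, in the spirit of the closed-world assumption mentioned above.

```python
# Toy "query" in the closed-world sense: the machine answers only from
# facts explicitly supplied to it, and produces no new information.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def query(relation, *args):
    # True only if this exact fact was stated; anything unstated is denied.
    return (relation, *args) in facts

print(query("parent", "alice", "bob"))    # True: a stated fact
print(query("parent", "alice", "carol"))  # False: never stated, so denied
```

A "question" in the post's sense, by contrast, is precisely what is asked before any such fact base exists, and so cannot be answered by lookup at all.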

We can now conclude with the role of the people vs. the role of the machine in tau: humans are for questions, and machines are for answers. More broadly, I see this as a philosophical truth that should guide any AI aspirations.

Of our six steps of language, knowledge, discussion, collaboration, choice, and knowledge economy, we have now covered how, in order to reach the goal of social choice that may refer to the process of social choice itself, we need very specific kinds of formalisms that go all the way down to the language level (without covering the internet of languages from previous posts), and from there we touched on some additional aspects of knowledge and discussion.

Next time: Who gets to decide?