
The New Tau

posted Dec 30, 2017, 3:27 PM by Ohad Asor   [ updated Dec 30, 2017, 3:28 PM ]

We are interested in a process in which a small or very large group of people repeatedly reach and follow agreements. We refer to such processes as Social Choice. We identify five aspects arising from them: language, knowledge, discussion, collaboration, and choice about choice. We propose a social choice mechanism based on a careful consideration of these aspects.


Some of the main problems with collaborative decision making have to do with scale and with limits on how information flows and is processed. These limits are so widely believed to be inherent in reality that the possibility of overcoming them is rarely even considered. For example, we naturally consider the case in which everyone has a right to vote, but what about the case in which everyone has an equal right to propose what to vote over?


In small groups and in everyday life we usually don't vote; we express our opinions, sometimes discuss them, and the agreement, disagreement, or map of opinions arises from the situation. But in large communities, like a country, we can only imagine everyone having a right to vote on some limited number of proposals. In the good case we reach those few proposals through hierarchical (rather than decentralized) processes, in which everyone has some right to propose, but the opinions flow through certain pipes and reach the voting stage almost emptied of the vast information gathered along the way. Yet we don't even dare to imagine an equal right to propose, just like the equal right to vote, for everyone, in a way that can actually work. Indeed, how could that work? How can a voter go over a million equally-weighted proposals every day?


All known methods of discussion suffer from very poor scaling. Doubling the number of participants rarely doubles the information gain, and when the group is too big (even a few dozen people), doubling the participants may cut the overall gain in half or worse, rather than merely failing to double it.


It turns out that under certain assumptions we can reach truly efficient scaling of discussions and information flow, where 10,000 people are actually 100 times more effective than 100 people, in terms of collaborative decision making and collaborative theory formation. But for this we will need the aid of machines, and we will also need to help them help us.


Specifically, the price is using only certain languages, which may still evolve over time, thereby letting computers understand what we talk about, to understand the things said during the discussions. Since no one knows how to make computers understand natural languages, we have to take a step toward the machines and use machine-comprehensible languages. We will say more about this point, but first let's speak a little about self-amendment.


We are describing a decentralized computer network, Tau Chain, and as such, which social decisions can it support? The most that computers can do is run programs. Over Tau Chain we can gather knowledge and agree or disagree over it, and we can also actually do something and perform actions arising from the discussion over the platform. Those actions are nothing but computer programs. And the most important program in our scope is the platform itself.


The main choices to be made collaboratively over the system are about the system itself. Tau is a discussion about Tau. Or, in a slightly more elaborate yet succinct definition:


Consider a process, denoted by X, of people, forming and following another process denoted by Y. Tau is the case where X=Y.


That's the Tau. What the Tau is doesn't matter; what matters is that it can change into anything we want it to be. Further, Tau is a computer program, so we are referring to a program that changes itself according to the collaborative opinions and decisions of its users.


It should be remarked that we do not let Tau guess people's opinions, or even make well-educated guesses as in machine learning, and that is perhaps the main reason we use logic. Things said over the platform are as formal and definite as computer programs; they just deal with generic knowledge rather than machine instructions.


Having that, a collaboratively self-amending program, it can transform into virtually any program we'd like it to be, or into many programs at once. Indeed, Tau does not speak only about itself but is open to the creation of any other individual or collaborative activity, so that we make it possible for small and very large groups to discuss, share, and organize knowledge, detect consensus and disagreements, and coordinate actions in the form of programs.


The five aspects of social choice mentioned at the beginning correspond to the roadmap of Tau. Here's a brief summary, to be elaborated in the rest of this post. Implementation of TML and the internet of languages is the first step. Then comes the Alpha, which is a discussion platform. Then the Beta, which is about collaboratively following processes (not just defining them); specifically, it is about not just knowledge but also programs. The Alpha and the Beta are not fully decentralized in their infrastructure as in Bitcoin. Afterwards, and with the help of the Alpha and the Beta, comes Tau, which is a decentralized, self-amending social choice platform. On top of it we'll have a knowledge market, which is one of Agoras' three components (the other two are a computational resources market like Zennet, and a newly designed economy offering features like risk-free interest without printing new money, by implementing a derivatives market).


In order for machines to boost our discussion and collaboration abilities, they have to have access to the meaning of what we say. Machines use certain kinds of languages while humans use different kinds. For machines to use human languages is something no one knows how to do, and for humans to directly use machine languages is inconvenient to the extent that it simply doesn't fit common knowledge-sharing human communication: machine languages are made of machine instructions, while knowledge representation is of a different nature. In other words, machines expect operational information, while humans make heavy use of declarative language. Indeed, one of Tau's goals is to let us focus on the "know-what" and let machines figure out the "know-how".
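As a toy illustration of that distinction (a sketch I'm adding here, not anything from the Tau codebase): the declarative "know-what" below is a single rule about an ancestor relation, while the loop that computes it is one possible "know-how" a machine could work out on its own.

```python
# Know-what: ancestor(X, Z) holds if parent(X, Z), or if parent(X, Y)
# and ancestor(Y, Z). That one declarative statement is all a human states.
#
# Know-how: a naive operational realization, forward chaining until no
# new facts appear.

parent = {("ann", "bob"), ("bob", "cam")}

def ancestors(parent_rel):
    anc = set(parent_rel)                      # base case: parents are ancestors
    changed = True
    while changed:
        changed = False
        for (x, y) in list(anc):
            for (y2, z) in parent_rel:
                if y == y2 and (x, z) not in anc:
                    anc.add((x, z))            # derived: ancestor via one parent step
                    changed = True
    return anc

print(ancestors(parent))  # {('ann', 'bob'), ('bob', 'cam'), ('ann', 'cam')}
```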


We therefore suggest a meeting point between human and machine languages that has been widely suggested before (cf. e.g. the article "Knowledge Representation and Classical Logic" by Lifschitz et al.): logic. Formal logic is largely natural to humans and is something machines can work with. Still, "formal logic" isn't anything particular; it doesn't point to any one language but is a vague description of a family of languages.


We postulate that there should not and cannot be a single universal language. There is no reason for one language to be optimal (or even adequate) for all needs. We therefore come up with a meta-language that is able to define new languages; but that alone would put us back at square one, with one universal [meta-]language. We therefore require the meta-language to be able to redefine itself and change, just as it can define other languages. By that we get not only many languages but also a self-amending language, which is an important part of a self-amending system.


It turns out that logics that can define themselves and still have nice logical properties like decidability are not very common. We have Universal Turing Machines, but a less expressive and more informative (e.g. decidable) language is not easy to find. We adopt the logic PFP, whose expressive power corresponds to PSPACE as known from finite model theory books, and which was shown to be able to define itself in Imhof, 1999, "Logics that define their own semantics".
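For readers who haven't met it, here is the textbook definition of the partial fixed point operator (standard finite model theory notation, not quoted from the paper above). A formula φ(X, x̄), with X a relation variable of arity k, is iterated stage by stage, and PFP is the stage at which the iteration stabilizes, or the empty relation if it never does:

```latex
% Stages of \varphi(X,\bar{x}), where X is a relation variable of arity k:
X_0 = \emptyset, \qquad X_{i+1} = \{\, \bar{a} : \varphi(X_i, \bar{a}) \,\}

% The partial fixed point is the stage at which the sequence stabilizes,
% or the empty relation if it never stabilizes (it must then cycle, since
% there are at most 2^{n^k} distinct stages over an n-element structure):
[\mathrm{PFP}_{X,\bar{x}}\,\varphi](\bar{a}) \;\iff\; \exists i \,\big( X_i = X_{i+1} \,\wedge\, \bar{a} \in X_i \big)
```

That bound on the number of stages is where the PSPACE evaluation bound comes from; capturing exactly PSPACE additionally requires a linear order on the structures, as in the Abiteboul-Vianu result.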


From here we continue to the Internet of Languages. Using the meta-language, which we call TML (Tau Meta-Language; one can get an impression from the ongoing work on github), users define new languages by specifying logical formulas that describe what it means for two documents in different languages to have the same meaning. In other words, to define a new language, one needs to define a semantics-preserving translation into an existing language. Semantics in our scope is ontological (objects and relations), not operational semantics as in programming languages. By that we get an internet of knowledge representation languages that makes the choice of language not matter: a document in one language can be routed (using TML programs) into different languages.
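TML's actual syntax and machinery are still taking shape on github, so the following is only a toy sketch of the routing idea in Python (the names define_translation and route are mine, purely for illustration): languages are nodes, user-defined semantics-preserving translations are edges, and a document is routed along any path of translations.

```python
from collections import deque

translators = {}  # (source_language, target_language) -> translation function

def define_translation(src, dst, fn):
    """Register a user-defined, semantics-preserving translation."""
    translators[(src, dst)] = fn

def route(doc, src, dst):
    """Translate doc from src to dst by chaining registered translations (BFS)."""
    queue, seen = deque([(src, doc)]), {src}
    while queue:
        lang, d = queue.popleft()
        if lang == dst:
            return d
        for (a, b), fn in translators.items():
            if a == lang and b not in seen:
                seen.add(b)
                queue.append((b, fn(d)))
    raise ValueError(f"no translation path from {src} to {dst}")

# toy "languages": the same fact as a dict, as a tuple, and as a sentence
define_translation("dict", "tuple", lambda d: (d["subject"], d["relation"], d["object"]))
define_translation("tuple", "text", lambda t: f"{t[0]} {t[1]} {t[2]}")

print(route({"subject": "tau", "relation": "discusses", "object": "tau"}, "dict", "text"))
# -> "tau discusses tau"
```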


We do not refer to translation as in French to Chinese, as we already stressed that we don't deal with natural languages. Of course, theoretically, it might be the case that one day someone will program over TML something that can understand natural language completely, but we don't count on such an event. Indeed, there are many formalisms of natural language that are quite close to the full language and comfortable for humans to work with (what we refer to as "simple enough English that machines can understand"), so we can expect TML to process human-comprehensible languages to some extent. But TML is intended also for machine-only languages. For example, one might want to convert a document into formatted HTML or into a wiki, or to convert a program in some high-level language to machine code, or to synthesize code from logic.


More generally, TML is intended to be a compiler-compiler. In order to do this efficiently, without having to consider the logic of the language[s] again and again with every compilation of documents written in it, we take the approach of Partial Evaluation, which gives rise to additional, very desirable features for a compiler-compiler in the form of the Futamura projections.
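To give a rough flavor of what partial evaluation means (a hand-rolled toy in Python, not how TML does it): a specializer takes a program together with the part of its input that is already known, performs everything that depends only on that static part, and emits a residual program. The first Futamura projection is exactly this with an interpreter as the program and a source document as the static input, so the residual program is the "compiled" document.

```python
# Toy partial evaluation: specialize pow_(x, n) with respect to a known n.
# The loop over n is "static" and disappears from the residual program.

def pow_(x, n):
    r = 1
    for _ in range(n):
        r *= x
    return r

def specialize_pow(n):
    """Return a residual program equivalent to lambda x: pow_(x, n)."""
    body = " * ".join(["x"] * n) if n > 0 else "1"
    src = f"def pow_n(x):\n    return {body}"
    env = {}
    exec(src, env)      # generate and load the specialized code
    return env["pow_n"]

pow_5 = specialize_pow(5)          # residual program: return x * x * x * x * x
assert pow_5(2) == pow_(2, 5) == 32
```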


Now that we can express knowledge and opinions in various languages (precisely those languages that users define over the internet of languages over time), we can communicate using those languages. We consider Human-Human communication, or more specifically Human-Machine-Human communication. The machine is not an equal party in the conversation; it is only a machine. It only organizes what we say, and it is able to do so because we encode our information in a way accessible to it. A user can broadcast an idea to another user, and even in this narrow scope of transmitting one idea between two people we already enjoy three benefits: easy explaining, easy understanding, and formalizing knowledge as a byproduct.


Specifically, the explainer doesn't need to make the other user understand; they only need to make the machine understand. This task might be simpler in some respects and more complex in others, but machines are certainly less bound by organization and scale than humans. Having achieved an idea formalized in a machine-comprehensible language, the second user can now not only translate it into other knowledge representation languages, organize it as they see fit, or compare it to other formalized ideas; they can also ask the machine any question they have. Since the machine understood the subject completely, and by "understood" we indeed refer to the theoretical ability to answer all questions (decidability arises again here), it can help the user understand by the same definition of understanding, as it can answer all the user's questions without referring them back to the original idea's author.


But the Alpha goes beyond this case. The Alpha is about discussions of any scale. It is structured as discussions, just like forums or social networks, with posts and comments that can appear in a team or a profile. A profile (or identity) is a place where people will typically post their personal opinions, and will be able to share them with other profiles they're connected with. A team is a group of identities, created and configured by some user, and intended to deal with a certain subject. For example, a team could collaboratively develop a software product, compose an agreed law or contract, or simply discuss any scientific/philosophical/social/nonsense ideas.


So far this sounds just like any other discussion platform, but here we can have many more features thanks to using machine-comprehensible languages. To list a few: automatically detecting a repeated argument by the same person; collecting what each person said during the discussion and mapping all the agreement and disagreement points; listing all opinions and who agrees with each (speakers per opinion rather than opinions per speaker); or organizing the information put into the discussion in more structured and readable forms, like a wiki. It can even comment automatically: suppose you see a post by someone expressing some opinion, but you already expressed your own opinion on the subject at length in the past. You could then click "autocomment" and the system will automatically express your opinion, based on the exact information you provided in the past, and relative to the post you're autocommenting on. Or, maybe most importantly, it can calculate the set of statements agreed by everyone with no exception, under some scope of concern: network-wide, per team, per profiles connected to my profile, per discussion, and so on. Remember, this is not magic at all once everything is written in logic (or given that we have something that can translate it into logic, namely TML definitions of the documents' languages).
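To make that last point concrete, here is a toy sketch (mine, not the Alpha's code) of "the set of statements agreed by everyone with no exception" under a scope. Once every participant's opinion is a set of formal statements, the agreed core is just their intersection; in reality agreement would be decided up to logical equivalence by TML rather than by syntactic equality.

```python
def agreed_core(opinions):
    """opinions: participant -> set of formalized statements within some scope."""
    views = list(opinions.values())
    if not views:
        return set()
    core = set(views[0])
    for v in views[1:]:
        core &= v          # keep only statements every participant asserts
    return core

team = {
    "alice": {"s1", "s2", "s3"},
    "bob":   {"s2", "s3", "s4"},
    "carol": {"s2", "s3"},
}
print(agreed_core(team))   # {'s2', 's3'} -- agreed by everyone, no exception
```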


Over the Alpha we teach the network a lot of knowledge, intentionally or as a byproduct of discussions. We also form theories that we all contributed to and agree on. What can we do with this knowledge? Ultimately, in the computer world, all we can do is run computer programs. On the Beta we will be able to discuss programs and then actually run them. On Tau, we'll have a special team called Tau, such that whenever the group accepts a new decision, Tau's code is automatically modified. Over the Beta we can make true those things we agreed were desirable over the Alpha, in our discussions. Once a team agrees on, or modifies its agreement on, a specification of some program, no code needs to be written or rewritten by hand, as it can be done automatically: everything is already in machine-comprehensible languages. Synthesizing code from specifications is yet another language translation to be done over the internet of languages, though of course adequate language transformers have to be developed over it in order to allow this. This is a good example of things that are more easily said than done, and the details are highly technical. It suffices to mention that the cutting-edge synthesis capabilities appear in the MSO+λY world.


Choice about choice is to choose how to choose: to be able to change the choice mechanism itself, or in other words, the rules of changing the rules, or equivalently, to change Tau's code over time. This in itself raises paradoxes and constrains the possible logics. If rules can change themselves, they inevitably contradict themselves as they try to say something else. How can we formalize such a process in a paradox-free manner? One may be misled into identifying rules about rules and choice about choice with higher-order logic, but this isn't enough. Consider for example the rule "all rules, including this one, can be modified only upon majority". Since this rule operates on itself as well, it has no finite order. We therefore need recursion in order to deal with rules of changing the rules. This is an important aspect involved in the choice of fixed-point logic for TML, and of the λY calculus for the Beta (apropos, Bauer showed in "On Self-Interpreters For System T and Other Typed λ-Calculi" that a language can self-interpret only if it has a fixed point, which rules out total programming languages).
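To connect the λY remark to something executable, here is the usual fixed-point combinator trick in Python (the call-by-value Z combinator, since Python is eager; a sketch of mine, not code from the Beta): it gives general recursion to a function that never names itself, and it is precisely this kind of operator that total languages exclude.

```python
# Z combinator: the call-by-value fixed-point combinator.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# factorial defined without referring to itself, via its own fixed point
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
assert fact(5) == 120
```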


An approach to rule-changing that was considered in the old Tau is Nomic's approach. To explain Nomic's approach and the new Tau's approach we'll use an example. Consider two lawyers, each representing one side of some deal, trying to converge on a contract that both agree on. One way would be the following: the first lawyer suggests a clause, and the second lawyer may agree or not. If agreed, the clause is appended; otherwise it isn't. Then it's the second lawyer's turn to propose a clause, and so on. This would be the Nomic way. The equivalent for Tau is to apply successive code patches over time. By that we pose an asymmetry between opinions depending on which came first. There is a lot to say about this asymmetry and about how Tau manages to avoid it almost completely, but for now, consider the case where a newly proposed clause contradicts an old clause. If we don't want to give priority to what came first, the lawyers will have to amend the new or the old clause, or even more clauses, and not by default delete the old one.


Another way would be that on every turn, each lawyer submits a whole contract draft, and the other lawyer may either accept it or propose a different draft. Requiring each draft to be logically consistent, we never have to deal with contradictions between past and future; it completely eliminates the need to look back. But it still cannot scale: what if we had a million lawyers, would they read a million drafts?


Over Tau we can take all those million contract drafts, which correspond to proposals of Tau's next full code, and in a quite straightforward way (thanks to the logical formalism of the documents) calculate the precise core that everyone agrees on, and list the points left to be resolved. We don't need to vote; we do it just as in small groups in real life: we just speak, and the map of opinions arises from the conversation to any intelligent listener.


There is so much more to be said, and it will be said in further blog posts and other publications, but that's all for now. I'll be more than happy to hear your opinions and approaches regarding the mentioned issues, especially practical social choice and how to make discussions scale.
