Blog

From Agoras to TML

posted Mar 12, 2018, 6:30 PM by Ohad Asor   [ updated Mar 12, 2018, 7:01 PM ]

In the last blogpost we went through our stages in chronological, conceptually bottom-up order: language, knowledge, discussion, collaboration, change, and knowledge market. The first five steps are about the creation of a knowledge society, and from there we can build a knowledge economy on top of it. In this post we go backwards: we consider what we mean by knowledge economy and what is required for it to happen, and go down the hierarchy until we reach language.

To make the ideas more concrete, let's consider a decentralized search engine, which was one of Agoras' goals that was later generalized into the concept of a knowledge economy. Google has ~1M physical servers which need to crawl, index, and search the internet. I don't know how big their data is, but it must be huge. So to have the whole internet indexed, and to actually search it, one needs enormous amounts of computing power and data storage. In a decentralized network featuring a decentralized search engine, who is going to pay for that?

The task of maintaining a web search engine depends on "physical" data: data that is completely unknown until someone goes to the internet and discovers it. Therefore, such a task cannot be completely trustless, as one cannot prove that they downloaded and indexed into the search engine the correct data, and didn't modify, replace, or omit it. But although this problem isn't completely solvable, it is still solvable to some probabilistic extent, with risks that can, theoretically, be lowered arbitrarily. We will not cover this whole subject now, but we did in Zennet's materials, cf. e.g. http://zennet.sc/about or more documents in Zennet's bitcointalk thread. My pricing formula appears at http://zennet.sc/zennetpricing.pdf and eliminates the risk of mispricing, a risk that can be significantly exploited. An additional way to probabilistically verify unverifiable computations is to calculate the same thing more than once (by randomly choosing more providers), so increasing the cost linearly decreases the risk exponentially (e.g. 10x more cost yields roughly a 10th-power reduction in risk).
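
As a back-of-the-envelope sketch of the replication argument (a toy model, not Zennet's actual protocol): if a single randomly chosen provider returns an undetectably wrong result with some probability p, and providers misbehave independently, then k providers all being consistently wrong has probability at most p^k, while the cost grows only linearly in k. The independence assumption is the crux and is only an assumption here.

```python
# Toy sketch: linear cost vs. an exponentially shrinking risk bound under
# independent, randomly chosen providers. All numbers are illustrative.

def replication_tradeoff(p=0.1, max_k=10, unit_cost=1.0):
    """Print the cost and a naive risk bound p**k for k-fold replication."""
    for k in range(1, max_k + 1):
        print(f"providers={k:2d}  cost={k * unit_cost:5.1f}  risk <= {p ** k:.1e}")

replication_tradeoff()  # 10x the cost brings the bound from p down to p**10
```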

So having a decentralized search engine requires Zennet-like capabilities: the ability to fairly rent (and rent out) computational resources, under a risk acceptable in the user's terms (as a function of cost). Our knowledge market will surely require such capabilities, and this is therefore one of the three main ingredients of Agoras. But let's continue: say we have a hardware rental market, what's next towards a decentralized search engine?

A web search engine consists of users and maintainers. The users supply search queries, and the maintainers answer the queries quickly, and to do so quickly they must have the whole web already indexed and stored. Naturally, users have to pay the maintainers an amount that depends on the amount of usage and the cost of maintenance.

But in a decentralized network, the users and the maintainers are really the same entity. To maintain the network, all one has to do is run the client and by that participate in indexing and searching. A user might issue a certain number of queries per day, while their computer can answer another certain number of queries per day for other users (since no single user will store the whole internet, and you can't predict what you're going to query tomorrow). The computer can probably answer orders of magnitude more queries than a single average user can manually issue. So for a home user who does dozens or hundreds of "googlings" per day and also runs a client that supports the network, I'd expect such a user not only not to have to pay, but to earn, as they'll serve others more than they consume. But we will also have heavier users, for example ones that consume the search services automatically at large scale. Such users will have to pay and will probably not break even. It's a zero-sum game, but the money flows from the big entities to the smaller ones.
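
A toy balance sheet may make the point concrete; the prices and query counts below are invented purely for illustration, not projections:

```python
# Hypothetical daily balance of a node: it earns per query answered for
# others and pays per query it issues. All numbers are made up.

def daily_balance(queries_issued, queries_served, price_per_query=0.001):
    return (queries_served - queries_issued) * price_per_query

print(daily_balance(queries_issued=100, queries_served=50_000))        # home user: net earner
print(daily_balance(queries_issued=2_000_000, queries_served=50_000))  # heavy automated user: net payer
```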

OK, so we described a system in which one can type a query and "ask the internet" and the engine returns an answer, together with an economic system that supports the production, pricing, shipment, and arena (agora) of some knowledge (the knowledge of the text on the web). Now it begins to sound like an example of a knowledge economy.

If we stopped here, we wouldn't actually have contributed much: we'd have a better, decentralized search engine that gives many users some income, but life would continue more or less the same. Searching, as we know it, is no different than searching a web page with Ctrl-F together with a thesaurus. That's more or less what Google does: it Ctrl-F's the internet with an open thesaurus. But making an economy out of knowledge has many more possibilities. Importantly, we'd like to incorporate into our knowledge economy deeper and more meaningful knowledge than "those words or their equivalents are mentioned on that website".

Ultimately, we'd like to upload many (if not most) of our thoughts, opinions, and intellect to the internet. We already do it. But all our search engines know how to do is use Ctrl-F and a thesaurus; namely, they operate at a very shallow level, not even "shallow understanding". But the knowledge that people actually seek is never like this. We don't seek documents that contain certain words, but documents that actually answer our question. Similarly, we don't seek professionals whose only skill is searching the internet; when we seek professionals we expect from them a deep understanding of their subject of expertise, not the ability to mention a bag of words. Otherwise Google would be good enough and we wouldn't need professionals anymore.

And we'd like to have a futuristic electronic knowledge economy out of all this. To shed a broader light on the picture (leaving aside for this post the whole monetary side of the economy, the monetary features that Agoras will support), an economy cannot exist without a society, and is a thing within society. Unsurprisingly, all economists consider themselves a kind of social scientist. Unlike physics, economics depends on, and cannot be defined without, concepts that exist purely in the human imagination. The principle that yields this status of economics is simple but very deep: we cannot have an economy without some subjective valuation (or "utility function"), namely the ability to say "I prefer this over that". One can think of it like taste: some prefer chocolate over ice cream, while some crazies prefer ice cream over chocolate. They might even be ready to pay different prices. At the bottom line, it comes down to ethical value systems in the broad sense, defining good and bad, better and worse. These are indeed not physical terms but exist purely in our imagination. And that doesn't make them a single bit less important.

Let me give a (really) pale example of a knowledge-economical feature that can be built over Agoras thanks to its underlying Tau infrastructure, giving it access to meaningful knowledge. Suppose there's some institute or faculty of philosophy that took on the hard mission of formalizing whole philosophy books in a Tau-comprehensible language. They might offer you to submit questions and get answers that rely on their expensive proprietary database, for payment. They can just declare this data, over the platform, to be private, and incorporate it into certain discussions only upon paid subscription.

Each one of us is a research institute of something, featuring unique knowledge. Such knowledge should have direct economic value, namely a pure knowledge-cash transaction.

Alright, so we also need a "knowledge society" to be the framework that our "knowledge economy" lives in. We will touch on several aspects in which it is required, but let's follow our route and speak about the ability to change. An economy is a game, and it's highly undesirable not to be able to change the rules of the game over time, which is again a social choice and highly depends on ethical values. Only society decides the new economic (and social) rules, as otherwise it's technically not going to happen. But what does it mean to "change the economy" or "change the society"?

It means a lot, but we don't care. We are computer people. Our knowledge society and economy, Tau and Agoras, are computer programs. So "what does it mean to change the economy/society" comes down to a very clear practical concept: a decentralized network that changes its own code.

Changing its own code, as it sounds, is a sensitive thing to do, both in practice and in design, for both technical and social reasons. The old Tau considered only the technical side (and even that in a wrong way), and neglected the social side. Indeed there are questions that we don't know how to answer a priori, as there is no best answer, for example: who can change the rules (in a decentralized setting)? There is no perfect answer to this question, and every solution suffers from major disadvantages; yet, as there is no other way, the users will have to decide which partial solution they see as least bad. So the "governance model" will ultimately have to be decided by the users, and that's one of the benefits of beginning the discussion platform centralized: to reach a broad and machine-assisted consensus about the nature of the decentralized network from its beginning.

But aside from "who gets to change the rules" there are many, many social aspects in the process of change that require attention and do have good solutions. For a taste of those aspects please see the previous blogpost (for example "who gets to vote" vs "who gets to decide what to vote over", a question that becomes meaningless given our vote-less solution).

A self-amending core is the Tau, as also explained in the previous post. Once we have the ability to change, we can proceed to an evolving electronic knowledge economy. So we covered two steps on our way down: from knowledge market to change, namely from Agoras to Tau. Let's keep going down through collaboration (the Beta), discussion and knowledge (the Alpha), and language (TML), but briefly.

Collaboration, like "change" above, means a lot, but we don't care, as we're computer people. To collaborate means to do something together, and the most we can do in the computer world is run a program. Any p2p network is a form of collaboration: coordinated execution of code. On Tau we'll use the language translation abilities of TML in order to convert specifications of programs into code of programs, automatically, and by that building and running a program together reduces to discussions, enhanced with the ability to convert logic to code, run it, and coordinate its execution among parties.

From here to discussions. In order to agree which code to run (or even to organize the opinions of a large group), and which new rules to set, and to actually have an electronic knowledge society, and to formalize knowledge in a human-convenient process, and to have all this work effectively at a very large scale (millions of users or more), we need discussions that scale. And for this topic I have no choice but to refer you again to the last blogpost.

We have reached, down our road, the fundamental concept of knowledge itself, and language, which is even more fundamental than knowledge. Our knowledge society/economy can be more fruitful than in the existing world only if the machines have access to the meaning of things said, and not just to how they're written. We create the Internet of Languages by letting users define new knowledge-representation languages (which might be very close to natural languages), and by that making an ever-growing class of documents machine comprehensible (let alone an ever-growing number of documents). TML and the Internet of Languages will support many more kinds of languages than just knowledge representation, but that's enough for now.

In this post we emphasized the top of the pyramid more than its bottom, in contrast to the previous post. I hope it was somewhat helpful. Thanks for reading!

The New Tau

posted Dec 30, 2017, 3:27 PM by Ohad Asor   [ updated Dec 30, 2017, 3:28 PM ]

We are interested in a process in which a small or very large group of people repeatedly reach and follow agreements. We refer to such processes as Social Choice. We identify five aspects arising from them: language, knowledge, discussion, collaboration, and choice about choice. We propose a social choice mechanism by a careful consideration of these aspects.


Some of the main problems with collaborative decision making have to do with scales and limits that affect the flow and processing of information. Those limits are so widely believed to be inherent in reality that the possibility of overcoming them is mostly not even considered. For example, we naturally consider the case in which everyone has a right to vote, but what about the case in which everyone has an equal right to propose what to vote over?


In small groups and everyday life we usually don't vote but express our opinions, sometimes discuss them, and the agreement or disagreement or opinions map arises from the situation. But in large communities, like a country, we can only think of everyone having a right to vote on some limited number of proposals. We reach those few proposals using hierarchical (rather than decentralized) processes, in the good case, in which everyone has some right to propose but the opinions flow through certain pipes and reach the voting stage almost emptied of the vast information gathered in the process. Yet we don't even dare to imagine an equal right to propose, just like an equal right to vote, for everyone, in a way that can actually work. Indeed, how can that work? How can a voter go over a million equally-weighted proposals every day?


All known methods of discussion so far suffer from very poor scaling. Twice as many participants rarely yields twice the information gain, and when the group is too big (even a few dozen), doubling the participants may even cut the overall gain in half or below, let alone improve it twofold.


It turns out that under certain assumptions we can reach truly efficiently scaling discussions and information flow, where 10,000 people are actually 100 times more effective than 100 people, in terms of collaborative decision making and collaborative theory formation. But for this we'll need the aid of machines, and we'll also need to help them to help us.


Specifically, the price is using only certain languages, which may still evolve with time, and by that letting computers understand what we speak about, understand things said during the discussions. Since no one knows how to make computers understand natural languages, we'll have to take a step towards the machines and use machine-comprehensible languages. We'll say more about this point, but first let's speak a little about self-amendment.


We describe a decentralized computer network, Tau Chain; as such, which social decisions can it support? The most that computers can do is run programs. Over Tau Chain we can gather knowledge and agree or disagree over it, and we can also actually do something and perform actions as arises from the discussion over the platform. Those actions are nothing but computer programs. And the most important program in our scope is the platform itself.


The main choices to be collaboratively made over the system are about the system itself. Tau is a discussion about Tau. Or, in a slightly more elaborate yet succinct definition:


Consider a process, denoted by X, of people, forming and following another process denoted by Y. Tau is the case where X=Y.


That's the Tau. What the Tau is doesn't matter; what matters is that it can change into anything we want it to be. Further, Tau is a computer program, so we refer to a program that changes itself according to the collaborative opinions and decisions of its users.


It should be remarked that we do not let Tau guess people's opinions, or even perform well-educated guesses as in machine learning, and that's maybe the main reason we use logic. Things said over the platform are as formal and definite as computer programs; they just deal with generic knowledge rather than machine instructions.


Having that, a collaboratively self-amending program, it can transform into virtually any program we'd like it to be, or many programs at once. Indeed Tau does not speak only about itself but is open to the creation of any other individual or collaborative activity, such that we make it possible for small and very large groups to discuss, share and organize knowledge, detect consensus and disagreements, and coordinate actions in the form of programs.


The five aspects of social choice mentioned at the beginning correspond to Tau's roadmap. Here's a brief summary, to be expanded in the rest of this post: implementation of TML and the Internet of Languages is the first step. Then comes the Alpha, which is a discussion platform. Then the Beta, which is about collaboratively following processes (not just defining them); specifically, it's about not just knowledge but also programs. The Alpha and the Beta are not fully decentralized in their infrastructure as in Bitcoin. Afterwards, and with the help of the Alpha and the Beta, comes Tau, which is a decentralized self-amending social choice platform. On top of it we'll have a knowledge market, which is one of Agoras' three components (the other two are a computational resources market like Zennet, and a newly designed economy offering features like risk-free interest without printing new money, by implementing a derivatives market).


In order for machines to boost our discussion and collaboration abilities, they have to have access to the meaning of what we say. Machines use certain kinds of languages while humans use different kinds. For machines to use human languages is something no one knows how to do, and for humans to directly use machine languages is inconvenient to the extent that it simply doesn't fit common knowledge-sharing human communication: machine languages are made of machine instructions, while knowledge representation is of a different nature. In other words, machines expect operational information, while humans make heavy use of declarative language. Indeed, one of Tau's goals is to let us focus on the "know-what" and let machines figure out the "know-how".


We therefore suggest a previously widely suggested (cf. e.g. the article "Knowledge Representation and Classical Logic" by Lifschitz et al) middle ground between human and machine languages, which is logic. Formal logic is largely natural to humans and is something machines can work with. But still, "formal logic" isn't anything particular, as it doesn't point to any single language but is a vague description of a family of languages.


We postulate that there should not and cannot be a single universal language. There is no reason for one language to be optimal (or even adequate) for all needs. We therefore come up with a meta-language that is able to define new languages, but hey, this would be back to square one with one universal [meta-]language. We therefore require the meta-language to be able to redefine itself and change, just as it can define other languages. By that we get not only many languages but also a self-amending language, which is an important part of a self-amending system.


It turns out that logics that can define themselves and have nice logical properties like decidability are not very common. We have Universal Turing Machines, but a less expressive and more informative (e.g. decidable) language is not easy to find. We adopt the logic PFP, whose expressive power corresponds to PSPACE as known from Finite Model Theory books, and which was shown to be able to define itself in Imhof, 1999, "Logics that define their own semantics".
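
For intuition only, here is a small sketch of the partial fixed point (PFP) operator that gives the logic its name: iterate a definable operator on relations over a finite universe; if the iteration stabilizes, the stable relation is the result, otherwise (the sequence must eventually cycle) the result is empty. This is a toy illustration of the semantics, not TML itself.

```python
# Partial fixed point over a finite universe: iterate `step` starting from
# the empty relation; return the fixed point if one is reached, else the
# empty relation once a cycle is detected.

def pfp(step, universe):
    seen = set()
    current = frozenset()
    while current not in seen:
        seen.add(current)
        nxt = frozenset(step(current, universe))
        if nxt == current:
            return current            # stabilized: this is the PFP
        current = nxt
    return frozenset()                # cycled without stabilizing

# Example operator: "x is in the relation if x == 0 or x-1 already is"
step = lambda rel, uni: {x for x in uni if x == 0 or (x - 1) in rel}
print(sorted(pfp(step, range(5))))    # [0, 1, 2, 3, 4]
```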


From here we continue to the Internet of Languages. Using the meta-language, which we call TML (Tau Meta-Language; one can get an impression from the ongoing work on github), users define new languages by specifying logical formulas that describe what it means for two documents in different languages to have the same meaning. In other words, to define a new language, one needs to define a semantics-preserving translation into an existing language. Semantics in our scope is ontological (objects and relations), not operational semantics as in programming languages. By that we get an internet of knowledge representation languages that makes the choice of language not matter. A document in one language can be routed (using TML programs) into different languages.
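
As a very rough illustration of the idea (only the spirit of it, not TML's actual syntax or machinery), defining a language amounts to stating how its vocabulary maps onto an existing one while preserving the objects and relations:

```python
# Toy example: a document in hypothetical language A uses the relation
# "parent_of"; the language definition says how it translates into the
# existing vocabulary of language B ("child_of"), preserving the ontology.

doc_a = [("parent_of", "alice", "bob"), ("parent_of", "bob", "carol")]

translation = {"parent_of": lambda x, y: ("child_of", y, x)}

def translate(doc, rules):
    return [rules[rel](*args) for rel, *args in doc]

print(translate(doc_a, translation))
# [('child_of', 'bob', 'alice'), ('child_of', 'carol', 'bob')]
```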


We do not refer to translation as in French to Chinese, as we already stressed that we don't deal with natural languages. Of course, theoretically, it might be the case that one day someone will program over TML something that can understand natural language completely, but we don't count on such an event. Indeed there are many formalisms of natural language that are quite close to the full language and comfortable for humans to work with (what we refer to as "simple enough English that machines can understand"), so we can expect TML to process human-comprehensible languages to some extent. But TML is intended also for machine-only languages. For example, one might want to convert a document into formatted HTML or into a wiki, or to convert a program in some high-level language to machine code, or to synthesize code from logic.


More generally, TML is intended to be a compiler-compiler. In order to do so efficiently, without having to reconsider the logic of the language[s] again and again with every compilation of documents written in it, we take the approach of Partial Evaluation, which gives rise to additional very desirable features for a compiler-compiler, in the form of the Futamura projections.
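
To give a taste of partial evaluation (a toy constant folder, nothing like a real specializer): an interpreter for a tiny expression language, and a specializer that evaluates everything already known, leaving residual code for the unknown inputs. The first Futamura projection is this idea applied to the interpreter itself: specializing an interpreter to a fixed program yields a compiled program.

```python
# Tiny expression language: expr ::= number | variable name | ('+', expr, expr)

def interpret(expr, env):
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return env[expr]
    _, a, b = expr
    return interpret(a, env) + interpret(b, env)

def specialize(expr, known):
    """Partially evaluate expr given only some of the variables."""
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return known.get(expr, expr)       # unknown variables stay symbolic
    op, a, b = expr[0], specialize(expr[1], known), specialize(expr[2], known)
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        return a + b                       # fold what is already known
    return (op, a, b)

program = ("+", ("+", "x", 2), ("+", 3, 4))
print(specialize(program, {}))             # ('+', ('+', 'x', 2), 7): residual program
print(interpret(program, {"x": 1}))        # 10
```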


Now that we can express knowledge and opinions in various languages (precisely those languages that users define over the Internet of Languages over time), we can communicate using those languages. We consider Human-Human communication, or more specifically Human-Machine-Human communication. The machine is not an equal party in the conversation; it is only a machine, it only organizes what we say, and it is able to do so because we encode our information in a way accessible to it. A user can broadcast an idea to another user, and in this narrow scope of transmitting one idea between two people we can already enjoy three benefits: easy explaining, easy understanding, and formalizing knowledge as a byproduct.


Specifically, the explainer doesn't need to make the other user understand; they only need to make the machine understand. This task might be simpler in some aspects and more complex in others; still, machines are certainly less bound by organization and scale than humans. Having achieved an idea formalized in a machine-comprehensible language, the second user can now not only translate it into other knowledge representation languages, organize it as they see fit, or compare it to other formalized ideas; they can also ask the machine all the questions they've got. Since the machine understood the subject completely, and by understood we indeed refer to the theoretical ability to answer all questions (decidability arises again here), it can help the user understand by the same definition of understanding, as it can answer all the user's questions, without the need to refer the question to the original idea's author.


But the Alpha goes beyond such a case. The Alpha is about discussions of any scale. It is structured as discussions just like on forums or social networks, with posts and comments, that can appear in a team or a profile. A profile (or identity) is a place where people will typically post their personal opinions, and will be able to share them with other profiles they're connected with. A team is a group of identities, created and configured by some user, and intended to deal with a certain subject. For example, a team could collaboratively develop a software product, or compose an agreed law or contract, or simply discuss any scientific/philosophical/social/nonsense ideas.


So far this sounds just like any other discussion platform, but here we can have many more features thanks to using machine-comprehensible languages. To list a few: automatically detecting a repeated argument by the same person; collecting what each person said during the discussion and mapping all the agreement and disagreement points; listing all opinions and then who agrees with them (speakers per opinion rather than opinions per speaker); or organizing the information put into the discussion in more organized and readable forms, like a wiki. It can even comment automatically: suppose you see a post by someone expressing some opinion, but you already expressed your opinion about the subject at length in the past. You could then click "autocomment" and the system will automatically express your opinion, based on the exact information you provided in the past, and relative to the post you're autocommenting on. Or, maybe most importantly, calculating the set of statements agreed by everyone with no exception, under some scope of concern: network wide, per team, per profiles connected to my profile, per discussion, and so on. Remember, this is not magic at all once everything is written in logic (or given that we have TML definitions of the documents' languages that can translate them into logic).
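
The last point is worth a tiny sketch: once each participant's statements are formal objects, the "agreed core" under a scope is just their intersection. The names and statements below are of course invented:

```python
# Toy agreed-core computation: intersect the statement sets of everyone in
# the chosen scope (a team, a discussion, the whole network, ...).

opinions = {
    "alice": {"tau_is_self_amending", "votes_dont_scale", "use_logic"},
    "bob":   {"tau_is_self_amending", "use_logic", "ban_cats"},
    "carol": {"use_logic", "tau_is_self_amending"},
}

def agreed_core(opinions, scope=None):
    scope = list(scope or opinions)
    return set.intersection(*(opinions[who] for who in scope)) if scope else set()

print(agreed_core(opinions))                          # {'tau_is_self_amending', 'use_logic'}
print(agreed_core(opinions, scope=["alice", "bob"]))  # scoped to two profiles
```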


Over the Alpha we teach the network a lot of knowledge, intentionally or as a byproduct of discussions. We also form theories that we agree on and all contributed to. What can we do with this knowledge? Ultimately, in the computer world, all we can do is run computer programs. On the Beta we will be able to discuss programs, and then actually run them. On Tau, we'll have a special team called Tau, such that whenever the group accepts a new decision, Tau's code is automatically modified. Over the Beta we can make true those things that we agreed were desirable over the Alpha, in our discussions. Once a team agrees on, or modifies its agreement on, a specification of some program, no code needs to be written or rewritten, as it can be done automatically, since everything is already in machine-comprehensible languages. Synthesizing code from specifications is yet another language translation to be done over the Internet of Languages, but of course adequate language transformers have to be developed over it in order to allow this. This is a good example of things that are more easily said than done, and the details are highly technical. It suffices to mention that the cutting-edge synthesis capabilities appear in the MSO+λY world.


Choice about choice is to choose how to choose: to be able to change the choice mechanism itself, or in other words, the rules of changing the rules, or equivalently, to change Tau's code with time. This in itself raises paradoxes and constrains the possible logics. If rules can change themselves, they inevitably contradict themselves as they try to say something else. How can we formalize such a process in a paradox-free manner? One may be misled to identify rules about rules and choice about choice with higher-order logic, but this isn't enough. Consider for example the rule "all rules, including this, can be modified only upon majority". Since this rule operates on itself as well, it has no finite order. We therefore need recursion in order to deal with rules of changing the rules. This is an important aspect in the choice of fixed-point logic for TML, and of the λY calculus for the Beta (apropos, Bauer showed in "On Self-Interpreters For System-T and Other Typed λ-Calculi" that a language can self-interpret only if it has a fixed point, which rules out total programming languages).


An approach to rule-changing that was considered in the old Tau is Nomic's approach. To explain Nomic's approach and the new Tau's approach we'll use an example. Consider two lawyers, each representing one side of some deal, trying to converge on a contract that both lawyers agree on. One way would be the following. The first lawyer suggests a clause in the contract, and the second lawyer may agree or not. If agreed, then the clause is appended; otherwise it isn't. Then it's the second lawyer's turn to propose a clause, and so on. This would be the Nomic way. The equivalent for Tau is to apply successive code patches over time. By that we introduce an asymmetry in favor of opinions that came first. There's a lot to say about this asymmetry and how Tau manages to avoid it almost completely, but for now, consider the case where a newly proposed clause contradicts an old clause. If we don't want to give priority to what came first, the lawyers will then have to amend the new or the old clause, or even more clauses, and not by default delete the old clause.


Another way would be that on every turn, each lawyer submits a whole contract draft, and the other lawyer may either accept it or propose a different draft. Requiring each draft to be logically consistent, we will never have to deal with contradictions of past vs future. It completely eliminates the need to look back. But it still cannot scale. What if we had a million lawyers, would they read a million drafts?


Over Tau we can take all those million contract drafts, which correspond to proposals of Tau's next full code, and in a quite straightforward way (thanks to the logical formalism of the documents) calculate the precise core that everyone agrees on, and list the points to be resolved. We don't need to vote; we do it just as in small groups in real life: we just speak, and the opinions map arises from the conversation to any intelligent listener.


So much more to be said, and will be said in further blogposts and other publications, but that's all for now. I'll be more than happy for your opinions and approaches regarding the mentioned issues, especially practical social choice and how to make discussions scale.

Tau and the Crisis of Truth

posted Sep 10, 2016, 11:25 AM by Ohad Asor   [ updated Sep 10, 2016, 11:28 AM ]

We live in a world in which no one knows the law. Except in trivial cases, you cannot know what is legal and what is not; all you can do is try and see what the judge's or policeman's opinion is after you've taken your actions, which of course differs from one to another. Or you can consult a lawyer who will tell you that there are no absolute answers, and at the end of the day it's a matter of probability which unfortunately no one knows how to calculate. You can do your best to have a productive life and follow the rules as you understand them or as lawyers guide you, but no one can guarantee that you will not be considered a criminal, or that legal actions will not be taken against you. Similarly, one can live a life of harming so many people and no legal system will stop it, even if the system is aware of the actions taken. Such a pessimistic situation is not new and is not local, and to my taste was best described by Franz Kafka.


Playing with words and taking them in any desired direction, consciously or not, with good or bad intentions, has been with us since humanity learned to speak. The worst lies contain only truths and the worst crimes are licensed, and arguments can be given to justify almost anything. This "crisis of truth" is the foundation of the post-modern stream in philosophy, notably the Deconstructivist approach which demonstrates how texts may be interpreted in many (contradicting) ways. "There is no one truth" is the basis of post-modernism. But can we at least have some island of truth in which social contracts can be useful and make sense?


Not only can laws not be understood in an absolute way, they cannot even trivially be made by an absolute process. Laws have to change with time, and therefore we also need laws of changing the laws. But then we'll need laws-of-changing-the-laws-of-changing-the-laws, ad infinitum. We therefore remain without any logical basis for the process of rulemaking, on top of the crisis of deciding what is legal and what is illegal. This paradox was described in the online-available book The Paradox of Self Amendment by Peter Suber. Suber's solution is summarized in the appendix of that book, which describes the game Nomic. He offers a two-tier system in which we have mutable and immutable rules, including rules for transmuting a rule (namely making an immutable rule mutable or vice versa). This way one can avoid the infinite hierarchy of amendment schemes, but we still remain with the "crisis of truth" in which there is no one truth, especially when you desperately need it, and despite the very purpose of laws being to provide a certain social order rather than disinformation and chaos.
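
For concreteness, here is a toy model of that two-tier scheme (the rule texts and numbers are invented; in actual Nomic, transmutation is itself adopted through the game's amendment procedure rather than applied freely):

```python
# Mutable vs. immutable rules, with transmutation moving a rule between tiers.

rules = {
    101: {"text": "Players take turns.",                   "mutable": False},
    201: {"text": "A proposal passes by simple majority.", "mutable": True},
}

def amend(rule_id, new_text):
    if not rules[rule_id]["mutable"]:
        raise PermissionError("immutable rules cannot be amended directly")
    rules[rule_id]["text"] = new_text

def transmute(rule_id):
    rules[rule_id]["mutable"] = not rules[rule_id]["mutable"]

transmute(101)                                   # make rule 101 mutable first...
amend(101, "Players may act simultaneously.")    # ...then amend it
```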


When stepping back from the world of legalities and trying to answer similar questions in the world of numbers and mathematical objects, the task doesn't become easy, but arguably easier. Logicians have contemplated those ideas for centuries and came up with the notion of Decidability. A language is decidable if, for every statement expressible in the language, one can decide whether it follows from rules expressed in that language as well. An astonishing example would be Godel's theorem: if our language is able to express Peano arithmetic, namely, it contains the natural numbers 1,2,3,... equipped with addition and multiplication and their rules (which indeed determine them uniquely), then there will always be some true statements about Peano arithmetic that cannot be proved from the definitions. Infinitely many axioms would therefore be required in order to decide the correctness of every statement. An example of decidability would be Presburger arithmetic: if we give up multiplication and define numbers with addition only (in a certain way), we are back to decidability and every expressible statement can be decided, in a finite process, to be either true or false. The same applies to the 2000-year-old Euclidean geometry, which is indeed a decidable theory.


We would therefore be interested in creating a social process in which we express laws in a decidable language only, and collaboratively form amendable social contracts without diving into paradoxes. This is what Tau-Chain is about.


This long introduction came to stress the importance and central role of decidability in life generally and in Tau specifically. If we would like Tau to make a real contribution to the world, it must use a decidable language. The initial Tau design was aiming to answer those requirements, but recently I've found out that I was wrong, and the initially chosen language is simply undecidable. Martin-Lof Type Theory (MLTT) has only certain decidable aspects (specifically, equality). But it is not the case that everything expressible is decidable. This situation led me to the conclusion that MLTT was a wrong choice, and another logic should be considered. Indeed, some statements of mine in blogposts and videos were mistaken, and MLTT does not have decidable typecheck in general.


MLTT is about so-called "dependent types", which I'll try to give a small taste of in this paragraph. We have "terms" and "types". One can see terms as statements and types as specifications that those statements are required to fulfill. How general can the specifications be? Dependent types come to give them maximum expressiveness in some sense: types can depend on terms, and terms can depend on types (and that's where the name "dependent types" comes from). We are able to express any specification that our term language is able to express, or more accurately, types can be stated by a program that calculates them, which may depend on other parts of the code. Programs may include specifications, and specifications may include programs. Typecheck is the process of making sure that terms indeed meet their types. But if types can be generic programs, we have to require them to be executed and return a result in finite time. For that we'll have to make sure that our programs halt, so we have reached the Halting problem. Only if you can assert that your types are calculated in a finite process does the very process of typecheck become decidable. But this task is undecidable by itself, as the Halting problem is undecidable. So if you want to reach decidability over dependent types, you first have to pass an undecidable barrier of halting. This is possible in many cases, as many programs can be written in a form that provably terminates (e.g. primitive recursion, which is the basis of total functional programming), but it is inherently impossible to come up with such a method for all programs, otherwise the Halting problem would become decidable.
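
A toy illustration (emphatically not MLTT itself) of why typechecking inherits the Halting problem: if a type is just a program that checks terms, then typechecking means running that program, and some such programs are not known to terminate. The checkers below are hypothetical; a real total language would demand a termination proof instead of the crude step budget used here.

```python
def is_nat(term, fuel):
    return isinstance(term, int) and term >= 0

def reaches_one(term, fuel):
    # Collatz-style checker: not known to terminate for every input,
    # so an explicit step budget ("fuel") is threaded through it.
    n = term
    while n != 1:
        if fuel == 0:
            raise TimeoutError("could not decide within the budget")
        n = 3 * n + 1 if n % 2 else n // 2
        fuel -= 1
    return True

def typecheck(term, type_program, fuel=10_000):
    try:
        return type_program(term, fuel)
    except TimeoutError:
        return None                 # "don't know": the undecidable residue

print(typecheck(6, is_nat))         # True
print(typecheck(27, reaches_one))   # True (this instance happens to halt quickly)
```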


Dependently typed languages are widely used in the world of computer-aided proofs and formal verification, but they have a reputation of being hard to work with. The hardness stems not only from their theory being significantly more complicated than that of other languages; the harder part is convincing the typechecker (or the termination and totality checker that tries to ensure that the typecheck process is indeed finite) that your code meets its specifications. The typecheck process is therefore not automatic, and requires a significant amount of human intervention just to convince it to accept true statements, a difficulty that stems exactly from the undecidability of the language. Subsequently, we lose the ability to always and fully automatically determine whether a sentence is true, or equivalently, whether some action (term) is legal (well-typed) or illegal according to given rules.


The remedy for this situation is to pick a different logic, a "complete" one. A complete logic means that everything is decidable, and not only certain aspects of it (like equality in MLTT). Godel showed that every logic that is complete and is able to express Peano arithmetic will inevitably be inconsistent, so in order to enjoy both consistency and completeness we'll have to give up a part of arithmetic. Not only are there already known complete and consistent logics we can use, but very recently such families of languages were discovered to be much more expressive than previously known. Specifically, certain classes of Monadic Second Order Logic (MSO or MSOL) have been known to be complete and consistent since the 1960s, but they did not have satisfactory expressiveness. It was not until 2006 that Luke Ong came up with a proof of decidability of MSOL over so-called "Higher Order Recursion Schemes", a simply-typed (rather than dependently-typed) higher order functional language with recursion, which dramatically enhanced the expressiveness of known completely decidable languages. Since then advancements in both theory and practice have been made, notably by Naoki Kobayashi.


It was a long journey until I found out that my basic premises were wrong; indeed, I was not a logician when the journey began. Fortunately, we can now modify Tau's design into something that is even better than what we had thought of before, even had MLTT been decidable. To mention three advantages: the existence of a direct computational model, the ability to infer programs, and simplicity of use.


MSO theories have a direct computational model in terms of automata. MSO formulas can be translated into (possibly redundant) machines that read a statement and decide whether it meets the formula. Moreover, those automata can be minimized in a unique way, so we even get provably optimal code. MLTT, on the other hand, does not enjoy a direct translation to automata; instead of the logic-automata connection it admits the more general concept of programs-as-proofs, where the proof process is left more loosely specified in computational terms, and a direct translation to automata does not exist. It cannot possibly exist, since automata are an explicit finite decision procedure, while dependent types are not completely decidable.
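
As a minimal taste of the logic-automata connection: the MSO-definable property "the word contains an even number of a's" compiled (by hand here, rather than mechanically as the Büchi-Elgot-Trakhtenbrot construction would do) into a two-state deterministic automaton that decides it.

```python
EVEN_AS = {
    "start": "even",
    "accepting": {"even"},
    "delta": {("even", "a"): "odd",  ("odd", "a"): "even",
              ("even", "b"): "even", ("odd", "b"): "odd"},
}

def run(dfa, word):
    state = dfa["start"]
    for ch in word:
        state = dfa["delta"][(state, ch)]
    return state in dfa["accepting"]

print(run(EVEN_AS, "abba"))   # True: two a's
print(run(EVEN_AS, "ab"))     # False: one a
```

This particular automaton is already minimal; in general the automaton obtained from a formula may be redundant and is then minimized to the unique smallest equivalent machine.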


The second notable advantage is automatic programming, more commonly called "program synthesis" in the literature. If we give a specification of a program (or rule base), can the machine calculate the program (or rules) by itself? This task is undecidable in MLTT, but in a complete logic it is indeed possible. We can have correct-by-construction programs and contracts, by merely stating what we expect them to do and commanding the machine to find a structure that meets our requirements.
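
To make "the machine calculates the program by itself" tangible in the crudest possible way, here is a brute-force enumerator that searches for a composition of primitives matching an input/output specification. Real synthesizers work symbolically over the logic rather than by exhaustive search; every name below is invented.

```python
from itertools import product

PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: x * 2,
    "neg": lambda x: -x,
}

def synthesize(spec, max_len=3):
    """Return the first pipeline of primitives satisfying all (input, output) pairs."""
    for length in range(1, max_len + 1):
        for names in product(PRIMITIVES, repeat=length):
            def prog(x, names=names):
                for name in names:
                    x = PRIMITIVES[name](x)
                return x
            if all(prog(i) == o for i, o in spec):
                return names
    return None

print(synthesize([(1, 4), (2, 6), (3, 8)]))   # ('inc', 'dbl'): increment, then double
```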


MSO logic is simpler to use for the reasons mentioned above: it is a simpler language, and it does not require human intervention to convince it that a true statement is indeed true. It also represents a more intuitive logic: MLTT is an intuitionistic logic, which means that proving a statement's negation false doesn't necessarily make the statement itself true. For example, one cannot always use a proof by contradiction. Intuitionistic logic allows proving nonexistence by contradiction: one can assume that something exists, derive a contradiction, and thereby conclude that it does not exist. But it does not support a proof of existence by contradiction: assuming that something does not exist and deriving a contradiction does not prove that it exists, under intuitionistic logic. More formally, intuitionistic logic does not include the law of excluded middle. And this is of course highly unintuitive, and allows the possibility of human mistakes, especially over complicated structures. On the other hand, MSO is a Classical logic, meaning it includes the law of excluded middle: every statement is either true or false, with no middle ground. The middle is completely excluded, while if we add this rule to MLTT, it loses its consistency.


In conclusion, our journey turned out to be different than we expected on the implementation side, but it will end up with a better product that will give us a chance to solve the crisis of truth and the paradox of self-amendment in a global, social, and electronic context. I personally intend (and have promised) to keep working full time until I deliver Tau and Agoras, which will introduce many more features than described here. Recently I gave a filmed, condensed description of the products I'm committed to deliver. Another article and another video are on their way to explain more details. If you are willing to help achieve these goals and you find yourself (or a friend) understanding Kobayashi's papers, I'll be very glad to cooperate.

Replacing the Intermediate Tokens

posted Apr 8, 2016, 10:28 PM by Ohad Asor

Recently we had an incident of stolen tokens due to impostors pretending to be Poloniex. We therefore invalidated the old token and issued a new token. The new token has already been distributed to all holders.
The old asset's number is 35 (Agoras Tokens) and the new asset's number is 58 (IDNI Agoras). Bittrex supports the replacement as well and trading continues as usual.
Great thanks to Richie and Bill from Bittrex, and to Zathrasc, Craig and Adam from Omni. Their support was very friendly and professional!
It is very important to make sure not to buy or sell the old tokens. They are not valid anymore and worth nothing. It is better to return or burn them as instructed here.
Thanks for all the support all around! We have been contacted by Tau's and Agoras' supporters from all over during these days. Very appreciated!

Project Roadmap

posted Mar 14, 2016, 6:27 AM by Ohad Asor   [ updated Apr 17, 2016, 2:20 AM ]

Beginning with some recap:

Tau is a language. It is a programming language, but not only that. It may express virtually any arbitrary knowledge, rules, and processes. Examples are contracts, mathematical proofs, scientific facts and arguments, and legal documents. Its power comes from being unambiguous and consistent while still maintaining maximum (finite) expressibility. A more detailed explanation of what consistently decidable means and why it is Tau's strongest property is given in the following short talk:


This isn't new: this type of language (specifically, Martin-Lof Type Theory [MLTT]) has known implementations. Tau's innovation is in adding DHT and Blockchain primitives to the language, thereby making the language aware of other peers and able to coordinate with them securely and meaningfully. Another innovation is adapting the RDF language family to MLTT: taking the subject-verb-object format of the famous RDF language family (e.g. Notation3) and giving it MLTT logic makes Tau a more accessible, readable, and portable language than other MLTT implementations.

What we are developing now is a compiler for this language, meaning something that takes a document written in the Tau language together with a query about that document, and resolves the query. But there's another important building block, which is shared knowledge and rules.

The language incorporates a shared level expressed by the built-in blockchain. Tau-Chain is designed to have a single root chain. This chain contains code, written in the Tau language, and this code is executed. As a metaphor, imagine a piece of code whose only job is to download itself from github and execute it, again and again, while the code might be changed and do additional operations in the meanwhile. A Tau client downloads its own code from the root chain: it downloads a block, executes the code in it, and this code instructs how to download and run the next block. By that, the client understands the blockchain according to the network rules that were relevant at the point in time when the block was created. This gives rise to the network being self-defining. Tau begins with its genesis block clean of rules, and the rules of the network will then be determined collaboratively over the network by the first users (for more information see this blogpost).
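
A highly simplified sketch of that loop (every name and the block format here are hypothetical placeholders, not the actual protocol): the code carried by block n decides how block n+1 is fetched, validated, and interpreted, so the protocol can amend itself on-chain.

```python
def fetch_block(height):
    """Placeholder for downloading a block from peers."""
    raise NotImplementedError

def run_client(genesis):
    rules = genesis["code"]                  # code shipped in the genesis block
    height = 0
    while True:
        block = fetch_block(height + 1)
        # The *current* rules decide whether the next block is valid...
        if not rules["validate"](block):
            break
        # ...and a block may carry new code, amending the protocol itself.
        rules = block.get("code", rules)
        height += 1
```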

To get a glimpse of the uses such a network gives rise to, see this blogpost. For a description of Tau as a generalized blockchain, see here. Tau also incorporates the lambda-auth design for verifiable computing, as described here.
A more detailed use case appears here and is summarized in the following short video:

Many of those use cases are to be built on Agoras. Specifically, we plan to implement over Agoras the code-for-money market, a computational resources market (as in Zennet's vision), and a decentralized smart search engine (aiming to obsolete Google), among some other plans. Those require a currency, and Agoras will be a currency implemented using Tau's architecture, inheriting its self-definition and democracy abilities, protocol elasticity, and smart yet comprehensible contracts.

Tau by itself comes with no coin and begins completely and inherently equal. It is merely a language with a blockchain, but with far-reaching consequences. It is meant to be a platform for decentralized (and offline) applications. Agoras will be only one of them. Tau is designed to leave others no incentive not to develop over it, e.g. due to its code reuse abilities and the incentive to combine hashes into one root chain, to mention only two aspects.

The selected markets that Agoras is planned to implement need abilities that are unique to Tau, and they give unrefusable answers to some of the most disturbing bottlenecks in the world economy: software development, computational resources, and data together with processing it. An example of an additional idea to be implemented on Agoras but not fully developed yet: a derivatives market that also implements risk-free interest (without printing new money!).

Explaining Tau, Agoras, and their implications is not an easy task. We have been working on explanatory materials and had significant success in spreading the word and the vision, yet this success in explaining the ideas is still very partial. The current materials still don't show the whole detailed picture, and are not clear enough. A new website and additional videos are currently under construction. More people are getting involved and trying to explain the ideas in their own words. This process takes time and work, but it is important that enough people figure out what this is all about, as it is really about something big that has a fundamental impact on all aspects of life (and I understand if I didn't manage to convince you about this point yet). Its success strongly depends on how people understand it, because the users themselves are going to set the rules of the network. The larger the consensus and the more arguments are raised, the better the chance to converge towards better rules. We don't have to worry about too much or too complicated information: that's why we have Tau.

Having more people understand the project is also important from the pre-sale point of view. We need to distribute the Agoras coins somehow, so we pre-sell them, and by that we also hope to receive long-term funding for the projects. The Agoras project will require much more effort and human resources than the Tau client if we wish to fulfill our vision in the optimal way. The coin distribution has to be fair in the sense that people were given a reasonable opportunity to understand what this project is really about. This has to do with the sale going on for a longer time than usual in the cryptocurrency world; nevertheless, the project is quite different from others, and has to be managed as a serious software startup and not according to common practices in the cryptocurrency world that might suit other types of projects.

Still, early buyers have to be incentivized. We had a weekly 2% raise in price and recently suspended it. But from May 1 the price per token we sell at will be 20 cents, going up 5% monthly (recall this has nothing to do with the prices people sell at on the exchange). Beginning today we offer the following wholesale discount: every additional 10K tokens gives an additional 5% discount. So if one purchases 30K tokens, the price for the first 10K is the regular price, then there is a 5% discount on the next 10K tokens, then a 10% discount on the subsequent 10K tokens. All purchasers via email receive a tax invoice with their desired recipient name. A buyer who agrees not to receive the intermediate tokens but only the final coins gets an additional 15% discount. The total marginal discount will never exceed 50% (to be fair to earlier buyers).
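
For clarity, a small sketch of how that tranche discount adds up, under the stated assumptions (20-cent base price, an extra 5% off each additional 10K-token tranche, total discount capped at 50%); this is arithmetic illustration only, not an official price calculator:

```python
def presale_cost(tokens, base_price=0.20):
    cost, tranche = 0.0, 0
    while tokens > 0:
        chunk = min(tokens, 10_000)
        discount = min(0.05 * tranche, 0.50)   # 0%, 5%, 10%, ... capped at 50%
        cost += chunk * base_price * (1 - discount)
        tokens -= chunk
        tranche += 1
    return cost

print(presale_cost(30_000))   # 2000 + 1900 + 1800 = 5700 (in dollars)
```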

This current stage is extremely preliminary, as the implications of the technology aren't even fully explained yet, and even the language itself isn't ready yet. When things look like what they really are, namely that if it all works as planned we offer an unrefusable replacement to markets of literally dozens of trillions as of today, this will be well reflected in the market cap. Not to mention when the final coins come into action together with the already implemented futuristic markets.

For final words, your participation in thinking about the rules of changing the rules is very important. Please give it some thought, ask questions, and have discussions about how you'd like Tau to be. If we do it right, it'll meaningfully master all of mankind's knowledge and way beyond.

Short Presale Snapshot Update

posted Dec 4, 2015, 2:09 PM by Ohad Asor   [ updated Apr 23, 2016, 2:21 PM ]

  • Out of a total of 42,000,000 tokens, about 3,000,000 have been sold so far (5 Dec 2015), beginning slightly more than a year ago.
  • The price is currently 17 cents per token.
  • The tokens are only temporary: once the Agoras system is up, they will be replaced with the actual Agoras coins.
  • “Premine” for long-term development and 3rd party partners is 3%.
  • All coins left unsold at Agoras' genesis will be destroyed. For example, if you hold 1% out of 42M coins and only 21M are sold, then in actual Agoras coins you will hold 2% of all coins.

Tau as a Generalized Blockchain

posted Oct 16, 2015, 9:33 PM by Ohad Asor   [ updated Oct 16, 2015, 9:49 PM ]

Bitcoin may abstractly be described as a decentralized machine that appends new data (transactions) to a public shared database (ledger) given certain proofs (signatures).

One immediately asks: what if we take the Blockchain algorithm and allow arbitrary data with multiple public shared databases and with arbitrary proofs? Moreover, can the protocol itself evolve with time?

Let's break those questions into pieces:

1. Allowing arbitrary data: Bitcoin's protocol may already encode arbitrary data, but not efficiently. It costs way beyond customary storage networks or services, and the cost is justified – all nodes have to store and process that data.
2. Multiple public shared databases: we'd like to have many different applications over one Blockchain that secures them all, yet we don't want users of one application to be affected by the existence of another application.
3. Arbitrary proofs: proofs are nothing but computer programs and vice versa. At Bitcoin's scope, the proof is the signature verification function's output, sometimes with the addition of scripts (examples of useful scripts at https://en.bitcoin.it/wiki/Contract), yet Bitcoin's scripts are quite weak. So we'd like to allow arbitrary code to “run on the blockchain”, code that is able to do practically anything a computer can do.
4. Self-definition: Since we speak of a shared database of programs, can the network's protocol itself, i.e. the client's code itself, be on-chain and auto-update itself from the chain?

1 is a special case of 3, since ultimately code can contain data (and that's basically Ethereum: Bitcoin+3). 2 can be achieved by allowing many programs to be put on the blockchain, while users pick which program to run. This can be fulfilled by a blockchain containing Java/C++/Python/precompiled programs, in other words: a decentralized appstore that auto-updates its own code (4). We call this construction Naive Tau.

Why is it naive? Because it apparently answers all the requirements, but de facto we didn't decentralize anything: the code itself still comes from centralized hands. Since we have to trust the programs' developers for safety or for other guarantees, we can no longer expect users to trust all programs on the network or a subnetwork, even if executing the program is required in order to validate a financial transaction and get into a public ledger. This is unlike, say, Bitcoin's blockchain, whose transactions and scripts are “innocent” enough for everyone to safely publish or validate.

A popular concern among projects dealing with trustless computing is the Halting problem. How can we know that the execution of a given piece of code will ever halt, rather than getting all machines on the network stuck? It turns out that with a Turing-complete language, it is impossible to build software that decides whether the execution of given source code will ever halt. Ethereum confronted this problem with so-called Gas. The code is indeed executed by all machines participating in the network once they validate the blocks, but the code's author pays per computational step the code has actually performed. If it performs too many calculations (“out of gas”), it is aborted.
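
A toy gas meter in the spirit of that approach (an illustration only, not Ethereum's actual VM or cost schedule): every step costs gas, and execution aborts once the budget is spent, so even a non-terminating program cannot stall the validators.

```python
class OutOfGas(Exception):
    pass

def run_with_gas(instructions, gas):
    """Execute a tiny stack machine, charging one unit of gas per step."""
    stack = []
    for op, arg in instructions:
        gas -= 1
        if gas < 0:
            raise OutOfGas("execution aborted")
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack, gas

print(run_with_gas([("push", 2), ("push", 3), ("add", None)], gas=10))  # ([5], 7)
```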

Obviously, we would like to know much more than just whether a piece of code halts or not, but also its exact running time given some input, without the need to actually execute it. Similarly, we would like to have custom user-defined guarantees about code without executing it, like that it does not perform insecure operations (by any well-defined measure of insecurity, such as connecting to the local network or accessing private files). We will also be very happy to know that the code actually meets its own requirements – namely, that it is correct. We then won't need a tedious, full-of-holes QA process, once we have a mathematical proof that the code meets our requirements.

But it turns out that life isn't so easy. The Halting problem over Turing machines became so famous not because its answer is “No”, but because its answer is both “Yes” and “No”! A contradiction. This situation is mildly called “undecidability”. And this can go arbitrarily far: any such contradiction can be elevated to any statement, supplying a proof that it is true, and also supplying a proof that it is false!

Because of such pathologies, we can never trust a proof over a Turing-complete language. Completeness here happens to be precisely the completeness Godel spoke about, when he proved that any language expressive enough to describe arithmetic must be either inconsistent or incomplete. Since Turing-complete languages are also “Godel-complete”, they are provably inconsistent.

Gödel therefore left us with two paths: completeness or consistency, where the latter is what gives decidability, the inability to prove an incorrect statement, while completeness is the ability to prove any correct theorem. Alas, if you can prove every correct theorem, you can also prove incorrect ones!
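
For reference, the dichotomy can be stated precisely; the following is a standard textbook formulation of Gödel's first incompleteness theorem, not anything specific to Tau:

    If $T$ is a consistent, effectively axiomatizable theory that interprets basic arithmetic,
    then there is a sentence $G_T$ such that $T \nvdash G_T$ and $T \nvdash \lnot G_T$;
    hence such a $T$ cannot be both consistent and complete.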

This is of course a very unpleasant situation for philosophers, logicians, mathematicians, and computer scientists. But a breakthrough came during the 70s and 80s from Per Martin-Löf: he presented what is now called Martin-Löf Type Theory (MLTT). MLTT is a programming language with such expressive power that it is considered a replacement for the old foundations of mathematics, which consist of first-order classical logic and ZFC set theory. This is as serious as it sounds: new foundations of mathematics. This is the mathematics of the 21st century.

Why are those foundations “better”?
1. They are in a language that fits computers too. We can now reformulate all of math (a process that has already begun, e.g. under the Homotopy Type Theory project) in a way that a computer can comprehend, so it can verify our proofs and discover new proofs and theorems.
2. The system is decidable. Although it takes Gödel's consistency path, it turns out that the completeness it gives up is not so painful: it is incomplete only with respect to infinite code. It cannot prove infinitely long mathematical claims or run code that assumes infinite computers (unlike Turing machines, which are infinite by definition).

MLTT covers all that matters: finite computers and finite math formulas (infinite sets are welcome, even the construction of the reals using Dedekind cuts; after all, this math fits into a finite book). As a result, given syntactically well-formed MLTT code together with a proof that "this code will halt" or "this code will never perform more than 1M operations", we can trust the proof. Decidability guarantees that only true statements can be proved. Moreover, a machine can search for a proof by itself. This is called Automated Theorem Proving, but for complex enough problems the runtime is astronomically high. Yet a "skeleton" of a complex proof can be supplied by a human, letting the machine verify it by trying to fill in the missing parts of the proof.
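
To give a taste of the "proofs are programs" idea that MLTT rests on, here is a tiny sketch in Lean, a proof assistant built on a type theory of this family (it illustrates the paradigm only; it is not Tau code):

    -- Propositions are types, and a proof is a program of that type:
    -- proving "p and q implies q and p" amounts to writing a function
    -- that takes the pair of evidences (hp, hq) and returns the swapped pair.
    theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
      fun ⟨hp, hq⟩ => ⟨hq, hp⟩
    -- The type checker verifies the proof mechanically: if this compiles,
    -- the statement is proved.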

Back to our blockchain construction: we can now put an MLTT-compatible language on our blockchain, and we can supply proofs for anything we can prove, in a way that a third party is able to trust. We no longer need to worry about halting, or about security measures, or about correctness: we can simply require proofs for them all.

On Tau we take the syntax of RDF-family languages (notably Notation3 and NQuads, which consist of "subject verb object" triples) and give it MLTT's semantics. Equipped with an automated theorem prover that JIT-compiles the proof search (or verification, as a special case), it offers a general-purpose programming language that is decidable, so code can carry proofs about itself (without even running into self-reference paradoxes!). The language incorporates a blockchain and a DHT, as mentioned several times elsewhere. Thanks to MLTT and automatic reasoning, non-naive Tau is an intelligent, aware (in the sense that it can reason over its own code), safe, and trustless decentralized network, and it remains so even when allowing general and arbitrary code to run.

Of course, an explanation of what Tau is should come now and tell more about its features and uses; we have discussed those in the past in various places (especially on this blog and on bitcointalk). In this post we have covered the Naive Tau construction, why it is doomed, and what the correct way to perform secure multiparty computation looks like.

Proof of Code Execution

posted Sep 6, 2015, 12:38 AM by Ohad Asor   [ updated Sep 6, 2015, 12:39 AM ]

Putting it all together: code is rules and queries, while the compiler (being essentially an autoprover) follows the rules and their consequences and outputs answers (while performing any side effects). This is a closed loop: the code is retrieved from the tau-chain upon a new block, using, obviously, some tau client. So we accept and execute block T by executing the code that appeared in block T-1, while block 0 is the genesis block.
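
As a toy illustration of that loop, here is a self-contained Python sketch with made-up block contents and rules (the real client compiles rules and proofs rather than exec'ing arbitrary strings):

    class Block:
        def __init__(self, code, payload):
            self.code = code          # source text that defines validate(block) -> bool
            self.payload = payload

    def accept_block(chain, new_block):
        """Accept block T by running the validation code that was published
        in block T-1 (block 0 being the genesis block)."""
        scope = {}
        exec(chain[-1].code, scope)                   # load the previous block's rules
        if not scope["validate"](new_block):
            raise ValueError("block rejected by the rules of the previous block")
        chain.append(new_block)

    # made-up genesis rule: accept any block with a non-empty payload
    genesis = Block("def validate(block):\n    return bool(block.payload)", "genesis")
    chain = [genesis]
    accept_block(chain, Block("def validate(block):\n    return True", "hello"))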

The code of the client should allow users to run other tau code according to their preferences. It can be thought of as the Appstore/Play application itself, while the network contains many other applications.

Tau does not only provide a decentralized appstore; it delivers additional and essentially more powerful technologies. We have mentioned several times the decidability of the code itself and its consequences. Now we would like to discuss an independent feature which brings more power to tau applications (or to the root chain itself).

The autoprover is equipped with a mechanism for hashing the proof-search tree. The prover basically performs a breadth-first search that begins from the query, takes valid steps (according to the rules in the code), and tries to reach facts along the search, thereby completing a proof (a chain of valid moves from facts to the query).
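
As a toy illustration of such a search, here is a minimal Python sketch over ground "subject verb object" triples with made-up example data (the real prover handles variables, unification, and typing, and guards against cyclic rules):

    from collections import deque

    # made-up knowledge base of ground triples
    facts = {("socrates", "a", "human")}
    # a rule (head, body): to prove `head`, prove every triple in `body`
    rules = [(("socrates", "a", "mortal"), [("socrates", "a", "human")])]

    def prove(query):
        """Breadth-first backward search: start from the query, expand goals
        with rules whose head matches, and succeed once every open goal is
        a known fact (toy version: no cycle detection)."""
        frontier = deque([[query]])                 # each entry is a list of open goals
        while frontier:
            goals = [g for g in frontier.popleft() if g not in facts]
            if not goals:                           # all goals grounded in facts
                return True
            goal, rest = goals[0], goals[1:]
            for head, body in rules:
                if head == goal:                    # a valid step according to the rules
                    frontier.append(body + rest)
        return False

    print(prove(("socrates", "a", "mortal")))       # True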

This search tree can be hashed like a Merkle tree, and the hash of the tree's root then serves as a proof that the whole computation was followed, since one can replay the computation and verify the hash. There are much more efficient methods than replaying the whole tree, as in lambda-auth, which presents the algorithm we intend to implement.
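
The hashing idea itself is simple; here is a toy Python sketch (purely illustrative, not lambda-auth's construction):

    import hashlib

    def hash_node(label, children):
        """Merkle-style hash of a proof-search node: hash the node's own label
        together with its children's hashes, so the root hash commits to the
        entire search tree and any replayed step can be checked against it."""
        child_hashes = b"".join(hash_node(lbl, ch) for lbl, ch in children)
        return hashlib.sha256(label.encode() + child_hashes).digest()

    # made-up tiny proof tree: (label, children)
    tree = ("goal: socrates a mortal",
            [("rule: human implies mortal",
              [("fact: socrates a human", [])])])
    print(hash_node(*tree).hex())   # publishing this root commits to the whole computation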

By that, one can prove and verify a local execution of code. Exogenous information (like IO) is treated with monads: IO is a monad and is not an Auth type. For more information about monads please refer to the functional programming literature, but note that this weakness of IO isn't always a problem: when a user A knows they are connecting to server B, they can trust B's proof of a flow that originates from the requests A initiated locally.

How this can elevate crowd computing, and the performance issues arising at that scope, will be a discussion for some other time. Let's mention some more specific applications for now. One example would be an exchange: the exchange can reveal its own code and constantly provide proofs that that very code was executed. Another example would be a casino, proving that the whole game indeed followed the rules, including, say, a certain hash computation for random number generation. Serving as a DHT, or even as a vanilla Bitcoin client, can also be proved.
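
To give the casino example some flavor, here is a toy commit-reveal randomness sketch in Python (purely illustrative of hash-based randomness; it is not a specification of how a tau casino would work):

    import hashlib, secrets

    server_seed = secrets.token_bytes(32)
    commitment = hashlib.sha256(server_seed).hexdigest()   # published before the game starts
    player_seed = secrets.token_bytes(32)                  # contributed by the player
    roll = int.from_bytes(hashlib.sha256(server_seed + player_seed).digest(), "big") % 6 + 1
    # after the game the server reveals server_seed; anyone can recheck the commitment
    # and recompute the roll, so the "random" outcome is auditable (modulo bias ignored here)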

A derived feature is the ability to verify participation in the network itself: one can prove that they ran a tau client, and designers of voting schemes may rely on this ability. It is also possible to automatically award coins given a proof of execution (over, say, Agoras, but not over non-tau coins like BTC), just as it is possible to attach any rules to any proof.

I guess that would be enough to ignite the imagination. The 21st century's advancements in decentralization, cryptography, provability, and verifiable computing open frontiers to a new kind of networking abilities and experience.


Decentralized Democracy and the Role of the First Users

posted Aug 31, 2015, 11:22 PM by Ohad Asor   [ updated Jul 12, 2016, 1:02 PM ]

Recently we mentioned several times the logical and technological properties of tau, and now maybe it is time to take a step back and look at the broader picture. We mentioned that tau's programming language is actually rules and logic. We meet rules and logic in many other aspects of life: a relatively small example is that the pure logical structure of solutions to existing problems can automatically be reused to cope with unsolved problems, even in an entirely unrelated field, like some chemical structure being isomorphic to some existing computer program.

This, however, is a specific use case and beside the main point. The main point is decentralized democracy.

Tau is able to formalize laws, query their consequences, and verify their consistency, and it is able to do so in a fully decentralized manner. Tau cannot directly interact with the laws of reality, but it defines its own rules. The network is self-definable and self-amendable. The software client is a compiler that downloads source code from the blockchain and executes that code. The code may (and occasionally should) vary from block to block. Essentially, the most important part of the client's code is the conditions for accepting the next block.

The rules, protocol, and specific behavior of the network are of course a main factor in its evolution. But specifying them is not our first task upon genesis. Our first task is to specify how rules can be changed, somewhat regardless of what those rules actually are.

Tau's logic and infrastructure allow a separation into Contexts. By a Context we mean a set of rules that applies only to participants who actively wish to subscribe to that context. The terminology comes from the RDF world: tau's language is formed by quadruples of subject-verb-object-context, so the fact or rule described by "subject-verb-object" applies only in the given context. Nevertheless, code reuse across contexts is possible, and is commonly denoted as context1:subject context2:predicate etc. Contexts can also be pegged to the root chain, as sidechains can be pegged to Bitcoin's chain.
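
To illustrate the scoping in data terms, here is a tiny Python sketch with made-up quads (this is not tau syntax, just the idea that a rule is visible only to subscribers of its context):

    # made-up quad store: (subject, verb, object, context)
    quads = [
        ("alice",     "memberOf", "club", "club-bylaws"),
        ("blocksize", "maxIs",    "2MB",  "root-chain"),
    ]

    def rules_for(subscribed_contexts):
        """Only rules whose context a participant subscribed to apply to them."""
        return [q for q in quads if q[3] in subscribed_contexts]

    print(rules_for({"club-bylaws"}))   # the root-chain rule is invisible here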

In every context, one may have any kind of custom rules. So the root chain should not intervene in what happens in custom contexts, but should define only the rules vital for its own existence. Tau can be bootstrapped with simple arbitrary rules, even a roster of first participants or centralized insertion of code by us. Whatever those rules will be, they are temporary, and our first task is to define how and when rules can be changed at all. Afterwards, we follow the rule-making process we formalized in the first step and define the rules themselves together. We, tau's developers, will be highly active in those post-genesis processes, but we will not hold any extra power. Each one of us will begin at genesis exactly as equal as anyone else.

We put on the table a technology for decentralized democracy that can scale from democracy over a small club, in the form of a Context on tau, up to, who knows, maybe even state laws. I see no technological barrier to that. Nevertheless, we also do not have any magical ultimate set of rules.

The task is hard, and the philosophical questions are overwhelming. But there is a limit to what technology can do. It can give us a platform for decentralized democracy, but it cannot set the rules for us. I cannot exaggerate the importance of the task of bootstrapping a decentralized democracy and forming its "constitution", therefore I won't, and will only mention that I guess many readers share the same feeling with me: that we can probably do at least slightly better than our current situation.

Obviously, the last thing we can do as developers of a democratic system is to set its rules ourselves.

Formalizing a real-life decentralized democracy is not the first users' duty, of course; it is a much larger-scale process. The first users will need to define how to change the rules of the root chain: not the technical details like the maximum block size, but what happens if we want to change the block size, namely, what the conditions are for a root-chain rule change. That is an interesting litmus test in itself. How the rules of changing the rules will evolve given our new logical and decentralization abilities, one can only barely imagine. Let's all hope that tau will actually evolve into a collaboration of morals, not only of economics, code, and knowledge.

September's Special Offer

posted Aug 26, 2015, 11:21 PM by Ohad Asor   [ updated Aug 26, 2015, 11:22 PM ]

Dear All,


Beginning today and until the end of September (GMT), the following special offer applies to large purchases:
Amount of 50K Agoras tokens: 0.0006 BTC per token.
Amount of 100K Agoras tokens: 0.0005 BTC per token.

This offer is given because people in the community asked for a price quoted in BTC, a quantity discount, and an opportunity to buy at past prices.

Purchasing can be done by leaving a message here on idni.org or by emailing me at ohad@idni.org
