Blog

Tau and the Crisis of Truth

posted Sep 10, 2016, 11:25 AM by Ohad Asor   [ updated Sep 10, 2016, 11:28 AM ]

We live in a world in which no one knows the law. Except in trivial cases, you cannot know what is legal and what is not; all you can do is act and then see what the judge's or policeman's opinion turns out to be, an opinion which of course differs from one person to another. Or you can consult a lawyer, who will tell you that there are no absolute answers and that at the end of the day it's a matter of probability which, unfortunately, no one knows how to calculate. You can do your best to live a productive life and follow the rules as you understand them or as lawyers guide you, but no one can guarantee that you will not be considered a criminal, or that legal actions will not be taken against you. Similarly, one can live a life of harming many people, and no legal system will stop it, even if the system is aware of the actions taken. Such a pessimistic situation is not new and is not local, and to my taste it was best described by Franz Kafka.


Playing with words and taking them in any desired direction, consciously or not, with good or bad intentions, has been with us since humanity learned to speak. The worst lies contain only truths, the worst crimes are licensed, and arguments can be given to justify almost anything. This "crisis of truth" is the foundation of the post-modern stream in philosophy, notably the deconstructionist approach, which demonstrates how texts may be interpreted in many (contradicting) ways. "There is no one truth" is the basis of post-modernism. But can we at least have some island of truth in which social contracts can be useful and make sense?


Not only can laws not be understood in an absolute way, they cannot even trivially be made by an absolute process. Laws have to change with time, and therefore we also need laws for changing the laws. But then we'll need laws-for-changing-the-laws-for-changing-the-laws, ad infinitum. We are therefore left without any logical basis for the process of rulemaking, on top of the crisis of deciding what is legal and what is illegal. This paradox is described in the online-available book The Paradox of Self-Amendment by Peter Suber. Suber's solution is summarized in the appendix of that book, which describes the game Nomic. He offers a two-tier system in which we have mutable and immutable rules, including rules for transmuting a rule (namely, making an immutable rule mutable or vice versa). This way one can avoid the infinite hierarchy of amendment schemes, but we still remain with the "crisis of truth" in which there is no one truth, especially when you desperately need it, despite the very purpose of laws being a certain social order rather than disinformation and chaos.


When we step back from the world of legalities and try to answer similar questions in the world of numbers and mathematical objects, the task does not become easy, but arguably easier. Logicians have contemplated these ideas for centuries and came up with the notion of decidability. A language is decidable if, for every statement expressible in it, one can decide whether the statement follows from rules expressed in that language as well. An astonishing example is Godel's theorem: if our language is able to express Peano arithmetic, namely, it contains the natural numbers 1,2,3,... equipped with addition and multiplication and their rules (which indeed determine them uniquely), then there will always be some true statements about Peano arithmetic that cannot be proved from the definitions; no finite (or even recursively enumerable) set of axioms can decide the correctness of every statement. An example of decidability is Presburger arithmetic: if we give up multiplication and define numbers with addition only (in a certain way), we are back to decidability, and every expressible statement can be decided in a finite process to be either true or false. The same applies to the 2000-year-old geometry of Euclid, which is indeed a decidable theory.
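
To make decidability concrete, here is a small illustration in the Lean proof assistant (not part of Tau, just an independent demonstration): Lean's omega tactic is a decision procedure for linear integer arithmetic, so Presburger-style statements are settled fully automatically, with no human-written proof steps.

    -- Linear integer arithmetic (no multiplication of variables) is decidable:
    -- the `omega` decision procedure settles such goals automatically.
    example (x y : Int) : x + y = y + x := by omega
    example (x : Int) : 2 * x ≠ 2 * x + 1 := by omega  -- 2 * x is just x + x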


We would therefore be interested in creating a social process in which we express laws in a decidable language only, and collaboratively form amendable social contracts without diving into paradoxes. This is what Tau-Chain is about.


This long introduction comes to stress the importance and central role of decidability, in life generally and in Tau specifically. If we would like Tau to make a real contribution to the world, it must use a decidable language. The initial Tau design aimed to meet these requirements, but recently I found out that I was wrong: the initially chosen language is simply undecidable. Martin-Lof Type Theory (MLTT) has only certain decidable aspects (specifically, equality); it is not the case that everything expressible in it is decidable. This led me to the conclusion that MLTT was a wrong choice and that another logic should be considered. Indeed, some statements of mine in blog posts and videos were mistaken, and MLTT does not have decidable typecheck in general.


MLTT is about so-called "dependent types", which I'll try to give a small taste of in this paragraph. We have "terms" and "types". One can see terms as statements and types as specifications that those statements are required to fulfill. How general can the specifications be? Dependent types give them maximum expressiveness in some sense: types can depend on terms, and terms can depend on types (and that's where the name "dependent types" comes from). We are able to express any specification that our term language is able to express, or more accurately, types can be stated by a program that calculates them, which may depend on other parts of the code. Programs may include specifications, and specifications may include programs. Typecheck is the process of making sure that terms indeed meet their types. But if types can be generic programs, we have to require them to be executed and return a result in finite time. For that we'll have to make sure that our programs halt, so we have reached the Halting problem. Only if you can guarantee that your types are calculated in a finite process does the very process of typecheck become decidable. But this task is undecidable by itself, as the Halting problem is undecidable. So if you want to reach decidability over dependent types, you'll first have to pass an undecidable barrier of halting. This is possible in many cases, as many programs can be written in a form that provably terminates (e.g. primitive recursion, which is the basis of total functional programming), but it is inherently impossible to come up with such a method for all programs, otherwise the Halting problem would become decidable.
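
As a small taste in code (a sketch in Haskell, whose type-level features only approximate dependent types; full MLTT lives in languages like Agda, Coq, or Idris), here is a type that depends on a term-level quantity, the vector's length, letting the typechecker reject taking the head of an empty vector:

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    -- A term-level quantity (length) appears in the type: the essence of
    -- "types can depend on terms".
    data Nat = Z | S Nat

    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    -- The type rules out the empty vector at compile time: the typechecker
    -- must decide that the length index has the form 'S n.
    vhead :: Vec ('S n) a -> a
    vhead (VCons x _) = x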


Dependently typed languages are widely used in the world of computer-aided proofs and formal verification, but they have a reputation of being hard to work with. The difficulty stems not only from their theory being significantly more complicated than that of other languages; the harder part is convincing the typechecker (or the termination and totality checkers that try to ensure the typecheck process is indeed finite) that your code meets its specifications. The typecheck process is therefore not automatic and requires a significant amount of human intervention just to convince it to accept true statements, a difficulty that stems exactly from the undecidability of the language. Consequently, we lose the ability to always and fully automatically determine whether a sentence is true, or equivalently, whether some action (term) is legal (well-typed) or illegal according to given rules.


The remedy for this situation is to pick a different logic, a "complete" one. A complete logic means that everything is decidable, not only certain aspects of it (like equality in MLTT). Godel showed that every logic that is complete and able to express Peano arithmetic will inevitably be inconsistent, so in order to enjoy both consistency and completeness we'll have to give up a part of arithmetic. Not only are there already-known complete and consistent logics we can use, but very recently such families of languages were discovered to be much more expressive than previously known. Specifically, certain classes of Monadic Second Order Logic (MSO or MSOL) have been known since the 1960s to be complete and consistent, but they did not have satisfactory expressiveness. It was not until 2006 that Luke Ong came up with a proof of decidability of MSOL over so-called "Higher Order Recursion Schemes", a simply typed (rather than dependently typed) higher-order functional language with recursion, which dramatically enhanced the expressiveness of known completely decidable languages. Since then advancements in both theory and practice have been made, notably by Naoki Kobayashi.


It was a long journey until I found out that my basic premises were wrong; indeed, I was not a logician when the journey began. Fortunately, we can now modify Tau's design into something even better than what we envisioned before, even had MLTT been decidable. To mention three advantages: the existence of a direct computational model, the ability to infer programs, and simplicity of use.


MSO theories have a direct computational model in terms of automata. MSO formulas can be translated into machines that read a statement and decide whether it meets the formula. Moreover, those automata can be minimized in a unique way, so we even get provably optimal code. MLTT, on the other hand, does not enjoy a direct translation to automata; instead of the logic-automata connection it admits the more general concept of programs-as-proofs, where the proof process is left more loosely specified in computational terms, and a direct translation to automata does not exist. It cannot possibly exist, since automata are an explicit finite decision procedure, while dependent types are not completely decidable.
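
For a concrete taste (a hand-made sketch in Haskell, not the output of any MSO toolchain), here is an automaton for the MSO-definable property "the word contains an even number of 'a's"; note that it is an explicit decision procedure that always terminates after one pass over the input:

    -- A two-state automaton deciding "even number of 'a's".
    data Parity = PEven | POdd deriving Eq

    step :: Parity -> Char -> Parity
    step p 'a' = if p == PEven then POdd else PEven
    step p _   = p

    -- Run the automaton over the word; accept iff we end in the start state.
    accepts :: String -> Bool
    accepts w = foldl step PEven w == PEven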


The second notable advantage is automatic programming, more commonly called "program synthesis" in the literature. If we give a specification of a program (or rule base), can the machine calculate the program (or rules) by itself? This task is undecidable in MLTT, but in a complete logic it is indeed possible. We can have correct-by-construction programs and contracts by merely stating what we expect them to do and commanding the machine to find a structure that satisfies our requirements.
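
A toy sketch of the idea (in Haskell; an exhaustive search over a hypothetical candidate set, not Tau's actual synthesis algorithm): since checking a candidate against the specification always terminates, the machine can search for a program meeting the spec.

    import Data.List (find)

    -- Candidate programs paired with names (a hypothetical, tiny search space).
    candidates :: [(String, Bool -> Bool -> Bool)]
    candidates =
      [ ("and", (&&)), ("or", (||)), ("xor", (/=))
      , ("nand", \x y -> not (x && y)) ]

    -- Return the first candidate agreeing with the specification on all inputs.
    synthesize :: (Bool -> Bool -> Bool) -> Maybe String
    synthesize spec = fst <$> find meets candidates
      where
        inputs = [ (x, y) | x <- [False, True], y <- [False, True] ]
        meets (_, f) = all (\(x, y) -> f x y == spec x y) inputs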


MSO logic is simpler to use for the reasons mentioned above: it is a simpler language, and it does not require human intervention to convince it that a true statement is indeed true. It also represents a more intuitive logic. MLTT is based on intuitionistic logic, in which refuting a statement's negation does not necessarily establish the statement itself; for example, one cannot always use a proof by contradiction. Intuitionistic logic allows proving nonexistence by contradiction: one can assume that something exists, derive a contradiction, and thereby conclude that it does not exist. But it does not support proving existence by contradicting nonexistence: assuming that something does not exist and deriving a contradiction does not, intuitionistically, prove that it exists. More formally, intuitionistic logic does not assume the law of excluded middle. This is of course highly unintuitive, and it allows the possibility of human mistakes, especially over complicated structures. MSO, on the other hand, is a classical logic, meaning it includes the law of excluded middle: every statement is either true or false, with no third possibility. The middle is completely excluded, while adding this law as a rule to MLTT would destroy its constructive computational content.
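
The asymmetry can be seen directly in a proof assistant (Lean here, used only as an illustration): proving a negation by contradiction is intuitionistically fine, while double-negation elimination requires invoking the classical axioms.

    -- Intuitionistically fine: refuting P proves its negation.
    example (P : Prop) (h : P → False) : ¬P := h

    -- Needs classical logic: from ¬¬P conclude P (double-negation elimination).
    example (P : Prop) (h : ¬¬P) : P := Classical.byContradiction h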


In conclusion, our journey turned out differently than we expected on the implementation side, but it will end up with a better product, one that gives us a chance to solve the crisis of truth and the paradox of self-amendment in a global, social, and electronic context. I personally intend (and promised) to keep working full time until I deliver Tau and Agoras, which will introduce many more features than described here. Recently I gave a condensed filmed description of the products I'm committed to deliver. Another article and another video are on their way to explain more details. If you are willing to help achieve these goals and you find yourself (or a friend) understanding Kobayashi's papers, I'll be very glad to cooperate.

Replacing the Intermediate Tokens

posted Apr 8, 2016, 10:28 PM by Ohad Asor

Recently we had an incident of stolen tokens due to impostors pretending to be Poloniex. We therefore invalidated the old token and issued a new one. The new token has already been distributed to all holders.
The old asset's number is 35 (Agoras Tokens) and the new asset's number is 58 (IDNI Agoras). Bittrex supports the replacement as well, and trading continues as usual.
Great thanks to Richie and Bill from Bittrex, and to Zathrasc, Craig and Adam from Omni. Their support was very friendly and professional!
It is very important to make sure not to buy or sell the old tokens. They are not valid anymore and worth nothing. It is better to return or burn them as instructed here.
Thanks for all the support all around! We have been contacted by Tau's and Agoras' supporters from all over in recent days. Very appreciated!

Project Roadmap

posted Mar 14, 2016, 6:27 AM by Ohad Asor   [ updated Apr 17, 2016, 2:20 AM ]

Beginning with some recap:

Tau is a language. It is a programming language, but not only that: it may express virtually any knowledge, rules, and processes. Examples are contracts, mathematical proofs, scientific facts and arguments, and legal documents. Its power comes from being unambiguous and consistent while still maintaining maximum (finite) expressibility. A more detailed explanation of what consistently decidable means, and why it is Tau's strongest property, is given in the following short talk:


This isn't new: this type of language (specifically, Martin-Lof Type Theory [MLTT]) has known implementations. Tau's innovation is in adding DHT and blockchain primitives to the language, thereby making the language aware of other peers and able to coordinate with them securely and meaningfully. Another innovation is adapting the RDF language family to MLTT: taking the subject-verb-object format of the famous RDF family (e.g. Notation3) and giving it MLTT logic makes Tau a more accessible, readable, and portable language than other MLTT implementations.

What we are developing now is a compiler for this language, meaning, something that takes a document written in the Tau language together with a query about that document, and resolves the query. But there is another important building block, which is shared knowledge and rules.

The language incorporates a shared level expressed by the built-in blockchain. Tau-Chain is designed to have a single root chain. This chain contains code, written in the Tau language, and this code is executed. As a metaphor, imagine a piece of code whose whole job is to download itself from github and execute, again and again, while the code may change and perform additional operations in the meantime. A Tau client downloads its own code from the root chain: it downloads a block, executes the code in it, and that code instructs how to download and run the next block. Thereby the client understands the blockchain according to the network rules that were in force at the time the block was created. This gives rise to the network being self-defining, as sketched below. Tau begins its genesis block clean of rules, and the rules of the network will then be determined collaboratively over the network by the first users (for more information see this blogpost).
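
A conceptual sketch of this loop (in Haskell, with hypothetical names; the real client is of course far richer): each block carries the acceptance rule for its successor, so the chain defines its own protocol as it grows.

    -- Each block publishes the rule under which the next block is accepted.
    data Block = Block
      { blockId    :: Int
      , acceptNext :: Block -> Bool  -- stands in for the code stored on-chain
      }

    -- Walk a candidate chain, validating every block by its predecessor's rule.
    validChain :: Block -> [Block] -> Bool
    validChain _    []         = True
    validChain prev (b : rest) = acceptNext prev b && validChain b rest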

To get a glimpse of the uses such a network gives rise to, see this blogpost. For a description of Tau as a generalized blockchain, see here. Tau also incorporates the lambda-auth design for verifiable computing, as described here.
A more detailed use case appears here and is summarized in the following short video:

Many of those use cases are to be built on Agoras. Specifically, we plan to implement over Agoras the code-for-money market, a computational resources market (as in Zennet's vision), and a decentralized smart search engine (aiming to obsolete Google), among other plans. Those require a currency, and Agoras will be a currency implemented using Tau's architecture, inheriting the self-definition and democracy abilities, protocol elasticity, and smart yet comprehensible contracts.

Tau in itself comes with no coin and begins completely and inherently equal. It is merely a language with a blockchain, but with far-reaching consequences. It is meant to be a platform for decentralized (and offline) applications; Agoras will be only one of them. Tau is designed to leave no incentive not to develop over it, e.g. due to its code-reuse abilities and the incentive to combine hashes into one root chain, to mention only two aspects.

The markets mentioned above, which Agoras is planned to implement, need abilities that are unique to Tau, and they give unrefusable answers to some of the most disturbing bottlenecks in the world economy: software development, computational resources, and data together with its processing. An example of an additional idea to be implemented on Agoras, though not fully developed yet: a derivatives market that also implements risk-free interest (without printing new money!).

Explaining Tau, Agoras, and their implications is not an easy task. We have been working on explanatory materials and have had significant success in spreading the word and the vision, yet this success in explaining the ideas is still very partial. The current materials still don't show the whole detailed picture and are not clear enough. A new website and additional videos are currently under construction. More people are getting involved and trying to explain the ideas in their own words. This process takes time and work, but it is important that enough people figure out what this is all about, as it is really about something big that has a fundamental impact on all aspects of life (and I understand if I haven't managed to convince you of this point yet). Its success strongly depends on how people understand it, because the users themselves are going to set the rules of the network. The larger the consensus and the more arguments are raised, the better the chance of converging toward better rules. We don't have to worry about too much or too complicated information: that's why we have Tau.

Having more people understand the project is also important from the pre-sale point of view. We need to distribute the Agoras coins somehow, so we pre-sell them, and by that we also hope to receive long-term funding for the projects. The Agoras project will require much more effort and human resources than the Tau client if we wish to fulfill our vision in the optimal way. The coin distribution has to be fair in the sense that people were given a reasonable opportunity to understand what this project is really about. That is why the sale is going on longer than is usual in the cryptocurrency world; nevertheless, the project is quite different from others, and has to be managed as a serious software startup and not according to common practices in the cryptocurrency world that might suit other types of projects.

Still, early buyers have to be incentivized. We had a weekly 2% raise in price and recently suspended it. But from May 1 the price per token we sell at will be 20 cents, going up 5% monthly (recall this has nothing to do with the prices people sell at on the exchange). Beginning today we offer the following wholesale discount: every additional 10K tokens gives an additional 5% discount. So if one purchases 30K tokens, the price for the first 10K is the regular price, then there is a 5% discount on the next 10K tokens, then a 10% discount on the subsequent 10K tokens, and so on, as computed below. All purchasers via email receive a tax invoice with their desired recipient name. A buyer who agrees not to receive the intermediate tokens but only the final coins gets an additional 15% discount. The total marginal discount will never exceed 50% (to be fair to earlier buyers).
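
For clarity, a small calculator reflecting my reading of this tiered schedule (illustrative Haskell, not an official pricing tool):

    -- Cost of an order of n tokens at base price `perToken`: each successive
    -- 10K-token tier gets a further 5% discount, capped at 50% in total.
    orderCost :: Double -> Int -> Double
    orderCost perToken n = sum [ tierCost t | t <- [0 .. nTiers - 1] ]
      where
        nTiers     = (n + 9999) `div` 10000
        tierSize t = min 10000 (n - t * 10000)
        discount t = min 0.50 (fromIntegral t * 0.05)
        tierCost t = fromIntegral (tierSize t) * perToken * (1 - discount t)

    -- orderCost 0.20 30000 = 10000*0.20 + 10000*0.19 + 10000*0.18 = 5700.0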

This current stage is extremely preliminary, as the implications of the technology aren't even fully explained yet, and even the language itself isn't ready yet. When things are seen as they really are, namely that if it all works as planned we offer an unrefusable replacement for markets worth literally dozens of trillions as of today, this will be well reflected in the market cap; all the more so when the final coins go into action together with the already-implemented futuristic markets.

As final words, your participation in thinking of the rules of changing the rules is very important. Please give it some thought, ask questions, and have discussions about how you'd like Tau to be. If we do it right, it'll meaningfully master all mankind's knowledge and way beyond.

Short Presale Snapshot Update

posted Dec 4, 2015, 2:09 PM by Ohad Asor   [ updated Apr 23, 2016, 2:21 PM ]

  • Of a total of 42,000,000 tokens, about 3,000,000 have been sold so far (5 Dec 2015), beginning slightly more than a year ago.
  • The price is currently 17 cents per token.
  • The tokens are only temporary: once the Agoras system is up, they will be replaced with the actual Agoras coins.
  • “Premine” for long-term development and 3rd party partners is 3%.
  • All coins unsold by Agoras' genesis will be destroyed. For example, if you hold 1% of the 42M coins and only 21M are sold, then you will hold 2% of all the actual Agoras coins.

Tau as a Generalized Blockchain

posted Oct 16, 2015, 9:33 PM by Ohad Asor   [ updated Oct 16, 2015, 9:49 PM ]

Bitcoin may abstractly be described as a decentralized machine that appends new data (transactions) to a public shared database (ledger) given certain proofs (signatures).

One immediately asks: what if we take the Blockchain algorithm and allow arbitrary data with multiple public shared databases and with arbitrary proofs? Moreover, can the protocol itself evolve with time?

Let's break those questions into pieces:

1. Allowing arbitrary data: Bitcoin's protocol may already encode arbitrary data, but not efficiently. It costs way more than customary storage networks or services, and the cost is justified: all nodes have to store and process that data.
2. Multiple public shared databases: we'd like to have many different applications over one Blockchain that secures them all, yet we don't want users of one application to be affected by the existence of another application.
3. Arbitrary proofs: proofs are nothing but computer programs and vice versa. In Bitcoin's scope, the proof is the signature verification function's output, sometimes with the addition of scripts (examples of useful scripts at https://en.bitcoin.it/wiki/Contract), yet Bitcoin's scripts are quite weak. So we'd like to allow arbitrary code to "run on the blockchain", code that can do practically anything a computer can do.
4. Self-definition: Since we speak of a shared database of programs, can the network's protocol itself, i.e. the client's code itself, be on-chain and auto-update itself from the chain?

1 is a special case of 3, since ultimately code can contain data (and that's basically Ethereum: Bitcoin + 3). 2 can be achieved by allowing many programs to be put on the blockchain, while users pick which program to run. This can be fulfilled by a blockchain containing Java/C++/Python/precompiled programs, in other words: a decentralized appstore that auto-updates its own code (4). We call this construction Naive Tau.

Why is it naive? Because it apparently answers all the requirements, but de facto we didn't decentralize anything: the code itself still comes from centralized hands. Since we have to trust each program's developers for its safety or for other guarantees, we cannot expect users to trust all programs on the network or a subnetwork, even if executing a program is required in order to validate a financial transaction and get it into a public ledger. This is unlike, say, Bitcoin's blockchain, where transactions and scripts are "innocent" enough for everyone to safely publish or validate.

A popular concern among projects dealing with trustless computing is the Halting problem. How can we know whether the execution of a given code will ever halt, rather than getting all machines on the network stuck? It turns out that in a Turing complete language, it is impossible to build software that decides, given source code, whether its execution will ever halt. Ethereum confronted this problem with so-called Gas. The code is indeed executed by all machines participating in the network when they validate the blocks, but the code's author pays per computational step the code actually performs. If it performs too many calculations ("out of gas"), it is aborted, as the following toy evaluator illustrates.
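
A toy gas-metered evaluator conveys the idea (a minimal Haskell sketch, nothing like the real EVM): every step burns gas, so even a program that never terminates is aborted after finitely many steps instead of stalling the network.

    -- 'Loop' is a program that runs forever; gas metering still bounds it.
    data Expr = Lit Int | Add Expr Expr | Loop

    eval :: Int -> Expr -> Maybe Int          -- Nothing means "out of gas"
    eval gas _ | gas <= 0 = Nothing
    eval _   (Lit n)      = Just n
    eval gas (Add a b)    = (+) <$> eval (gas - 1) a <*> eval (gas - 1) b
    eval gas Loop         = eval (gas - 1) Loop  -- burns gas until aborted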

Obviously, we would like to know much more than merely whether a code halts: also its exact running time given some input, without the need to actually execute it. Similarly, we would like to have custom user-defined guarantees about code without executing it, e.g. that it does not perform insecure operations (by any well-defined measure of insecurity, such as connecting to the local network or accessing private files). We would also be very happy to know that the code actually meets its own requirements, namely, that it is correct. We then won't need a tedious, full-of-holes QA process, once we have a mathematical proof that the code meets our requirements.

But it turns out that life isn't so easy. The Halting problem over Turing machines became so famous not because its answer is "No", but because its answer is both "Yes" and "No": a contradiction. This situation is mildly called "undecidability". And it can go arbitrarily far: any such contradiction can be elevated to any statement, supplying a proof that it is true, and also supplying a proof that it is false!

Because of such pathologies, we can never trust a proof over a Turing complete language. Completeness here happens to be precisely the completeness Godel spoke about when he proved that any language expressive enough to describe arithmetic must be either inconsistent or incomplete. Since Turing complete languages are also "Godel-complete", they are provably inconsistent.

Godel therefore left us with two paths: completeness or consistency. Consistency is the inability to prove an incorrect statement, and it is what gives us decidability; completeness is the ability to prove any correct theorem. Alas, in such a language, if you can prove every correct theorem, then you can also prove incorrect ones!

This is of course a very unpleasant situation for philosophers, logicians, mathematicians, and computer scientists. But a breakthrough came during the 70s and 80s from Per Martin-Lof: he presented the (now so-called) Martin-Lof Type Theory (MLTT). MLTT is a programming language with such expressive power that it is considered a candidate to replace the old foundations of mathematics, which consist of first-order classical logic and ZFC set theory. This is as serious as it sounds: new foundations of mathematics. This is the mathematics of the 21st century.

Why are those foundations "better"?
1. They are in a language that fits computers too. We can now reformulate all math (a process that has already begun, e.g. under the project of Homotopy Type Theory) in a way that a computer can comprehend, verify our proofs, and discover new proofs and theorems.
2. The system is decidable. Although it picks Godel's consistency path, it turns out that the completeness it gives up is not so painful: it is only incomplete with respect to infinite codes. It cannot prove infinitely long mathematical claims or run code that assumes infinite computers (unlike Turing machines, which are infinite by definition).

MLTT covers all that matters: finite computers and finite math formulas (infinite sets are welcome, even the construction of the reals using Dedekind cuts; after all, this math fits into a finite book). As a result, given syntactically well-formed code in MLTT together with a proof that "this code will halt" or "this code will never perform more than 1M operations", we can trust the proof. Decidability guarantees that only true statements can be proved. Moreover, a machine can seek a proof by itself. This is called Automated Theorem Proving, but for complex enough problems its runtime is astronomically high. Yet a "skeleton" of a complex proof can be supplied by a human, letting the machine verify it by trying to fill in the missing parts of the proof.
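
A flavor of how halting can be evident from the code's shape (a Haskell sketch; total languages such as Agda accept definitions like this automatically): structural recursion peels one constructor off a finite argument at every call, so termination is guaranteed.

    -- Structural (primitive) recursion: each recursive call is on a
    -- strictly smaller argument, so `plus` provably halts on all inputs.
    data Nat = Z | S Nat

    plus :: Nat -> Nat -> Nat
    plus Z     m = m
    plus (S n) m = S (plus n m)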

Back to our blockchain construction: we can now put an MLTT-compatible language on our blockchain, and we can supply proofs for anything we can prove, in a way that a 3rd party is able to trust. We need not worry anymore about halting, or about security measures, or about correctness; we can simply require proofs for them all.

In Tau we take the RDF family's syntax (notably Notation3 and NQuads, which consist of triples of "subject verb object") and give it MLTT's semantics. Equipped with an automated theorem prover that JIT-compiles the proof search (or verification, as a special case), it offers a general-purpose programming language that is decidable, so code can provide proofs about itself (without even getting into self-reference paradoxes!). The language incorporates a blockchain and a DHT, as mentioned several times elsewhere. Thanks to MLTT and automatic reasoning, non-naive Tau is an intelligent, aware (in the sense that it can reason over its own code), safe, and trustless decentralized network, and it remains so even when allowing general and arbitrary code to run.

Of course, an explanation of what Tau is should come now and tell more about its features and uses, and we have discussed those in the past in various places (especially on this blog and on bitcointalk). In this post we covered the Naive Tau construction, why it is doomed, and what the correct way is to perform secure multiparty computation.

Proof of Code Execution

posted Sep 6, 2015, 12:38 AM by Ohad Asor   [ updated Sep 6, 2015, 12:39 AM ]

Putting it all together: code is rules and queries, while the compiler (being essentially an autoprover) follows the rules and their consequences and outputs answers (while performing any side effects). This is a closed loop: the code is retrieved from tau-chain upon a new block, using, obviously, some tau client. So we accept and execute block T by executing the code that appeared in block T-1, while block 0 is the genesis block.

The code of the client should allow users to run other tau code according to their preferences. It can be thought of as the Appstore/Play application itself, while the network contains many other applications.

Tau does not only provide a decentralized appstore, but delivers additional and essentially more powerful technologies. We have mentioned several times the decidability of the code itself and its consequences. Now we'd like to discuss an independent feature that brings more power to tau applications (or to the root chain itself).

The autoprover is equipped with a mechanism for hashing the proof-search tree. The prover basically performs a breadth-first search that begins from the query, takes valid steps (according to the rules in the code), and tries to find facts along the search, thereby completing a proof (a chain of valid moves from facts to the query).

This tree can be hashed like a Merkle tree. Eventually, the hash of the tree's root is a proof that the whole computation was followed, since one can replay the computation and verify the hash. There are much more efficient methods than replaying the whole tree, as in lambda-auth, which presents the algorithm we intend to implement.
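
A minimal sketch of the hashing idea (Haskell with a toy hash function; the efficient construction is lambda-auth's, not this): hashing the proof-search tree bottom-up yields a root hash that commits to the entire computation, which a verifier can recompute and compare.

    -- A proof-search tree: leaves are facts, nodes are derivation steps.
    data ProofTree = Leaf String | Node [ProofTree]

    -- Toy stand-in for a cryptographic hash (illustration only).
    toyHash :: String -> Int
    toyHash = foldl (\h c -> h * 31 + fromEnum c) 7

    -- The root hash commits to every step of the search.
    rootHash :: ProofTree -> Int
    rootHash (Leaf fact)  = toyHash fact
    rootHash (Node steps) = toyHash (concatMap (show . rootHash) steps)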

By that, one can prove and verify a local execution of code. Exogenous information (like IO) is treated via monads: IO is a monad and is not an Auth type. For more information about monads please refer to the functional programming literature, but note that this weakness of IO isn't always a problem: when user A knows they're connecting to server B, they can trust B's proof of a flow that originates from the requests A initiated locally.

How this can elevate crowd computing, and the performance issues arising at this scope, will be a discussion for some other time. Let's mention some more specific applications for now. One example would be an exchange: the exchange can reveal its own code and constantly provide proofs that that very code was executed. Another example would be a casino, proving that the whole game indeed followed the rules, including, say, a certain hash computation for random number generation. Serving as a DHT node or even a vanilla Bitcoin client can also be proved.

A derived feature is the ability to verify participation in the network itself: one can prove that they ran a tau client, and designers of voting schemes may rely on this ability. It is also possible to automatically award coins given a proof of execution (over, say, Agoras, but not over non-tau coins like BTC), just as it is possible to attach any rules to any proof.

I guess that would be enough to ignite the imagination. The 21st century's advancements in decentralization, cryptography, provability, and verifiable computing open frontiers to a new kind of networking abilities and experience.


Decentralized Democracy and the Role of the First Users

posted Aug 31, 2015, 11:22 PM by Ohad Asor   [ updated Jul 12, 2016, 1:02 PM ]

Recently we mentioned several times the logical and technological properties of tau, and now maybe it is time to take a step back and look at the broader picture. We mentioned that tau's programming language is actually rules and logic. We meet rules and logic in many other aspects of life: a relatively small example is that the pure logical structure of solutions to existing problems can be reused automatically to cope with unsolved problems, even in an entirely unrelated field, like some chemical structure being isomorphic to some existing computer program.

This is still a specific use case and is beside the main point, which is decentralized democracy.

Tau is able to formalize laws, query their consequences, and verify their consistency. It is able to do so in a fully decentralized mode. Tau cannot directly interact with the laws of reality, but it defines its own rules. The network is self-definable and self-amendable. The software client is a compiler that downloads source code from the blockchain and executes that code. The code may (and should occasionally) vary from block to block. Essentially, the most important part of the client's code is the conditions for accepting the next block.

The rules, protocol, and specific behavior of the network are of course a main factor in the evolution of the network. But specifying them is not our first task upon genesis. Our first task is to specify how rules can be changed, somehow disregarding what those rules actually are.

Tau's logic and infrastructure allow a separation of Contexts. By Context we mean a set of rules that applies only to participants who actively wish to subscribe to that context. The terminology comes from the RDF world: tau's language is formed by quadruples of subject-verb-object-context, so the fact or rule described in "subject-verb-object" applies only in the given context. Nevertheless, code reuse across contexts is possible, and is commonly denoted as context1:subject context2:predicate etc. Contexts can also be pegged to the root chain just as sidechains can be pegged to Bitcoin's chain.
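
Shown schematically, a quad might look as follows (the names are made up; the fourth field is the context that scopes the statement):

    # The first fact holds only within the club's context; the second states,
    # at the root-chain context, how that club context may be amended.
    <#alice> <#votesFor> <#proposal7> <#clubRules> .
    <#clubRules> <#amendableBy> <#majorityVote> <#rootChain> .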

In every context, one may have any kind of custom rules. So the root chain should not intervene in what happens in custom contexts, but define only the rules vital for its own existence. Tau can be bootstrapped with simple arbitrary rules, even a roster of first participants, or centralized insertion of code by us. Whatever those rules are, they are temporary, and our first task is to define how and when rules can be changed at all. Afterwards, we follow the rule-making process we formalized in the first step, and define together the rules themselves. We, tau's developers, will be highly active in those post-genesis processes, but we will not hold any extra power. Each one of us will begin at genesis exactly as equal as anyone else.

We put a technology for decentralized democracy on the table, one that can scale from democracy over a small club, in the form of a tau Context, into, who knows, maybe even state laws. I see no technological barrier to that. Nevertheless, we also do not have any magical ultimate set of rules.

The task is hard, and the philosophical questions are overwhelming. But there is a limit to what technology can do. It can give us a platform for decentralized democracy, but it cannot set the rules for us. I cannot exaggerate the importance of the task of bootstrapping a decentralized democracy and forming its "constitution", therefore I won't, and will only mention my guess that many readers share the same feeling with me - that we can probably do at least slightly better than our current situation.

Obviously, the last thing we can do as developers of a democratic system is to set its rules ourselves.

Formalizing a real-life decentralized democracy is not the first users' duty, of course; it is a much larger-scale process. The first users will need to define how to change the rules of the root chain - not even dealing with technical details like the maximum block size, but with what happens if we want to change the block size, namely, what the conditions are for a root-chain rule change. It is an interesting litmus test in itself. One can only barely imagine how the rules of changing the rules will evolve given our new logical and decentralization abilities. Let's all hope that tau will actually evolve into a collaboration of morals, not only of economics, code, and knowledge.

September's Special Offer

posted Aug 26, 2015, 11:21 PM by Ohad Asor   [ updated Aug 26, 2015, 11:22 PM ]

Dear All,


Beginning today and until the end of September (GMT), the following special offer applies to large buys:
Amount of 50K Agoras tokens: 0.0006 BTC per token.
Amount of 100K Agoras tokens: 0.0005 BTC per token.

This offer is given because people in the community wanted a price quoted in BTC, a quantity discount, and an opportunity to buy at past prices.

Purchasing can be done by leaving a message here on idni.org or by emailing me at ohad@idni.org.

Exchange Began

posted Aug 12, 2015, 8:46 AM by Ohad Asor

From this moment, Agoras intermediate tokens are being traded at https://masterxchange.com/market.php?currency=msctau

Code and Money: Which Comes First?

posted Jul 21, 2015, 7:18 PM by Ohad Asor   [ updated Jul 21, 2015, 8:02 PM ]

Let Lisa be a programmer and Bart be a businessman.

Bart would like Lisa to develop software for his business. Since they don't already know each other, they naturally ask themselves how they can trust the other party. Bart would like to pay when the program is ready and working, while Lisa would like to get paid before she even begins her work, so it won't turn out she worked for free. It would be fair to agree somewhere in the middle, like defining milestones, but this does not completely solve the trust issue, only roughly reduces it.

How can decentralized applications solve this trust issue? We have the Blockchain algorithm that removes the need for trust in many cases. On Bitcoin, one can supply a proof of coin ownership in the form of a cryptographic signature, which allows them to spend those coins. Can this approach help Bart and Lisa?

Blockchain could help only when given one vital property, which is less a property of the blockchain and more a property of the code itself. This goes back to a more basic question: when Bart gets his hands on the code, how can he know it works as he expected?

Software verification/testing/quality-assurance is a major part of a software's lifecycle. It is not foolproof: we see buggy software everywhere, whether those bugs are misalignments with the original requirements of what the software should do, or security holes, or unexpected behavior. Making quality software is a long, expensive, and speculative process with painful convergence rates.

Computer scientists have known for a long time that this situation is due to the logical nature of our programming languages themselves. Things would be wonderful if we could only express the desired software's requirements formally, and the computer would be able to tell whether a given code meets those requirements. In fact, this turns out to be possible if we pick a subset of programming languages called total functional programming languages (TFPL).

Given code in a TFPL, the provable assertions about it are exactly the correct assertions about it. Therefore, it is always possible to supply a proof, together with the code, that it meets certain requirements. That is not the case for Turing complete languages, as described briefly in my previous post.

So, Lisa can write the program in a TFPL and Bart can verify that the program meets his requirements. He will still have to hire a programmer to write those formal requirements (at least in many cases), but obviously expressing the requirements is much less hard than meeting them.

We have therefore solved the verification problem, but what about the payment-trust problem we began with?

Solving this code-money chicken-and-egg problem is one feature that Tau-Chain will provide soon. Tau is a TFPL; it gives TFPL meaning to existing languages (the RDF family, which is extremely human-readable). The Tau client is also a theorem prover and verifier, so it can prove assertions about a given Tau code. It also functions as a full blockchain node, but proofs can simply be any proofs in the mathematical sense, up to tau's type system (which covers all finite Turing machines, hence virtually all possible kinds of computers).

So, Bart could lock some coins in a multisig address with Lisa (Bitcoin's equivalent of a shared bank account requiring both signatures), and set a rule on Tau that the coins will be released to Lisa only if she supplies a proof that her code meets the desired requirements. Such a proof will be verified by the whole network, so to speak (or by "miners"; the specific form is again up to the users' rules and how that specific coin over tau is built), and Lisa will trustlessly be rewarded the coins she deserves.
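
A conceptual sketch of such a rule (Haskell with hypothetical types; on Tau the specification would be a formal type and the check a proof verification, not a boolean test):

    type Code = String
    type Spec = Code -> Bool   -- stand-in for a formal, machine-checkable spec

    data Escrow = Escrow { lockedCoins :: Int, requirement :: Spec }

    -- Coins are released only if the delivered code passes the agreed check.
    release :: Escrow -> Code -> Maybe Int
    release e code
      | requirement e code = Just (lockedCoins e)  -- verified: pay Lisa
      | otherwise          = Nothing               -- coins stay locked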

It is also worth mentioning that Bart can be sure that Lisa didn't hide any surprises in the code: he can automatically verify that the code does not do things it should not do, like accessing private data. We are therefore entering a new era in the software world, where software can really be trusted, and humans can trust each other on a much wider variety of transactions.

Sounds hard?
We have already finished developing a very powerful prover and are running a lot of tests over it. It hasn't broken yet, but we keep testing. It needs a bit more development to go beyond an RDF reasoner into a full Tau node: we have to finish the type-system constraints, finish the DHT and blockchain builtins, write the genesis block, and voila!
