mining theory - Is there a limit on the size of the block

[meta] [long and rambling] [philosophy] Flirting with Nihilism ; Is Conversation Pointless?

I used to have an ideal of conversation as being a Socratic dialogue: people ask probing questions of each other about important philosophical issues and reveal the mistakes in each other’s logic, ultimately reaching life-changing conclusions through the argument.
Over the years, I have often tried to engage in such debate, and while there have been some interesting discussions, more typically it has been a pointless endeavor which simply leaves everyone involved more thoroughly convinced of their own rightness and doubting the sanity of the others. To be sure, my abrasive style and personality have played a role, but this has itself in part come from the mistaken views of the world and human nature which I held.
By the standard I used to hold, conversation is largely pointless.
Now, of course, most normal people would characterize argument and debate as being very different than conversation. This is because their objectives in conversation are not trying to change someone’s mind but to exchange more basic information or simply enjoy each other’s company. To be sure, there is clear value in conversation of this sort, but it is not what I’m talking about here.
What I’m suggesting is that the attempt to change someone’s mind by reason is generally impossible and not worth trying.
We all have certain foundational beliefs which no one is likely to be able to change. What exactly these are varies widely by person: for some people it is religious, for others political, or philosophical principles (whether or not a person explicitly recognizes them as philosophy).
But what is common about them is that no matter how sound an argument may appear to some “neutral”, disinterested third party, if someone tries to reason you out of these core positions, your mind will always be able to find a rationalization in response. And again regardless of how strong or weak this response is judged by our hypothetical third party, it will be sufficient to convince you that your position is right and likely cause you to be somewhat annoyed at the person making an “obviously” bad faith argument against something so basic.
This is not, as it might appear to be, an argument that people are irrational nor that I am any different in this. It is simply a fundamental side-effect of the process of reasoning. One must have a basis upon which to operate; there is no green field to work from.
Even Descartes’ brilliant argument showing how one can prove to oneself that one exists, which is as close to pure reason without any external base as it is possible to get, itself relies upon certain foundations: the desirability of trying to prove such “trivial” matters; the value of pure reason as a source for truth. And it rapidly moves on by “the clear light of reason” to further argument which rests upon, essentially, “it seems obvious to me that…”
This is, essentially, a generalization of Gödel's incompleteness theorems (not that I fully understand them and certainly don’t know how to prove it, but simply that I recognize this point is not a fundamentally original thought) and similar: we must always start with something which we simply believe and cannot prove.
This may seem trivial, but I do not think it is. What it suggests to me is that the space for productive discussion in disagreement is far smaller than is often believed by those who, as I used to, believe that everything that matters could in theory be resolved by honest, wise intellectual exploration.
Therefore, when we find ourselves in disagreement with others on a fundamental level, there may well be no resolution to be had other than the phrase I have so hated for so long: “agree to disagree”. Others are able to accept this with grace and move on. I have always been bothered by reaching such impasses. If we are in disagreement about an important issue which is the basis for much of our reasoning, then how could we come to any agreement on any of the things which follow as a result of that basic view? And how could we have meaningful discussion if we simply must avoid all consequences of a major section of our world view?
And so, to me, the segmenting of discourse into “echo chambers” of various sorts makes perfect sense: people must have a shared foundation in order to be able to meaningfully communicate productively.
And yet, that is not enough to make forward progress either, because while less irritating perhaps, discussion simply among those who agree is pointless too.
So, then, if we cannot in general convince anyone else of anything that matters, nor do we wish to simply preach to the choir, what discourse could be useful?
  • Disagreement on facts rather than worldview
Among people who agree about, for instance, valid sources and other basic foundations, but who disagree about a particular fact, then linking to a source can actually be productive. This is as opposed to those who do not have such an agreement beforehand, where linking to a source can just be a self-congratulatory act which is unproductive.
  • Challenging one’s own worldview (or refining arguments ; this write-up’s category)
If you truly wish to have your own beliefs examined for whether or not they are incorrect, or their limitations; or if you simply want to try to refine and make more coherent something which you’ve vaguely thought, then it is useful to express your views not for the purpose of changing another’s opinion, but for the purpose of understanding your own.
This is what this piece is for me: a response to my own thoughts, particularly set in motion about a month ago when the title occurred to me, about what is and isn’t useful ground for disagreement and debate.
  • Responding to an invitation to challenge another’s world view (limited circumstances)
This is something which is often presented but rarely genuine: whether intentionally or unintentionally many people who present themselves as looking to have their own views challenged are actually going to respond with rationalizations to anything presented. While that means that this apparent opening is often unproductive, there is a kernel of sincere and thoughtful people who are open to examining their beliefs and are looking for something to build on to do so. I actually consider this less important than it may often seem, as people’s desire for self-justification generally means that there is abundant material available for one who looks to understand any major viewpoint, so there is relatively little need to try to seek to be the one to minister to such a unicorn.
But, since so much conversation is pointless, recognizing it and moving on rather than getting bogged down in it is more productive.
There is a large realm of productive material for us to gain knowledge on a more “superficial” level - much does have a shared basis, and we don’t see these issues visibly in, for instance, learning a language or learning history (there really are philosophical issues underlying both, but generally speaking a person can find much productive ground). Since our time is limited, it makes sense to pursue these unambiguously productive areas instead of beating our heads against a wall trying to convince others about things we consider important but which are foundational for them and which they have not sought to question.
Examples of pointless discussions from unbridgeable views:
  • Bitcoin vs gold
While this might seem unintuitive as I have written about how I see Bitcoin and gold as having complementary strengths rather than being in opposition, what I mean when I express it as a pointless category is that trying to tell a Bitcoin fanatic about the uses of gold is a waste of time, just as trying to explain to Peter Schiff that it’s possible for people to create sustainable and meaningful value out of nothingness arbitrarily is a waste of time.
The “Bitcoin-only” and “gold-only” camps will never change their mind as a result of someone trying to come in and ‘save’ or ‘educate’ them.
  • Tsla vs tslaq
Quite similarly: those who believe Tesla will save the world have essentially no common ground to discuss productively with those who believe Tesla will go bankrupt and vice versa. There are simply fundamental differences about approaches to reasoning which will not be surmounted by the surface arguments like “but he lands rockets!” or “look at the historical rates of cash burn!”
Less obviously: the actual outcome will not resolve this either. Even if Tesla becomes wildly successful, the critics will still believe dishonest and unjustifiable actions to have been taken. Even if Tesla fails, the supporters will not believe Musk did anything wrong.
World views are generally proof against what might seem from the outside to be events which should break them.
  • Religious preference
Given the general argument I’ve made, this should be fairly obvious. However, I spent a fair amount of time over the years arguing some of this from both sides (when I was much younger I was an evangelical Christian ; by college age I was an “evangelical atheist” ; in hindsight, I don’t consider either a good way to spend time).
  • Political preference
This too follows clearly from the positions I’ve taken. No socialist is going to convince a capitalist that profit is evil. No capitalist will convince a socialist that profit is good. (Choose whatever dichotomy and examples you find appropriate.)
Examples of productive discussions sharing a common foundation:
These examples have been generally covered previously or are self-explanatory.
  • Specific facts between mutually respecting people
  • World view in rare cases where seeker wants introduction to different view
  • Skills (languages) or Bodies of knowledge (history)
  • Determining positions
This last is a potentially interesting one: even when changing a person’s mind may be impossible and unproductive, unbridgeable divides as far as resolving the disagreement can still be productive in terms of understanding what exactly the relative positions are. The socialist may not convince the capitalist of anything, but if they simply want to understand what the capitalist believes, that could be productive (and vice versa).
Of course, if it devolves into trying to convince the other person they’re wrong rather than trying to understand their position, it goes back to being pointless.
My choices:
As a practical matter, what am I doing with these views?
  • Unfollowing and blocking easily on Twitter
I’ve been using Twitter for the first time lately. I find it interesting and useful in many ways. However, there is a potential to waste a lot of time there trying to argue things where the other person will never be convinced.
Other people wiser than myself may well choose to hear all voices, but I wish to maintain an environment where I find what I’m reading useful and where the people I hear from are ones where I think I could engage productively or at least enjoyably.
So I am unfollowing and blocking fairly quickly when I encounter views which I consider to indicate unbridgeable divides of opinion. Unfollow is my first step. If I wasn’t following the person previously and find something irritating they say, I will block rather than engage.
In rare instances I will engage where I think there may be a productive discussion but I won’t go into extended arguments.
  • Maintain open discussion here
However, I have long held a different view for this subreddit: I specifically want to encourage and allow dissenting voices. I wish to avoid building an echo chamber here for a multitude of reasons, but in particular because I think it’s extremely important to try to avoid having a blind spot about critical issues with the system or about drawbacks to any proposed changes.
In the framework of what I’ve discussed here, this could be seen as a combination of wanting to maintain space for understanding other views (even if we don’t change each other’s minds) as well as trying to deliberately hold open at least in some cases a willingness to change my mind. For instance, while I am strongly opposed to ever removing the 337 million cap, I want to allow such discussion and encourage anyone with those views to be free to express them. In other areas, like the block size cap, while I had held a stronger position toward larger blocks on principle, and still tend in that direction, some of the discussion previously here and more broadly on the other side has at least led me to more of a wait-and-see approach.
Some various concluding thoughts:
  • Difference between random vs long-term relationships and discussion.
Twitter is an example where there is generally little to no strength of relationship as a starting point. This makes any challenging discussion more difficult and less likely to be productive and is part of my hair-trigger in unfollowing and blocking.
However, in general I don’t tend to believe that in most cases a long-term relationship changes what ground is productive so much as it changes the calculation about whether one unproductive area justifies severing a connection. Necessarily those we have chosen to be part of our lives will not agree with us in everything, but there are likely to be areas not worth engaging in.
For those we have no particular prior need or desire to have in our lives, I see no reason to accept any noise.
  • Low value in one-way communication upon determining different foundation (watching videos or twitter feed, etc. upon significant disagreement)
I tend to believe that without some shared common basis, there’s not much point in taking in what’s presented by someone. This can be seen as creating one’s own echo chamber, but the more positive way I might put it is that you have to have a reason to respect what a person is saying for it to be worthwhile to listen to them. Someone whose arguments you are likely to reject almost entirely is not worth listening to in the first place, as the goal should be to get new information or listen to someone who might change your views rather than to simply suffer through for the sake of having heard what can be known in advance to be something you will reject.
  • Tie-in to views on crypto: neither dedicated pumpers nor dedicated critics are useful. Nor are we looking to bring in people who aren’t interested. So what’s left? Those who are seekers are welcome and we should look to assist with helping people get exposure and broader knowledge of strengths and weaknesses.
Weighing the various perspectives on the overall issue I’ve put together here led me to this view which is a refinement of what I’ve thought for a while. We do not want mindless fanatics. And we do not want to try to convince people who aren’t interested or those who have already decided the whole concept is worthless.
However, there is still a productive sliver left there: people who are interested in cryptocurrency but have not already set their opinions in stone. This fits in with the early mindset I had that we would want to have offering technical support as a core competency (and which is something I’ve been proud of this community doing well). This also fits into my original coin-a-day series, long defunct, trying to offer a view of the strengths and weaknesses of various cryptocurrencies. Perhaps this may be worth reviving in some form in the future, but the general concept at least fits my views here: offering a perspective to those who are looking for it, but not trying to convince anyone who has already decided and just wants to argue.
As I have been discussing here, at this point my choice is generally: I make no attempt to change someone’s mind. Occasionally I’ll make one response I expect to be futile. Twitter is not suited towards meaningful in-depth communication and there is no established relationship to anchor a difficult discussion. In closer relationships, topics simply become no-go areas.
I’ve probably swung too far from confrontation towards avoidance at this point, but I’ve got too much scar tissue from years of fruitless argument.
Some examples of aspects I collected while putting this together:
https://twitter.com/Venado_0320/status/1125060447646957570 (twitter thread which shows a specific example of TSLA bull vs bear thesis and pointlessness of general discussion although room for discussion on one specific aspect)
https://twitter.com/Jonathankrier1/status/1125922545679527936 (twitter thread example of a productive conversation - starting from the same point of view but slightly different knowledge)
https://old.reddit.com/RealTesla/comments/bmcy5q/teslacom_forum_5719_collision_while_using_auto/emwbohb/ (comment about people changing their mind over time)
https://twitter.com/jposhaughnessy/status/1128005778445611010 (news and knowing what isn’t so)
submitted by coinaday to nyancoins [link] [comments]

Mimblewimble in IoT—Implementing privacy and anonymity in INT Transactions

Mimblewimble in IoT—Implementing privacy and anonymity in INT Transactions

https://preview.redd.it/kyigcq4j5p331.png?width=1280&format=png&auto=webp&s=0584cd96378f51ead05b447397dcb0489995af4e

https://preview.redd.it/rfc3cw7q5p331.png?width=800&format=png&auto=webp&s=2b10b33defa0b354e0144745dd20c2f257812f29

The years 2017 and 2018 were focused on the topic of scaling. Coins forked and projects were hyped with this word as their sole mantra. What this debate brought us were solutions, and it showed us where we stand right now: satisfying the current need when paired with a plan for the future. The focus of the years to come will be anonymity and fungibility in mass adoption.
In the quickly evolving world of connected data, privacy is becoming a topic of immediate importance. As it stands, we trust our privacy to centralized corporations where safety is ensured by the strength of your passwords and how much effort an attacker dedicates to breaking them. As we grow into the new age of the Internet, where all things are connected, trustless and cryptographic privacy must be at the base of all that it rests upon. In this future, what is at risk is not just photographs and credit card numbers, it is everything you interact with and the data it collects.
If the goal is to do this in a decentralized and trustless network, the challenge will be finding solutions whose range of applicability equals the diversity of the ecosystem and which can match the scales predicted. Understanding this, INT has begun research into implementing two different privacy protocols into their network that address two of the major necessities of IoT: scalable private transactions and private smart contracts.

Mimblewimble

One of the privacy protocols INT is looking into is Mimblewimble. Mimblewimble is a fairly new and novel implementation of the same elements of elliptic-curve cryptography that serve as the basis of most cryptocurrencies.

https://preview.redd.it/dsr6s6vt5p331.png?width=800&format=png&auto=webp&s=0249e76907c3c583e565edf19276e2afaa15ae08

In the bitcoin-wizards IRC channel in August 2016, an anonymous user posted a Tor link to a whitepaper claiming to be “an idea for improving privacy in bitcoin.” What followed was a blockchain proposal that uses a transaction construction radically different from anything seen before, creating one of the most elegant uses of elliptic curve cryptography seen to date.
While the whitepaper posted was enough to lay out the ideas and reasoning to support the theory, it contained no explicit mathematics or security analysis. Andrew Poelstra, a mathematician and the Director of Research at Blockstream, immediately began analyzing its merits and over the next two months, created a detailed whitepaper [Poel16] outlining the cryptography, fundamental theorems, and protocol involved in creating a standalone blockchain.
What it sets out to do as a protocol is to wholly conceal the values in transactions and eliminate the need for addresses while simultaneously solving the scaling issue.

Confidential Transactions

Let’s say you want to hide the amount that you are sending. One well-known and fast way to hide information: hashing! Hashing deterministically produces a random-looking string of constant length, regardless of the size of the input, and is practically impossible to reverse. We could then hash the amount and send that in the transaction.

X = SHA256(amount)
or
4A44DC15364204A80FE80E9039455CC1608281820FE2B24F1E5233ADE6AF1DD5 = SHA256(10)

But since hashing is deterministic, all someone would have to do is catalog the hashes of all possible amounts, and the whole purpose of hashing the amount in the first place would be nullified. So instead of just hashing the amount, let’s first multiply the amount by a private blinding factor. If the blinding factor is kept private, there is no way of knowing the amount inside the hash.

X = SHA256(blinding factor * amount)

This is called a commitment: you are committing to a value without revealing it, and in a way that the value cannot be changed without changing the resultant value of the commitment.
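To make this concrete, here is a minimal Python sketch (the helper name and the specific blinding factor are illustrative, not part of any protocol) showing why a bare hash of the amount can be reversed by cataloguing, while a blinded hash cannot:

```python
import hashlib

def commit(amount, blinding_factor=None):
    # Hash the amount, optionally mixed with a secret blinding factor.
    data = str(amount if blinding_factor is None else blinding_factor * amount)
    return hashlib.sha256(data.encode()).hexdigest()

# Without a blinding factor, an observer can catalog the hashes of all plausible
# amounts and simply look the "hidden" value back up:
catalog = {commit(v): v for v in range(1_000_000)}
print(catalog[commit(10)])                 # -> 10: the amount is trivially recovered

# With a secret blinding factor, the same catalog is useless:
secret_bf = 8675309                        # known only to the transacting parties
print(commit(10, secret_bf) in catalog)    # -> False
```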
But how then would a node validate a transaction using this commitment scheme? At the very least, we need to prove that you satisfy two conditions: one, you have enough coins, and two, you are not creating coins in the process. The way most protocols validate this is by consuming one or more previous input transactions and, in the process, creating outputs that do not exceed the sum of the inputs. If we hash the values and have no way to validate this condition, one could create coins out of thin air.

input(commit(bf,10), Alice) -> output(commit(bf,9), Bob), outputchange(commit(bf,5), Alice)
Or
input(4A44DC15364204A80FE80E9039455CC1608281820FE2B24F1E5233ADE6AF1DD5, Alice) ->
output(19581E27DE7CED00FF1CE50B2047E7A567C76B1CBAEBABE5EF03F7C3017BB5B7, Bob)
output(EF2D127DE37B942BAAD06145E54B0C619A1F22327B2EBBCFBEC78F5564AFE39D, Alice)

As shown above, the hashed values look just as valid as any others, yet they result in Alice consuming 10 coins while creating outputs worth 14, in effect creating 4 coins and receiving them as change in her transaction. In any transaction, the sum of the inputs must equal the sum of the outputs. We need some way of doing mathematics on these hashed values to be able to prove:

commit(bf1,x) = commit(bf2,y1) + commit(bf3,y2)

which, if it is a valid transaction, would be:

commit(bf1,x) - commit(bf2+bf3,y1+y2) = commit(bf1-(bf2+bf3),0)

Or just a commit of the leftover blinding factors.

By virtue of how hashing algorithms work, this isn’t possible. To verify this we would have to make all blinding factors and amounts public. But in doing so, nothing is private. How then can we make a value public that is derived from a private value, in such a way that you cannot reverse-engineer the private value, and still validate that it satisfies some condition? It sounds a bit like public and private key cryptography…
What we learned in our primer on Elliptic-Curve Cryptography was that by using an elliptic curve to define our number space, we can take a point on the curve, G, and multiply it by any number, x, and what we get is another valid point, P, on the same curve. This calculation is quick, but given the resultant point and the publicly known generator point G, it is practically impossible to figure out what multiplier was used. This way we can use the point P as the public key and the number x as the private key. Interestingly, these points also have the curious property of being additive and commutative.
If you take point P which is xG and add point Q to it which is yG, its resulting point, W = P + Q, is equal to creating a new point with the combined numbers x+y. So:
W = P + Q = x·G + y·G = (x + y)·G
This property, homomorphism, allows us to do math with numbers we do not know.
So instead of using the raw amount and blinding factor in our commit, we use each of them multiplied by a known generator point on an elliptic curve. Our commit can now be defined as:
commit = (blinding factor)·G + (amount)·H
This is called a Pedersen Commitment and serves as the core of all Confidential Transactions.
Let’s call the blinding factors r, and the amounts v, and use H and G as generator points on the same elliptic curve (without going deep into Schnorr signatures, we will just accept that we have to use two different points for the blinding factor and value commits for validation purposes**). Applying this to our previous commitments:
commit(ri, vi) = ri·G + vi·H, commit(ro, vo) = ro·G + vo·H, commit(rco, vco) = rco·G + vco·H
and using the commutative properties:
(ri·G + vi·H) - (ro·G + vo·H) - (rco·G + vco·H) = (ri - ro - rco)·G + (vi - vo - vco)·H
which, for a valid transaction (where vi = vo + vco), equals:
(ri - ro - rco)·G + 0·H
with ri, vi being the values for the input, ro, vo being the values for the output, and rco, vco being the values for the change output.

This resultant difference is just a commit to the excess blinding factor, also called a commitment-to-zero:
commit-to-zero = (ri - ro - rco)·G + 0·H = (ri - ro - rco)·G
You can see that in any case where the blinding factors were selected randomly, the commit-to-zero will be non-zero and in fact, is still a valid point on the elliptic curve with a public key,
P = (ri - ro - rco)·G
And private key being the difference of the blinding factors.
So, if the sum of the inputs minus the sum of the outputs produces a valid public key on the curve, you know that the values have balanced to zero and no coins were created. If the resultant difference is not of the form
r_excess·G
for some excess blinding factor, it would not be a valid public key on the curve, and we would know that it is not a balanced transaction. To prove this, the transaction is then signed with this public key to prove the transaction is balanced and that all blinding factors are known, and in the process, no information about the transaction has been revealed (the step-by-step details of the signature process can be read in [Arvan19]).
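To see this bookkeeping in action, the following is a hedged, pure-Python sketch of the balance check described above, using the secp256k1 curve. The helper names are invented for illustration, and H is derived here as a known multiple of G purely for brevity; a real system must choose H so that nobody knows its discrete log with respect to G, and would use hardened, constant-time curve code.

```python
# Toy Pedersen-commitment balance check (Python 3.8+ for pow(x, -1, p)).
import random

# secp256k1 parameters
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    """Add two points on y^2 = x^3 + 7 over F_P (None is the point at infinity)."""
    if a is None:
        return b
    if b is None:
        return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if a == b:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, point):
    """Scalar multiplication by double-and-add."""
    result, addend = None, point
    while k:
        if k & 1:
            result = ec_add(result, addend)
        addend = ec_add(addend, addend)
        k >>= 1
    return result

def ec_neg(point):
    return None if point is None else (point[0], (-point[1]) % P)

H = ec_mul(123456789, G)   # toy second generator (see caveat above)

def commit(r, v):
    """Pedersen commitment r*G + v*H."""
    return ec_add(ec_mul(r, G), ec_mul(v, H))

# One input of 10 coins, an output of 9 to Bob and 1 back to Alice as change.
ri, ro, rc = (random.randrange(1, N) for _ in range(3))   # blinding factors
inp    = commit(ri, 10)
to_bob = commit(ro, 9)
change = commit(rc, 1)

# sum(inputs) - sum(outputs) should be (ri - ro - rc)*G + 0*H: a commitment to
# zero coins, i.e. a valid public key whose private key is the excess blinding factor.
excess_point = ec_add(inp, ec_neg(ec_add(to_bob, change)))
excess_key = (ri - ro - rc) % N
print(excess_point == ec_mul(excess_key, G))   # True: values balance, no coins created

# If the outputs tried to claim 9 + 5 from a 10-coin input, the difference keeps a
# nonzero H component and is no longer a pure multiple of G, so the check fails.
bad_point = ec_add(inp, ec_neg(ec_add(to_bob, commit(rc, 5))))
print(bad_point == ec_mul(excess_key, G))      # False
```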
All the above work assumed the numbers were positive. One could create an equally valid, balanced transaction using negative numbers, allowing users to create new coins with every transaction. To prevent this, each transaction must be accompanied by a Range Proof: a zero-knowledge argument of knowledge that a private committed value lies within a predetermined range of values. Mimblewimble, as well as Monero, uses Bulletproofs, a new way of calculating the proof which cuts its size down by 80–90%.

*Average sizes of transactions seen in current networks or by assuming 2 input 2.5 output average tx size for MW

Up to this point, the protocol described is more-or-less identical between Mimblewimble and Monero. The point of deviation is how transactions are signed.
In Monero, there are two sets of keys/addresses: the spend keys and the view keys. The spend key is used to generate and sign transactions, while the view key is used to “receive” transactions. Transactions are signed with what is called a Ring Signature, derived from the output being spent, which proves that the signer possesses the spend key of one output in the group without revealing which one. This is done by creating a combined Schnorr signature with your private key and a mix of decoy signers taken from the public keys of previous transactions. These decoy signers are all mathematically equally valid, which results in an inability to determine which one is the real signer. Because Monero uses the Pedersen Commitments shown above, the addresses are never publicly visible; they are used only for claiming and signing transactions and for generating blinding factors.
Mimblewimble, on the other hand, does not use addresses of any type. Yes. That’s right, no addresses. This is the true brilliance of the protocol. What Jedusor proved was that the blinding factors within the Pedersen commit and the commit-to-zero can be used as single-use public/private key pairs to create and sign transactions.
All address based protocols using elliptic-curve cryptography generate public-private key pairs in essentially the same way. By multiplying a very large random number (k_priv) by a point (G) on an elliptic curve, the result (K_pub) is another valid point on the same curve.
K_pub = k_priv·G
This serves as the core of all address generation. Does that look familiar?
Remember this commit from above:
commit = r·G + v·H
Each blinding factor multiplied by generator point G (in red) is exactly that! r•G is the public key with private key r! So instead of using addresses, we can use these blinding factors as proof we own the inputs and outputs by using these values to build the signature.
This seemingly minor change removes the linkability of addresses and the need for a scriptSig process to check for signature validity, which greatly simplifies the structure and size of Confidential Transactions. Of course, this means (at this time) that the transaction process requires interaction between parties to create signatures.

CoinJoin

Even though all addresses and amounts are now hidden, there is still some information that can be gathered from the transactions. In the above transaction format, it is still clear which outputs are consumed and what comes out of the transaction. This “transaction graph” can reveal information about the owners of the blinding factors and build a picture of the user based on seen transaction activity. In order to further hide and condense information, Mimblewimble implements an idea from Greg Maxwell called CoinJoin [Max13] which was originally developed for use in Bitcoin. CoinJoin is a trustless method for combining multiple inputs and outputs from multiple transactions, joining them into a single transaction. What this does is mask which sender paid which recipient. To accomplish this in Bitcoin, users or wallets must interact to join transactions of like amounts so you cannot distinguish one from the other. If you were able to combine signatures without sharing private keys, you could create a combined signature for many transactions (like ring signatures) and not be bound by needing like amounts.

In this CoinJoin tx, 3 addresses have 4 outputs with no way of correlating who sent what
In Mimblewimble, doing the balance calculation for one transaction or for many transactions still works out to a valid commit-to-zero. All we would need to do is create a combined signature for the combined transaction. Mimblewimble can construct these combined signatures natively thanks to its Schnorr-based transaction construction. Using “one-way aggregate signatures” (OWAS), nodes can combine transactions, while creating the block, into a single transaction with one aggregate signature. Using this, Mimblewimble joins all transactions at the block level, effectively creating each block as one big transaction of all inputs consumed and all outputs created. This simultaneously blurs the transaction graph and has the power to remove in-between transactions that were spent during the block, cutting down the total size of blocks and the size of the blockchain.

Cut-through

We can take this one step further. To validate this fully “joined” block, the node would sum all of the output commitments together, then subtract all the input commitments and validate that the result is a valid commit-to-zero. What is stopping us from joining more than just the transactions within a block? We could theoretically combine two blocks, removing any transactions that are created and spent within them, and the result again is a valid transaction of just unspent commitments and nothing else. We could then do this all the way back to the genesis block, reducing the whole blockchain to just a state of unspent commitments. This is called cut-through. When doing this, we don’t have any need to retain the range proofs of spent outputs; they have been verified and can be discarded. This lends itself to a massive reduction in blockchain growth, reducing growth from O(number of txs) to O(number of unspent outputs).
To illustrate the impact of this, let’s imagine Mimblewimble had been implemented in the Bitcoin network from the beginning. With the network at block 576,000, the blockchain is about 210 GB with 413,675,000 total transactions and 55,400,000 total unspent outputs. In Mimblewimble, transaction outputs are about 5 kB (including a ~5 kB range proof and a ~33-byte Pedersen commit), transaction inputs are about 32 bytes, transaction proofs (kernels) are about 105 bytes (commit-to-zero and signature), block headers are about 250 bytes (Merkle proof and PoW), and non-confidential transaction data is negligible. This sums up to a staggering 5.3 TB for a full-sync blockchain of all information, with “only” 279 GB of that being the UTXOs. When we cut through, we don’t want to lose all the history of transactions, so we retain the proofs for all transactions as well as the UTXO set and all block headers. This reduces the blockchain to 322 GB, a 94% reduction in size. The result is basically a total consensus state of only that which has not been spent, with a full proof history, greatly reducing the sync time for new nodes.
If Bulletproofs are implemented, the range proof is reduced from over 5kB to less than 1 kB, dropping the UTXO set in the above example from 279 GB to 57 GB.

*Based on the assumptions and calculations above.

There is also an interesting implication for PoS blockchains with explicit finality. Once finality has been obtained, or at some arbitrary blockchain depth beyond it, there is no longer any need to retain range proofs. Those transactions have been validated, the consensus state has been built upon them, and they make up the vast majority of the blockchain size. If we say in this example that finality happens at 100 blocks deep, and assume that only 10% of the UTXO set has not yet reached finality, this would reduce the blockchain size by another 250 GB, resulting in a full-sync weight of 73 GB, a 98.6% reduction (and even down 65% from Bitcoin’s current state). Imagine this: a 73 GB blockchain for 10 years of fully anonymous Bitcoin transactions, one-third of the current blockchain size.
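These figures follow directly from the assumptions stated above; a short Python sketch reproducing the arithmetic (the sizes are the article's own estimates, not measured values):

```python
# Back-of-the-envelope sizes using the article's assumptions.
GB, TB = 1e9, 1e12

blocks      = 576_000
total_txs   = 413_675_000
utxos       = 55_400_000
range_proof = 5_000          # bytes per output, pre-Bulletproofs
commitment  = 33             # Pedersen commitment
output      = range_proof + commitment
tx_input    = 32
kernel      = 105            # commit-to-zero, signature, fee, lock_height
header      = 250
ins_per_tx, outs_per_tx = 2, 2.5   # average transaction shape assumed above

full_chain  = total_txs * (ins_per_tx * tx_input + outs_per_tx * output + kernel)
utxo_set    = utxos * output
cut_through = utxo_set + total_txs * kernel + blocks * header

print(f"full chain   ~{full_chain / TB:.1f} TB")    # ~5.3 TB
print(f"UTXO set     ~{utxo_set / GB:.0f} GB")      # ~279 GB
print(f"cut-through  ~{cut_through / GB:.0f} GB")   # ~322 GB

# With Bulletproofs the range proof drops below ~1 kB:
print(f"UTXOs w/ BP  ~{utxos * (1_000 + commitment) / GB:.0f} GB")   # ~57 GB

# With explicit finality, range proofs for the ~90% of UTXOs already final can go too:
final_sync = cut_through - 0.9 * utxos * range_proof
print(f"finality     ~{final_sync / GB:.0f} GB")    # ~73 GB
```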
It’s important to note that cut-through has no impact on privacy or security. Each node may choose whether or not to store the entire chain without performing any cut-through with the only cost being increased disk storage requirements. Cut-through is purely a scalability feature resulting in Mimblewimble based blockchains being on average three times smaller than Bitcoin and fifteen times smaller than Monero (even with the recent implementation of Bulletproofs).

What does this mean for INT and IoT?

Transactions within an IoT network require speed, scaling to tremendous volumes, and adapting to a variety of uses and devices, with the ability to keep sensitive information private. Until now, IoT networks have focused solely on scaling, creating networks that can transact at tremendous volume with varying degrees of decentralization and no focus on privacy. Without privacy, these networks will just turn those who use them into targets, feeding attackers ammunition.
Mimblewimble’s revolutionary use of elliptic-curve cryptography brings us a privacy protocol using Pedersen commitments for fully confidential transactions and in the process, removes the dependence on addresses and private keys in the way we are used to them. This transaction framework combined with Bulletproofs brings lightweight privacy and anonymity on par with Monero, in a blockchain that is 15 times smaller, utilizing full cut-through. This provides the solution to private transactions that fit the scalability requirements of the INT network.
The Mimblewimble protocol has been implemented in two different live networks, Grin and Beam. Both are purely transactional networks, focused on the private and anonymous transfer of value. Grin has taken a Bitcoin-like approach with community-funded development, no pre-mine or founders reward while Beam has the mindset of a startup, with VC funding and a large emphasis on a user-friendly experience.
INT, on the other hand, is researching implementing this protocol either on the main chain, making all INT asset transfers private, or as an optional add-on subchain, allowing users to move their INT between the non-private chain and the private chain at will.

Where does it fall short?

What makes this protocol revolutionary is the same thing that limits it. Almost all protocols, like Bitcoin, Ethereum, etc., use a basic scripting language, with function calls in the actual transaction data that tell the verifier what script to use to validate it. In the simplest case, the data provided with the input is called the “scriptSig” and provides two pieces of data: the signature that matches the transaction and the public key that proves you own the private key that created it. The output script uses this provided data, along with the logic passed with it, to show the validator how to prove the spender is allowed to spend it. Using the public key provided, the validator hashes it and checks that it matches the hashed public key in the output; if it does, it then checks that the signature provided is valid for the transaction.
https://preview.redd.it/5u6m1eiv7p331.png?width=1200&format=png&auto=webp&s=3729eb12037107ae744d15cea9f9bc1e18a3c719
This verification protocol allows some limited scripting ability in being able to tell validators what to do with the data provided. The Bitcoin network can be updated with new functions allowing it to adapt to new processes or data. Using this, the Bitcoin protocol can verify multiple signatures, lock transactions for a defined timespan and do more complex things like lock bitcoin in an account until some outside action is taken.
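As a rough illustration of that flow (not Bitcoin's actual script interpreter), a pay-to-pubkey-hash check might be sketched as below; the signature verifier is left as a stand-in callable:

```python
import hashlib

def hash160(pubkey: bytes) -> bytes:
    """Bitcoin-style HASH160: RIPEMD-160 of SHA-256.
    (ripemd160 availability depends on the local OpenSSL build.)"""
    return hashlib.new("ripemd160", hashlib.sha256(pubkey).digest()).digest()

def verify_p2pkh(output_pubkey_hash: bytes,
                 scriptsig_pubkey: bytes,
                 scriptsig_signature: bytes,
                 tx_digest: bytes,
                 check_signature) -> bool:
    """Mirror of the pay-to-pubkey-hash flow described above:
    1. hash the public key supplied in the scriptSig,
    2. compare it with the hash locked into the output being spent,
    3. verify the supplied signature over the transaction digest.
    `check_signature` is a stand-in for a real ECDSA verifier."""
    if hash160(scriptsig_pubkey) != output_pubkey_hash:
        return False
    return check_signature(scriptsig_pubkey, scriptsig_signature, tx_digest)
```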
In order to achieve more widely applicable public smart contracts like those in Ethereum, contracts either need to be provided data in a non-shielded way, or users must create shielded proofs demonstrating that they satisfy the smart contract conditions.
In Mimblewimble, as a consequence of using the blinding factors as the key pairs, which greatly simplifies the signature verification process, there are no normal scripting opportunities in the base protocol. What is recorded on the blockchain is just:

https://preview.redd.it/dwhiuc8y7p331.png?width=1200&format=png&auto=webp&s=69ea0a7797bc94a9766a4b31a639666bf9f1ebc4
  • Inputs used — which are old commits consumed
  • New outputs — which are new commits to publish
  • Transaction kernel — which contains the signature for the transaction with excess blinding factor, transaction fee, and lock_height.
None of these items can be related to one another, and they contain no useful data to drive action.
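A hedged sketch of what such a record might look like as a data structure; the field names are illustrative, loosely modeled on Grin-style implementations, and not an actual wire format:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TxKernel:
    excess: bytes        # commitment to the excess blinding factor (acts as a public key)
    excess_sig: bytes    # signature made with that excess, proving the transaction balances
    fee: int
    lock_height: int

@dataclass
class MWTransaction:
    inputs: List[bytes]   # old commitments being consumed
    outputs: List[bytes]  # new commitments, each carried with its range proof
    kernel: TxKernel      # the only signature-bearing data; no scripts, no addresses
```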
There are some proposals for creative solutions to this problem by doing so-called scriptless-scripts†. By utilizing the properties of the Schnorr signatures used, you can achieve multisig transactions and more complex condition-based transactions like atomic cross-chain swaps and maybe even lightning network type state channels. Still, this is not enough complexity to fulfill all the needs of IoT smart contracts.
And on top of it all, implementing cut-through would remove transactions that might be smart contracts or rely on them.
So you can see in this design we can successfully hide values and ownership but only for a single dimensional data point, quantity. Doing anything more complex than transferring ownership of coin is beyond its capabilities. But the proof of ownership and commit-to-zero is really just a specific type of Zero-knowledge (ZK) proof. So, what if, instead of blinding a value we blind a proof?
Part 2 of this series will cover implementing private smart contracts with zkSNARKs.

References and Notes

https://github.com/ignopeverell/grin/blob/mastedoc/intro.md
https://github.com/mimblewimble/grin/blob/mastedoc/pow/pow.md
https://github.com/mimblewimble/grin/wiki/Grin-and-MimbleWimble-vs-ZCash
https://bitcointalk.org/index.php?topic=30579
[Poel16] http://diyhpl.us/~bryan/papers2/bitcoin/mimblewimble-andytoshi-INCOMPLETE-DRAFT-2016-10-06-001.pdf
** In order to prove that v = 0, and therefore that the commit-to-zero in fact has no H component, without revealing r, we use the Schnorr protocol:
the prover generates a random integer n, computes and sends the point T ← n·G
the verifier generates and sends a random integer i
the prover computes and sends the integer s ← i·r + n mod q, where q is the (public) order of the curve
the verifier, knowing the point r·G, computes the point i·(r·G), then i·(r·G) + T; computes the point s·G; and checks that i·(r·G) + T = s·G.
[Arvan19] https://medium.com/@brandonarvanaghi/grin-transactions-explained-step-by-step-fdceb905a853
[Bulletproofs] https://eprint.iacr.org/2017/1066.pdf
[Max13] https://bitcointalk.org/?topic=279249
[MaxCT]https://people.xiph.org/~greg/confidential_values.txt
[Back13]https://bitcointalk.org/index.php?topic=305791.0
http://diyhpl.us/wiki/transcripts/grincon/2019/scriptless-scripts-with-mimblewimble/
https://tlu.tarilabs.com/cryptography/scriptless-scripts/introduction-to-scriptless-scripts.html#list-of-scriptless-scripts
http://diyhpl.us/~bryan/papers2/bitcoin/2017-03-mit-bitcoin-expo-andytoshi-mimblewmble-scriptless-scripts.pdf
submitted by INTCHAIN to INT_Chain [link] [comments]

Why Bitcoin is the Way that it is . . .or (How I Learned to Stop Worrying and Love the Volatility!)

Please forgive the long post here and try to bear with me until the end of it if you can!
It is an unfortunate reality that too many in the bitcoin community (even those who have been here a long time) do not understand on an economic level what is going on with bitcoin; and because of that, they often get turned off or afraid of it, or even rage against the volatility and ignorantly blame all the evil others who "treat it like an investment" or "only use it to speculate" (as if these were wholly conscious shifts away from what we could otherwise be treating bitcoin as right now). Some of you have probably already seen my posts to this same effect; most will not have, and the concept of money developing on a market is quite foreign to most. To that effect, and out of a natural curiosity, I have spent years studying it and applying the economics of money to this amazing cryptocurrency which I am so fond of.

So, for everyone out there: For all the newbies who just don't get it. . .for all techies who could use a dose of understanding of the economic side. . . for all the veterans here who could use a little refresher and a more in-depth look at the incentives shaping our beloved ecosystem, let's take this point-by-point:

-There is no such thing as intrinsic value, even for gold or silver or dollars or diamonds or water...however, commodities and commodity monies have use-value, outside of their use as money (e.g. gold as jewelry and use as conductor, etc.).
-Value is subjective to individuals and a thing only has as much value as an individual assigns to it with regard to its serviceability towards an end. If other individuals also value it for their ends, it can be traded or exchanged and a market price develops.
-If this thing or commodity happens to possess certain properties (properties we now call "monetary properties") such as high degrees of fungibility, scarcity, divisibility, homogeneity, transportability, durability, etc., then it has the potential to have high usefulness (and thus likely high value and thus higher exchange price) as a network good. One bitcoin unit is not going to be worth much if 2 dudes have some units and no one else does (maybe some subjective, novelty or numismatic value to those two individuals), so will not fetch a high exchange price (and will not circulate much...which is irrelevant of course if you're not trying to use it as money).
-A bubble, or spike in the price of tulips, is not likely to be sustainable or borne of future-looking value achievement or creation; because a tulip doesn't really get any more useful the higher its price; and a tulip doesn't really get any more useful the more people who have them, or the more ubiquitously they are held in society. A tulip is not much of a network good. A tulip does not possess good actualized or latent monetary properties. A tulip is lovely and useful/valuable aesthetically and for other non-monetary use-cases.
-A bitcoin (unit) does not have a whole lot of apparent use-value outside of use as money; outside of a vast network of holders. Bitcoin has exemplary monetary properties, or latent properties of a network good.
-money is extremely necessary and useful to an economy (useful to most all individuals who make up an economy), because it serves as a: 1. Medium of indirect exchange (alleviating barter inefficiencies/double coincidence of wants), 2. Unit of account (value is subjective and interpersonal value comparisons are impossible, yet we have to attempt to find common standard in order to enjoy the massive benefits of trade and division of labor), 3. Store of value. A thing is not money unless it fulfills these three roles to a high degree. BITCOIN IS NOT YET MONEY. Bitcoin is a proto-money virtual token or digital commodity. It is especially not widely used as a unit of account. This is important. This is part of why Bitcoin is so price volatile right now, and its achievement is also dependent upon bitcoin's volatility right now. We'll get to that in more depth.
-Money (for a large, non-trusting economy) can come about in only two main ways: you can have government create it and bootstrap its initial value and maintain a demand-sink through taxation, etc.; or you can have it develop voluntarily on the market. I leave for another discussion the relative merits of either origin and why one might prefer one over the other. I will assume that the reader at this point sees value in having Bitcoin become money in the formal economic sense that I already explained, or that the reader at least wants the asset to either increase in price or become more useful as a payment currency...in either case, the reader by default wants to see bitcoin progress further towards moneyness. That's right, you thought you didn't care about money...you thought you cared about the cool tech, or the payment network, or sticking it to the banks...well guess what? Joke's on you. You can pursue those aspects, because they are necessarily auxiliary to bitcoin becoming money and self-sustaining at all, so that you can continue to pursue your little pet aspects. And just like good design...you won't even know it's there. You won't know or appreciate when bitcoin is money and that it supports your penchant...but it won't matter; the invisible hand will have used you like a sheepskin for its much grander emergent designs.
-Bitcoin has clearly not been declared money by any government fiat, and it is pretty clearly undergoing market processes which are shaping its network effect, or its demise. It is up to voluntary market processes (and the absence of forceful interference by governments, whether intentional or unintentional, direct or far removed) to succeed or fail in its path towards moneyness, or any kind of self-sustaining existence you think it may have in its DNA.
-There are three main factors which affect the exchange-price volatility of a commodity: 1. liquidity/market cap/saturation 2. normalization of use-cases (what can bitcoin be used for now?...not much. what will bitcoin be able to be used for when it has high network effect?...money! Is there any other use-case liable to change or add to the money use-case?...not likely. Compare to gold or silver which have the initial benefit of usefulness pre-network effect...but also suffer, as a money from other use-cases being a source of shocks) 3. Mass psychology. We are excluding exogenous supply shocks here for what should be obvious reasons as applied to bitcoin.
-Now, how does a market commodity like gold or bitcoin go from an obscure token to being a unit-of-account, high-network-effect, most liquid, most saleable asset in an economy, i.e. money? Why would people use a token or commodity as a unit of indirect exchange, when virtually no one else does? Or so how does that token or commodity get initially distributed, widely and ubiquitously enough that it can be commonly valued and exchanged with good assurance that most everyone else wants this token, thus I can be confident in accepting it, knowing I will be able to recoup my value later in exchange with anyone else? How does it become a unit of account (we "price" everything in it, within some economic sphere wherein bitcoin is serving as money) if it's price changes so much so often? How will we get to where, constantly seeing goods and services for sale denominated in BTC, helps to give inertia to our price expectations and dampen the volatility?
-There's a catch 22 here. A coordination problem. A classic market failure. It is group rational to have a money, a common token of indirect exchange...but before it is already money, it is individually rational to reject a token as worthless (aside from it's other uses which you may or may not have for it). There are free-riding and assurance problems as the token matures (e.g. why should I invest in or hold the token in order to help bring about money, when it looks like everyone else might? What price do we all settle on for the unit token, in future equilibrium or in more stable distribution era, i.e. when the market is saturated and supply has normalized?). This is why historically, we don't see many truly stateless monies (or stateless market orders for that matter), but rather see high trust credit system among tribes, or money tied to wergeld offerings to kings/states/priests. But this does not mean that market money is impossible, has never existed, nor does it mean it is without value to pursue. On the contrary; resolution of market failure into more efficient and effective market institutions has been the secular trend of most all of economic history. We are not stopping at central bank issued fiat notes (as necessary or evil as you may think this mode may be).
-There needs to be a number of phases of "bootstrapping" in order to kinda short-circuit this catch-22 -there needs to be some kind of market mechanism to alleviate the free-riding and assurance problems - and there needs to be an extended period of price discovery throughout these periods and processes to coordinate the exchange value.
-Some examples of bootstrapping taking place are: the first bitcoin transactions; where someone took a risk of assigning a rather arbitrary value to their tokens (yes, cost of mining is intuitive starting point, a schelling point, but still arbitrary) and exchanged them for some pizzas. Holders holding in ideological commitment strategy...while the rest of the world looks on and laughs. Etc. Evangelizing the use of the currency (some methods are more tasteful and effective than others of course). At some point, some of us will take a chance, take a risk, and "price" our goods and services in BTC (even though the volatility is still not down to where most people will be comfortable doing so). We are highly educated/intelligent, highly technologically empowered beings in the 21st century; economics and the study of money is a science; there is massive precedent for money and it's need and ways it can come about and compete with other money; we know these things consciously; we are not ants or simple nodes who in aggregate form a complex system borne of the interplay of individually-simple rule sets; and so not all our market processes need be completely emergent in the granular sense; we can plan even our voluntary economies and analytically (rather than just heuristically) organize it's structure to great extent, through entrepreneurial behavior and leadership and forward-looking risk taking. In other words: bootstrapping bitcoin means in large part: willing it into position as money (yes that's right buttcoiners: we are imagining, we are playing money...we are faking it until we make it). Sounds like fodder for detractors? That's fine. It is what it is. The coordination problems are real. The alternative is state force and government money, for good or for bad. We've got plenty of government money and we have plenty of very nice payment networks: let's do something new and interesting!
-Markets work in-band and out-of-band in all sorts of mechanisms, in order to alleviate or overcome significant market failure; including free-riding and assurance problems. Some of the mechanisms are: lottery/high risk-reward scenarios, philanthropy, assurance contracts, dominant assurance contracts, value adds, philanthropy and combinations of these. The public good of getting a new money or monetary system competing on the market, can be produced on shorter time-scales than what these market failures have limited people to in the past. We have modern travel and communication networks, information technology and mature and robust financial and futures/derivatives markets. We have more wealth than mankind has ever had, in order to be able to lengthen our time-preferences and weather extended periods of volatility and scientifically (analytically) see the logical ends and test our hypotheses. But with all of this, we have something which has always been an unconscious market response to price discovery of a new commodity and pushing through free-riding: Initial VOLATILITY! It acts like a lottery of sorts: It allows the more risk-averse to free-ride, not be exposed to loss, but not reap massive individual gains. . . yet it incentivizes the risk-tolerant to take risks, absorb or suffer losses, and also reap massive individual gains; thus the commodity is adopted and distributed, wider and wider (with diminishing returns and diminishing risk and diminishing free-riding). But this is why some gold-bugs get hung up on "intrinsic value" or "no backing!" or deride bitcoin for it's apparent lack of usefulness outside of money. You see, the utility of gold or silver outside of money is not what made them good money. . .it is what made them into money at all (in market-produced metallic monetary systems, or to the extent they were not state-engendered). Those non-monetary use-cases put gold and silver into the hands of many people, which helped bootstrap those monies. But it was ultimately their good monetary properties (scarcity, durability, divisibility, etc.) which maintained gold and silver as a network good, which made buying those commodities different than buying tulips in a craze. Bitcoin has it's in-built payment network, which inseparably imputes usefulness to the scarce token or share of the network. This property is auxiliary to bitcoin becoming money, but it is not universally a requirement; nor are non-monetary use-cases required indefinitely, in order to prop up or provide backstop to the value of a money. That's why it is a medium of indirect exchange. It is valued for the fact that everyone else is in a state (call it a useful mass-delusion if you will) of valuing the token; but also a share of a highly valuable public good, which is the monetary system as a whole. Remember, value is subjective and need not be tied to anything concrete or tangibly "useful". There is no conundrum here or infinite regression. Value and the nature of economic goods is abstract. The only regression required here, if any, is a reference to the price at the prior transaction. And if you insist further, you can trace all the way back, through all the speculation and all the price swings and all the purchases. . . back to the first Bitcoin Pizza Day transaction, and witness the bootstrapping of one of bitcoin's first exchange prices come into existence. 
You must deny the existence of a current price, in order to assert the extreme (or un-nuanced) versions of Menger's Origins of money and Mises' regression theorem; in order to deny the possibility of this "un-backed" "no use-value" currency, indeed having a market price and yet not having spiraled into death throes, long ago.
-Price discovery is pretty self-explanatory and it is of course talked about ad nauseum. But it does bear repeating that: speculation is not bad. In fact, every time you buy or sell or trade something, you are speculating. If you arbitrarily view monetary trades or monetary trades of a certain size or risk-level or intent to create a change in the mass psychology of the market as "manipulation" or somehow morally or ethically questionable. . . you can take it to the morality police and talk to them about it, or something. I don't know. But if you think that speculative activity is somehow unproductive or not conducive to bitcoin becoming money or otherwise successful; then you are manifestly wrong, I've already shown ample reasoning as to why this is so, and you need only ask one question of yourself to further understand why: How else do you propose or imagine that millions of (mostly) unacquainted strangers across planet earth can coordinate what the exchange value of a bitcoin should be at any given moment? If you are ignorant enough of economics to believe that some kind of government regulation which in effect price sets or installs price ceilings or floors or trading limitations is good for bitcoin (whatever that means to you). . . I invite you to please happily create your own exchanges and markets which enforce those specific rules; but allow others to trade in arenas where they are not subject to those rules. We are producing money through massive coordination challenges and market failures. . . you can find ways of setting up voluntary markets or walled-gardens which effectively enforce some trading rules. But should you be unconvinced and still choose the old, neanderthal ways of the blunt hammer of the state, just remember: Prices mean something. They communicate essential information. You cannot arbitrarily choose to attenuate or accentuate those signals or distort them, and hope to achieve a money truly picked by the market, by individual preferences. May the best arena win (truly) and may it win for some and the other win for others (diversity is beautiful). But know that no matter what you do, you will not ever be able to globally enforce such a regime and so you can only hamper the free trade of bitcoin; you cannot stop it, and I do not believe that you can even hamper it enough to create a stagnation which reverses bitcoin's development into money. But you will certainly not help it.
-Lastly, it is also important to acknowledge that while the payment network aspect of bitcoin is inseparable from the token, the concepts of payment network and money token are distinct. Paper money has the sneaker-net, fiat in general has bank wires of all sorts and credit card networks and PayPals, etc., and bitcoin has its in-built Bitcoin payment network. It is of course necessary to have at least a rudimentary payment network built in as the foundation of a crypto-token, since we meatbags cannot manipulate the virtual. . . however, don't let this create an over-emphasis on the importance of the payment network. We have many payment networks already. Bitcoin's has its strengths (even in its current primitive state) and its weaknesses as compared to bank networks and credit card networks. I really, really do not intend here to start any kind of resurgence of drama over the block-size issue. This is not a cryptic slight at either side of the debate (I have always been quite sympathetic to aspects of both arguments anyway). I truly hope that I have helped more of you also see how the money aspect transcends the payment network (that the payment network is at the very least necessary, but does not necessarily have to achieve a particular level of utility or functionality. . . beyond aiding the fledgling token in becoming ubiquitously held, such that its good monetary properties can take over and network effects ensconce the token as money). May ALL the on-chain solutions and further layers work towards not only a highly useful payment network, but one which propels its native token to the full status of money.
submitted by kwanijml to Bitcoin [link] [comments]

Dr. Feng Cao: The history and Status of TPS in blockchain

Dr. Feng Cao: The history and Status of TPS in blockchain

https://preview.redd.it/44pmijsgm3u11.jpg?width=1300&format=pjpg&auto=webp&s=9298b9eb231d89a8f1842da78e03f49e17b6fd25

1. TPS Origin of Blockchain

TPS (Transactions Per Second) is not a new term; it is commonly used in databases to refer to the number of database transactions performed per second. It is calculated by dividing the number of transactions processed in a period by the length of that period. There are already many ways to improve TPS in traditional databases, where transaction types include insert, delete, query and update. In the world of blockchain, however, transactions saved in blocks are designed to be tamper-resistant, so the only operations that apply to blockchain transactions are insert and query.
In some ways, a blockchain is a new type of distributed database system. The "impossible triangle" trade-off between decentralization (number of nodes), efficiency (TPS) and security is the core issue in the design of any blockchain system. It therefore makes no sense to talk about a blockchain's TPS without also considering its decentralization and security. Some projects claim to have solved the impossible triangle; that is largely marketing, and you do not have to take it seriously. A theorem is called a theorem because it cannot be broken so easily.
In a traditional database, transactions are stored in tables, and the number of rows in a table is usually unlimited. As long as there are efficient ways to access data, such as indexes and in-memory writes, TPS can be improved substantially. In a blockchain system, transactions are packaged into blocks and each block is chained after the previous one, and the validity of transactions depends on the consensus of most nodes in the system. The TPS of a blockchain is therefore limited by the block size (the number of transactions that can be packed into one block) and the block time, which is the sum of the time required to generate a new block and the time required for the nodes to reach consensus.
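As a back-of-the-envelope illustration of that relationship, here is a minimal sketch in Python; the 250-byte average transaction size and the other figures are assumptions for illustration, not numbers from the article.

```python
def max_tps(block_size_bytes, avg_tx_size_bytes, block_time_seconds):
    """Rough upper bound on TPS for a chain that fills one block of the
    given size every block_time_seconds."""
    txs_per_block = block_size_bytes // avg_tx_size_bytes
    return txs_per_block / block_time_seconds

# Assumed figures: ~250-byte transactions, 1 MB blocks.
print(max_tps(1_000_000, 250, 600))  # ~6.7 TPS with a 10-minute block time
print(max_tps(1_000_000, 250, 15))   # ~266.7 TPS with a 15-second block time
```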
Since the Bitcoin blockchain was born in 2009, it has been widely criticized for insufficient TPS; in particular, the POW consensus mechanism is slow, taking about 10 minutes to generate a block (in fact, the 10-minute block is not a limitation of the POW mechanism itself, but an ingenious design for issuing new bitcoin). Due to the small number of users at the time, however, the need for large-scale TPS was not urgent. In 2013, Ethereum proposed blockchain-based smart contracts, which opened up new possibilities for large-scale industry applications of blockchain. Ethereum 1.0 is still based on an improved POW algorithm that generates a block roughly every 15 seconds, and for a long time that TPS seemed high enough; some people even held the view that TPS is not important for blockchain.

2. Isn’t TPS the core issue in blockchain?

All of this changed with the industrial application of blockchain technology. The proposal of Ethereum smart contracts and the rise of consortium blockchains in 2015 opened the door to industry applications of blockchain. People have attempted to apply blockchain to many industries: finance, supply chain, energy, medical care, education, e-commerce, and so on. All of these applications place requirements on TPS, for example financial services, booking train tickets online, and e-commerce. Every year Alibaba announces a new peak transaction rate for its systems during the Double 11 shopping carnival (the online promotion day on November 11th); that figure is the TPS of their system. When shopping online, no one can tolerate a phone that does not respond for a while. The development of consortium blockchains and their industrial applications in 2016 made many blockchain development teams realize the importance of TPS.
In addition to TPS, the system response time (RT) is an important indicator that directly affects the user experience in blockchain applications, and TPS affects the RT. When the system is not overloaded, all transactions arriving within one block interval can be packaged into a single block, and the response time equals the block time. When the system is overloaded, that is, when the transactions arriving in one interval cannot all fit into the same block, the response time increases with the number of additional blocks that must be generated to clear the backlog.
The confirmation time is another related indicator: in simple terms, it is how long one must wait for a transaction to be confirmed. Taking online payment as an example, the response time is the time required to initiate a deduction, while the confirmation time is the time required to complete the deduction and confirm the transaction. In a POW system, a transaction typically needs to wait for 6 blocks to reach final confirmation. To improve the user experience, some trading systems accept two-block confirmation for small transactions, a trade-off between user experience and transaction finality.
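The following is a simplified sketch of both indicators, assuming a first-in-first-out queue, a fixed per-block transaction capacity, and a fixed block time; all the numbers are illustrative.

```python
import math

def response_time(queue_position, block_capacity, block_time):
    """Seconds until a transaction at the given queue position is packed into
    a block, assuming FIFO ordering and full blocks."""
    blocks_needed = math.ceil(queue_position / block_capacity)
    return blocks_needed * block_time

def confirmation_time(queue_position, block_capacity, block_time, confirmations=6):
    """Response time plus the extra blocks needed to reach the required depth
    (the including block is counted as the first confirmation)."""
    extra = (confirmations - 1) * block_time
    return response_time(queue_position, block_capacity, block_time) + extra

# Not overloaded: the transaction fits in the next block.
print(response_time(500, 2000, 15))        # 15 s
# Overloaded: a backlog of 9000 pending transactions needs 5 blocks.
print(response_time(9000, 2000, 15))       # 75 s
# A 10-minute chain waiting for 6 confirmations.
print(confirmation_time(500, 2000, 600))   # 3600 s
```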
Although TPS attracted the attention of practitioners working on consortium blockchain applications, the impact was relatively limited and did not cause widespread concern. In 2017, ICOs rose to popularity, and a popular project would routinely attract the attention of thousands of users around the world. That same year, the Status crowdsale caused a three-day jam on Ethereum; people could not stand waiting so long for transactions to confirm, and the blockchain TPS bottleneck once again became a hot topic in the industry.
In 2018, several public-chain projects launched with the explicit goal of increasing TPS and making large-scale blockchain applications possible. In the blink of an eye it is now the second half of 2018, and EOS has gone from a nominal million-level TPS at its announcement to an actual 3000+ TPS in production. The "TPS is useless" argument has risen again: EOS's capacity sits mostly idle in normal times, 10 TPS is said to be enough for everyday use, and when people cannot find the application scenarios, TPS is called a pseudo-demand. Is that really true? On the contrary, blockchain application innovation is endless. Without strong TPS support, any large-scale global application can only be a dream; the TPS bottleneck limits the pace of innovation in blockchain applications. Just as we always need higher-performance computers, the digitization of human society's information and assets can never be stopped.

3. Are we talking about the same TPS?

Opposite to the opinion that TPS is useless are the endless claims of millions of TPS. Fans of various projects often compare the TPS of one project with that of another. But are we really talking about the same TPS?
First of all, once we mention TPS, we cannot ignore the blockchain network structure and the nodes' software and hardware configuration. TPS figures can only be compared under the same network and node hardware environment. Some blockchain network factors we should consider:
How many nodes are in the system? Dozens, hundreds, thousands or tens of thousands?
How many nodes participate in consensus? Nodes that do not participate in consensus cannot contribute to the decentralization of the system.
What is the geographical distribution of these nodes? Are they in a single LAN, or are they distributed across a city, a province, a country or several countries? Are they distributed across continents or all over the world?
What are the node hardware and software configurations, such as network bandwidth, memory capacity, whether the disk is an SSD, disk IO speed, disk capacity, CPU frequency, number of CPU cores, operating system, etc.?
In a word, a high TPS achieved in a limited WAN is often hard to reproduce in a global WAN, because network delay greatly reduces TPS or even leaves nodes unable to reach consensus and stabilize blocks.
Second, where does the transaction set used for the test come from? Is it a manually generated data set or a real transaction set? What are the details of the transaction set, such as the number of Tx (transactions), the complexity of the Tx (asset transfers, smart contract calls, cross-chain, cross-shard), and the duration of the test (a few minutes, hours, days, months or years)?
Finally, what statistical method is used to compute TPS? Even with the same network, hardware, software and test environment, different statistical methods will lead to different results. Some common ways of computing TPS are listed below:
1) Normal (growing) window N: as the test progresses, the window length N keeps increasing. Divide the total number of transactions processed so far by the current window length:
TPS = Sum(Tx) / N.
2) Segment window w: divide the time axis into consecutive segments of length w. Count the number of transactions processed in each segment, then divide by w:
TPS = Sum (Tx in window w) / w.
3) Sliding window sw: slide a window of length w along the time axis. Count the number of transactions processed in each sliding window, then divide by w:
TPS = Sum (Tx in sliding window sw) / sw.
For a given window length, we can continuously obtain a series of TPS values, from which we can further calculate the average TPS and the peak TPS.
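A sketch of the three statistics, assuming we have recorded per-second transaction counts from a test run (the sample data below is made up):

```python
def normal_window_tps(tx_per_second):
    """Growing window: at second N, total transactions so far divided by N."""
    total, result = 0, []
    for n, tx in enumerate(tx_per_second, start=1):
        total += tx
        result.append(total / n)
    return result

def segment_window_tps(tx_per_second, w):
    """Non-overlapping segments of length w seconds."""
    return [sum(tx_per_second[i:i + w]) / w
            for i in range(0, len(tx_per_second), w)]

def sliding_window_tps(tx_per_second, w):
    """Window of length w seconds, slid one second at a time."""
    return [sum(tx_per_second[i:i + w]) / w
            for i in range(len(tx_per_second) - w + 1)]

counts = [100, 0, 200, 100, 0, 100, 300, 100]   # assumed per-second counts
for series in (normal_window_tps(counts),
               segment_window_tps(counts, 4),
               sliding_window_tps(counts, 4)):
    print(series, "average:", sum(series) / len(series), "peak:", max(series))
```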

https://preview.redd.it/4kcxr1y6o3u11.png?width=640&format=png&auto=webp&s=a4bcfead142d5c5c3fed9cb8fc6cbf392d5ae34f
Take the above figure as an example, and suppose each point represents 100 transactions (Tx).
1. Normal window (window length = 8): per-window TPS values as shown in the figure; the average TPS is (62.5+56.25+66.67+75)/4 = 65.105, peak TPS is 75.
2. Segment window (window length = 8): per-window TPS values as shown in the figure; the average TPS is (62.5+50+87.5+100)/4 = 75, peak TPS is 100.
3. Sliding window (window length = 8): the average TPS is (62.5+62.5+62.5+62.5+75+75+75+62.5+50+37.5+37.5+37.5+37.5+50+62.5+75+87.5+100+100+100+100+100+100+100+100)/25 = 72.5, peak TPS is 100.
Obviously, different window types result in different average and peak TPS.

https://preview.redd.it/lemyx2nto3u11.jpg?width=560&format=pjpg&auto=webp&s=e4b0e7affe19e947ec31df9b2d8eb16d65a4f413
We can also infer that different window lengths would give slightly different results.

4. Approach to improve TPS

At present, improving TPS is an urgent, shared concern for every public chain, and teams are actively developing various techniques to improve system TPS. The common methods fall into the following categories (a toy throughput sketch combining some of these levers follows the list):
- Increase the block size. This is the easiest and most direct way: with a larger block, more Tx can be packed into each block, and for a given block time more packed Tx means higher TPS. BCH, for example, is a block-size expansion of BTC. However, larger blocks increase the communication cost between nodes in each consensus round, so the block size cannot be expanded indefinitely.
- Increase the frequency of block generation. For a given number of Tx per block, generating blocks more often obviously improves TPS, for example moving from one block every 10 minutes to one every 15 seconds. However, raising the block frequency too much often sacrifices the stability of the system, especially when WAN delays are large.
- Use higher-performance computers (nodes), such as dedicated mining machines. Time-consuming software calculations are replaced with, or accelerated by, dedicated hardware to achieve faster processing speeds, as with the various bitcoin mining machines.
- Side chains, off-chain solutions and state channels. A side chain is defined relative to the main chain; the main chain usually refers to the blockchain whose performance needs improvement but which is difficult to change in the short term, such as Bitcoin or Ethereum. The basic idea of side chains and off-chain solutions is to create a relatively high-speed (or relatively lower-security) side chain, put small but high-frequency transactions there for quick confirmation, and return to the main chain only when settlement is really necessary. The state channel is the invention of the Lightning Network: an independent channel established between two accounts to achieve fast transactions. Moreover, because channels can be chained together, the blockchain becomes a network of channels, enabling rapid transfers between any two accounts.
- Sharding. Sharding is a typical "divide and conquer" approach. The basic idea is to dynamically partition the nodes of a blockchain network into several different shards, and to allocate the Tx received in each interval to different groups. Sharding can be classified into token-level sharding and smart-contract-level sharding; most sharding techniques only achieve the token level. For sharding of smart contracts there are no particularly good solutions yet, due to the more complicated state-sharding problems, although some projects have proposed state-sharding solutions in restricted settings.
- Native multi-chain. Native multi-chain is a typical parallelization method. Unlike the traditional single-chain structure of Bitcoin and Ethereum, a multi-chain system usually contains one main chain and several sub-chains, and multiple chains can generate blocks at the same time, which parallelizes block production and greatly improves TPS. Although the idea is easy to understand, in practice considerable effort must be devoted to the interoperability of the main chain and the sub-chains; otherwise the main chain easily becomes the bottleneck and limits the scalability of the whole system.
- New consensus algorithms. Convert from POW to POS. Typical POS-family algorithms include DPOS and BFT variants; EOS, for example, is based on DPOS. The new generation of "blockchain 3.0" systems often uses BFT algorithms and their successors, as in Algorand, Dfinity, COSMOS/Tendermint, PCHAIN, etc. The traditional PBFT algorithm has high communication complexity, usually O(N²), which makes it suitable mainly for consortium blockchains. New BFT algorithms typically reduce the communication cost by introducing dynamism or randomness. Although there is no need to wait six blocks for finality as in traditional POW, because PBFT reaches consensus within a single block, PBFT still requires four internal consensus rounds. COSMOS/Tendermint innovatively reduced the four internal rounds of PBFT to two; PCHAIN's PDBFT further reduced them to one, greatly lowering the communication cost between nodes.
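As a toy illustration of how a few of the levers above combine (block size, block time, and sharding or parallel sub-chains), here is a sketch; the parameters, and the single overhead factor standing in for cross-shard coordination costs, are assumptions rather than measurements of any real system.

```python
def chain_tps(block_size_bytes, avg_tx_size_bytes, block_time_s):
    """Throughput of a single chain, as bounded by block size and block time."""
    return (block_size_bytes // avg_tx_size_bytes) / block_time_s

def parallel_tps(num_groups, per_group_tps, overhead_factor=1.0):
    """Idealized aggregate throughput when shards or sub-chains process
    disjoint transaction sets; overhead_factor < 1 folds in the cost of
    cross-shard / main-chain coordination."""
    return num_groups * per_group_tps * overhead_factor

base = chain_tps(1_000_000, 250, 15)                # one chain: ~266 TPS
print(parallel_tps(10, base))                       # 10 ideal shards
print(parallel_tps(10, base, overhead_factor=0.7))  # with coordination overhead
```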

5. Prospect

Blockchain technology will continue to develop as blockchains reshape processes across human society. With the emergence of faster and more stable blockchain 3.0 systems, we will usher in a new blueprint for the Internet of value and the global village.

submitted by pchain_org to Pchain_Org_Official [link] [comments]

The Size of Blocks: Policy Tool or Emergent Phenomenon? [my presentation proposal for scaling bitcoin hong kong]

The Size of Blocks: Policy Tool or Emergent Phenomenon?

Abstract. This talk will explore the question of whether block size is a useful tool for enforcing policy or an emergent phenomenon of the network. From the policy-tool perspective, the block size represents a "lever" that policy makers might use to balance the cost of operating a node with the market price for block space (i.e., transaction fees). Assuming only that block space obeys the laws of supply and demand, we will show that for any given market condition, there exists an artificial block size limit that will produce greater network security and slower blockchain growth than the equivalent no-limit case, at the expense of higher transaction fees and reduced economic activity. Despite the flexibility offered by the policy-tool approach, we will show that little empirical evidence exists that the network is capable of enforcing a strict block size limit, questioning whether the policy-tool perspective is valid. We conclude by suggesting a simple change to the Bitcoin software to empower each node operator to more easily express his free choice regarding the size of blocks he is willing to accept while simultaneously ensuring that his node tracks consensus.
Key words: 1. Block size limit. 2. Consensus rules. 3. Rationalism. 4. Empiricism.

1. Introduction

I envision the block size debate as two groups standing on either side of a lever labeled "block size." The group on the right is pulling the handle while yelling "lower fees and more economic activity!" The group on the left is pulling back shouting "lower node costs and higher security!" I will refer to both of these groups as the Rationalists. There is also a third group I call the Empiricists: this group is observing the debate, pondering whether the lever is actually connected to the network.
The talk I am proposing for Hong Kong has two purposes. The first is to view the block size limit from the perspective of the Rationalists and deduce how the position of the block-size lever affects the network. The second is to view the problem from the perspective of the Empiricists and, based only on what we have observed about the network, abduce the rules that the network obeys (including whether the network obeys a rule regarding the block size limit at all). The talk will conclude by showing that strict agreement on a block size limit is not required, and by suggesting a simple change to the Bitcoin software to promote scaling.

2. Quasi-static Variation of the Block Size

During this part of the talk, we will deduce how changes to the block size limit, Qmax, affect the behavior of the system, while holding all other variables constant, and assuming that the maximum size of blocks can be programmed. We will build off of the supply and demand curves presented by the author in his Montreal talk, and show that:
  1. Policy actions are only effective for Qmax < Q*, where Q* is the free market equilibrium block size.
  2. Network security is maximized for some Qmax | 0 < Qmax < Q* , resulting in greater network security than the no-limit case.
  3. The Blockchain’s growth rate increases monotonically as Qmax increases (for Qmax < Q*).
  4. The price per byte for block space decreases monotonically as Qmax increases (for Qmax < Q*).
  5. Total economic activity grows monotonically as Qmax increases (for Qmax < Q*), and is maximized for all Qmax >= Q*.
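The author's actual supply-and-demand curves come from his Montreal talk and are not reproduced here. As a rough stand-in, the toy sketch below uses a linear demand curve for block space (all numbers assumed) and exhibits the same monotonic relationships: as Qmax rises toward the free-market size Q*, the clearing fee falls and included activity grows, while total fee revenue (a crude proxy for security spending, ignoring the block subsidy) peaks at an interior Qmax below Q*.

```python
def clearing_fee(q, a=100.0, b=1.0):
    """Toy linear demand for block space: fee per byte that clears quantity q."""
    return max(a - b * q, 0.0)

def fee_revenue(q_max, a=100.0, b=1.0):
    """Total fees collected when blocks are capped at q_max and every included
    byte pays the marginal (clearing) fee."""
    q = min(q_max, a / b)            # cannot include more than is demanded
    return q * clearing_fee(q, a, b)

q_star = 100.0                       # free-market equilibrium size in this toy model
for q_max in (25, 50, 75, 100, 150):
    print(q_max, clearing_fee(min(q_max, q_star)), fee_revenue(q_max))
# Fees fall monotonically and included activity rises as q_max grows;
# fee revenue peaks at an interior q_max (here 50), below the no-limit size.
```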

3. What Laws does the Network Obey Empirically?

In the previous section, we assumed that the network would obey the programmed rules and then we used deductive reasoning to study the effect of varying one of those rules (block size limit). In this segment of my talk, we will use empirical observations and abductive reasoning (assuming no a priori knowledge of the program code), to conclude that the network obeys the following rules:
  1. Coins cannot be moved without valid signatures.
  2. Coins cannot be spent twice.
  3. The rate at which new coins are created is strictly controlled.
  4. Nodes build upon the chain that contains the greatest cumulative work (referred to simply as the longest chain).
We will then look for empirical evidence of a block size limit. I will show that over the history of Bitcoin, the block size distribution has shown distinct "peaks" at certain block size extrema, but that these peaks have often collapsed as new peaks formed at greater block sizes. Without a priori knowledge of the program code, only sparse evidence for a block size limit exists.
Next, we will examine various emergent properties of the Bitcoin network including the market price of a bitcoin, the network hash rate, and Bitcoin’s adoption metrics. I will show that quantitative measures for each of these emergent phenomena are highly correlated with each other, and grow exponentially with time, albeit at different growth rates. I will then compare their growth patterns with the growth of the average block size, concluding that historically the block size has behaved very similarly (i.e., it behaves like an emergent phenomenon).

4. How Should Decisions Regarding Block Size Be Made?

In this final section, I will postulate two theorems:
  1. A node with a block size limit greater than the hash-power weighted median will always follow the longest chain.
  2. An excessive (e.g., greater than 1 MB) block will be accepted into the longest chain if it is smaller than the hash-power weighted median block size limit.
I will argue that it is more important that nodes follow the longest chain composed of valid transactions than dogmatically adhere to an arbitrary block size limit. I will end my talk by proposing a simple change for bitcoin software to allow a node operator to express his free choice regarding the size of blocks he is willing to accept while simultaneously ensuring that his node tracks consensus (e.g., be "BIP101 ready").
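As a sketch of what the "hash-power weighted median block size limit" in these theorems might look like computationally (my own construction with assumed miner limits and hash-rate shares, not code from the talk):

```python
def hashpower_weighted_median(limits_and_shares):
    """Weighted median of block size limits: the smallest limit such that
    miners enforcing a limit no larger than it hold at least half of the
    total hash power."""
    pairs = sorted(limits_and_shares)            # (limit_bytes, hash_share)
    total = sum(share for _, share in pairs)
    cumulative = 0.0
    for limit, share in pairs:
        cumulative += share
        if cumulative >= total / 2:
            return limit
    return pairs[-1][0]

# Assumed example: three miner groups with different limits and hash shares.
miners = [(1_000_000, 0.30), (2_000_000, 0.45), (8_000_000, 0.25)]
median_limit = hashpower_weighted_median(miners)
print(median_limit)              # 2,000,000 bytes in this example

# Theorem 1, informally: a node whose own limit exceeds this median accepts
# every block the hash-power majority is willing to build on, so it keeps
# following the longest (most-work) chain.
my_limit = 4_000_000
print(my_limit > median_limit)   # True -> tracks consensus
```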
submitted by Peter__R to bitcoinxt [link] [comments]

We fight. There is Bitcoin or there is nothing.

Bitcoin's design, architecture and key aspects were set in the first release (v0.1)
Yes there were bugs, yes optimizations needed to be made (and will still need to be)
However, there is no fundamental reason that key aspects such as Natural Ordering need to be abandoned and changed to a different data structure.
There is no math theorem or proof or any evidence whatsoever that Natural Ordering is inadequate for global cash. It is a lie that we must remove the fundamental Natural Ordering property and replace it with anything else.
In fact, transactions are created, spent and verified in natural order. Removing this fundamental principle will do harm, and it will no longer be Bitcoin.
Some people know that Bitcoin Cash has everything it needs for global cash and massive scale, while other people will try to tell you that it does not have what it takes and needs the fundamental principle of Natural Ordering removed.
I'm of the opinion that we have everything we need right now and merely need miners to optimize their nodes and remove code-level bottlenecks in order to scale well into multi-gigabyte block sizes. Evidence and research by Bitcoin Unlimited, the terabyte-block research teams and other groups have confirmed that massive scale is achievable.
Beware of anyone who tells you that "Bitcoin Cash Natural Ordering Must Be Removed to Unlock the Future". It's a complete lie that we need to lose this elegant natural ordering in order to scale. It is not an "optimization", as many are calling it, but a complete re-design of bitcoin's UTXO management operations.
Demand actual mathematical/computational theorems and proofs from anyone telling you that we must lose the Natural Ordering principle and replace it with CTOR (or anything else).
We are going to spend the coming days and weeks compiling a massive list of "CTOR fallacies, omissions, and distortions". It will not be too difficult, since they have precisely zero evidence (actually worse, since tx validation will be slower under CTOR). We will simply go through every argument systematically and make airtight rebuttals, based on pure logic and reason, for everyone to see the deception unravel.
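For readers unfamiliar with the terminology, here is a small, purely illustrative sketch (hypothetical txids; the actual CTOR proposal also keeps the coinbase transaction first, which this toy ignores) contrasting a "natural" parent-before-child ordering with a canonical ordering sorted by txid:

```python
# Toy in-block transactions: (txid, txids of in-block parents it spends from).
txs = [
    ("c1", set()),       # creates an output
    ("a9", {"c1"}),      # spends c1's output
    ("b3", {"a9"}),      # spends a9's output
    ("f0", set()),       # unrelated transaction
]

def natural_order(txs):
    """Topological order: each transaction appears only after every in-block
    transaction it spends from (assumes no cyclic spends)."""
    placed, ordered, remaining = set(), [], list(txs)
    while remaining:
        for tx in remaining:
            txid, parents = tx
            if parents <= placed:
                ordered.append(txid)
                placed.add(txid)
                remaining.remove(tx)
                break
    return ordered

def canonical_order(txs):
    """CTOR-style order: sort by txid, ignoring spend dependencies."""
    return sorted(txid for txid, _ in txs)

print(natural_order(txs))    # ['c1', 'a9', 'b3', 'f0'] - parents come first
print(canonical_order(txs))  # ['a9', 'b3', 'c1', 'f0'] - a child may precede its parent
```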
submitted by etherbid to btc [link] [comments]

The PHANTOM Technical Whitepaper[Version 0.1] The PHANTOM Team PHANTOM Technical Whitepaper V2 pass1

Abstract: PHANTOM software introduces a new blockchain structure that aims to increase transaction rates. Please note that Phantom's design allows it to double its transaction rate, adding hundreds of nodes to its network. As of this article, the Phantom network has multiple nodes. Copyright © 2018 Phantomblock No one may use, copy, or distribute any material in the White Paper without permission. Disclaimer: This Phantom Technical White Paper v2 is for reference only. The accuracy or conclusions of this white paper are not guaranteed, but this white paper (1) does not make or expressly waive all express, implied, statutory or other forms of representations and warranties, including but not limited to: marketability, suitable for a particular purpose, suitability, purpose, title, or non-infringement; (2) the contents of this white paper are error-free; and (3) they do not infringe the rights of third party. The block and its affiliates shall not be liable for any kind of damages resulting from the use, reference or reliance on this white paper or any content. It is excluded even if you are informed of the possibility of such damage. In no case will it be blocked. Any person or its affiliates are responsible for any damages, losses, liabilities, expenses or any prior individuals or entities. Any kind of purse, whether direct, indirect, consequential, compensatory, incidental, actual, exemplary, punitive or special, for the use, citation or reliance of this white paper or any content contained in this document including but not limited to any business loss, income, profit, data, use, goodwill or other intangible losses. Phantom is a redesign from scratch and has been researching and developing for more than 2 years. The cornerstone of Phantom's design is to divide the mining network into smaller parts. The agreed-upon group is called as fragment and each group can process transactions in parallel. If Phantom's network is 8,000 nodes, Phantom will automatically create 10 sub-networks, each with a size of 800, in a decentralized manner without a trusted coordinator. Now, if a subnetwork can agree on a set of (for example) 100 transactions at a time, then 10 subnetworks can agree on a total of 1 transaction and there are a total of 1,000 transactions. The key to secure aggregation is to ensure that sub-networks handle different transactions (without overlapping) without the need for double spending. These assumptions are similar to existing blockchain-based solutions. We assume that the mining network will have a small number of malicious nodes/identifiers with a total computational power of frac, which is(<1/4) of the entire network. It is based on a standard scheme, but it has a new two-level block chain structure. The algorithm has a highly optimized consensus algorithm. Phantom also provides an innovative, special-purpose smart contract language and execution environment that leverages the underlying architecture to provide large-scale and highly-efficient complications to the orbital platform. The smart contract language in Phantom follows the data flow programming style, in which smart contracts can be represented as directed graphs. The node is an operation or func. The arc between the two nodes represents the output of the first node and the input to the second node. Once all the inputs to the node are in effect, and therefore become a data stream, the nodes are activated (or operated). 
w Contracts are essentially parallel and suitable for decentralized systems, including simple calculations such as search, sorting, and linear algebraic calculations. Complex calculations such as training neural networks, data mining, financial modeling, scientific calculations, and general MapReduce tasks. The smart contract language in Phantom follows the data flow programming style, in which smart contracts can be represented as directed graphs. The node is an operation or func. The arc between the two nodes represents the output of the first node and the input to the second node. Once all the inputs to the node are in effect, and therefore become a data stream, the nodes are activated (or operated). w contracts are essentially parallel and suitable for decentralized systems, including simple calculations such as search, sorting, and linear algebraic calculations, and complex calculations, such as training neural networks, data mining, financial modeling, scientific calculations, and general MapReduce tasks. Background System settings and assumptions Smart contract layer Free Usage Easy to upgrade and failback Low latency Sequential performance Parallel performance Transaction confirmation Proof of transaction as a bet (Tapos) Named permission levels Permission mapping Evaluation privilege Default privilege group Privilege calculation Forced procrastination Parallel execution of application certainty Minimum communication delay Completeness and integrity Conclusion Background Blockchain technology was introduced in 2008 with the introduction of Bitcoin. Since then, entrepreneurs and developers have been trying to promote this technology to support a wider range of applications. Applications on f single blockchain platform. Although many blockchain platforms have difficulty in supporting decentralized applications, application-specific blockchains such as Bitshare Distributed Exchange (2014) and Steem's social media platform (2016) have become the heavily used blockchains for thousands of active users every day. They do this by increasing performance to thousands of transactions per second. The delay was reduced to 1.5 seconds, eliminating the cost per transaction and providing a user experience similar to that provided by existing centralized services. The existing blockchain platform has a huge burden and limited computing power, which hinders widespread adoption of blockchain. System settings and assumptions Entities in Phantom. There are two main entities Phantom: Users. Users are external entities that use Phantom's infrastructure to transfer funds or run smart contracts. Nodes of the Phantom consensus project running in the network are rewarded for their services. In the rest of this white paper, we used the terms miner and nodes interchangeably. Phantom's mining network is further divided into several smaller networks called fragments. A group called a DS node is assigned to a fragment, and this group of ds nodes is also referenced. As a DS committee, each fragment and DS committee has a leader. Leaders play an important role in Phantom's consensus agreement and the overall operation of the network. Each user has a public private key pair for a digital signature scheme. Each node in the network has a related IP address and a public key that is used as an identifier. Phantom has an intrinsic token called payings, or pays for short. Payings grants platform usage rights to users when it is used to pay for transactions or smart running. 
Regarding the contract in this white paper, any reference to amount, value, balance, or payment should be assumed to be pays antagonistic model. We assume that the mining network has a small fraction of Byzantine nodes/identities at any point in time. Its total computational power is at most a part of the entire network, where 0≤f<1 and n is the total size of the network. The factor is an arbitrary constant that is bounded by the selected parameter to obtain a reasonable constant parameter. We further assume that the honest node S is reliable during the operation of the protocol, and failed or disconnected nodes are calculated in the Byzantine fraction. The Byzantine node can deviate from the protocol, delete or modify the message, and send different messages to the honest node. In addition, all Byzantine nodes can be collocated. We assume that the total computational power of the Byzantine opponent is still limited to the standard cryptographic hypothesis of the probability polygon, nominally the opponent. However, we believe that information of honest node (in the case of network partitions) can be delivered to honest destinations after a certain constraint δ, but δ may change. The bound δ is used to ensure activity but not safe. If this timing and connection assumption is not satisfied, the Byzantine node may significantly delay the message (simulation c gain), or worse, "eclipse" the network. In the case of network partitions, according to the requirements of the CAP theorem, one can only choose between consistency and availability. In Phantom, we OOSE must be consistent and sacrifice usability. Smart contract layer Phantom provides an innovative, special-purpose smart contract language and execution environment that leverages the underlying architecture to provide large-scale and efficient computing. Platform, in this section, we will introduce the smart contract layer using data flow programming architecture. A. Using Data Flow Paradigm to Calculate Fragments Phantom's smart contract language and its execution platform are designed to take advantage of the underlying network and transactional segmentation architecture. The segmentation architecture is ideal for running calculations. Efficient work, the key idea is as follows: Only a subset of the network (such as fragmentation) can perform calculations. We call this method computational sand. Compared with existing smart contract architectures (such as Islien), ding requires a very different approach to handle contracts when Phantom calculates the fragments. In Phantom, each complete N requires ODE to perform the same calculation to verify the calculation results and update the global state. Although safe, this completely redundant programming model is a daunting e for running large-scale computations that can be easily parallelized. Examples include simple calculations such as search, sort, and linear algebra calculations, as well as more complex calculations. UCH is used to train neural networks, data mining, and financial modeling. Phantom's computational segmentation method relies on a new smart contract language. It is not Turing-complete, but for many applications, scalability is much better. The smart contract language Liqa follows the data flow programming style. In the data flow execution model, the contract is represented by a directed graph. The nodes in the figure are primitive instructions or operations. 
Tedarcs between two nodes of Derek represent data dependencies between operations, which is the output to the first operation and the input to the second operation. A node is activated when all its i are activated (or operated). Nput is available. This is in stark contrast to the classic von Neumann execution model (as used in Phantom), where instruction, regar is executed only when the program counter reaches it, regardless of whether it can be executed earlier. The main advantage of using the data flow method is that multiple instructions can be executed at the same time. Therefore, if several nodes in the graph are activated at the same time, they can be executed, d parallel. This simple principle offers possibilities for massively parallel execution. To see this, we show a simple sequential program in Figure 1a, which has three instructions, 1b, w and e, rendering the data stream variants. Under the Von Neumann execution model, the program will run in three time units: first calculate A, then B and finally C. The model does not capture the fact that the DB can be calculated independently. On the other hand, the data flow program can calculate these two values in parallel. Once A and B are available, the node that performed the addition is activated. When running on Phantom's slicing network, each node in the data flow program can ultimately be attributed to a fragment, or even a subset of nodes in a fragment. Therefore, the architecture is id. The EAL is used for any MapReduce-style computational task, where some nodes perform mapping tasks and another node can act as a reducer to aggregate the work done by each mapper. To facilitate the execution of data flow programs, Phantom's SMART contract language has the following features: Operate globally shared virtual memory space across the entire blockchain. The middle unit is locked in the virtual shared memory space during execution. In the process of submitting to the block chain, check the results of the middle point. B. Smart security budgeting In addition to providing the benefits of parallelism from the data flow computing model, Phantom also provides a flexible security budgeting mechanism for computing sharding. This feature is Ena. Computational resources in a blockchain network are segmented by coverage over a consensus process. Calculation fragment allows Phantom users and application running on Phantom specifies the size of the negotiation group to be calculated for each subtask. Then, each consensus team will be assigned to calculate the same subtask and produce results. User specified conditions. After accepting the results, for example, all members of the consensus group must produce the same result, or 3/4 of them must produce the same result, and so on. Users of applications running on Phantom can budget for how much she wants to spend on computing and security. In particular, users running a particular deep learning application may choose sp. When running more different neural network tasks, more gas costs will be ended instead of repeating the same calculations for too many nodes. In this case, she can specify a smaller consensus group to run each neural network calculation. On the other hand, complex financial modeling algorithms that require higher accuracy may require a consensus group consisting of more nodes to compute the critical values. Some algorithms are better protected against potential tampering and manipulation. C. 
Extensible application Phantom aims to provide a platform to run highly scalable calculations in multiple areas such as data mining, machine learning, and financial modeling. Self-supporting effective s intensive Turing-complete programs are very challenging, and there are public blockchains that support Turing-complete smart contracts (eg, Etal um), and Phantom focuses on specific applications, but today it did not meet the requirements. Calculations with parallel computational load: Scientific calculations of big data are a typical example where a large amount of distributed computing power is required. Moreover, these calculations have a high degree of parallelism, such as linear algebra operations on large matrices, searches for massive data, and simulations on large datasets. Phantom PRO calculation tasks are a cheap and short-term turnaround option. In addition, Phantom can be used as an off-the-shelf, highly reliable resource with a large amount of computation if appropriate incentive mechanism, computational segmentation and security budgets are available. Training of neural networks: With the increasing popularity and use of machine learning (especially deep learning), an infrastructure must be established that allows for deep learning models of trai. n is about large data sets. As we all know, the training of large data sets is crucial to the accuracy of the model. For this reason, Phantom's computational segmentation and streaming languages will be particularly useful for building machine learning applications. It will serve as a basic structure. You can run tools like TensorFlow by performing different calculations independently on the task group of the Phantom node, to calculate gradients, apply activation functions, calculate training losses, etc. The application of high complexity and high accuracy algorithms: Unlike the above applications, some applications, such as calculations on financial models, may require high-precision calculations. Any slight deviation in a part of the calculation may result in a significant loss of investment. Such an application can be in the consensus group with more task nodes in Phantom to allow them to cross. Giving each other's calculation results, the key challenge of offloading the computational tasks of such financial modeling algorithms to a common platform, such as Phantom is the concern of dat. About the privacy and intellectual property of the algorithm, first of all, we envisage that some of the well-known parts of this calculation can be placed in Phantom for efficient and secure calculations first, and future research and development of wh ILE will further strengthen the protection of data privacy and intellectual property rights for such applications.. One of the concepts we need is the cumulative weight of a transaction: it is defined as the sum of its own weight for a particular transaction plus the own weight for all transactions directly or in Indir. Approval of transaction: the algorithm of cumulative weight calculation. These boxes represent transactions, and the small numbers in the SE corner of each box represent their own transactions. Weights: bold numbers indicate cumulative weights. For example, transaction F is approved directly or indirectly by transactions A, B, C, and E. The cumulative weight of F is 9 = 3 1 31 1, w HICH is the sum of the self-weight of F and the sum of the weights of A, B, C, and E. D. More We define the weights and related concepts of the transaction. 
The proportion of the weight of the transaction and the node to which the issuing node invests: in current IM for Phantom, the weight may only assume a value of 3n, where n is a positive integer and belongs to a non-empty interval of acceptable values. In fact, I do not know how Wei has achieved good results in practice. The important thing is that each transaction has a positive integer whose weight is attached to it. In general, the idea is that the transactions with greater weight are more "i". It's more important than a small weight deal. To avoid spam and other attack patterns, it is assumed that no entity can generate a large number of transactions with “acceptable” weights in the sho. One of the concepts we need is the cumulative weight of a transaction: it is defined as the sum of its own weight for a particular transaction plus its own weight for all transactions directly or in Indir. Approve this transaction. The algorithm for calculating cumulative weights is shown below. These boxes represent transactions, and the small number in the SE corner of each box represents its own transaction. Weights, bold numbers indicate cumulative weights. For example, transaction F is approved directly or indirectly by transactions A, B, C, and E. The cumulative weight of F is 9 = 3 1 31 1, w HICH is the sum of the self-weight of F and the sum of the weights of A, B, C, and E. Let's define "hint" as an unapproved transaction in a confusion graph. In the top chaotic snapshot, the only hints are A and C, when new transaction X arrives and approves A and C in b. Tottom snapshot, X becomes the only hint. The cumulative weight of all other transactions has increased by 3, and the weight of X itself has increased its own weight. We need to introduce two additional variables to discuss the approval algorithm. First of all, for the entangled business site, we will introduce its: Height: The length of the longest directional path leading to creation; Depth: The length of the longest reverse path pointing to some tip. There is a height of 1 and a height of 4 because of the reverse path F, D, B, A, and D has a height of 3 and a depth of 2. In addition, let us introduce the concept of scores. By definition, a transaction is the sum of its own weights for all transactions approved by the transaction, plus the transaction's own weight. The only tips are A and C. Transaction A approves transaction B, D, F, and G directly or indirectly, so A's score is 1+3+1+3+1=9. Similarly, the score for c is 1+1+1+3+1=7. In order to understand the arguments presented in this article, one can safely assume that all transactions have their own weight equal to one. From now on, we insist on this assumption. Under this assumption, Transaction X's cumulative weight is 1 plus the number of transactions that directly or indirectly approve X, with a score of 1 plus direct or the transactions of I, being recognized indirectly by X-rays. We note that in the definitions of this section, cumulative weight is the most important measure, although height, depth, and score will be briefly entered in some discussions. E. Splitting attack The following attack scheme for the Phantom algorithm. Under high load conditions, an attacker can try to split kelp into two branches and balance them. This will allow these two branches to continue to grow. The attacker must place at least two conflicting transactions at the beginning of the split to prevent the honest node from effectively connecting, while referring to the branches. 
Then, the attackers hope that about half of the networks can contribute to each branch so that they can "compensate" for randomization. Volatility means a relatively small personal computing power. If this technique works, the attacker will be able to spend the same money on both branches. To defend against such attacks, we need to use a "sharp threshold" rule, which makes it too difficult to maintain the balance between the two branches. An example of such a rule is to select the longest chain on the bitcoin network. Let us turn this concept into an entanglement that is suffering from a splitting attack. Assume that the total weight of the first branch is 537 and the total weight of the second branch is 8528. If an honest node selection probability is very close to the first branch of 1/2, then the attacker is likely to be able to maintain the balance between these branches. However, if the probability of the first branch of an honest n ODE selection is much greater than 1/2, the attacker may not be able to maintain balance. It is not possible to maintain balance e between two branches. The latter case is due to the fact that after inevitable random fluctuations, the network will quickly select one of the branches and give up the other. In order for the Phantom algorithm, it is necessary to choose a very fast decay function f and initiate random walks at deeper nodes, making it very likely to start walking before branching. In this case, the random walk will select the “heavy” branch with a higher probability, even if the cumulative weights between the competing branches are very small. It's worth noting that due to network synchronization issues, the attacker's mission is very difficult: they may not know much of the recently released transactions. Another effective MET defense against splitting attacks is to allow a powerful entity to publish a large number of transactions on one branch at a time, thus rapidly changing power balance. Make it difficult for attackers to handle this change. If an attacker manages to keep the split, then the nearest transaction will have only about 50% of the confirmed trust (SectSect). The branches will not grow. In this case, the "honest" node may decide to start approving transactions that occurred before the fork, bypassing t. It has the opportunity to approve conflicting transactions on separate lines. One can consider other versions of the hint selection algorithm. For example, if two large subtle points are seen in a node, it will select a node with a larger sum of its own weights before performing the Phantom prompt according to the above selection algorithm. For the future implementation, the following ideas may be worth considering. We can make the transition probability defined in (13) depend on both hx-hy and hx, so that the next step of the third stage can be changed. Markov chains are almost deterministic when the walkers are in deep consensus, and become more random when the walkers approach the end. This will help to avoid entering weaker branches, while ensuring that when these two hints are selected, there is an uncertainty of randomness. End (plural of conclusion):
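The cumulative-weight arithmetic quoted above appears to be 3 + 1 + 3 + 1 + 1 = 9: F's own weight of 3 plus the own weights of A, B, C and E. Here is a sketch that reproduces that number using a hypothetical approval graph chosen only to match the stated weights; the actual graph from the whitepaper's figure is not reproduced here.

```python
# Hypothetical DAG: each entry lists the transactions a given transaction
# directly approves. Own weights are chosen to match the example in the text.
own_weight = {"A": 1, "B": 3, "C": 1, "E": 1, "F": 3}
approves = {"A": ["B"], "B": ["F"], "C": ["F"], "E": ["C"], "F": []}

def approvers_of(target):
    """All transactions that approve `target` directly or indirectly."""
    result, changed = set(), True
    while changed:
        changed = False
        for tx, approved in approves.items():
            if tx in result:
                continue
            if target in approved or any(a in result for a in approved):
                result.add(tx)
                changed = True
    return result

def cumulative_weight(target):
    """Own weight plus the own weights of all direct and indirect approvers."""
    return own_weight[target] + sum(own_weight[t] for t in approvers_of(target))

print(sorted(approvers_of("F")))   # ['A', 'B', 'C', 'E']
print(cumulative_weight("F"))      # 9 = 3 + 1 + 3 + 1 + 1
```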
submitted by phantomusa to 195 [link] [comments]

Why Bitcoin is the Way that it is . . .or (How I Learned to Stop Worrying and Love the Volatility!)

Please forgive the long post here and bear with me until the end of it if you can!
It is an unfortunate reality that too many in the bitcoin community (even those who have been here a long time) do not understand, on an economic level, what is going on with bitcoin; because of that, they often get turned off or afraid of it, or even rage against the volatility and ignorantly blame all the evil others who "treat it like an investment" or "only use it to speculate" (as if these were wholly conscious shifts away from what we could otherwise be treating bitcoin as right now). Some of you have probably already seen my posts to this same effect; most will not have, and the concept of money developing on a market is quite foreign to most. To that end, and out of natural curiosity, I have spent years studying it and applying the economics of money to this amazing cryptocurrency of which I am so fond.

So, for everyone out there: For all the newbies who just don't get it. . .for all techies who could use a dose of understanding of the economic side. . . for all the veterans here who could use a little refresher and a more in-depth look at the incentives shaping our beloved ecosystem, let's take this point-by-point:

-There is no such thing as intrinsic value, even for gold or silver or dollars or diamonds or water...however, commodities and commodity monies have use-value outside of their use as money (e.g. gold as jewelry and as a conductor, etc.).
-Value is subjective to individuals, and a thing only has as much value as an individual assigns to it in regard to its serviceability toward an end. If other individuals also value it for their ends, it can be traded or exchanged and a market price develops.
-If this thing or commodity happens to possess certain properties (properties we now call "monetary properties") such as high degrees of fungibility, scarcity, divisibility, homogeneity, transportability, durability, etc., then it has the potential to have high usefulness (and thus likely high value and thus higher exchange price) as a network good. One bitcoin unit is not going to be worth much if 2 dudes have some units and no one else does (maybe some subjective, novelty or numismatic value to those two individuals), so will not fetch a high exchange price (and will not circulate much...which is irrelevant of course if you're not trying to use it as money).
-A bubble or spike in the price of tulips is not likely to be sustainable, or to be borne of future-looking value creation, because a tulip doesn't get any more useful the higher its price, and a tulip doesn't get any more useful the more people have them or the more ubiquitously they are held in society. A tulip is not much of a network good. A tulip does not possess good actualized or latent monetary properties. A tulip is lovely and useful/valuable aesthetically and for other non-monetary use-cases.
-A bitcoin (unit) does not have a whole lot of apparent use-value outside of use as money; outside of a vast network of holders. Bitcoin has exemplary monetary properties, or latent properties of a network good.
-money is extremely necessary and useful to an economy (useful to most all individuals who make up an economy), because it serves as a: 1. Medium of indirect exchange (alleviating barter inefficiencies/double coincidence of wants), 2. Unit of account (value is subjective and interpersonal value comparisons are impossible, yet we have to attempt to find common standard in order to enjoy the massive benefits of trade and division of labor), 3. Store of value. A thing is not money unless it fulfills these three roles to a high degree. BITCOIN IS NOT YET MONEY. Bitcoin is a proto-money virtual token or digital commodity. It is especially not widely used as a unit of account. This is important. This is part of why Bitcoin is so price volatile right now, and its achievement is also dependent upon bitcoin's volatility right now. We'll get to that in more depth.
-Money (for a large, non-trusting economy) can come about in only two main ways: you can have government create it, bootstrap its initial value and maintain a demand-sink through taxation, etc.; or you can have it develop voluntarily on the market. I leave for another discussion the relative merits of either origin and why one might prefer one over the other. I will assume that the reader at this point sees value in having Bitcoin become money in the formal economic sense that I already explained, or that the reader at least wants the asset to either increase in price or become more useful as a payment currency...in either case, the reader by default wants to see bitcoin progress further towards moneyness. That's right, you thought you didn't care about money...you thought you cared about the cool tech, or the payment network, or sticking it to the banks...well guess what? Joke's on you. You can pursue those aspects, because they are necessarily auxiliary to bitcoin becoming money and self-sustaining at all, so that you can continue to pursue your little pet aspects. And just like good design...you won't even know it's there. You won't know or appreciate when bitcoin is money and that it supports your penchant...but it won't matter; the invisible hand will have used you like a sheepskin for its much grander emergent designs.
-Bitcoin has clearly not been declared money by any government fiat, and it is pretty clearly undergoing market processes which are shaping its network effect, or its demise. It is up to voluntary market processes (and the absence of forceful interference by governments, whether intentional or unintentional, direct or far removed) to succeed or fail in its path towards moneyness, or any kind of self-sustaining existence you think it may have in its DNA.
-There are three main factors which affect the exchange-price volatility of a commodity: 1. liquidity/market cap/saturation, 2. normalization of use-cases (what can bitcoin be used for now?...not much. What will bitcoin be able to be used for when it has a high network effect?...money! Is there any other use-case liable to change or add to the money use-case?...not likely. Compare to gold or silver, which have the initial benefit of usefulness pre-network-effect...but which also suffer, as a money, from other use-cases being a source of shocks), and 3. mass psychology. We are excluding exogenous supply shocks here, for what should be obvious reasons as applied to bitcoin.
-Now, how does a market commodity like gold or bitcoin go from being an obscure token to being the unit-of-account, high-network-effect, most liquid, most saleable asset in an economy, i.e. money? Why would people use a token or commodity as a medium of indirect exchange when virtually no one else does? How does that token or commodity get initially distributed, widely and ubiquitously enough that it can be commonly valued and exchanged with good assurance that most everyone else wants this token, such that I can be confident in accepting it, knowing I will be able to recoup my value later in exchange with anyone else? How does it become a unit of account (where we "price" everything in it, within some economic sphere wherein bitcoin is serving as money) if its price changes so much, so often? How do we get to where constantly seeing goods and services for sale denominated in BTC helps give inertia to our price expectations and dampen the volatility?
-There's a catch-22 here. A coordination problem. A classic market failure. It is group-rational to have a money, a common token of indirect exchange...but before it is already money, it is individually rational to reject a token as worthless (aside from whatever other uses you may or may not have for it). There are free-riding and assurance problems as the token matures (e.g. why should I invest in or hold the token in order to help bring about money, when it looks like everyone else might do it for me? What price do we all settle on for the unit token in the future equilibrium, in a more stable distribution era, i.e. when the market is saturated and supply has normalized?). This is why, historically, we don't see many truly stateless monies (or stateless market orders for that matter), but rather high-trust credit systems among tribes, or money tied to wergeld offerings to kings/states/priests. But this does not mean that market money is impossible or has never existed, nor does it mean it is without value to pursue. On the contrary: the resolution of market failure into more efficient and effective market institutions has been the secular trend of most of economic history. We are not stopping at central-bank-issued fiat notes (as necessary or evil as you may think this mode may be).
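As a minimal sketch of that coordination problem (the payoff numbers below are hypothetical, chosen only to illustrate the structure), consider a two-player adoption game in Python: adopting the token only pays off if the other party adopts too, so both "everyone adopts" and "nobody adopts" are self-reinforcing outcomes.

    # Hypothetical payoffs for a two-player token-adoption game (illustration only).
    # Mutual adoption beats the status quo, but adopting alone leaves you holding
    # a token nobody else wants, so rejecting is the safe individual choice.
    PAYOFFS = {
        ("adopt", "adopt"):   (3, 3),   # network effect realized
        ("adopt", "reject"):  (-1, 0),  # lone adopter bears the cost
        ("reject", "adopt"):  (0, -1),
        ("reject", "reject"): (0, 0),   # no new money emerges
    }

    def best_response(opponent):
        """My payoff-maximizing strategy, given what the other player does."""
        return max(("adopt", "reject"), key=lambda mine: PAYOFFS[(mine, opponent)][0])

    print(best_response("adopt"))   # adopt  -> (adopt, adopt) is an equilibrium
    print(best_response("reject"))  # reject -> (reject, reject) is also an equilibrium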
-There need to be a number of phases of "bootstrapping" in order to short-circuit this catch-22; there needs to be some kind of market mechanism to alleviate the free-riding and assurance problems; and there needs to be an extended period of price discovery throughout these phases and processes to coordinate the exchange value.
-Some examples of bootstrapping taking place: the first bitcoin transactions, where someone took the risk of assigning a rather arbitrary value to their tokens (yes, the cost of mining is an intuitive starting point, a Schelling point, but still arbitrary) and exchanged them for some pizzas. Holders holding as an ideological commitment strategy...while the rest of the world looks on and laughs. Evangelizing the use of the currency (some methods are more tasteful and effective than others, of course). At some point, some of us will take a chance, take a risk, and "price" our goods and services in BTC (even though the volatility is still not down to where most people will be comfortable doing so). We are highly educated/intelligent, highly technologically empowered beings in the 21st century; economics and the study of money is a science; there is massive precedent for money, for its need, and for the ways it can come about and compete with other monies; we know these things consciously; we are not ants or simple nodes who in aggregate form a complex system borne of the interplay of individually simple rule sets; and so not all our market processes need be completely emergent in the granular sense. We can plan even our voluntary economies and analytically (rather than just heuristically) organize their structure to a great extent, through entrepreneurial behavior, leadership, and forward-looking risk taking. In other words, bootstrapping bitcoin means, in large part, willing it into position as money (yes, that's right, buttcoiners: we are imagining, we are playing money...we are faking it until we make it). Sounds like fodder for detractors? That's fine. It is what it is. The coordination problems are real. The alternative is state force and government money, for good or for bad. We've got plenty of government money and we have plenty of very nice payment networks: let's do something new and interesting!
-Markets work, in-band and out-of-band, through all sorts of mechanisms in order to alleviate or overcome significant market failure, including free-riding and assurance problems. Some of these mechanisms are: lottery-like high risk/reward scenarios, philanthropy, assurance contracts, dominant assurance contracts, value-adds, and combinations of these. The public good of getting a new money or monetary system competing on the market can be produced on shorter time-scales than what these market failures have limited people to in the past. We have modern travel and communication networks, information technology, and mature and robust financial and futures/derivatives markets. We have more wealth than mankind has ever had, which lets us lengthen our time-preferences, weather extended periods of volatility, and scientifically (analytically) see the logical ends and test our hypotheses. But on top of all of this, we have something which has always been an unconscious market response to the price discovery of a new commodity and to pushing through free-riding: initial VOLATILITY! It acts like a lottery of sorts: it allows the more risk-averse to free-ride, avoiding exposure to loss but forgoing massive individual gains...yet it incentivizes the risk-tolerant to take risks, absorb or suffer losses, and also reap massive individual gains; thus the commodity is adopted and distributed wider and wider (with diminishing returns, diminishing risk, and diminishing free-riding). This is why some gold-bugs get hung up on "intrinsic value" or "no backing!" or deride bitcoin for its apparent lack of usefulness outside of money. You see, the utility of gold or silver outside of money is not what made them good money...it is what made them into money at all (in market-produced metallic monetary systems, or to the extent they were not state-engendered). Those non-monetary use-cases put gold and silver into the hands of many people, which helped bootstrap those monies. But it was ultimately their good monetary properties (scarcity, durability, divisibility, etc.) which maintained gold and silver as a network good, and which made buying those commodities different from buying tulips in a craze. Bitcoin has its in-built payment network, which inseparably imputes usefulness to the scarce token, or share, of the network. This property is auxiliary to bitcoin becoming money, but such a property is not universally a requirement; nor are non-monetary use-cases required indefinitely in order to prop up or backstop the value of a money. That's what it means to be a medium of indirect exchange: it is valued for the fact that everyone else is in a state (call it a useful mass-delusion if you will) of valuing the token, and because it is a share of a highly valuable public good, namely the monetary system as a whole. Remember, value is subjective and need not be tied to anything concrete or tangibly "useful". There is no conundrum here, no infinite regression. Value and the nature of economic goods are abstract. The only regression required here, if any, is a reference to the price at the prior transaction. And if you insist further, you can trace all the way back, through all the speculation and all the price swings and all the purchases...back to the first Bitcoin Pizza Day transaction, and witness the bootstrapping of one of bitcoin's first exchange prices coming into existence.
You must deny the existence of a current price in order to assert the extreme (un-nuanced) versions of Menger's account of the origins of money and Mises' regression theorem, and thereby to deny the possibility that this "un-backed", "no use-value" currency could indeed have a market price without having spiraled into death throes long ago.
-Price discovery is pretty self-explanatory, and it is of course talked about ad nauseam. But it does bear repeating that speculation is not bad. In fact, every time you buy or sell or trade something, you are speculating. If you arbitrarily view monetary trades, or monetary trades of a certain size or risk-level, or trades intended to change the mass psychology of the market, as "manipulation" or as somehow morally or ethically questionable...you can take it up with the morality police, or something. I don't know. But if you think that speculative activity is somehow unproductive, or not conducive to bitcoin becoming money or otherwise successful, then you are manifestly wrong. I've already given ample reasoning as to why, and you need only ask yourself one question to understand further: how else do you propose that millions of (mostly) unacquainted strangers across planet earth coordinate what the exchange value of a bitcoin should be at any given moment? If you are ignorant enough of economics to believe that some kind of government regulation which in effect sets prices, or installs price ceilings or floors or trading limitations, is good for bitcoin (whatever that means to you)...I invite you to happily create your own exchanges and markets which enforce those specific rules, but allow others to trade in arenas where they are not subject to them. We are producing money through massive coordination challenges and market failures...you can find ways of setting up voluntary markets or walled gardens which effectively enforce some trading rules. But should you be unconvinced and still choose the old, neanderthal ways of the blunt hammer of the state, just remember: prices mean something. They communicate essential information. You cannot arbitrarily attenuate, accentuate, or distort those signals and still hope to achieve a money truly picked by the market, by individual preferences. May the best arena win (truly), and may one win for some and another win for others (diversity is beautiful). But know that no matter what you do, you will never be able to globally enforce such a regime, and so you can only hamper the free trade of bitcoin; you cannot stop it, and I do not believe you can even hamper it enough to create a stagnation which reverses bitcoin's development into money. But you will certainly not help it.
-Lastly, it is also important to acknowledge that while the payment-network aspect of bitcoin is inseparable from the token, the concepts of payment network and money token are distinct. Paper money has sneaker-net, fiat in general has bank wires of all sorts and credit card networks and PayPals, etc., and bitcoin has its in-built Bitcoin payment network. It is of course necessary to have at least a rudimentary payment network built in as the foundation of a crypto-token, since we meatbags cannot manipulate the virtual directly...however, don't let this create an over-emphasis on the importance of the payment network. We have many payment networks already. Bitcoin's has its strengths (even in its current primitive state) and its weaknesses as compared to bank networks and credit card networks. I really, really hope not to start any kind of resurgence of drama over the block-size issue, and this is not a cryptic slight at either side of that debate (I have always been quite sympathetic to aspects of both arguments anyway). I truly hope that I have helped more of you also see how the money aspect transcends the payment network: the payment network is at the very least necessary, but it does not have to achieve any particular level of utility or functionality...beyond aiding the fledgling token in becoming ubiquitously held, such that its good monetary properties can take over and network effects can ensconce the token as money. May ALL the on-chain solutions and further layers work towards not only a highly useful payment network, but one which propels its native token to the full status of money.
submitted by kwanijml to btc

The PHANTOM Technical Whitepaper

1. PHANTOM BLOCKCHAIN
Cryptocurrencies and smart contract platforms are becoming a shared computational resource. One could view these platforms as a new generation of computers that synchronize over thousands of individual machines. However, existing cryptocurrencies and smart contract platforms have widely recognized limitations in scaling. Average transaction rates in Bitcoin, Ethereum, and related cryptocurrencies have been limited to below 10 (usually about 3-7) transactions per second (Tx/s). As the number of applications utilizing public cryptocurrencies and smart contract platforms grows, the demand for processing high transaction rates on the order of hundreds of Tx/s is increasing. A global payment network would likely require tens of thousands of Tx/s in capacity. Can we build a decentralized and open blockchain platform capable of processing at that scale? The limitations in scaling up existing protocols are somewhat fundamental: they are rooted in the design of the consensus and network protocols. Therefore, even though re-engineering the parameters of the existing protocols in, say, Bitcoin or Ethereum (e.g., the block size or the block rate) may yield some speedup, supporting applications that need to process thousands of Tx/s requires rethinking the underlying protocols from scratch. We present PHANTOM, a new blockchain platform that is designed to scale in transaction rates. As the number of miners in PHANTOM increases, its transaction rates are expected to increase as well. Specifically, PHANTOM's design allows its transaction rates to roughly double with every few hundred nodes added to its network. As of this writing, the Ethereum mining network has over 30,000 nodes. PHANTOM is a redesign from scratch and has been under research and development for over 2 years. The cornerstone of PHANTOM's design is the idea of sharding: dividing the mining network into smaller consensus groups called shards, each capable of processing transactions in parallel. If the mining network of PHANTOM has, say, 8000 miners, PHANTOM automatically creates 10 sub-networks, each of 800 miners, in a decentralized manner without a trusted coordinator. Now, if one sub-network can agree on a set of (say) 100 transactions in one time epoch, then 10 sub-networks can agree on a total of 1000 transactions in aggregate. The key to aggregating securely is to ensure that sub-networks process different transactions (with no overlaps) without double-spending. The assumptions are similar to those of existing blockchain-based solutions: we assume that the mining network will have a fraction of malicious nodes/identities with a total computational power that is a fraction (< 1/4) of that of the complete network. PHANTOM is based on a standard proof-of-work scheme; however, it has a new two-layer blockchain structure and features a highly optimized consensus algorithm for processing shards.
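A back-of-the-envelope Python sketch of the sharded-throughput arithmetic above (the shard size of 800 and the 100 transactions per shard per epoch are the whitepaper's illustrative figures, not measured parameters):

    # Illustrative sharded-throughput arithmetic, using the figures from the example above.
    def aggregate_tx_per_epoch(num_miners, shard_size=800, tx_per_shard=100):
        num_shards = num_miners // shard_size   # 8000 miners -> 10 shards of 800
        return num_shards * tx_per_shard        # shards handle disjoint transactions

    print(aggregate_tx_per_epoch(8000))    # 1000 transactions per epoch
    print(aggregate_tx_per_epoch(16000))   # 2000 -> throughput grows with the network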
PHANTOM further comes with an innovative special-purpose smart contract language and execution environment that leverage the underlying architecture to provide a large-scale and highly efficient computation platform. The smart contract language in PHANTOM follows a dataflow programming style, where a smart contract can be represented as a directed graph. Nodes in the graph are operations or functions, while an arc between two nodes represents the output of the first and the input to the second. A node gets activated (or becomes operational) as soon as all of its inputs become valid, and thus a dataflow contract is inherently parallel and suitable for decentralized systems such as PHANTOM. The sharded architecture is ideal for running large-scale computations that can be easily parallelized. Examples range from simple computations such as search, sorting and linear algebra, to more complex computations such as training a neural net, data mining, financial modeling, scientific computing and, in general, any MapReduce task.
This document outlines the technical design of the PHANTOM blockchain protocol. PHANTOM has a new, conceptually clean and modular design. It has six layers: the cryptographic layer (Section III), the data layer (Section IV), the network layer (Section V), the consensus layer (Section VI), the smart contract layer (Section VII) and the incentive layer (Section VIII). Before we present the different layers, we first discuss the system settings, underlying assumptions and threat model in Section II.
2. SYSTEM SETTING AND ASSUMPTIONS
Entities in PHANTOM. There are two main entities in PHANTOM: users and miners. A user is an external entity who uses PHANTOM's infrastructure to transfer funds or run smart contracts. Miners are the nodes in the network who run PHANTOM's consensus protocol and are rewarded for their service. In the rest of this whitepaper, we use the terms miner and node interchangeably. PHANTOM's mining network is further divided into several smaller networks referred to as shards. A miner is assigned to a shard by a set of miners called DS nodes; this set is also referred to as the DS committee. Each shard and the DS committee has a leader. The leaders play an important role in PHANTOM's consensus protocol and for the overall functioning of the network. Each user has a public/private key pair for a digital signature scheme, and each miner in the network has an associated IP address and a public key that serves as its identity.
Intrinsic token. PHANTOM has an intrinsic token called Zillings, or ZILs for short. Zillings give platform usage rights to users in terms of paying for transaction processing or running smart contracts. Throughout this whitepaper, any reference to amount, value, balance or payment should be assumed to be counted in ZIL.
Adversarial model. We assume that the mining network at any point in time contains a fraction of byzantine nodes/identities whose total computational power is at most a fraction f of that of the complete network, where 0 ≤ f < 1/4 and n is the total size of the network. The bound 1/4 is an arbitrary constant, bounded away from the 1/3 byzantine-fault-tolerance limit, selected to yield reasonable constant parameters. We further assume that honest nodes are reliable during protocol runs, and that failed or disconnected nodes are counted in the byzantine fraction. Byzantine nodes can deviate from the protocol, drop or modify messages, and send different messages to honest nodes. Further, all byzantine nodes can collude together. We assume that the total computation power of the byzantine adversaries is still confined to the standard cryptographic assumption of probabilistic polynomial-time adversaries. We also assume that messages from honest nodes (in the absence of network partition) can be delivered to honest destinations after a certain bound δ, but δ may be time-varying. The bound δ is used to ensure liveness but not safety.
In case such timing and connectivity assumptions are not met, it becomes possible for byzantine nodes to delay messages significantly (simulating a gain in computation power) or, worse, to "eclipse" the network. In the event of a network partition, as dictated by the CAP theorem, one can only choose between consistency and availability. In PHANTOM, we choose to be consistent and sacrifice availability.
3. SMART CONTRACT LAYER
PHANTOM comes with an innovative special-purpose smart contract language and execution environment that leverages the underlying architecture to provide a large-scale and highly efficient computation platform. In this section, we present the smart contract layer, which employs a dataflow programming architecture.
A. Computational Sharding using the Dataflow Paradigm
PHANTOM's smart contract language and its execution platform are designed to leverage the underlying network and transaction sharding architecture. The sharded architecture is ideal for running computation-intensive tasks in an efficient manner. The key idea is the following: only a subset of the network (such as a shard) performs a given computation. We refer to this approach as computational sharding. Computational sharding in PHANTOM takes a very different approach to processing contracts than existing smart contract architectures (such as Ethereum). In Ethereum, every full node is required to perform the same computation to validate the outcome of the computation and update the global state. Although secure, such a fully redundant programming model is prohibitively expensive for running large-scale computations that can be easily parallelized, ranging from simple computations such as search, sorting and linear algebra to more complex computations such as training a neural net, data mining and financial modeling. PHANTOM's computational sharding approach relies on a new smart contract language that is not Turing-complete but scales much better for a multitude of applications. The smart contract language in PHANTOM follows a dataflow programming style. In the dataflow execution model, a contract is represented by a directed graph. Nodes in the graph are primitive instructions or operations. Directed arcs between two nodes represent the data dependencies between the operations, i.e., the output of the first and the input to the second. A node gets activated (or becomes operational) as soon as all of its inputs are available. This stands in contrast to the classical von Neumann execution model (as employed in Ethereum), in which an instruction is only executed when the program counter reaches it, regardless of whether or not it could be executed earlier. The key advantage of the dataflow approach is that more than one instruction can be executed at once: if several nodes in the graph become activated at the same time, they can be executed in parallel. This simple principle provides the potential for massive parallel execution. To see this, we present a simple sequential program in Figure 1a with three instructions and, in Figure 1b, its dataflow variant. Under the von Neumann execution model, the program would run in three time units: first computing A, then B and finally C. This model does not capture the fact that A and B can be computed independently. The dataflow program, on the other hand, can compute these two values in parallel; the node that performs the addition gets activated as soon as A and B are available.
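A minimal Python sketch of the Figure 1 example under these two execution models (the concrete instructions behind A and B are assumed here purely for illustration): the dataflow view lets A and B evaluate concurrently, and the addition node fires only once both of its inputs are available.

    # Toy dataflow evaluation of the three-instruction example:
    # A and B have no mutual dependency, so their "nodes" can run in parallel;
    # the node computing C = A + B activates only when both inputs are ready.
    from concurrent.futures import ThreadPoolExecutor

    def compute_a():       # independent instruction (values are illustrative)
        return 2 * 3

    def compute_b():       # independent instruction
        return 10 - 4

    with ThreadPoolExecutor() as pool:
        fut_a = pool.submit(compute_a)       # both nodes are activated at once
        fut_b = pool.submit(compute_b)
        c = fut_a.result() + fut_b.result()  # the "+" node waits on its two input arcs

    print(c)  # 12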
When run on PHANTOM's sharded network, each node in the dataflow program can eventually be attributed to a single shard, or even to a small subset of nodes within a shard. Hence the architecture is ideal for any MapReduce-style computational task, where some nodes perform the mapping task while other nodes work as reducers to aggregate the work done by each mapper. In order to facilitate the execution of a dataflow program, PHANTOM's smart contract language has the following features: 1) operation over a virtual memory space shared globally across the entire blockchain; 2) locking of intermediate cells in the virtual shared memory space during execution; 3) checkpointing of intermediate results during execution, committed to the blockchain.
B. Smart Security Budgeting
Apart from the benefits of parallelization offered by the dataflow computation model, PHANTOM further provides a flexible security budgeting mechanism for computational sharding. This feature is enabled by sharding the computational resources in the blockchain network via an overlay above the consensus process. Computational sharding allows users of PHANTOM, and applications running on PHANTOM, to specify the size of the consensus group that computes each subtask. Each consensus group is then tasked to compute the same subtask and produce the results. The user specifies the condition for acceptance of the results, e.g., all members of the consensus group must produce the same result, or 3/4 of them must produce the same result, etc. A user of an application running on PHANTOM can thus budget how much she wants to spend on computation and on security, respectively. In particular, a user running a deep learning application may prefer to spend her gas fees on running more distinct neural network tasks rather than on having many nodes repeat the same computation; in this case, she can specify a smaller consensus group for each neural network computation. On the other hand, a sophisticated financial modeling algorithm that requires greater precision may task consensus groups with a larger number of nodes to compute the critical portions of the algorithm, so as to be more resilient against potential tampering and manipulation of the results.
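A small sketch, under assumed names and a hypothetical 3/4 threshold, of the kind of acceptance rule described above: the consensus group's reported results are accepted only if a large enough fraction of the group agrees on the same value.

    from collections import Counter

    def accept_result(results, threshold=0.75):
        """results: values reported by the members of one consensus group.
        Return the most common value if at least `threshold` of the group
        agrees on it; otherwise return None (result rejected)."""
        value, count = Counter(results).most_common(1)[0]
        return value if count / len(results) >= threshold else None

    print(accept_result([42, 42, 42, 7]))                  # 42   (3/4 agreement)
    print(accept_result([42, 42, 7, 7]))                   # None (no 3/4 majority)
    print(accept_result([42, 42, 42, 42], threshold=1.0))  # 42   (unanimity required)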
C. Scalable Applications: Examples
PHANTOM aims to provide a platform for running highly scalable computations in a multitude of fields such as data mining, machine learning and financial modeling, to name a few. Since supporting efficient sharding of Turing-complete programs is very challenging, and there already exist public blockchains that support Turing-complete smart contracts (e.g., Ethereum), PHANTOM focuses on specific applications with requirements not met today.
1) Computation with a parallelizable load: Scientific computing over large data is a typical example where one requires a large amount of distributed computing power. Moreover, most of these computations are highly parallelizable; examples include linear algebra operations on large matrices, search over huge amounts of data, and simulation over large datasets, among others. PHANTOM provides such computing tasks with an inexpensive, short-turnaround alternative. Moreover, with the right incentives in place for computational sharding and security budgeting, PHANTOM can be leveraged as a readily available and highly reliable resource for such heavy computational loads.
2) Training neural nets: With the ever-growing popularity and use cases of machine learning (in particular deep learning), it is imperative to have an infrastructure that allows deep learning models to be trained on large datasets. It is well known that training on large datasets is crucial to a model's accuracy. To this end, PHANTOM's computational sharding and dataflow language will be particularly useful for building machine learning applications. It can serve as an infrastructure that runs tools like TensorFlow[] by tasking groups of PHANTOM nodes to independently perform different computations such as computing gradients, applying activation functions, computing training loss, etc.
3) Applications with high-complexity, high-precision algorithms: Different from the applications mentioned above, some applications, such as computations over financial models, may require high precision; any minor deviation in one part of the computation may incur heavy losses on investments. Such applications can task consensus groups with a larger number of nodes in PHANTOM so that they cross-check each other's computational results. The key challenge in offloading the computational tasks of such financial modeling algorithms to a public platform such as PHANTOM is the concern for data privacy and the intellectual property of the algorithms. To begin with, we envision that certain well-known portions of such computations can be placed on PHANTOM for efficient and secure computation, while future research and development of PHANTOM will further strengthen the protection of data privacy and intellectual property for such applications.
4. Weights and more
In this section we define the weight of a transaction and related concepts. The weight of a transaction is proportional to the amount of work that the issuing node invested into it. In the current implementation of PHANTOM, the weight may only assume values 3^n, where n is a positive integer that belongs to some nonempty interval of acceptable values[]. In fact, it is irrelevant to know how the weight was obtained in practice; it is only important that every transaction has a positive integer, its weight, attached to it. In general, the idea is that a transaction with a larger weight is more "important" than a transaction with a smaller weight. To avoid spamming and other attack styles, it is assumed that no entity can generate an abundance of transactions with "acceptable" weights in a short period of time. One of the notions we need is the cumulative weight of a transaction: it is defined as the own weight of a particular transaction plus the sum of the own weights of all transactions that directly or indirectly approve this transaction. The algorithm for cumulative weight calculation is illustrated in Figure 1. The boxes represent transactions, the small number in the SE corner of each box denotes own weight, and the bold number denotes the cumulative weight. For example, transaction F is directly or indirectly approved by transactions A, B, C, E. The cumulative weight of F is 9 = 3 + 1 + 3 + 1 + 1, which is the sum of the own weight of F and the own weights of A, B, C, E. Let us define "tips" as unapproved transactions in the tangle graph. In the top tangle snapshot of Figure 1, the only tips are A and C. When the new transaction X arrives and approves A and C, as in the bottom tangle snapshot, X becomes the only tip, and the cumulative weight of all other transactions increases by 3, the own weight of X.
We need to introduce two additional variables for the discussion of approval algorithms. First, for a transaction site on the tangle, we introduce its height, the length of the longest oriented path to the genesis, and its depth, the length of the longest reverse-oriented path to some tip. For example, G has height 1 and depth 4 in Figure 2 because of the reverse path F, D, B, A, while D has height 3 and depth 2. Let us also introduce the notion of score. By definition, the score of a transaction is the sum of the own weights of all transactions approved by this transaction, plus the own weight of the transaction itself. In Figure 2, the only tips are A and C. Transaction A directly or indirectly approves transactions B, D, F, G, so the score of A is 1+3+1+3+1 = 9. Analogously, the score of C is 1+1+1+3+1 = 7. In order to understand the arguments presented in this paper, one may safely assume that all transactions have an own weight equal to 1; from now on, we stick to this assumption. Under this assumption, the cumulative weight of transaction X becomes 1 plus the number of transactions that directly or indirectly approve X, and the score becomes 1 plus the number of transactions that are directly or indirectly approved by X. Let us note that, among the metrics defined in this section, the cumulative weight is (by far!) the most important one, although height, depth, and score will briefly enter some discussions as well.
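The definitions above can be made concrete with a short sketch on a toy DAG (the edge list below is assumed for illustration and is not the exact graph of Figures 1 and 2); every transaction is given own weight 1, as assumed in the text, and an edge x -> y means "x approves y".

    # Toy tangle: approves[x] is the set of transactions x directly approves.
    approves = {
        "X": {"A", "C"},
        "A": {"B"},
        "C": {"B"},
        "B": {"genesis"},
        "genesis": set(),
    }
    OWN_WEIGHT = 1  # simplifying assumption used in the text

    def approved_by(tx):
        """All transactions directly or indirectly approved by tx."""
        out, stack = set(), list(approves[tx])
        while stack:
            y = stack.pop()
            if y not in out:
                out.add(y)
                stack.extend(approves[y])
        return out

    def cumulative_weight(tx):
        # own weight plus own weights of everything that (in)directly approves tx
        approvers = {x for x in approves if tx in approved_by(x)}
        return OWN_WEIGHT * (1 + len(approvers))

    def score(tx):
        # own weight plus own weights of everything tx (in)directly approves
        return OWN_WEIGHT * (1 + len(approved_by(tx)))

    def height(tx):
        # length of the longest oriented path down to the genesis
        return 0 if not approves[tx] else 1 + max(height(y) for y in approves[tx])

    print(cumulative_weight("genesis"), score("X"), height("X"))  # 5 5 3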
Figure 1: DAG with weight assignments before and after a newly issued transaction, X. The boxes represent transactions, the small number in the SE corner of each box denotes own weight, and the bold number denotes the cumulative weight.
Figure 2: DAG with own weights assigned to each site, and scores calculated for sites A and C.
5. Splitting attack
Aviv Zohar suggested the following attack scheme against the proposed MCMC algorithm. In the high-load regime, an attacker can try to split the tangle into two branches and maintain the balance between them, which would allow both branches to continue to grow. The attacker must place at least two conflicting transactions at the beginning of the split to prevent an honest node from effectively joining the branches by referencing both simultaneously. The attacker then hopes that roughly half of the network contributes to each branch, so that they would be able to "compensate" for random fluctuations even with a relatively small amount of personal computing power. If this technique works, the attacker would be able to spend the same funds on the two branches. To defend against such an attack, one needs a "sharp-threshold" rule that makes it too hard to maintain the balance between the two branches. An example of such a rule is selecting the longest chain on the Bitcoin network. Let us translate this concept to a tangle undergoing a splitting attack. Assume that the first branch has total weight 537 and the second branch has total weight 528. If an honest node selects the first branch with probability very close to 1/2, then the attacker would probably be able to maintain the balance between the branches. However, if an honest node selects the first branch with probability much larger than 1/2, then the attacker would probably be unable to maintain the balance: after an inevitable random fluctuation, the network would quickly choose one of the branches and abandon the other. In order to make the MCMC algorithm behave this way, one has to choose a very rapidly decaying function f, and initiate the random walk at a node with large depth so that it is highly probable that the walk starts before the branch bifurcation. In this case, the random walk chooses the "heavier" branch with high probability, even if the difference in cumulative weight between the competing branches is small. It is worth noting that the attacker's task is very difficult because of network synchronization issues: they may not be aware of a large number of recently issued transactions[]. Another effective method for defending against a splitting attack would be for a sufficiently powerful entity to instantaneously publish a large number of transactions on one branch, thus rapidly changing the power balance and making it difficult for the attacker to respond. If the attacker nevertheless manages to maintain the split, the most recent transactions will only have around 50% confirmation confidence (Section 1) and the branches will not grow. In this scenario, the "honest" nodes may decide to start selectively giving their approval to transactions that occurred before the bifurcation, bypassing the conflicting transactions on the split branches.
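A hedged sketch of that sharp-threshold effect, assuming a transition rule of the form f(d) = exp(-alpha*d) in the cumulative-weight deficit d (an illustrative functional form; the whitepaper's exact rule in equation (13) is not reproduced here): with a slowly decaying f the two branches look almost equally attractive, while with a rapidly decaying f the walker picks the heavier branch almost surely, even for the small 537-versus-528 gap.

    import math

    def heavier_branch_probability(w_heavy, w_light, alpha):
        """Chance that a walker steps toward the heavier of two competing branches,
        when each branch is weighted by f(deficit) = exp(-alpha * deficit).
        (Assumed functional form, for illustration only.)"""
        p_heavy = math.exp(-alpha * 0)                    # deficit 0 for the heavier branch
        p_light = math.exp(-alpha * (w_heavy - w_light))  # decays quickly in the weight gap
        return p_heavy / (p_heavy + p_light)

    print(heavier_branch_probability(537, 528, alpha=0.001))  # ~0.502: attacker can keep the split balanced
    print(heavier_branch_probability(537, 528, alpha=1.0))    # ~0.9999: the split collapses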
One may consider other versions of the tip selection algorithm. For example, if a node sees two big subtangles, it could choose the one with the larger sum of own weights before performing the MCMC tip selection algorithm outlined above. The following idea may also be worth considering for future implementations: one could make the transition probabilities defined in (13) depend on both Hx − Hy and Hx in such a way that the next step of the Markov chain is almost deterministic when the walker is deep in the tangle, yet becomes more random when the walker is close to the tips. This would help avoid entering the weaker branch while assuring sufficient randomness when choosing the two tips to approve.
Conclusions: We considered attack strategies in which an attacker tries to double-spend by "outpacing" the system. The "large weight" attack means that, in order to double-spend, the attacker tries to give a very large weight to the double-spending transaction so that it outweighs the legitimate subtangle. This strategy would be a menace to the network if the allowed own weight were unbounded. As a solution, we may limit the own weight of a transaction from above, or set it to a constant value. In the situation where the maximal own weight of a transaction is m, the best attack strategy is to generate transactions with own weight m that reference the double-spending transaction. When the input flow of "honest" transactions is large enough compared with the attacker's computational power, the probability that the double-spending transaction attains a larger cumulative weight can be estimated using the formula given earlier in the paper. The attack method of building a "parasite chain" makes approval strategies based on height or score obsolete, since the attacker's sites will have higher values for these metrics than the legitimate tangle. On the other hand, the MCMC tip selection algorithm described in Section 4.1 appears to provide protection against this kind of attack, and it also offers protection against lazy nodes as a bonus.
6. Resistance to quantum computations
It is known that a sufficiently large quantum computer could be very efficient for handling problems that rely on trial and error to find a solution. The process of finding a nonce in order to generate a Bitcoin block is a good example of such a problem. As of today, one must check an average of 2^68 nonces to find a suitable hash that allows a new block to be generated. It is known that a quantum computer would need Θ(√N) operations to solve a problem that is analogous to the Bitcoin puzzle stated above, whereas the same problem requires Θ(N) operations on a classical computer. Therefore, a quantum computer would be around √(2^68) = 2^34 ≈ 17 billion times more efficient at mining the Bitcoin blockchain than a classical computer. It is also worth noting that if a blockchain does not increase its difficulty in response to increased hashing power, there would be an increased rate of orphaned blocks. For the same reason, a "large weight" attack would also be much more efficient on a quantum computer. However, capping the weight from above, as suggested in Section 4, would effectively prevent a quantum computer attack as well. This is evident in iota because the number of nonces that one needs to check in order to find a suitable hash for issuing a transaction is not unreasonably large.
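The speedup arithmetic above, checked in a couple of lines (only the 2^68 average-nonce figure quoted in the text is taken as given):

    import math

    classical_trials = 2 ** 68                     # average nonces per Bitcoin block, as quoted above
    quantum_trials = math.isqrt(classical_trials)  # Grover-style search needs on the order of sqrt(N) trials
    print(quantum_trials == 2 ** 34)               # True
    print(classical_trials // quantum_trials)      # 17179869184, i.e. roughly 17 billion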
submitted by Phtchain to u/Phtchain
