The Safecoin divisibility thread is getting quite long, spanning nearly 2.5 years of discussion with many offshoots and interesting ideas. I thought it might be good to consider a specific aspect of the discussion here. This is just a little brainstorming exercise for probing some extremes. If you don’t want to read the entire thought-train, or if you’re the practical type, just scroll to the bottom where I’ve asked a few questions to instigate some opinions.
TL;DR …
In previous conversations, forum members have indicated a preference for having a rather fine granularity of safecoin at launch. Reasons for this have been described by others to include the following:
- The use of the network by the burgeoning field of IoT devices, in both their current and future uses.
- Improving the network’s ability to handle micro-transactions and/or micro-tipping without the need for creating utility coins or exchanging for alt-safecoin denominations.
- Improving the network’s ability to let farmers earn smaller rewards more frequently, rather than waiting longer periods of time for a whole-safecoin payout.
- Improving the network’s ability to better handle hypothetical future edge cases involving large amounts of lost coins.
- Improving the network’s ability to easily adapt to human behavior and external fiat market forces.
- Improving the network’s ability to take advantage of technological improvements that will reduce costs over time.
Already, @neo and @anon40790172 have both presented ideas and methods for employing a local “balance” or ledger-sheet counter (using a uint32 or uint64) to handle fractional coin amounts efficiently, without the need to follow the “classic safecoin” route to divisibility described in the appendix of the Project Safe whitepaper. The approach is similar to that of an account PUT balance, but is intended for direct use with fractional safecoin instead.
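To make the idea concrete, here is a minimal hypothetical sketch of such a fractional-balance counter. This is my own illustration rather than code from either proposal; the names (Balance, DIVS_PER_COIN) and the chosen granularity are assumptions made purely for the example.

```cpp
#include <cstdint>
#include <stdexcept>

// Illustrative granularity: how many "divs" make up one whole safecoin.
// (A full uint64 range would allow roughly 1.8e19 divs per coin; 10^18 is
// chosen here only for readability.)
constexpr std::uint64_t DIVS_PER_COIN = 1'000'000'000'000'000'000ULL;

// A local balance: whole safecoins owned plus a fractional remainder in divs.
struct Balance {
    std::uint64_t coins = 0;   // whole safecoins
    std::uint64_t divs  = 0;   // fractional part, always < DIVS_PER_COIN

    // Credit an amount expressed in divs, carrying into whole coins.
    void credit(std::uint64_t amount_divs) {
        coins += amount_divs / DIVS_PER_COIN;
        divs  += amount_divs % DIVS_PER_COIN;
        if (divs >= DIVS_PER_COIN) { divs -= DIVS_PER_COIN; ++coins; }
    }

    // Debit an amount expressed in divs, borrowing from a whole coin if needed.
    void debit(std::uint64_t amount_divs) {
        std::uint64_t need_coins = amount_divs / DIVS_PER_COIN;
        std::uint64_t need_divs  = amount_divs % DIVS_PER_COIN;
        if (divs < need_divs) {            // borrow one whole coin
            if (coins == 0) throw std::runtime_error("insufficient balance");
            --coins;
            divs += DIVS_PER_COIN;
        }
        divs -= need_divs;
        if (coins < need_coins) throw std::runtime_error("insufficient balance");
        coins -= need_coins;
    }
};
```

The point is simply that fractional bookkeeping reduces to fixed-width integer arithmetic on a per-wallet counter, much like the PUT balance mentioned above.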
Recently, @oetyng presented some basic figures for the case of 1 safecoin being divisible into approx. 10^11 parts, as might be the case following @neo’s recommendation for a “balance method”. The argument was reasonable, but one question that arises from this exercise is, “How do we know how much divisibility is enough for all possible situations in a network that needs to persist data forever?” One way is to be a realist and be practical: since one can’t possibly consider all situations, just pick something and revise it often in light of new information. The other option is to identify some sort of consensus on the bounds of the magnitude of divisibility that will be suitable for long periods of time, in order to improve robustness. This is challenging; we are only able to prognosticate so far into the future, and requirements vary greatly, since users are willing to accept different degrees of divisibility depending on the application or one’s subjective point of view.

For example, I am a big fan of classic safecoin and like the simplicity of an approach that slowly divides coins as the network evolves. So my subjective take on the matter has mostly been, “eh, let the network divide classic coins as size allows… you IOT guys can pay for bots or likes in SafePUTS til then.” However, more and more I recognize that there are drawbacks to this gut reaction. The technological and marketing opportunities that open up when one considers, for example, a forum or social media app (ex. Decorum) that directly charges something on the order of 10^-18 safecoin for every thumbs-up or “like”, or a micro-robot that farms for itself in order to pay for charging its batteries as dirvine has described on his blog, are indeed truly vast. Applications and use cases like these will only bring in more user/investor support for the network. The same goes for monetizing other aspects that one might want to be cost-free, but where fear of rampant abuse of those mechanisms would otherwise demand more complicated security solutions. In other words, it may just be much, much easier to increase security simply by having a finite but negligible fee associated with most network features.
Considering @oetyng’s argument, rather than speculate on how much divisibility is adequate or required based on real world analogies, I think it may be worthwhile to take divisibility to its rational limit, i.e. infinite divisibility. Since infinity is a tricky thing, let’s settle for a common definition of approximately infinite, such that a quasi-infinite divisibility can be defined for safecoin. Although intuitive and familiar, I think that attempting to use real world analogies in order to come to agreement on some measure of quasi-infinite divisibility is fraught with peril. For example, one might attempt to consider two extrema at opposite ends of observable reality, i.e. the volumetric ratio of the “future visibility limit” of the observable universe to the volume of a Planck cell, the limiting resolution of scientific comprehension (~2.0026937174703272e+185 divisibility, by the way). This may seem reasonable to some and impractical to others, so going this route will never lead to a consensus. I would therefore propose to keep it quasi-simple and agree to use one of the definitions we are all familiar with, i.e. IEEE 754. This standard provides a set of clear and reasonable definitions of computational infinity, as well as maximum values which can be used to represent various levels of quasi-infinity. Consider the following definitions from the standard for floating point numbers:
- binary32_Max : ~3.402823 E38
- binary64_Max : ~1.7976931348623157 E308
Since floating point values are not suitable for lossless accounting due to round-off (a short demonstration of this follows the list below), let us consider the bit depths required to achieve similar divisibility with unsigned integers:
- uint32_max : 4294967295
- uint64_max : 18446744073709551615
- uint128_max : 340282366920938463463374607431768211455
- uint256_max : 115792089237316195423570985008687907853269984665640564039457584007913129639935
- uint512_max : 13407807929942597099574024998205846127479365820592393377723561443721764030073546976801874298166903427690031858186486050853753882811946569946433649006084095
- uint1024_max : 179769313486231590772930519078902473361797697894230657273430081157732675805500963132708477322407536021120113879871393357658789768814416622492847430639474124377767893424865485276302219601246094119453082952085005768838150682342462881473913110540827237163350510684586298239947245938479716304835356329624224137215
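As a quick aside on why floating point is ruled out for lossless accounting, here is a tiny demonstration (my own, not tied to any SAFE code) of the round-off that creeps in when fractional amounts are accumulated in binary64:

```cpp
#include <cstdio>

int main() {
    // Accumulate "0.1 of a coin" one hundred times using binary64 floating point.
    double sum = 0.0;
    for (int i = 0; i < 100; ++i) sum += 0.1;

    // 0.1 has no exact binary representation, so the total drifts away from 10.0.
    std::printf("%.17f\n", sum);                                  // ~9.99999999999999822
    std::printf("%s\n", (sum == 10.0) ? "equal" : "not equal");   // prints "not equal"
    return 0;
}
```

Unsigned integer counters have no such drift, which is why the divisibility tiers above are expressed as uintN_max values.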
Although 32-bit and 64-bit unsigned integers both appear to provide nice granularity, note the one-to-one correspondence between binary32_max and uint128_max, as well as between binary64_max and uint1024_max. I view this direct correspondence as a general consensus that the minimum granularity required to achieve quasi-infinite divisibility lies somewhere between 128 and 1024 bits. Although I said I wouldn’t resort to real world analogies, consider the Planck scale example provided above, which amounts to a hypothetical 616-bit divisibility for the universe. And 1024 bits does make a pretty 32bit x 32bit square…
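Incidentally, the bit-depth correspondences above are easy to sanity-check. This throwaway snippet (my own) just takes log2 of each value using the standard <cfloat> limits:

```cpp
#include <cfloat>
#include <cmath>
#include <cstdio>

int main() {
    // The bit depth needed to count up to a value is roughly log2(value).
    std::printf("binary32_max ~ 2^%.1f\n", std::log2(FLT_MAX));   // ~128.0  -> uint128
    std::printf("binary64_max ~ 2^%.1f\n", std::log2(DBL_MAX));   // ~1024.0 -> uint1024
    // The Planck-cell divisibility figure quoted earlier in the post:
    std::printf("universe / Planck cell ~ 2^%.1f\n",
                std::log2(2.0026937174703272e185));               // ~615.6, i.e. ~616 bits
    return 0;
}
```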
Relating this back to finances now: if we follow @oetyng and take the current maximum estimated world debt supply of ~$1200 trillion, the different tiers of quasi-infinite divisibility yield the following resolution at this ultimate limit of economic market share (a small sketch reproducing these figures follows the list):
- uint32_max : 279396.77 $/div
- uint64_max : 6.505213034913027 E-5 $/div
- uint128_max : 3.5264830524668625 E-24 $/div
- uint256_max : 1.0363402266113334 E-62 $/div
- uint512_max : 8.950008877440248 E-140 $/div
- uint1024_max : 6.675221575521604 E-294 $/div
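For anyone who wants to reproduce or extend the table, a rough sketch follows. It approximates each uintN_max by 2^N (an off-by-one that is negligible at this scale) and uses std::ldexp so that the 2^1024 tier doesn’t overflow a double; the $1200 trillion figure is the one quoted above.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double world_debt_usd = 1.2e15;   // ~$1200 trillion, as above
    const int bit_depths[] = {32, 64, 128, 256, 512, 1024};

    for (int bits : bit_depths) {
        // Scale by 2^-bits directly; forming 2^1024 as a double would overflow.
        double usd_per_div = std::ldexp(world_debt_usd, -bits);
        std::printf("uint%d_max : %.6e $/div\n", bits, usd_per_div);
    }
    return 0;
}
```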
Questions
Yes, this discussion has been more than slightly ridiculous. If you’ve made it this far, please consider the following questions:
- Considering hypothetical future scenarios, does a single uint64 per coin provide enough granularity for all use cases for all time? @neo describes a 64-bit depth as a balance between programming ease and reasonableness… he’s right. Even so, can you think of a scenario where a uint64 would not provide enough divs?
- The “balance methods” efficiently employ a local wallet/purse parameter for holding fractional safecoins or “divs” to enable constant-time transactions. However, they could also represent any level of divisibility as a simple array of one or more 64-bit unsigned integers (ex. safebalance_256bit_divs = new uint64_t[4]) or 32-bit unsigned integers (ex. safebalance_96bit_divs = new uint32_t[3]). Multi-precision libraries make addition and subtraction easy too (see the rough sketch after this list). Do you think that including a level of divisibility greater than uint64, beyond what one would see a need for at launch, adds much of a burden to network resources and a hypothetical implementation, considering the long-term goals of the network?
- The most appealing and simple solutions discussed in the forums for SAFE network edge-case challenges seem to keep coming back to pro-growth strategies seeking greater and greater storage capacity while also requiring greater divisibility. It would appear that the levels of divisibility provided by >uint64 balance ledgers transition us from a notion of granularity to one of fluidity. Can you think of ways in which the fluidity of at least 2^96 divs per safecoin could give benefits that would help solve any other security, app design, farming, or PUT-balance related constraints/issues which you have come across in your work or reading?
- Is the use of >uint64 divisibility unreasonable?
- Is quasi-infinite divisibility useful?
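Related to the second question, here is a rough, purely hypothetical illustration of the kind of multi-limb arithmetic a >uint64 balance would involve: a 256-bit div balance held as four uint64 limbs, with addition being nothing more than schoolbook carry propagation. The names and layout are my own assumptions, not any proposed implementation.

```cpp
#include <array>
#include <cstdint>

// A 256-bit "div" balance stored as four 64-bit limbs, least significant first.
using Balance256 = std::array<std::uint64_t, 4>;

// Add b into a with carry propagation; returns true if the sum overflowed 256 bits.
bool add_balance(Balance256& a, const Balance256& b) {
    std::uint64_t carry = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        std::uint64_t sum = a[i] + b[i];
        std::uint64_t carry_out = (sum < a[i]) ? 1u : 0u;   // detect wrap-around
        sum += carry;
        carry_out += (sum < carry) ? 1u : 0u;
        a[i] = sum;
        carry = carry_out;
    }
    return carry != 0;
}
```

Subtraction is the same loop with borrows instead of carries, and none of it touches anything heavier than ordinary 64-bit registers.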